Software-defined networking (SDN), the hottest emerging networking technology, is also spilling into storage. That spillover accelerated today when Oracle acquired startup Xsigo, which allows servers to connect to any storage or network devices.
Oracle did not disclose the price, but it obviously was less than the $1.2 billion VMware paid for SDN startup Nicira Networks last week. Cisco also has a $100 million investment in Insieme to deliver similar technology.
Xsigo didn’t use the term SDN to describe itself, but Oracle did when announcing the deal. Xsigo marketed itself with the simpler term I/O virtualization. Maybe that’s because using its software required customers to purchase a fabric director switch, and sometimes expansion switches and I/O cards.
But Oracle may choose to deploy the IP differently. Oracle isn’t likely to go into much detail for its plans until after the deal closes, which probably won’t be before October.
The first priority for Oracle when it makes an acquisition is to use the new technology to optimize Oracle products. But in a Q&A document Oracle released today, it said it would continue to support Xsigo customers in cloud and heterogeneous environments as well as the Oracle stack.
“While we expect to optimize Xsigo’s performance with the Oracle stack, Xsigo’s products will continue to support all heterogeneous environments and benefit any cloud deployment,” was Oracle’s answer to whether Xsigo products will continue to operate with non-Oracle systems.
Like VMware and Cisco, Oracle recognizes that server virtualization is changing the way those servers connect to storage and networks. Oracle will use Xsigo to address those changes.
“Oracle recognizes that achieving revolutionary improvements in both performance and efficiency requires a paradigm shift in the way compute and storage systems are interconnected and how that system interconnection is managed,” Oracle said in its release. “Xsigo simplifies cloud infrastructure and operations by allowing customers to dynamically connect any server to any network and storage, resulting in increased asset utilization and application performance while reducing cost. Because Xsigo consolidates and virtualizes the physical resources utilized to interconnect servers and storage, Xsigo is uniquely positioned to simplify the management of virtualized server and storage connectivity.”
Along with its directors and switches, Xsigo’s technology includes Fabric Accelerator software that connects virtual machines to storage and networks through software links that Xsigo calls Private Virtual Interconnects. It also has a Fabric Manager application to create, monitor and manage connections between servers and storage/networks.
Xsigo, founded in 2004, claims more than 300 customers, including eBay and CarFax. Oracle said it expects Xsigo management and employees to join Oracle after the acquisition closes.
As with practically any industry, product names are crucial when selling storage.
Storage products sit in the heart of data centers and protect business information. An individual storage system may stay in use for four or five years, and there is a great likelihood that a successor to that system will be purchased at least once to minimize the risks and operational changes of moving to another platform.
This is where the name becomes important. The identity of the system is associated with that name and the vendor. A recognizable name can be a major factor in sales.
Naming is a complex exercise for vendors when bringing out a new storage system. There may be great value in association with the preceding product. Sometimes there is greater value in not having the same name as the predecessor. But the name needs to be memorable so that the customer can immediately associate it with the product. What is the most suitable yet memorable name? Should it fit the vendor’s overall product naming convention, and why? These are questions vendors must answer.
Sometimes vendors make up names by taking words or syllables that seem to convey information about the product. Other times, they jam together two words with no space but with the second word capitalized to make it recognizable. The latter is an unimaginative way to identify something. It could be better to go with a unique name. These sometimes stick for a long time. An example would be EMC’s Symmetrix.
Probably the worst way to create memorable names is to use a string of descriptive words as the product identity. There have been many such cases in the past, and some are laughable. When you add the company name in front of a string of descriptive words, most people can’t remember the full name, let alone recite it. Obviously, someone has worked hard at making it so customers won’t remember the product.
Another issue to address when naming products is whether the name should describe what the product does, describe its capabilities, or allude to something in the industry. The last option is tough because it’s difficult to find descriptive words that are not already overused in the industry. And something that sounds clever today might seem out-of-date in the future.
Because the branding and naming exercise usually happens late in the process of delivering a product, developers tend to know their products by their internal code names. Sometimes vendors publicly refer to products by their code names in development, and the media and customers follow suit. Then, the vendor will change the name when delivering the product. EMC’s Project Lightning (called VFCache at release) and Project Thunder (not yet named) are examples of this.
Branding is an inexact science and most vendors need to pay more attention to it. Memorable names help sales, and should be a big focus of a product launch. Of course, sound products also help.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Veeam Software this week continued its “freemium” strategy of offering free features from its virtual machine backup software in hopes of gaining publicity and new users.
The newest freebie is Veeam Explorer for Microsoft Exchange, which lets virtual machine admins search and retrieve items inside Exchange without an agent. The admins can browse Exchange databases from a compressed backup file. Veeam claims the databases will be searchable in less than two minutes. Items can be exported to PST or MSG files.
Veeam Explorer for Exchange is now an “exclusive beta,” which means it is available for what product strategy specialist Rick Vanover calls “our largest fans.” That group consists of large customers as well as frequent Veeam tweeters and bloggers who will spread the word.
The Exchange feature requires the full Veeam Backup & Replication application now, but will be added to the Veeam Backup Free Edition that Veeam launched in June. That free edition does ad hoc and limited backups of VMware and Microsoft Hyper-V but lacks support for deduplication, replication, incremental backups and a backup scheduler.
“The free version was light, but we gave it legs by adding this tool,” Vanover said.
The Exchange feature will also be built into the next version of Backup & Replication, due before the end of the year.
Vanover said the “freemium model helps us reach people, and in some cases is an eye opener for them. We’re banking on a lot of interest for Explorer for Exchange. Exchange is a beast. A lot of people have their own personal ‘big data’ in Exchange. This tool lets them work with it right from their backups.”
Veeam’s competitive situation changed earlier this month with Dell’s $2.4 billion acquisition of Quest Software. Quest owned Veeam’s major VM-only backup rival vRanger. Dell hasn’t said much about its plans for Quest’s backup products, but it can pump more development and distribution resources into vRanger than Quest did.
Vanover said the immediate impact for Veeam is that its close partnership with Dell will end.
“That changed our relationship with Dell,” he said. “We’ll still go the distributor route with them, but in terms of joint promotions, that’s non-existent now.”
Dell’s strategy for dealing with “big data” is to shrink it.
The shrinking tool Dell is using is partner RainStor’s database packaged with the Dell DX Object Storage platform. Dell will sell the combination under the Big Data Retention brand. The RainStor database is also certified to work with Dell’s EqualLogic and Compellent SANs, but the reseller deal is limited to DX for now.
The RainStor database, which can work as a standalone repository or as an analytics platform with Hadoop, has its own patented form of deduplication that Dell claims can provide an average data compression ratio of up to 40 to 1. The RainStor database dedupes data and writes a file to any type of storage, such as the DX Object Storage. Dell already owns deduplication technology it acquired via its Ocarina Networks acquisition. The Ocarina deduplication has been built into the Dell DR4000 disk backup system and also is expected to be integrated into the Dell Fluid File system.
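To put that claimed ratio in perspective, here is a minimal sketch of the capacity-planning arithmetic behind a 40:1 compression ratio. The function names and numbers are illustrative assumptions, not anything from RainStor's or Dell's software.

```python
# Illustrative capacity-planning arithmetic for a claimed compression ratio.
# Function names and figures are hypothetical, not vendor APIs.

def effective_capacity_tb(raw_tb: float, ratio: float = 40.0) -> float:
    """Logical data that fits in raw_tb of physical storage at the given ratio."""
    return raw_tb * ratio

def physical_needed_tb(logical_tb: float, ratio: float = 40.0) -> float:
    """Physical storage needed to retain logical_tb of data at the given ratio."""
    return logical_tb / ratio

# At 40:1, retaining 200 TB of logical data needs only 5 TB of physical storage.
print(physical_needed_tb(200))   # 5.0
print(effective_capacity_tb(2))  # 80.0
```

The same arithmetic explains the appeal for retention workloads: even a petabyte-scale archive shrinks to tens of terabytes of physical disk if the claimed average ratio holds.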
“We consider [RainStor’s deduplication] to be a complementary technology to Ocarina,” said Amy Price, a manager at Dell’s Storage Data Management Group.
Big Data Retention Solution customers can add capacity starting as small as 2 TB while scaling to petabytes and billions of objects, without the need to manage the LUNs and RAID groups that are part of traditional storage.
Tintri, which sells appliances designed specifically for virtual machine storage, grabbed $25 million in funding today. CEO Kieran Harty said the money will be used mostly for expanding sales and marketing internationally.
Harty said Tintri is ahead of its business plan with more than 100 customers in five quarters since launching its VMstore appliances.
“We didn’t need money, but we decided it was better to be aggressive,” said Harty, whose company has $60 million in total funding.
VMstore uses solid state drives (SSDs) and hard drives to optimize performance. VMstore’s I/O requests map to the virtual disks they occur on and it uses no LUNs, tiers or volumes for provisioning storage.
Harty said two things have surprised him since Tintri started selling product.
“We thought customers would primarily use us in test and development and with small numbers of virtual machines, but they’ve moved into production quicker and used a larger number of VMs,” he said. “We track how many VMs customers are running on appliances, and we see huge growth as they get more confident. They add more VMs, and then they need more capacity. We started seeing a lot of repeat buys this year.
“The other surprising thing is, we were skeptical about VDI [virtual desktop infrastructure]. VDI has been talked about for a while, but it’s actually happening now. About 30 percent of our customers are using VDI. One of the big drivers there has been the iPad and tablet devices in markets like healthcare and legal. They’re moving to a model of hosted desktops.”
Harty said Tintri execs benchmark their business model against those of backup vendor Data Domain (now part of EMC) and network firewall vendor Palo Alto Networks, two companies that cruised to profitability and went public. Measuring against those companies, Harty said he expects Tintri to hit break-even by the first half of 2014.
In the meantime, it will build out its channel under Brian Gladden, the former head of sales at Avere and alliance manager at EMC before joining Tintri as director of channel sales and business development in April.
Harty said Tintri will also maintain an aggressive product upgrade schedule, with the next release coming later this year. He won’t say what features will be added, but he is planning on adding Microsoft Hyper-V support in a future release. Tintri now supports only VMware.
“We’re actively looking at that,” he said of Hyper-V. “We expect that hypervisor will be important in the market, but we don’t expect it to be that significant until early 2013 at the earliest.”
Menlo Ventures led Tintri’s D round with previous investors NEA and Lightspeed Venture Partners contributing.
EMC executives said they expect IT spending to be lower than they originally forecast for this year, while claiming their sales will be impacted less than competitors.
Speaking on EMC’s quarterly earnings call, CEO Joe Tucci said he expects global IT spending to increase around 3% over last year, down from previous estimates of close to 4%. He said Europe buying was especially “choppy,” but “there is an air of uncertainty that permeates the world stage right now” when it comes to spending. He said customers are showing “more caution, more scrutiny and making more decisions” around purchases.
EMC president David Goulden added, “it’s clear things have gotten weaker over the last few months.”
EMC reported second-quarter revenue of $5.31 billion, up 10% over last year. Its midrange storage product revenue grew 7% year over year and its high-end VMAX revenue increased 3% year over year following a 10% year-over-year drop in the first quarter. The VMAX increase came after an upgrade of the platform in May.
Tucci said the spending slowdown means customers may delay buying storage but “you can only push off storage purchases for so long” because data growth remains high. He said he expects EMC will continue to outgrow the rest of the storage market in revenue growth. EMC has not changed its revenue forecast for 2012.
Goulden said flash storage will play a big part of EMC’s upcoming product launches, with its “Project Thunder” and XtremIO all-flash arrays on the roadmap, and enhancements to the VFCache PCIe flash appliance planned for later this year. He said EMC would add inline deduplication, higher-capacity PCIe cards, multi-level cell (MLC) flash and integration with Cisco UCS blade servers for VFCache, which first launched in February.
Tucci said VMware’s $1.05 billion acquisition of software-defined networking (SDN) startup Nicira does not signal a change in the close relationship EMC and VMware had with Cisco. Tucci said there is still “a tremendous amount of opportunity” for Cisco products with VMware. He also said EMC and Cisco remain “extremely committed” to their VCE joint development arrangement.
Goulden said the worst is over regarding hard drive shortages caused by late 2011 flooding in Thailand and prices will drop in the second half of the year, but they will still not fall to pre-flood prices.
NAS acceleration vendor Avere Systems this week introduced a smaller version of its FXT NAS Edge filer — the FXT 3100 — that optimizes mission-critical applications from remote and branch offices across a WAN to core data centers. This latest product comes days after the company secured $20 million in Series C funding, bringing its total investment to $52 million.
The Avere FXT 3100 Edge filer contains 48 GB of DRAM and 1 GB of NVRAM to accelerate read, write and metadata performance for active data. One 2U device holds 1.2 TB of data using 10,000 rpm SAS drives. The FXT 3100 has two 10 Gigabit Ethernet ports and six Gigabit Ethernet ports, and can be clustered to 25 devices. Data or changed blocks that reside on the 3100 are pushed across the WAN to the Avere core FXT 4000 or 3000 devices located in data centers.
“This is not for local performance,” said Rebecca Thompson, Avere’s vice president of marketing, “so there is no flash in this unit. We are dealing with WAN latency. That is what we are mitigating. This is for data outside the data center, such as engineering applications, scientific applications for genomics and remote rendering for media entertainment.”
The FXT 3100 Edge filer is built on the Avere Operating System (AOS), which does automatic tiering, advanced monitoring of the NAS environment and uses a global namespace to manage all storage as a single pool. The FXT 3100 will be available in 30 days at a list price of $42,500.
Avere, which has about 80 employees, will be using its latest funding to build out its sales and marketing teams, CEO Ron Bianchini said. He expects to add channel partners. Lightspeed Venture Partners led the C round with previous investors Menlo Ventures, Norwest Venture Partners and Tenaya Capital contributing.
“It’s not really about the product anymore,” Bianchini said, “It’s about getting the message out.”
In 2009, the company secured $15 million in Series A funding from Menlo Ventures and Norwest Venture Partners. In 2010, it got $17 million in Series B funding led by Tenaya Capital, with Menlo Ventures and Norwest Venture Partners contributing.
NexGen had just one model at launch, and the startup knew that wasn’t enough. This week it turned its n5 storage system into a three-product series to cover more of the midrange market.
“Today [before the additions] we have a single product, and we know that’s not adequate to cover the market,” said Chris McCall, NexGen VP of marketing.
NexGen’s new platform consists of the n5-50, n5-100 and n5-150. The n5-100 is almost identical to the original n5 system launched last November. It has 1.24 TB of PCIe-based flash (two Fusion-io cards) and 32 TB of raw capacity. The difference between the n5-100 and the original n5 is that the new system supports Gigabit Ethernet and 10 Gigabit Ethernet instead of one or the other. The n5-50 has 770 GB of solid state and 16 TB of raw capacity, and the n5-150 has 2.4 TB of flash and 48 TB of raw capacity. All three models include eight GbE and four 10 GbE data ports and four GbE management ports.
NexGen claims the n5-50 can attain 50,000 IOPS, the n5-100 100,000 IOPS and the n5-150 150,000 IOPS. NexGen’s ioControl QoS software lets customers provision minimum IOPS levels to each volume. The systems range in price from $55,000 to $108,000. The n5-50 and n5-100 are expected to be available next month with the n5-150 to follow in September.
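Provisioning a minimum IOPS floor per volume implies some form of admission control: the guaranteed floors across all volumes cannot exceed what the system can deliver. The sketch below illustrates that idea under stated assumptions; the class and method names are hypothetical and not part of NexGen's actual ioControl software.

```python
# Minimal sketch of per-volume minimum-IOPS provisioning with admission
# control. All names are illustrative assumptions, not NexGen's ioControl API.

from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    min_iops: int  # guaranteed performance floor for this volume

class QosController:
    def __init__(self, system_iops: int):
        self.system_iops = system_iops   # e.g. 100_000 for an n5-100
        self.volumes: list[Volume] = []

    def provisioned_iops(self) -> int:
        # Sum of all guaranteed floors currently committed.
        return sum(v.min_iops for v in self.volumes)

    def add_volume(self, name: str, min_iops: int) -> bool:
        # Admit the volume only if its floor still fits in the IOPS budget.
        if self.provisioned_iops() + min_iops > self.system_iops:
            return False
        self.volumes.append(Volume(name, min_iops))
        return True

ctrl = QosController(system_iops=100_000)
assert ctrl.add_volume("oltp-db", 60_000)
assert ctrl.add_volume("vdi-pool", 30_000)
assert not ctrl.add_volume("analytics", 20_000)  # would exceed the 100,000 floor
```

Real QoS implementations also shape bursts above the floor and throttle noisy neighbors, but the budget check above is the core of any minimum-guarantee scheme.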
McCall said NexGen’s architecture is more efficient than using all flash as other startups have done. He claims NexGen storage systems run as fast as all-solid state disk (SSD) storage – especially for writes.
“Most vendors integrating solid state did it via disk drives,” he said. “Typically they add SSD as read-caches but they’re not doing write workloads to solid state. The world is going from saying ‘I need more performance’ to ‘I need to manage that performance.’”
Perhaps the next step for NexGen will be the ability to cluster its systems. McCall said “we’re strongly looking into it” but there is an engineering challenge. “A clustered system uses a network between nodes,” he said. “Because SSD is so low latency, the network can undermine performance between nodes.”
News that Pat Gelsinger will move into the VMware CEO job and his predecessor Paul Maritz will join EMC as chief strategist has EMC-watchers wondering what the changes will mean to the storage giant’s succession plans.
Gelsinger and EMC CFO David Goulden were considered the top candidates to replace Joe Tucci when he steps down as CEO. The moves that EMC made today included a promotion for Goulden, who adds COO and president to his CFO titles. EMC also kept Gelsinger in the family, because EMC is the majority owner of VMware.
Tucci already changed the timing of his retirement last January when he said he would stay on past the end of this year into 2013. During a conference call today to discuss the executive changes, Tucci said he would stick around for at least 17 more months.
“I am incredibly energized,” Tucci said. “I truly love EMC and VMware. I do believe I add value, and I am fully committed to staying on as chairman of EMC and VMware and CEO of EMC at least through the end of 2013. I am convinced my successor will come from within.”
Goulden becomes the clear No. 2 to Tucci in his new role. Goulden will remain CFO while also heading EMC’s business units, sales, customer operations, services and marketing.
But that doesn’t mean Goulden is a lock to become the next CEO. EMC can still bring back Gelsinger, who will have more than a year as CEO under his belt by then. Maritz, who remains on the VMware board and will help drive EMC’s “big data” strategy, also has to be considered a candidate. EMC has other well respected executives. High-ranking executives Howard Elias and Jeremy Burton have been given more responsibilities to help replace Gelsinger, and Bill Scannell was promoted to president of global sales and customer operations last week.
When asked if he would stay on beyond 2013, Tucci said: “I will not overextend my welcome by any means. As long as I’m providing value and the board wants me … I’m not putting any firm end date in the sand. Now key executives are getting increased experience. Dave Goulden hasn’t been a COO of a company of this size, and Pat hasn’t been a CEO.”
The executive changes take effect Sept. 1, days after VMworld 2012 ends.
“It will be an honor to take the baton from Paul at VMworld for the next lap of the journey,” Gelsinger said.
Gelsinger joined EMC in 2009 after spending 30 years at Intel. He led EMC’s product strategy as president and chief operating officer of EMC information infrastructure products.
Maritz had served as VMware CEO since 2008, when he replaced VMware founder Diane Greene more than four years after EMC acquired VMware. Maritz described his new role as a full-time position, but said he would remain on the West Coast rather than move nearer to Hopkinton, Mass.
Tucci said Maritz suggested turning over the VMware reins to Gelsinger as both vendors prepare for a major IT transition to cloud computing. Tucci said the timing was good because both EMC and VMware have been doing well. He said he believed customers will be happy with the changes.
“The time to make these kinds of changes is off the position of strength when you are performing well and have customer permission to play in new markets,” he said. “VMware is moving to the next phase of cloud computing.”
The executive swapping also can be taken as a sign that EMC is considering spinning VMware back in instead of running it as a separate company. The way Tucci and the EMC board move execs back and forth suggests they already see VMware more as a division of EMC rather than an independent company.
Like many people in the high tech world, I don’t usually pay attention to the latest entertainment gossip. But while watching the news recently at a hotel, I found myself barraged with information about Katie Holmes deciding to leave Tom Cruise. There was so much earnest reporting of vague speculation that the sheer magnitude made me wonder what I had missed and what were the circumstances.
So why did Katie decide to leave Tom? Well, there are plenty of talking heads offering uninformed conclusions. That mirrors the storage industry at times. In this case, the perceptions about what possibly could have happened were presented with such conviction that they must be true. The consensus conclusion was that religious differences were at the heart of this. There is no arguing with religion — just try entering a discussion about Windows and Mac.
No matter how much “news” I hear, I know I can’t really believe what is being said, no matter how fervently. I do know that only a few people understand for sure what is happening, and the public will be told a variation of the truth. And, I do know I don’t really care. It is their personal problem and I don’t see that as a spectator sport. It’s not quite the same as watching “the big one” 27-car pile-up at Talladega in a NASCAR race.
Still, I can’t help thinking of the implications for the digital information they have created and protected. Who gets the data? What information would they want individually that is in electronic form?
• Tax records?
• Business records?
• Wedding pictures? (I realize there need to be multiple independent file system structures for these. This may be where we get into multi-tenancy isolation issues.)
• Pictures of children?
• I’m sure there are other types of important digital files as well that amassed during the marriage.
If they go to court — and it looks now like they won’t — some information could be part of a court order for discovery. Other information is personal, and while some of it may be priceless to one person, it may not be quite so valuable to the other. But it is information that exists in digital form somewhere and has to be split up in some way. Where is the information and who parcels it out accordingly?
Were they practicing safe data protection? It is doubtful they were backing up to a cloud location because of security and privacy concerns for celebrities. Do they simply make a copy of all files and hand them over? Or do they hide some data — delete files, digitally overwrite disks, send backup copies to the shredder? Emotion and lack of clear judgment (beyond the normal operational failures that most businesses experience) may cause some data to be deleted or “lost.”
All these concerns have similarities to business issues. Many of them can be mapped to business circumstances. Getting access to the information could be made frustratingly difficult. In the words of the talking heads, “this could get ugly.”
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).