Virtual desktop infrastructure (VDI) is a big use case for storage that includes flash, as well as storage built for virtualization. Nutanix fits both categories with its hyper-converged Virtual Computing Platform that includes flash and hard disk drives, so it’s no surprise that VDI is a key market for the startup.
This week Nutanix unveiled a per-desktop program to make it easier for customers to size their systems for VDI. Customers tell Nutanix how many virtual desktops they want to deploy, and Nutanix recommends the right system and guarantees performance based on user profiles. Nutanix has four VDI customer profiles – kiosk, task, knowledge user and power user.
Greg Smith, Nutanix senior director of product marketing, said pricing for the VDI-specific systems starts at $200 per desktop for storage and computing. The systems start at 150 virtual desktops.
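Those two quoted numbers imply a rough entry price for the program. This back-of-the-envelope sketch (the constants come straight from Smith's quoted figures; the calculation itself is our own illustration, not a Nutanix price list) multiplies them out:

```python
# Rough entry price for Nutanix's per-desktop VDI program,
# using the figures quoted in the article.
PRICE_PER_DESKTOP = 200   # USD per desktop, storage plus compute
MIN_DESKTOPS = 150        # smallest supported deployment size

entry_price = PRICE_PER_DESKTOP * MIN_DESKTOPS
print(entry_price)  # 30000
```

So the smallest configuration works out to roughly $30,000 before any larger-deployment discounts or add-ons.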
If performance is unsatisfactory, Nutanix will provide the right hardware and software to make it work, Smith said.
“Normally the onus is on the end user to build it themselves,” he said. “We’re making things simpler. Customers tell us how many VDI users they have, what types of users they are, and we provide the infrastructure.”
Smith said Nutanix customers often start with VDI and then add other applications to their appliances. Per desktop customers can go back and add servers and clusters for other apps if they want to expand.
An audience member at a talk I was giving made an interesting comment. It was about a major technology area I had worked in early in my career, and the comment was that it is a niche technology now. I had not thought much about that until then, but it made sense. It also prompted other thoughts about how natural it is for a technology to reach a zenith and then be eclipsed by other technologies. Maybe it will disappear altogether, or maybe it will continue to have value for a protracted period without being the primary technology in use.
So, how does a technology go from mainstream to niche? I put that question to people in the industry to get their opinions. First, the discussion gravitated toward eliminating technologies that had never gained a measure of success. The measure of success was generally considered to be “in widespread usage,” if not dominance. The favorites or pet technologies that some were enamored with because of their potential or “coolness” were also eliminated. Technologies that have not yet achieved mainstream usage are usually referred to as “emerging” or “developing.”
So a mainstream technology is one considered to be in widespread or dominant usage. When the technology is no longer the primary one in use, but still has value and has not disappeared, it becomes a niche. A niche technology is usually characterized by declining, or at least flat, revenue.
When a technology becomes niche, the perceptions around it change and that could impact the future of that technology. This could lead to economic impacts for companies and to career impacts for individuals. Of course, technologies that have been successful in high value solutions have a long tail – they continue to generate revenue and continue careers for long periods. That is especially true in the storage industry where change is much slower than most realize.
A technology that has moved from mainstream to niche is interesting to track as the industry continues to evolve. It is a signpost of sorts indicating inflection points in the industry. It’s not a bad thing – it’s just the natural order. Those that have started working in technologies that have come and gone such as vacuum tubes can probably tell the story of the gradual decline after solid state electronics replaced them.
The question left hanging here is what niche technologies today were once mainstream. It does not mean they are no longer highly valuable. And, they will probably continue to be used for a long time. I’ve worked on several that are not dominant anymore. They still have great value and have been the foundation for other technologies and systems. Because of their widespread use, they may not be niche. Things just change. But it does make for interesting discussions.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Hewlett-Packard is gaining momentum and market share with its 3PAR StoreServ arrays, and HP will try to continue that momentum next week at EMC World in Las Vegas.
HP reps will descend on EMC World to try to lure EMC customers to the 3PAR array with the HP Online Import software. The software was developed to migrate data from EMC Clariion and first-generation VNX midrange systems to 3PAR arrays. EMC requires a controller upgrade to go from its earlier systems to VNX2 arrays.
HP’s Online Import software is similar to the utility it developed to move customers from its EVA arrays to 3PAR.
“What if we could make it easier to go from VNX to 3PAR than from VNX to VNX2?” asked Craig Nunes, HP’s VP of storage marketing.
A free 180-day license for the online import utility is included with all new 3PAR arrays. Besides making for an easier migration, Nunes said the 3PAR arrays will convert thick volumes to thin volumes, saving a considerable amount of capacity.
“It’s no coincidence we’re introducing this today, in advance of EMC World,” Nunes said. “It’s a great opportunity to give that CX and VNX base insight into other options.”
The online import is an extension of HP’s “No is a three-letter word” campaign started earlier this year to try to sway EMC customers to switch.
“We’re calling it 72 hours of ‘yes,’ in Las Vegas,” Nunes said. “It’s a social media effort aimed at bringing concierge style services to EMC World attendees – theatre tickets, trips to Grand Canyon and other services they can get by tweeting #72hoursofyes.”
Nunes said HP’s “base of operations” will be at the Canal Shoppes above the Venetian Hotel where EMC World will take place. “We’ll also have a team on the ground in blue HP Storage Yes Team t-shirts connecting with EMC attendees,” he said.
HP’s come-on to EMC customers is of a type that may not be out of place in Vegas. HP’s press release claims attendees can request a complimentary ride from the airport, coffee “or anything else to make their trip more enjoyable.”
That doesn’t leave much out in Vegas. It also could end up costing HP more in services than it makes from a 3PAR array.
“Actually, we say anything you can get from your hotel concierge,” Nunes clarified, although that probably doesn’t leave much out either.
Red Hat doubled down on open source storage software today when it acquired startup Inktank for $175 million. Inktank’s Ceph software gives Red Hat object and block storage to go with the GlusterFS-based file storage that Red Hat acquired in 2011.
Ceph is an open-source scalable distributed file system created by Sage Weil, who founded Inktank in 2012 and is its CTO. Inktank began selling Inktank Ceph Enterprise as a subscription-based storage-only product in November 2013. In February, Inktank upgraded Ceph Enterprise to 1.1 and received formal certification for the Red Hat Enterprise Linux OpenStack platform.
Inktank had $14.4 million in venture funding.
According to the FAQ:
“By aligning two leading open source communities, Red Hat can offer its customers a very competitive alternative to traditional proprietary storage systems from companies like EMC. Given the size of the storage opportunity, increasing the Red Hat investment in this area made a lot of sense, especially considering Inktank’s strong position with OpenStack.”
The FAQ said Red Hat will continue to sell and support Ceph’s products under the Red Hat brand, and will develop a roadmap to deliver compatible products for file, block and object storage. Red Hat said it will continue to support the Inktank development community.
In a blog on the Ceph community web site, Weil wrote that “Red Hat is one of only a handful of companies that I trust to steward the Ceph project. When we started Inktank … our goal was to build the business by making Ceph successful as a broad-based, collaborative open source project with a vibrant user, developer, and commercial community. Red Hat shares this vision.”
Weil wrote the deal will require Inktank to change one part of its product strategy. Because Red Hat favors a pure open source model, Inktank will open source its Calamari monitoring and diagnostics tool. Calamari is currently proprietary.
Red Hat expects the acquisition to close in May.
On a webcast to discuss the deal, Weil said he would join Red Hat to run the Ceph initiative. Weil and Red Hat CTO Brian Stevens said it was too soon to say whether Ceph will remain a standalone product or be bundled with other Red Hat software, or when we might see an open source Calamari.
CommVault missed its revenue expectations last quarter, a result CEO Bob Hammer found especially frustrating because he sees great opportunity for the backup vendor to flourish. He also sees silver linings in CommVault’s impending Simpana product upgrade and the cloud.
CommVault’s revenue of $157 million last quarter increased 13 percent over last year but fell about $3 million short of Wall Street’s forecast. Hammer blamed the shortfall on failure to close big deals, particularly in North America. And he blamed that partly on the vendor moving sales resources to the cloud and other parts of the world, and partly on the distraction of winding down its Dell partnership.
CommVault executives say their issues are short-term, and maintain the company is on track to become a billion-dollar revenue player (its revenue for the just-completed fiscal year was $586 million). They said enterprise deals ($100,000 and up) did not fall through because customers went to competitors, and some have closed this quarter.
Hammer said he sees great potential for CommVault. The data protection space is wide open with the cloud changing market dynamics, Symantec plodding along without a CEO or clear direction and smaller vendors such as Actifio gaining momentum.
“Despite the weak quarter in the Americas, my confidence on the business in general is the highest it’s ever been,” Hammer said Friday on CommVault’s earnings call. “So I’m really confident that there is extremely high probability that if we get the execution pieces in place, we’re going to hit numbers.”
Later on the call, he added of the poor quarter, “You can tell from my voice, obviously, that it’s an execution issue and fundamentally pisses me off. So instead of fooling around with it, we said we’re going to hit this with a damn sledgehammer. So we put the engine in place to solve that problem. That’s pure execution.”
Hammer pointed to a massive shift to the cloud as part of his reason for optimism. He said approximately 200 service providers use Simpana for data protection, and CommVault will continue to invest heavily in cloud technologies.
Another reason for optimism is Simpana 10 R2, a major upgrade to CommVault’s backup and data management application due in July. “This release will include enhancements to core data management protection, particularly in the areas of virtualization, archiving and snapshot and replication management,” Hammer said.
He added the upgrade will include “new technology to securely and automatically move data to the cloud, in the cloud, and cloud-to-cloud, a standalone mobile solution with added capabilities for document sharing and data loss prevention … new solutions for operations management and intelligence and operations analytics … the ability to economically recover, use, replace and browse data in live native format and virtualized environments providing the capability to immediately restore, copy, back data into a usable state.”
Hammer revealed CommVault is preparing integrated appliances for archiving and cloud gateways that will involve partners. He said these appliances will be “engineered by CommVault and built on commercially available servers and storage.”
NaviSite, a subsidiary of Time Warner Cable, offers managed cloud services via its four colocation facilities in the United States and England. NaviSite has supported Actifio Copy Data Storage (CDS) systems for some time, with customers placing one Actifio device on-premises and one in a NaviSite colocation facility for data protection. It also uses EMC Atmos for cloud storage.
NaviSite now hopes to expand its customer base by tapping into EMC Data Domain users.
“There are a considerable number of customers in the Data Domain installed base,” said Chris Patterson, NaviSite’s vice president of product management. “The Vault does like-for-like replication from one array to another. We thought it was best to support the more popular array out there.”
Patterson said NaviSite has a large Data Domain system at its data center. It sells customers smaller Data Domain devices to place on-premises, or customers can use Data Domain appliances they already own. As with the Actifio system, the on-premises Data Domain device acts as a target for applications. The data is then replicated to the system in NaviSite’s public cloud.
“If a customer has a lot of data, more than 10 or 20 terabytes, we ship a small Data Domain to them so they can sync up (the data) on their site and they can send it back to us,” Patterson said.
Hitachi Data Systems wasn’t the only vendor to launch a new enterprise SAN array this week. Hewlett-Packard also brought out its XP7.
That’s not a coincidence, because the XP7 and the HDS Virtual Storage Platform (VSP) G1000 use the same hardware architecture. HP has been licensing the technology from Hitachi for 15 years, and brings out its enterprise arrays at the same time as HDS.
The dual rollout raises two questions for HP: What’s the difference between the two arrays, and how does the XP7 fit alongside HP’s own 3PAR StoreServ family in HP’s enterprise strategy?
HP people can be touchy about the first question. They refer to the HDS deal as a technology partnership rather than a straight OEM deal. As HP storage and social media expert Calvin Zito wrote on his blog this week:
“One of my big pet peeves is when people say that we rebadge an HDS array. That couldn’t be further from the truth and I dare say that HP has made far more contributions to the XP platform over the years because of the technology agreement with Hitachi Ltd.”
HP brings its software and firmware to the array, adding features such as Performance Advisor, HP advanced clustering features and integration with HP servers. However, these features are overshadowed by Hitachi’s storage virtualization capabilities that allow the arrays to support systems from any major storage vendor and the new Hitachi flash modules that also work with the XP7.
Most of HP’s storage focus is on the 3PAR platform, which spans from the midrange into the enterprise. That is HP’s best-selling storage system and fits into large implementations, so why sell the XP7 to compete against itself? The simplest reason is mainframe connectivity. The XP7/VSP G1000 IP goes back to the days when large storage systems were almost always connected to mainframes. 3PAR, which came along in the early 2000s, was designed for web and cloud hosting companies.
“We recognize that customers and the industry is going through a transformation from reliable robust legacy applications to new styles of IT-as-a-service, cloud-based virtual environment,” said Kyle Fitze, HP’s director for XP storage. “Customers in some cases can’t move that fast because there are challenges around business needs and the ability to introduce new technology in a seamless way.
“XP is for conservative mission critical customers with expectations of high performance and low latency. StoreServ is for customers re-architecting their data centers, changing applications and moving to a services-oriented model.”
While EMC exceeded its overall revenue forecast for last quarter, its storage revenue was a bit disappointing. Storage made up $3.68 billion of the company’s $5.5 billion in total revenue (which counts VMware, RSA, Pivotal and other sources). On the storage side, that was a three percent decline from last year, mainly because of a tough quarter for EMC’s VMAX enterprise storage array.
VMAX sales dropped 22 percent last quarter from the previous year. Revenue from the rest of EMC’s storage portfolio actually increased six percent, but VMAX is its largest and most expensive platform. The pattern was similar to what IBM reported last week, when its enterprise DS8000 dragged the entire storage hardware group to a 23 percent drop from last year.
As he did in January after EMC missed its forecast for 2013, EMC CEO Joe Tucci spoke of changes in spending patterns in the IT world today. Tucci said those changes bring challenges in the short term while raising long-term opportunities for vendors who get it right.
“The Information Technology industry is going through a major transformation, a secular shift from the client/server PC era of computing to a mobile, cloud, big data, social networking era,” Tucci said on EMC’s earnings call. “As we navigate through this transition, we and the rest of the industry are facing a global market which is exhibiting an air of caution in spending, resulting from an array of economic and political uncertainties around the world. Collectively, these two factors are creating an environment that is not for the faint of heart.”
David Goulden, CEO of EMC’s Information Infrastructure, blamed the VMAX decline on “math factors” (last year was strong for VMAX, so there was a tough comparison and changes in EMC’s order fulfillment process resulted in a larger product backlog) and product cycle. The VMAX, like IBM’s DS8000, is due for an upgrade and customers could be waiting for that before they buy their next one. Hitachi Data Systems and Hewlett-Packard upgraded their high-end arrays this week, putting pressure on their rivals. “We do have a refresh plan during the year,” Goulden said. “I won’t tell you exactly when. We don’t want to impact our own business more than we have to, but that certainly is a factor.”
EMC reported better results for its “emerging storage” category. Revenues for that group increased 81 percent year-over-year, although that growth is less impressive when you consider emerging products such as the XtremIO all-flash array and ViPR software-defined storage were not even selling yet a year ago. The emerging storage group also includes Isilon clustered NAS and Atmos object-based cloud storage. Taken together, the technologies in that group could determine the future of EMC storage.
Other news from the earnings call:
• The all-flash XtremIO array picked up “dozens” of new customers and more than 70 percent of VMAX and VNX2 midrange systems shipped with some flash capacity. EMC said it sold more than 17 PB of flash in the quarter, up 70 percent year-over-year. Goulden said EMC added 20 TB XtremIO systems in the quarter and has a “very aggressive roadmap this year” to expand the flash platform and integrate it with other EMC products.
• VCE Vblock converged appliances that EMC sells in partnership with Cisco and VMware grew 50 percent year-over-year with most of the units bought by new customers.
• Goulden said Data Domain backup appliances “had another excellent quarter” but did not provide specific numbers. EMC’s total backup and recovery revenue grew four percent year-over-year.
• Syncplicity file sharing software revenue more than doubled year over year.
• EMC estimated that more than $2 billion in revenue in 2013 came from cloud providers.
When Pure Storage pocketed $150 million in funding last August, CEO Scott Dietzen said that gigantic round would fuel rapid growth for the all-flash array vendor in the face of increasing competition from EMC and other large storage vendors.
Apparently, the $150 million wasn’t enough to fund that growth spurt. Today Pure closed an even bigger round, picking up another $225 million to bring its total funding to $470 million. That’s either a lot of growth or a lot of money being burned.
In his blog today and in an interview with Storage Soup, Pure Storage president David Hatfield explained why the company went back so soon for so much money. He said it wasn’t out of necessity because Pure has not yet spent most of its last round and could be cash-flow positive if the leadership team wanted that. But Pure wants to keep growing its engineering, international sales breadth, brand support and channel.
Hatfield said current and new investors were eager to pump more money into Pure, so Pure took it.
The title of Hatfield’s blog includes the term “Building a War Chest,” and that tells you what you need to know about the all-flash storage market today. EMC, NetApp, IBM, Hitachi Data Systems (HDS), Hewlett-Packard and Dell are all pushing flash either in hybrid or all-flash systems. Then there are the all-flash pioneers such as Pure, Nimbus and Violin Memory vying to push spinning disk out of the enterprise. It’s easily the most competitive storage market today.
As for growth, Hatfield said Pure is “adding two or three people a day,” including new members of its large executive team. On the product front, Pure is in beta with replication, its major missing software piece. Hatfield said there are plans to continue to scale up the platform to reach hundreds of TB on a system, increase interoperability with third-party software applications and move beyond tier one storage.
On the customer front, Pure claims its revenue grew 700 percent in 2013 over 2012 and has been increasing more than 50 percent sequentially each quarter. Pure said it shipped more than 1,000 FlashArrays in 2013.
Hatfield said despite the large vendors’ talk about flash and their new all-flash systems, they are still committed to spinning disk while Pure is pure flash. “EMC would rather sell a $1.5 million VMAX instead of a $300,000 [all-flash] XtremIO,” he said. “We’re competing with hybrid models. They’re selling disk first, then flash as a tier. We have a two-plus year lead on technology. As [legacy vendors] try to close the technology gap, they have a business dilemma. Their multi-billion dollar disk franchise is at risk. We have the ability to attack it, and not feather in flash as a performance tier.”
They have a huge war chest to fund that attack. The latest round included new investor Wellington Management Company as well as previous investors T. Rowe Price Associates, Tiger Global, Greylock Partners, Index Ventures, Redpoint Ventures, and Sutter Hill Ventures.
Micron Technology unveiled the M500DC SATA solid-state drive (SSD), targeted at both mission-critical storage and cloud-based Web 2.0 storage. The company is trying to grab more data center market share by appealing to cost-conscious customers as well as those who want performance and endurance.
The M500DC, which is part of Micron’s M500 portfolio, is designed for endurance to handle transactional databases, virtualization, big data and content streaming. The M500DC SATA SSD is built on the company’s MLC NAND flash technology and custom firmware. It’s integrated with Micron’s Extended Performance and Enhanced Reliability Technology (XPERT) feature suite, which is an architecture that integrates storage media and controller to extend drive life to meet demanding data center workloads.
“This product casts a wide net,” said Matt Shaine, Micron’s product marketing manager for enterprise SSD. “Our customer use cases for this product are all over the map in terms of capacity, performance and endurance at an attractive price point. Our data center customers give us a lot of feedback on requirements, and they essentially fall into two groups.”
Shaine said one group looks more at the affordable price than at features, performance and endurance. The other group values the mix of random performance, full data protection and data-path protection.
The SSD combines a 6Gbps Marvell controller with Micron’s 20-nm MLC NAND. There are some server-type features such as die-level redundancy for physical flash failures, onboard capacitors for power-loss protection and advanced signal processing to extend the life of the NAND.
The SSD comes in both 1.8-inch and 2.5-inch form factors and in capacities of 120 GB, 240 GB, 480 GB and 800 GB. The 800 GB model delivers sequential reads of 425 MBps and sequential writes of 375 MBps, with random read performance of 65,000 IOPS and random write performance of 24,000 IOPS. It is rated for 1.9 PB of random-write endurance.
The 480 GB model carries the same 1.9 PB random-write endurance rating, along with sequential reads of 425 MBps and sequential writes of 375 MBps. Its random reads run at 63,000 IOPS and random writes at 35,000 IOPS.
Micron claims the new SSD can achieve one to three drive fills per day over five years, reducing the need to replace drives frequently.
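That drive-fills claim can be sanity-checked against the rated 1.9 PB random-write endurance. This quick sketch (our own arithmetic, using the capacities and endurance rating quoted above, and assuming the endurance is spread evenly over the five-year period) computes fills per day for the two models:

```python
# Sanity-check Micron's "one to three drive fills per day over five years"
# claim from the rated 1.9 PB random-write endurance figure.
DAYS = 5 * 365          # five-year period, ignoring leap days
ENDURANCE_GB = 1.9e6    # 1.9 PB of rated random-write endurance, in GB

def drive_fills_per_day(capacity_gb: float) -> float:
    """Rated endurance divided by capacity gives total drive fills;
    spreading those over five years gives fills per day (DWPD)."""
    return ENDURANCE_GB / capacity_gb / DAYS

print(round(drive_fills_per_day(800), 2))  # 800 GB model -> 1.3
print(round(drive_fills_per_day(480), 2))  # 480 GB model -> 2.17
```

The 800 GB model works out to about 1.3 fills per day and the 480 GB model to about 2.2, which lands squarely inside Micron's one-to-three range.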
“This is a more rugged drive that can handle longer workloads,” Shaine said.