Hewlett Packard Enterprise today made its second major storage acquisition of 2017, spending $1.2 billion on all-flash and hybrid array vendor Nimble Storage.
The Nimble Storage acquisition comes less than two months after HPE bought hyper-converged pioneer SimpliVity for $650 million.
We’ll have more on the HPE-Nimble Storage acquisition on SearchStorage, but here is what we know so far:
HPE executives see Nimble as a complement to the 3PAR storage portfolio, which the company acquired for $2.35 billion in 2010. 3PAR StoreServ has served as HPE’s flagship storage platform for both flash and hybrid arrays.
In a blog post today, HPE enterprise group GM Antonio Neri characterized 3PAR as supporting “customers experiencing rapid growth and needing a highly scalable, all-flash data center capable of supporting millions of IOPS, petabyte capacity, and a multi-tenant architecture, priced from the midrange to the high-end.” He added “Nimble is ideal for customers needing advanced, flash-optimized data services, including all-flash, hybrid-flash, and multicloud support, underpinned by machine-learning based predictive analytics, all at entry to midrange price points and designed with ease of use at its core.”
Neri and HPE GM of storage and big data Bill Philbin both cited Nimble’s InfoSight predictive analytics as a key part of the Nimble Storage acquisition. Neri wrote that HPE would incorporate InfoSight throughout its storage portfolio. Along with 3PAR, HPE sells high-end XP enterprise storage and SMB-level MSA platforms through OEM deals, and StoreVirtual virtual storage based on technology acquired from LeftHand Networks in 2008.
Like the other major vendors, HPE has seen storage revenues decline in recent years. It is coming off its worst quarter in years – a 12% year-over-year drop in revenue to $730 million.
Nimble, which became a public company in late 2013, today reported its revenue for last quarter and last year. Revenue for the quarter increased 30% year-over-year to $117 million and for the full year it increased 25% to $403 million.
However, its losses also grew. Nimble lost $158 million last year compared to $120 million the previous year. Its fourth-quarter loss was $36 million compared to $32 million a year ago. Nimble claimed 10,000 customers.
“As proud as we are of what we have accomplished, we face a challenge of scale and significant exposure as a standalone storage company,” Nimble CEO Suresh Vasudevan wrote in a note on the company’s website today. “Our aspiration has always been to be an innovation leader, and see our technology deployed in organizations around the globe. But, as we weighed the opportunities and risks, we concluded that an acquisition makes sense at the right price with the right partner. We believe we’ve found both.”
Nimble launched in 2010 mainly as a lower-cost alternative to EMC and NetApp in the midrange. From the start, Nimble arrays featured data reduction and used small amounts of flash, and the vendor added InfoSight in 2013. Last week Nimble went into beta with Nimble Cloud Volumes, a flash fabric designed to transparently move data between its storage arrays and Amazon Web Services and Microsoft Azure.
“When looking at opportunities to complement our existing portfolio, Nimble jumped straight to the top of the list based on combined business opportunity and similarities in engineering design and culture,” HPE’s Philbin wrote. “Much like 3PAR started high and then addressed the needs of customers pushing down market, our interest in Nimble started with an acknowledgement that the flash market is rapidly evolving and those same needs are moving even lower.
“Entry and midrange customers are demanding the same flash-optimized data services that their Enterprise counterparts have enjoyed for several years,” he continued. “However, in this space there is also a need for incredibly straightforward and simple deployment and an expectation for support experience driven by the consumer interactions we all take for granted on our smart phones and devices.”
The HPE-Nimble Storage acquisition is expected to close in April.
In its early days, Nutanix’s biggest challenge was convincing people that hyper-converged appliances were a better fit for their organizations than traditional server/storage systems. Now that Nutanix has created a hyper-convergence market, its biggest challenge is fighting off larger vendors looking to take over.
Nutanix’s competition — which includes Dell EMC/VMware, Cisco, Hewlett Packard Enterprise (HPE) and, soon, NetApp — was a recurring theme on the hyper-converged vendor’s earnings call Thursday night. Nutanix revenue of $182 million in the quarter beat expectations, but executives admitted they could have done better and their forecast for this quarter proved disappointing.
The most interesting Nutanix competition comes from Dell EMC, which also owns VMware. Dell EMC still sells Nutanix software on Dell PowerEdge servers, but has its own VxRail hyper-converged array driven by VMware vSAN software. VMware and Nutanix partnered closely when Nutanix first launched, but now VMware competes directly in hyper-convergence and Nutanix has its own AHV hypervisor that competes with VMware.
“People have speculated about the demise of this relationship ever since the Dell-EMC merger was announced 18 months ago,” Nutanix CEO Dheeraj Pandey said on the earnings call. “And Dell has had EVO:RAIL and VxRail and vSAN Ready Nodes to sell for a long time now. They say they lead with VxRail- and VMware-only environments. So it’s our job then to convince the customer why Nutanix is a better fit to support their multihypervisor strategy. And with that there is only one product in town that works at scale.”
Nutanix revenue under fire from new competitors
HPE beefed up its hyper-converged technology with a $650 million acquisition of SimpliVity, which helped pioneer hyper-convergence with Nutanix. But Pandey dismissed SimpliVity as a serious competitor. He pointed out that Cisco was an early SimpliVity partner, but went to Springpath for the software for its HyperFlex hyper-converged product.
“If SimpliVity really were bringing that customer delight to its end users, it wouldn’t have ended up the way it did,” Pandey said. “There was a reason why Cisco passed on them, even though they were close partners. HPE picked up a relatively distressed asset, which will buy them nothing more than some short-lived press.
“But we’re absolutely not concerned about HPE or Cisco or NetApp in terms of competition,” he said. “They’re not software companies with experience in building full-stack operating systems that address compute and storage and networking and security and overall management.”
Pandey said Nutanix’s AHV hypervisor was included in 21% of the nodes it sold last quarter, compared to 17% the previous quarter. He said customers who license AHV usually keep VMware at the start, but a few have ditched VMware after getting comfortable with AHV.
Growing pains continue to dog sales
Pandey finds himself defending Nutanix against more than other hyper-converged vendors these days. During Pure Storage’s earnings call this week, the all-flash array executives described hyper-convergence as a fit only for branch offices and virtual desktops. Pandey disputed that, pointing to his forecast that Nutanix revenue will hit $1 billion next year.
“We cannot be selling remote office, branch office systems and making $1 billion in annualized run rate,” he said. “Definitely more than 50% of our workloads are Tier 1 enterprise workloads. VDI [virtual desktop infrastructure] is a little less than 30% of our workloads, and a lot of the Tier 1 workloads include Microsoft, SAP, Oracle.”
Still, Nutanix is experiencing growing pains in its sixth month as a public company. While Nutanix revenue of $182 million last quarter beat expectations and grew 78% year-over-year, guidance in the $180 million to $190 million range for this quarter fell below expectations. Pandey said sales in North America fell short of Nutanix’s goal last quarter as well. And the company keeps suffering heavy losses — $93.2 million last quarter.
Nutanix’s stock price dropped from $31.12 at Thursday’s market close to $25.52 at today’s open after the earnings report.
Pandey attributed disappointing North American results to promotions of salespeople to management jobs, which he said will prove positive in the long run. He also pointed out Nutanix added 900 new customers in the quarter compared to 700 in the previous quarter, bringing its total to more than 5,300.
“Our win rates remain strong and consistent with prior quarters, and we are pleased with our ability to demonstrate the value of our full-stack operating system and competitive opportunities,” he said.
Virtual backup specialist Veeam Software is treating hyper-converged Cisco HyperFlex systems the same way it treats storage arrays when it comes to snapshots.
Veeam has long supported snapshots for storage arrays, and integrates with arrays from Hewlett Packard Enterprise, NetApp, Nimble Storage and Dell EMC’s VNXe. Customers can run backups directly from Veeam’s Storage Snapshots instead of using VMware VM snapshots. Cisco HyperFlex systems include VMware hypervisors.
“It’s similar to what we’ve done with existing storage arrays,” Veeam VP of product strategy Doug Hazelman said. “The integration is a little different because HyperFlex is different than primary storage. Each VM has its own volume on HyperFlex. We don’t need to use VMware snapshots.”
Hazelman said Veeam works closely with most hyper-converged vendors, but Cisco HyperFlex is the first hyper-converged system to integrate with Veeam’s data protection. The vendors have no reseller or formal business partnership, but Veeam VP of alliances Andy Vandeveld said they have many joint channel partners who have requested integration of Veeam technology on Cisco HyperFlex systems.
HyperFlex integration will be included in Veeam Availability Suite 9.5 update 2, which is due by mid-year.
Software-defined storage startup Hedvig Inc. secured another $21.5 million in funding to create new bundled hardware and software component options, expand into the Asia-Pacific region, and add engineering, sales and channel resources.
Leading the Series C funding round were new investments from Hewlett Packard Enterprise’s Pathfinder venture arm and EDBI, the corporate investment arm of the Singapore Economic Development Board. Hedvig also received contributions from existing Silicon Valley-based investors Atlantic Bridge Ventures, True Ventures and Vertex Ventures.
Milan Shetti, CTO of HPE’s data center infrastructure group, will serve as a technical advisor to Hedvig. He wrote in an email, “Hedvig’s mission of improving the economics of storing and managing the world’s data is directly aligned with our strategy, and we look forward to working with the Hedvig team as it continues to shape hybrid IT.”
HPE’s relationship with Hedvig goes beyond funding. Hedvig plans to offer a bundled version of its software with HPE servers, according to Rob Whiteley, vice president of marketing at Hedvig.
The new financing boosted the Santa Clara, Calif., vendor’s total to $52 million since June 2012.
“We’ve taken a fairly conservative approach to raising capital. We wanted to make sure that we got our initial set of customers and waited for the software-defined storage market to mature before we really went aggressively after the market,” Whiteley said.
Hedvig sees Docker interest rising
The Hedvig Distributed Storage Platform can run on commodity hardware and pool server-based storage across multiple sites, on premises and in the cloud. The API-driven product supports block, file and object storage. Most customers implement the Hedvig storage management software as a complement to the all-flash tier they use for their most mission-critical data, according to Whiteley.
He said Hedvig would use the latest cash infusion to create “end-to-end solutions” that more tightly bundle hardware, orchestration tools and software components such as Docker containers.
Whiteley said Hedvig has seen considerable interest in Docker and wants to ease procurement for customers. Hedvig joined the new Docker certification program, and its Docker Volume plugin is now available in the Docker store.
“We might take Docker Datacenter, which includes all of their software suite, our software and HPE hardware, to create an end-to-end Docker solution, as an example,” Whiteley said.
Whiteley said the HPE bundle partnership spawned from Hedvig’s financial services customers that were HPE server shops. He said they were asking Hedvig to put its software on HPE server hardware.
The Hedvig storage management software is certified to run on commodity server hardware from Cisco, Dell, HPE, Lenovo, Quanta and Super Micro. But Whiteley said certification merely signals the software is compatible with the hardware. He said Hedvig has yet to offer pre-integrated, pre-tested bundles such as the one it plans with HPE.
“Mid-enterprise” companies of between 1,000 and 5,000 employees tend to want a “hardware/software bundle,” Whiteley said, whereas financial service companies and service providers are more comfortable procuring their own hardware.
So far, Hedvig has taken a “meet in the channel” approach, where the channel partner bundled the hardware and software for the customer, Whiteley said.
The new Hedvig-HPE bundle could be sold through HPE’s direct sales force or partner ecosystem, and through Hedvig’s direct sales force or channel, according to Whiteley. He said Hedvig’s channel partners currently account for approximately 80% of sales, and the company’s direct sales force handles about 20%.
Whiteley said Hedvig currently has about 50 customers, and its top verticals are financial services, service providers, manufacturing and retail. He said the average deployment in 2016 was about 750 TB, with the largest deployment at about 3 PB. Whiteley said some customers have started as small as 25 TB.
To date, the vendor has not operated in the Asia-Pacific market. Working with Singapore-based EDBI should help as Hedvig sets up operations there, he said.
Hedvig plans to ramp up its engineering, sales and channel teams across all three regions. Whiteley said the company currently has 50 retail channel partners but hopes to boost the number of resellers, system integrators and distributors to at least 100 by year’s end.
Pure Storage revealed two surprises during its fourth quarter earnings call.
On the plus side, the all-flash vendor exceeded its forecast and expectations for revenue during the quarter and all of last year. Pure Storage revenue for the fourth quarter hit $228 million, representing 52% year-over-year growth and exceeding the high point of its previous guidance. Pure said it picked up 450 new customers in the quarter, bringing its total to more than 3,000. For the year, Pure Storage revenue of $728 million represented growth of 65% over the previous year. That’s impressive in an industry with little growth at all.
On the down side, Pure’s guidance for this quarter came in below Wall Street expectations. Pure forecasted from $171 million to $179 million, below analysts’ consensus of $201 million. That guidance represents about 17.5% of the vendor’s full year guidance of $975 million to $1.01 billion.
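That roughly 17.5% figure is simple arithmetic to check. A quick sketch using only the guidance numbers above (midpoint of the quarterly range against midpoint of the full-year range):

```python
# Quarter-to-year ratio of Pure's guidance, in millions of dollars
# (figures from the article; this is the midpoint-to-midpoint ratio).
q1_low, q1_high = 171, 179
fy_low, fy_high = 975, 1010

q1_mid = (q1_low + q1_high) / 2   # 175.0
fy_mid = (fy_low + fy_high) / 2   # 992.5

print(f"{q1_mid / fy_mid:.1%}")   # prints "17.6%", in line with the ~17.5% cited above
```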
Pure executives on the Wednesday night call chalked up the low estimate to seasonality – the first quarter is traditionally the slowest for storage sales – and pointed out their full-year guidance met expectations. They said this will still be a strong year, driven by sales of the relatively new FlashBlade, early NVMe sales and coming synchronous replication software that will bolster Pure’s disaster recovery capabilities. The high end of the Pure Storage revenue guidance calls for $1 billion a year, and the vendor could register its first profits by the end of 2017. (Pure lost $43 million last quarter and $245 million for the year.)
But analysts on the earnings call were not convinced, and kept asking Pure execs for a better explanation for the gloomy first-quarter forecast.
“We are convinced 2017 will be Pure’s best year yet,” CEO Scott Dietzen said. “We are thrilled that our data platform is in a position to drive $1 billion in revenue in just our sixth year of selling.”
Dietzen said the formula for Pure Storage revenue to reach $1 billion this year is to increase FlashArray SAN sales from 25% to 30% and generate close to $90 million in FlashBlade revenue. The goal is for FlashBlade to double the $43 million in revenue FlashArray had in its first full year of sales in 2013.
FlashBlade’s success depends on its ability to displace large NAS systems from NetApp and Dell EMC’s Isilon, which also now support all-flash configurations. Dietzen said competitors’ products were built for disk and retrofitted for flash, while Pure designed FlashBlade specifically for flash.
“This [FlashBlade] comes out of a design that’s built around silicon and fast networking, and it doesn’t have the legacy holdbacks that are inherent in these 20-plus-year-old designs that we compete against,” he said.
Dietzen said while there is “some tightening in NAND supply,” the shortage had a minimal impact on Pure’s sales last quarter. So far, only Hewlett Packard Enterprise has claimed the shortage cut into sales significantly.
NetApp’s recent announcement that it will launch a hyper-converged infrastructure (HCI) platform in the coming months leaves IBM as the notable HCI holdout among major storage vendors.
And IBM is likely to stay on the hyper-converged infrastructure market sidelines for the foreseeable future.
In a recent interview with TechTarget editors, IBM storage general manager Ed Walsh said IBM doesn’t need hyper-convergence because it has a converged infrastructure (CI) platform that accomplishes the same things. IBM VersaStack combines IBM’s storage with Cisco switching and UCS servers in a similar bundle that other storage vendors sell with Cisco servers and networking.
The difference is that CI bundles such as VersaStack consist of traditional products sold as one package, while HCI puts storage, compute and virtualization in one box.
“They’re solving the same customer problem,” Walsh said of products in the converged and hyper-converged infrastructure market. “They both drive down Opex, give you a better user experience and free up your people to do other things. You can be a purist and say what we do is converged, not hyper-converged, but it’s about the job you’re trying to do. The two worlds are blending together.
“Both give you flexibility in how you deploy time, storage and CPUs,” he continued. “If you have a VMware stack and that’s called hyper-converged to you, we do that. If you want to make it easy to increase CPUs separate from storage, we do that. If you’re saying that’s converged and not hyper-converged, OK, but that’s 80% of the market.”
Of course, the hyper-converged infrastructure market is growing fast and certainly has the attention of server vendors Dell EMC, Hewlett Packard Enterprise, Cisco and Lenovo. They are all moving fast to compete with and/or partner with Nutanix, which created the HCI market.
IBM and NetApp don’t sell the x86 servers used with most hyper-converged systems, which may have delayed their entry into hyper-convergence. Walsh’s take on HCI is similar to what NetApp CEO George Kurian said a few months ago, when he claimed NetApp’s FlexPod CI partnership with Cisco addressed the same needs as HCI. But last month Kurian said NetApp would enter the hyper-converged infrastructure market with a product based on its SolidFire all-flash platform and Data Fabric technology for moving on-premises data to the cloud.
NetApp has yet to say where it will get servers from, and we still can’t be sure whether it will have true hyper-convergence or just re-package existing flash and cloud technology.
IBM’s Walsh does not sound like he is reconsidering. Not only does he see CI doing all that HCI does, but he points out CI has scaling advantages over HCI: the CPU and storage are independent products in converged infrastructure.
“We see after 12 months or so our clients want more flexibility in how they deploy storage to servers,” he said. “People are looking to refresh storage and CPU at different intervals.”
Storage is among the casualties of Hewlett Packard Enterprise’s struggles following the break-up of Hewlett-Packard.
The vendor reported HPE storage revenue declined 12% year-over-year to $730 million in the fourth quarter of 2016. The poor storage sales came across the board, with only 3PAR all-flash systems showing an increase — but even that increase was less than expected.
“We’re not happy with the storage performance this quarter,” HPE CEO Meg Whitman said. “I’m quite happy with the all-flash situation, but there are other things that we’re going to buck up.”
Whitman said “other parts of the business were weaker than they probably should have been.” She blamed these HPE storage weaknesses on execution, softness in the overall storage market and a shortage in NAND flash supply.
HPE’s all-flash sales were less than they could have been. The vendor reported 3PAR all-flash arrays increased 28% year-over-year, but all-flash revenue has risen in triple digits at HPE and other vendors in recent quarters. NetApp reported a 185% spike in all-flash sales last quarter, and HPE’s fourth-quarter all-flash revenue rose close to 100% year-over-year.
Whitman said HPE’s flash sales would have been “considerably higher” if not for the NAND shortage. But other large storage vendors say the NAND shortage has not had a great negative effect on sales.
HPE storage not the vendor’s only problem area
Storage was far from the only sore spot for HPE as it struggles to find its footing after the HP split. Server sales fell 12%, networking dropped 33%, enterprise services declined 11% and software slipped 8%. HPE’s overall revenue of $11.4 billion dropped 10% from last year and missed Wall Street analysts’ consensus forecast by $700 million.
Whitman tried to paint a rosy view of the future of HPE storage. She said she expected the NAND shortage to lift soon, which will help flash sales. She also pointed to last week’s addition of streamlined licensing and compression to 3PAR, filling what had been “actually a competitive hole in our product.”
HPE closed its $650 million acquisition of hyper-converged startup SimpliVity last week. Whitman put the hyper-converged market at approximately $2.4 billion and growing approximately 25% annually. She said SimpliVity will deliver to HPE “a whole new group of storage sellers where we can have broader market coverage” and pledged to become a hyper-converged leader.
“We see this as a significant opportunity,” she said.
HPE will have to take advantage of any opportunity it finds because it has a lot more challenges than opportunities these days.
Cloudian and Panzura have come up with new cloud data archiving products to help organizations move on-premises data to the cloud.
Cloudian’s new HyperStore 4000 is a 7U scale-out object storage enclosure that stores up to 700 TB and includes two separate compute nodes per chassis. It can be configured as a three-way cluster for data availability and the system has built-in, hybrid cloud tiering. Like Cloudian’s 1U HyperStore 1500 appliance, the 4000 can store data on premises or in the Amazon Web Services (AWS), Microsoft Azure and Google public clouds. It also can tier data to the Cloudian public cloud.
Jon Toor, Cloudian’s chief marketing officer, said the appliance is aimed largely at the entertainment, video surveillance and genome sequencing industries, or as a replacement for tape archives.
Panzura launched a new cloud data archiving appliance as part of its Freedom Archive platform, and is packaging that with its Freedom NAS and Freedom Collaboration file sync products. The vendor said its Panzura 5500 Series Flash Cache can support up to 1,200 active users. The Freedom Archive virtual appliance that launched in late 2016 runs on VMware vSphere and supports up to 500 users.
Panzura’s products all integrate on-premises storage with the public cloud for cloud data archiving. Freedom NAS stores active data in local cache while moving colder data to the cloud. Freedom Collaboration stores data in a central cloud repository and makes all files readable and writable at each location.
“It takes the archiving piece and adds additional functionality as the company grows into the cloud,” said Barry Phillips, Panzura’s chief marketing officer.
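The hot/cold split behind products like Freedom NAS can be illustrated with a minimal age-based tiering sketch. This is not Panzura’s implementation; the 30-day threshold, the `CachedFile` record and the stand-in “move to cloud” step are all hypothetical:

```python
import time
from dataclasses import dataclass

COLD_AFTER_SECONDS = 30 * 24 * 3600  # hypothetical "cold" threshold: 30 days idle

@dataclass
class CachedFile:
    name: str
    last_access: float  # epoch seconds of the most recent read/write
    tier: str = "local"

def tier_out_cold_files(files, now=None):
    """Move files not accessed within the threshold to the cloud tier."""
    now = now if now is not None else time.time()
    migrated = []
    for f in files:
        if f.tier == "local" and now - f.last_access > COLD_AFTER_SECONDS:
            f.tier = "cloud"  # a real system would upload, then evict the local copy
            migrated.append(f.name)
    return migrated

# Example: one file touched an hour ago, one untouched for 90 days
now = time.time()
files = [CachedFile("report.docx", now - 3600),
         CachedFile("q1-2016-footage.mov", now - 90 * 24 * 3600)]
print(tier_out_cold_files(files, now))  # ['q1-2016-footage.mov']
```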
Cloud data archiving ‘requires trust’
Scott Sinclair, senior analyst at Enterprise Strategy Group, said Panzura’s encryption makes it a good fit for protecting sensitive information that organizations are reluctant to move off-premises for cloud data archiving.
“The cloud offers a number of benefits, but some businesses are reluctant to leverage public cloud resources for sensitive information,” Sinclair said. “With FIPS 140-2 certification and AES 256-bit encryption to secure data at rest and in flight, Panzura is working to alleviate any potential security concerns.
“There are other hybrid cloud solutions that offer encryption. The success in storing sensitive data in the cloud requires more than the right technology. It also requires trust,” Sinclair said. “Some businesses have already benefited by moving digital archives to the cloud, while others remain reluctant. Panzura has the right technology to find success in this space. The question is whether they can convince those businesses still questioning the cloud to make the move forward.”
NetApp is entering the hyper-convergence arena with the latecomer’s cry: We may be last into the market, but we’ll be the best.
Emboldened by the recent strong revenue growth of its late-to-market All Flash FAS (AFF) arrays, CEO George Kurian Wednesday night outlined plans to play rapid catch-up in hyper-convergence.
Kurian said a NetApp hyper-converged product will launch in the May to July timeframe. He gave few details, but said the system will be built on SolidFire all-flash storage and NetApp Data Fabric technology that links on-premises and cloud storage.
NetApp has been noticeably absent from the hyper-converged infrastructure (HCI) market after its 2014 plans for an EVO:RAIL system in partnership with VMware never got off the ground.
Three months ago, Kurian indicated there was no NetApp hyper-converged strategy. He said NetApp addressed the advantages of hyper-convergence through its FlexPod converged infrastructure partnership with Cisco. FlexPods bundle storage, compute and virtualization as separate pieces rather than in the same chassis. Kurian has also pointed to NetApp’s cloud-centric SolidFire arrays as an answer for customers who want hyper-convergence.
How will NetApp handle hyper-convergence?
During NetApp’s quarterly earnings call Wednesday night, Kurian talked less about what a NetApp hyper-converged infrastructure might look like than what is missing from current HCI appliances.
“We will do what has not yet been done by the immature first generation of hyper-converged solutions — bring hyper-converged infrastructure to the enterprise by allowing customers to run multiple workloads without compromising performance, scale or efficiency,” he said.
What will NetApp do differently and better? Kurian said the vendor will have the first fully cloud-integrated hyper-converged system that moves data across tiers on-premises and in the public and private clouds. That is something executives at HCI market-leader Nutanix say they are working on now.
Kurian characterized current hyper-converged products as “first-generation,” lacking enterprise data management and consistent performance. He said that relegates them to departmental use and the low-end of the market, a statement that almost all the current hyper-converged vendors would dispute.
Along with Nutanix, server vendors make up most of the HCI market. Dell EMC, Cisco, Hewlett Packard Enterprise (with its recent SimpliVity acquisition) and Lenovo all sell HCI appliances.
Kurian said he doesn’t mind playing catch up.
“There have been lots of companies that have gone after the early-adopter segment with a subset of the features that enterprise customers really want and have failed in the long run,” he said. “And so, first to market doesn’t necessarily mean the big winner, right?”
Kurian pointed to NetApp’s all-flash arrays to back up his theory. NetApp lagged behind other large vendors as well as startups in offering mainstream all-flash arrays. It never got its home-grown FlashRay product out the door as a GA product, and only found success on its second attempt, building FAS into an all-flash option.
Kurian said NetApp’s all-flash arrays grew 160% year-over-year to approximately $350 million last quarter. That includes AFF, the EF Series for high-performance computing and SolidFire. But that still leaves NetApp well behind market leader EMC, which claims its all-flash XtremIO array generated more than $1 billion in bookings in 2016.
NetApp won’t be first with all-flash hyper-converged either. Most vendors in the market have all-flash options, and Dell EMC claims 60% of its VxRail customers deploy all-flash appliances. Cisco added an all-flash version of its HyperFlex HCI appliance this week.
In a blog post published soon after the earnings call, John Rollason, NetApp’s director of product marketing for next-generation data center, echoed Kurian’s comments about current HCI systems. Rollason criticized hyper-converged systems for having fixed ratios of compute-to-storage resources and lacking performance guarantees for mixed workloads and mature data services. He said they were limited to eight-node clusters that result in silos. While not all of those criticisms are valid for all hyper-converged systems, Rollason’s and Kurian’s comments provide hints as to what NetApp will try to do: It is pledging hyper-converged systems that scale higher, with predictable performance, aimed at the enterprise.
Who will NetApp partner with?
We don’t yet know what NetApp will do for virtualization and compute. You can expect a NetApp hyper-converged system to incorporate VMware; NetApp has had a good working relationship with VMware, despite VMware being owned by NetApp storage rival EMC and now Dell.
NetApp will also need a server partner. FlexPod partner Cisco is a possibility. Cisco has its own HyperFlex HCI appliance, but allows several HCI software vendors, including VMware with vSAN, to run their software on its UCS servers. NetApp could also go the OEM route that EMC took before being bought by Dell. EMC’s first hyper-converged systems used servers from Quanta before switching to Dell PowerEdge in September.
NetApp promises more details soon. Whatever it plans, it will have to be good to make up for being late.
Dell EMC’s VxRail turned one today, and the vendor marked the anniversary by adding the hyper-converged platform to its Enterprise Hybrid Cloud package.
Dell EMC claims over 1,000 customers for VxRail through the end of 2016, with more than 8,000 nodes, 100,000 CPU cores and 65 PB of storage capacity shipped in the systems. VxRail is EMC’s first successful hyper-converged appliance, following a short, failed attempt with the VSPEX Blue product launched in 2015.
Like VSPEX Blue, VxRail is based on the vSAN hyper-converged software from Dell-owned VMware. It also runs on Dell PowerEdge servers, although VxRail originally incorporated Quanta servers until the Dell-EMC acquisition closed last September. VxRail launched just after VMware upgraded vSAN to version 6.2, which added data reduction and other capabilities that improved its performance with flash storage. Dell EMC VxRail senior vice president Gil Shneorson said 60% of VxRail sales have been all-flash appliances.
“We’re definitely seeing the combination of hyper-converged and all-flash taking off in a meaningful way,” he said.
Now VxRail is an option for Dell EMC Enterprise Hybrid Cloud (EHC) customers. EHC is a set of applications and services running on Dell EMC hardware that provide automation, orchestration and self-service features. The software includes VMware vRealize cloud management, ViPR Controller and PowerPath/VE storage management, and EMC Storage Analytics.
Other EHC storage options include EMC VMAX, XtremIO, Unity, ScaleIO and Isilon arrays sold as VxBlock reference architectures with Dell PowerEdge servers. EHC is also available with VxRack Flex hyper-converged systems that use Dell EMC ScaleIO software instead of VxRail appliances. Data protection options include Avamar, RecoverPoint and Vplex software and Data Domain backup hardware.
Along with the Dell EMC VxRail option, the vendor is adding subscription support and encryption as a service to EHC. Dell EMC does not break out EHC financials, but Dell EMC senior vice president of hybrid cloud platforms Peter Cutts said its revenue was in the “hundreds of millions of dollars” last year.
Adding the Dell EMC VxRail option lets EHC customers start with as few as 200 virtual machines.
“This gives customers the ability to start smaller, configure EHC as an appliance and go forward in that direction,” Cutts said.
For now, organizations that want to use VxRail with EHC need to buy a new appliance. Cutts said the vendor is working on allowing customers to convert existing VxRail appliances to EHC, but that is not yet an option.
Using VxRail as part of EHC makes sense as vendors begin to position hyper-converged systems as enterprise cloud building blocks. Hyper-converged market leader Nutanix now positions its appliances that way, emphasizing its software stack’s ability to move data from any application, hypervisor or cloud to any other application, hypervisor or cloud. Nutanix is VxRail’s chief competitor.
“We’ve seen requests for more data center-type features and functionality,” Shneorson said. “VxRail is being put into data centers in much larger clusters than we originally anticipated. We’re seeing a shift from an initial focus on remote offices and test/dev to mission critical data center use.”
But unlike Nutanix, Dell EMC also still sells traditional storage. Shneorson admitted hyper-convergence is not a universal answer, because not every organization wants to scale its storage and compute in lockstep.
“It’s a matter of economics,” he said. “The advantage of hyper-converged is you can start small and grow in small increments. But some customers’ environments are already large and predictable in growth. By using shared storage you can get any ratio of CPU to disk. With hyper-converged, there is always a set ratio of CPU to disk. If you want massive amounts of storage with a small amount of CPUs for example, you would be better served by a traditional architecture.”
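Shneorson’s ratio argument is easy to make concrete. In the sketch below, the per-node core and capacity figures are hypothetical, not Dell EMC specs; the point is only that with a fixed CPU-to-disk ratio, whichever resource a workload needs more of drives the node count, and the other resource is overbought:

```python
import math

# Hypothetical HCI node: compute and storage always ship together.
HCI_NODE_CORES = 24
HCI_NODE_TB = 20

def hci_nodes_needed(cores_needed, tb_needed):
    """With a fixed CPU-to-disk ratio, the scarcer resource sets the node count."""
    return max(math.ceil(cores_needed / HCI_NODE_CORES),
               math.ceil(tb_needed / HCI_NODE_TB))

# A storage-heavy workload: modest compute, lots of capacity.
nodes = hci_nodes_needed(cores_needed=48, tb_needed=400)
print(nodes)                   # 20 nodes to satisfy 400 TB...
print(nodes * HCI_NODE_CORES)  # ...which buys 480 cores against a 48-core need
```

With shared storage, the same workload could pair two 24-core servers with a 400 TB array and avoid the stranded compute.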