Red Hat doubled down on open source storage software today when it acquired startup Inktank for $175 million. Inktank’s Ceph software gives Red Hat object and block storage to go with the GlusterFS-based file storage that Red Hat acquired in 2011.
Ceph is an open-source scalable distributed file system created by Sage Weil, who founded Inktank in 2012 and is its CTO. Inktank began selling Inktank Ceph Enterprise as a subscription-based storage-only product in November 2013. In February, Inktank upgraded Ceph Enterprise to 1.1 and received formal certification for the Red Hat Enterprise Linux OpenStack platform.
Inktank had $14.4 million in venture funding.
According to the FAQ:
“By aligning two leading open source communities, Red Hat can offer its customers a very competitive alternative to traditional proprietary storage systems from companies like EMC. Given the size of the storage opportunity, increasing the Red Hat investment in this area made a lot of sense, especially considering Inktank’s strong position with OpenStack.”
The FAQ said Red Hat will continue to sell and support Ceph’s products under the Red Hat brand, and will develop a roadmap to deliver compatible products for file, block and object storage. Red Hat said it will continue to support the Inktank development community.
In a blog on the Ceph community web site, Weil wrote that “Red Hat is one of only a handful of companies that I trust to steward the Ceph project. When we started Inktank … our goal was to build the business by making Ceph successful as a broad-based, collaborative open source project with a vibrant user, developer, and commercial community. Red Hat shares this vision.”
Weil wrote that the deal will require Inktank to change one part of its product strategy: because Red Hat favors a pure open source model, Inktank will open source its Calamari monitoring and diagnostics tool, which is currently proprietary.
Red Hat expects the acquisition to close in May.
On a webcast to discuss the deal, Weil said he would join Red Hat to run the Ceph initiative. Weil and Red Hat CTO Brian Stevens said it was too soon to say whether Ceph will remain a standalone product or be bundled with other Red Hat software, or when we might see an open source Calamari.
CommVault missed its revenue expectations last quarter, a notion CEO Bob Hammer found especially frustrating because he sees great opportunity for the backup vendor to flourish. He also sees silver linings in CommVault’s impending Simpana product upgrade and the cloud.
CommVault’s revenue of $157 million last quarter increased 13 percent over last year but fell about $3 million short of Wall Street’s forecast. Hammer blamed the shortfall on a failure to close big deals, particularly in North America. And he blamed that partly on the vendor moving sales resources to the cloud and other parts of the world, and on the distraction of winding down its Dell partnership.
CommVault executives say their issues are short-term, and maintain the company is on track to become a billion-dollar revenue player (its revenue for the just-completed fiscal year was $586 million). They said enterprise deals ($100,000 and up) did not fall through because customers went to competitors, and some have closed this quarter.
Hammer said he sees great potential for CommVault. The data protection space is wide open with the cloud changing market dynamics, Symantec plodding along without a CEO or clear direction and smaller vendors such as Actifio gaining momentum.
“Despite the weak quarter in the Americas, my confidence on the business in general is the highest it’s ever been,” Hammer said Friday on CommVault’s earnings call. “So I’m really confident that there is extremely high probability that if we get the execution pieces in place, we’re going to hit numbers.”
Later on the call, he added of the poor quarter, “You can tell from my voice, obviously, that it’s an execution issue and fundamentally pisses me off. So instead of fooling around with it, we said we’re going to hit this with a damn sledgehammer. So we put the engine in place to solve that problem. That’s pure execution.”
Hammer pointed to a massive shift to the cloud as part of his reason for optimism. He said approximately 200 service providers use Simpana for data protection, and CommVault will continue to invest heavily in cloud technologies.
Another reason for optimism is Simpana 10 R2, a major upgrade to CommVault’s backup and data management application due in July. “This release will include enhancements to core data management protection, particularly in the areas of virtualization, archiving and snapshot and replication management,” Hammer said.
He added the upgrade will include “new technology to securely and automatically move data to the cloud, in the cloud, and cloud-to-cloud, a standalone mobile solution with added capabilities for document sharing and data loss prevention … new solutions for operations management and intelligence and operations analytics … the ability to economically recover, use, replace and browse data in live native format and virtualized environments providing the capability to immediately restore, copy, back data into a usable state.”
Hammer revealed CommVault is preparing integrated appliances for archiving and cloud gateways that will involve partners. He said these appliances will be “engineered by CommVault and built on commercially available servers and storage.”
NaviSite, a subsidiary of Time Warner Cable, offers managed cloud services via its four colocation facilities in the United States and England. NaviSite has supported Actifio Copy Data Storage (CDS) systems for some time, with customers placing an Actifio device on-premises and one in a NaviSite colocation facility for data protection. It also uses EMC Atmos for cloud storage.
NaviSite now hopes to expand its customer base by tapping into EMC Data Domain users.
“There are a considerable number of customers in the Data Domain installed base,” said Chris Patterson, NaviSite’s vice president of product management. “The Vault does like-for-like replication from one array to another. We thought it was best to support the more popular array out there.”
Patterson said NaviSite has a large Data Domain system at its data center. It sells customers smaller Data Domain devices to place on premises, or customers can use Data Domain appliances they already own. As with the Actifio system, the on-premises Data Domain device acts as a target for applications. The data is then replicated to the system in NaviSite’s public cloud.
“If a customer has a lot of data, more than 10 or 20 terabytes, we ship a small Data Domain to them so they can sync up (the data) on their site and they can send it back to us,” Patterson said.
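Patterson’s 10 TB to 20 TB threshold makes intuitive sense when you estimate how long the initial sync would take over the wire. A rough sketch (the 100 Mbps link speed and 80 percent utilization are illustrative assumptions, not NaviSite figures):

```python
def wan_transfer_days(data_tb, link_mbps, efficiency=0.8):
    """Estimated days to push a dataset over a WAN link at a given utilization."""
    bits = data_tb * 10**12 * 8              # decimal terabytes to bits
    usable_bps = link_mbps * 10**6 * efficiency
    return bits / usable_bps / 86400         # 86,400 seconds in a day

# Seeding 20 TB over a 100 Mbps link would tie it up for weeks,
# which is why shipping a physical appliance and syncing locally is faster.
print(f"{wan_transfer_days(20, 100):.1f} days")  # roughly 23 days
```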
Hitachi Data Systems wasn’t the only vendor to launch a new enterprise SAN array this week. Hewlett-Packard also brought out its XP7.
That’s not a coincidence, because the XP7 and the HDS Virtual Storage Platform (VSP) G1000 use the same hardware architecture. HP has been licensing the technology from Hitachi for 15 years, and brings out its enterprise arrays at the same time as HDS.
The dual rollout raises two questions for HP: what’s the difference between the two arrays, and how does the XP7 fit alongside HP’s own 3PAR StoreServ family in HP’s enterprise strategy?
HP people can be touchy about the first question. They refer to the HDS deal as a technology partnership rather than a straight OEM deal. As HP storage and social media expert Calvin Zito wrote on his blog this week:
“One of my big pet peeves is when people say that we rebadge an HDS array. That couldn’t be further from the truth and I dare say that HP has made far more contributions to the XP platform over the years because of the technology agreement with Hitachi Ltd.”
HP brings its software and firmware to the array, adding features such as Performance Advisor, HP advanced clustering features and integration with HP servers. However, these features are overshadowed by Hitachi’s storage virtualization capabilities that allow the arrays to support systems from any major storage vendor and the new Hitachi flash modules that also work with the XP7.
Most of HP’s storage focus is on the 3PAR platform, which spans from the midrange into the enterprise. That is HP’s best-selling storage system and fits into large implementations, so why sell the XP7 to compete against itself? The simplest reason is mainframe connectivity. The XP7/VSP G1000 IP goes back to the days when large storage systems were almost always connected to mainframes. 3PAR, which came along in the early 2000s, was designed for web and cloud hosting companies.
“We recognize that customers and the industry is going through a transformation from reliable robust legacy applications to new styles of IT-as-a-service, cloud-based virtual environment,” said Kyle Fitze, HP’s director for XP storage. “Customers in some cases can’t move that fast because there are challenges around business needs and the ability to introduce new technology in a seamless way.
“XP is for conservative mission critical customers with expectations of high performance and low latency. StoreServ is for customers re-architecting their data centers, changing applications and moving to a services-oriented model.”
While EMC exceeded its overall revenue forecast for last quarter, its storage revenue was a bit disappointing. EMC’s storage revenue made up $3.68 billion of the company’s $5.5 billion in total revenue (which counts VMware, RSA, Pivotal and other sources). Storage revenue declined three percent from last year, mainly because of a tough quarter for EMC’s VMAX enterprise storage array.
VMAX sales dropped 22 percent last quarter from the previous year. Revenue from the rest of EMC’s storage portfolio actually increased six percent, but VMAX is its largest and most expensive platform. The pattern was similar to what IBM reported last week, when its enterprise DS8000 dragged the entire storage hardware group to a 23 percent drop from last year.
As he did in January after EMC missed its forecast for 2013, EMC CEO Joe Tucci spoke of changes in spending patterns in the IT world today. Tucci said those changes bring challenges in the short term while raising long-term opportunities for vendors who get it right.
“The Information Technology industry is going through a major transformation, a secular shift from the client/server PC era of computing to a mobile, cloud, big data, social networking era,” Tucci said on EMC’s earnings call. “As we navigate through this transition, we and the rest of the industry are facing a global market which is exhibiting an air of caution in spending, resulting from an array of economic and political uncertainties around the world. Collectively, these two factors are creating an environment that is not for the faint of heart.”
David Goulden, CEO of EMC’s Information Infrastructure, blamed the VMAX decline on “math factors” (last year was a strong year for VMAX, making for a tough comparison, and changes in EMC’s order fulfillment process resulted in a larger product backlog) and on the product cycle. The VMAX, like IBM’s DS8000, is due for an upgrade, and customers could be waiting for that before they buy their next one. Hitachi Data Systems and Hewlett-Packard upgraded their high-end arrays this week, putting pressure on their rivals. “We do have a refresh plan during the year,” Goulden said. “I won’t tell you exactly when. We don’t want to impact our own business more than we have to, but that certainly is a factor.”
EMC reported better results for its “emerging storage” category. Revenues for that group increased 81 percent year-over-year, although that growth is less impressive when you consider emerging products such as the XtremIO all-flash array and ViPR software-defined storage were not even selling yet a year ago. The emerging storage group also includes Isilon clustered NAS and Atmos object-based cloud storage. Taken together, the technologies in that group could determine the future of EMC storage.
Other news from the earnings call:
• The all-flash XtremIO array picked up “dozens” of new customers and more than 70 percent of VMAX and VNX2 midrange systems shipped with some flash capacity. EMC said it sold more than 17 PB of flash in the quarter, up 70 percent year-over-year. Goulden said EMC added 20 TB XtremIO systems in the quarter and has a “very aggressive roadmap this year” to expand the flash platform and integrate it with other EMC products.
• VCE Vblock converged appliances that EMC sells in partnership with Cisco and VMware grew 50 percent year-over-year with most of the units bought by new customers.
• Goulden said Data Domain backup appliances “had another excellent quarter” but did not provide specific numbers. EMC’s total backup and recovery revenue grew four percent year-over-year.
• Syncplicity file sharing software revenue more than doubled year over year.
• EMC estimated that more than $2 billion in revenue in 2013 came from cloud providers.
When Pure Storage pocketed $150 million in funding last August, CEO Scott Dietzen said that gigantic round would fuel rapid growth for the all-flash array vendor in the face of increasing competition from EMC and other large storage vendors.
Apparently, the $150 million wasn’t enough to fund that growth spurt. Today Pure closed an even bigger round, picking up another $225 million to bring its total funding to $470 million. That’s either a lot of growth or a lot of money being burned.
In his blog today and in an interview with Storage Soup, Pure Storage president David Hatfield explained why the company went back so soon for so much money. He said it wasn’t out of necessity because Pure has not yet spent most of its last round and could be cash-flow positive if the leadership team wanted that. But Pure wants to keep growing its engineering, international sales breadth, brand support and channel.
Hatfield said current and new investors were eager to pump more money into Pure, so Pure took it.
The title of Hatfield’s blog includes the term “Building a War Chest,” and that tells you what you need to know about the all-flash storage market today. EMC, NetApp, IBM, Hitachi Data Systems (HDS), Hewlett-Packard and Dell are all pushing flash either in hybrid or all-flash systems. Then there are the all-flash pioneers such as Pure, Nimbus and Violin Memory vying to push spinning disk out of the enterprise. It’s easily the most competitive storage market today.
As for growth, Hatfield said Pure is “adding two or three people a day,” including new members of its large executive team. On the product front, Pure is in beta with replication, its major missing software piece. Hatfield said there are plans to continue to scale up the platform to reach hundreds of TB on a system, increase interoperability with third-party software applications and move beyond tier one storage.
On the customer front, Pure claims its revenue grew 700 percent in 2013 over 2012 and has been increasing more than 50 percent sequentially each quarter. Pure said it shipped more than 1,000 FlashArrays in 2013.
Hatfield said despite the large vendors’ talk about flash and their new all-flash systems, they are still committed to spinning disk while Pure is pure flash. “EMC would rather sell a $1.5 million VMAX instead of a $300,000 [all-flash] XtremIO,” he said. “We’re competing with hybrid models. They’re selling disk first, then flash as a tier. We have a two-plus year lead on technology. As [legacy vendors] try to close the technology gap, they have a business dilemma. Their multi-billion dollar disk franchise is at risk. We have the ability to attack it, and not feather in flash as a performance tier.”
They have a huge war chest to fund that attack. The latest round included new investor Wellington Management Company as well as previous investors T. Rowe Price Associates, Tiger Global, Greylock Partners, Index Ventures, Redpoint Ventures, and Sutter Hill Ventures.
Micron Technology unveiled the M500DC SATA solid state drive (SSD), targeting both mission-critical storage and cloud-based Web 2.0 storage. The company is trying to grab more of the data center market by appealing to cost-conscious customers as well as those who want performance and endurance.
The M500DC, which is part of Micron’s M500 portfolio, is designed for endurance to handle transactional databases, virtualization, big data and content streaming. The M500DC SATA SSD is built on the company’s MLC NAND flash technology and custom firmware. It’s integrated with Micron’s Extended Performance and Enhanced Reliability Technology (XPERT) feature suite, which is an architecture that integrates storage media and controller to extend drive life to meet demanding data center workloads.
“This product casts a wide net,” said Matt Shaine, Micron’s product marketing manager for enterprise SSD. “Our customer use cases for this product are all over the map in terms of capacity, performance and endurance at an attractive price point. Our data center customers give us a lot of feedback on requirements, and they essentially make up two groups.”
Shaine said one group looks more at the affordable price than at features, performance and endurance. The other group values the mixed use of random performance, full data protection and data-path protection.
The SSD combines a 6Gbps Marvell controller with Micron’s 20-nm MLC NAND. There are some server-type features such as die-level redundancy for physical flash failures, onboard capacitors for power-loss protection and advanced signal processing to extend the life of the NAND.
The SSD comes in both 1.8-inch and 2.5-inch form factors and in 120 GB, 240 GB, 480 GB and 800 GB capacities. The 800 GB model delivers sequential reads of 425 MBps and sequential writes of 375 MBps, with random read performance of 65,000 IOPS and random write performance of 24,000 IOPS, and a rated write endurance of 1.9 PB.
The 480 GB model carries the same 1.9 PB write endurance and the same 425 MBps sequential read and 375 MBps sequential write performance; its random reads run at 63,000 IOPS and random writes at 35,000 IOPS.
Micron claims the new SSD can sustain one to three full drive writes per day over five years, reducing the need to replace drives frequently.
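That claim can be sanity-checked with a quick drive-writes-per-day calculation using the endurance and capacity figures quoted above (a sketch; the five-year period is the drive’s stated lifetime):

```python
GB = 10**9   # decimal units, as drive vendors quote them
PB = 10**15

def drive_fills_per_day(endurance_bytes, capacity_bytes, years=5):
    """Total rated write endurance spread evenly over the drive's lifetime."""
    return endurance_bytes / (capacity_bytes * years * 365)

# 800 GB and 480 GB M500DC models, both rated for 1.9 PB of writes
print(round(drive_fills_per_day(1.9 * PB, 800 * GB), 2))  # about 1.3 fills/day
print(round(drive_fills_per_day(1.9 * PB, 480 * GB), 2))  # about 2.17 fills/day
```

Both figures land inside Micron’s one-to-three range, with the smaller drive naturally sustaining more fills per day from the same endurance budget.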
“This is a more rugged drive that can handle longer workloads,” Shaine said.
Conversations at the recent National Association of Broadcasters (NAB) conference led me to the conclusion that object storage is becoming a common denominator between block and file storage within companies in this vertical market. I noticed a separation in the storage systems used for different business groups in a company.
That separation is happening because storage systems have different requirements among the groups. Block storage has a variety of performance, capacity, and resiliency needs. File storage — either on block storage systems or on NAS systems — differs in scale, performance and economics. The businesses have evolved separately, and the accounting for storage expenses has never moved to a service model.
Broadcasters at the conference talked about using object storage to build a hybrid cloud or private cloud. The distinction between hybrid and private cloud was that hybrid clouds also include the use of public clouds.
The different use cases mirror those that other industries faced when they previously deployed object storage systems. Broadcast and entertainment companies use object storage for content distribution, content repositories and to share data with file sync-and-share software along with high-performance file transfer software.
Ultimately, there were no real differences in the characteristics of the needs from the different groups. Their storage characteristics include massive scale of capacity and number of files stored. Object storage has the capabilities to address these needs, and can be deployed as a common solution to provide economies both in the acquisition and in the operational costs. And the object storage system could be deployed as a service, charging users through a capacity-on-demand model. The economics overcame traditional parochialism.
This could be thought of as “technology as the unifier.” Not exactly, though, because there remains the need for “special usage” storage to satisfy other needs. Block systems and NAS systems with certain characteristics are still required, and that is unlikely to change much. So it could be said that object storage is the common denominator for meeting new storage demands.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Data analytics and security vendor Splunk made it easier to use its software with NetApp and VMware with the latest version of its Splunk App for VMware.
The San Francisco-based Splunk’s software collects data from applications, operating systems, servers and storage, and uses the data for operational intelligence.
The upgraded Splunk App for VMware provides an automated drill-down into data from the NetApp Data Ontap operating system in VMware environments.
Splunk correlates and maps data across virtualization and storage tiers to handle storage latency and capacity problems.
Leena Joshi, Splunk’s senior director for solutions marketing, said Splunk singled out NetApp ONTAP because of the company’s open APIs and because “a lot of our customers have NetApp installations.
“We already supported NetApp but what we have done is made the process automated,” Joshi said. “We just made it easier. We have taken advantage of (NetApp’s) open APIs to map VMDK file names to NetApp Ontap.”
The app provides capabilities such as analytics for root-cause discovery, capacity planning and optimization, chargeback, outlier detection, troubleshooting, and security intelligence. It also helps forecast future CPU, memory and disk requirements for VMware vCenter and ESXi hosts.
This week, two large vendors rolled out cloud DR. VMware added Disaster Recovery to its vCloud Hybrid Service (vCHS), and IBM added its Virtual Server Recovery (VSR) DR service to its SoftLayer cloud.
VMware has had DR on its roadmap since it launched VMware vCloud Hybrid Service in late 2013. The vendor maintains five data centers in the U.S. and U.K. for the service.
Customers install a virtual appliance on-site, and use VMware’s data centers to replicate and fail over VMDKs. VMware said it can deliver a 15-minute recovery point objective (RPO), and subscriptions start at $835 a month for 1 TB of storage. Customers pick which data center location they want to use. The service includes two free DR tests per year.
“We identified DR as one of the key canonical uses of the hybrid cloud,” said Angelos Kottas, director of product marketing for VMware’s Hybrid Cloud unit. He added there is a “pent-up demand for a public cloud service optimized by the hybrid cloud.”
IBM will make its three-year-old Virtual Server Recovery (VSR) service available on its SoftLayer cloud for the first time. IBM claims it can recover workloads running on Windows, Linux and AIX servers within minutes.
Carl Brooks, a 451 Research analyst, said VMware is playing catchup to Amazon and other cloud services while IBM is shifting its business model with the new DR services.
“IBM is doing this now with SoftLayer,” he said. “It shows that IBM is changing its business model to include the cloud rather than traditional data center infrastructure, which is anti-cloud. It’s still on the Big Blue environment, still using Tivoli management software, but now SoftLayer is driving it.
“It’s business as usual but better for IBM. For VMware, it’s a new frontier.”