Red Hat today rebranded its storage software platforms in hopes of clarifying their intended use cases.
Inktank Ceph Enterprise becomes Red Hat Ceph Storage and Red Hat Storage Server becomes Red Hat Gluster Storage. The scale-out open source platforms fall under a Red Hat Storage umbrella. Both storage platforms came to Red Hat through acquisitions. Red Hat picked up file-based Gluster in 2011 and added block and object storage vendor Inktank in May 2014. Since the Inktank acquisition, there have been questions about whether Red Hat would keep the two platforms separate or merge them.
Both will live on, Red Hat Storage director of product marketing Ross Turk said. Red Hat positions its Gluster Storage as best suited for enterprise virtualization (VMware), analytics (Hadoop and Splunk), and sync-and-share workloads. Ceph Storage is designed for OpenStack, object storage (Amazon S3-compatible) and other cloud infrastructure workloads. Both can play in the archiving and rich media markets.
“This represents a sea change in how we talk about our products, and we’re starting with the workloads,” Turk said. “We’re trying not to think of Gluster as file and Ceph as object and block. We’re trying instead to think of what the best architectural fit is for each of them.”
Put another way, Gluster is better suited to traditional scale-out file storage needs, while Ceph is aimed at public cloud storage developers and enterprises with large development teams.
“Ceph was built for the cloud and for building infrastructure,” Turk said. “Red Hat Gluster Storage is a completely different beast. It’s purpose built as a scale-out file store. Gluster was built to consolidate storage resources on a bunch of servers under a single namespace.”
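That single-namespace consolidation is visible in Gluster's own command line. As a hedged sketch (the hostnames and brick paths here are hypothetical, and the commands assume a working two-node GlusterFS install), pooling two servers' local disks into one volume looks like this:

```shell
# Hypothetical two-node GlusterFS setup: pool local disks on two servers
# into one distributed volume that clients mount as a single namespace.
gluster peer probe server2                        # join server2 to the trusted pool
gluster volume create bigvol server1:/data/brick1 server2:/data/brick1
gluster volume start bigvol
mount -t glusterfs server1:/bigvol /mnt/bigvol    # clients see one namespace
```

Files written under /mnt/bigvol are distributed across both servers' bricks, but clients never see the individual machines.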
Turk said the two platforms are licensed separately, and Red Hat has customers using both.
Condusiv Technologies this week launched Diskeeper Server software to defrag disks connected to SANs.
Wait, is fragmentation even a thing for SANs? You never hear SAN vendors or even customers talk about it. What are all those RAID schemes and expensive SAN controllers for anyway?
Condusiv insists fragmentation is a problem at the logical disk layer, whether the SAN vendors admit it or not. And that fragmentation is impacting performance of applications on physical servers.
“We’ve expanded our fragmentation technology beyond local storage to include SAN storage,” said Brian Morin, Condusiv’s senior vice president of global marketing.
That means Condusiv is expanding its focus from desktop and laptop PCs to servers. Condusiv has been around for 33 years, but was known as Diskeeper until 2012. It claims to have sold 45 million licenses, but with the PC market shrinking and flash becoming popular, the company is targeting SANs as its new frontier.
The software does not defrag disks in SAN arrays. Instead, it prevents files from being broken into pieces and written non-sequentially to hard disk drives or solid-state drives, heading off fragmentation before it becomes a problem.
Morin said Diskeeper Server can improve the way the Windows file system writes files to disk. “We monitor the system and see how data is being created,” he said. “Whenever a file is created or extended, Windows looks for a fixed size allocation because it doesn’t know how big the file is going to be. That causes extra I/O, or fragmentation. We give it the intelligence to say ‘This file is going to be this big.’ It finds the correct allocation and prevents that fragmentation from occurring in the first place.”
There can be a performance hit from the fragmentation prevention process, but Morin said that is less of a drag than the results of fragmented files.
“If Windows splits a file into 20 pieces, it’s writing 20 different I/O streams,” Morin said. “Windows sees that file as 20 separate pieces and issues I/O operations for every piece of that file. That’s a lot of I/O overhead to the physical server.”
Condusiv claims 73 percent of physical servers (about 2 million of them) are attached to SANs, and that fragmentation dampens their performance by 25 percent or more.
Diskeeper Server has a list price of $399.95 per server.
Quorum today hired John Newsom as CEO to help the disaster recovery as a service (DRaaS) startup navigate a rapidly growing market with a crowded field of competitors.
Newsom replaced Walter Angerer, who had served as Quorum’s CEO since November 2013 and moves into the executive chairman role.
Newsom has spent most of his 20-year career in tech at Quest Software as an executive in engineering, sales and product management. He joined Quest in 1996 when it was a $6 million revenue company and was still there when Dell acquired it for $2.4 billion in 2012.
Quorum sells several backup, DR and archiving appliances that can be implemented on-site or in the cloud. The vendor promises simple one-click DR. Newsom said in Quorum he sees a company with good products that needs to find partners to help bring it to market.
“Our product is strong,” he said. “We need to accelerate how we get into the market. I know I can radically help with that. We’re a simple solution, easy for partners to adopt and the most robust on the market.”
Quorum claims its revenue grew 600 percent over the past two years, and its channel reseller revenue grew by 200 percent. Newsom said Quorum has the potential to fill the role for DR as a service that ServiceNow.com did for IT service management and Salesforce.com did for CRM.
“We see people re-inventing old systems that have been around forever,” he said. “People say, ‘how can I do it with fewer dollars?’ The cloud is part of that.”
The cloud is now a major part of DR, and new cloud DR products and vendors appear frequently.
“Everybody is jumping in, from mom and pop startups and big players who recognize they have to be in this,” Newsom said. “I see an opportunity for us with bigger strategic players, instead of them having to re-create what we’ve done.”
Angerer will stay involved in Quorum as executive chairman, but gave up the CEO post because he is also CEO of Parsec Labs. Quorum briefly found a replacement last May when it hired Edward Sharp as CEO, but Sharp lasted only three months and is currently chief strategy and technology officer at PMC-Sierra.
Amid all of the financial numbers and strategy talk at the EMC Strategic Forum Tuesday, the vendor gave a detailed description of the first product that will spring from its 2014 DSSD acquisition. The target ship date for the flash appliance is late 2015.
The DSSD system is a 5U direct-attached storage box that uses proprietary PCIe flash modules to connect to multiple servers. Data moves from the flash modules to the application servers through a PCIe fabric. Each system has 36 flash modules, with ports on the back to connect up to 48 servers.
DSSD founder Bill Moore unveiled the appliance on stage at the Forum in New York and showed off its parts.
“PCI was not built for a shared fabric,” Moore said. “We built the world’s largest PCI shared fabric in the back [of the system].”
Moore said the advantage of shared storage is that it will improve performance and eliminate bottlenecks of the traditional PCIe approach that connects one server to storage. DSSD appliances will pool capacity as well as performance, he said.
He added that DSSD engineers took commodity NAND chips and controllers and designed their own flash module. He said the custom modules, built with EMC firmware, give the box 10 times the performance and five times the reliability of competitors using the same NAND.
On a day when EMC hyped all of its products – including those of its VMware and Pivotal companies – its execs saved their most lavish praise for DSSD.
Here’s how EMC Information Infrastructure CEO David Goulden introduced Moore’s demo: “What if we could give you something that was competitive on price points with dense and low-cost flash systems? What if it was orders of magnitude faster? Not hundreds of thousands of IOPS, but millions of IOPS. And what if the latency was a tenth that of other systems … we’re talking a few microseconds.”
Moore added, “There are three metrics for high performance – eye-watering, face melting and head exploding. This is bridging the gap between a face-melting and head-exploding appliance.”
Fortunately, we have at least six months to ice our faces and hold our heads to prepare for the DSSD appliance.
Other products EMC teased during the event included a software-only version of VNX that is in beta; new VCE Vblocks including ScaleIO storage, VMware NSX software-defined networking and eventually DSSD systems; and a VMware EVO:RACK system under the VCE brand. VMware CEO Pat Gelsinger said EMC will be the first EVO:RACK partner, with an announcement at VMworld in August. EVO:RACK is a rack-scale version of VMware’s VSAN hyper-converged storage that is now available in smaller EVO:RAIL bundles.
The enterprise storage market is splitting into two areas that are still indistinct but will become a lot less so. It’s important to understand this evolution if you are to make sense of storage vendor product strategy.
Before explaining this tectonic motion, it is necessary to define what I mean by enterprise storage.
When I talk about enterprise storage, I mean storage systems used by enterprises that traditionally have had their own data centers and have certain expectations about storage capabilities and characteristics. This segment includes two dynamics. One is current operations and critical business applications that must continue without interruption while meeting increasing demands. The other is a transition to IT-as-a-Service methodologies with private cloud implementations.
The traditional IT environment has relied on storage systems to provide resiliency and data protection. And as requirements for availability and business continuity have increased over time, high value capabilities have been designed into storage systems. This has made systems more sophisticated (also more complicated) and more expensive. The capabilities are enumerated in product data sheets.
Private cloud deployments mimic the approaches used in public clouds, where storage is more of a commodity and functionality is added in software at the application level. This means resiliency and availability do not depend on the underlying storage system, but must be built into the application or the software layer where it executes.
This second segment leads to opportunity for people designing storage systems. The opportunity is to provide storage platforms for this market that are different than the more sophisticated systems required to meet traditional IT demands. These systems can be as simple as enclosures with devices and control hardware with functionality that routes access.
While private clouds do not eliminate the need for sophisticated storage systems, they represent a set of evolving requirements and product offerings. This is where the “land grab” is on for storage vendors to establish market share. The change presents an opportunity for new storage vendors that can meet the cost and margin demands to enter the market, and allows storage consumers to buy storage that better fits their changing needs.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Violin Memory initiated a turnaround plan under a new CEO a year ago and still has little to show for it.
It’s been a busy year for Violin under CEO Kevin DeNuccio, with several new product rollouts to help the all-flash vendor overhaul its portfolio and remain competitive in a hot area. Still, Violin’s revenue for the fourth quarter of 2014 shrank from the previous year, as did its full-year revenue. Losses continue, and guidance for this quarter came in light.
The best thing you can say about Violin’s earnings results from Thursday was that the vendor lost less money last quarter than it did in the same quarter a year ago. But it lost twice as much in the fourth quarter of 2014 as it did in the third.
Here are the numbers:
• Fourth-quarter revenue of $20.5 million was down six percent from last year (excluding revenue from the discontinued PCIe flash business) and down 10 percent from the previous quarter. It also fell $3.5 million below analysts’ expectations.
• Fourth quarter loss was $46.8 million, compared to third quarter loss of $23.5 million and $56.5 million loss from fourth quarter of 2013.
• Full-year revenue of $79 million was down 27 percent from the previous year, and the full-year loss of $108.9 million was an improvement over the loss of $149.8 million in 2013.
• Guidance for this quarter was in the range of $21 million to $24 million, with the high end hitting analysts’ expectations.
DeNuccio said he remains optimistic that he can complete what he calls “one of the greatest technology transitions” he has ever seen. That optimism is based on two factors: Violin’s newly updated Flash Storage Platform (FSP) and the rising wave of flash storage.
“While the fourth quarter was technically challenging, it is important to recognize that we have built a strong foundation in our people, our technology and our financials,” DeNuccio said on Thursday’s call with analysts. “We are not just in a better, stronger place, but we are well positioned at the forefront of one of the greatest technology transitions I’ve witnessed in my career.”
DeNuccio blamed the disappointing fourth quarter on customers holding off on buying until they can test the new FSP systems. He said he expects to increase revenue 10 percent every quarter this year, but gave no timetable for turning a profit.
Violin upgraded its Concerto operating system a month ago, adding features such as block-level inline deduplication and compression, snapshots, clones, replication, CDP and better management features. The goal is to make Violin’s all-flash arrays better suited for primary storage rather than just high-performance use cases.
Violin executives say the average selling price for the new primary storage arrays will be up to four times that of Violin’s previous platform. CFO Cory Sindelar estimated the new average price will be in the $500,000 to $1 million range, up from $200,000 to $250,000 for its old systems.
But Violin faces tough competition for those dollars in the flash market, mainly from EMC, IBM and well-funded, privately held Pure Storage. None of those vendors is backing off its dedication to flash, and neither are the other mainstream storage vendors or startups such as SolidFire. And new flash platforms are still coming into the market, such as SanDisk’s InfiniFlash, launched this week.
So that great tech transition is hardly a sure bet.
Storage revenue rebounded in the fourth quarter of 2014, no thanks to the large vendors.
Total storage revenue grew 7.2 percent to nearly $10.6 billion and external (networked) storage increased 3.4 percent to $7.15 billion, according to IDC’s worldwide quarterly disk storage tracker.
For the year, storage spending rose 3.6 percent to $36.2 billion and total capacity increased 43 percent to 99.2 exabytes.
That comes despite a year-over-year decline of nearly seven percent in overall storage spending in the first quarter of 2014, and two quarters of external storage declines during the year.
No. 1 EMC’s revenue growth of 3.3 percent to $2.35 billion decreased its market share from 23.1 percent to 22.2 percent in overall storage from a year ago. No. 2 Hewlett-Packard increased revenue 4.8 percent to $1.46 billion but its share dropped from 14.1 percent to 13.8 percent. No. 3 Dell increased revenue 5.2 percent to $952 million. That put Dell barely above IBM, whose revenue dropped 23.8 percent to $951 million. Dell and IBM each had nine percent market share. No. 5 NetApp’s revenue declined 3.5 percent to $764 million, and its market share dropped from eight percent to 7.2 percent.
So how did the market grow if the combined revenue of the top five vendors declined? Smaller vendors (IDC’s “Others” category) and original design manufacturers (ODMs) who sell directly to hyper-scale data centers led the rally.
Other vendors increased revenue 20.2 percent to $2.74 billion and grew share from 23.1 percent to 25.9 percent. ODMs increased revenue by 39.4 percent to $1.36 billion and grew market share from 9.9 percent to 12.8 percent.
In external storage, EMC grew 3.3 percent to $2.35 billion to remain at 32.9 percent market share. No. 2 IBM dropped 7.2 percent year-over-year to $837 million to fall from 13 percent to 11.7 percent share and No. 3 NetApp decreased 3.5 percent to $764 million. No. 4 HP grew 3.3 percent to $689 million and 9.6 percent share. HDS’ 3.5 percent increase to $577 million left it at 8.1 percent share. The revenue from other storage vendors increased 12.3 percent to $1.93 billion and jumped from 24.9 percent to 27 percent market share.
IDC research director Eric Sheppard attributed the fourth-quarter growth to year-end seasonality, demand for mid-range systems with flash capacity and hyper-scale data center storage.
Dell led in total storage capacity shipped during the year at 10.85 exabytes, followed by HP with 9.98 exabytes and EMC with 9.96 exabytes.
Western Digital, which has been collecting flash-related startups over the past few years, today beefed up its archiving portfolio by acquiring object storage vendor Amplidata.
Amplidata will become part of Western Digital’s HGST subsidiary. The deal is expected to close this month. Western Digital did not disclose the acquisition price.
Amplidata claims its Himalaya object storage software can handle zettabytes of data and trillions of stored objects under a single global namespace. The software was called AmpliStore until Amplidata re-branded it as Himalaya in June 2014.
Western Digital invested $10 million in Amplidata last year, and Amplidata was a joint development partner in HGST’s Active Archive platform. HGST sees Active Archives as large repositories of recently created data that needs to be accessed at least occasionally, unlike cold archived data that is rarely accessed.
Amplidata also has an OEM deal with storage systems vendor Quantum, which uses Amplidata in its Lattus and StorNext archiving products. It appears that relationship will continue, as Western Digital’s press release today quoted Quantum CEO Jon Gacek saying he was “excited about the acquisition” and is looking forward to new partnership opportunities.
Until today, HGST has concentrated more on storage performance products than archiving with its acquisitions. It bought all-solid state drive array vendor Skyera in December 2014, after picking up NAND controller vendor sTec, PCIe-based flash card vendor Virident Systems and application acceleration vendor VeloBit.
Who’s next for Western Digital/HGST? Keep an eye on Avere Systems. Western Digital led a $20 million investment round in Avere last July, Avere is also an Active Archive partner and its NAS acceleration appliances incorporate flash.
Nimble Storage exceeded expectations with $68.3 million in revenue in its fiscal fourth quarter, helped by a new wave of Fibre Channel-based arrays that factored into about 10% of the company’s bookings.
But, the San Jose, California-based storage startup’s stock fell after Thursday’s fiscal 2015-ending earnings call, as financial analysts ratcheted down estimates in response to Nimble’s fiscal 2016 first-quarter guidance. Nimble predicted that revenue will be roughly flat, at $68 million to $70 million, and operating losses will range from $9 million to $10 million.
“Our guidance accounts for the seasonality effects of Q1,” said Nimble CFO Anup Singh. He noted the first quarter is typically the industry’s slowest.
Singh insisted that Nimble remains on target to break even by the end of the fiscal year, on Jan. 31, 2016. He said the company expects similar operating losses in the first and second quarters before improvement in Q3 and breakeven in Q4. He confirmed the nonlinear progression was purely a function of operating expenses – not slowing revenue or gross margin – in response to a question from a Wells Fargo analyst.
Nimble specializes in hybrid arrays combining solid-state and hard-disk drives. During the past year, the company introduced new capabilities such as Adaptive Flash, InfoSight performance monitoring, scale-out clustering, Triple Parity RAID and Fibre Channel (FC) storage networking.
The $68.3 million in fourth-quarter revenue marked a 15.5% increase over the fiscal 2015 third quarter and 64% growth over the fiscal 2014 fourth quarter, when Nimble began operating as a public company. The $228 million in fiscal 2015 revenue was 81% higher than the prior fiscal year’s $126 million.
The average selling price per deal hit record levels in the fourth quarter, with a significantly higher share of bookings exceeding $100,000 and $250,000, according to Nimble CEO Suresh Vasudevan.
Vasudevan said Nimble added 650 new customers during the fourth quarter and had an installed base approaching 5,000 by the end of fiscal 2015. He claimed 83 Fibre Channel customers were on board by Jan. 31, after the company added support for the storage networking technology in November.
“As we had anticipated, Fibre Channel is helping to increase deal sizes and is helping drive large enterprise penetration,” said Vasudevan. He said the pace of FC adoption exceeded expectations, and more than 70% of the FC customers were net new for Nimble.
Vasudevan cited an example of an unnamed Fortune 100 customer that spends tens of millions of dollars on storage and had been using technology from an industry-leading legacy vendor. “Without Fibre Channel, we would not even have made the consideration list,” he said.
Another area of customer growth for Nimble was the SmartStack pre-validated reference architectures for combining technology from Cisco and software vendors such as Microsoft, VMware, Citrix and Oracle into a converged infrastructure. Vasudevan said the SmartStack customer base grew threefold between fiscal 2014 and fiscal 2015.
“The frequency with which we are competing against hyper-converged vendors has increased . . . but it still represents a very small single-digit percentage of our total frequency,” said Vasudevan. “More often than that what we tend to see is competition against the likes of FlexPod or VCE.”
Vasudevan said Nimble’s Adaptive Flash, which can “dial the ratio of flash from very low to very high levels,” allowed the company to compete in twice as many all-flash array environments in the fourth quarter as in the prior quarter.
“Our win rates against those were higher by a decent margin than what we had seen ever before,” he said.
Vasudevan cited the company’s InfoSight-led support as a driver of repeat deployments. He said four major global companies addressed Nimble’s sales team at a kickoff event this month and told them InfoSight was “game-changing” in their day-to-day operational management of storage.
Several financial analysts asked about Nimble’s product roadmap during the earnings call. Vasudevan did not provide specifics other than to say the company planned to focus on differentiation through technologies such as its Adaptive Flash, file system and InfoSight and through integration efforts with alliance partners.
Joe Wittine, a senior equity research analyst at Longbow Research, said he heard deduplication is in the works and asked about the level of customer demand for the data-reduction technology. Vasudevan said dedupe can help in virtual desktop infrastructure (VDI) environments where there are hundreds of desktop images that look the same. He said one option to reduce space is dedupe and another is zero-copy cloning, which Nimble supports.
“In some of those modes, deduplication can help you optimize cost even more. I won’t comment specifically on timing,” said Vasudevan.
IBM has finally hopped onto the bandwagon of solid-state array vendors that use multilevel cell (MLC) NAND technology and guarantee the read/write endurance of flash modules. Those changes came after lots of behind-the-scenes work.
Engineers from the company’s Texas Memory Systems acquisition and IBM researchers from Zurich and other locations combined to develop the new FlashCore technology at the heart of the FlashSystem V9000 and 900 arrays.
As only a small number of vendors do, IBM buys NAND chips and makes the modules that go into its all-flash arrays (AFAs). But last year IBM was the only AFA vendor to make flash drives using enterprise MLC flash (eMLC). In the summer, IBM Fellow and CTO Andrew Walls said eMLC was an important part of IBM’s strategy, bringing a 10x improvement in endurance over typical MLC-based solid-state drives (SSDs).
Last week, Walls said, “Our design goal with the FlashCore technology, with our advanced flash management, was to take endurance out of the equation. You simply run it and don’t worry about it.”
Flash can wear out over time due to the program/erase process for writing data to NAND chips. All the bits in a flash block need to be erased before a write takes place. The program/erase process eventually breaks down the oxide layer that traps electrons in floating-gate transistors, leading to errors. The industry’s wear-out figures for eMLC flash are about 30,000 program/erase cycles and, for MLC, 10,000 or even as few as 3,000 cycles.
But anecdotal evidence is mounting that flash is not wearing out as once feared.
“It’s not happening at all,” asserted Gartner Research VP Joe Unsworth, speaking at IBM’s FlashSystem launch event last week. “We see very few failures of drives period, and of course, let’s not forget SSDs fail predictably. So, you can see as this occurs. Right now, we’re seeing about every six months, 2% to 4% flash wear across the solid-state array. That’s not much at all.”
Plenty of vendors have worked hard to improve the endurance of flash. Here’s a glimpse of what Walls said IBM did to improve the endurance of its MicroLatency flash modules without sacrificing performance or low latency.
—Collaborated with Micron, which provided the interface to the “inner workings of the flash,” enabling IBM to monitor and control the flash and change read thresholds.
—Set up a characterization lab in Poughkeepsie, New York, to test flash devices and observe how flash blocks behave as engineers tried different error correcting code (ECC) and garbage collection algorithms and other techniques.
—Developed an ECC algorithm that Walls said allows IBM to correct a high bit error rate and read data only once. “That is a significant step forward. It also allows us to stay in FPGA technology, and it is an algorithm that allows us to get extremely good endurance,” he said.
—Developed health binning and heat segregation technology instead of using the symmetric wear-leveling algorithms that Walls said ensure all cells handle about the same amount of writes in typical SSDs.
“When you do that, unfortunately the endurance of your flash is now going to be determined by your weakest cells, because they’re going to get punished the same as all the rest, and you will wear out depending on that,” he said.
Walls compared IBM’s approach to pack mules in the Grand Canyon carrying loads of 50 pounds, 100 pounds or 200 pounds to enable them to do the job with half the number of animals.
“We monitor the health and assess the health of each flash block as they age, and we determine and grade each of the flash blocks. The flash blocks that are the healthiest [are] going to get the hottest data,” Walls said. He said, as flash blocks age and get weaker, they handle colder data.
“That technique alone has given us a 57% improvement in endurance in most typical workloads,” he said.
Walls claimed that IBM reduced write amplification by up to 45% by grouping like heat levels.
One result of IBM’s efforts is a new FlashSystem Tier 1 Guarantee, which includes “MicroLatency” performance and read/write endurance for up to seven years, as long as the system is under warranty or maintenance.
That brings IBM up to par with other all-flash array vendors. In June 2014, when SearchStorage.com published a guide to 15 all-flash arrays, IBM was the only vendor that would not replace flash modules if they wore out before the warranty expired. Dell’s Compellent all-flash model noted a caveat that the SSDs had to be within the “rated life” period. None of the other vendors mentioned any restrictions, although the length of their guarantees varied.