Storage Soup

February 27, 2015  5:42 PM

IBM sheds light on work to boost flash endurance

Carol Sliwa

IBM has finally hopped onto the bandwagon with solid-state array vendors that use multilevel cell (MLC) NAND technology and guarantee the read/write endurance of flash modules. Those changes came after lots of behind-the-scenes work.

Engineers from the company’s Texas Memory Systems acquisition and IBM researchers from Zurich and other locations combined to develop the new FlashCore technology at the heart of the FlashSystem V9000 and 900 arrays.

As only a small number of vendors do, IBM buys NAND chips and makes the modules that go into its all-flash arrays (AFAs). But last year IBM was the only AFA vendor to make flash drives using enterprise MLC flash (eMLC). In the summer, IBM Fellow and CTO Andrew Walls said eMLC was an important part of IBM’s strategy, bringing a 10x improvement in endurance over typical MLC-based solid-state drives (SSDs).

Last week, Walls said, “Our design goal with the FlashCore technology, with our advanced flash management, was to take endurance out of the equation. You simply run it and don’t worry about it.”

Flash can wear out over time due to the program/erase process for writing data to NAND chips. All the bits in a flash block need to be erased before a write takes place. The program/erase process eventually breaks down the oxide layer that traps electrons in floating-gate transistors, leading to errors. The industry’s wear-out figures are about 30,000 program/erase cycles for eMLC flash and, for MLC, 10,000 or even as few as 3,000 cycles.
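The way those cycle ratings translate into device lifetime can be sketched with some simple arithmetic. The sketch below is illustrative only; the capacity, write-rate and write-amplification figures are assumptions, and real lifetimes depend on the controller and workload.

```python
# Toy endurance estimate: how long a flash device lasts at a given write rate.
# Capacity, daily write volume and write amplification are illustrative values.

def drive_lifetime_years(capacity_gb, pe_cycles, write_amplification,
                         host_writes_gb_per_day):
    """Estimate lifetime from rated program/erase cycles.

    Total NAND writes the device can absorb = capacity * P/E cycles.
    Host writes are inflated by write amplification before they reach NAND.
    """
    total_nand_writes_gb = capacity_gb * pe_cycles
    nand_writes_per_day = host_writes_gb_per_day * write_amplification
    return total_nand_writes_gb / nand_writes_per_day / 365

# eMLC rated ~30,000 cycles vs. MLC at ~3,000 (figures from the article)
emlc = drive_lifetime_years(400, 30_000, 3.0, 500)
mlc = drive_lifetime_years(400, 3_000, 3.0, 500)
print(f"eMLC: {emlc:.0f} years, MLC: {mlc:.0f} years")  # eMLC: 22 years, MLC: 2 years
```

The 10x cycle rating flows straight through to a 10x lifetime difference, which is why eMLC commanded a premium for write-heavy enterprise workloads.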

But anecdotal evidence is mounting that flash is not wearing out as once feared.

“It’s not happening at all,” asserted Gartner Research VP Joe Unsworth, speaking at IBM’s FlashSystem launch event last week. “We see very few failures of drives period, and of course, let’s not forget SSDs fail predictably. So, you can see as this occurs. Right now, we’re seeing about every six months, 2% to 4% flash wear across the solid-state array. That’s not much at all.”

Plenty of vendors have worked hard to improve the endurance of flash. Here’s a glimpse of what Walls said IBM did to improve the endurance of its MicroLatency flash modules without sacrificing performance or low latency.

—Collaborated with Micron, which provided the interface to the “inner workings of the flash,” enabling IBM to monitor and control the flash and change read thresholds.

—Set up a characterization lab in Poughkeepsie, New York, to test flash devices and observe how flash blocks behave as engineers tried different error correcting code (ECC) and garbage collection algorithms and other techniques.

—Developed an ECC algorithm that Walls said allows IBM to correct a high bit error rate and read data only once. “That is a significant step forward. It also allows us to stay in FPGA technology, and it is an algorithm that allows us to get extremely good endurance,” he said.

—Developed health binning and heat segregation technology instead of using the symmetric wear-leveling algorithms that Walls said ensure all cells handle about the same amount of writes in typical SSDs.

“When you do that, unfortunately the endurance of your flash is now going to be determined by your weakest cells, because they’re going to get punished the same as all the rest, and you will wear out depending on that,” he said.

Walls compared IBM’s approach to Grand Canyon pack mules assigned loads of 50, 100 or 200 pounds according to their strength, enabling the job to get done with half the number of animals.

“We monitor the health and assess the health of each flash block as they age, and we determine and grade each of the flash blocks. The flash blocks that are the healthiest [are] going to get the hottest data,” Walls said. He said, as flash blocks age and get weaker, they handle colder data.

“That technique alone has given us a 57% improvement in endurance in most typical workloads,” he said.

Walls claimed that IBM reduced write amplification by up to 45% by grouping like heat levels.
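The core idea of health binning and heat segregation can be sketched as a simple allocation problem: grade blocks by remaining health, grade data by how often it is rewritten, and pair them off. The sketch below is a hypothetical simplification, not IBM’s algorithm; real controllers derive health grades from measured error rates and heat from access history.

```python
# Sketch of health binning + heat segregation: hot data goes to healthy blocks,
# cold data goes to weak blocks, so weak blocks stop absorbing heavy rewrites.
# Health and heat are simplified to integer scores here.

def assign_blocks(blocks, writes):
    """Map the hottest data to the healthiest blocks.

    blocks: list of (block_id, health) -- higher health = more P/E headroom
    writes: list of (data_id, heat)    -- higher heat = rewritten more often
    """
    by_health = sorted(blocks, key=lambda b: b[1], reverse=True)
    by_heat = sorted(writes, key=lambda w: w[1], reverse=True)
    # Pair ranked lists: healthiest block absorbs the most-rewritten data.
    return {data_id: block_id
            for (block_id, _), (data_id, _) in zip(by_health, by_heat)}

blocks = [("b0", 95), ("b1", 60), ("b2", 30)]
writes = [("log", 90), ("index", 50), ("archive", 5)]
print(assign_blocks(blocks, writes))
```

Contrast this with symmetric wear leveling, which would spread the "log" rewrites across all three blocks and let the weak "b2" set the pace of wear-out; grouping data of like heat also means garbage collection relocates fewer still-valid pages, which is where the write-amplification savings come from.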

One result of IBM’s efforts is a new FlashSystem Tier 1 Guarantee, which includes “MicroLatency” performance and read/write endurance for up to seven years, as long as the system is under warranty or maintenance.

That brings IBM up to par with other all-flash array vendors. In a June 2014 guide to 15 all-flash arrays, IBM was the only vendor that would not replace flash modules if they wore out before the warranty expired. Dell’s Compellent all-flash model carried a caveat that the SSDs had to be within their “rated life” period. None of the other vendors mentioned restrictions, although the length of their guarantees varied.

February 27, 2015  8:22 AM

Dell Ventures plants a stake in Exablox

Dave Raffo
Exablox, Object storage, Storage

Dell Ventures is the newest investor in object-based storage startup Exablox after leading a $16 million funding round this week.

Dell joined previous Exablox funders DCM Ventures, Norwest Venture Partners and US Venture Partners in the round, bringing the startup’s total funding to $38.5 million.

Exablox OneBlox appliances consist of object storage integrated with a distributed file system, and serve as primary storage or backup.
Exablox CEO Doug Brockett said the funding round does not include any product or marketing partnerships between his company and Dell. But it does show that Dell sees value in Exablox technology.

“Dell certainly understands this market that we sell into extremely well, and Dell understands storage extremely well,” he said. “There is no agreement in place to do anything [with Exablox technology] yet. Over time we can see what happens. We both see the same market opportunities out there.”

Brockett said OneBlox has sold in the United States and Canada since launching in 2013. Exablox will use the funding to expand international sales, including channel recruitment. Brockett said the company has a “rich roadmap” for product upgrades in 2015 and will add engineering staff with the funding, but that expansion “will pale in comparison to what we’re doing on the sales and marketing side.”

Exablox’s upgrade strategy has been to make changes in software that customers install on the same hardware that they bought initially.

“We’re software-defined storage. We don’t ask people to adopt new hardware,” Brockett said.

February 26, 2015  7:24 PM

Dell to ship 2nd wave of XC Series hyper-converged appliances

Carol Sliwa
Hyper-convergence, Nutanix, Storage

Dell plans to ship the second version of its XC Series of hyper-converged appliances next week, just four months after getting the first wave of products out the door in November.

The XC Series appliances bundle Dell’s hardware, Nutanix’s software, and VMware’s ESXi or Microsoft’s Hyper-V hypervisor technology. Version 2.0 uses the 4.1 release of the Nutanix software, which aggregates and manages the clustered server and direct-attached storage resources.

Version 2.0 of the XC Series appliances will be the first to run Dell’s 13th generation PowerEdge server technology with the latest Intel Xeon processor E5-2600 v3 product family. Other differences between the first and second editions include flexible options for the numbers and capacities of solid-state drives (SSDs) and hard-disk drives (HDDs), processor cores and speeds, and DIMM and memory configurations.

With the second wave of XC products, Dell also plans to introduce next week a new 1U model based on Dell’s PowerEdge R630 server technology. The more compact XC630 will support more virtual desktop users in half the rack space of Dell’s original XC720xd at a lower cost, according to Travis Vigil, executive director of product management for Dell Storage.

Dell will also release two new higher density 2U appliances based on its PowerEdge R730xd servers. The XC730xd-12 has a dozen 3.5-inch drive slots and options for two to four 200 GB, 400 GB or 800 GB SSDs and four to eight 4 TB HDDs. The XC730xd-24 has two dozen 2.5-inch drive slots and can handle two to four SSDs and a minimum of four and maximum of 20 HDDs of 1 TB capacity.

“With the XC730xd offering 60% more storage than the predecessor version, we think we’ll probably see more interest in big data type workloads,” said Vigil.

Dell product literature claims the XC730xd-12 can run storage-heavy Microsoft Exchange and SharePoint, data warehouse and big data workloads, and the XC730xd-24 is suitable for performance-intensive Microsoft SQL Server and Oracle OLTP workloads.

The XC630’s spec sheet says the product targets compute- and performance-intensive virtual desktop infrastructure (VDI), test and development, private cloud and virtual server workloads. The 1U XC630-10 has 10 2.5-inch drive slots and can hold two to four SSDs at raw capacities of 200 GB, 400 GB or 800 GB, plus four to eight 1 TB HDDs.

Vigil said there is no limit on the number of systems that can be clustered. He said a typical configuration ranges from three to 10 units, but he noted that Nutanix customers have clustered to upwards of 100 units.

All XC Series appliances are currently available only as hybrid storage configurations, but Vigil said plans call for an all-flash storage option this year. Dell also plans to add support for the open source KVM hypervisor this year, according to Vigil.

List pricing for the 1U XC630 starts at about $32,000, including the appliance, the Nutanix software, two 200 GB SATA SSDs, four 1 TB HDDs and a one-year Dell ProSupport service contract. The starting list price for the 2U XC730xd is about $45,000 with two 200 GB SATA SSDs, four 4 TB HDDs and a one-year Dell support contract, according to a Dell spokesman.

“The official growth rate for these hyper-converged solutions, like we have with the XC Series, is an order of magnitude greater than what we’re seeing with traditional data center hardware spending,” said Vigil. “So, we’re very optimistic about the future here. We’re happy with the demand that we’ve seen so far.”

Dell also sells an appliance for VMware’s EVO:RAIL, which began shipping last fall. Dell’s software-defined storage offerings also include reference architectures for software from vendors such as Microsoft, Nexenta and Red Hat with its servers and storage.

February 26, 2015  3:26 PM

Avago buys Emulex for $606M to expand storage portfolio

Dave Raffo
Emulex, LSI, Storage, Storage Networking

Networking and wireless chip maker Avago Technologies is pushing deeper into enterprise storage with its planned acquisition of storage networking vendor Emulex.

Avago said Wednesday evening that it has agreed to acquire Emulex for $606 million – considerably less than Broadcom offered for Emulex in a hostile takeover attempt in 2009.

Like its main rival QLogic, Emulex sells Fibre Channel host bus adapters and Ethernet storage connectivity products through major storage vendors. The deal comes 14 months after Avago splashed $6.6 billion on storage component firm LSI. Avago then sold LSI’s solid-state drive controller and PCI Express (PCIe) flash card business to Seagate for $540 million in May 2014, but retained its SAS controller and PCIe products.

“Emulex is complementary to Avago’s enterprise storage businesses, and aligns very well with the Avago business model,” Avago CEO Hock Tan said of the deal.

Tan said Emulex’s storage OEM partners are among the same vendors that sell Avago’s SAS, RAID and PCIe switching products. EMC and Hewlett-Packard are Emulex’s largest storage partners.

“We expect this transaction to allow us to offer one of the broadest suites of silicon and software storage solutions to the enterprise and data center markets,” Tan added.

Tan said he expects the deal to close by the end of June. Emulex will operate as a business unit inside Avago’s enterprise storage segment.

Tan projected that Emulex would add about $250 million to $300 million in annual revenue over the first year, which is below previous expectations. Emulex reported $111 million in revenue last quarter, and analysts expected it to generate about $400 million in 2015 as a standalone company.

Still, Tan said he sees strong interest in Fibre Channel and Fibre Channel over Ethernet revenue for storage. “We see that this Fibre Channel business is really a very sustainable, stable business,” he said. “It’s a kind of business where we see a lot of barriers to entry, obviously. And we see a very unique technology, which is very hard to replicate because of all the criteria that fits our business model. So it’s a logical and strategic next step for us to add Fibre Channel and Fibre Channel over Ethernet into our suite of component solutions and software.”

Emulex fought off a takeover attempt by Broadcom in 2009, enacting a poison pill to keep the networking company from buying a controlling interest. Emulex said Broadcom’s offers undervalued its shares, but that offer looks good now. Broadcom’s opening bid was worth $764 million, and it raised the offer to $925 million before walking away.

Emulex CEO Jeff Benck, who was not with Emulex when Broadcom made its move, called the Avago deal “a great opportunity for Emulex” in a press release.

Avago will pay $8 per share for Emulex. The HBA vendor’s stock rose from $6.36 at the close of Wednesday to $8.03 at today’s opening. Still, at least 10 law firms have already said they will investigate whether the Emulex board got the best price for the company.

February 25, 2015  6:47 AM

Pivot3 pockets $45M to ride hyper-convergence wave

Dave Raffo

Pivot3, the hyper-convergence vendor that concentrates on the surveillance and virtual desktop infrastructure (VDI) markets, picked up $45 million in funding this week to expand sales and marketing of its vStac appliances.

Pivot3 CEO Ron Nash said the company will rapidly expand its workforce, which stands at 92 today. He said the goal is to hire 17 people in each of the next two quarters, mostly in sales and marketing with a few developers. He said the company is looking to double its growth rate in the security market this year.

Nash said Pivot3 has more than 1,600 customer systems installed. “The first product most people buy from us is surveillance,” he said. “We started going into broader applications because after we installed the first system for video surveillance, they ask us, ‘Can we run Microsoft Exchange or something else on it?’ We say, ‘Yeah, sure.’”

Pivot3 began selling what it called “serverless computing” appliances in 2008, moving the application server into the storage node along with Xen hypervisors. That was hyper-convergence, although no one used that term at the time. The original use case was storage for surveillance video, and then the vendor launched VDI appliances in 2011.

Pivot3 originally sold software only on appliances, but recently began selling its software separately for customers who want to install it on blades for VDI.

The hyper-converged market is taking off, with VMware making a splash with its Virtual SAN (VSAN) software and its large partners such as Dell, Hewlett-Packard, EMC and NetApp who sell VSAN through VMware’s EVO:RAIL program. Dedicated hyper-convergence startups Nutanix, SimpliVity, Scale Computing, Maxta Software and Nimboxx make it a crowded market.

“Hyper-converged infrastructure is a nice topic of conversation among people, there’s a lot of activity,” Nash said. “There’s a huge wave.”

Nash said the entrance of VMware and the other large vendors could help Pivot3 by bringing attention to the market. Now it’s up to Pivot3’s expanded sales team to convince people that Pivot3 does hyper-convergence better than the others.

“The whole purpose of hyper-convergence is to give you better capabilities and a better price,” he said.

New investor Argonaut Private Equity led the round, which included S3 Ventures, InterWest Partners, Mesirow Financial Private Equity and Wilson Sonsini Goodrich & Rosati. Pivot3 received $12 million in funding last August and has around $145 million in total funding.

February 24, 2015  8:42 PM

SwiftStack brings in new CEO

Sonia Lelii
Open source storage, Storage, SwiftStack

Object storage vendor SwiftStack has a new CEO. The company today announced Don Jaworski is taking the helm as co-founder Joe Arnold relinquishes the role but remains president and chief product officer.

Jaworski formerly held executive jobs at NetApp, Brocade, Engine Yard, Blue Coat, Nokia Internet Communications, Ipsilon Networks and Sun Microsystems, which was acquired by Oracle. SwiftStack sells OpenStack-based object storage built on the Swift platform, with a software-defined storage controller. It recently raised $16 million in a series B funding round, bringing its total investment to $23.6 million.

“Lots of people asked me, ‘Why join SwiftStack?’” Jaworski said. “For me, it’s pretty straightforward. SwiftStack is a unique product. We have a great set of investors and a great team that is focused and has a unique talent.”

Jaworski said the company has “50 or greater” customers and a growing number of developers building applications based on its API.

“We will focus on engineering and sales,” he said. “Finally, we will focus our investment in leveraging the OpenStack Swift API. We have a large number of developers building applications against the API.”

Jaworski said SwiftStack’s customers include companies experiencing rapid data growth that want to “leverage deeper content,” particularly with Web applications aimed at mobile adoption. Customers also are focused on running IT as a service.

“They want to know how they can be more agile,” Jaworski said, “more cost-effective, and how to move more quickly into being IT as a service.”

February 24, 2015  4:46 PM

Druva seeks to build out cloud privacy, security

Dave Raffo

Endpoint backup specialist Druva has come up with a data protection privacy framework for customers who use its inSync software in the cloud.

Druva CEO Jaspreet Singh said 70 percent of new customers choose a cloud option for deployment. With customers around the world using Druva’s managed clouds, the vendor is taking steps to address geo-specific privacy concerns. Druva-supported regions include Germany, Japan and Australia as well as the United States, and those areas all have different security and regulation rules.

“Our business model has moved to the cloud,” Singh said. “A lot of global customers are concerned about not just security but also privacy. When end users contribute corporate data in a cloud used by multiple people, that requires a lot of understanding about who has access to what.”

Druva’s privacy framework includes support for 11 global regions with policies that ensure data meets local requirements. Druva also stores unique block data separated from metadata with a unique envelope key encryption model to prevent third-party access to corporate data. InSync allows organizations to identify officers who may handle sensitive material, and prevents that data from being visible to others in the organization. It also tracks all data access and file sharing with audit logs. The software also sets adaptive administrator roles, allowing a defined legal administrator to override privacy controls to enforce data governance.
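The envelope key model Druva describes is a common cloud pattern: each object gets its own data key, and that data key is stored only after being wrapped (encrypted) with a master key the storage provider never holds in the clear. The sketch below is a toy illustration of the pattern, not Druva’s implementation; the XOR/SHA-256 “cipher” stands in for real AES and should never be used for actual data.

```python
# Toy illustration of envelope encryption: a per-object data key encrypts the
# data; a master key (held by the customer or a KMS) encrypts the data key.
# keystream_xor is a stand-in for a real cipher such as AES -- do not reuse it.
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Symmetric toy cipher: XOR data with a SHA-256 counter keystream."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

master_key = os.urandom(32)   # held by the customer / key management service
data_key = os.urandom(32)     # unique per stored object

ciphertext = keystream_xor(data_key, b"corporate document")
wrapped_key = keystream_xor(master_key, data_key)  # stored beside the ciphertext

# Reading the object requires unwrapping the data key with the master key first,
# so the storage operator alone cannot decrypt customer data.
recovered_key = keystream_xor(master_key, wrapped_key)
plaintext = keystream_xor(recovered_key, ciphertext)
```

Because only the wrapped key is stored with the data, revoking or rotating the master key cuts off third-party access without re-encrypting every object.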

“In the U.S., verticals such as healthcare and federal contractors have different notions of what privacy means,” Singh said. “The European Union has a different take on privacy. It’s all centered around who controls and processes information.”

Another step for privacy and security is that Druva maintains customer data in its private cloud rather than in a public cloud. “We struggle with the perception of the public cloud,” Singh said.

February 19, 2015  12:12 PM

Cloud provider Axcient raises another $25 million in funding

Sonia Lelii

Cloud-based startup Axcient raised $25 million in a series E funding round this week, bringing the total amount it has raised to $113.5 million. The company plans to hire more engineers, build out sales and marketing in Europe and launch a program that pays partners upfront when they resell Axcient’s cloud service.

“We are giving them two years of margins upfront,” Axcient CEO Justin Moore said. “In general, in a software-as-a-service model, it takes 12 to 18 months to break even on the amount of spending on sales and marketing to acquire customers. VARs run a cash business, not an equity business. This is a way to incentivize VARs.”

Axcient has a backup and disaster recovery-as-a-service cloud platform that protects data, applications and IT infrastructure from downtime and data loss. Its flagship Business Recovery Cloud backs up and mirrors a customer’s entire business in the cloud, including emails, files, applications, operating systems and the network. The new reseller compensation program is called SaaS:FLO.

Moore said the latest $25 million will be used to hire at least 20 more engineers, bringing the number up to 80, and add 30 to 40 percent to the sales and marketing head count. He also plans to open a data center in Europe in the third quarter. Axcient, which has 170 employees, currently claims more than 4,000 customers.

Last year, the company had 50 percent growth in new customers. It acquired Delaware, NJ-based DirectRestore for an undisclosed amount, adding more granular, faster recovery technology for files, objects, databases and applications.

Moore said Axcient also put out more than 30 new releases to its Business Recovery Cloud in 2014, improving performance for recovery point objectives and system failovers.

Axcient launched in 2008 with a $6 million series A funding round supplied by Peninsula Ventures, Allegis Capital and Peninsula Capital. Its largest round was a $50 million series D round in 2013. The latest investment round was led by Industry Ventures, with investors Allegis Capital, Peninsula Ventures, Scale Venture Partners and Thomvest Ventures participating. Silver Lake Partners provided new debt financing.

February 17, 2015  8:24 AM

IBM’s spectrum of new storage brands

Randy Kerns
IBM Storage, Storage

IBM has rebranded storage products and introduced XIV as a software-only offering under a new overarching identity – IBM Spectrum. The decoder ring for these products is:

- IBM Spectrum Accelerate: software based on XIV
- IBM Spectrum Virtualize: SAN Volume Controller software
- IBM Spectrum Scale: Elastic Storage, formerly GPFS
- IBM Spectrum Control: Virtual Storage Center
- IBM Spectrum Protect: Tivoli Storage Manager
- IBM Spectrum Archive: LTFS-based tape file software

This name change will take a long time to become second nature for many in the storage industry. So, why change the names? Changes of product names occur more often than you might expect, and with greater frequency in some companies. There are common reasons for change. The first one is to reposition or re-launch a product or family. This can be effective in gaining attention and establishing a new direction for a product. The other is due to a change in the leadership of marketing. Some marketing leaders have a desire to establish a new order for their tenure and changing product names is often part of the makeover.

But, this rebranding by IBM is different. IBM has always had a collection of software and hardware storage products developed independently, either internally or through acquired companies. The products typically served different purposes, although there have been overlaps. IBM famously had separate development and sales organizations for the different product areas, which made the products seem uncoordinated to customers.

This re-branding is about moving to a coordinated portfolio of storage products. This change could be an important inflection point for IBM in storage. Managing a portfolio of products that can be incorporated into a set of integrated solutions for customers can lead to greater efficiencies in delivery.

Currently the portfolio does not include all of the high value offerings from IBM. Many customers have put their trust in IBM and purchased storage products that are integral to their business operations. Those customers expect IBM to continue to provide solutions to them and movement to an inclusive portfolio approach will be watched closely.

I think re-training to use new names will be difficult and frustrating. It will take time and require periodic reference to the decoder ring. But it could be worth it if this really is a move to a portfolio approach. The shift in focus to products applied as a solution, rather than independent software or hardware products, can be positive for IBM. It is a major undertaking requiring changes at every level and in every message. It is a challenge.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

February 12, 2015  9:57 AM

NetApp blames sales slump on budget constraints

Dave Raffo
NetApp, Storage

A rocky quarter with a disappointing forecast for this quarter left NetApp CEO Tom Georgens defensive about his company’s product portfolio and strategy.

NetApp Wednesday reported $1.55 billion in revenue, down four percent from last year and well below its previous forecast. Its forecast for this quarter of between $1.55 billion and $1.65 billion was also less than the analysts’ consensus expectation of $1.69 billion. NetApp’s revenues have decreased year-over-year for five straight quarters. It would have to hit the high end of its guidance this quarter to match its revenue for the same quarter last year.

Georgens opened the earnings call with analysts by saying, “We are clearly disappointed.” Later, he added, “probably a little bit more than disappointed. Probably I’m downright angry … we need to do better than that.” He vowed to make the necessary investments to fix NetApp’s sales execution problems, which he insists are problems with the business model and not product-related.

Georgens blamed the results on large companies putting the brakes on their spending. He said the start of a new year (NetApp’s quarter included January) prompted organizations to re-think their budgets, and many large deals did not get completed. Georgens emphasized that was for budget reasons, not because competitors won those deals instead. Perhaps the worst part of that is “deals that were pushed out may not return in the near term.”

Despite Georgens’ insistence that NetApp’s products are good enough to win deals, analysts on the earnings call questioned whether the vendor’s portfolio is broad enough and raised issues with its flash and cloud storage strategies.

An analyst asked if NetApp’s slowness to bring its all-flash FlashRay appliance to market is limiting its flash adoption. FlashRay is currently in limited release with a one-controller model. Georgens said the vendor is selling plenty of flash in its EF Series and FAS arrays.

“I think FlashRay is going to serve a segment of the market,” he said. “But that’s not the totality of the flash portfolio. The overall majority of our flash is in the other products. I think from a performance perspective, certainly all-flash FAS has a feature set far beyond any other flash array, and compelling performance and compelling efficiency. I think you’ll see a lot more of that product in the near future while we continue to evolve and develop FlashRay to serve the segments that it is targeted at.”

Another analyst asked if customers may be holding off on large deals because they are evaluating NetApp’s Cloud Ontap – a software-only version of Ontap designed for public clouds.

“I don’t think Cloud Ontap is the substitute necessarily for the types of systems that we are selling with Clustered Ontap on-premise,” he said, referring to NetApp’s main operating system. “I think Cloud Ontap is the completion of a story … but I don’t think it’s changing the dynamic of ‘do I buy an on-premise system or do I buy that?’ I think Cloud Ontap is basically symbolic, and emblematic of NetApp’s end-to-end seamless cloud strategy and proving that it’s real.”

Georgens was also asked if NetApp needs to acquire companies to broaden its product portfolio. He said he would continue to pursue “tuck-in” acquisitions such as the SteelStore cloud backup appliance move it made in October, but larger deals are harder to predict and plan for. Georgens defended NetApp’s acquisition record, saying the E Series and OnCommand Insight software are selling well.

In recent quarters, NetApp revenue took a hit from a drop in OEM sales mainly because IBM ended its partnership to sell E Series storage. But last quarter, NetApp’s branded revenue also fell two percent year-over-year. NetApp’s declines compare to rival EMC’s three percent growth in storage revenue, which also came in below expectations.

Georgens said he noticed normal spending patterns in late 2014 – the fourth calendar quarter is the most active for storage sales – but things changed in January.

“Certainly we saw bullishness going into the end of the year and then still we had expectations of a relatively strong normal quarter end in January, and a fair amount of that business didn’t come through the way we would have thought,” he said.

“Every deal has a story, but when you go through 40 deals and only one or two are competitive and the rest of them are deal-specific, I don’t think the competition is really the issue. And on top of that, the feedback from the field was optimistic the whole quarter.”

The poor numbers brought a series of downgrades and price target reductions from financial analysts.

“NetApp’s tone re-confirms lingering investor concerns on competitive issues,” analyst Alex Kurtz of Sterne Agee wrote in a note to clients today. “We believe the core issue for NetApp is ongoing challenges in the breadth of its product portfolio that remains more limited … relative to EMC.”
