The prospect of flash drives wearing out is a non-issue, according to NetApp.
Mark Welke, NetApp’s senior director of product marketing, said the company has not had a single solid-state drive (SSD) wear out since it began selling flash with its storage systems in late 2008.
“We’re typically seeing an average of about a 10% lifetime wear for all of the SSDs that we have out there today,” said Welke, during an interview this week with SearchStorage.com. “So, I think that a lot of the hype that was created out there [about SSDs wearing out], it just doesn’t exist.”
The SSD wear-out factor stems from the process of writing data to a NAND flash chip. All of the bits in a flash block must be erased before the write can take place. That program/erase process eventually breaks down the oxide layer that traps the electrons at the floating-gate transistors. The deterioration can distort the manufacturer-set threshold value at which a charge is determined to be a zero or a one and result in errors.
That’s a condensed version of the way experts described the inner workings to me in 2009, when flash was coming into vogue. They said the deterioration was less of a problem with single-level cell (SLC) flash than with multilevel cell (MLC) flash. The wear-out figures they cited were 100,000 program/erase cycles for SLC, 30,000 for enterprise-grade MLC (eMLC), and 10,000 or possibly as low as 3,000 for MLC.
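Those cycle counts translate into surprisingly long service lives under realistic write loads, which helps explain the roughly 10% average wear NetApp reports. Here is a back-of-the-envelope sketch of the arithmetic, using the endurance figures above; the drive size, daily write volume and write amplification factor are illustrative assumptions, not vendor specifications:

```python
# Rough SSD lifetime estimate from program/erase (P/E) endurance.
# Assumes perfect wear leveling spreads writes evenly across all cells.

def ssd_lifetime_years(capacity_gb, pe_cycles, writes_gb_per_day,
                       write_amplification=2.0):
    """Years until the drive's rated P/E budget is exhausted."""
    total_write_budget_gb = capacity_gb * pe_cycles
    physical_writes_per_day = writes_gb_per_day * write_amplification
    return total_write_budget_gb / (physical_writes_per_day * 365)

# A hypothetical 400 GB drive absorbing 100 GB of host writes a day,
# with a write amplification factor of 2:
for name, cycles in [("SLC", 100_000), ("eMLC", 30_000), ("MLC", 3_000)]:
    years = ssd_lifetime_years(400, cycles, 100)
    print(f"{name}: ~{years:,.0f} years")
```

Even at the pessimistic 3,000-cycle figure, the budget works out to more than a decade at that write rate, so a fleet averaging 10% lifetime wear is unsurprising.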
Welke said NetApp started with SLC SSDs in its FAS systems and followed with varying grades of less expensive multilevel cell (MLC) drives, starting with its EF540 about two years ago. The company doesn’t make its own SSDs. NetApp purchases them from a variety of manufacturers, which have included SanDisk and Toshiba.
NetApp is able to monitor the flash environments of customers through its auto-support capabilities and database. Welke said SSD failures have occasionally happened, but they were not the result of wear-out. They were typically due to SSD firmware issues, which he said have largely been fixed.
“We’re less than 30 seconds downtime per year. There aren’t many failures,” Welke said.
So, is wear-out just a bunch of hype?
Not entirely. SSD manufacturers worked hard on improvements to product architectures, algorithms and controllers to boost the endurance and reliability of MLC and other flash technologies, and to help bring down the high cost of solid-state storage compared with traditional disk-based systems.
George Crump, president and founder of Storage Switzerland, likened the prospect of SSD wear-out to the impending doom predicted in the year 2000 with computers that stored year values as two digits.
“I made a comment once to a data center guy and said ‘I guess that was no big deal.’ And he looked very sternly at me and said, ‘Well, that’s because a lot of people worked a lot of hours to make sure it wasn’t a big deal,’ ” recalled Crump. “It could have been a very big deal.”
Crump said that flash controller manufacturers have not only gotten very good at handling errors and generally advancing the technology, but flash use cases have also changed. In the early days, he said, a small amount of highly expensive flash and memory was often used for caching, with data constantly moving in and out.
“As that cache area got bigger and bigger, and eventually it led to all-flash, the amount of times that I had to turn data over went down significantly, and therefore, the cause for wear-out minimized,” said Crump.
The fourth quarter of 2014 was tough for solid-state storage vendor SanDisk. Still, the vendor said its enterprise solid-state drive (SSD) sales increased and it remains on track for $1 billion in enterprise SSD revenue in 2015.
SanDisk warned earlier this month that it would miss its revenue target for the quarter. On Wednesday it said its revenue was $1.735 billion, about $100 million below Wall Street expectations. The shortfall came in its retail and client SSD products. SanDisk reported strong enterprise sales, although it is still early days for its Fusion-io products and its ULLtraDIMM prospects are dimmed by a legal issue.
CEO Sanjay Mehrotra said SanDisk’s enterprise SSD revenue grew last quarter, and doubled for the full year over 2013. He said SAS SSDs still led the way, with a big bump in CloudSpeed SATA SSDs as well.
He added that SanDisk has completed its integration of PCIe flash pioneer Fusion-io and expects an increase in sales of those products. However, revenue is likely to remain lower in SanDisk’s OEM model than it was for Fusion-io before the acquisition.
SanDisk forecasts total revenue of $6.5 billion to $6.8 billion for 2015, but client SSD sales are expected to decline while enterprise revenue increases.
“Enterprise SSDs will certainly achieve $1 billion in revenue for us in 2015, and we’ll continue to grow in that space in the future,” Mehrotra said. “We have previously articulated our goal to be a number one market share leader in enterprise SSDs, and we are well on our way.”
He said SanDisk has the “broadest [flash] portfolio in the business,” with its SSD, PCIe and software products.
SanDisk’s fledgling ULLtraDIMM products are stalled now due to a lawsuit involving its memory channel storage chip supplier, Diablo Technologies. Netlist, charging that Diablo infringed its patents to develop its chips, has gained a court injunction to stop Diablo from shipping the chips used in SanDisk ULLtraDIMM products. Diablo has appealed the U.S. District Court ruling in federal court, and is hoping to reach a settlement with Netlist before the case goes to trial.
Mehrotra said a settlement must be reached quickly if the ULLtraDIMM platform is to achieve significant sales this year.
“Our ULLtraDIMM expectations in 2015 are fairly small in terms of revenue, as it’s a very new product category and we’re continuing to engage with the customer base,” he said. “But clearly the ULLtraDIMM product category does get impacted by the injunction that we currently have. And that injunction would have to be lifted soon. The legal matter would have to be dissolved soon in order for ULLtraDIMM momentum to begin again, otherwise the sales of ULLtraDIMM would get impacted.”
One of the selling points of ViPR that EMC has pushed since launch was that it would work with third-party and commodity hardware. But ViPR still does a lot more with EMC storage than with any other platform. The biggest upgrades in ViPR Controller 2.2 are flash support for EMC’s XtremIO and VMAX, and data protection-as-a-service support for EMC’s Data Domain and VPLEX MetroPoint products.
The ViPR upgrades allow customers to automate storage provisioning on the supported flash arrays and backup products, and add flash monitoring and alerting capabilities.
EMC also upgraded ViPR SRM (storage resource management). Version 3.6 includes better integration with EMC Data Protection Advisor to automate reporting on compliance with backup SLAs for EMC Avamar and Symantec backup apps.
“You can focus on the apps instead of the ops,” said Kate Canestrari, senior director of product marketing in EMC’s Emerging Technologies Division. “We reduce the time admins spend provisioning and managing their environment, whether it’s EMC arrays, third-party arrays or commodity hardware.”
While ViPR has some level of support for NetApp, Hitachi Data Systems and IBM XIV arrays, it remains slanted heavily toward EMC platforms. Of course, that is no surprise. EMC didn’t develop ViPR to move customers to other vendors’ arrays.
EMC’s ScaleIO 1.31 now allows administrators to deploy the ScaleIO Data Client (SDC) as part of VMware’s ESX 5.5 hypervisor kernel. The new version also supports Microsoft Hyper-V, KVM and Xen hypervisors, but the integration is not yet as deep as it is with VMware.
“Today we’re changing the game for VMware deployments with kernel integration,” Sam Grocott, SVP of EMC’s Emerging Technologies Division, wrote in a blog explaining the upgrades.
Brocade executives have talked a lot about flash as a disruptive technology that’s driving 16 Gbps Fibre Channel (FC) storage networking. As the year draws to a close, rival Cisco said most customers now buy 16 Gbps FC switches when they deploy new enterprise-class flash storage.
Rajeev Bhardwaj, vice president of product management for data center switching and storage products at Cisco, said flash works fine in existing FC environments because the technology was designed for high performance and low latency. But he recommended a minimum of 8 Gbps FC, and strong consideration of 16 Gbps FC, for flash-based storage because it provides the most flexibility and “investment protection.”
“If customers are investing for the future, start with 16 Gig,” he said. “If the servers are 8 Gig, 8 Gig is still going to work with 16 Gig switches. Then you don’t have to change your infrastructure.”
Bhardwaj said some customers deploy flash with iSCSI storage and Ethernet-based Nexus switches, but they tend to be midmarket companies. He said most enterprise customers of high-end flash storage go with Fibre Channel.
“Fibre Channel is proven. It’s robust. It’s mature,” Bhardwaj said.
Cisco’s support for 16 Gbps FC lagged Brocade’s rollout by nearly a year, although Bhardwaj claimed the company’s 16 Gbps FC technology “leapfrogged” the competition with higher director-class performance, true high availability, and the ability to upgrade modular systems with new line cards rather than a disruptive chassis replacement.
Meanwhile, Brocade has promoted its flash efforts. The company rolled out a Solid State Ready program for flash and hybrid array vendors to test their systems with the company’s Fibre Channel and Ethernet switches. Jack Rondoni, vice president of storage networking at Brocade, said SSDs are causing people to think differently about their storage architectures.
Brocade has already indicated plans to have 32 Gbps FC products, also known as Gen 6 FC, by the end of next year. Cisco’s 32 Gbps FC technology won’t begin to emerge until the 2016 timeframe, according to Bhardwaj.
Bhardwaj said he thinks demand for 32 Gbps FC will depend on application workloads. Customers tend to deploy the higher bandwidth gear once the new technology is roughly similar in price to the older generation, he added.
“There are three legs in this race. There’s what goes on the server, the host bus adapter. There’s the switching infrastructure and then, of course, the storage arrays. Usually what happens is the switch shows up first, with the host bus adapters kind of the same time. But, the storage arrays are farther behind,” said Bhardwaj. “From a customer perspective, they have to live with this mismatch in speeds. Everything is not ready. So, I think this is a journey that takes time.”
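Bhardwaj’s “three legs” point reduces to a simple rule: each hop in the path auto-negotiates, so the slowest element caps what the whole path can deliver. A minimal sketch of that idea (a simplification; the function name and port speeds are hypothetical examples, not Cisco tooling):

```python
# Effective end-to-end Fibre Channel rate is bounded by the slowest leg:
# host bus adapter (HBA), switch port, or storage array port.

def effective_path_gbps(hba_gbps, switch_gbps, array_gbps):
    """The slowest of the three legs caps the path speed."""
    return min(hba_gbps, switch_gbps, array_gbps)

# 8 Gbps servers still work behind a 16 Gbps switch fabric...
print(effective_path_gbps(8, 16, 8))    # path runs at 8 Gbps
# ...and the switch needs no forklift upgrade when the other legs catch up.
print(effective_path_gbps(16, 16, 16))  # path runs at 16 Gbps
```

This is why buying 16 Gbps switches ahead of the HBAs and arrays counts as “investment protection”: the switch leg stops being the bottleneck the moment the other two legs arrive.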
Backup appliance revenue increased for the second straight quarter, with Symantec making up more than one-third of the gains, according to the latest IDC numbers.
Backup appliance revenue increased 11.2 percent year-over-year to $789.2 million in the third quarter of 2014. That follows an 8.5 percent year-over-year increase in the second quarter of 2014 and a 2.5 percent decline in the first quarter.
The market that IDC classifies as purpose-built backup appliances includes disk targets that require separate backup software and appliances with backup software integrated. EMC’s Data Domain is an example of the first category with Symantec’s NetBackup appliances an example of the integrated systems.
NetBackup appliances made solid gains last quarter, as Symantec revenue grew 37.9 percent year over year. Symantec remains a distant second behind EMC, but its market share ticked up from 10.4 percent last year to 12.9 percent. EMC revenue grew 7.2 percent but it lost share, from 64.8 percent to 62.5 percent. Symantec revenue grew from $74.1 million in the third quarter of 2013 to $102.2 million in the same quarter this year. That $28.1 million gain made up a good chunk of the $79.7 million increase in the overall market.
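The IDC figures quoted above hang together arithmetically. A quick sanity check, using only the revenue numbers reported in the article (revenue in $ millions):

```python
# Q3 2014 backup appliance market: $789.2M, up 11.2% year-over-year.
# Back out the Q3 2013 market size from the growth rate.
market_2014 = 789.2
market_2013 = market_2014 / 1.112

# Symantec's reported quarterly revenue, 2013 vs. 2014.
symantec_2013, symantec_2014 = 74.1, 102.2

growth = (symantec_2014 - symantec_2013) / symantec_2013
share_2014 = symantec_2014 / market_2014
share_of_gain = (symantec_2014 - symantec_2013) / (market_2014 - market_2013)

print(f"Symantec growth: {growth:.1%}")      # 37.9%
print(f"Symantec share:  {share_2014:.1%}")  # 12.9%
print(f"Share of the market's gain: {share_of_gain:.0%}")
```

The implied 2013 market of roughly $709.7 million also matches the article’s $79.7 million overall increase, and Symantec’s slice of that gain comes out a bit over one-third, consistent with IDC’s headline.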
EMC and Symantec were the only vendors with more than 10 percent market share. IBM remained third with $50.6 million and dropped 14.7 percent year-over-year. No. 4 Hewlett-Packard took a 17 percent hit to $26.2 million. Quantum increased 28.9 percent to $20.5 million and grew revenue share from 2.2 percent to 2.6 percent, passing Barracuda Networks to move into fifth place.
The rest of the market – “others” in IDC’s list – grew 40.6 percent to $86.4 million and 12.2 percent market share.
IDC said capacity shipped hit 687,474 TB, up 81.5 percent from last year. Open systems revenue increased 12.9 percent while mainframe revenue dropped 1.1 percent.
Tintri, which has earned a good reputation for providing storage for VMware virtual machines, this week made its Microsoft Hyper-V support generally available on its VMstore systems. That paves the way for multiple-hypervisor users on VMstore.
Tintri also supports Red Hat Enterprise Virtualization (RHEV) hypervisors, and Tintri customers can use VMstore with different hypervisors for production data on the same box. That’s still unusual, however. Saradhi Sreegiriraju, Tintri vice president of product management, said Tintri customers overwhelmingly use VMware in production but like to kick the tires with Hyper-V and RHEV, particularly in test/dev.
“We have customers who want to explore other hypervisors,” he said. “Some like to use Hyper-V for non-production workloads and VMware for production – that’s what they’ve been using and they don’t want to upset that applecart.”
Sreegiriraju said customers who used Hyper-V in beta had the same use cases typical to Tintri’s VMware customers – a lot of VDI, for instance.
He also said the Hyper-V beta program brought Tintri into many shops that weren’t customers. “We’ve had a lot of prospects who want to deploy Hyper-V or who had asked us to come back when we have Hyper-V support,” he said.
Hyper-V support could provide a nice benefit for Tintri after VMware makes Virtual Volumes (VVOLs) available. VVOLs will enable traditional storage systems to natively store virtual machine disks (VMDKs), which Tintri has done from the start. But if VVOLs help competitors catch up with Tintri on VMware storage, Tintri can still expand into Hyper-V’s installed base.
Sreegiriraju said VVOLs won’t change Tintri’s value proposition because many legacy vendors who adopt it still have storage systems based on LUNs and volume management. Tintri will also support VVOLs. “VVOLs are nothing more than APIs that storage systems need to implement,” he said. “It doesn’t fundamentally change anything for those systems. We believe there will be a lot of challenges adopting old storage systems that worked on LUNs and volumes to now work with VVOLs.
“And VVOLs will be limited to VMware, and we now work on Hyper-V and Red Hat too.”
DataGravity picked up $50 million in funding today, two months after coming out of stealth with systems built for storage admins and data scientists.
The vendor – founded by EqualLogic veterans Paula Long and John Joseph – now has amassed $92 million over three funding rounds.
CEO Long said DataGravity will use the funding to market the Discovery Series arrays it began shipping in October, as well as to build out customer support. Discovery Series arrays combine hybrid flash storage with data analytics, e-discovery and data protection features.
DataGravity has a few more than 90 employees now, and Long said the current funding should take the company to around 300 before the next funding round.
On the product development front, she said DataGravity has a “big and rich roadmap that will build on four pillars of data-aware storage – storage visualization, data governance, data privacy and data protection.”
DataGravity is still known inside the storage world largely as the old EqualLogic crew, so the next logical step would be to make a name for itself based on its Discovery Series products. Dell acquired iSCSI SAN pioneer EqualLogic for $1.4 billion in 2008.
“EqualLogic’s focus was on storage automation so that storage could manage itself,” Long said. “DataGravity believes that too, but also in multi-protocol [iSCSI and file] storage and that you should know what’s in your data and benefit from it.
“We joke that at EqualLogic we used to make an A-plus storage admin. At DataGravity we tell that person to make room for a data scientist and data security person too.”
Accel Partners led the funding round, with previous investors Andreessen Horowitz and General Catalyst Partners contributing. Accel’s Ping Li joins the DataGravity board. He also serves on the boards of Cloudera, Code42, Nimble Storage and Primary Data.
External storage revenue increased less than one percent year-over-year last quarter, with none of the major vendors growing more than 3.5 percent. That’s better than the two previous quarters of year-over-year declines, but hardly suggests a recovery.
The networked storage revenue of $5.8 billion ticked up from $5.75 billion in the third quarter of 2013. EMC held its lead with $1.82 billion, up 3.5 percent from $1.71 billion the previous year, and its overall market share increased from 30.6 percent to 31.4 percent. No. 2 NetApp increased revenue 0.3 percent for 12.9 percent market share, No. 4 Hewlett-Packard (HP) went from 9.5 percent share to 9.7 percent share on 2.7 percent revenue growth, and No. 6 Dell grew revenue 6.2 percent and increased share from 7.2 percent to 7.3 percent.
No. 3 IBM and No. 5 Hitachi Data Systems were the big losers. IBM’s revenue of $591 million was a 9.9 percent drop from last year, and its market share fell from 11.1 percent to 10.2 percent. HDS revenue dropped 2.7 percent to $432 million, and its market share went from 8.3 percent to 7.4 percent.
The losses of IBM and HDS were picked up mostly by smaller vendors. The “others” category grew 6.2 percent to $1.23 billion, edging up from 20.1 percent to 21.2 percent market share.
External disk storage growth lagged behind overall disk storage, which includes server-based and direct-attached storage. Total disk storage grew 5.1 percent to $8.75 billion. EMC also led overall disk storage revenue, followed by HP, Dell, IBM and NetApp. The overall disk storage market bounced back from the second quarter, when revenue fell from the prior year.
IDC pointed out that server-based storage and smaller external arrays outperformed the market. Server-based storage revenue increased 10 percent, and sub-$100,000 external array revenue grew more than six percent. IDC this quarter began tracking original design manufacturer (ODM) storage sold directly to hyperscale data centers. That ODM storage revenue grew 22 percent, accounted for 43 percent of storage capacity shipped and drove much of the growth in the overall storage market.
IBM and HDS both rely largely on storage for large enterprises, which apparently dropped off in favor of server-based, entry level and ODM storage.
All storage systems have a controller, which is a device with a processor that sends instructions to the disks. Storage controllers differ from vendor to vendor, but generally fit into three types — custom designed, purpose-built, and commodity server-based.
Each type of implementation has advantages and drawbacks. Vendors will highlight the characteristics of their own systems, so it is up to customers to evaluate which system best fits their needs.
These are the key characteristics for each type of storage controller:
Custom designed storage controllers have hardware that is specific for that storage system. Custom ASICs (Application Specific Integrated Circuits) or FPGAs (Field Programmable Gate Arrays) are common in custom controllers. They might also have custom logic or use standard components such as Intel processors. Storage software exploits the custom hardware.
Performance and reliability are the two major advantages of a custom controller design. Performance acceleration comes from using custom hardware for data movement, RAID protection, compression, encryption, or other processing intensive operations. Reliability improvements come from built-in error checking and a reduction in number of components required.
The disadvantages are that a custom design usually costs more to implement, and it takes the vendor longer to develop a new or updated storage controller.
A purpose-built storage controller uses commonly available elements such as processors and adapter boards integrated into a package. The storage software has an understanding of the specific hardware in a configuration.
The advantages of purpose-built storage controllers include:
• Serviceability is improved because of the ability to non-disruptively replace components.
• Scaling can be done non-disruptively by adding adapter cards.
• Reliability is improved with testing and control for components used in the controller.
• Technology is advanced by leveraging other companies’ research and development for the components used.
The disadvantage of purpose-built controllers is that generational changes in common hardware, such as processor technology, require engineering changes and more testing of the newly integrated controller.
Some storage controllers use standard servers with the storage software as an application that runs on the server. The server can be a brand name or a white box server.
The main advantage of using a commodity server as a storage controller is cost. The high volume for the server makes it the least expensive of the implementations, and there are many sources for the servers.
The disadvantages of commodity server-based controllers:
• The server may be less reliable than the other implementations.
• It may be difficult to provide non-disruptive serviceability for component replacement, upgrades and scaling.
• Variations in server hardware may create support problems.
• Storage software may require regular updating.
I went into more detail on the different architectures of storage systems in an Industry Insight article, which is available here.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Overland Storage and Sphere 3D completed their merger Tuesday, and while the combined company will be called Sphere 3D, it looks a lot more like the old Overland Storage.
Officially, Overland becomes a wholly owned subsidiary of Sphere 3D after Sphere 3D paid $81 million for Overland stock. The Overland, Tandberg Data (acquired by Overland last January), Sphere 3D and V3 Systems (acquired by Sphere 3D last January) brands remain.
Overland CEO Eric Kelly will run the new company as CEO and chairman (a position he held at Sphere 3D before the merger). Former Sphere 3D CEO Peter Tassiopoulos ranks below Kelly as vice chairman and president. Overland CFO Kurt Kalbfleisch takes the same job at Sphere 3D. The new Sphere 3D board includes former Overland directors Vic Mahadevan and Dan Bordessa, and Sphere 3D holdovers Peter Ashkin, Mario Biasini and Glenn Bowman, along with Kelly and Tassiopoulos.
Most of the products and revenue will also come from Overland, because Sphere 3D didn’t have much of either before the merger. Sphere 3D reported third-quarter earnings Monday: $1.6 million in revenue and a $3.75 million loss.
Overland last month reported $22.9 million in revenue for its most recent quarter, and lost $7.3 million. The new company’s goal is to turn Overland’s disk and tape storage products and Sphere 3D’s Glassware virtualization products into one profitable business instead of two money-losing companies. That will be a tough trick.