Storage Soup


March 15, 2011  8:17 PM

Better MLCs should help enterprise SSD adoption

Dave Raffo

When it became evident that solid state drives (SSDs) were still moving at a trickle into the enterprise about a year ago, industry experts said it would take two things to push their adoption. One was automatic tiering software and the other was enterprise-grade multi-level cell (MLC) flash that would bring down the price.

Now most storage array vendors have automated tiering software. And with Seagate’s launch of its Pulsar SSDs today, several of the leading drive makers have MLC devices to go with the more expensive single-level cell (SLC) drives on the market. So will SSDs now ramp quickly in the enterprise?

Probably not, although their use will certainly increase at a greater rate than it has been.

Gartner analyst Joseph Unsworth said the enterprise SSD market went from around 320,000 units and $485 million in sales in 2009 to over 1 million units and over $1 billion in sales last year. He forecasts 9.4 million units and $4.2 billion in sales by 2015.

Seagate’s internal projections see SSDs catching up to 3.5-inch hard drives in the enterprise in units shipped in 2013, but not making much of a dent in 2.5-inch hard drives.

“The enterprise doesn’t turn on a dime,” Unsworth said. “There are long qualifications and there’s still confusion from end users. They know SSDs are important, but traditional data center professionals aren’t saying ‘We have to have this.’ They’re deployed in application workloads where they make the most sense, like search, social media, data warehousing and streaming video.”

MLC drives are cheaper than SLC drives, but they wear out a lot faster. Until recently they were limited to consumer devices, but they are now working their way into the enterprise. IBM said it will ship STEC MLC drives on midrange and enterprise storage arrays, and Seagate’s MLC drives will likely find their way into arrays from EMC and other array vendors.
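
As a rough way to see why endurance is the sticking point, the back-of-the-envelope sketch below estimates how long a drive’s flash lasts under a steady write workload. All of the figures — capacity, program/erase (P/E) cycle ratings, daily write volume and write amplification — are illustrative assumptions, not specs from Seagate, STEC or any other vendor.

```python
# Back-of-the-envelope SSD endurance estimate (all figures are assumptions).
def drive_lifetime_years(capacity_gb, pe_cycles, host_writes_gb_per_day,
                         write_amplification=2.0):
    """Years until the flash's raw write budget is exhausted."""
    raw_write_budget_gb = capacity_gb * pe_cycles         # total NAND writes allowed
    nand_writes_per_day = host_writes_gb_per_day * write_amplification
    return raw_write_budget_gb / nand_writes_per_day / 365

# Assumed ratings: ~100,000 P/E cycles for SLC vs. ~5,000 for MLC of the era.
slc = drive_lifetime_years(capacity_gb=200, pe_cycles=100_000, host_writes_gb_per_day=2_000)
mlc = drive_lifetime_years(capacity_gb=200, pe_cycles=5_000, host_writes_gb_per_day=2_000)
print(f"SLC: ~{slc:,.0f} years, MLC: ~{mlc:,.1f} years under the same workload")
```

The point is not the exact numbers but the ratio: under the same workload, the cheaper MLC flash exhausts its write budget roughly 20 times sooner, which is why enterprise MLC drives lean on wear leveling, over-provisioning and stronger error correction.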

Still, Unsworth added that pricing is decreasing at a slower rate for SSDs in storage systems than in servers.

He said price erosion on the server side is forecast at about 34% over the next five years, with storage expected to drop about 30%. “It will take more MLC, more [product] competition and more scale to bring prices down,” he said.

March 10, 2011  3:37 PM

Non-compliance = big fines, bad rep

Randy Kerns

The Department of Health and Human Services has levied a hefty fine of $4.3 million against Maryland health care provider Cignet Health for HIPAA violations.

This is a significant event for institutions that deal with information governed by regulations for storing and managing records. The article’s statement that this is the first enforcement of the HIPAA regulations is inaccurate, but it is the first enforcement since the more stringent HITECH Act was passed. Previous enforcements involved regional hospitals and did not receive significant publicity.

So why did the Department of Health and Human Services strike now? HHS is being punitive with the fine and public notification because of what seems like willful disregard for protecting information. The HHS said Cignet refused to provide 41 patients with copies of their medical records and failed to respond to repeated requests from the HHS Office of Civil Rights.

But the fine also sends a clear message to other healthcare organizations to comply or face fines and — more importantly — public embarrassment.

As a quick review, HIPAA (the Health Insurance Portability and Accountability Act of 1996) and the Health Information Technology for Economic and Clinical Health Act (HITECH Act) of 2009 impose requirements on control of access, breach notification, and storage of information. Evaluator Group articles about the need to meet compliance requirements for HIPAA are at www.evaluatorgroup.com.

The fine against Cignet reminds me of a conversation I had with the CIO and other senior management of a regional hospital about 18 months ago. We spoke about the archiving requirements for Electronic Medical Records (EMR) and the different retention requirements based on that type of information.

After discussing the retention requirements and the need for using storage systems that met compliance requirements that would pass an audit, the CIO said the hospital was storing all of its data on standard disk systems. When asked about meeting compliance requirements, he said he was not concerned.

He explained that the public depended on this regional hospital. If it was audited due to some complaint or had a loss of data, the public could not do without it and would have to support it. He said his budget did not allow for taking the proper measures for storing data to comply with regulations.

That was an interesting discussion. He was admitting the hospital knowingly violated the regulations regarding the privacy of data but was unwilling to even consider doing something about it. Aside from being appalled, I thought the arrogance would cause an even greater impact when an incident occurred.

For some institutions, a $4.3 million fine may not be a major impact, but for most it would be. I would think it would be a tough thing to put on a budget line item.

But the damage to the institution goes beyond the impact on its budget. The bad publicity can harm its reputation and affect its support over the long term. For the healthcare information professional, the peer group will be aware of the failings. Not only will this cause the institution and its staff to be held in low regard, it may also have an effect on future employment opportunities.

The media, customers and the Department of Health and Human Services all have long memories. Any other type of incident will cause the lack of privacy protection to be brought up repeatedly. While a fine is a one-time event, the bad reputation may be permanent.


March 9, 2011  10:46 PM

NetApp bags LSI’s Engenio storage group for $480 million

Dave Raffo

NetApp CEO Tom Georgens acquired his old company today, announcing that NetApp will buy LSI Corp.’s Engenio storage business for $480 million.

 

No storage acquisition should be considered a surprise these days, and there had been rumblings for months that LSI was looking to sell off Engenio. But the acquirer and the price were a little unexpected. NetApp had shied away from big acquisitions since EMC outbid it for Data Domain in the middle of 2009, and it already has a storage platform that stretches from the low end through the enterprise. And $480 million seems like a low price in the wake of Hewlett-Packard’s $2.35 billion acquisition of 3PAR, EMC’s $2.25 billion pickup of Isilon and Dell’s $820 million acquisition of Compellent in the last six months.

 

The deal is expected to close in about 60 days.

 

NetApp’s management team certainly knows what it is getting. Georgens was Engenio’s CEO for two years before joining NetApp in 2007, and NetApp chief strategy officer Vic Mahadevan went from LSI to NetApp last year.

 

 

“At first I was surprised, but recognizing Tom had come from [Engenio], it started making more sense,” Evaluator Group analyst Randy Kerns said. “It makes sense when you consider NetApp got a reasonable price, new market opportunities and OEMs it didn’t have before.”

 

In a conference call today to discuss the deal, Georgens said Engenio’s systems will allow NetApp to go after different workloads than NetApp’s FAS line. Those workloads include video capture, video surveillance, genomics sequencing and scientific research that Georgens characterized as bandwidth intensive.

 

“There are workloads we’re not going to service with [NetApp] Data ONTAP,” he said. “This is targeted at workloads that are separate and distinct from ONTAP.”

 

The deal also should strengthen NetApp’s relationship with IBM. LSI sells its Engenio storage exclusively through OEMs, and IBM sells more LSI storage than any other vendor. IBM also sells NetApp’s FAS platform. LSI’s other OEM partners include Oracle, Teradata and Dell. Georgens said NetApp will also sell LSI systems through NetApp channels.

 

One thing he wants to avoid doing is setting up a competitive situation between FAS and LSI’s platforms. NetApp executives say having one unified platform sets the company apart from its major competitors, which sell different platforms for the midrange and enterprise and for NAS and SAN customers.

 

Georgens said NetApp’s new portfolio will be different from EMC’s situation with its midrange Clariion and enterprise Symmetrix platforms.

 

“The problem with Symmetrix and Clariion is, the target markets are overlapping,” he said. “They all have replication, snapshots and other things in parallel. This [LSI] is not our SAN product. We have a SAN product called FAS. This is targeted at workloads where we’re not going to sell FAS.”

 

For more on this deal, see SearchStorage.com.

March 9, 2011  4:15 PM

Fusion-io flashes IPO filing

Dave Raffo

NAND flash start-up Fusion-io is looking to go public, thanks to early success that it owes in large part to Facebook.

Fusion-io filed a registration form for an initial public offering (IPO) today, declaring it wants to raise up to $150 million. The company has raised $103 million in venture capital funding.

The move is no surprise, considering Fusion-io CEO David Flynn and board member Steve Wozniak have been dropping hints about its rapid revenue growth. Fusion-io’s S-1 filing shed light on its sales of its PCIe-based flash cards that are sold mostly through OEM deals with server vendors Dell, Hewlett-Packard and IBM.

Fusion-io claims it had $36.2 million in revenue for its fiscal year that ended in June of 2010, and another $58.2 million in the last six months of 2010. Those numbers were up from $10.1 million for fiscal 2009 and $11.9 million in the last six months of that year. Fusion-io is losing money, which is common for storage companies when they first go public. The vendor lost $31.7 million for fiscal 2010, $8.2 million over the last six months of last year and a total of $77.1 million in its history.

Fusion-io’s ioMemory hardware creates a high-capacity memory tier and integrates with its VSL virtualization software. It also has directCache automated tiering software and ioSphere platform management software in customer trials.

Who’s buying Fusion-io products? Mostly Facebook and one other unidentified customer. Fusion-io’s filing said Facebook is its largest customer and accounted for a “substantial portion” of its revenue from late last year, and that will continue through the end of this month before trailing off. The other large end user customer is also expected to make large purchases through the end of this month and then decline. Facebook’s purchase came through one of the server vendors, because Fusion-io said 92% of its revenue for the last half of 2010 came from its OEM partners – up from 75% for fiscal 2010.

Investment bank Credit Suisse, one of the IPO underwriters, has also said it is using Fusion-io cards with its trading platform.

There is no shortage of enterprise flash vendors today, especially those selling solid state drives (SSDs). Fusion-io’s filing listed EMC, Hitachi Data Systems, NetApp, Intel, LSI Corp., Micron, Samsung, Seagate, STEC, Toshiba Corp. and Western Digital as competitors. Those are just the public companies. Other private companies coming up behind Fusion-io include Violin Memory, Alacritech, and Texas Memory Systems.


March 7, 2011  4:19 PM

Western Digital to buy Hitachi GST for $4.3 billion

Dave Raffo

Western Digital today said it intends to buy Hitachi Global Storage Technologies (HGST) for $4.3 billion, reducing the number of drive vendors to four and making Western Digital a larger force in the enterprise hard drive and solid state drive (SSD) worlds.

Western Digital executives said they expect the deal to close in September. It will leave Western Digital, Seagate, Samsung and Toshiba as the surviving hard drive vendors at a time when digital data keeps growing at record rates and the way people store that data is evolving.

“There’s an unprecedented demand for digital storage, both locally and in the cloud,” Western Digital CEO John Coyne said during a conference call today to discuss the deal. “And there’s increasing diversity of storage devices, connected to PCs, edge devices and the cloud.”

The deal makes Western Digital’s portfolio more diverse by giving it an enterprise presence. HGST and Seagate recently brought out 3 TB SATA drives, increasing the largest capacity drives by a third. And enterprise SSDs jointly developed by HGST and Intel are now sampling with OEM customers.

HGST CEO Steve Milligan, who will join Western Digital as president, said the Intel relationship will continue under the Western Digital banner.

“It’s a key part of HGST’s strategy, and will be a key part of the combined company,” he said.

Western Digital and HGST had a combined 49% market share of hard drives in 2010, according to a research note issued today by Stifel Nicolaus analyst Aaron Rakers. Rakers wrote that Western Digital had a 31.3% share, with Seagate next at 30%.

Seagate was the enterprise (Fibre Channel and SAS interfaces) leader with 63% market share, followed by HGST with 24.6%. Western Digital began shipping SAS drives last year, but they remain an insignificant part of its business.



March 4, 2011  3:51 PM

Big data storage systems rallied in 2010

Dave Raffo

High end storage systems are back in style, at least according to the latest storage revenue numbers from IDC.

IDC’s worldwide quarterly disk storage tracker research shows that storage systems with an average selling price of $250,000 and above rallied in 2010, finishing the year with 30.2% market share. That, according to senior research analyst Amita Potnis, brings the high-end back to its 2008 pre-financial crisis level.

“There were multiple drivers behind the remarkable growth in high-end systems, including demand for storage consolidation and datacenter upgrades supported by new product push from a number of vendors,” Potnis said in the IDC release.

Hitachi Data Systems had the most significant high-end product release of 2010 with its Virtual Storage Platform (VSP). HDS revenues jumped nearly 30% in the fourth quarter over 2009 after the VSP release.

Other than the rise of the high end, the fourth quarter of 2010 looked a lot like the rest of the year for storage sales. Ethernet storage – in the form of NAS and iSCSI – continued to outpace the market by a wide margin, as did vendors NetApp and EMC.

For the fourth quarter, IDC put external storage system revenue at over $6 billion for an increase of 16.2% over 2009. The NAS market grew 41.3% with EMC owning 52.8% of the market and NetApp 23.7%. The iSCSI SAN market grew 42.1% in the quarter, led by Dell with 32.6% and HP with 14.7%.

NetApp overall revenue grew 43.7% year-over-year in the fourth quarter, and it increased market share from 8.4% in 2009 to 10.3%. EMC remained the overall leader with 26% share, followed by IBM at 16.3% and HP at 11.6%. HDS (8.7%) and Dell (7.9%) round out the top six behind NetApp. HDS (29.7% growth), EMC (26.3%) and NetApp outpaced the overall market gain. IBM, HP and Dell lost market share in the quarter.

For the entire year, IDC put external storage revenue at $21.2 billion for an 18.3% increase over 2009. EMC led the way with 25.6% market share, followed by IBM at 13.8%, NetApp and HP with 11.1% each, and Dell with 9.1%. Only EMC and NetApp gained market share for 2010 among the top five.

March 1, 2011  6:34 PM

What’s next for unified storage?

Randy Kerns

Unified storage has gone from a specialty item to something offered by nearly every storage vendor in recent years. In the beginning, vendors such as NetApp added block capability to their file system storage, and NetApp’s biggest rivals have since followed down that unified path.

 

The evolution continues, however, and multiprotocol systems will likely include more technological advances over the coming years.

 

As a refresher, I define Unified Storage as:

 

Unified Storage is a storage system that provides both file and block access simultaneously. The block access is accomplished through use of an interface such as Fibre Channel, SAS, or iSCSI over Ethernet. The file-based access is to a file system on the storage system using either CIFS or NFS over Ethernet. 

 

 

An implied piece of unified storage is that it requires unified management: one management interface for both block and file data. Without that, the critical goal of consolidation and simplification is compromised.
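
To make the “one pool, two access methods” idea concrete, here is a minimal sketch of what unified provisioning might look like from the administrator’s side. The `UnifiedArray` class and its methods are hypothetical — not any vendor’s actual API — but they capture the definition above: block LUNs and file shares carved from the same pool and handled through one management interface.

```python
# Hypothetical management interface for a unified (block + file) storage system.
class UnifiedArray:
    def __init__(self, pool_gb):
        self.free_gb = pool_gb
        self.exports = []            # LUNs and shares live in one catalog

    def _allocate(self, size_gb):
        if size_gb > self.free_gb:
            raise ValueError("pool exhausted")
        self.free_gb -= size_gb

    def create_lun(self, name, size_gb, protocol="iSCSI"):
        """Block access: a Fibre Channel, SAS or iSCSI LUN from the shared pool."""
        self._allocate(size_gb)
        self.exports.append(("block", protocol, name, size_gb))

    def create_share(self, name, size_gb, protocol="NFS"):
        """File access: an NFS or CIFS share from the same pool."""
        self._allocate(size_gb)
        self.exports.append(("file", protocol, name, size_gb))

array = UnifiedArray(pool_gb=10_000)
array.create_lun("exchange_db", 500, protocol="FC")       # block workload
array.create_share("home_dirs", 2_000, protocol="CIFS")   # file workload
print(array.exports)
print(array.free_gb, "GB left in the shared pool")
```

A non-unified shop would need two systems and two management tools to do the same thing, which is exactly the consolidation argument made above.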

  

 

Some vendors have provided block storage through both Fibre Channel and iSCSI, while others stick to iSCSI only because it is simpler to deliver. The following diagram gives a very general view that compares the implementations for block and file storage:

Unified storage systems are commonly offered by storage vendors, but that doesn’t mean every new storage system you buy must be unified. Certain high-end IT environments with specific usage requirements would use non-unified systems. If you only need high-performance block storage, for instance, a unified system isn’t necessary.

  

However, there are excellent uses of unified storage:

 

  • In a virtual server environment, a unified storage system presents an opportunity to meet demands for quickly provisioning virtual machines and meeting operational requirements. A virtual machine could be provisioned with an NFS-based datastore for its file I/O, while the block storage capability of the unified system would allow Raw Device Mapping (RDM) to attach a physical disk to a virtual machine to meet application requirements.
  • If one type of usage predominates, such as file storage for unstructured data, but there is still a need for some block storage (an Exchange database, for example), a unified storage system allows for consolidation onto a single platform.
  • Unified storage provides great flexibility for an organization that needs to repurpose storage because its needs are changing.
  • Unified storage also provides a single resource that can be provisioned as needed for the usage type required – block or file.

 

 

What’s Next?

 

But vendors haven’t just been combining block and file protocols in the same package. Recent features added to unified systems include automated tiering, solid state devices (SSDs) as a tier for higher performance, and support for cascading read/write-capable snapshots to add value for use cases such as virtual desktop infrastructures (VDIs).
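
Automated tiering is the most broadly adopted of these features, so it is worth sketching the idea. The extent size, counters and promotion policy below are a generic illustration of how such a feature works in principle, not a description of any particular vendor’s implementation.

```python
# Simplified automated tiering: promote the hottest extents to SSD, demote the rest.
def retier(access_counts, ssd_slots):
    """access_counts: {extent_id: recent I/O count}; ssd_slots: extents that fit on SSD.
    Returns (ssd_tier, hdd_tier) as lists of extent ids."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    return ranked[:ssd_slots], ranked[ssd_slots:]

counts = {"ext-01": 9_500, "ext-02": 12, "ext-03": 4_800, "ext-04": 3}
ssd_tier, hdd_tier = retier(counts, ssd_slots=2)
print("SSD tier:", ssd_tier)   # the two hottest extents
print("HDD tier:", hdd_tier)   # cold data stays on cheaper disk
```

Real systems run a policy like this continuously and move extents in the background, so the hot working set — a VDI boot image, for instance — migrates to flash without administrator intervention.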

 

What should be expected next for unified storage? It’s likely that vendors will package other capabilities together and call that the new “unified storage.” That would dilute the meaning of “unified” and require a qualifying phrase after it.

 

More likely, there will be an additional, high-value capability for storage that will have its own identity. Maybe it could be something like having a storage system with the capability to intelligently (and automatically) do archiving as well. Call it “archiving-enabled” storage. This is more evolutionary than revolutionary. But it will be uniquely defined.

February 23, 2011  7:08 PM

Scale-out vs. scale-up: the basics

Randy Kerns

There’s been a lot of talk about scale-out and scale-up storage lately, and I get a sense that a lot of people don’t understand that these terms are not synonymous. And that causes confusion among IT professionals when they are planning product purchases and trying to determine how these types of products bring value versus cost and complexity to their environments.

 

 

To make informed buying decisions, IT pros need to understand the difference between scale-up and scale-out. The following are the basics, which can be built upon for more detailed considerations.

 

Scale-up, as the following simple diagram shows, is taking an existing storage system and adding capacity to meet increased capacity demands.    

 

 

Scale-up can solve a capacity problem without adding infrastructure elements such as network connectivity. However, it does require additional space, power, and cooling. Scaling up does not add controller capabilities to handle additional host activities. That means it doesn’t add costs for extra control functions either.

So the cost does not scale at the same rate as the initial storage system purchase – only additional devices are being added.

 

Scale-out storage usually requires additional storage systems (called nodes) to add capacity and performance. Or, in the case of monolithic storage systems, it scales by adding more functional elements (usually controller cards). One difference between scaling out and just putting more storage systems on the floor is that scale-out storage continues to be represented as a single system.

There are several methods for accomplishing scale out, including clustered storage systems and grid storage. The definitions of these two types can also be confusing, and other factors add to the complexity (that’s a subject for another article), but the fundamental premise is that a scale-out solution is accessed as a single system.
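
The “single system” behavior is usually achieved by spreading data across the nodes behind one namespace. The sketch below is a deliberately simplified illustration of that premise — placement by hashing an object name across whatever nodes exist — and not how any specific clustered or grid product distributes data. Production systems typically use consistent hashing or directory services so that adding a node does not reshuffle everything.

```python
import hashlib

# Toy scale-out placement: clients address one namespace; data is spread over N nodes.
class ScaleOutCluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def add_node(self, node):
        """Scaling out adds capacity *and* another controller in one step."""
        self.nodes.append(node)

    def node_for(self, name):
        digest = hashlib.md5(name.encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

cluster = ScaleOutCluster(["node-a", "node-b"])
print(cluster.node_for("/vol/projects/report.doc"))   # client never picks a node itself
cluster.add_node("node-c")                            # scale out by one node
print(cluster.node_for("/vol/projects/report.doc"))   # placement may move; the namespace doesn't
```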

This diagram shows an example of a scale-out storage solution. In this diagram, the scaling is only by a single additional node, but a scale-out solution could have many nodes interconnected across geographical distances.

 

 

The scale-out storage in this example added both the control function and capacity but maintained a single system representation for access. This scaling may have required additional infrastructure such as storage switches to connect the storage to the controller and a connection between the nodes in the cluster or grid. These connections let the solution work as a single system.

 

Scaling out adds power, cooling, and space requirements, and the cost includes the additional capacity, control elements and infrastructure. With the scale-out solution in this example, capacity increased and performance scaled with the additional control capabilities.

 

Not all scaling solutions are so simple. Many storage systems can scale out and up. The following diagram illustrates this:

 

Considerations

 

When looking at scale-up or scale-out storage, consider these factors:

 

  • Costs. Scale-up adds capacity but not controller or infrastructure costs. If the measure is dollars per GB, scale-up will be less expensive (see the worked example after this list).
  • Capacity. Either approach can meet capacity requirements, but there may be a limit on scale-up capacity based on how much capacity or how many devices an individual storage controller can attach.
  • Performance. Scale-out has the potential to aggregate the IOPS and bandwidth of multiple storage controllers. Representing the nodes as a single system may introduce latency, but this is implementation specific.
  • Management. Scale-up would have a single storage system management characterization. Scale-out systems typically have an aggregated management capability, but there may be variations between vendor offerings.
  • Complexity. Scale-up storage is expected to be simple, while scale-out systems may be more complex because they have more elements to manage.
  • Availability. Additional nodes should provide greater availability in case one element fails or goes out of service. This depends on the particular implementation.
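
To put a number on the cost consideration, here is a small worked dollar-per-GB comparison. Every price in it is an invented placeholder used only to show the mechanics, not a quote from any vendor.

```python
# Illustrative $/GB comparison; all prices are made-up placeholders.
def cost_per_gb(base_cost, increments, increment_cost, increment_capacity_gb):
    total_cost = base_cost + increments * increment_cost
    total_gb = increments * increment_capacity_gb
    return total_cost / total_gb

# Scale-up: pay for the controller once, then add disk shelves only.
scale_up = cost_per_gb(base_cost=100_000, increments=8,
                       increment_cost=20_000, increment_capacity_gb=24_000)

# Scale-out: every increment is a node, so each one carries controller cost too.
scale_out = cost_per_gb(base_cost=0, increments=8,
                        increment_cost=45_000, increment_capacity_gb=24_000)

print(f"scale-up:  ~${scale_up:.2f}/GB")
print(f"scale-out: ~${scale_out:.2f}/GB")
```

With these placeholder prices the scale-up system lands around $1.35/GB against roughly $1.88/GB for scale-out — the gap is the repeated controller cost, which is also what buys the extra IOPS, bandwidth and availability listed above.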

There is a great deal to consider when making a choice between scale-out and scale-up. The decision will ultimately depend on how one vendor implements its solution and on its capabilities and features compared to another vendor. But it is always best to start with a basic understanding and then look at the differences.

 


February 23, 2011  2:34 PM

Dell closes Compellent, opens new storage era

Dave Raffo

When Dell closed its $800 million acquisition of Compellent Tuesday, it also closed the books on a year of deals that will transform Dell’s storage portfolio.

Dell first said it wanted to buy Compellent last December. Before that, it made two smaller 2010 acquisitions that could enhance its overall storage product line. It bought the IP of scale-out NAS vendor Exanet and acquired data reduction startup Ocarina Networks. Dell also tried to buy 3PAR but was outbid by Hewlett-Packard several months before it grabbed Compellent instead. In another storage move, Dell launched a DX object-based storage platform in 2010.

A Dell spokesman said the vendor will have more details about its overall storage strategy next month, but it appears that Compellent will bring Dell into its post-EMC era. Or as Dell executives like to say, its storage business has gone from being mostly a reseller of storage to becoming a full provider of storage technology.

Dell lists the Compellent Series 40 as its multiprotocol storage system for enterprise applications with EqualLogic as its scalable Ethernet storage and PowerVault as the entry level DAS and iSCSI SAN choice. Of course, EMC Clariion has filled the multiprotocol SAN role for Dell for close to a decade.  Dell will not OEM and may not even resell the EMC VNX that will replace the Clariion, and definitely will not sell the EMC VNXe SMB system that competes with EqualLogic and PowerVault.

A slide attached to Dell executive Rob Williams’ blog posted yesterday lists Compellent as Dell’s high end block storage offering with EMC’s CX4 and the EqualLogic PS Series as midrange SAN products, and PowerVault as the entry level system. That’s odd positioning because Compellent has always been characterized as a midrange system – actually, more towards the lower end of the midrange.

Williams did shed some light on Dell’s storage strategy in his blog:

“Together with Dell EqualLogic, PowerVault and Compellent, our block and file storage offerings now span every layer from low- to mid-range offerings for SMBs and Public institutions to higher-end enterprise ready offerings for large corporations and organizations. 

Last year, we also acquired two more storage assets that bring with them important IP. Exanet provides Dell with scale-out file storage capabilities, and moves us for the first time beyond the arena of mid-range block storage and into the playing field of mid-range file storage.  We hope to launch our first Exanet NAS product early this summer.  The second is Ocarina, which gives us content-aware file deduplication capabilities that we plan to add across our entire storage portfolio over time.”

There may be more to come. When they first spoke about buying Compellent in December, Dell executives said they may look to add data management features through an acquisition. All of this makes Dell worth watching through 2011.

 


February 22, 2011  3:11 PM

Alternatives available for HPC shops lacking Lustre support

Dave Raffo

With Oracle showing a lot less love for open source storage software than Sun did, high performance computing (HPC) shops are nervous about the future of Lustre and HPC hardware vendors are taking steps to pick up the slack.

 

Last year the Illumos project sprang up as the governing body for OpenSolaris to help spur development of the source code used for ZFS. Now the Lustre community is rallying around the storage file system used in most large HPC implementations as Oracle shows no signs of supporting it.

Since late last year, Xyratex acquired ClusterStor for its Lustre expertise, start-up Whamcloud started aggressively hiring Lustre developers and partnering with hardware vendors on support deals, and Cray, DataDirect Networks (DDN), Lawrence Livermore and Oak Ridge National labs launched OpenSFS.org to develop future releases of Lustre.

DDN last week launched a promotional offer to provide Lustre customers a third year of support for free if they purchase a two-year DDN file system support contract. The support comes through DDN’s alliance with Whamcloud.

“There is a general uneasiness in the industry, and people are looking for somebody to step up,” DDN marketing VP Jeff Denworth said. “There’s been a gross defection of talent from Oracle around the Lustre file system.”

DataDirect Networks customer Stephen Simms, manager of the Data Capacitor project at Indiana University, said it will take initiatives like those undertaken by DDN, Whamcloud, and OpenSFS to save Lustre.

  

“Without people to develop Lustre, then Lustre is going to die,” he said. “National laboratories have a strong investment in Lustre, and they will do their best to keep it alive, but without the pool of talent that exists in for-profit companies, where are you going to be? You’re going to be an organization that desperately needs a file system with only a handful of developers.”

  

Simms said Lustre is a crucial piece of IU’s Data Capacitor, a high speed, high bandwidth storage system that serves IU campuses and other scientific research sites on the TeraGrid network. The IU team modified Lustre file system code to provide the automatic mapping of user IDs (UIDs) across TeraGrid sites.
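
The UID-mapping problem is easier to picture with a tiny example. The sketch below only illustrates the concept of translating a remote site’s numeric user IDs into the local namespace; it is not the actual code IU contributed to Lustre, and the site names and numbers are invented.

```python
# Conceptual cross-site UID mapping (site names and ID numbers are invented).
UID_MAP = {
    ("site-indiana", 5001): 12001,   # remote (site, uid) -> local uid
    ("site-texas",   5001): 12002,   # same remote uid, different person locally
}

NOBODY = 65534  # conventional "nobody" uid for unmapped users

def map_uid(site, remote_uid):
    """Translate a remote user's UID into the local namespace, squashing unknowns."""
    return UID_MAP.get((site, remote_uid), NOBODY)

print(map_uid("site-texas", 5001))    # 12002
print(map_uid("site-oregon", 4242))   # 65534 -> unmapped, treated as nobody
```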

  

“We wouldn’t be able to develop this system for mapping UID space if the file system were closed,” he said. “The fact that it’s open has been a big deal for us. It’s important to have the expertise someplace. It’s a concern that there are so few Lustre developers, and they’ve left Oracle and where have they gone? Some have gone to Xyratex and some to Whamcloud, and who knows where others have gone? It’s important to keep momentum.”

 

 

