Storage Soup

March 21, 2011  1:20 PM

Storage tiering draws interest, with good reason

Randy Kerns

Storage tiering has generated a good amount of interest lately, largely because of the performance improvements it can bring to organizations using solid state drives (SSDs) in their storage setups.

When it comes to tiering, it’s worth listening to what storage vendors have to say. There are actual benefits, and several approaches with real competitive differentiation.

Performance and cost are the big benefits of storage tiering. The goals are to balance the workload across the different storage elements and maximize the value returned from the investment.

Using SSD in a tiering approach provides improvements in latency and transfer rate. Intelligent tiering can maximize SSD usage as a resource, which in turn can reduce the number (and cost) of SSDs in a system.

If you’re considering adding tiers or changing your tiering strategy, you might first want to look at the available methods.

Storage systems have tiers usually consisting of several types of hard disk drives (HDDs) and perhaps SSDs. There may be two types of HDDs – a high RPM drive with less capacity for higher performance and a lower RPM HDD with higher capacity. Customers seem to be moving towards SSDs for performance and larger capacity HDDs as the tiers of choice. The movement of data between tiers is based on analysis of access patterns to data.

Most of the tiering systems vendors offer now move data at the sub-LUN level (early implementations only moved entire volumes/LUNs). Key differences among the implementations involve how “real-time” the analysis is and how automated the data movement is. Another consideration is the potential contention created when data movement consumes system resources that are also needed to satisfy host requests.

Another tiering method is through intelligent placement of sub-LUN segments of data along with caching tiers. This approach uses the analysis of patterns of data access to determine where to place the data, and cache tiers to control performance. The goal with this approach is to provide the performance based on data access characteristics without introducing additional resource consumption for movement.

There are also hybrid approaches where intelligent placement and caching are combined with limited background movement. These approaches try to strike a balance where the intelligence of the tiering controls the resource usage while balancing the potential performance gains.
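None of these vendor implementations is public at the level of code, but the basic sub-LUN idea (track per-extent access counts, promote the hottest extents into a limited SSD tier, demote cold ones) can be sketched. The Python below is a hypothetical illustration; the `Extent` model, `plan_moves` function and `hot_threshold` parameter are my assumptions, not any vendor's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    """A sub-LUN extent with a simple access counter (hypothetical model)."""
    extent_id: int
    tier: str = "hdd"   # current tier: "ssd" or "hdd"
    io_count: int = 0   # accesses observed in the current analysis window

def plan_moves(extents, ssd_slots, hot_threshold=100):
    """Return (promote, demote) lists for one analysis pass.

    Promote the hottest extents into the limited SSD tier (at most
    ssd_slots of them, and only if they cleared hot_threshold accesses);
    demote any SSD-resident extent that no longer qualifies.
    """
    by_heat = sorted(extents, key=lambda e: e.io_count, reverse=True)
    want_ssd = {e.extent_id for e in by_heat[:ssd_slots]
                if e.io_count >= hot_threshold}
    promote = [e for e in extents if e.extent_id in want_ssd and e.tier == "hdd"]
    demote = [e for e in extents if e.extent_id not in want_ssd and e.tier == "ssd"]
    return promote, demote
```

A real array would also rate-limit these moves so background migration does not contend with host I/O, which is exactly the trade-off the hybrid approaches described above try to manage.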

The Evaluator Group has a review of major vendors’ automated tiering systems.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

March 18, 2011  8:31 PM

Nirvanix says cloud can ease data concerns in Japan

Dave Raffo

Cloud proponents look at the disasters in Japan and say even the most horrific incidents don’t have to cause people to lose data.

By cloud proponents, I mean people who sell cloud products and services. Cloud storage vendor Nirvanix has one of its five worldwide data centers in Japan. Although the Nirvanix data center is 200 miles from the area where Japan has been hit by an earthquake, tsunami and potential nuclear meltdown, it has offered to move any customer data from the Japan site to one of its other data centers for free. Nirvanix also has facilities in Los Angeles, New York, Dallas, and Frankfurt, Germany.

Several of Nirvanix’s approximately 50 customers in Japan have taken the vendor up on its offer, Nirvanix CEO Scott Genereux said. But he said the offer was made mostly to give customers peace of mind. He said the data center has its own power and shouldn’t be affected by rolling blackouts expected in Japan. And even if the building were wiped out, all customer data could be restored at another site.

“Without having to touch tape filers, aging NAS filers onsite or a physical RAID box on the floor, customers can move their data,” he said. “They don’t have to wait for a repair specialist to come and add drives.

“This is the power of the public cloud. Customers can select the location of their data. You can’t do that with a private cloud. That’s just isolated storage.”

Of course, companies using private clouds or no clouds can also migrate data offsite. That’s one of the keys to good disaster recovery – getting data to a safe location in case your main location becomes unavailable. And the cloud can be a simple way to do that.

Dick Mulvihill, managing partner of Chicago-based data protection cloud provider HexiSTOR, said major disasters have eased a lot of his customers’ fears about trusting the management of their data to a separate party.

“The whole mindset of worry about data going off site to the cloud has changed,” he said. “It’s becoming a more popular option because of the simplicity and automation of the cloud. For dealing with natural and man-made disasters, using the cloud is good governance.”

March 18, 2011  12:21 PM

Dell set to roll out NAS, dedupe across storage platform

Dave Raffo

Now that Dell has closed on its Compellent acquisition, the next steps in its storage product expansion will be to add Exanet scalable NAS and Ocarina primary data deduplication to its two main platforms.

The $820 million Compellent acquisition gives Dell a second SAN product to go with its 2008 EqualLogic iSCSI SAN buy. Dell executives say the smaller 2010 acquisitions of Exanet and Ocarina will complement those SAN purchases.

Dell is close to completing the integration of Exanet’s clustered NAS file system with EqualLogic and entry-level PowerVault storage. Those products will launch in mid-year, probably at the inaugural Dell Storage Forum conference in June. Later this year, Dell expects to add Ocarina’s content-aware compression and data deduplication to the Exanet file system.

Longer range plans – probably in 2012 — include integrating the Exanet file system with Compellent storage, developing dedupe for block storage, and coming up with a common management application for EqualLogic, Compellent and Exanet.

“Exanet and Ocarina will become ubiquitous technologies across the two platforms,” said Travis Vigil, Dell’s executive director of product marketing for enterprise storage. “We’ll have a file system that will provide unified storage for EqualLogic, PowerVault and Compellent. Once you get that common file system, we can take things that EqualLogic and Compellent have done with automated tiering and load balancing within the array and do that across the storage environment.”

Vigil admitted there is overlap between EqualLogic – which plays in the low to middle of the midrange – and Compellent, which sells into the middle and higher end of the midrange. The plan is to extend Compellent higher into the enterprise to fill some of the gap left when Hewlett-Packard outbid Dell for 3PAR last year. “If you don’t have overlap, you have a gap,” Vigil said.

Vigil said Dell will not sell the Ocarina Storage Optimizer dedupe appliances that Ocarina sold before the acquisition, but will support customers who bought the products.

Dell also will continue to OEM EMC Clariion SAN, Celerra NAS and the smaller Data Domain backup devices for the current lifecycle of those products, but Vigil said “our focus is on Dell IP and the fluid data architecture.” Dell will resell but not OEM the EMC VNX unified storage systems that will eventually replace the Clariion and Celerra families.

March 16, 2011  8:36 PM

IT Pulse shows importance of storage in IT

Randy Kerns


I’m always looking at how storage is perceived by the IT industry and its customers. It’s obvious to me that storage is critical for successful IT, but that understanding seems to wax and wane with others. One positive indicator of the importance of storage was the IBM Pulse event a few weeks ago.


IBM Pulse focuses on service management, mostly software solutions that provide services for IT. These services involve the data center, security, facilities, and asset management. Content is divided into six streams, and each stream is divided into tracks. That structure gave attendees the opportunity to follow a succession of sessions on topical areas. It also made it easy to gauge the coverage of storage in this service management conference.


I was impressed by the quantity of storage-related presentations and the level of information provided. One entire track in the Service Management for the Data Center stream was dedicated to storage management. Another stream covered Security and Compliance Management and touched on storage. Product presentations were given at the level of what a systems administrator and an IT manager would need to know.


In addition to the speakers on storage topics, there were exhibits and labs for discussions and demonstrations. I had interesting and lively discussions with technical people at these events.


The importance of storage is clear among all vendors today – so much so that the discussions cross many areas that may not have included storage in the past. IBM deals with the entire scope of IT, yet its conference that is not perceived to be about storage covers a great deal of storage management software and storage systems. IBM’s GM of storage Doug Balog and vice president of storage marketing Dan Galvan were at the event.


This shows that storage is a prime area of focus for IT vendors. It also shows that storage is not only a necessary piece of IT, it’s a critical one.



March 15, 2011  8:17 PM

Better MLCs should help enterprise SSD adoption

Dave Raffo

When it became evident that solid state drives (SSDs) were still moving at a trickle into the enterprise about a year ago, industry experts said it would take two things to push their adoption. One was automatic tiering software and the other was enterprise-grade multi-level cell (MLC) flash that would bring down the price.

Now most storage array vendors have automated tiering software. And with Seagate’s launch of its Pulsar SSDs today, several of the leading drive makers have MLC devices to go with the more expensive single-level cell (SLC) drives on the market. So will SSDs now ramp quickly in the enterprise?

Probably not, although SSD use will certainly increase at a greater rate than it has been.

Gartner analyst Joseph Unsworth said the enterprise SSD market went from around 320,000 units and $485 million in sales in 2009 to over 1 million units and over $1 billion in sales last year. He forecasts 9.4 million units and $4.2 billion in sales by 2015.

Seagate’s internal projections see SSDs catching up to 3.5-inch hard drives in the enterprise in units shipped in 2013, but not making much of a dent on 2.5-inch hard drives.

“The enterprise doesn’t turn on a dime,” Unsworth said. “There are long qualifications and there’s still confusion from end users. They know SSDs are important, but traditional data center professionals aren’t saying ‘We have to have this.’ They’re deployed in application workloads where they make the most sense, like search, social media, data warehousing and streaming video.”

MLC drives are cheaper than SLC drives, but they wear out a lot faster. Until recently they were limited to consumer devices, but they are now working their way into the enterprise. IBM said it will ship STEC MLC drives on midrange and enterprise storage arrays, and Seagate’s MLC drives will likely find their way into arrays from EMC and other array vendors.

Still, Unsworth added that pricing is decreasing at a slower rate for SSDs in storage systems than in servers.

He said price erosion on the server side is forecast at about 34% over the next five years, with storage expected to drop about 30%. “It will take more MLC, more [product] competition and more scale to bring prices down,” he said.
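As a back-of-the-envelope check (my arithmetic, not from Unsworth), a cumulative multi-year price drop can be converted into a compound annual decline rate:

```python
def annual_decline(total_decline: float, years: int) -> float:
    """Compound annual price decline implied by a cumulative drop.

    A 34% total drop over 5 years means prices end at 66% of the
    starting level, so the annual factor is 0.66 ** (1/5).
    """
    return 1 - (1 - total_decline) ** (1 / years)

# 34% over five years is roughly 8% per year; 30% is roughly 7%.
server_rate = annual_decline(0.34, 5)
storage_rate = annual_decline(0.30, 5)
```

In other words, the forecast gap between server-side and storage-side SSD pricing amounts to only about a point of annual decline.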

March 10, 2011  3:37 PM

Non-compliance = big fines, bad rep

Randy Kerns

The Department of Health and Human Services has levied a hefty fine of $4.3 million against Maryland health care provider Cignet Health for HIPAA violations.

This is a significant event for institutions that deal with information governed by regulations for storing and managing records. Reports that this is the first enforcement of the HIPAA regulations are inaccurate, but it is the first enforcement since the more stringent HITECH Act was passed. Previous enforcements involved regional hospitals and did not receive significant publicity.

So why did the Department of Health and Human Services strike now? HHS is being punitive with the fine and public notification because of what seems like willful disregard for protecting information. The HHS said Cignet refused to provide 41 patients with copies of their medical records and failed to respond to repeated requests from the HHS Office of Civil Rights.

But the fine also sends a clear message to other healthcare organizations to comply or face fines and — more importantly — public embarrassment.

As a quick review, the HIPAA (Health Insurance Portability and Accountability Act of 1996) and the Health Information Technology for Economic and Clinical Health Act (HITECH Act) of 2009 impose requirements on control of access, breach notification, and storage of information. Evaluator Group articles about the need to meet compliance requirements for HIPAA are at

The fine against Cignet reminds me of a conversation I had with the CIO and other senior management of a regional hospital about 18 months ago. We spoke about the archiving requirements for Electronic Medical Records (EMR) and the different retention requirements based on that type of information.

After discussing the retention requirements and the need for using storage systems that met compliance requirements that would pass an audit, the CIO said the hospital was storing all of its data on standard disk systems. When asked about meeting compliance requirements, he said he was not concerned.

He explained that the public depended on this regional hospital. If it was audited due to some complaint or had a loss of data, the public could not do without it and would have to support it. He said his budget did not allow for taking the proper measures for storing data to comply with regulations.

That was an interesting discussion. He was admitting the hospital knowingly violated the regulations regarding the privacy of data but was unwilling to even consider doing something about it. Aside from being appalled, I thought the arrogance would cause an even greater impact when an incident occurred.

Maybe for some institutions a $4.3 million fine is not a major impact. But for most it would be. I would think it would be tough to put on a budget line item.

But the damage to the institution goes beyond the impact on its budget. The bad publicity can harm its reputation and affect its support over the long term. For the healthcare information professional, the peer group will be aware of the failings. Not only will this cause the institution and its staff to be held in low regard, it may also affect future employment opportunities.

The media, customers and the Department of Health and Human Services all have long memories. Any other type of incident will cause the lack of privacy protection to be brought up repeatedly. While a fine is a one-time event, the bad reputation may be permanent.

March 9, 2011  10:46 PM

NetApp bags LSI’s Engenio storage group for $480 million

Dave Raffo

NetApp CEO Tom Georgens acquired his old company today, when he said NetApp will buy LSI Corp.’s Engenio storage business for $480 million.


No storage acquisition should be considered a surprise these days, and there had been rumblings for months that LSI was looking to sell off Engenio. But the acquirer and the price were a little unexpected. NetApp had shied away from big acquisitions since EMC outbid it for Data Domain in the middle of 2009, and it already has a storage platform that stretches from the low end through the enterprise. And $480 million seems like a low price in the wake of Hewlett-Packard’s $2.35 billion acquisition of 3PAR, EMC’s $2.25 billion pickup of Isilon and Dell’s $820 million acquisition of Compellent in the last six months.


The deal is expected to close in about 60 days.


NetApp’s management team certainly knows what it is getting. Georgens was Engenio’s CEO for two years before joining NetApp in 2007, and NetApp chief strategy officer Vic Mahadevan went from LSI to NetApp last year.



“At first I was surprised, but recognizing Tom had come from [Engenio], it started making more sense,” Evaluator Group analyst Randy Kerns said. “It makes sense when you consider NetApp got a reasonable price, new market opportunities and OEMs it didn’t have before.”


In a conference call today to discuss the deal, Georgens said Engenio’s systems will allow NetApp to go after different workloads than NetApp’s FAS line. Those workloads include video capture, video surveillance, genomics sequencing and scientific research that Georgens characterized as bandwidth intensive.


“There are workloads we’re not going to service with [NetApp] Data ONTAP,” he said. “This is targeted at workloads that are separate and distinct from ONTAP.”


The deal also should strengthen NetApp’s relationship with IBM. LSI sells its Engenio storage exclusively through OEMs, and IBM sells more LSI storage than any other vendor. IBM also sells NetApp’s FAS platform. LSI’s other OEM partners include Oracle, Teradata and Dell. Georgens said NetApp will also sell LSI systems through NetApp channels.


One thing he wants to avoid is setting up a competitive situation between FAS and LSI’s platforms. NetApp executives say having one unified platform sets NetApp apart from its major competitors, who sell different platforms in the midrange and enterprise and for NAS and SAN customers.


Georgens said NetApp’s new portfolio will be different than EMC’s situation with its midrange Clariion and enterprise Symmetrix platforms.


“The problem with Symmetrix and Clariion is, the target markets are overlapping,” he said. “They all have replication, snapshots and other things in parallel. This [LSI] is not our SAN product. We have a SAN product called FAS. This is targeted at workloads where we’re not going to sell FAS.”


For more on this deal, see




March 9, 2011  4:15 PM

Fusion-io flashes IPO filing

Dave Raffo

NAND flash start-up Fusion-io is looking to go public, thanks to early success that it owes in a large part to Facebook.

Fusion-io filed a registration form for an initial public offering (IPO) today, declaring it wants to raise up to $150 million. The company has raised $103 million in venture capital funding.

The move is no surprise, considering Fusion-io CEO David Flynn and board member Steve Wozniak have been dropping hints about its rapid revenue growth. Fusion-io’s S-1 filing shed light on its sales of its PCIe-based flash cards that are sold mostly through OEM deals with server vendors Dell, Hewlett-Packard and IBM.

Fusion-io claims it had $36.2 million in revenue for its fiscal year that ended in June of 2010, and another $58.2 million in the last six months of 2010. Those numbers were up from $10.1 million for fiscal 2009 and $11.9 million in the last six months of that year. Fusion-io is losing money, which is common for storage companies when they first go public. The vendor lost $31.7 million for fiscal 2010, $8.2 million over the last six months of last year and a total of $77.1 million in its history.

Fusion-io’s ioMemory hardware creates a high-capacity memory tier and integrates with its VSL virtualization software. It also has directCache automated tiering software and ioSphere platform management software in customer trials.

Who’s buying Fusion-io products? Mostly Facebook and one other unidentified customer. Fusion-io’s filing said Facebook is its largest customer and accounted for a “substantial portion” of its revenue from late last year, and that will continue through the end of this month before trailing off. The other large end-user customer is also expected to make large purchases through the end of this month and then decline. Facebook’s purchases apparently came through one of the server vendors; Fusion-io said 92% of its revenue for the last half of 2010 came from its OEM partners, up from 75% for fiscal 2010.

Investment banker Credit Suisse, one of the IPO underwriters, has also said it is using Fusion-io cards with its trading platform.

There is no shortage of enterprise flash vendors today, especially those selling solid state drives (SSDs). Fusion-io’s filing listed EMC, Hitachi Data Systems, NetApp, Intel, LSI Corp., Micron, Samsung, Seagate, STEC, Toshiba Corp. and Western Digital as competitors. Those are just the public companies. Other private companies coming up behind Fusion-io include Violin Memory, Alacritech, and Texas Memory Systems.

March 7, 2011  4:19 PM

Western Digital to buy Hitachi GST for $4.3 billion

Dave Raffo

Western Digital today said it intends to buy Hitachi Global Storage Technologies (HGST) for $4.3 billion, reducing the number of drive vendors to four and making Western Digital a larger force in the enterprise hard drive and solid state drive (SSD) worlds.

Western Digital executives said they expect the deal to close in September. It will leave Western Digital, Seagate, Samsung and Toshiba as the surviving hard drive vendors at a time when digital data keeps growing at record rates and the way people store that data is evolving.

“There’s an unprecedented demand for digital storage, both locally and in the cloud,” Western Digital CEO John Coyne said during a conference call today to discuss the deal. “And there’s increasing diversity of storage devices, connected to PCs, edge devices and the cloud.”

The deal makes Western Digital’s portfolio more diverse by giving it an enterprise presence. HGST and Seagate recently brought out 3 TB SATA drives, increasing the largest capacity drives by a third. And enterprise SSDs jointly developed by HGST and Intel are now sampling with OEM customers.

HGST CEO Steve Milligan, who will join Western Digital as president, said the Intel relationship will continue under the Western Digital banner.

“It’s a key part of HGST’s strategy, and will be a key part of the combined company,” he said.

Western Digital and HGST had a combined 49% market share of hard drives in 2010, according to a research note issued today by Stifel Nicolaus analyst Aaron Rakers. Rakers wrote that Western Digital had a 31.3% share, with Seagate next at 30%.

Seagate was the enterprise (Fibre Channel and SAS interfaces) leader with 63% market share, followed by HGST with 24.6%. Western Digital began shipping SAS drives last year but it remains an insignificant part of its business.

March 4, 2011  3:51 PM

Big data storage systems rallied in 2010

Dave Raffo

High end storage systems are back in style, at least according to the latest storage revenue numbers from IDC.

IDC’s worldwide quarterly disk storage tracker research shows that storage systems with an average selling price of $250,000 and above rallied in 2010, finishing the year with 30.2% market share. That, according to senior research analyst Amita Potnis, brings the high-end back to its 2008 pre-financial crisis level.

“There were multiple drivers behind the remarkable growth in high-end systems, including demand for storage consolidation and datacenter upgrades supported by new product push from a number of vendors,” Potnis said in the IDC release.

Hitachi Data Systems had the most significant high-end product release of 2010 with its Virtual Storage Platform (VSP). HDS revenues jumped nearly 30% in the fourth quarter over 2009 after the VSP release.

Other than the rise of the high end, the fourth quarter of 2010 looked a lot like the rest of the year for storage sales. Ethernet storage – in the form of NAS and iSCSI – continued to outpace the market by a wide margin, as did vendors NetApp and EMC.

For the fourth quarter, IDC put external storage system revenue at over $6 billion for an increase of 16.2% over 2009. The NAS market grew 41.3% with EMC owning 52.8% of the market and NetApp 23.7%. The iSCSI SAN market grew 42.1% in the quarter, led by Dell with 32.6% and HP with 14.7%.

NetApp overall revenue grew 43.7% year-over-year in the fourth quarter, and it increased market share from 8.4% in 2009 to 10.3%. EMC remained the overall leader with 26% share, followed by IBM at 16.3% and HP at 11.6%. HDS (8.7%) and Dell (7.9%) round out the top six behind NetApp. HDS (29.7% growth), EMC (26.3%) and NetApp outpaced the overall market gain. IBM, HP and Dell lost market share in the quarter.

For the entire year, IDC put external storage revenue at $21.2 billion for an 18.3% increase over 2009. EMC led the way with 25.6% market share, followed by IBM at 13.8%, NetApp and HP with 11.1% each, and Dell with 9.1%. Only EMC and NetApp gained market share for 2010 among the top five. 







