Storage Soup

March 9, 2011  10:46 PM

NetApp bags LSI’s Engenio storage group for $480 million

Dave Raffo

NetApp CEO Tom Georgens acquired his old company today, announcing that NetApp will buy LSI Corp.’s Engenio storage business for $480 million.


No storage acquisition should be considered a surprise these days, and there had been rumblings for months that LSI was looking to sell off Engenio. But the acquirer and the price were a little unexpected. NetApp had shied away from big acquisitions since EMC outbid it for Data Domain in the middle of 2009, and it already has a storage platform that stretches from the low end through the enterprise. And $480 million seems like a low price in the wake of Hewlett-Packard’s $2.35 billion acquisition of 3PAR, EMC’s $2.25 billion pickup of Isilon and Dell’s $820 million acquisition of Compellent in the last six months.


The deal is expected to close in about 60 days.


NetApp’s management team certainly knows what it is getting. Georgens was Engenio’s CEO for two years before joining NetApp in 2007, and NetApp chief strategy officer Vic Mahadevan went from LSI to NetApp last year.



“At first I was surprised, but recognizing Tom had come from [Engenio], it started making more sense,” Evaluator Group analyst Randy Kerns said. “It makes sense when you consider NetApp got a reasonable price, new market opportunities and OEMs it didn’t have before.”


In a conference call today to discuss the deal, Georgens said Engenio’s systems will allow NetApp to go after different workloads than NetApp’s FAS line. Those workloads include video capture, video surveillance, genomics sequencing and scientific research that Georgens characterized as bandwidth-intensive.


“There are workloads we’re not going to service with [NetApp] Data ONTAP,” he said. “This is targeted at workloads that are separate and distinct from ONTAP.”


The deal also should strengthen NetApp’s relationship with IBM. LSI sells its Engenio storage exclusively through OEMs, and IBM sells more LSI storage than any other vendor. IBM also sells NetApp’s FAS platform. LSI’s other OEM partners include Oracle, Teradata and Dell. Georgens said NetApp will also sell LSI systems through NetApp channels.


One thing he wants to avoid is setting up a competitive situation between FAS and LSI’s platforms. NetApp executives say having one unified platform sets the company apart from its major competitors, which sell different platforms for midrange and enterprise and for NAS and SAN customers.


Georgens said NetApp’s new portfolio will be different from EMC’s situation with its midrange Clariion and enterprise Symmetrix platforms.


“The problem with Symmetrix and Clariion is, the target markets are overlapping,” he said. “They all have replication, snapshots and other things in parallel. This [LSI] is not our SAN product. We have a SAN product called FAS. This is targeted at workloads where we’re not going to sell FAS.”






March 9, 2011  4:15 PM

Fusion-io flashes IPO filing

Dave Raffo

NAND flash start-up Fusion-io is looking to go public, thanks to early success that it owes in large part to Facebook.

Fusion-io filed a registration form for an initial public offering (IPO) today, declaring it wants to raise up to $150 million. The company has raised $103 million in venture capital funding.

The move is no surprise, considering Fusion-io CEO David Flynn and board member Steve Wozniak have been dropping hints about its rapid revenue growth. Fusion-io’s S-1 filing shed light on its sales of its PCIe-based flash cards that are sold mostly through OEM deals with server vendors Dell, Hewlett-Packard and IBM.

Fusion-io claims it had $36.2 million in revenue for its fiscal year that ended in June of 2010, and another $58.2 million in the last six months of 2010. Those numbers were up from $10.1 million for fiscal 2009 and $11.9 million in the last six months of that year. Fusion-io is losing money, which is common for storage companies when they first go public. The vendor lost $31.7 million for fiscal 2010, $8.2 million over the last six months of last year and a total of $77.1 million in its history.

Fusion-io’s ioMemory hardware creates a high-capacity memory tier and integrates with its VSL virtualization software. It also has directCache automated tiering software and ioSphere platform management software in customer trials.

Who’s buying Fusion-io products? Mostly Facebook and one other unidentified customer. Fusion-io’s filing said Facebook is its largest customer and accounted for a “substantial portion” of its revenue from late last year, and that will continue through the end of this month before trailing off. The other large end-user customer is also expected to make large purchases through the end of this month and then decline. Facebook’s purchases apparently came through one of the server vendors; Fusion-io said 92% of its revenue for the last half of 2010 came from its OEM partners – up from 75% for fiscal 2010.

Investment banker Credit Suisse, one of the IPO underwriters, has also said it is using Fusion-io cards with its trading platform.

There is no shortage of enterprise flash vendors today, especially those selling solid-state drives (SSDs). Fusion-io’s filing listed EMC, Hitachi Data Systems, NetApp, Intel, LSI Corp., Micron, Samsung, Seagate, STEC, Toshiba Corp. and Western Digital as competitors. Those are just the public companies. Private companies coming up behind Fusion-io include Violin Memory, Alacritech, and Texas Memory Systems.

March 7, 2011  4:19 PM

Western Digital to buy Hitachi GST for $4.3 billion

Dave Raffo

Western Digital today said it intends to buy Hitachi Global Storage Technologies (HGST) for $4.3 billion, reducing the number of drive vendors to four and making Western Digital a larger force in the enterprise hard drive and solid state drive (SSD) worlds.

Western Digital executives said they expect the deal to close in September. It will leave Western Digital, Seagate, Samsung and Toshiba as the surviving hard drive vendors at a time when digital data keeps growing at record rates and the way people store that data is evolving.

“There’s an unprecedented demand for digital storage, both locally and in the cloud,” Western Digital CEO John Coyne said during a conference call today to discuss the deal. “And there’s increasing diversity of storage devices, connected to PCs, edge devices and the cloud.”

The deal makes Western Digital’s portfolio more diverse by giving it an enterprise presence. HGST and Seagate recently brought out 3 TB SATA drives, increasing the largest capacity drives by a third. And enterprise SSDs jointly developed by HGST and Intel are now sampling with OEM customers.

HGST CEO Steve Milligan, who will join Western Digital as president, said the Intel relationship will continue under the Western Digital banner.

“It’s a key part of HGST’s strategy, and will be a key part of the combined company,” he said.

Western Digital and HGST had a combined 49% market share of hard drives in 2010, according to a research note issued today by Stifel Nicolaus analyst Aaron Rakers. Rakers wrote that Western Digital had a 31.3% share, with Seagate next at 30%.

Seagate was the enterprise (Fibre Channel and SAS interfaces) leader with 63% market share, followed by HGST with 24.6%. Western Digital began shipping SAS drives last year, but SAS remains an insignificant part of its business.

March 4, 2011  3:51 PM

Big data storage systems rallied in 2010

Dave Raffo

High-end storage systems are back in style, at least according to the latest storage revenue numbers from IDC.

IDC’s worldwide quarterly disk storage tracker research shows that storage systems with an average selling price of $250,000 and above rallied in 2010, finishing the year with 30.2% market share. That, according to senior research analyst Amita Potnis, brings the high-end back to its 2008 pre-financial crisis level.

“There were multiple drivers beyond the remarkable growth in high-end systems, including demand for storage consolidation and datacenter upgrades supported by new product push from a number of vendors,” Potnis said in the IDC release.

Hitachi Data Systems had the most significant high-end product release of 2010 with its Virtual Storage Platform (VSP). HDS revenues jumped nearly 30% in the fourth quarter over 2009 after the VSP release.

Other than the rise of the high end, the fourth quarter of 2010 looked a lot like the rest of the year for storage sales. Ethernet storage – in the form of NAS and iSCSI – continued to outpace the market by a wide margin, as did vendors NetApp and EMC.

For the fourth quarter, IDC put external storage system revenue at over $6 billion for an increase of 16.2% over 2009. The NAS market grew 41.3% with EMC owning 52.8% of the market and NetApp 23.7%. The iSCSI SAN market grew 42.1% in the quarter, led by Dell with 32.6% and HP with 14.7%.

NetApp overall revenue grew 43.7% year-over-year in the fourth quarter, and it increased market share from 8.4% in 2009 to 10.3%. EMC remained the overall leader with 26% share, followed by IBM at 16.3% and HP at 11.6%. HDS (8.7%) and Dell (7.9%) round out the top six behind NetApp. HDS (29.7% growth), EMC (26.3%) and NetApp outpaced the overall market gain. IBM, HP and Dell lost market share in the quarter.

For the entire year, IDC put external storage revenue at $21.2 billion for an 18.3% increase over 2009. EMC led the way with 25.6% market share, followed by IBM at 13.8%, NetApp and HP with 11.1% each, and Dell with 9.1%. Only EMC and NetApp gained market share for 2010 among the top five. 








March 1, 2011  6:34 PM

What’s next for unified storage?

Randy Kerns

Unified storage has gone from a specialty item to something offered by nearly every storage vendor in recent years. In the beginning, vendors such as NetApp added block capability to their file-system storage, and NetApp’s biggest rivals have since followed down that unified path.


The evolution continues, however, and multiprotocol systems will likely include more technological advances over the coming years.


As a refresher, I define Unified Storage as:


Unified Storage is a storage system that provides both file and block access simultaneously. The block access is accomplished through use of an interface such as Fibre Channel, SAS, or iSCSI over Ethernet. The file-based access is to a file system on the storage system using either CIFS or NFS over Ethernet. 



An implied piece of unified storage is that it requires unified management – a single point of management for both block and file data. Without that, the critical goal of consolidation and simplification is compromised.



Some vendors have provided block storage through both Fibre Channel and iSCSI, while others stick to iSCSI only because it is simpler to deliver. The following diagram gives a very general view that compares the implementations for block and file storage:   








Unified storage systems are commonly offered by storage vendors, but that doesn’t mean every new storage system you buy must be unified. Certain high-end IT environments with specific usage requirements would use non-unified systems. If you only need high-performance block storage, for instance, a unified system isn’t necessary.


However, there are excellent uses of unified storage:


  • In a virtual server environment, a unified storage system presents an opportunity to meet demands for quickly provisioning virtual machines and meeting operational requirements. A virtual machine could be provisioned with an NFS-based datastore for its file I/O, while the block storage capability of the unified system would allow Raw Device Mapping (RDM) to attach a physical disk to a virtual machine to meet application requirements.
  • If one type of usage predominates – file storage for unstructured data, for example – but there is still a need for some block storage (an Exchange database, for instance), a unified storage system allows for consolidation to a single platform.
  • Unified storage provides great flexibility for an organization that needs to repurpose storage because its needs are changing.
  • Unified storage also provides a single resource that can be provisioned as needed for the usage type required – block or file.
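The consolidation and repurposing ideas in the list above can be sketched in a few lines of code. This is a toy model, not any vendor’s API – the class and method names are invented for illustration:

```python
# Toy model of a unified storage pool: one capacity pool, one management
# point, with block (LUN) and file (NFS/CIFS share) access carved from it
# on demand. All names here are illustrative, not a real product's API.
class UnifiedPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.volumes = {}  # name -> (access_type, size_gb)

    def free_gb(self):
        return self.capacity_gb - sum(size for _, size in self.volumes.values())

    def provision(self, name, access_type, size_gb):
        # A unified system offers both access types from the same resource.
        if access_type not in ("block", "file"):
            raise ValueError("access_type must be 'block' or 'file'")
        if size_gb > self.free_gb():
            raise ValueError("not enough free capacity")
        self.volumes[name] = (access_type, size_gb)

pool = UnifiedPool(10_000)
pool.provision("vm_datastore", "file", 4_000)   # NFS datastore for VMs
pool.provision("exchange_db", "block", 2_000)   # LUN for an Exchange database
print(pool.free_gb())  # 4000 GB left to repurpose as either type
```

Because both volume types draw from the same pool under one management point, capacity freed from a LUN is immediately available for file use – the repurposing flexibility described above.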



What’s Next?


But vendors haven’t just been combining block and file protocols in the same package. Recent features added to unified systems include automated tiering, solid-state drives (SSDs) as a higher-performance tier, and support for cascading read/write-capable snapshots that add value for use cases such as virtual desktop infrastructure (VDI).


What should be expected next for unified storage? It’s likely that vendors will package other capabilities together and call that the new “unified storage.” That would dilute the meaning of “unified” and require a qualifying phrase after it.


More likely, there will be an additional, high-value storage capability that will have its own identity. It could be something like a storage system with the capability to intelligently (and automatically) do archiving as well – call it “archiving-enabled” storage. This is more evolutionary than revolutionary, but it will be uniquely defined.






February 23, 2011  7:08 PM

Scale-out vs. scale-up: the basics

Randy Kerns

There’s been a lot of talk about scale-out and scale-up storage lately, and I get the sense that many people don’t realize these terms are not synonymous. That causes confusion among IT professionals when they are planning product purchases and trying to weigh the value these products bring against the cost and complexity they add to their environments.



To make informed buying decisions, IT pros need to understand the difference between scale-up and scale-out. The following are the basics, which can be built upon for more detailed considerations.


Scale-up, as the following simple diagram shows, is taking an existing storage system and adding capacity to meet increased capacity demands.    



Scale-up can solve a capacity problem without adding infrastructure elements such as network connectivity, though it does require additional space, power, and cooling. Scaling up does not add controller capability to handle additional host activity – which also means it doesn’t add costs for extra control functions.

So costs do not scale at the same rate as the initial system purchase: only additional devices have been added.
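A back-of-the-envelope calculation makes the point. The prices and capacities below are hypothetical, chosen only to show how the already-paid controller cost gets amortized as devices are added:

```python
# Hypothetical scale-up pricing: the controller is bought once; each
# expansion adds only a shelf of devices (all figures invented).
CONTROLLER_COST = 50_000   # paid once, up front
SHELF_COST = 8_000         # per expansion shelf
SHELF_GB = 24_000          # e.g., 12 x 2 TB drives per shelf

def scale_up_cost_per_gb(shelves):
    """Total dollars per GB after `shelves` expansion shelves are added."""
    total_cost = CONTROLLER_COST + shelves * SHELF_COST
    return total_cost / (shelves * SHELF_GB)

# Each added shelf spreads the fixed controller cost over more capacity,
# so the effective $/GB keeps falling (roughly 2.42, 1.38, 0.85 here).
for n in (1, 2, 4):
    print(n, round(scale_up_cost_per_gb(n), 2))
```

The same arithmetic is why dollars per GB tends to favor scale-up, as noted in the comparison later in this post.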


Scale-out storage usually requires additional storage systems (called nodes) to add capacity and performance. Or, in the case of monolithic storage systems, it scales by adding more functional elements (usually controller cards). One difference between scaling out and just putting more storage systems on the floor is that scale-out storage continues to be represented as a single system.

There are several methods for accomplishing scale out, including clustered storage systems and grid storage. The definitions of these two types can also be confusing, and other factors add to the complexity (that’s a subject for another article), but the fundamental premise is that a scale-out solution is accessed as a single system.

This diagram shows an example of a scale-out storage solution. Here the scaling is only a single additional node, but a scale-out solution could have many nodes interconnected across geographical distances.



The scale-out storage in this example added both the control function and capacity but maintained a single system representation for access. This scaling may have required additional infrastructure such as storage switches to connect the storage to the controller and a connection between the nodes in the cluster or grid. These connections let the solution work as a single system.


Scaling out adds power, cooling, and space requirements, and the cost includes the additional capacity, control elements and infrastructure. With the scale-out solution in this example, capacity increased and performance scaled with the additional control capabilities.


Not all scaling solutions are so simple. Many storage systems can scale out and up. The following diagram illustrates this:




When looking at scale-up or scale-out storage, consider these factors:


  • Costs. Scale-up adds capacity but not controller or infrastructure costs. If the measure is dollars per GB, scale-up will be less expensive.
  • Capacity. Either approach can meet capacity requirements, but scale-up may hit a limit based on how much capacity or how many devices an individual storage controller can attach.
  • Performance. Scale-out has the potential to aggregate the IOPS and bandwidth of multiple storage controllers. Representing the nodes as a single system may introduce latency, but this is implementation-specific.
  • Management. Scale-up has a single storage system management characterization. Scale-out systems typically have an aggregated management capability, but there may be variations between vendor offerings.
  • Complexity. Scale-up storage is expected to be simple, while scale-out systems may be more complex because they have more elements to manage.
  • Availability. Additional nodes should provide greater availability in case one element fails or goes out of service, though this depends on the particular implementation.
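The capacity and performance trade-offs above can be sketched with hypothetical per-unit numbers (not any vendor’s figures): scale-out adds a controller with every node, so IOPS aggregate, while scale-up grows capacity under a single fixed controller.

```python
# Hypothetical per-unit figures for contrasting the two scaling models.
NODE_IOPS = 40_000        # each scale-out node brings its own controller
NODE_GB = 20_000          # capacity per scale-out node
CONTROLLER_IOPS = 60_000  # scale-up performance is fixed at the controller
SHELF_GB = 20_000         # capacity per scale-up expansion shelf

def scale_out(nodes):
    """Capacity and performance both grow with node count."""
    return nodes * NODE_GB, nodes * NODE_IOPS

def scale_up(shelves):
    """Capacity grows, but IOPS stay pinned at the single controller's limit."""
    return shelves * SHELF_GB, CONTROLLER_IOPS

# At four units each, both reach 80 TB, but scale-out aggregates 160,000
# IOPS while scale-up is still bounded by its one controller at 60,000.
print(scale_out(4))  # (80000, 160000)
print(scale_up(4))   # (80000, 60000)
```

Whether the aggregate performance is actually delivered depends on the interconnect and the single-system-image layer, which – as noted in the Performance bullet – can introduce latency.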

There is a great deal to consider when making a choice between scale-out and scale-up. The decision will ultimately depend on how one vendor implements its solution, and on its capabilities and features compared with another vendor’s. But it is always best to start with a basic understanding and then look at the differences.


February 23, 2011  2:34 PM

Dell closes Compellent, opens new storage era

Dave Raffo

When Dell closed its $800 million acquisition of Compellent Tuesday, it also closed the books on a year of deals that will transform Dell’s storage portfolio.

Dell first said it wanted to buy Compellent last December. Before that, it made two smaller 2010 acquisitions that could enhance its overall storage product line. It bought the IP of scale-out NAS vendor Exanet and acquired data reduction startup Ocarina Networks. Dell also tried to buy 3PAR but was outbid by Hewlett-Packard several months before it grabbed Compellent instead. In another storage move, Dell launched a DX object-based storage platform in 2010.

A Dell spokesman said the vendor will have more details about its overall storage strategy next month, but it appears that Compellent will bring Dell into its post-EMC era. Or as Dell executives like to say, its storage business has gone from being mostly a reseller of storage to becoming a full provider of storage technology.

Dell lists the Compellent Series 40 as its multiprotocol storage system for enterprise applications with EqualLogic as its scalable Ethernet storage and PowerVault as the entry level DAS and iSCSI SAN choice. Of course, EMC Clariion has filled the multiprotocol SAN role for Dell for close to a decade.  Dell will not OEM and may not even resell the EMC VNX that will replace the Clariion, and definitely will not sell the EMC VNXe SMB system that competes with EqualLogic and PowerVault.

A slide attached to Dell executive Rob Williams’ blog post yesterday lists Compellent as Dell’s high-end block storage offering, with EMC’s CX4 and the EqualLogic PS Series as midrange SAN products and PowerVault as the entry-level system. That’s odd positioning, because Compellent has always been characterized as a midrange system – actually, more toward the lower end of the midrange.

Williams did shed some light on Dell’s storage strategy in his blog:

“Together with Dell EqualLogic, PowerVault and Compellent, our block and file storage offerings now span every layer from low- to mid-range offerings for SMBs and Public institutions to higher-end enterprise ready offerings for large corporations and organizations. 

Last year, we also acquired two more storage assets that bring with them important IP. Exanet provides Dell with scale-out file storage capabilities, and moves us for the first time beyond the arena of mid-range block storage and into the playing field of mid-range file storage.  We hope to launch our first Exanet NAS product early this summer.  The second is Ocarina, which gives us content-aware file deduplication capabilities that we plan to add across our entire storage portfolio over time.”

There may be more to come. When they first spoke about buying Compellent in December, Dell executives said they may look to add data management features through an acquisition. All of this makes Dell worth watching through 2011.


February 22, 2011  3:11 PM

Alternatives available for HPC shops lacking Lustre support

Dave Raffo

With Oracle showing a lot less love for open source storage software than Sun did, high-performance computing (HPC) shops are nervous about the future of Lustre, and HPC hardware vendors are taking steps to pick up the slack.


Last year the Illumos project sprang up as the governing body for OpenSolaris to help spur development of the source code used for ZFS. Now the Lustre community is rallying around the storage file system used in most large HPC implementations as Oracle shows no signs of supporting it.

Since late last year, Xyratex acquired ClusterStor for its Lustre expertise, start-up Whamcloud has aggressively hired Lustre developers and partnered with hardware vendors on support deals, and Cray, DataDirect Networks (DDN), and the Lawrence Livermore and Oak Ridge national laboratories launched OpenSFS to develop future releases of Lustre.

DDN last week launched a promotional offer to provide Lustre customers a third year of support for free if they purchase a two-year DDN file system support contract. The support comes through DDN’s alliance with Whamcloud.

“There is a general uneasiness in the industry, and people are looking for somebody to step up,” DDN marketing VP Jeff Denworth said. “There’s been a gross defection of talent from Oracle around the Lustre file system.”

DataDirect Networks customer Stephen Simms, manager of the Data Capacitor project at Indiana University, said it will take initiatives like those undertaken by DDN, Whamcloud, and OpenSFS to save Lustre.


“Without people to develop Lustre, then Lustre is going to die,” he said. “National laboratories have a strong investment in Lustre, and they will do their best to keep it alive, but without the pool of talent that exists in for-profit companies, where are you going to be? You’re going to be an organization that desperately needs a file system with only a handful of developers.”


Simms said Lustre is a crucial piece of IU’s Data Capacitor, a high speed, high bandwidth storage system that serves IU campuses and other scientific research sites on the TeraGrid network. The IU team modified Lustre file system code to provide the automatic mapping of user IDs (UIDs) across TeraGrid sites.


“We wouldn’t be able to develop this system for mapping UID space if the file system were closed,” he said. “The fact that it’s open has been a big deal for us. It’s important to have the expertise someplace. It’s a concern that there are so few Lustre developers, and they’ve left Oracle – and where have they gone? Some have gone to Xyratex and some to Whamcloud, and who knows where others have gone? It’s important to keep momentum.”



February 17, 2011  12:19 PM

NetApp can’t keep up with FAS3200 demand

Dave Raffo


If you want to buy a NetApp FAS3200 storage system, you may have to wait a while – especially if you want it with Flash Cache.

NetApp executives Wednesday said the vendor has received about four times as many orders as expected for the systems that launched last November, and sold out of them because of supply constraints. The failure to fill orders cost NetApp about $10 million to $15 million in sales last quarter according to CEO Tom Georgens, causing the vendor to miss Wall Street analysts’ expectations with $1.268 billion of revenue.

Georgens said orders for Flash Cache I/O modules in particular were far greater than expected, and a shortage of those components limited sales.

Georgens said “we have not resolved these problems yet,” and forecasted lower sales revenue for this quarter than expected as a result. Georgens said the time customers have to wait for FAS3200 systems varies, and would not confirm if the lead time was as long as six weeks as one analyst on the NetApp conference call suggested.

 “We’re working like mad to close this gap,” Georgens said. “It’s disappointing to be having this conversation. I can’t tell you with full confidence that we’re going to clear this up. I can tell you with full confidence that we’re working on this night and day. … Overall, in terms of constraints, it’s primarily in the I/O expansion module, certain semiconductor components along the way.”

Georgens also said NetApp is unlikely to follow competitors into large acquisitions. He said NetApp would pursue acquisitions that bring it into new markets or add features that would strengthen its current products. Since EMC outbid NetApp for Data Domain in mid-2009, NetApp has made two smaller acquisitions, picking up object storage vendor Bycast and storage management software vendor Akorri.

There has been speculation that NetApp would acquire a data analytics vendor such as ParAccel or Aster Data to counter EMC’s 2010 “big data” acquisition of Greenplum. However, Georgens said it is unlikely that NetApp will go that route.

 “I think there are more attractive adjacent markets for us to pursue,” Georgens said. “Partnerships are available in data analytics. That’s how we’ll go after that, as opposed to being in the analytics business ourselves.”

February 15, 2011  4:38 PM

Anatomy of poor IT buying decisions

Randy Kerns


I had an interesting conversation about IT decision-making while attending an education foundation meeting. The meeting was a chance to speak with CIOs and other senior IT people in a non-business setting, where discussions can be far-ranging and unguarded. In this non-sales, non-consultative environment, the conversation came around to the challenges each executive faced and the risks they carried in their positions.


The challenges included personnel issues as well as the products and solutions they had deployed in their IT operations. While it was interesting to hear about what systems (my focus was on storage but they talked about complete solutions that encompassed hardware and software) they had recently purchased, the more interesting discussions were about why they had made these purchases.


One theme came through frequently – many felt they had been stampeded into making a decision and purchase. They felt stampeded in the sense that they weren’t given time to do an adequate (in their opinions) evaluation to make an informed decision.


The reasons for not being given enough time included pressure from above – word that the company required immediate action and that business would be lost if the solution was not delivered immediately. Organizationally, a group (an internal customer, for example) would look elsewhere if it failed to get an immediate solution.


“Look elsewhere” took the form of outsourcing, bringing in their own systems to meet the need, or threats of withdrawing budgetary support.


One person in the group said he was being forced to make “haphazard decisions” rather than informed ones – decisions made with no thought of how they would affect the overall strategy or IT operations. The word “haphazard” struck me as somewhat alarming.



From my standpoint, this looked like a great opportunity to do a case study of what the lack of a strategy or formal process can mean to IT.  Asking what the results were for these stampeded or haphazard decisions did not yield a clear answer, however.  It became obvious that the real effect may not become apparent for a few years. The initial responses were more emotional (and negative) than actual quantified results. 


What I took away from this discussion was that decisions were still being made that were not informed and not really strategic. Whether they were bad decisions or not would take some time to find out. The people making the haphazard decisions were aware they were not proceeding correctly, and were embarrassed by it. But they felt they had no real choice. They had to make a decision and needed whatever information they could access quickly given the time they had. 


Fortunately, the majority of decisions about purchasing systems and solutions (especially storage systems) do not take this route. But the reality is some decisions are being made haphazardly, and that can present problems.


(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).










