Storage Soup


March 7, 2011  4:19 PM

Western Digital to buy Hitachi GST for $4.3 billion

Dave Raffo

Western Digital today said it intends to buy Hitachi Global Storage Technologies (HGST) for $4.3 billion, reducing the number of drive vendors to four and making Western Digital a larger force in the enterprise hard drive and solid state drive (SSD) worlds.

Western Digital executives said they expect the deal to close in September. It will leave Western Digital, Seagate, Samsung and Toshiba as the surviving hard drive vendors at a time when digital data keeps growing at record rates and the way people store that data is evolving.

“There’s an unprecedented demand for digital storage, both locally and in the cloud,” Western Digital CEO John Coyne said during a conference call today to discuss the deal. “And there’s increasing diversity of storage devices, connected to PCs, edge devices and the cloud.”

The deal makes Western Digital’s portfolio more diverse by giving it an enterprise presence. HGST and Seagate recently brought out 3 TB SATA drives, increasing the largest capacity drives by a third. And enterprise SSDs jointly developed by HGST and Intel are now sampling with OEM customers.

HGST CEO Steve Milligan, who will join Western Digital as president, said the Intel relationship will continue under the Western Digital banner.

“It’s a key part of HGST’s strategy, and will be a key part of the combined company,” he said.

Western Digital and HGST had a combined 49% market share of hard drives in 2010, according to a research note issued today by Stifel Nicolaus analyst Aaron Rakers. Rakers wrote that Western Digital had a 31.3% share, with Seagate next at 30%.

Seagate was the enterprise (Fibre Channel and SAS interfaces) leader with 63% market share, followed by HGST with 24.6%. Western Digital began shipping SAS drives last year, but SAS remains an insignificant part of its business.

March 4, 2011  3:51 PM

Big data storage systems rallied in 2010

Dave Raffo

High end storage systems are back in style, at least according to the latest storage revenue numbers from IDC.

IDC’s worldwide quarterly disk storage tracker research shows that storage systems with an average selling price of $250,000 and above rallied in 2010, finishing the year with 30.2% market share. That, according to senior research analyst Amita Potnis, brings the high-end back to its 2008 pre-financial crisis level.

“There were multiple drivers behind the remarkable growth in high-end systems, including demand for storage consolidation and datacenter upgrades supported by new product push from a number of vendors,” Potnis said in the IDC release.

Hitachi Data Systems had the most significant high-end product release of 2010 with its Virtual Storage Platform (VSP). HDS revenues jumped nearly 30% in the fourth quarter over 2009 after the VSP release.

Other than the rise of the high end, the fourth quarter of 2010 looked a lot like the rest of the year for storage sales. Ethernet storage – in the form of NAS and iSCSI – continued to outpace the market by a wide margin, as did vendors NetApp and EMC.

For the fourth quarter, IDC put external storage system revenue at over $6 billion for an increase of 16.2% over 2009. The NAS market grew 41.3% with EMC owning 52.8% of the market and NetApp 23.7%. The iSCSI SAN market grew 42.1% in the quarter, led by Dell with 32.6% and HP with 14.7%.

NetApp overall revenue grew 43.7% year-over-year in the fourth quarter, and it increased market share from 8.4% in 2009 to 10.3%. EMC remained the overall leader with 26% share, followed by IBM at 16.3% and HP at 11.6%. HDS (8.7%) and Dell (7.9%) round out the top six behind NetApp. HDS (29.7% growth), EMC (26.3%) and NetApp outpaced the overall market gain. IBM, HP and Dell lost market share in the quarter.

For the entire year, IDC put external storage revenue at $21.2 billion for an 18.3% increase over 2009. EMC led the way with 25.6% market share, followed by IBM at 13.8%, NetApp and HP with 11.1% each, and Dell with 9.1%. Only EMC and NetApp gained market share for 2010 among the top five.

March 1, 2011  6:34 PM

What’s next for unified storage?

Randy Kerns

Unified storage has gone from a specialty item to something offered by nearly every storage vendor in recent years. In the beginning, vendors such as NetApp added block capability to their file system storage, and NetApp’s biggest rivals have since followed down that unified path.

 

The evolution continues, however, and multiprotocol systems will likely include more technological advances over the coming years.

 

As a refresher, I define Unified Storage as:

 

Unified Storage is a storage system that provides both file and block access simultaneously. The block access is accomplished through use of an interface such as Fibre Channel, SAS, or iSCSI over Ethernet. The file-based access is to a file system on the storage system using either CIFS or NFS over Ethernet. 

 

 

An implied piece of unified storage is that it requires unified management: a single point of storage system management for both block and file data. Without that, the critical goal of consolidation and simplification is compromised.

  

 

Some vendors have provided block storage through both Fibre Channel and iSCSI, while others stick to iSCSI only because it is simpler to deliver. The following diagram gives a very general view that compares the implementations for block and file storage:

[Diagram: a general comparison of block and file storage implementations]

Unified storage systems are commonly offered by storage vendors, but that doesn’t mean every new storage system you buy must be unified. Certain high-end IT environments with specific usage requirements would use non-unified systems. If you only need high-performance block storage, for instance, a unified system isn’t necessary.

  

However, there are excellent uses of unified storage:

 

  • In a virtual server environment, a unified storage system presents an opportunity to meet demands for quickly provisioning virtual machines and meeting operational requirements. A virtual machine could be provisioned with an NFS-based datastore for its file I/O, while the block storage capability of the unified system would allow Raw Device Mapping (RDM) to attach a physical disk to a virtual machine to meet application requirements.
  • If there is a predominance of one type of usage, such as file storage for unstructured data, but there is still a need for some block storage (an Exchange database, for example), a unified storage system allows for consolidation onto a single platform.
  • Unified storage provides great flexibility for an organization that needs to repurpose storage because its needs are changing.
  • Unified storage also provides a single resource that can be provisioned as needed for the usage type required – block or file.
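
To make that last point concrete, here is a minimal, vendor-neutral sketch in Python of a single unified pool being provisioned as either block LUNs or file shares. The class, names, and sizes are invented for illustration; this is a sketch of the concept, not any product's actual API.

# Toy model of a unified storage pool that can be carved into either
# block LUNs or file shares on demand. Names and sizes are hypothetical.

class UnifiedPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocations = []          # list of (name, kind, size_gb)

    def _allocate(self, name, kind, size_gb):
        if size_gb > self.free_gb():
            raise ValueError(f"not enough free capacity for {name}")
        self.allocations.append((name, kind, size_gb))

    def create_lun(self, name, size_gb, protocol="iSCSI"):
        """Block provisioning (e.g. FC, SAS, or iSCSI)."""
        self._allocate(name, f"block/{protocol}", size_gb)

    def create_share(self, name, size_gb, protocol="NFS"):
        """File provisioning (e.g. NFS or CIFS)."""
        self._allocate(name, f"file/{protocol}", size_gb)

    def free_gb(self):
        return self.capacity_gb - sum(s for _, _, s in self.allocations)


pool = UnifiedPool(capacity_gb=10_000)
pool.create_share("vm_datastore", 4_000, protocol="NFS")   # VM/VDI file I/O
pool.create_lun("exchange_db", 2_000, protocol="FC")       # block for Exchange
print(pool.allocations, "free:", pool.free_gb(), "GB")

Either allocation draws from the same free capacity, which is what makes repurposing between block and file straightforward.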

 

 

What’s Next?

 

Vendors haven’t just been combining block and file protocols in the same package, however. Recent features added to unified systems include automated tiering, solid state devices (SSDs) as a higher-performance tier, and support for cascading read/write-capable snapshots to add value for use cases such as virtual desktop infrastructures (VDIs).

 

What should be expected next for unified storage? It’s likely that vendors will package other capabilities together and call that the new “unified storage.” That would dilute the meaning of “unified” and require a qualifying phrase after it.

 

More likely, there will be an additional, high-value capability for storage that will have its own identity. It could be something like a storage system with the capability to intelligently (and automatically) do archiving as well. Call it “archiving-enabled” storage. This is more evolutionary than revolutionary, but it will be uniquely defined.

February 23, 2011  7:08 PM

Scale-out vs. scale-up: the basics

Randy Kerns

There’s been a lot of talk about scale-out and scale-up storage lately, and I get the sense that many people don’t realize these terms are not synonymous. That causes confusion among IT professionals when they are planning product purchases and trying to determine what value these types of products bring versus the cost and complexity they add to their environments.

 

 

To make informed buying decisions, IT pros need to understand the difference between scale-up and scale-out. The following are the basics, which can be built upon for more detailed considerations.

 

Scale-up, as the following simple diagram shows, is taking an existing storage system and adding capacity to meet increased capacity demands.

[Diagram: scale-up adds storage devices/capacity to an existing storage system]

Scale-up can solve a capacity problem without adding infrastructure elements such as network connectivity. However, it does require additional space, power, and cooling. Scaling up does not add controller capabilities to handle additional host activities. That means it doesn’t add costs for extra control functions either.

So costs do not scale at the same rate as the initial storage system purchase – only additional devices have been added.

 

Scale-out storage usually requires additional storage units (called nodes) to add capacity and performance. Or, in the case of monolithic storage systems, it scales by adding more functional elements (usually controller cards). One difference between scaling out and just putting more storage systems on the floor is that scale-out storage continues to be represented as a single system.

There are several methods for accomplishing scale out, including clustered storage systems and grid storage. The definitions of these two types can also be confusing, and other factors add to the complexity (that’s a subject for another article), but the fundamental premise is that a scale-out solution is accessed as a single system.

This diagram shows an example of a scale-out storage solution. In this diagram, the scaling adds only a single node, but a scale-out solution could have many nodes interconnected across geographical distances.

[Diagram: scale-out adds a node, with the result still accessed as a single system]

The scale-out storage in this example added both the control function and capacity but maintained a single system representation for access. This scaling may have required additional infrastructure such as storage switches to connect the storage to the controller and a connection between the nodes in the cluster or grid. These connections let the solution work as a single system.

 

Scaling out adds power, cooling, and space requirements, and the cost includes the additional capacity, control elements and infrastructure. With the scale-out solution in this example, capacity increased and performance scaled with the additional control capabilities.

 

Not all scaling solutions are so simple. Many storage systems can scale out and up. The following diagram illustrates this:

[Diagram: a storage system that scales both up and out]

Considerations

 

When looking at scale-up or scale-out storage, consider these factors:

 

  • Costs. Scale-up adds capacity but not the controller or infrastructure costs. If the measure is dollars per GB, scale-up will be less expensive.
  • Capacity. Either solution can meet capacity requirements but there may be a limit on the scale-up capacity based on how much capacity or how many devices an individual storage controller can attach.
  • Performance. Scale out has the potential capability to aggregate IOPS and bandwidth of multiple storage controllers. Representing the nodes as a single system may introduce latency, but this is implementation specific. 
  • Management. Scale-up keeps a single storage system to manage. Scale-out systems typically have an aggregated management capability, but there may be variations between vendor offerings.
  • Complexity. Scale-up storage is expected to be simple, while scale-out systems may be more complex because they have more elements to manage.
  • Availability. Additional nodes should provide greater availability in case one element fails or goes out of service. This depends on the particular implementation.

There is a great deal to consider when making a choice between scale-out and scale-up. The decision will ultimately depend on how each vendor implements its solution and how its capabilities and features compare with others. But it is always best to start with a basic understanding and then look at the differences.
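
As a rough illustration of the cost and performance trade-offs listed above, here is a toy Python model of the two approaches. All prices, capacities, and IOPS figures are invented; real systems vary widely by vendor and configuration.

# Toy comparison of scale-up vs. scale-out growth. All figures are made up.

CONTROLLER_COST  = 50_000     # controller/node electronics + software
SHELF_COST       = 10_000     # one expansion shelf of drives
SHELF_CAPACITY   = 50_000     # GB per shelf
NODE_IOPS        = 40_000     # IOPS one controller/node can drive
SHELVES_PER_CTRL = 8          # scale-up attach limit for one controller

def scale_up(shelves):
    """One controller; add shelves until the attach limit is reached."""
    shelves = min(shelves, SHELVES_PER_CTRL)
    capacity = shelves * SHELF_CAPACITY
    cost = CONTROLLER_COST + shelves * SHELF_COST
    return {"capacity_gb": capacity, "iops": NODE_IOPS,
            "cost": cost, "dollars_per_gb": cost / capacity}

def scale_out(nodes, shelves_per_node=2):
    """Each added node brings its own controller, shelves, and IOPS."""
    capacity = nodes * shelves_per_node * SHELF_CAPACITY
    cost = nodes * (CONTROLLER_COST + shelves_per_node * SHELF_COST)
    return {"capacity_gb": capacity, "iops": nodes * NODE_IOPS,
            "cost": cost, "dollars_per_gb": cost / capacity}

print("scale-up, 8 shelves:", scale_up(8))
print("scale-out, 4 nodes: ", scale_out(4))
# Typical outcome in this toy model: scale-up wins on dollars per GB
# (no extra controllers), while scale-out aggregates IOPS across nodes
# at a higher total cost.

In this toy model, scale-up comes out cheaper per GB because no extra controllers are bought, while scale-out aggregates IOPS across nodes at a higher total cost, mirroring the Costs and Performance points above.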

 


February 23, 2011  2:34 PM

Dell closes Compellent, opens new storage era

Dave Raffo

When Dell closed its $800 million acquisition of Compellent Tuesday, it also closed the books on a year of deals that will transform Dell’s storage portfolio.

Dell first said it wanted to buy Compellent last December. Before that, it made two smaller 2010 acquisitions that could enhance its overall storage product line. It bought the IP of scale-out NAS vendor Exanet and acquired data reduction startup Ocarina Networks. Dell also tried to buy 3PAR but was outbid by Hewlett-Packard several months before it grabbed Compellent instead. In another storage move, Dell launched a DX object-based storage platform in 2010.

A Dell spokesman said the vendor will have more details about its overall storage strategy next month, but it appears that Compellent will bring Dell into its post-EMC era. Or as Dell executives like to say, its storage business has gone from being mostly a reseller of storage to becoming a full provider of storage technology.

Dell lists the Compellent Series 40 as its multiprotocol storage system for enterprise applications with EqualLogic as its scalable Ethernet storage and PowerVault as the entry level DAS and iSCSI SAN choice. Of course, EMC Clariion has filled the multiprotocol SAN role for Dell for close to a decade.  Dell will not OEM and may not even resell the EMC VNX that will replace the Clariion, and definitely will not sell the EMC VNXe SMB system that competes with EqualLogic and PowerVault.

A slide attached to Dell executive Rob Williams’ blog posted yesterday lists Compellent as Dell’s high end block storage offering with EMC’s CX4 and the EqualLogic PS Series as midrange SAN products, and PowerVault as the entry level system. That’s odd positioning because Compellent has always been characterized as a midrange system – actually, more towards the lower end of the midrange.

Williams did shed some light on Dell’s storage strategy in his blog:

“Together with Dell EqualLogic, PowerVault and Compellent, our block and file storage offerings now span every layer from low- to mid-range offerings for SMBs and Public institutions to higher-end enterprise ready offerings for large corporations and organizations. 

Last year, we also acquired two more storage assets that bring with them important IP. Exanet provides Dell with scale-out file storage capabilities, and moves us for the first time beyond the arena of mid-range block storage and into the playing field of mid-range file storage.  We hope to launch our first Exanet NAS product early this summer.  The second is Ocarina, which gives us content-aware file deduplication capabilities that we plan to add across our entire storage portfolio over time.”

There may be more to come. When they first spoke about buying Compellent in December, Dell executives said they may look to add data management features through an acquisition. All of this makes Dell worth watching through 2011.

 


February 22, 2011  3:11 PM

Alternatives available for HPC shops lacking Lustre support

Dave Raffo

With Oracle showing a lot less love for open source storage software than Sun did, high performance computing (HPC) shops are nervous about the future of Lustre, and HPC hardware vendors are taking steps to pick up the slack.

 

Last year the Illumos project sprang up as the governing body for OpenSolaris to help spur development of the source code used for ZFS. Now the Lustre community is rallying around the storage file system used in most large HPC implementations as Oracle shows no signs of supporting it.

Since late last year, Xyratex acquired ClusterStor for its Lustre expertise, start-up Whamcloud started aggressively hiring Lustre developers and partnering with hardware vendors on support deals, and Cray, DataDirect Networks (DDN), Lawrence Livermore and Oak Ridge National labs launched OpenSFS.org to develop future releases of Lustre.

DDN last week launched a promotional offer to provide Lustre customers a third year of support for free if they purchase a two-year DDN file system support contract. The support comes through DDN’s alliance with Whamcloud.

“There is a general uneasiness in the industry, and people are looking for somebody to step up,” DDN marketing VP Jeff Denworth said. “There’s been a gross defection of talent from Oracle around the Lustre file system.”

DataDirect Networks customer Stephen Simms, manager of the Data Capacitor project at Indiana University, said it will take initiatives like those undertaken by DDN, Whamcloud, and OpenSFS to save Lustre.

  

“Without people to develop Lustre, then Lustre is going to die,” he said. “National laboratories have a strong investment in Lustre, and they will do their best to keep it alive, but without the pool of talent that exists in for-profit companies, where are you going to be? You’re going to be an organization that desperately needs a file system with only a handful of developers.”

  

Simms said Lustre is a crucial piece of IU’s Data Capacitor, a high speed, high bandwidth storage system that serves IU campuses and other scientific research sites on the TeraGrid network. The IU team modified Lustre file system code to provide the automatic mapping of user IDs (UIDs) across TeraGrid sites.

  

“We wouldn’t be able to develop this system for mapping UID space if the file system were closed,” he said. “The fact that it’s open has been a big deal for us. It’s important to have the expertise someplace. It’s a concern that there are so few Lustre developers, and they’ve left Oracle and where have they gone? Some have gone to Xyratex and some to WhamCloud, and who knows where others have gone? It’s important to keep momentum.”
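
For readers unfamiliar with the problem IU solved, the sketch below illustrates what cross-site UID mapping means in principle: the same person may have different numeric user IDs (UIDs) at different TeraGrid sites, and the shared file system needs a consistent translation. The site names and UID numbers are hypothetical, and this plain Python is only an illustration of the concept, not IU's actual Lustre modification.

# Illustration of cross-site user ID (UID) mapping. Site names and UIDs
# are hypothetical; IU's real implementation lives inside Lustre.

# Each site assigns its own local UID to the same researcher.
SITE_UIDS = {
    ("indiana", 5012): "alice",
    ("ncsa",    7340): "alice",
    ("indiana", 5020): "bob",
    ("tacc",    1101): "bob",
}

# Canonical UID space used by the shared file system.
CANONICAL_UID = {"alice": 20001, "bob": 20002}

def map_uid(site, local_uid):
    """Translate a site-local UID into the shared file system's UID."""
    user = SITE_UIDS[(site, local_uid)]
    return CANONICAL_UID[user]

# A file written from NCSA as UID 7340 and read from Indiana as UID 5012
# resolves to the same owner (20001) on the shared file system.
print(map_uid("ncsa", 7340), map_uid("indiana", 5012))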

 

 


February 17, 2011  12:19 PM

NetApp can’t keep up with FAS3200 demand

Dave Raffo

 

If you want to buy a NetApp FAS3200 storage system, you may have to wait a while – especially if you want it with Flash Cache.

NetApp executives Wednesday said the vendor has received about four times as many orders as expected for the systems that launched last November, and sold out of them because of supply constraints. The failure to fill orders cost NetApp about $10 million to $15 million in sales last quarter according to CEO Tom Georgens, causing the vendor to miss Wall Street analysts’ expectations with $1.268 billion of revenue.

Georgens said orders for Flash Cache I/O modules, in particular, were greater than expected, and a shortage of those components limited sales.

Georgens said “we have not resolved these problems yet,” and forecast lower sales revenue for this quarter than expected as a result. He said the time customers have to wait for FAS3200 systems varies, and would not confirm whether the lead time was as long as the six weeks one analyst suggested on the NetApp conference call.

 “We’re working like mad to close this gap,” Georgens said. “It’s disappointing to be having this conversation. I can’t tell you with full confidence that we’re going to clear this up. I can tell you with full confidence that we’re working on this night and day. … Overall, in terms of constraints, it’s primarily in the I/O expansion module, certain semiconductor components along the way.”

Georgens also said NetApp is unlikely to follow competitors into large acquisitions. He said NetApp would pursue acquisitions that bring it into new markets or add features that would strengthen its current products. Since EMC outbid NetApp for Data Domain in mid-2009, NetApp has made two smaller acquisitions, picking up object storage vendor Bycast and storage management software vendor Akorri.

There has been speculation that NetApp would acquire a data analytics vendor such as ParAccel or Aster Data to counter EMC’s “big data” 2010 acquisition of Greenplum. However, Georgens said it is unlikely that NetApp will go that route.

 “I think there are more attractive adjacent markets for us to pursue,” Georgens said. “Partnerships are available in data analytics. That’s how we’ll go after that, as opposed to being in the analytics business ourselves.”


February 15, 2011  4:38 PM

Anatomy of poor IT buying decisions

Randy Kerns

 

I had an interesting conversation about IT decision-making while attending an education foundation meeting. The meeting was a chance to speak with CIOs and other senior IT people in a non-business setting. These discussions can be far-ranging and unguarded. In this non-sales, non-consultative environment, the discussion came around to the challenges each executive faced and the risks they had in their positions.

 

The challenges included personnel issues as well as the products and solutions they had deployed in their IT operations. While it was interesting to hear about what systems (my focus was on storage but they talked about complete solutions that encompassed hardware and software) they had recently purchased, the more interesting discussions were about why they had made these purchases.

 

One theme came through frequently – many felt they had been stampeded into making a decision and purchase. They felt stampeded in the sense that they weren’t given time to do an adequate (in their opinions) evaluation to make an informed decision.

 

The reasons for not being given enough time included pressure from above saying the company required immediate action and that business would be lost if the solution was not delivered immediately. Organizationally, a group (an internal customer, for example) would look elsewhere if it failed to get an immediate solution.

 

“Look elsewhere” took the form of outsourcing, bringing their own systems to meet the needs, or threats of withdrawing budgetary support.

 

One person in the group said he was being forced to make “haphazard decisions” rather than informed ones, with no thought of how they would affect the overall strategy or IT operations. The word “haphazard” struck me as somewhat alarming.

 

 

From my standpoint, this looked like a great opportunity to do a case study of what the lack of a strategy or formal process can mean to IT.  Asking what the results were for these stampeded or haphazard decisions did not yield a clear answer, however.  It became obvious that the real effect may not become apparent for a few years. The initial responses were more emotional (and negative) than actual quantified results. 

 

What I took away from this discussion was that decisions were still being made that were not informed and not really strategic. Whether they were bad decisions or not would take some time to find out. The people making the haphazard decisions were aware they were not proceeding correctly, and were embarrassed by it. But they felt they had no real choice. They had to make a decision and needed whatever information they could access quickly given the time they had. 

 

Fortunately, the majority of decisions about purchasing systems and solutions (especially storage systems) do not take this route. But the reality is some decisions are being made haphazardly, and that can present problems.

 

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

February 10, 2011  5:28 PM

Take a hard (and soft) look at storage project ROI benefits

Randy Kerns

While working on a project for the IT organization of a large company recently, I began questioning the information it included in its Return On Investment (ROI) calculations. The organization used ROI as a metric for buying storage as part of a storage efficiency project, but I found problems with the measurement based on what was included in the calculations.

Before explaining the difficulties I had with the information they used, let’s first look at what is typically included in ROI calculations. The definition I give for ROI is: an assessment of the return on invested money from the savings or gains produced by implementing a project. For a storage project, the ROI calculations include:

  • Cost of the solution – hardware and software plus implementation costs,
  • Savings in administration and operations, and
  • Gains in increased business, productivity, customer service, etc.
ROI is usually expressed as a percentage of gain with a payback over a given period of time. I can still hear a salesman saying, “It pays for itself in just 11 months.” That’s the way ROI is used as an economic metric for decision making.

But there are two types of gains usually included in ROI calculations: hard benefits and soft benefits. They are usually defined like this:

Hard benefits include capital savings or deferred capital investments, operational savings, and productivity improvements.

Soft benefits consist of opportunity costs such as maximizing revenue potential by making enterprise information and applications more available, and cost avoidance through minimizing downtime.

It is easy to add items — taking credit for potential economic benefits — or make your numbers overly optimistic in soft benefits. I’ve seen this become a game of liar’s poker in a competitive situation. Because of that, I recommend not including soft benefits when making a decision. Many times the soft benefits are listed by a party with a lot to gain. It is best to leave those soft benefits as potential opportunities – preferably listed but not quantified.
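
As a simple illustration, the following Python sketch computes ROI and payback two ways, with hard benefits only and with soft benefits added, to show how much the soft numbers can flatter the result. All dollar figures are invented for the example.

# Minimal ROI/payback sketch. The dollar figures are invented; the point
# is to show how including soft benefits can flatter the numbers.

def roi_and_payback(solution_cost, annual_hard_savings,
                    annual_soft_savings=0.0, years=3):
    annual_gain = annual_hard_savings + annual_soft_savings
    total_gain = annual_gain * years
    roi_pct = (total_gain - solution_cost) / solution_cost * 100
    payback_months = solution_cost / (annual_gain / 12)
    return roi_pct, payback_months

cost = 500_000            # hardware + software + implementation
hard = 220_000            # admin/operations savings per year (hard benefit)
soft = 150_000            # claimed revenue/downtime-avoidance gains per year

print("hard only:   ROI %.0f%%, payback %.1f months" % roi_and_payback(cost, hard))
print("hard + soft: ROI %.0f%%, payback %.1f months" % roi_and_payback(cost, hard, soft))

With these made-up numbers, adding the soft benefits roughly quadruples the three-year ROI and cuts the claimed payback period nearly in half, which is exactly why they deserve a more skeptical review.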

Soft gains can still be used to help validate the accuracy of the project planning and for future corrective action. That’s why as part of the closed-loop process we cover in our Evaluator Group classes, we recommend including soft gains in the actual returns at the completion of the ROI time period.

The important thing to take from this is that using ROI for economic decision-making needs to have a more discerning review of the inputs, and the review process must include a validation of the underlying assumptions.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


February 9, 2011  2:10 PM

EMC prepares block-level primary deduplication

Dave Raffo

Although it may have seemed like it, EMC didn’t actually upgrade its entire product portfolio during its much-hyped product launch last month. During an investors’ event webcast Tuesday, EMC executives said they have more product upgrades coming at EMC World in May, including primary block-level data deduplication.

EMC president Pat Gelsinger called primary block dedupe the missing piece in one key part of EMC’s strategy — aligning storage with VMware.

“There is one feature NetApp has that we don’t have – primary block deduplication,” Gelsinger said. “We’ll have that in the second half.”

Gelsinger said primary block dedupe is just one piece of the data reduction picture, but the rollout will “close that last piece of competitive gap that we don’t have.”

EMC is the market leader in backup deduplication with its Data Domain and Avamar products and offers compression for primary data, but NetApp beat it to the punch with primary dedupe across its FAS storage arrays. Other vendors are also working on delivering dedupe for primary data. Dell acquired the technology when it bought Ocarina last year but hasn’t delivered a timeframe for integrating it with its storage systems. Xiotech and BlueArc have signed OEM agreements to use Permabit’s Albireo primary dedupe software. IBM is going the compression route for primary data with its Storwize acquisition. EMC’s primary dedupe announcement may pressure its competitors to move forward with their products.
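
For readers unfamiliar with the technique, here is a generic Python sketch of what block-level deduplication does. It is not EMC's or NetApp's implementation, just the basic idea of hashing fixed-size blocks and storing each unique block only once.

# Generic illustration of block-level deduplication: split data into
# fixed-size blocks, hash each block, and store only unseen blocks.

import hashlib

BLOCK_SIZE = 4096  # bytes; fixed-size blocks for simplicity

class DedupeStore:
    def __init__(self):
        self.blocks = {}       # hash -> block data (stored once)
        self.logical = 0       # bytes written by clients
        self.physical = 0      # bytes actually stored

    def write(self, data):
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:        # new unique block
                self.blocks[digest] = block
                self.physical += len(block)
            self.logical += len(block)
            refs.append(digest)                  # the LUN/file keeps references
        return refs

store = DedupeStore()
store.write(b"A" * 8192)                  # two identical blocks, stored once
store.write(b"A" * 4096 + b"B" * 4096)    # one duplicate block, one new block
print("logical:", store.logical, "physical:", store.physical,
      "ratio: %.1f:1" % (store.logical / store.physical))

Doing this on primary (rather than backup) storage is hard mainly because the hashing and lookups sit in the latency path of live application I/O, which is why vendors have been slower to deliver it there.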

Gelsinger said other products coming this year include VPlex with asynchronous replication over distance and a Symmetrix VMAX refresh, which he said will be “expanding down and across emerging markets.”

Whatever products EMC delivers this year, you can expect the vendor to put a cloud twist on them. CEO Joe Tucci made the cloud the big theme of his presentation Tuesday.

“We did not invent the cloud, but we recognized the opportunity of the cloud early,” he said. “We believe this will be the next big game-changing trend in the IT industry. It’s a wave of disruption, and I don’t believe there will be an IT segment that will be immune to this wave.”

