Storage Soup


March 1, 2011  6:34 PM

What’s next for unified storage?

Profile: Randy Kerns

Unified storage has gone from a specialty item to something offered by nearly every storage vendor in recent years. In the beginning, vendors such as NetApp added block capability to their file-system storage, and NetApp’s biggest rivals have since followed down that unified path.

 

The evolution continues, however, and multiprotocol systems will likely include more technological advances over the coming years.

 

As a refresher, I define Unified Storage as:

 

Unified Storage is a storage system that provides both file and block access simultaneously. Block access is accomplished through an interface such as Fibre Channel, SAS, or iSCSI over Ethernet. File-based access is to a file system on the storage system using either CIFS or NFS over Ethernet.


An implied piece of unified storage is unified management: a single point of management for both block and file data. Without that, the critical goal of consolidation and simplification is compromised.
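
To make the definition concrete, here is a minimal conceptual sketch in Python of what “unified” means in practice: one capacity pool, one point of management, and provisioning calls for both block LUNs and file shares. The class names, methods, and numbers are illustrative assumptions only; they do not correspond to any vendor’s API.

```python
# Conceptual sketch of unified storage: one shared pool, one management
# point, two access methods (block and file). Illustrative only -- not
# any vendor's actual management interface.
from dataclasses import dataclass, field


@dataclass
class UnifiedStorageSystem:
    """A single system serving block and file from one capacity pool."""
    pool_gb: int                                 # free capacity in the shared pool
    luns: dict = field(default_factory=dict)     # block volumes (FC, SAS or iSCSI)
    shares: dict = field(default_factory=dict)   # file shares (NFS or CIFS)

    def _allocate(self, size_gb: int) -> None:
        if size_gb > self.pool_gb:
            raise ValueError("not enough free capacity in the pool")
        self.pool_gb -= size_gb

    # Block and file provisioning draw from the same pool and go through
    # the same object -- the "unified management" piece of the definition.
    def create_lun(self, name: str, size_gb: int, protocol: str = "iSCSI") -> None:
        self._allocate(size_gb)
        self.luns[name] = {"size_gb": size_gb, "protocol": protocol}

    def create_share(self, name: str, size_gb: int, protocol: str = "NFS") -> None:
        self._allocate(size_gb)
        self.shares[name] = {"size_gb": size_gb, "protocol": protocol}


if __name__ == "__main__":
    system = UnifiedStorageSystem(pool_gb=10_000)
    system.create_lun("exchange_db", 2_000, protocol="FC")    # block use case
    system.create_share("home_dirs", 5_000, protocol="CIFS")  # file use case
    print(f"Free capacity left in the shared pool: {system.pool_gb} GB")
```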


Some vendors have provided block storage through both Fibre Channel and iSCSI, while others stick to iSCSI only because it is simpler to deliver. The following diagram gives a very general view that compares the implementations for block and file storage:   

[Diagram: block vs. file storage implementations compared]

Unified storage systems are commonly offered by storage vendors, but that doesn’t mean every new storage system you buy must be unified. Certain high-end IT environments with specific usage requirements would use non-unified systems. If you only need high-performance block storage, for instance, a unified system isn’t necessary.

  

However, there are excellent uses of unified storage:

 

  • In a virtual server environment, a unified storage system presents an opportunity to quickly provision virtual machines and meet operational requirements. A virtual machine could be provisioned with an NFS-based datastore for its file I/O, while the block storage capability of the unified system would allow Raw Device Mapping (RDM) to attach a physical disk to a virtual machine to meet application requirements.
  • If there is a predominance of one type of usage, such as file storage for unstructured data, but still a need for some block storage (an Exchange database, for example), a unified storage system allows consolidation onto a single platform.
  • Unified storage provides great flexibility for an organization that needs to repurpose storage because its needs are changing.
  • Unified storage also provides a single resource that can be provisioned as needed for the usage type required – block or file.


What’s Next?

 

But vendors haven’t just been combining block and file protocols in the same package. Recent features added to unified systems include automated tiering, solid state devices (SSDs) as a tier for higher performance, and support for cascading read/write-capable snapshots to add value for use cases such as virtual desktop infrastructures (VDIs).

 

What should be expected next for unified storage? It’s likely that vendors will package other capabilities together and call that the new “unified storage.” That would dilute the meaning of “unified” and require a qualifying phrase after it.

 

More likely, there will be an additional, high-value storage capability that will have its own identity. It might be something like a storage system that can intelligently (and automatically) archive data as well. Call it “archiving-enabled” storage. This is more evolutionary than revolutionary, but it will be uniquely defined.


February 23, 2011  7:08 PM

Scale-out vs. scale-up: the basics

Profile: Randy Kerns

There’s been a lot of talk about scale-out and scale-up storage lately, and I get a sense that a lot of people don’t understand that these terms are not synonymous. And that causes confusion among IT professionals when they are planning product purchases and trying to determine how these types of products bring value versus cost and complexity to their environments.


To make informed buying decisions, IT pros need to understand the difference between scale-up and scale-out. The following are the basics, which can be built upon for more detailed considerations.

 

Scale-up, as the following simple diagram shows, is taking an existing storage system and adding capacity to meet increased capacity demands.    

[Diagram: scale-up, adding capacity to an existing storage system]

Scale-up can solve a capacity problem without adding infrastructure elements such as network connectivity. However, it does require additional space, power, and cooling. Scaling up does not add controller capabilities to handle additional host activities. That means it doesn’t add costs for extra control functions either.

So costs do not scale at the same rate as the initial purchase of the storage system plus storage devices; only additional devices have been added.

 

Scale-out storage usually requires additional storage systems (called nodes) to add capacity and performance. Or, in the case of monolithic storage systems, it scales by adding more functional elements (usually controller cards). One difference between scaling out and just putting more storage systems on the floor is that scale-out storage continues to be represented as a single system.

There are several methods for accomplishing scale out, including clustered storage systems and grid storage. The definitions of these two types can also be confusing, and other factors add to the complexity (that’s a subject for another article), but the fundamental premise is that a scale-out solution is accessed as a single system.

This diagram shows an example of a scale-out storage solution. In this example, the scaling is with only a single additional node, but a scale-out solution could have many nodes interconnected across geographical distances.

[Diagram: scale-out, adding a node while maintaining a single-system view]

The scale-out storage in this example added both the control function and capacity but maintained a single system representation for access. This scaling may have required additional infrastructure such as storage switches to connect the storage to the controller and a connection between the nodes in the cluster or grid. These connections let the solution work as a single system.
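
As a toy model of the difference, the sketch below (plain Python, with made-up capacity and IOPS figures, not modeled on any particular product) shows scale-up growing capacity behind one controller while that controller’s performance ceiling stays fixed, and scale-out aggregating both capacity and performance across nodes that are still presented as a single system.

```python
# Toy contrast of scale-up vs. scale-out. All numbers are invented.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    """One controller plus the drives behind it."""
    capacity_tb: int
    max_iops: int   # set by the controller, not by how many drives sit behind it

    def scale_up(self, extra_tb: int) -> None:
        # Scale-up: add drive shelves behind the same controller.
        # Capacity grows; the controller's performance ceiling does not.
        self.capacity_tb += extra_tb


@dataclass
class ScaleOutSystem:
    """Several nodes presented to hosts as a single system."""
    nodes: List[Node] = field(default_factory=list)

    def scale_out(self, node: Node) -> None:
        # Scale-out: add a whole node; capacity *and* performance aggregate,
        # but hosts still see one system (single namespace / management view).
        self.nodes.append(node)

    @property
    def capacity_tb(self) -> int:
        return sum(n.capacity_tb for n in self.nodes)

    @property
    def max_iops(self) -> int:
        return sum(n.max_iops for n in self.nodes)


if __name__ == "__main__":
    single = Node(capacity_tb=100, max_iops=50_000)
    single.scale_up(extra_tb=100)               # now 200 TB, still 50,000 IOPS

    cluster = ScaleOutSystem([Node(100, 50_000)])
    cluster.scale_out(Node(100, 50_000))        # now 200 TB and 100,000 IOPS
    print("scale-up:  ", single.capacity_tb, "TB,", single.max_iops, "IOPS")
    print("scale-out: ", cluster.capacity_tb, "TB,", cluster.max_iops, "IOPS")
```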

 

Scaling out adds power, cooling, and space requirements, and the cost includes the additional capacity, control elements and infrastructure. With the scale-out solution in this example, capacity increased and performance scaled with the additional control capabilities.

 

Not all scaling solutions are so simple. Many storage systems can scale out and up. The following diagram illustrates this:

[Diagram: a storage system that scales both up and out]

Considerations

 

When looking at scale-up or scale-out storage, consider these factors:

 

  • Costs. Scale-up adds capacity but not controller or infrastructure costs. If the measure is dollars per GB, scale-up will be less expensive.
  • Capacity. Either approach can meet capacity requirements, but there may be a limit on scale-up capacity based on how much capacity or how many devices an individual storage controller can attach.
  • Performance. Scale-out has the potential to aggregate the IOPS and bandwidth of multiple storage controllers. Representing the nodes as a single system may introduce latency, but this is implementation specific.
  • Management. Scale-up retains a single point of storage system management. Scale-out systems typically have an aggregated management capability, but there may be variations between vendor offerings.
  • Complexity. Scale-up storage is expected to be simple, while scale-out systems may be more complex because they have more elements to manage.
  • Availability. Additional nodes should provide greater availability in case one element fails or goes out of service. This depends on the particular implementation.

There is a great deal to consider when making a choice between scale out and scale up. The decision will ultimately depend on how one vendor implements its solution and its capabilities and features compared to another vendor. But, it is always best to start with a basic understanding and then look at the differences.   

 


February 23, 2011  2:34 PM

Dell closes Compellent, opens new storage era

Profile: Dave Raffo

When Dell closed its $800 million acquisition of Compellent Tuesday, it also closed the books on a year of deals that will transform Dell’s storage portfolio.

Dell first said it wanted to buy Compellent last December. Before that, it made two smaller 2010 acquisitions that could enhance its overall storage product line. It bought the IP of scale-out NAS vendor Exanet and acquired data reduction startup Ocarina Networks. Dell also tried to buy 3PAR but was outbid by Hewlett-Packard several months before it grabbed Compellent instead. In another storage move, Dell launched a DX object-based storage platform in 2010.

A Dell spokesman said the vendor will have more details about its overall storage strategy next month, but it appears that Compellent will bring Dell into its post-EMC era. Or as Dell executives like to say, its storage business has gone from being mostly a reseller of storage to becoming a full provider of storage technology.

Dell lists the Compellent Series 40 as its multiprotocol storage system for enterprise applications with EqualLogic as its scalable Ethernet storage and PowerVault as the entry level DAS and iSCSI SAN choice. Of course, EMC Clariion has filled the multiprotocol SAN role for Dell for close to a decade.  Dell will not OEM and may not even resell the EMC VNX that will replace the Clariion, and definitely will not sell the EMC VNXe SMB system that competes with EqualLogic and PowerVault.

A slide attached to Dell executive Rob Williams’ blog posted yesterday lists Compellent as Dell’s high end block storage offering with EMC’s CX4 and the EqualLogic PS Series as midrange SAN products, and PowerVault as the entry level system. That’s odd positioning because Compellent has always been characterized as a midrange system – actually, more towards the lower end of the midrange.

Williams did shed some light on Dell’s storage strategy in his blog:

“Together with Dell EqualLogic, PowerVault and Compellent, our block and file storage offerings now span every layer from low- to mid-range offerings for SMBs and Public institutions to higher-end enterprise ready offerings for large corporations and organizations. 

“Last year, we also acquired two more storage assets that bring with them important IP. Exanet provides Dell with scale-out file storage capabilities, and moves us for the first time beyond the arena of mid-range block storage and into the playing field of mid-range file storage.  We hope to launch our first Exanet NAS product early this summer.  The second is Ocarina, which gives us content-aware file deduplication capabilities that we plan to add across our entire storage portfolio over time.”

There may be more to come. When they first spoke about buying Compellent in December, Dell executives said they may look to add data management features through an acquisition. All of this makes Dell worth watching through 2011.

 


February 22, 2011  3:11 PM

Alternatives available for HPC shops lacking Lustre support

Profile: Dave Raffo

With Oracle showing a lot less love for open source storage software than Sun did, high performance computing (HPC) shops are nervous about the future of Lustre and HPC hardware vendors are taking steps to pick up the slack.

 

Last year the Illumos project sprang up as the governing body for OpenSolaris to help spur development of the source code used for ZFS. Now the Lustre community is rallying around the storage file system used in most large HPC implementations as Oracle shows no signs of supporting it.

Since late last year, Xyratex has acquired ClusterStor for its Lustre expertise, startup Whamcloud has been aggressively hiring Lustre developers and partnering with hardware vendors on support deals, and Cray, DataDirect Networks (DDN), and the Lawrence Livermore and Oak Ridge national laboratories have launched OpenSFS.org to develop future releases of Lustre.

DDN last week launched a promotional offer to provide Lustre customers a third year of support for free if they purchase a two-year DDN file system support contract. The support comes through DDN’s alliance with Whamcloud.

“There is a general uneasiness in the industry, and people are looking for somebody to step up,” DDN marketing VP Jeff Denworth said. “There’s been a gross defection of talent from Oracle around the Lustre file system.”

DataDirect Networks customer Stephen Simms, manager of the Data Capacitor project at Indiana University, said it will take initiatives like those undertaken by DDN, Whamcloud, and OpenSFS to save Lustre.

  

“Without people to develop Lustre, then Lustre is going to die,” he said. “National laboratories have a strong investment in Lustre, and they will do their best to keep it alive, but without the pool of talent that exists in for-profit companies, where are you going to be? You’re going to be an organization that desperately needs a file system with only a handful of developers.”

  

Simms said Lustre is a crucial piece of IU’s Data Capacitor, a high speed, high bandwidth storage system that serves IU campuses and other scientific research sites on the TeraGrid network. The IU team modified Lustre file system code to provide the automatic mapping of user IDs (UIDs) across TeraGrid sites.

  

“We wouldn’t be able to develop this system for mapping UID space if the file system were closed,” he said. “The fact that it’s open has been a big deal for us. It’s important to have the expertise someplace. It’s a concern that there are so few Lustre developers, and they’ve left Oracle and where have they gone? Some have gone to Xyratex and some to WhamCloud, and who knows where others have gone? It’s important to keep momentum.”


February 17, 2011  12:19 PM

NetApp can’t keep up with FAS3200 demand

Profile: Dave Raffo

 

If you want to buy a NetApp FAS3200 storage system, you may have to wait a while – especially if you want it with Flash Cache.

NetApp executives Wednesday said the vendor has received about four times as many orders as expected for the systems that launched last November, and sold out of them because of supply constraints. The failure to fill orders cost NetApp about $10 million to $15 million in sales last quarter according to CEO Tom Georgens, causing the vendor to miss Wall Street analysts’ expectations with $1.268 billion of revenue.

Georgens said orders for Flash Cache I/O modules in particular were greater than expected, and a shortage of those components limited sales.

Georgens said “we have not resolved these problems yet,” and forecast lower sales revenue for this quarter than expected as a result. He said the time customers have to wait for FAS3200 systems varies, and he would not confirm whether the lead time was as long as the six weeks one analyst suggested on the NetApp conference call.

 “We’re working like mad to close this gap,” Georgens said. “It’s disappointing to be having this conversation. I can’t tell you with full confidence that we’re going to clear this up. I can tell you with full confidence that we’re working on this night and day. … Overall, in terms of constraints, it’s primarily in the I/O expansion module, certain semiconductor components along the way.”

Georgens also said NetApp is unlikely to follow competitors into large acquisitions. He said NetApp would pursue acquisitions that bring it into new markets or add features that would strengthen its current products. Since EMC outbid NetApp for Data Domain in mid-2009, NetApp has made two smaller acquisitions, picking up object storage vendor Bycast and storage management software vendor Akorri.

There has been speculation that NetApp would acquire a data analytics vendor such as ParAccel or Aster Data to counter EMC’s “big data” 2010 acquisition of Greenplum. However, Georgens said it is unlikely that NetApp will go that route.

 “I think there are more attractive adjacent markets for us to pursue,” Georgens said. “Partnerships are available in data analytics. That’s how we’ll go after that, as opposed to being in the analytics business ourselves.”


February 15, 2011  4:38 PM

Anatomy of poor IT buying decisions

Profile: Randy Kerns

 

I had an interesting conversation about IT decision-making while attending an education foundation meeting. The meeting was a chance to speak with CIOs and other senior IT people in a non-business setting. These discussions can be far-ranging and unguarded. In this non-sales, non-consultative environment, the discussion came around to the challenges each executive faced and the risks they had in their positions.

 

The challenges included personnel issues as well as the products and solutions they had deployed in their IT operations. While it was interesting to hear about what systems (my focus was on storage but they talked about complete solutions that encompassed hardware and software) they had recently purchased, the more interesting discussions were about why they had made these purchases.

 

One theme came through frequently – many felt they had been stampeded into making a decision and purchase. They felt stampeded in the sense that they weren’t given time to do an adequate (in their opinions) evaluation to make an informed decision.

 

The reasons for not being given enough time included pressure from above saying the company required immediate action and that business would be lost if the solution was not delivered immediately. Organizationally, a group (an internal customer, for example) would look elsewhere if it failed to get an immediate solution.

 

“Look elsewhere” took the form of outsourcing, bringing in their own systems to meet the need, or threatening to withdraw budgetary support.

 

One person in the group said he was being forced to make “haphazard decisions” rather than informed ones. And these haphazard decisions included no thought of how this would affect the overall strategy or IT operations. The use of the word haphazard struck me as a somewhat alarming term. 


From my standpoint, this looked like a great opportunity to do a case study of what the lack of a strategy or formal process can mean to IT.  Asking what the results were for these stampeded or haphazard decisions did not yield a clear answer, however.  It became obvious that the real effect may not become apparent for a few years. The initial responses were more emotional (and negative) than actual quantified results. 

 

What I took away from this discussion was that decisions were still being made that were not informed and not really strategic. Whether they were bad decisions or not would take some time to find out. The people making the haphazard decisions were aware they were not proceeding correctly, and were embarrassed by it. But they felt they had no real choice. They had to make a decision and needed whatever information they could access quickly given the time they had. 

 

Fortunately, the majority of decisions about purchasing systems and solutions (especially storage systems) do not take this route. But the reality is some decisions are being made haphazardly, and that can present problems.

 

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


February 10, 2011  5:28 PM

Take a hard (and soft) look at storage project ROI benefits

Profile: Randy Kerns

While working on a project for the IT organization of a large company recently, I began questioning the information it included in its Return On Investment (ROI) calculations. The organization used ROI as a measurement to buy storage as part of a storage efficiency project, but I found problems with the measurement based on what was included in the calculations.

Before explaining the difficulties I had with the information they used, let’s first look at what is typically included in ROI calculations. The definition I give for ROI is: an assessment of the return on invested money from the savings or gains a project implementation delivers. For a storage project, the ROI calculations include:

  • Cost of the solution – hardware and software plus implementation costs,
  • Savings in administration and operations, and
  • Gains in increased business, productivity, customer service, etc.

ROI is usually expressed as a percentage of gain with a payback over a given period of time. I can still hear a salesman saying, “It pays for itself in just 11 months.” That’s the way ROI is used as an economic metric for decision making.

But there are two types of gains usually included in ROI calculations, called hard benefits and soft benefits. They are usually defined like this:

Hard benefits include capital savings or deferred capital investments, operational savings, and productivity improvements.

Soft benefits consist of opportunity costs, such as maximizing revenue potential by making enterprise information and applications more available, and cost avoidance through minimizing downtime.

It is easy to add items (taking credit for potential economic benefits) or make your numbers overly optimistic in soft benefits. I’ve seen this become a game of liar’s poker in a competitive situation. Because of that, I recommend not including soft benefits when making a decision. Many times the soft benefits are listed by a party with a lot to gain. It is best to leave those soft benefits as potential opportunities – preferably listed but not quantified.
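
As a rough, back-of-the-envelope illustration of that advice, the short Python sketch below computes ROI and payback from hard benefits only. Every dollar figure is invented for the example; the soft benefits are deliberately listed but left out of the arithmetic.

```python
# Simple ROI / payback arithmetic using hard benefits only.
# All figures are made up for illustration.

def roi_percent(total_cost: float, total_gain: float) -> float:
    """ROI expressed as a percentage of gain over the investment."""
    return (total_gain - total_cost) / total_cost * 100


def payback_months(total_cost: float, monthly_gain: float) -> float:
    """Months until cumulative hard gains cover the cost of the solution."""
    return total_cost / monthly_gain


# Cost of the solution: hardware and software plus implementation.
cost = 250_000 + 30_000

# Hard benefits: operational savings, deferred capital, productivity.
monthly_hard_savings = 26_000
gain_over_3_years = monthly_hard_savings * 36

# Soft benefits (revenue potential, avoided downtime) stay on the list
# as potential opportunities but are not quantified here.

print(f"ROI over 3 years: {roi_percent(cost, gain_over_3_years):.0f}%")
print(f"Payback: {payback_months(cost, monthly_hard_savings):.1f} months")
```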

Soft gains can still be used to help validate the accuracy of the project planning and for future corrective action. That’s why, as part of the closed-loop process we cover in our Evaluator Group classes, we recommend including soft gains in the actual returns at the completion of the ROI time period.

The important thing to take from this is that using ROI for economic decision-making requires a more discerning review of the inputs, and the review process must include a validation of the underlying assumptions.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


February 9, 2011  2:10 PM

EMC prepares block-level primary deduplication

Profile: Dave Raffo

Although it may have seemed like it, EMC didn’t actually upgrade its entire product portfolio during its much-hyped product launch last month. During an investors’ event webcast Tuesday, EMC executives said they have more product upgrades coming at EMC World in May, including primary block-level data deduplication.

EMC president Pat Gelsinger called primary block dedupe the missing piece in one key part of EMC’s strategy: aligning storage with VMware.

“There is one feature NetApp has that we don’t have – primary block deduplication,” Gelsinger said. “We’ll have that in the second half.”

Gelsinger said block primary dedupe is just one piece of the data reduction picture, but the rollout will “close that last piece of competitive gap that we don’t have.”

EMC is the market leader in backup deduplication with its Data Domain and Avamar products and offers compression for primary data, but NetApp beat it to the punch with primary dedupe across its FAS storage arrays. Other vendors are also working on delivering dedupe for primary data. Dell acquired the technology when it bought Ocarina last year but hasn’t provided a timeframe for integrating it with its storage systems. Xiotech and BlueArc have signed OEM agreements with Permabit to use its Albireo primary dedupe software. IBM is going the compression route for primary data with its Storwize acquisition. EMC’s primary dedupe announcement may pressure its competitors to move forward with their products.

Gelsinger said other products coming this year include VPlex with asynchronous replication for long distances and a Symmetrix VMAX refresh, which he said will be “expanding down and across emerging markets.”

Whatever products EMC delivers this year, you can expect the vendor to put a cloud twist on them. CEO Joe Tucci made the cloud the big theme of his presentation Tuesday.

“We did not invent the cloud, but we recognized the opportunity of the cloud early,” he said. “We believe this will be the next big game-changing trend in the IT industry. It’s a wave of disruption, and I don’t believe there will be an IT segment that will be immune to this wave.”


February 7, 2011  7:49 PM

Violin gets funding from Toshiba, Juniper; HP too?

Profile: Dave Raffo

Flash memory array startup Violin Memory today said it received $35 million in funding, naming NAND flash maker Toshiba America and networking vendor Juniper Networks as investors. There were other investors not named, including one that sounds a lot like Hewlett-Packard.

Violin’s press release said Toshiba and Juniper were investors, “along with other corporate partners, crossover investment funds, high net worth industry leaders and private equity general partners.” In an interview with Storage Soup, Violin Memory CEO Don Basile said the other investors include a “large storage and systems company that doesn’t want to be named.”

Basile wouldn’t confirm or deny whether that partner was HP, but said the investor was a company that Violin has worked with. HP last June posted blazing fast online transaction processing (OLTP) benchmarks using four Violin Memory Arrays with HP ProLiant Blade Servers, marking the first time HP used another vendor’s memory system in that type of benchmark. There have been whispers as well as published reports in recent weeks that HP and Violin are working on deepening their relationship.

Basile said the new investors will lead to partnerships that will boost Violin’s distribution. He said two major trends are poised to drive flash storage implementation. He credited Oracle CEO Larry Ellison with identifying the first trend last year when he said Oracle would use flash to speed up its databases. The second trend Basile sees is for flash to provide a foundation for building private clouds. That’s where Juniper comes in.

“Juniper exists in the highest end of data centers, and paired with a memory array like Violin’s, it can enable private clouds to be built,” Basile said. “We have [flash] in processors, and now we’ll have higher performance networking and I/O tiers to match those processors. This is something that will start to be talked about this year, and in 2012 people will start to roll out cloud services based on these technologies.”

Basile said the funding will allow Violin to double its headcount this year, mostly by expanding its sales and engineering teams. The company currently has just over 100 employees. He also promised new product rollouts in the coming months. Violin’s current products include the 3200 Flash Memory Array, the 3140 Capacity Flash Memory Array, and the vCache NFS caching device built on technology it acquired last year from Gear6.

Violin and Toshiba announced a strategic partnership last April, disclosing that Violin is using Toshiba’s NAND flash chips.


February 3, 2011  6:36 PM

New WhipTail CEO anticipates rise of solid state, demise of spinning disk

Profile: Dave Raffo

Solid state storage startup WhipTail Technologies is looking to move into its next phase of development after hiring a new CEO this week.

Dan Crain, whose 27 years in the IT business include five years as CTO of Brocade, takes over at WhipTail with the belief that the spinning disk era in enterprise storage is coming to an end. Crain said it’s a matter of time until the right type of solid state storage takes over, and he believes WhipTail’s I/O acceleration appliances are on the right track.

“Without a doubt, we’re getting to the end of spinning media and disk,” he said. “It’s been around for 40 years, and the physics are getting more challenging. Solid state as a medium for persistent storage is getting there.”

Crain said he knows just having solid state in a storage system isn’t enough. All of the major storage vendors do that now, and smaller vendors such as Nimbus Data, Texas Memory Systems, Avere Systems, Violin Memory, and Alacritech have all-solid-state disk (SSD) systems.

WhipTail’s Data Center XLR8r and Virtual Desktop XLR8r 2U multi-level cell (MLC) appliances use NAND flash and DRAM, and focus on block storage, particularly applications with high I/O requirements. Crain said he sees all-SSD appliances as the way to go rather than hybrid systems that use solid state and spinning disk as separate tiers. He said the market will take care of one SSD hurdle – price – while superior technology can overcome reliability issues.

“A lot of tiered systems exist because the cost of solid state storage media was very, very, very expensive,” Crain said. “The price has fallen substantially now. With flash-based disks (NAND memory-based SSDs), the more you write to them, the more they wear out. We have patented technologies that help deal with that. Hybrid systems that move things in and out of SSDs cause write wear. We don’t think that’s the way to go.”

Crain said the Data Center appliance is tuned around database functions such as transaction logs and indexes, while the Virtual Desktop model is tuned for virtual desktop performance loads. “We’re not going to be the production data store for databases, we will be the index and transactional log storage device,” he said. “We keep system files, not user data.”

Since it started in 2008, WhipTail has been run by founders Ed Rebholz and James Candelaria. Crain replaces Rebholz as CEO, while Candelaria stays on as CTO. Crain said WhipTail is still in the early stage of product sales with “several dozen” customers, and he expects to roll out product enhancements soon.

“We have a lot of growth to do over the next two or three years and you’ll see us produce a lot of interesting technology,” he said. “Some of it will be very radical.”

