Storage Soup

April 6, 2011  12:24 PM

Storage buying requires an application perspective

Randy Kerns

Much of the information provided about storage systems focuses on details of the systems' specifications – known as speeds and feeds. This has been the norm, and if that information were not presented prominently, many would suspect the vendor of trying to deflect attention from flaws in its product.

But times have changed, and the Information Technology professionals making decisions on acquiring the right storage system need to look at different factors.

What has changed in the decision-making process, and why, deserves some explanation. First, the reason why: the number of storage professionals, whether they carry the title of storage administrator or not, has declined significantly. Professionals with exclusively storage responsibilities are now found mostly in the largest IT operations.

That decline stems mostly from staff reductions, and the IT people who remain have to take on multiple responsibilities. At the same time, the availability of solutions that require less detailed management or control has enabled successful deployment and operation without a storage specialist.

Changes in staffing have led to changes in decision-making when acquiring storage systems. The primary focus now is usually applying the storage system to an application to meet requirements for business issues. In this scenario, the storage system is integrated into the environment as a solution for one application. It is implemented and managed for the specific requirements of that application. How the storage can quickly meet the application needs is the most important consideration.

The need for information about speeds and feeds is still there, but it is no longer the most important consideration for the IT professional making the evaluation – at least it shouldn't be. Some products (and product marketing) continue to focus on that message. There are greater opportunities for systems that solve a business problem and quickly meet application needs.

There will be a form of natural selection at work here. Vendors that understand customer needs and can adapt how they represent their products will ultimately be more successful. Those stuck in old methods and thinking will have a more difficult time. The Evaluator Group publishes a series of Evaluation Guides that focus on what is important to consider when purchasing storage solutions.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

April 4, 2011  11:46 PM

Belluzzo takes Quantum leap from CEO job; COO Gacek steps up

Dave Raffo

Backup vendor Quantum switched CEOs today. Rick Belluzzo stepped down after nine years and Jon Gacek stepped up from his role as COO and president to replace him.

Belluzzo remains chairman of Quantum’s board, and both men said the vendor’s goals remain the same under Gacek: to remain the leader in open systems tape libraries, start taking disk backup market share from EMC’s Data Domain and expand its StorNext file system’s role in rich media and archiving.

“We’re in a different place than we were over the last few years,” Gacek said in an interview today. “The company needed to transform into a systems company [from a tape company]. I feel like with our products and value to end users, we’re in a much different place. I don’t think people understand how good the new DXi [data deduplication] software is yet. In bakeoffs against Data Domain, we’re more than competitive.”

Quantum had to remake its disk backup business strategy after EMC spent $2.1 billion to buy Data Domain in 2009. Before that, EMC sold Quantum's first-generation DXi software through an OEM deal. That deal went away after the Data Domain acquisition, and Quantum has struggled to recoup the lost revenue through sales of its branded DXi appliances. It has since overhauled its entire DXi hardware platform and upgraded the DXi software to version 2.0 in January.

“EMC tends to just badmouth our technology,” Gacek said. “They say, ‘We used to sell it, it’s not very good.’ But that was a couple of generations ago. We tell the customer, ‘We’ll bring the product in, go ahead and run it against Data Domain.’ Our win rates go up when we do that.”

Still, Quantum hasn’t made much of a dent in Data Domain’s dedupe backup business. While announcing the CEO change today, Quantum disclosed that its revenue for last quarter was around $165 million – near the low end of its forecast and about the same as a year ago.

Gacek said the strategy is to take on Data Domain through Quantum’s own channel, but he’ll keep the door open to other large OEM deals to try and pick up the slack.

“We have to control our own destiny, and that’s why we’re focused on our branded channel,” he said. “I don’t run the other companies, but it looks like EMC is running the table on the other guys. EMC is taking IBM and HP and Oracle to the woodshed [for disk backup]. I’m not chasing OEMs, but I do think the space will get more and more competitive.”

Quantum has an OEM deal with Fujitsu, but that doesn’t come close to replacing the lost revenue from EMC.

Gacek joined Quantum from ADIC after Quantum acquired its tape competitor in 2006. [Quantum’s dedupe and StorNext technology come from ADIC]. He began at Quantum as CFO, became COO in 2009, and had president added to his title in January. Belluzzo said the management transition was planned for more than a year, but Gacek said he had no assurances in January that he would be the next CEO.

It’s hard to believe Belluzzo didn’t feel at least some pressure to resign. Quantum’s stock price opened at $2.50 today – well above the 12 cents it dipped to in late 2009 but still far below the $4.02 it was at before the economy tanked in late 2008. Since he became CEO in Sept. 2002, Quantum’s stock has risen 21% compared to Nasdaq’s overall growth of 145%.

While Quantum still hasn’t been able to record significant growth in the hot dedupe market, Belluzzo said he’s leaving the company in good shape.

“Although we have had our share of challenges, the company is well positioned to play an expanded role in the storage industry,” he said.

March 30, 2011  3:26 PM

Establishing data relevance can help archiving strategy

Randy Kerns

As highlighted in many reports, the massive amounts of data being created today cause concerns for Information Technology pros – especially those who manage storage.

These concerns involve how to process data, what to keep and where to put it. Issues include how to present the information, where to store data, what the requirements for that data are, and how much it will cost to retain it. Most of the newly created data is in the form of files.

The massive amount of data being created is great news for storage vendors because it means that more storage is required. But storing all this newly created data may be unsustainable for organizations because of the cost required, as well as the physical space and power that storage systems use.

All of this data may require a new approach to storage and archiving. That approach involves creating a method to establish data relevance as part of the analytics performed when data is ingested. Relevance implies that there would be immediate data analysis. For example, data received from a source (monitoring equipment, feedback data, etc.) would immediately go into a data analytics process. The relevant source data received would go to an archiving storage system while the analytic processing continued. The valuable information in intermediary form would be retained in the analytic nodes or on a shared storage system.

The source data sent to the archive would be available for data mining or reprocessing if required. The archive system would handle the data protection process – a one-time protection for new data based on the requirements established for the business.

The processing for establishing the data relevance could be performed by the analytics engine or as part of advanced functions in a storage system. The data relevance engine would move the relevant data to the most appropriate location based on a set of rules on the analysis. Some data could be retained on primary storage but the majority would be stored directly on a more economical archiving system.
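The ingest-time flow described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation; the scoring function, threshold, and tier names are all assumptions made for the example.

```python
# Hypothetical sketch of relevance-based placement at ingest time.
# The threshold and the scoring rule are illustrative assumptions.
PRIMARY_THRESHOLD = 0.8

def relevance_score(record: dict) -> float:
    """Stand-in for the analytics engine's relevance analysis."""
    # e.g., weight frequently referenced sources higher
    return min(1.0, record.get("access_weight", 0.0))

def ingest(record: dict, primary: list, archive: list) -> str:
    """Score data at ingest; all source data goes to the archive
    (available later for mining or reprocessing), while only the
    most relevant data is also retained on primary storage."""
    archive.append(record)           # source data always lands in the archive
    if relevance_score(record) >= PRIMARY_THRESHOLD:
        primary.append(record)       # relevant data also kept on primary
        return "primary+archive"
    return "archive"
```

The key point of the model is that the archive copy is made once, at ingest, while the relevance decision controls only what else stays on the more expensive tier.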

This may not really be a new model, but it reduces the steps and time it takes to manage the data. Making a solution like this available for IT would have high economic value and immediate benefit in dealing with the massive amounts of data being created.

March 29, 2011  12:40 PM

Permabit aims its primary dedupe at low-end NAS market

Dave Raffo

Permabit is expanding its Albireo primary data deduplication application with the Albireo Virtual Data Optimizer (VDO), which can bring dedupe to Linux-based SMB NAS systems.

Albireo, launched by Permabit last year, lets storage vendors embed inline post-process deduplication in their systems. OEM partners BlueArc, Xiotech, and LSI Engenio have signed on to use Albireo for enterprise storage systems. VDO is aimed at products on the other end of the storage market.

Permabit CTO Jered Floyd said the original Albireo is a software library that vendors can integrate with their file systems or block storage systems. It sits out of the data path, and lets the storage system control the data. VDO is a plug-in that sits in the data path between the file server and disk infrastructure. Floyd said that lets VDO add compression if its OEM partners choose to.

“It’s different from the Albireo deduplication library, because now we do own the data,” he said. “It’s not a concern for these OEMs, who are already depending on [open source] third party software for data placement and other capabilities.”
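The core idea behind content-based dedupe like this can be shown in a minimal sketch. This is the generic technique – store each unique block once, keyed by a hash of its content – and is not Permabit's actual Albireo or VDO implementation.

```python
import hashlib

def dedupe_write(block: bytes, store: dict) -> str:
    """Minimal content-addressed dedupe: each unique block is stored
    once under its content hash; writing a duplicate block returns the
    existing key instead of consuming more storage."""
    key = hashlib.sha256(block).hexdigest()
    if key not in store:
        store[key] = block  # first time this content has been seen
    return key
```

In a real in-band product the same decision happens in the data path, which is why a plug-in like VDO "owns" the data in a way a library that sits outside the data path does not.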

NAS products such as Overland Storage's Snap platform and systems from Buffalo Technology, NetGear, and Cisco Linksys would be candidates for VDO. Permabit CEO Tom Cook said at least 25 vendors would fit the bill, making up about 18% of the NAS market. VDO can also be used in block and unified storage systems.

“They could do their own development, but they have other priorities,” Cook said of these NAS vendors. “Their other option would be a standalone dedupe appliance that sits in front of their system, and that drives dedupe right out of the market from a price standpoint.”

He expects VDO to be available to OEMs in the second half of the year.

Cook also confirmed what had been a poorly kept secret in the storage industry – LSI is working to add Albireo to its SAN platform. And he said he doesn’t expect NetApp’s pending purchase of LSI’s Engenio storage division to change that. NetApp has its own primary dedupe for its FAS storage platform, which is one of the main competitors for Albireo.

“The NetApp [Engenio] deal presents interesting business options for us,” Cook said. “I can’t comment beyond that.”

March 24, 2011  1:10 PM

DataDirect Networks ready to aim directly at NetApp NAS

Dave Raffo

When NetApp closes its $480 million acquisition of LSI’s Engenio storage division, it will move into head-to-head competition with high performance computing storage vendor DataDirect Networks in markets where NetApp barely plays today. And DDN will soon respond by moving into NetApp’s mainstream NAS space.

DDN is preparing to launch – probably next month – a NASScaler product that DDN’s EVP of strategy and technology Jean-Luc Chatelain said will be “aimed at the NetApp market” rather than HPC.

“It has standard IT NAS-type behavior,” Chatelain said. “We realized the demand for the density, bandwidth, capacity and performance that we used to see in specialty machines has migrated toward the traditional NAS market. It’s the standard NFS behavior on top of high performance computing.”

The NASScaler will be DDN’s fourth file storage system, to go with its xStreamScaler for media and entertainment, GridScaler for cloud and HPC and ExaScaler for supercomputing.

DDN bills itself as the largest private storage vendor, an assessment that IDC agrees with. DDN executives claim the vendor generated $180 million in revenue in 2010 and grew about 40% in 2009 and 2010. The vendor’s storage sells into what EMC calls “big data” markets, which are the same ones NetApp intends to chase with LSI Engenio. Those markets include HPC, media and entertainment, digital security, and as a platform for cloud providers.

It will take a while before DDN can provide NetApp with solid competition in mainstream NAS, but the vendors will contend for both end-user customers and OEM partners in the HPC space. The Engenio 7900 Storage System competes with DDN's products and is sold by OEMs including Cray, Teradata and SGI.

“It will be interesting to see what happens now,” Chatelain said. “NetApp is not focused on the domain where we play. NetApp is not a brand name in the world of high performance computing or rich media. We are known as people committed to those verticals.”

March 21, 2011  1:20 PM

Storage tiering draws interest, with good reason

Randy Kerns

Storage tiering has generated a good amount of interest lately, largely because of the performance improvements it can bring to organizations using solid state drives (SSDs) in their storage setups.

When it comes to tiering, it’s worth listening to what storage vendors have to say. There are actual benefits, and several approaches with real competitive differentiation.

Performance and cost are the big benefits of storage tiering. The goals are to balance the workload across the different storage elements and maximize the value returned from the investment.

Using SSD in a tiering approach provides improvements in latency and transfer rate. Intelligent tiering can maximize SSD usage as a resource, which in turn can reduce the number (and cost) of SSDs in a system.

If you’re considering adding tiers or changing your tiering strategy, you might first want to look at the available methods.

Storage systems have tiers usually consisting of several types of hard disk drives (HDDs) and perhaps SSDs. There may be two types of HDDs – a high RPM drive with less capacity for higher performance and a lower RPM HDD with higher capacity. Customers seem to be moving towards SSDs for performance and larger capacity HDDs as the tiers of choice. The movement of data between tiers is based on analysis of access patterns to data.

Most tiering systems vendors offer now support sub-LUN movement of data (early implementations only moved entire volumes/LUNs). Key implementation choices concern the "real-time" analysis and the automation of data movement. Another consideration is the potential contention created when that movement consumes system resources that are also needed to satisfy host requests.
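The access-pattern analysis behind sub-LUN tiering can be sketched simply: rank extents by how often they are touched and promote the hottest ones to the SSD tier. This is an illustrative sketch, not any vendor's algorithm; extent granularity and the number of SSD slots are assumptions.

```python
from collections import Counter

def plan_tiers(access_counts: Counter, ssd_slots: int) -> dict:
    """Rank sub-LUN extents by access frequency; the hottest extents
    are assigned to the SSD tier, the rest stay on HDD."""
    ranked = [extent for extent, _ in access_counts.most_common()]
    return {
        "ssd": set(ranked[:ssd_slots]),   # hottest extents promoted
        "hdd": set(ranked[ssd_slots:]),   # everything else demoted
    }
```

Real implementations differ mainly in when this ranking runs (continuously vs. scheduled) and whether the resulting data movement happens in the background or is replaced by intelligent placement and caching, as described below.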

Another tiering method is through intelligent placement of sub-LUN segments of data along with caching tiers. This approach uses the analysis of patterns of data access to determine where to place the data, and cache tiers to control performance. The goal with this approach is to provide the performance based on data access characteristics without introducing additional resource consumption for movement.

There are also hybrid approaches where intelligent placement and caching are combined with limited background movement. These approaches try to strike a balance where the intelligence of the tiering controls the resource usage while balancing the potential performance gains.

The Evaluator Group has a review of major vendors’ automated tiering systems.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

March 18, 2011  8:31 PM

Nirvanix says cloud can ease data concerns in Japan

Dave Raffo

Cloud proponents look at the disasters in Japan and say even the most horrific incidents don’t have to cause people to lose data.

By cloud proponents, I mean people who sell cloud products and services. Cloud storage vendor Nirvanix has one of its five worldwide data centers in Japan. Although the Nirvanix data center is 200 miles from the area where Japan has been hit by an earthquake, tsunami and potential nuclear meltdown, it has offered to move any customer data from the Japan site to one of its other data centers for free. Nirvanix also has facilities in Los Angeles, New York, Dallas, and Frankfurt, Germany.

Several of Nirvanix’s approximately 50 customers in Japan have taken the vendor up on its offer, Nirvanix CEO Scott Genereux said. But he said the offer was made mostly to give customers peace of mind. He said the data center has its own power and shouldn’t be affected by rolling blackouts expected in Japan. And even if the building were wiped out, all customer data could be restored at another site.

“Without having to touch tape filers, aging NAS filers onsite or a physical RAID box on the floor, customers can move their data,” he said. “They don’t have to wait for a repair specialist to come and add drives.

“This is the power of the public cloud. Customers can select the location of their data. You can’t do that with a private cloud. That’s just isolated storage.”

Of course, companies using private clouds or no clouds can also migrate data offsite. That’s one of the keys to good disaster recovery – getting data to a safe location in case your main location becomes unavailable. And the cloud can be a simple way to do that.

Dick Mulvihill, managing partner of Chicago-based data protection cloud provider HexiSTOR, said major disasters have eased a lot of his customers’ fears about trusting the management of their data to a separate party.

“The whole mindset of worry about data going off site to the cloud has changed,” he said. “It’s becoming a more popular option because of the simplicity and automation of the cloud. For dealing with natural and man-made disasters, using the cloud is good governance.”

March 18, 2011  12:21 PM

Dell set to roll out NAS, dedupe across storage platform

Dave Raffo

Now that Dell has closed on its Compellent acquisition, the next steps in its storage product expansion will be to add Exanet scalable NAS and Ocarina primary data deduplication to its two main platforms.

The $820 million Compellent acquisition gives Dell a second SAN product to go with its 2008 EqualLogic iSCSI SAN buy. Dell executives say the smaller 2010 acquisitions of Exanet and Ocarina will complement those SAN purchases.

Dell is close to completing the integration of Exanet's clustered NAS file system with EqualLogic and entry-level PowerVault storage. Those products will launch in mid-year, probably at the inaugural Dell Storage Forum conference in June. Later this year, Dell expects to add Ocarina's content-aware compression and data deduplication into the Exanet file system.

Longer range plans – probably in 2012 — include integrating the Exanet file system with Compellent storage, developing dedupe for block storage, and coming up with a common management application for EqualLogic, Compellent and Exanet.

“Exanet and Ocarina will become ubiquitous technologies across the two platforms,” said Travis Vigil, Dell’s executive director of product marketing for enterprise storage. “We’ll have a file system that will provide unified storage for EqualLogic, PowerVault and Compellent. Once you get that common file system, we can take things that EqualLogic and Compellent have done with automated tiering and load balancing within the array and do that across the storage environment.”

Vigil admitted there is overlap between EqualLogic – which plays in the low to middle of the midrange – and Compellent, which sells into the middle and higher end of the midrange. The plan is to extend Compellent higher into the enterprise to fill some of the gap left when Hewlett-Packard outbid Dell for 3PAR last year. “If you don’t have overlap, you have a gap,” Vigil said.

Vigil said Dell will not sell the Ocarina Storage Optimizer dedupe appliances that Ocarina sold before the acquisition, but will support customers who bought the products.

Dell also will continue to OEM EMC Clariion SAN, Celerra NAS and the smaller Data Domain backup devices for the current lifecycle of those products, but Vigil said "our focus is on Dell IP and the fluid data architecture." Dell will resell but not OEM the EMC VNX unified storage systems that will eventually replace the Clariion and Celerra families.

March 16, 2011  8:36 PM

IT Pulse shows importance of storage in IT

Randy Kerns

I'm always looking at how storage is perceived by the IT industry and its customers. It's obvious to me that storage is critical for successful IT, but that understanding seems to wax and wane with others. One positive indicator of the importance of storage was the IBM Pulse event a few weeks ago.

IBM Pulse focuses on service management, mostly software solutions that provide services for IT. These services involve the data center, security, facilities, and asset management. Content is divided into six streams, and each stream is divided into tracks. That gives attendees the opportunity to check out a succession of sessions on topical areas. It also made it easy to gauge the coverage of storage at this service management conference.

I was impressed by the quantity of storage-related presentations and the level of information provided. One entire track in the Service Management for the Data Center stream was dedicated to storage management. Another stream covered Security and Compliance Management and touched on storage. Product presentations were given at the level of what a systems administrator and an IT manager would need to know.

In addition to the speakers on storage topics, there were exhibits and labs for discussions and demonstrations. I had interesting and lively discussions with technical people at these events.

The importance of storage is clear among all vendors today – so much so that the discussions cross many areas that may not have included storage in the past. IBM deals with the entire scope of IT, yet its conference that is not perceived to be about storage covers a great deal of storage management software and storage systems. IBM's GM of storage Doug Balog and vice president of storage marketing Dan Galvan were at the event.

This shows that storage is a prime area of focus for IT vendors, and that it is not only a necessary piece of IT but a critical one.

March 15, 2011  8:17 PM

Better MLCs should help enterprise SSD adoption

Dave Raffo

When it became evident that solid state drives (SSDs) were still moving at a trickle into the enterprise about a year ago, industry experts said it would take two things to push their adoption. One was automatic tiering software and the other was enterprise-grade multi-level cell (MLC) flash that would bring down the price.

Now most storage array vendors have automated tiering software. And with Seagate's launch of its Pulsar SSDs today, several of the leading drive makers have MLC devices to go with the more expensive single-level cell (SLC) drives on the market. So will SSDs now ramp quickly in the enterprise?

Probably not, although their use will certainly increase at a greater rate than it has.

Gartner analyst Joseph Unsworth said the enterprise SSD market went from around 320,000 units and $485 million in sales in 2009 to over 1 million units and over $1 billion in sales last year. He forecasts 9.4 million units and $4.2 billion in sales by 2015.

Seagate's internal projections see SSDs catching up to 3.5-inch hard drives in the enterprise in units shipped in 2013, but not making much of a dent in 2.5-inch hard drives.

“The enterprise doesn’t turn on a dime,” Unsworth said. “There are long qualifications and there’s still confusion from end users. They know SSDs are important, but traditional data center professionals aren’t saying ‘We have to have this.’ They’re deployed in application workloads where they make the most sense, like search, social media, data warehousing and streaming video.”

MLC drives are cheaper than SLC drives but they wear out a lot faster. Until recently they were limited to consumer devices, but they are working their way into the enterprise. IBM said it will ship STEC MLC drives on midrange and enterprise storage arrays, and Seagate's MLC drives will likely find their way into arrays from EMC and other array vendors.

Still, Unsworth added that pricing is decreasing at a slower rate for SSDs in storage systems than in servers.

He said price erosion on the server side is forecast at about 34% over the next five years, with storage expected to drop about 30%. “It will take more MLC, more [product] competition and more scale to bring prices down,” he said.
