Storage Soup


April 19, 2011  1:35 PM

Seagate pushes deeper into SSDs with Samsung acquisition

Dave Raffo

Seagate’s $1.375 billion acquisition of Samsung’s hard drive business today also strengthens Seagate’s solid state drive (SSD) hand by extending the NAND flash partnership between the two vendors.

Samsung sells hard drives for PCs and consumer electronics, so the SSD part of the deal is the key piece for enterprises.

“This provides Seagate with an important source of leading-edge NAND supply and early visibility into the next-generation of NAND,” Seagate CEO Steve Luczo said on a conference call to discuss the deal.

Seagate and Samsung announced a NAND partnership last August. Seagate launched its first enterprise SSDs in March with its Pulsar.2 multi-level cell (MLC) and Pulsar XT.2 single-level cell (SLC) drives, both built with Samsung NAND chips. Seagate was slow entering the SSD market, but Luczo said the Pulsar products should be reaching the market through storage system OEM partners soon, and that Seagate is on schedule for its next generation of SSDs in partnership with Samsung. Still, Seagate’s OEM customers wanted assurances that the Seagate-Samsung arrangement would remain intact.

“This addresses an issue that customers have raised,” he said. “While they have a lot of confidence for Samsung and Seagate to design flash products, there always has been a bit of a concern that without a formal supply agreement, what was the whole package going to look like?”

Seagate also sells hybrid systems with SSDs and hard drives.

The Samsung deal, which Seagate expects to close around the end of the year, is the second major disk drive merger in barely a month. Western Digital said in March it intends to buy disk drive rival Hitachi Global Storage Technologies (HGST) for $4.3 billion.

Luczo said the consolidation reflects the growth of storage capacity worldwide and the need for investment in new storage technologies.

“Demand for storage is accelerating,” he said. “Petabyte growth is very strong and has been for the last six to eight quarters, even in an economy that has been lackluster.”

April 18, 2011  7:51 PM

Processors for storage systems follow server/PC roadmap

Randy Kerns

Recent storage system launches – such as EMC’s Isilon roll-out last week – included new generations of Intel processors that provide greater processing power and increased performance. This has become a familiar pattern — the latest generations of processors become available, and soon after storage system announcements follow with the incorporation of the new processor technology.

That marks a major change in storage system architectures. When I started in storage system development, custom microprocessors were designed with special characteristics for handling data movement. Assembler-level languages were written for each custom processor, and the surrounding data flow logic was developed to create the heart of the storage system. It was a badge of honor to have created the custom processor and the programming language for a generation of storage systems.

Over time, the use of special purpose but standard processors became common, as did more general purpose compilers and debuggers. These processors evolved into a separate business with offerings such as the AMD 29000. This took the custom processor design and programming languages and tools away from storage system development, but left the storage application software and the logic around moving the data and the interfaces.

As the progression continued, the use of commodity processors along with supporting logic chips became prevalent in storage systems. This has reduced design demands and provided greater economies for components. Many systems today even use standard or nearly standard motherboards of the type found in servers as the underlying hardware in storage systems. That leaves storage system design to the application software and the hardware configuration. In some cases, the storage application can run within a virtual machine on a physical server.

The move toward commodity hardware was enabled by tremendous advances in processor speed and functionality. The investment in server/PC technology can be leveraged in storage, which on its own could never sustain the investment required to make those advances. The other side of the coin is that this continual advancement requires the storage system hardware to change along with each new server generation, which means a particular hardware configuration will be offered only as long as the underlying server/PC hardware remains in production. That’s about a 12-month cycle at best. New (meaning updated) versions of the storage system hardware will appear on a fairly regular basis by necessity.

This is mostly good for the IT customer. Newer, faster, and less expensive storage systems will continue to be delivered. But it also means that support for a storage system will have a finite life. There will come a time when the storage system must be upgraded because a replacement controller is no longer available once the spares stock has been depleted and no more are being manufactured. That raises bigger concerns when planning the lifespan of those systems, the amortization of the investment, and the eventual migration of data.

The updated processors in storage systems such as EMC’s Isilon are usually accompanied by other advances, such as new or improved features delivered with the storage application and support for newer interfaces. So despite the shorter hardware lifecycle, the IT customer does benefit compared with the costs and pace of technology in the custom designs of the past.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


April 13, 2011  1:02 PM

SNW – The New, the Popular, and the Next

Randy Kerns


Storage Networking World always provides a chance to see what vendors want to tell the world about their products. It also includes presentations focused on education and general information for storage managers. These two perspectives often contrast, showing that the adoption of new technologies in data centers runs at a different pace than the delivery of new technology.


The vendor offerings are interesting to look at from a quantitative perspective: what is popular as a focus area? At last week’s SNW in Santa Clara, Calif., there were two distinct areas of popularity (quantitatively): solid state drives (SSDs) and anything to do with “cloud.” The cloud label is being liberally applied to many different products and operational environments. This does not mean there were no storage systems and management software solutions presented, but they were greatly outnumbered by SSD and cloud offerings.


These were the popular products at the event, and certainly many of the offerings were new. The “next” from vendors consisted of novel offerings that may become the popular products or solutions of the future. More of the new technologies seem to be unveiled at the fall SNW than at the spring show.


Fitting into the new category, though not really new products themselves, were integrated or coordinated solutions meant to replace or help manage point products. This is recognition that there is another level of opportunity beyond delivering a new or novel product. That opportunity lies in integrating solutions into the operational workflow of an IT environment.


(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


April 11, 2011  5:11 PM

Cloud evaporates over Iron Mountain

Dave Raffo

Iron Mountain today confirmed it will end its Virtual File Store (VFS) and Archive Service Platform (ASP) cloud storage services because of “modest levels of adoption.”

Iron Mountain released a statement confirming what Gartner first disclosed in a report last week. According to Gartner, Iron Mountain stopped accepting new customers for its public cloud storage business on April 1 and would officially end the service “no sooner than” the first half of 2013.

“Iron Mountain did recently notify customers of our Virtual File Store and Archive Service Platform that we are retiring these two commodity cloud-storage solutions,” the Iron Mountain statement said. “This decision only affects those using Virtual File Store, a low-cost cloud storage option for inactive files, and technology partners who use the Archive Service Platform as a general purpose cloud for storing their customers’ data. As the Gartner report notes, public cloud service offerings like these have seen modest levels of adoption.”

Iron Mountain has offered to transfer VFS customers to its higher value File System Archiving (FSA) service next year. FSA is a hybrid cloud that uses policy-based archiving onsite and in the cloud. ASP customers will have to move to an alternate service provider or move their archiving in-house.

Iron Mountain launched VFS in February of 2009, and followed with the ASP that let software vendors integrate the Iron Mountain API to use its cloud back end.

EMC also discontinued its Atmos Online public cloud service last July when it decided to market Atmos exclusively to service providers who could build their own cloud offerings. Gartner points out that VFS, Atmos Online and startup Vaultspace’s shuttered public cloud service all focused only on storage without cloud compute services. Nirvanix and Zetta are the surviving pure-play NAS public cloud providers, with others offering gateways for hybrid cloud services.

Iron Mountain’s problems could be internal as well as industry-wide. The vendor took a $255 million charge against its struggling digital business in the third quarter of last year and another $29 million charge in the fourth quarter, citing lower-than-expected eDiscovery revenue. The Gartner report said Iron Mountain digital services EVP David Jones acknowledged that profitability pressures were among the reasons for dropping the cloud services.

Nirvanix and Zetta are offering to migrate data to their clouds for all Iron Mountain customers free for 30 days. Nirvanix said it would then offer those customers the option of implementing one of its cloud services.

Zetta said it can migrate data stored in the Iron Mountain service automatically to the Zetta Storage Service with its free ZettaMirror software.

Andres Rodriguez, CEO of NAS cloud gateway vendor Nasuni, said Iron Mountain’s move is not indicative of the cloud storage market. Iron Mountain was one of the cloud providers Nasuni could move file data off to.

“While Iron Mountain’s shutdown of its cloud service is another symptom of consolidation, the cloud storage market continues to expand,” Rodriguez said. “The fact of the matter is, cloud storage infrastructure providers must grow very rapidly, or else they will not achieve the economies of scale that enable them to be profitable. Iron Mountain is an example of a company that was unable to achieve this scale. Over time, we will see more consolidation and a few extremely large players will dominate the market.”


April 8, 2011  3:43 PM

Dell puts storage to use in bundled stacks

Dave Raffo

Dell’s entry into the integrated stack game this week provides another reason why it switched its storage strategy to developing its own products rather than reselling EMC’s.

While Dell will partner for some pieces of its data center and cloud strategy – mainly networking hardware and server virtualization software — it wants control over the core hardware elements.

The new vStart integrated virtualization appliances include EqualLogic storage arrays, and the Email and File Archive products use the Dell DX Object Storage system as the central repository. The archive products can also use EqualLogic storage. Dell is also leaning on two of its key data protection software partners for the archiving products: Symantec Enterprise Vault and CommVault Simpana are the options for moving and managing data.

Dell will add Compellent storage to these new products eventually.

At the start, Dell has two versions of vStart: one built for 100 virtual machines and the other for 200 virtual machines.

Like Hewlett-Packard and IBM, Dell wanted to build its own stacks with its core components instead of following Cisco’s route of using its UCS Servers with EMC or NetApp storage.

Dell vice president for Enterprise Solutions Praveen Asthana calls vStart “a way of delivering units of virtual machine assembled from Dell hardware and storage. We’ve done a lot of work in terms of pre-integrating and figuring out the optimal way of assembling virtual desktops, and created reference architecture.”

Dell may have committed one architectural no-no already, however. The lower-end vStart configuration uses one EqualLogic array, leaving it with a single point of failure. The vStart for 200 VMs uses two EqualLogic arrays.


April 6, 2011  12:24 PM

Storage buying requires an application perspective

Randy Kerns

Much of the information provided about storage systems consists of details about their specifications – known as speeds and feeds. This has been the norm, and if that information were not presented prominently, many would think the vendor was trying to draw attention away from flaws in its product.

But times have changed, and the Information Technology professionals making decisions on acquiring the right storage system need to look at different factors.

What is different about making the decision, and why there has been a change, is interesting and may require explanation. First, let’s look at the reason why. In general, the number of storage professionals, whether they are called storage administrators or not, has declined significantly. Professionals with exclusively storage responsibilities are mostly found only in the largest IT operations.

That leaves fewer storage specialists, mostly because of staff reductions. The IT people remaining have to take on multiple responsibilities. Additionally, the requirements for storage and the availability of solutions that require less detailed management or control have enabled successful deployment and operation without a storage specialist.

Changes in staffing have led to changes in decision-making when acquiring storage systems. The primary focus now is usually applying the storage system to an application to meet requirements for business issues. In this scenario, the storage system is integrated into the environment as a solution for one application. It is implemented and managed for the specific requirements of that application. How the storage can quickly meet the application needs is the most important consideration.

The need for information about speeds and feeds is still there, but that is not the most important consideration for the IT professional making the evaluation. At least it shouldn’t be. Some products (and product marketing) continue to focus on that message. There are more opportunities for systems that solve a business problem and quickly meet application needs.

There will be a form of natural selection occurring here. Vendors that understand this and can adapt to presenting their products in terms of customer needs will ultimately be more successful. Those stuck in their old methods and thinking will have a more difficult time. The Evaluator Group publishes a series of Evaluation Guides at www.evaluatorgroup.com on how to make informed decisions when purchasing storage. These guides attempt to focus on what is important to look at when purchasing storage solutions.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


April 4, 2011  11:46 PM

Belluzzo takes Quantum leap from CEO job; COO Gacek steps up

Dave Raffo

Backup vendor Quantum switched CEOs today. Rick Belluzzo stepped down after nine years and Jon Gacek stepped up from his role as COO and president to replace him.

Belluzzo remains chairman of Quantum’s board, and both men said the vendor’s goals remain the same under Gacek: to remain the leader in open systems tape libraries, start taking disk backup market share from EMC’s Data Domain and expand its StorNext file system’s role in rich media and archiving.

“We’re in a different place than we were over the last few years,” Gacek said in an interview today. “The company needed to transform into a systems company [from a tape company]. I feel like with our products and value to end users, we’re in a much different place. I don’t think people understand how good the new DXi [data deduplication] software is yet. In bakeoffs against Data Domain, we’re more than competitive.”

Quantum had to remake its disk backup business strategy after EMC spent $2.1 billion to buy Data Domain in 2009. Before that, EMC sold Quantum’s first-generation DXi software through an OEM deal. That OEM deal went away after the Data Domain acquisition, and Quantum has struggled to recoup that lost revenue through sales of its branded DXi appliances. It has since overhauled its entire DXi hardware platform, and upgraded the DXi software to version 2.0 in January.

“EMC tends to just badmouth our technology,” Gacek said. “They say, ‘We used to sell it, it’s not very good.’ But that was a couple of generations ago. We tell the customer, ‘We’ll bring the product in, go ahead and run it against Data Domain.’ Our win rates go up when we do that.”

Still, Quantum hasn’t made much of a dent in Data Domain’s dedupe backup business. While announcing the CEO change today, Quantum disclosed that its revenue for last quarter was around $165 million – near the low end of its forecast and about the same as a year ago.

Gacek said the strategy is to take on Data Domain through Quantum’s own channel, but he’ll keep the door open to other large OEM deals to try and pick up the slack.

“We have to control our own destiny, and that’s why we’re focused on our branded channel,” he said. “I don’t run the other companies, but it looks like EMC is running the table on the other guys. EMC is taking IBM and HP and Oracle to the woodshed [for disk backup]. I’m not chasing OEMs, but I do think the space will get more and more competitive.”

Quantum has an OEM deal with Fujitsu, but that doesn’t come close to replacing the lost revenue from EMC.

Gacek joined Quantum from ADIC after Quantum acquired its tape competitor in 2006. [Quantum’s dedupe and StorNext technology come from ADIC]. He began at Quantum as CFO, became COO in 2009, and had president added to his title in January. Belluzzo said the management transition was planned for more than a year, but Gacek said he had no assurances in January that he would be the next CEO.

It’s hard to believe Belluzzo didn’t feel at least some pressure to resign. Quantum’s stock price opened at $2.50 today – well above the 12 cents it dipped to in late 2009 but still far below the $4.02 it was at before the economy tanked in late 2008. Since he became CEO in Sept. 2002, Quantum’s stock has risen 21% compared to Nasdaq’s overall growth of 145%.

While Quantum still hasn’t been able to record significant growth in the hot dedupe market, Belluzzo said he’s leaving the company in good shape.

“Although we have had our share of challenges, the company is well positioned to play an expanded role in the storage industry,” he said.


March 30, 2011  3:26 PM

Establishing data relevance can help archiving strategy

Randy Kerns

As highlighted in many reports, the massive amounts of data being created today cause concerns for Information Technology pros – especially those who manage storage.

These concerns involve how to process data, what to keep, and where to put it. Issues include how to present the information, what the requirements for that data are, and how much it will cost to retain it. Most of the newly created data is in the form of files.

The massive amount of data being created is great news for storage vendors because it means that more storage is required. But storing all this newly created data may be unsustainable for organizations because of the cost involved, as well as the physical space and power that storage systems use.

All of this data may require a new approach to storage and archiving. That approach involves creating a method to establish data relevance as part of the analytics performed when data is ingested. Relevance implies that there would be immediate data analysis. For example, data received from a source (monitoring equipment, feedback data, etc.) would immediately go into a data analytics process. The relevant source data received would go to an archiving storage system while the analytic processing continued. The valuable information in intermediary form would be retained in the analytic nodes or on a shared storage system.

The source data sent to the archive would be available for data mining or reprocessing if required. The archive system would handle the data protection process – a one-time protection for new data based on the requirements established for the business.

The processing for establishing the data relevance could be performed by the analytics engine or as part of advanced functions in a storage system. The data relevance engine would move data to the most appropriate location based on a set of rules applied to the analysis. Some data could be retained on primary storage, but the majority would be stored directly on a more economical archiving system.

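As a rough illustration of the idea (and not any vendor’s actual implementation), a relevance engine boils down to scoring data at ingest and routing it by rule. The record fields, threshold and tier names in the sketch below are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Record:
    source: str        # e.g. "monitoring", "feedback"
    size_bytes: int
    relevance: float   # score assigned by the ingest-time analytics, 0.0 to 1.0


# Hypothetical placement rule: only high-relevance data stays on primary
# storage near the analytics nodes; everything else goes straight to a
# cheaper archive tier, where it remains available for later mining.
RELEVANCE_THRESHOLD = 0.8


def place(record: Record) -> str:
    """Return the storage tier a record should land on after analysis."""
    return "primary" if record.relevance >= RELEVANCE_THRESHOLD else "archive"


for rec in (Record("monitoring", 2048, 0.95), Record("feedback", 4096, 0.30)):
    print(rec.source, "->", place(rec))   # monitoring -> primary, feedback -> archive
```

In practice the rules would come from the business requirements mentioned above, and the scoring would be produced by the analytics engine rather than supplied with the record.
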
This may not really be a new model, but it reduces the steps and time it takes to manage the data. Making a solution like this available for IT would have high economic value and immediate benefit in dealing with the massive amounts of data being created.


March 29, 2011  12:40 PM

Permabit aims its primary dedupe at low-end NAS market

Dave Raffo

Permabit is expanding its Albireo primary data deduplication application with the Albireo Virtual Data Optimizer (VDO), which can bring dedupe to Linux-based SMB NAS systems.

Albireo, launched by Permabit last year, lets storage vendors embed inline or post-process deduplication in their systems. OEM partners BlueArc, Xiotech, and LSI Engenio have signed on to use Albireo for enterprise storage systems. VDO is aimed at products on the other end of the storage market.

Permabit CTO Jered Floyd said the original Albireo is a software library that vendors can integrate with their file systems or block storage systems. It sits out of the data path, and lets the storage system control the data. VDO is a plug-in that sits in the data path between the file server and disk infrastructure. Floyd said that lets VDO add compression if its OEM partners choose to.

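For readers who want a feel for what a deduplication layer in the data path actually does, here is a minimal sketch of content-hash chunk deduplication with optional compression. This is not Permabit’s code or VDO’s design; the fixed 4 KB chunks, SHA-256 hashing and zlib compression are illustrative assumptions only.

```python
import hashlib
import zlib


class DedupeStore:
    """Toy dedupe layer: chunks are indexed by content hash, so identical
    chunks are stored only once; chunks may be compressed before storing."""

    def __init__(self, chunk_size=4096, compress=True):
        self.chunk_size = chunk_size
        self.compress = compress
        self.index = {}     # digest -> stored (possibly compressed) chunk
        self.refcount = {}  # digest -> number of logical references

    def write(self, data):
        """Split data into fixed-size chunks and return the list of chunk
        digests (the "recipe") that represents the logical object."""
        recipe = []
        for off in range(0, len(data), self.chunk_size):
            chunk = data[off:off + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.index:   # new chunk: store it once
                self.index[digest] = zlib.compress(chunk) if self.compress else chunk
            self.refcount[digest] = self.refcount.get(digest, 0) + 1
            recipe.append(digest)
        return recipe

    def read(self, recipe):
        """Reassemble a logical object from its chunk digests."""
        parts = (self.index[d] for d in recipe)
        if self.compress:
            return b"".join(zlib.decompress(p) for p in parts)
        return b"".join(parts)


# Two writes of largely identical data end up sharing most chunks.
store = DedupeStore()
r1 = store.write(b"A" * 16384)
r2 = store.write(b"A" * 16384 + b"B" * 4096)
assert store.read(r1) == b"A" * 16384
print(len(store.index), "unique chunks for ~36 KB of logical data")  # prints 2
```

A real product would use content-defined chunking, a persistent index, and garbage collection keyed off the reference counts, but the space savings come from the same principle: identical chunks are stored once.
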
“It’s different from the Albireo deduplication library, because now we do own the data,” he said. “It’s not a concern for these OEMs, who are already depending on [open source] third party software for data placement and other capabilities.”

NAS vendors such as Overland Storage (with its Snap platform), Buffalo Technology, NetGear, and Cisco Linksys would be candidates for VDO. Permabit CEO Tom Cook said there are at least 25 vendors that would fit the bill, and they make up about 18% of the NAS market. VDO can also be used in block and unified storage systems.

“They could do their own development, but they have other priorities,” Cook said of these NAS vendors. “Their other option would be a standalone dedupe appliance that sits in front of their system, and that drives dedupe right out of the market from a price standpoint.”

He expects VDO to be available to OEMs in the second half of the year.

Cook also confirmed what had been a poorly kept secret in the storage industry – LSI is working to add Albireo to its SAN platform. And he said he doesn’t expect NetApp’s pending purchase of LSI’s Engenio storage division to change that. NetApp has its own primary dedupe for its FAS storage platform, which is one of the main competitors for Albireo.

“The NetApp [Engenio] deal presents interesting business options for us,” Cook said. “I can’t comment beyond that.”


March 24, 2011  1:10 PM

DataDirect Networks ready to aim directly at NetApp NAS

Dave Raffo

When NetApp closes its $480 million acquisition of LSI’s Engenio storage division, it will move into head-to-head competition with high performance computing storage vendor DataDirect Networks in markets where NetApp barely plays today. And DDN will soon respond by moving into NetApp’s mainstream NAS space.

DDN is preparing to launch – probably next month – a NASScaler product that DDN’s EVP of strategy and technology Jean-Luc Chatelain said will be “aimed at the NetApp market” rather than HPC.

“It has standard IT NAS-type behavior,” Chatelain said. “We realized the demand for the density, bandwidth, capacity and performance that we used to see in specialty machines has migrated toward the traditional NAS market. It’s the standard NFS behavior on top of high performance computing.”

The NASScaler will be DDN’s fourth file storage system, to go with its xStreamScaler for media and entertainment, GridScaler for cloud and HPC and ExaScaler for supercomputing.

DDN bills itself as the largest privately held storage vendor, an assessment IDC agrees with. DDN executives say the company generated $180 million in revenue in 2010 and grew about 40% in both 2009 and 2010. The vendor’s storage sells into what EMC calls “big data” markets, which are the same ones NetApp intends to chase with LSI Engenio. Those markets include HPC, media and entertainment, digital security, and serving as a platform for cloud providers.

It will take a while before DDN can provide NetApp with solid competition in mainstream NAS, but the vendors will contend for both end-user customers and OEM partners in the HPC space. The Engenio 7900 Storage System competes with DDN’s products and is sold by OEMs including Cray, Teradata and SGI.

“It will be interesting to see what happens now,” Chatelain said. “NetApp is not focused on the domain where we play. NetApp is not a brand name in the world of high performance computing or rich media. We are known as people committed to those verticals.”

