The common understanding of tiering for many in IT is what is called external tiering today. This includes different storage systems with different performance, capacity, and — more importantly — cost characteristics. These tiers are called tier 1, tier 2, tier 3, and sometimes even a tier 4. These tiers may include a SAN with Fibre Channel or high-performing SAS drives, another box with capacity SATA drives for archiving or backup, and even a tape library.
When DRAM solid state storage became more generally available as an external device, it was marketed as a tier 0 device in the storage hierarchy. Most of IT still looks at tiering that way.
“Inside the box” tiering is not new, but the use of NAND flash SSDs has crystallized the focus on this type of tiering. We’ve recently seen various approaches and characteristics that provide real differentiation among the product offerings here. Evaluator Group covers these tiering products and their differences on our website. The materials from the vendors for these solutions highlight the value of the systems’ tiering but do not address this new way of looking at tiering. These systems can include SSDs, high-performance spinning disk, and capacity disk in one box.
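To make the in-the-box tiering concept concrete, here is a minimal, hypothetical sketch of the kind of policy such a tiering engine applies: it watches how often each extent of data is accessed and moves hot extents up to SSD and cold ones down to capacity disk. The tier names and access-rate thresholds are illustrative assumptions, not any vendor's actual implementation.

```python
# Hypothetical sketch of automated sub-LUN tiering inside one array.
# Tier names and access-rate thresholds are illustrative only.

def choose_tier(accesses_per_hour):
    """Map an extent's recent access rate to a tier."""
    if accesses_per_hour >= 100:
        return "ssd"
    if accesses_per_hour >= 10:
        return "fast_disk"
    return "capacity_disk"

def rebalance(extents):
    """Return the (extent_id, from_tier, to_tier) moves the engine would make.

    `extents` maps an extent id to (current_tier, accesses_per_hour).
    """
    moves = []
    for ext_id, (current_tier, rate) in extents.items():
        target = choose_tier(rate)
        if target != current_tier:
            moves.append((ext_id, current_tier, target))
    return moves

if __name__ == "__main__":
    extents = {
        "ext-1": ("capacity_disk", 500),  # hot data stuck on slow disk
        "ext-2": ("ssd", 2),              # cold data wasting flash
        "ext-3": ("fast_disk", 50),       # already placed correctly
    }
    print(rebalance(extents))
```

Real tiering engines differ mainly in the granularity of the extents they track and in how often they rebalance, which is where much of the product differentiation mentioned above shows up.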
For those in the storage world, the difference is so well understood that vendors don’t often place it in context for the IT world in general.
This background of tiering within storage brings us back to my regular interactions with IT directors and CIOs. There is a lack of clarity in the terminology because of the evolution that has occurred, and not everyone is at the same level of understanding. Consequently, putting the discussion in context is necessary to ensure that the conversation follows the same path with different people. It is probably too late to change the descriptive terminology, and given enough time, the base understanding will shift to the point where this becomes less of a problem. In the meantime, the education sessions offered at conferences such as SNW and Storage Decisions, and more in-depth classes from other organizations such as Evaluator Group, work toward raising awareness of the difference.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
So is this a sign that cloud storage isn’t catching on, or the beginning of consolidation in the market? It’s certainly not yet a sign of doom for cloud storage. Every major storage vendor has public and/or private cloud products and services, and they’re pushing them hard. Iron Mountain and EMC – which closed Atmos Online last year – still have other cloud offerings.
And whenever several startups tackle a market, at least a few will fail. Cirtas is among a bunch of companies that began offering gateways to create hybrid storage clouds within the last year or so.
The CEOs of two of the cloud gateway vendors, Nasuni and StorSimple, say it was a coincidence that Iron Mountain and Cirtas failed in the same week. Nasuni’s Andres Rodriguez and StorSimple’s Ursheet Parikh agree that the two failed for different reasons: Iron Mountain was too expensive at 65 cents a gigabyte, and Cirtas’ SAN cloud gateway lacked enterprise SAN features.
People in the industry say some of the investors who sank $22.5 million into Cirtas in January were unhappy with the product’s lack of success, and Cirtas is looking either to revamp its product or to sell off its technology.
Rodriguez and Parikh said Iron Mountain was fighting a losing battle against larger service providers Amazon, Google, Microsoft and AT&T. Rodriguez predicted that other smaller cloud storage providers will also falter.
“No one buys technology, and the cloud is no different,” Rodriguez said. “To turn it into a real business, you better have something that works and offers a compelling value proposition. Iron Mountain learned the hard way that offering the back-end storage is a game for giants. It is simply not possible to compete with the economies of Amazon, Microsoft or Google. I believe it is just a matter of time before other service providers follow [Iron Mountain].”
Parikh said Iron Mountain had trouble adapting its core business model to the cloud.
“Iron Mountain knows how to manage tapes, but doesn’t know how to manage services,” StorSimple’s Parikh said. “Even at 65 cents a gig, it was losing money.”
As for Cirtas, Parikh said its Bluejet Cloud Storage Controller lacked the redundancy needed for a SAN product.
“You cannot deliver block storage without dual controllers,” he said. “If it were a file product like Riverbed [Whitewater], you can use it for backup. High availability and reliability has to be table stakes for block storage. No one gets fired for buying expensive storage. When a storage guy gets fired, it’s for losing data.”
Rodriguez said Cirtas took the wrong approach by trying to sell iSCSI SAN storage in the cloud. “Blocks-to-the-cloud is not technically sound,” he said. “From a business perspective, arrays are the hardest possible place to enter a data center with new technology. People are religious about their arrays. I don’t know exactly what happened, but I know [Cirtas] was having problems in the field.”
The Nasuni Filer is a virtual NAS appliance that turns commodity hardware into a NAS that moves data off to the cloud. The StorSimple Appliance is used mostly to move data from Windows file servers, Microsoft Exchange, SharePoint and VMware to the cloud. “We provide the appliance, and the customer buys the cloud directly with a separate contract with Amazon, Microsoft or AT&T,” Parikh said. “We reduce the cloud provider bill, and that more than pays for our appliance.”
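As a rough illustration of what such a gateway appliance does, the sketch below keeps recently used files in a local cache and pushes the least recently used ones out to a cloud back end once a local capacity limit is exceeded. The class, the size limits, and the dictionary standing in for a cloud API are all hypothetical; this is not Nasuni's or StorSimple's actual design.

```python
from collections import OrderedDict

# Hypothetical sketch of a cloud-gateway cache: hot files stay on local
# disk, least-recently-used files are evicted to a cloud back end.

class GatewayCache:
    def __init__(self, local_capacity, cloud_store):
        self.local_capacity = local_capacity   # bytes of local disk
        self.cloud = cloud_store               # dict standing in for a cloud API
        self.local = OrderedDict()             # file name -> size, in LRU order
        self.used = 0

    def write(self, name, size):
        """Store a file locally, then evict old files if over capacity."""
        self.local[name] = size
        self.local.move_to_end(name)           # mark as most recently used
        self.used += size
        self._evict()

    def _evict(self):
        # Push least-recently-used files to the cloud until we fit locally.
        while self.used > self.local_capacity:
            name, size = self.local.popitem(last=False)
            self.cloud[name] = size
            self.used -= size

if __name__ == "__main__":
    cloud = {}
    gw = GatewayCache(local_capacity=100, cloud_store=cloud)
    gw.write("a.doc", 60)
    gw.write("b.doc", 60)   # total 120 > 100, so "a.doc" moves to the cloud
    print(sorted(gw.local), sorted(cloud))
```

The economics Parikh describes follow from this design choice: only the cold data lands on the metered cloud service, which is what shrinks the provider bill.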
Rodriguez said cloud storage gateways need to be reliable and low cost to succeed.
“This is a business not unlike the hard drive business,” he said. “It’s a total commodity play. It is about quality at a low cost. Nasuni repackages that raw commodity into something businesses can run on.”
Samsung sells hard drives for PCs and consumer electronics, so the SSD part of the deal is the key piece for enterprises.
“This provides Seagate with an important source of leading-edge NAND supply and early visibility into the next-generation of NAND,” Seagate CEO Steve Luczo said on a conference call to discuss the deal.
Seagate and Samsung announced a NAND partnership last August. Seagate launched its first enterprise SSDs in March with its Pulsar.2 multi-level cell (MLC) and Pulsar XT.2 single-level cell (SLC) SSDs using Samsung NAND chips. Seagate was slow entering the SSD market, but Luczo said those Pulsar products should be coming into the market through storage system OEM partners soon and Seagate is on schedule for its next generation of SSDs in partnership with Samsung. Still, Seagate’s OEM customers wanted assurances that the Seagate-Samsung arrangement would remain intact.
“This addresses an issue that customers have raised,” he said. “While they have a lot of confidence in Samsung and Seagate to design flash products, there always has been a bit of a concern that without a formal supply agreement, what was the whole package going to look like?”
Seagate also sells hybrid systems with SSDs and hard drives.
The Samsung deal, which Seagate expects to close around the end of the year, is the second major disk drive merger in barely a month. Western Digital said in March it intends to buy disk drive rival Hitachi Global Storage Technologies (HGST) for $4.3 billion.
Luczo said the consolidation reflects the growth of storage capacity worldwide and the need for investment in new storage technologies.
“Demand for storage is accelerating,” he said. “Petabyte growth is very strong and has been for the last six to eight quarters, even in an economy that has been lackluster.”
That marks a major change in storage system architectures. When I started in storage system development, custom microprocessors were designed with special characteristics for handling data movement. Assembler-level languages were written for each custom processor, and the surrounding data flow was developed to create the heart of the storage system. It was a badge of honor to have created the custom processor and the programming language for a generation of storage systems.
Over time, the use of special purpose but standard processors became common, as did more general purpose compilers and debuggers. These processors evolved into a separate business with offerings such as the AMD 29000. This took the custom processor design and programming languages and tools away from storage system development, but left the storage application software and the logic around moving the data and the interfaces.
As the progression continued, the use of commodity storage processors along with the support logic chips became prevalent. This has reduced the design demands and provided greater economies for components. Many systems today even use standard or nearly standard motherboards of the type found in servers for the underlying hardware in storage systems. That leaves storage system design to the application software and the hardware configuration. In some cases, the storage application can run within a virtual machine on a physical server.
The move towards commodity hardware was enabled by tremendous advances in processor speed and functionality. The investment in server/PC technology can be leveraged in storage, which alone could never sustain the investment required to make those advances. The other side of the coin is that this continual advancement requires the storage system hardware to change along with the servers, which means that a particular hardware configuration will be offered only as long as the corresponding server/PC hardware is in production. That’s about a 12-month cycle at best. New (meaning updated) versions of the storage system hardware will appear on a fairly regular basis by necessity.
This is mostly good for the IT customer. Newer, faster, and less expensive storage systems will continue to be delivered. But it also means that support of the storage systems will have a finite life. There will be a time when it is necessary to upgrade the storage system when a replacement controller is no longer available because the spares stock has been depleted and no more are being manufactured. This causes a bigger concern when planning the lifespan for those systems, the amortization of the investment, and the problematic migration of data.
The updated processors in storage systems such as the one from EMC Isilon are usually accompanied with other advances such as new or improved features delivered with the storage application and newer interface support. So despite the shorter hardware lifecycle, the IT customer does benefit compared to the costs and progression of technology from custom designs of the past.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Storage Networking World always provides a chance to see what vendors want to inform the world about from their product perspective. It also includes presentations focused on education and general information for storage managers. These two perspectives often contrast, showing that the adoption rates of new technologies in data centers often runs at a different pace than delivery of new technology.
The vendor offerings are interesting to look at from a quantitative perspective: what is popular as a focus area? At last week’s SNW in Santa Clara, Calif., there were two distinct areas of popularity (quantitatively): solid state devices (SSDs) and anything to do with “cloud.” The “cloud” label is being liberally applied to many different products and operational environments. This does not mean there were no storage systems and management software solutions presented, but they were greatly outnumbered by SSD and cloud offerings.
These were the popular products at the event, and certainly many of the offerings were new. The “next” category from vendors included some novel offerings that may become the popular products or solutions of the future. More of the new technologies seem to be unveiled at SNW in the fall rather than the spring.
Fitting into the new category but not really representing particularly new products were integrated or coordinated solutions meant to replace or help manage point products. This is recognition that there is another level of opportunity beyond delivering a new or novel product. That opportunity is in integrating solutions into the operational workflow in an IT environment.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Iron Mountain released a statement confirming what Gartner first disclosed in a report last week. According to Gartner, Iron Mountain stopped accepting new customers for its public cloud storage business on April 1 and would officially end the service “no sooner than” the first half of 2013.
“Iron Mountain did recently notify customers of our Virtual File Store and Archive Service Platform that we are retiring these two commodity cloud-storage solutions,” the Iron Mountain statement said. “This decision only affects those using Virtual File Store, a low-cost cloud storage option for inactive files, and technology partners who use the Archive Service Platform as a general purpose cloud for storing their customers’ data. As the Gartner report notes, public cloud service offerings like these have seen modest levels of adoption.”
Iron Mountain has offered to transfer VFS customers to its higher-value File System Archiving (FSA) service next year. FSA is a hybrid cloud that uses policy-based archiving onsite and in the cloud. ASP customers will have to move to an alternate service provider or move their archiving in-house.
Iron Mountain launched VFS in February of 2009, and followed with the ASP that let software vendors integrate the Iron Mountain API to use its cloud back end.
EMC also discontinued its Atmos Online public cloud service last July when it decided to market Atmos exclusively to service providers who could build their own cloud offerings. Gartner points out that VFS, Atmos Online and startup Vaultspace’s shuttered public cloud service all focused only on storage without cloud compute services. Nirvanix and Zetta are the surviving pure-play NAS public cloud providers, with others offering gateways for hybrid cloud services.
Iron Mountain’s problems could be internal as well as industry wide. The vendor took a $255 million charge against its struggling digital business in the third quarter of last year and another $29 million charge in the fourth quarter, citing lower eDiscovery revenue than expected. The Gartner report said Iron Mountain digital services EVP David Jones acknowledged the reasons to drop the cloud services included profitability pressures.
Nirvanix and Zetta are offering to migrate data to their clouds for all Iron Mountain customers free for 30 days. Nirvanix said it would then offer those customers the option of implementing one of its cloud services.
Zetta said it can migrate data stored in the Iron Mountain service automatically to the Zetta Storage Service with its free ZettaMirror software.
Andres Rodriguez, CEO of NAS cloud gateway vendor Nasuni, said Iron Mountain’s move is not indicative of the cloud storage market. Iron Mountain was one of the cloud providers Nasuni could move file data off to.
“While Iron Mountain’s shutdown of its cloud service is another symptom of consolidation, the cloud storage market continues to expand,” Rodriguez said. “The fact of the matter is, cloud storage infrastructure providers must grow very rapidly, or else they will not achieve the economies of scale that enable them to be profitable. Iron Mountain is an example of a company that was unable to achieve this scale. Over time, we will see more consolidation and a few extremely large players will dominate the market.”
While Dell will partner for some pieces of its data center and cloud strategy – mainly networking hardware and server virtualization software — it wants control over the core hardware elements.
The new vStart integrated virtualization appliances include EqualLogic storage arrays, and the Email and File Archive products use the Dell DX Object Storage system as the central repository. The archive products can also use EqualLogic storage. Dell is also leaning on two of its key data protection software partners for the archiving products: Symantec Enterprise Vault and CommVault Simpana are the options for moving and managing data.
Dell will add Compellent storage to these new products eventually.
At the start, Dell has two versions of vStart: one built for 100 virtual machines and the other for 200.
Like Hewlett-Packard and IBM, Dell wanted to build its own stacks with its core components instead of following Cisco’s route of using its UCS Servers with EMC or NetApp storage.
Dell vice president for Enterprise Solutions Praveen Asthana calls vStart “a way of delivering units of virtual machine assembled from Dell hardware and storage. We’ve done a lot of work in terms of pre-integrating and figuring out the optimal way of assembling virtual desktops, and created reference architecture.”
Dell may have committed one architectural no-no already, however. The lower-end vStart configuration uses one EqualLogic array, leaving it with a single point of failure. The vStart for 200 VMs uses two EqualLogic arrays.
But times have changed, and the Information Technology professionals making decisions on acquiring the right storage system need to look at different factors.
What is different for making the decision and why there has been a change is interesting, and may require explanation. First, let’s look at the reason why. In general, the number of storage professionals, whether they are called storage administrators or not, has significantly declined. Professionals with exclusively storage responsibilities are mostly found only in the largest IT operations.
That leaves fewer storage specialists, mostly because of reductions in staffs. The IT people remaining have to take on multiple responsibilities. Additionally, the requirements for storage and the availability of solutions that require less detailed management or control have enabled successful deployment and operation without the storage specialist.
Changes in staffing have led to changes in decision-making when acquiring storage systems. The primary focus now is usually applying the storage system to an application to meet requirements for business issues. In this scenario, the storage system is integrated into the environment as a solution for one application. It is implemented and managed for the specific requirements of that application. How the storage can quickly meet the application needs is the most important consideration.
The need for information about speeds and feeds is still there, but that is not the most important representation to the IT professional making the evaluation. At least it shouldn’t be. There are some products (and product marketing) that continue to focus on that message. There are more opportunities for systems that solve a business problem and quickly meet application needs.
There will be a form of natural selection occurring here. Vendors that understand and can adapt to representing their products to their customer needs will ultimately be more successful. Those stuck in their old methods and thinking will have a more difficult time. The Evaluator Group publishes a series of Evaluation Guides on how to make informed decisions regarding purchasing of storage at www.evaluatorgroup.com. These guides attempt to focus on what is important to look at regarding the purchase of storage solutions.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Belluzzo remains chairman of Quantum’s board, and both men said the vendor’s goals remain the same under Gacek: to remain the leader in open systems tape libraries, start taking disk backup market share from EMC’s Data Domain and expand its StorNext file system’s role in rich media and archiving.
“We’re in a different place than we were over the last few years,” Gacek said in an interview today. “The company needed to transform into a systems company [from a tape company]. I feel like with our products and value to end users, we’re in a much different place. I don’t think people understand how good the new DXi [data deduplication] software is yet. In bakeoffs against Data Domain, we’re more than competitive.”
Quantum had to remake its disk backup business strategy after EMC spent $2.1 billion to buy Data Domain in 2009. Before that, EMC sold Quantum’s first-generation DXi software through an OEM deal. That OEM deal went away after the Data Domain acquisition, and Quantum has struggled to recoup that lost revenue through sales of its branded DXi appliances. It has since overhauled its entire DXi hardware platform, and upgraded the DXi software to version 2.0 in January.
“EMC tends to just badmouth our technology,” Gacek said. “They say, ‘We used to sell it, it’s not very good.’ But that was a couple of generations ago. We tell the customer, ‘We’ll bring the product in, go ahead and run it against Data Domain.’ Our win rates go up when we do that.”
Still, Quantum hasn’t made much of a dent in Data Domain’s dedupe backup business. While announcing the CEO change today, Quantum disclosed that its revenue for last quarter was around $165 million – near the low end of its forecast and about the same as a year ago.
Gacek said the strategy is to take on Data Domain through Quantum’s own channel, but he’ll keep the door open to other large OEM deals to try and pick up the slack.
“We have to control our own destiny, and that’s why we’re focused on our branded channel,” he said. “I don’t run the other companies, but it looks like EMC is running the table on the other guys. EMC is taking IBM and HP and Oracle to the woodshed [for disk backup]. I’m not chasing OEMs, but I do think the space will get more and more competitive.”
Quantum has an OEM deal with Fujitsu, but that doesn’t come close to replacing the lost revenue from EMC.
Gacek joined Quantum from ADIC after Quantum acquired its tape competitor in 2006. [Quantum’s dedupe and StorNext technology come from ADIC]. He began at Quantum as CFO, became COO in 2009, and had president added to his title in January. Belluzzo said the management transition was planned for more than a year, but Gacek said he had no assurances in January that he would be the next CEO.
It’s hard to believe Belluzzo didn’t feel at least some pressure to resign. Quantum’s stock price opened at $2.50 today – well above the 12 cents it dipped to in late 2009 but still far below the $4.02 it was at before the economy tanked in late 2008. Since he became CEO in Sept. 2002, Quantum’s stock has risen 21% compared to Nasdaq’s overall growth of 145%.
While Quantum still hasn’t been able to record significant growth in the hot dedupe market, Belluzzo said he’s leaving the company in good shape.
“Although we have had our share of challenges, the company is well positioned to play an expanded role in the storage industry,” he said.