Storage Soup


May 4, 2011  3:39 PM

Brocade, Emulex kick off 16-gig FC hype

Dave Raffo

Ready or not, 16 Gbps Fibre Channel is coming.

Emulex and Brocade this week said they have 16-gig FC devices being qualified by storage and server vendors, although there doesn’t appear to be any screaming demand from organizations for more than 8-gig FC support. The aggressive move to 16-gig FC is a sign that mass adoption of Fibre Channel over Ethernet (FCoE) is still a long way off and pure FC has plenty of life left.

Emulex got the ball rolling Monday with its XE201 I/O Controller, a converged adapter that supports 16-gig FC along with FCoE, 10-Gigabit Ethernet (GbE) and 40 GbE. Emulex will demo the XE201’s 16-gig capability at EMC World next week.

Brocade Tuesday rolled out what it calls the first end-to-end 16-gig FC platform of products. They include the DCX8510 Backbone SAN switch with up to 384 16-gig ports at line-rate speeds, the 6510 edge switch with 24 or 48 ports, and the 1860 Fabric Adapter that supports FC, Ethernet and FCoE. Brocade is also adding 16-gig FC support to its Network Advisor 11.1 unified management software and Brocade Fabric Manager 7.0.

Brocade said the new switches and software will be available this quarter. However, its OEM partners probably won’t complete qualifications before August. EMC and Hewlett-Packard are already qualifying the 16-gig products, with other storage vendors expected to follow soon.

Emulex VP of marketing Shawn Walsh said he expects OEM certification of Emulex’s 16-gig adapter in the second half of this year.

It’s no surprise that Brocade is pushing the faster FC. Brocade has always been less bullish on FCoE than its FC switch rival Cisco, and it made significant market share gains by beating Cisco to 4 Gbps and 8 Gbps FC gear. During its tech summit day Tuesday, Brocade execs said the market agrees with their take on FC and FCoE.

“FCoE adoption has been modest,” said Jason Nolet, VP of Brocade’s data center and enterprise networking group.

“Our customers say they want to stay with Fibre Channel,” Brocade CTO Dave Stevens added.

Emulex’s Walsh agreed with that. “Ten-gig [Ethernet] adoption is happening fast, but there’s still discrete networks,” he said. “Customers are not going to throw away what they have today. One of the big questions we get is, ‘What is Emulex’s commitment to 16-gig Fibre Channel?’”

It’s almost certain that more vendors will have 16-gig FC products by the Fibre Channel Industry Association (FCIA)’s October 16-gig plugfest.

If history is any indication, Cisco will trail Brocade by six months to a year with 16 Gbps FC. Emulex’s main adapter rival QLogic isn’t commenting on its 16-gig roadmap but is expected to support the technology this year.

Brocade execs point to virtualization and the cloud as drivers of the faster technology. However, there will be a price premium to move from 8-gig to 16-gig. That was also the case with the move from 4-gig to 8-gig, and that transition was slower than the moves from 1-gig to 2-gig and from 2-gig to 4-gig when there was no price hike for the higher bandwidth. While the first two transitions took about two years each for the higher bandwidth to become dominant, the move to 8-gig took about three years.

In a blog post this week, Wikibon senior analyst Stuart Miniman advised organizations to pursue 16-gig FC and converged networks on their internal schedules rather than according to vendor roadmaps.

“Most users can take a slow and deliberate approach to the adoption of new generations of speeds,” Miniman wrote. “ … customers can support both FC and Ethernet and consider the migration on internal schedules rather than on the pace that the vendor community may want to push or pull. For equipment refresh cycles that start in 2012 or later, consider looking for adapters that can support the latest of both FC and Ethernet.”

May 3, 2011  5:42 PM

HP prepares new EVA, vows to keep XP *Update*

Dave Raffo

Hewlett-Packard today said it will launch its next-generation EVA midrange storage system in June, and denied that it will stop selling its Hitachi-manufactured P9500 XP enterprise platform.

For now, HP isn’t giving much detail on the P6000 EVA except to say it will have a 6 Gbps SAS back end, 2.5-inch SAS drives and 8 Gbps Fibre Channel connectivity. The vendor is offering an early access program to customers ahead of the official launch at the HP Discover user show June 6-10.

“We want to let folks know where we stand,” said Craig Nunes, marketing director for HP StorageWorks. “A quarter ago, there was a lot of speculation about when it [the next EVA] is going to come. We’re trying to be as proactive as possible.”

HP is also sending a message that it will continue to develop the EVA line as well as continue the P9500 that comes from an OEM deal with Hitachi Limited.

There has been speculation in the industry that HP would drop the XP, the EVA or both product platforms since it acquired 3PAR for $2.35 billion last year. HP executives have maintained they will keep both the XP and EVA, but StorageNewsletter.com posted an item Monday citing unnamed sources saying HP would stop selling the P9500 XP and replace it with high-end 3PAR arrays.

The Storage Newsletter story drew vehement denials from HP, with HP StorageWorks VP of marketing, strategy and operations Tom Joyce telling the newsletter “HP is in no way discontinuing the XP business relationship with Hitachi, Ltd. of Japan. The XP, and its newly named P9500 successor, are very successful mission-critical storage products for HP.”

HP’s storage blogger Calvin Zito added in a blog post Monday, “I saw a story on a small storage news website today claiming that HP would no longer OEM the XP Disk Array from HDS [Hitachi Data Systems].

“The story is wrong. Period.”

Joyce told StorageSoup that the industry speculation has created confusion with HP EVA and XP customers about the vendor’s roadmap.

“When you do something as publicly visible as making the investment HP made in 3PAR, it begs the question, ‘What does this mean to existing products?’” he said. “The thing about a free press is people are free to write what they want. But we’ve been consistent since we bought 3PAR that we would introduce a new EVA in the first half of 2011. We said the P9500 will stay a critical part of our product line. Over a period of time customers will like to have alternatives. Some customers will say ‘I’ll use 3PAR for something I used to use XP for,’ but 3PAR will never replace some of the things XP does well. We will never add mainframe connectivity for 3PAR.”

Joyce said EVA’s selling point is simplicity compared to other FC storage systems. “Folks buy EVA because of ease of use right out of the box,” he said. “It’s simple to use and you can run it with a lot less people.”

UPDATE: According to documents uncovered by SearchStorage ANZ’s Simon Sharwood, HP will deliver a P6300 and P6500 EVA. The P6500 will be a higher-end version with faster processors, more cache, greater maximum capacity and so on. But the more interesting parts of the upgrade are the software and management features – mainly the reservationless dedicate-on-write thin provisioning that HP has in its 3PAR and LeftHand platforms. The new EVA will also have dynamic LUN migration, and new remote replication and clustering capabilities.
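For readers unfamiliar with the feature, dedicate-on-write (thin) provisioning presents a large virtual LUN to the host but consumes physical capacity only when blocks are actually written, rather than reserving it all up front. Below is a minimal Python sketch of that idea, assuming a fixed allocation page size; the class and method names are illustrative, not HP’s implementation.

```python
# Minimal sketch of allocate-on-write thin provisioning (illustrative only).
# Physical pages are consumed the first time a block is written, not when
# the LUN is created.

class ThinLun:
    PAGE_SIZE = 1024 * 1024  # 1 MiB allocation unit, an assumed value

    def __init__(self, virtual_size_bytes: int):
        self.virtual_size = virtual_size_bytes  # capacity the host sees
        self.pages = {}  # page index -> bytearray, allocated lazily

    def write(self, offset: int, data: bytes) -> None:
        # For simplicity this sketch assumes a write fits within one page.
        page_idx, page_off = divmod(offset, self.PAGE_SIZE)
        page = self.pages.setdefault(page_idx, bytearray(self.PAGE_SIZE))
        page[page_off:page_off + len(data)] = data

    def physical_bytes_used(self) -> int:
        return len(self.pages) * self.PAGE_SIZE


lun = ThinLun(10 * 1024 ** 4)        # the host sees a 10 TiB LUN
lun.write(0, b"hello")               # only now is a single 1 MiB page consumed
print(lun.physical_bytes_used())     # 1048576
```

The design point is that capacity planning shifts from reserving space per LUN to monitoring a shared physical pool as it fills.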


April 25, 2011  5:36 PM

Flash SSD systems redefine storage tiers

Randy Kerns

The popularity of storage systems that support tiering within the box, driven primarily by NAND Flash solid state drives (SSDs), has created confusion about the meaning of storage tiering. While I was speaking to directors and CIOs of IT operations at a recent event, questions came up that illustrated this confusion.

The common understanding of tiering for many in IT is what is called external tiering today. This includes different storage systems with different performance, capacity, and — more importantly — cost characteristics. These tiers are called tier 1, tier 2, tier 3, and sometimes even tier 4. They may include a SAN with Fibre Channel or high-performance SAS drives, another box with capacity SATA drives for archiving or backup, and even a tape library.

When DRAM solid state storage became more generally available as an external device, it was marketed as a tier 0 device in the storage hierarchy. Most of IT still looks at tiering that way.

 

 “Inside the box” tiering is not new but the use of NAND Flash SSD has crystallized the focus on this type of tiering. We’ve recently seen various types of approaches and characteristics that provide real differentiation among the product offerings here. Evaluator Group covers these tiering products and their differences on our web site. The materials from the vendors for these solutions highlight the value of the systems’ tiering but do not address the new way of looking at tiering. These systems can include SSDs, high-performance spinning disk and capacity disk in one box.
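As a rough illustration of in-box tiering, the sketch below places data extents on flash, performance disk or capacity disk according to how frequently they are accessed. The tier names, thresholds and policy here are assumptions for illustration, not any vendor’s actual tiering algorithm.

```python
# Illustrative sketch of access-frequency-based placement across in-box tiers.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    media: str
    relative_cost_per_gb: float  # illustrative relative cost, not real pricing

TIERS = [
    Tier("tier 0", "NAND flash SSD", 10.0),
    Tier("tier 1", "15K RPM SAS disk", 3.0),
    Tier("tier 2", "capacity SATA disk", 1.0),
]

def place_extent(io_per_hour: float) -> Tier:
    """Choose a tier for a data extent based on how often it is accessed."""
    if io_per_hour > 100:    # hot extents earn the flash tier
        return TIERS[0]
    if io_per_hour > 10:     # warm extents stay on fast spinning disk
        return TIERS[1]
    return TIERS[2]          # cold extents sink to capacity disk

if __name__ == "__main__":
    for rate in (500, 50, 2):
        print(f"{rate} IO/hr -> {place_extent(rate).name}")
```

The same placement idea applies whether the tiers sit inside one array or across separate systems; what changes is who makes the move, the array firmware or the administrator.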

 

For those in the storage world, the difference is so well understood that vendors don’t often place it in context for the IT world in general. 

This background on tiering within storage brings us back to the regular interactions I have with IT directors and CIOs. There is a lack of clarity in the terminology because of the evolution that has occurred, and not everyone is at the same level of understanding. Consequently, putting the discussion in context is necessary to ensure that everyone in the conversation is following the same path. It is probably too late to change the descriptive terminology, and given enough time the base understanding will shift to where this becomes less of a problem. In the meantime, the education sessions offered at conferences such as SNW and Storage Decisions, and more in-depth classes by organizations such as the Evaluator Group, work toward raising awareness of the difference.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm). 

 

 

 


April 19, 2011  8:35 PM

Is it raining on the cloud storage parade?

Dave Raffo

Last week was a dark one for cloud storage vendors. The week began with confirmation that Iron Mountain had closed its file and archiving cloud services and ended with news that startup Cirtas laid off most of its staff.

So is this a sign that cloud storage isn’t catching on, or the beginning of consolidation in the market? It’s certainly not yet a sign of doom for cloud storage. Every major storage vendor has public and/or private cloud products and services, and they’re pushing them hard. Iron Mountain and EMC – which closed Atmos Online last year – still have other cloud offerings.

And whenever several startups tackle a market, at least a few will fail. Cirtas is among a bunch of companies that began offering gateways to create hybrid storage clouds within the last year or so.

The CEOs of two of the cloud gateway vendors, Nasuni and StorSimple, say it was a coincidence that Iron Mountain and Cirtas failed in the same week. Nasuni’s Andres Rodriguez and StorSimple’s Ursheet Parikh agree that the two failed for different reasons – Iron Mountain was too expensive at 65 cents a gigabyte, and Cirtas’ SAN cloud gateway lacked enterprise SAN features.

People in the industry say some of the investors who sank $22.5 million into Cirtas in January were unhappy with the product’s lack of success, and Cirtas is looking to either revamp its product or sell off its technology.

Rodriguez and Parikh said Iron Mountain was fighting a losing battle against larger service providers Amazon, Google, Microsoft and AT&T. Rodriguez predicted that other smaller cloud storage providers will also falter.

“No one buys technology, and the cloud is no different,” Rodriguez said. “To turn it into a real business, you better have something that works and offers a compelling value proposition. Iron Mountain learned the hard way that offering the back-end storage is a game for giants. It is simply not possible to compete with the economies of Amazon, Microsoft or Google. I believe it is just a matter of time before other service providers follow [Iron Mountain].”

Parikh said Iron Mountain had trouble adapting its core business model to the cloud.

“Iron Mountain knows how to manage tapes, but doesn’t know how to manage services,” StorSimple’s Parikh said. “Even at 65 cents a gig, it was losing money.”

As for Cirtas, Parikh said its Bluejet Cloud Storage Controller lacked the redundancy needed for a SAN product.

“You cannot deliver block storage without dual controllers,” he said. “If it were a file product like Riverbed [Whitewater], you can use it for backup. High availability and reliability has to be table stakes for block storage. No one gets fired for buying expensive storage. When a storage guy gets fired, it’s for losing data.”

Rodriguez said Cirtas took the wrong approach by trying to sell iSCSI SAN storage in the cloud. “Blocks-to-the-cloud is not technically sound,” he said. “From a business perspective, arrays are the hardest possible place to enter a data center with new technology. People are religious about their arrays. I don’t know exactly what happened, but I know [Cirtas] was having problems in the field.”

The Nasuni Filer is a virtual NAS appliance that turns commodity hardware into a NAS that moves data off to the cloud. The StorSimple Appliance is used mostly to move data from Windows file servers, Microsoft Exchange, SharePoint and VMware to the cloud. “We provide the appliance, and the customer buys the cloud directly with a separate contract with Amazon, Microsoft or AT&T,” Parikh said. “We reduce the cloud provider bill, and that more than pays for our appliance.”

Rodriguez said cloud storage gateways need to be reliable and low cost to succeed.

“This is a business not unlike the hard drive business,” he said. “It’s a total commodity play. It is about quality at a low cost. Nasuni repackages that raw commodity into something businesses can run on.”
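As a rough illustration of the gateway model Nasuni and StorSimple describe, a local appliance that answers I/O from a cache and pushes data to a cloud object store in the background, here is a minimal Python sketch. The CloudStore and Gateway names and their behavior are assumptions for illustration, not either vendor’s design.

```python
# Illustrative sketch of a cloud storage gateway: fast local writes,
# background uploads to an object store, reads served from cache when possible.

class CloudStore:
    """Stand-in for a cloud object store such as Amazon S3 or Windows Azure."""
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


class Gateway:
    """Local appliance: acknowledges writes on the LAN, uploads later."""
    def __init__(self, cloud: CloudStore):
        self.cloud = cloud
        self.cache = {}     # local disk cache, modeled here as a dict
        self.dirty = set()  # paths written locally but not yet uploaded

    def write(self, path: str, data: bytes) -> None:
        self.cache[path] = data   # fast local acknowledgement
        self.dirty.add(path)

    def read(self, path: str) -> bytes:
        if path in self.cache:        # cache hit: served locally
            return self.cache[path]
        data = self.cloud.get(path)   # cache miss: fetch from the cloud
        self.cache[path] = data
        return data

    def flush(self) -> None:
        """Background job that pushes dirty files to the object store."""
        for path in list(self.dirty):
            self.cloud.put(path, self.cache[path])
            self.dirty.discard(path)


cloud = CloudStore()
gw = Gateway(cloud)
gw.write("/shares/report.doc", b"quarterly numbers")
gw.flush()  # the file now exists both in the local cache and in the cloud
```

The commercial differences Rodriguez and Parikh point to, such as file versus block access and single versus dual controllers, sit on top of this same basic pattern.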


April 19, 2011  1:35 PM

Seagate pushes deeper into SSDs with Samsung acquisition

Dave Raffo

Seagate’s $1.375 billion acquisition of Samsung’s hard drive business today also strengthens Seagate’s solid state drive (SSD) hand by extending the NAND flash partnership between the two vendors.

Samsung sells hard drives for PCs and consumer electronics, so the SSD part of the deal is the key piece for enterprises.

“This provides Seagate with an important source of leading-edge NAND supply and early visibility into the next-generation of NAND,” Seagate CEO Steve Luczo said on a conference call to discuss the deal.

Seagate and Samsung announced a NAND partnership last August. Seagate launched its first enterprise SSDs in March with its Pulsar.2 multi-level cell (MLC) and Pulsar XT.2 single-level cell (SLC) drives using Samsung NAND chips. Seagate was slow entering the SSD market, but Luczo said the Pulsar products should be coming to market through storage system OEM partners soon and Seagate is on schedule for its next generation of SSDs in partnership with Samsung. Still, Seagate’s OEM customers wanted assurances that the Seagate-Samsung arrangement would remain intact.

“This addresses an issue that customers have raised,” he said. “While they have a lot of confidence for Samsung and Seagate to design flash products, there always has been a bit of a concern that without a formal supply agreement, what was the whole package going to look like?”

Seagate also sells hybrid systems with SSDs and hard drives.

The Samsung deal, which Seagate expects to close around the end of the year, is the second major disk drive merger in barely a month. Western Digital said in March it intends to buy disk drive rival Hitachi Global Storage Technologies (HGST) for $4.3 billion.

Luczo said the consolidation reflects the growth of storage capacity worldwide and the need for investment in new storage technologies.

“Demand for storage is accelerating,” he said. “Petabyte growth is very strong and has been for the last six to eight quarters, even in an economy that has been lackluster.”


April 18, 2011  7:51 PM

Processors for storage systems follow server/PC roadmap

Randy Kerns

Recent storage system launches – such as EMC’s Isilon roll-out last week – included new generations of Intel processors that provide greater processing power and increased performance. This has become a familiar pattern — the latest generations of processors become available, and soon after storage system announcements follow with the incorporation of the new processor technology.

That marks a major change in storage system architectures. When I started in storage system development, custom microprocessors were designed with special characteristics for handling data movement. Assembler-level languages were written for each custom processor, and the surrounding data flow was developed to create the heart of the storage system. It was a badge of honor to have created the custom processor and the programming language for a generation of storage systems.

Over time, the use of special purpose but standard processors became common, as did more general purpose compilers and debuggers. These processors evolved into a separate business with offerings such as the AMD 29000. This took the custom processor design and programming languages and tools away from storage system development, but left the storage application software and the logic around moving the data and the interfaces.

As the progression continued, the use of commodity processors along with their supporting logic chips became prevalent in storage systems. This has reduced the design demands and provided greater economies for components. Many systems today even use standard or nearly standard motherboards of the type found in servers as the underlying hardware in storage systems. That leaves storage system design to the application software and the hardware configuration. In some cases, the storage application can run within a virtual machine on a physical server.

The move towards commodity hardware was enabled by tremendous advances in processor speed and functionality. The investment in server/PC technology can be leveraged in storage, which alone could never sustain the investment required to make those advances. The other side of the coin is that continual advancement requires the storage system hardware to change along with the server hardware, which means that a particular hardware configuration will be offered only as long as the corresponding server/PC hardware is in new production. That’s about a 12-month cycle at best. New (meaning updated) versions of the storage system hardware will appear on a fairly regular basis by necessity.

This is mostly good for the IT customer. Newer, faster, and less expensive storage systems will continue to be delivered. But it also means that support for a given storage system will have a finite life. There will come a time when it is necessary to upgrade the storage system because a replacement controller is no longer available: the spares stock has been depleted and no more are being manufactured. This creates a bigger concern when planning the lifespan of those systems, the amortization of the investment, and the problematic migration of data.

The updated processors in storage systems such as the one from EMC Isilon are usually accompanied by other advances, such as new or improved features delivered with the storage application and newer interface support. So despite the shorter hardware lifecycle, the IT customer does benefit compared to the costs and progression of technology from the custom designs of the past.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


April 13, 2011  1:02 PM

SNW – The New, the Popular, and the Next

Randy Kerns

 

Storage Networking World always provides a chance to see what vendors want to tell the world from their product perspective. It also includes presentations focused on education and general information for storage managers. These two perspectives often contrast, showing that the adoption rate of new technologies in data centers often runs at a different pace than the delivery of new technology.

 

The vendor offerings are interesting to look at from a quantitative perspective: what is popular as a focus area? At last week’s SNW in Santa Clara, Calif., there were two distinct areas of popularity (quantitatively): solid state devices (SSDs) and anything to do with “cloud.” The cloud label is being liberally applied to many different products and operational environments. This does not mean there were no storage systems and management software solutions presented, but they were greatly outnumbered by SSD and cloud offerings.

 

These were the popular products at the event, and certainly many of the offerings were new. The “next” from vendors were novel offerings that may become the popular products or solutions of the future. More of the new technologies seem to be unveiled at SNW in the fall than in the spring.

 

Fitting into the new category, but not really representing particularly new products, were integrated or coordinated solutions meant to replace or help manage point products. This is recognition that there is another level of opportunity beyond delivering a new or novel product: integrating solutions into the operational workflow of an IT environment.

 

 (Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


April 11, 2011  5:11 PM

Cloud evaporates over Iron Mountain

Dave Raffo

Iron Mountain today confirmed it will end its Virtual File Store (VFS) and Archive Service Platform (ASP) cloud storage services because of “modest levels of adoption.”

Iron Mountain released a statement confirming what Gartner first disclosed in a report last week. According to Gartner, Iron Mountain stopped accepting new customers for its public cloud storage business on April 1 and would officially end the service “no sooner than” the first half of 2013.

“Iron Mountain did recently notify customers of our Virtual File Store and Archive Service Platform that we are retiring these two commodity cloud-storage solutions,” the Iron Mountain statement said. “This decision only affects those using Virtual File Store, a low-cost cloud storage option for inactive files, and technology partners who use the Archive Service Platform as a general purpose cloud for storing their customers’ data. As the Gartner report notes, public cloud service offerings like these have seen modest levels of adoption.”

Iron Mountain has offered to transfer VFS customers to its higher value File System Archiving (FSA) service next year. FSA is a hybrid cloud that uses policy-based archiving onsite and in the cloud. ASP customers will have to move to an alternate service provider or move their archiving in-house.

Iron Mountain launched VFS in February of 2009, and followed with the ASP that let software vendors integrate the Iron Mountain API to use its cloud back end.

EMC also discontinued its Atmos Online public cloud service last July when it decided to market Atmos exclusively to service providers who could build their own cloud offerings. Gartner points out that VFS, Atmos Online and startup Vaultspace’s shuttered public cloud service all focused only on storage without cloud compute services. Nirvanix and Zetta are the surviving pure-play NAS public cloud providers, with others offering gateways for hybrid cloud services.

Iron Mountain’s problems could be internal as well as industry-wide. The vendor took a $255 million charge against its struggling digital business in the third quarter of last year and another $29 million charge in the fourth quarter, citing lower-than-expected eDiscovery revenue. The Gartner report said Iron Mountain digital services EVP David Jones acknowledged that the reasons for dropping the cloud services included profitability pressures.

Nirvanix and Zetta are offering to migrate data to their clouds for all Iron Mountain customers free for 30 days. Nirvanix said it would then offer those customers the option of implementing one of its cloud services.

Zetta said it can migrate data stored in the Iron Mountain service automatically to the Zetta Storage Service with its free ZettaMirror software.

Andres Rodriguez, CEO of NAS cloud gateway vendor Nasuni, said Iron Mountain’s move is not indicative of the cloud storage market. Iron Mountain was one of the cloud providers Nasuni could move file data off to.

“While Iron Mountain’s shutdown of its cloud service is another symptom of consolidation, the cloud storage market continues to expand,” Rodriguez said. “The fact of the matter is, cloud storage infrastructure providers must grow very rapidly, or else they will not achieve the economies of scale that enable them to be profitable. Iron Mountain is an example of a company that was unable to achieve this scale. Over time, we will see more consolidation and a few extremely large players will dominate the market.”


April 8, 2011  3:43 PM

Dell puts storage to use in bundled stacks

Dave Raffo

Dell’s entry into the integrated stack game this week provides another reason why it switched its storage strategy to developing its own rather than reselling EMC’s products.

While Dell will partner for some pieces of its data center and cloud strategy – mainly networking hardware and server virtualization software — it wants control over the core hardware elements.

The new vStart integrated virtualization appliances include EqualLogic storage arrays, and the Email and File Archive products use the Dell DX Object Storage system as the central repository. The archive products can also use EqualLogic storage. Dell is also leaning on two of its key data protection software partners for the archiving products: Symantec Enterprise Vault and CommVault Simpana are the options for moving and managing data.

Dell will add Compellent storage to these new products eventually.

At the start, Dell has two versions of vStart: one built for 100 virtual machines and the other for 200 virtual machines.

Like Hewlett-Packard and IBM, Dell wanted to build its own stacks with its core components instead of following Cisco’s route of using its UCS Servers with EMC or NetApp storage.

Dell vice president for Enterprise Solutions Praveen Asthana calls vStart “a way of delivering units of virtual machine assembled from Dell hardware and storage. We’ve done a lot of work in terms of pre-integrating and figuring out the optimal way of assembling virtual desktops, and created reference architecture.”

Dell may have committed one architectural no-no already, however. The lower-end vStart configuration uses one EqualLogic array, leaving it with a single point of failure. The vStart for 200 VMs uses two EqualLogic arrays.


April 6, 2011  12:24 PM

Storage buying requires an application perspective

Randy Kerns

Much of the information provided about storage systems focuses on details of their specifications – known as speeds and feeds. This has been the norm, and if that information were not presented prominently, many would think the vendor was trying to draw attention away from flaws in its product.

But times have changed, and the Information Technology professionals making decisions on acquiring the right storage system need to look at different factors.

What is different about making the decision, and why there has been a change, is interesting and may require explanation. First, let’s look at the reason why. In general, the number of storage professionals, whether they are called storage administrators or not, has significantly declined. Professionals with exclusively storage responsibilities are mostly found only in the largest IT operations.

That leaves fewer storage specialists, mostly because of staff reductions. The IT people remaining have to take on multiple responsibilities. Additionally, storage requirements and the availability of solutions that demand less detailed management or control have enabled successful deployment and operation without a storage specialist.

Changes in staffing have led to changes in decision-making when acquiring storage systems. The primary focus now is usually applying the storage system to an application to meet requirements for business issues. In this scenario, the storage system is integrated into the environment as a solution for one application. It is implemented and managed for the specific requirements of that application. How the storage can quickly meet the application needs is the most important consideration.

The need for information about speeds and feeds is still there, but that is not the most important representation to the IT professional making the evaluation. At least it shouldn’t be. Some products (and product marketing) continue to focus on that message. There are more opportunities for systems that solve a business problem and quickly meet application needs.

There will be a form of natural selection at work here. Vendors that understand this and can adapt how they represent their products to customer needs will ultimately be more successful. Those stuck in their old methods and thinking will have a more difficult time. The Evaluator Group publishes a series of Evaluation Guides on how to make informed storage purchasing decisions at www.evaluatorgroup.com. These guides focus on what is important to look at when purchasing storage solutions.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

