Storage Soup

A SearchStorage.com blog.


April 16, 2009  3:39 PM

FCoE infrastructure coming together



Posted by: Dave Raffo
storage vendors

Cisco provided more details of its Unified Computing System today, including pieces of its FCoE strategy.

The UCS building blocks include 6100 Series Fabric Interconnects, which Cisco calls “FCoE capable.” The UCS Manager sits on the Fabric Interconnects, which can be clustered for high availability. The Fabric Interconnects use the same ASIC as Cisco’s Nexus switches and connect to Fibre Channel and 10-Gigabit Ethernet switches.

Cisco will also offer FCoE converged network adapters (CNAs) from QLogic, Emulex, and Intel inside its UCS blade servers. Shockingly, Brocade’s recently launched CNAs don’t fit into Cisco’s plans.

Brocade’s recent rollout of CNAs and FCoE switches and the Cisco UCS devices set to roll out around June serve as a further reminder that the FCoE puzzle is coming together. Storage vendors are slowly getting into the act, too. EMC’s new Symmetrix V-Max system has native FCoE support. NetApp has pledged native FCoE support for its arrays and supports the protocol now through a free upgrade to its Data ONTAP operating system.

NetApp and EMC have been the only storage array vendors to address FCoE so far, but Cisco’s director of product management for the Server Access and Virtualization Group, Paul Durzan, says, “We’re working with all the major storage vendors. We don’t intend to be exclusive of other people.” Durzan says that list includes storage systems from Cisco’s new server rivals, IBM and Hewlett-Packard.

From storage vendors’ perspective, however, early support for FCoE amounts mostly to future-proofing their systems.

“The important thing for us is to support it now,” says Dave Donatelli, president of EMC’s storage division. “Typically, these are gradual transitions that take time. I don’t think you’ll see mainstream use before 2010.”

StorageIO Group analyst Greg Schulz says people who actually use FCoE now are “either getting a really good deal, among the Cisco faithful, or like to try things early” but says storage and network admins definitely have the converged protocol on their radar.

“We’re about ready for the real game to begin with FCoE,” Schulz says. “You can make the technology case, but how do you pay for it? Is it something you want to have, or something you need to have? That’s what people are asking now.”

April 14, 2009  5:37 PM

EMC V-Max: V stands for bigger



Posted by: Dave Raffo
disk arrays

Long before today’s official launch, just about anybody who cares about enterprise storage knew EMC would roll out its new Symmetrix system today during a series of webcasts. Yet EMC never used the word Symmetrix in all its hype about the launch. According to EMC, it was all about the “virtual data center of the future.”

So now we know the new Symmetrix is the V-Max – or virtual matrix – and not the DMX-5. But what makes this a system for the virtual world and the DMX-4 for the “physical world” as EMC’s storage division president Dave Donatelli puts it?

EMC CEO Joe Tucci likens the new Sym to a block-storage version of the object-based Atmos system EMC rolled out last year. In other words, it’s an internal cloud storage system, which is a new way of saying virtualized storage.

According to EMC, V-Max is a storage virtualization system because it makes all its storage look like one pool, automatically migrates data between systems and arrays, and simplifies management with features such as thin provisioning and clustered nodes.
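
If you’re wondering what thin provisioning actually buys you, the idea is simply that a LUN advertises a big virtual size but consumes pooled capacity only as blocks are written. Here’s a minimal sketch of the concept in Python – hypothetical names, and emphatically not EMC’s Enginuity internals:

```python
# Minimal sketch of thin provisioning: a LUN reports a large virtual
# size but only consumes shared pool capacity when blocks are written.
# Hypothetical illustration -- not EMC's implementation.

class ThinPool:
    def __init__(self, physical_blocks):
        self.free = physical_blocks           # shared physical capacity

    def allocate(self):
        if self.free == 0:
            raise RuntimeError("pool exhausted -- add physical capacity")
        self.free -= 1

class ThinLUN:
    def __init__(self, pool, virtual_blocks):
        self.pool = pool
        self.virtual_blocks = virtual_blocks  # what the host sees
        self.mapped = {}                      # virtual block -> data

    def write(self, block, data):
        if block not in self.mapped:          # allocate on first write only
            self.pool.allocate()
        self.mapped[block] = data

pool = ThinPool(physical_blocks=1000)
lun = ThinLUN(pool, virtual_blocks=1_000_000)  # host sees a million blocks
lun.write(42, b"payload")
print(pool.free)  # 999 -- only written blocks consume the pool
```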

But the biggest difference between V-Max and other systems is really size and scale.

The virtualization features in V-Max aren’t new to the industry. 3PAR’s InServ systems support clustering of eight controllers. Compellent Technologies has software for moving data between solid state and hard drives, and Atrato will have it in a few weeks. Hitachi Data Systems has supported pooled storage in its arrays – and other vendors’ arrays – for years. But EMC says none of those systems scale to the level of V-Max or perform as fast. And that goes for the DMX-4, too.

When asked how it would be positioned against the DMX-4, Donatelli said the V-Max “has up to three times the capacity of DMX-4, and up to three times the performance. Clearly this will take over the high end of the product line.”

So, if you’re considering a V-Max, ask yourself if you need a bigger, faster system with a bigger price tag. That’s easier than trying to decide if your data center resides in the virtual or physical world.


April 14, 2009  4:35 PM

EMC launches Symmetrix V-Max, may add spin-down



Posted by: Beth Pariseau
disk arrays

EMC Corp. had a virtual press conference this morning to announce the new Symmetrix V-Max high-end disk array. DMX-4 will remain on the market, but the new distributed architecture and software updates have EMC claiming V-Max is faster and more scalable.

The president of EMC’s storage division, Dave Donatelli, said during a conference call with press this morning that the vendor is “contemplating spin-down” for the new Symmetrix, though he did not commit to a time frame.

I also asked Donatelli a question I’ve had on my mind for a while with regard to EMC. Back in the fall of 2007, EMC revealed at a customer event that it was developing a universal backup and archiving appliance built on industry-standard components, which would be given a ‘personality’ by EMC’s different software modules. A centralized management GUI for all backup and replication processes was also discussed. The first steps toward this may have come in the form of Avamar/NetWorker integration; at last year’s EMC World, execs told me they acquired WysDM to make that company’s software a centralized management framework for backup and archiving.

Then came the Clariion CX-4, which added high-availability features and scaled well into Symmetrix range. It wasn’t necessarily cannibalism yet, but the increasing overlap was notable. As EMC talked more and more about becoming a software company over the last few years – combined with the backup and archiving appliance plans and other subtle signs of convergence between the systems, like the redesign of disk trays to fit either CX or DMX – I began to wonder whether EMC was planning a similar melding and commoditization of primary/secondary storage hardware, with different software to give it different “personalities.”

EMC officials have been on the coy side in talking about this. The picture has gotten a little clearer with the announcement of V-Max, which adds multicore Intel x86 processors – a first in the high-end disk array space, as noted by IDC’s Benjamin Woo, and a first for the Symmetrix line. EMC put a heavy emphasis on the software side of V-Max as well; most of the performance improvements and new features come from a complete reworking of the Enginuity OS software that runs Symmetrix. A software-based approach to pools of devices – i.e., the “VMwareization” of Symm – further commoditizes the hardware and relies even more on software to give a machine its “personality”…

During today’s Q&A with Donatelli, I asked if that is, in fact, EMC’s plan – if what we think of today as Clariion, Celerra and Symm might one day be distinguished by software rather than by different hardware. He said that while CX and V-Max both use x86 processors, they’re different kinds of processors – in fact, they differ across the Clariion models as well as between CX and V-Max. V-Max also uses custom ASICs for its virtual matrix scale-out. “We still see a difference between the high-end world and the midtier world,” he said.

But he didn’t say for how long…


April 14, 2009  2:13 PM

NetApp to pay $128 million to settle GSA probe



Posted by: Beth Pariseau
Strategic storage vendors

In an SEC filing Monday, NetApp disclosed it has paid $128 million to the federal government to settle a Department of Justice probe into its contracting activity with the General Services Administration (GSA).

EMC has also disclosed it’s the target of a similar probe, and late last fall it was also reported that several other IT vendors have been targeted by the DOJ over pricing, including Sun Microsystems, Canon and Cisco.

In exchange for the settlement, according to the filing, “the parties to the Agreement have agreed to release NetApp with respect to the claims alleged in the investigation as set forth in the Agreement. The Agreement reflects neither an admission nor denial by NetApp of any of the claims alleged by the DOJ, and represents a compromise to avoid continued litigation and associated risks.” NetApp “recorded a $128.0 million accrual for this contingency in the third quarter of fiscal 2009.”


April 13, 2009  7:06 PM

SpiderOak offers discount to Carbonite users, says SLAs on the way



Posted by: Beth Pariseau
Storage Software as a Service

When consumer backup SaaS provider Carbonite sued its storage vendor, Promise, over systems Carbonite alleges lost customer data, ESG founder and analyst Steve Duplessie wrote a blog post urging enterprise users to ask tough questions of backup service providers to winnow out those prepared to offer enterprise-level services. Chief among them: What does your infrastructure look like, and what failsafe mechanisms are in place to prevent data loss? And what service level agreements (SLAs) are provided, if any?

When Carbonite backup SaaS rival SpiderOak came along with a pitch for me about how they’re a) more reliable and secure than Carbonite and b) welcoming Carbonite customers with a 20% discount on a year’s service for switching, I decided to try out those questions on them. What followed was an interesting discussion.

SpiderOak CEO Ethan Oberman says SpiderOak, unlike Carbonite, assembles its own storage systems out of commodity servers and disk drives, purchasing individual components and assembling them under the company’s proprietary storage clustering software. “We don’t rely on a third party pre-assembled storage system” as Carbonite did with Promise, Oberman said. But does putting together its own storage systems make SpiderOak’s more reliable? Not necessarily.

(Side note: SpiderOak isn’t alone here. While many storage vendors are betting their futures on selling pre-built systems to cloud service providers, the pitch I hear from those service providers is that their service is more reliable / more secure / better performing because they built it themselves.)

So if we take the claim that home-built is better at face value, let’s say I was a Carbonite user who lost data, and now I’m looking to switch providers. Assuming I haven’t been totally turned off on the idea of SaaS in general, I think I’d still like to see something definitive in writing from my new prospective vendor, regardless of that vendor’s data center architecture, about data loss and what it’s prepared to offer me on that front.

It took quite a while before our conversation today progressed to the point where we could concede that although data loss is highly, highly, highly unlikely, it theoretically can happen. One of the reasons SpiderOak doesn’t address that possibility outright is because it doesn’t want that possibility in users’ minds. “We take this very, very seriously,” Oberman said. “Losing customer data in this market basically means going out of business.”

But as Duplessie put it, “I know things break. What I don’t know is how often they break, or why, and most importantly – what you do about it.”

Oberman said SpiderOak would probably do the right thing and give consumers their money back in the event of their data being lost. “It’s just ethical business practices,” he said. “We stand behind our product.”

Would he put that in writing?

Well, that opened up another can of worms. SpiderOak, Carbonite, and other consumer-grade backup SaaS vendors don’t offer SLAs or even formal written guarantees about data loss, in part, Oberman said, because of a fear of predatory lawsuits in the consumer world. Why these are more prevalent among consumers than among businesses remains unclear to me, but SpiderOak claims that’s what its legal counsel says. Also, it’s not as easy to assign a value to consumer data as it is to corporate data attached to billable hours in order to institute hard financial penalties, and SLAs make the whole service more expensive, Oberman claimed.

For its consumer/SOHO service, SpiderOak’s focus is on cost – it charges about $5 to $10 per month. “Those are pretty cheap numbers–so cheap, in fact, that we can’t offer geographic redundancy economically,” he said. To provide SLA-worthy redundancy, the cost of the service would have to go up. This is something SpiderOak is planning to do by this summer with the launch of a new enterprise-focused backup service, which will be about four times as expensive as the current offering.

In the meantime, Oberman suggested that users attracted to the cost but concerned with the reliability of consumer/SOHO services could theoretically treat them like some companies do internet service providers (ISPs), and deploy two or three of the cheaper services for DIY redundancy.  “There does seem to be a gap” between expensive fully-redundant enterprise services and cheaper but less resilient consumer/SOHO services in the market right now, he added.


April 9, 2009  5:03 PM

Sights and bites from SNW



Posted by: Beth Pariseau
Storage conferences

[Photo: A band entertains at the SNW welcome reception Monday night.]

SNW Orlando 2009 wraps up today, and the economic downturn was the elephant in practically every room at the show. It was the main topic of discussion, a factor in debates over technology trends, and a subliminal part of the background as the vendor presence dropped dramatically compared to previous shows. It was difficult to get a handle on exactly how many attendees there were–a wide range of numbers was floating around– but many of the vendor reps and analysts I ran into at the show exclaimed over how quiet it seemed compared to past years. 

Most striking to me was the noise (or lack thereof) around the press room, in previous years a bustling hub of activity as armies of vendor marketing directors and PR reps briefed an equally large cadre of analysts and press. The room was smaller than usual this year, and empty at times.

However, there were still some interesting discussions going on around the show, including sneak previews of interesting upcoming product announcements.

NEC eyes content-aware dedupe

NEC’s HydraStor backup and archiving grid system will soon put a new twist on its block-level dedupe, according to NEC director of product management and technical marketing Gideon Senderov. He said the vendor is working on content-aware deduplication, believing it can lead to better dedupe ratios for customers. “Backup applications insert their own metadata,” he said. “Depending on how they aggregate files, you may have different metadata within them. Similar chunks can sometimes look different.” Filtering application metadata from files requires integration with multiple backup apps. Sepaton’s DeltaStor VTLs already take this approach.
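
To see why stripping metadata matters, consider a toy model: two backup apps wrap the identical payload in different headers, which shifts every chunk boundary, so block-level dedupe finds almost no matches until the metadata is parsed out. Here’s a rough sketch of the idea – the strip_metadata() stand-in is the hard, per-application part a vendor would actually have to build:

```python
import hashlib

def chunks(data, size=8):
    return [data[i:i + size] for i in range(0, len(data), size)]

def fingerprints(data):
    return {hashlib.sha1(c).hexdigest() for c in chunks(data)}

def strip_metadata(stream):
    # Toy stand-in for the hard part: parsing out app-specific headers.
    return stream.split(b"|", 1)[1]

payload = b"the same file payload, saved by two different backup apps"
stream_a = b"APP-A-HEADER|" + payload   # each app wraps the payload
stream_b = b"B-META|" + payload         # in its own metadata

# Naive block dedupe chunks the raw streams; the different-length
# headers shift every chunk boundary, so almost nothing matches.
naive = fingerprints(stream_a) & fingerprints(stream_b)

# Content-aware dedupe strips the app metadata first, then chunks.
aware = fingerprints(strip_metadata(stream_a)) & fingerprints(strip_metadata(stream_b))

print(len(naive), "vs", len(aware))  # near zero vs. every chunk shared
```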

[Photo: 3PAR’s clever marketing campaign for its new InServ F-Class arrays. I wonder whether the system name or the slogan came first.]

HP has blade plans for LeftHand

LeftHand Networks’ SAN/iQ IP SAN software will soon be ported to HP’s blade servers, according to Lee Johns, marketing director for entry storage, HP StorageWorks. It’s part of an overall “converged infrastructure” trend for HP, which envisions storage as a network service centrally managed by software. The company is also preparing a software framework, based on its 2007 acquisition of Opsware Inc., to centrally manage different kinds of storage devices along with servers.

[Photo: The relatively sleepy show was not without its amenities, including air hockey at the welcome reception.]

Brocade CTO talks FCoE

Brocade’s big announcement at the show was the rollout of its first Fibre Channel over Ethernet (FCoE) products, a top-of-rack switch and a converged network adapter (CNA). The company has taken a less bullish attitude toward the technology than its rival Cisco, releasing its top-of-rack switch several months after Cisco released its Nexus FCoE product line.

CTO David Stevens and product manager Pompey Nagra went over the details of the technology with me, as well as its value proposition, which the cynical among us might see as an attempt by FC vendors to stay relevant as 10 GbE threatens to eat their lunch. Stevens pointed out that though FCoE, like 10 GbE, requires a swap-out of switching equipment in the data center, FC storage assets can remain the same. Even though Converged Enhanced Ethernet will bring the protocol more into line with FC as a lossless network with flow controls, FC offers services like zoning and multipathing that will still be important to storage administrators, Stevens said. He also dismissed the idea that FC will fall out of favor once it’s slower than Ethernet (currently, FC is at 8 Gbps; Ethernet’s looking to move to 10 Gbps). “Maybe long-term,” he said. “But is that big a technology shift for the traditionally risk-averse storage community really worth two more gigs per second?”
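
The asset-preservation argument rests on how FCoE works at the wire level: the Fibre Channel frame travels intact inside an Ethernet frame (EtherType 0x8906), so zoning and the rest of the FC services stack still see an ordinary FC frame. A stripped-down sketch – real FCoE (the T11 FC-BB-5 work) adds version bits, SOF/EOF delimiters and padding that are omitted here:

```python
# Stripped-down sketch of FCoE framing: a complete FC frame rides
# inside an Ethernet frame and comes back out untouched.
import struct

FCOE_ETHERTYPE = 0x8906

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame      # the FC frame is not modified

def decapsulate(eth_frame: bytes) -> bytes:
    ethertype, = struct.unpack("!H", eth_frame[12:14])
    assert ethertype == FCOE_ETHERTYPE
    return eth_frame[14:]             # hand the intact FC frame to FC logic

fc_frame = b"\x22" + b"..fc header and payload.."   # placeholder FC frame
wire = encapsulate(b"\x0e\xfc\x00\x00\x00\x01",     # example MACs
                   b"\x00\x1b\x21\x00\x00\x02", fc_frame)
assert decapsulate(wire) == fc_frame  # storage assets see the same frame
```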

Stevens said the focus for FCoE should really be just on cutting down on the number of wires in the data center. “The first-hop technology can all be combined while preserving assets in the infrastructure.”

However, Stevens also admitted this value proposition is similar to what has been promised by InfiniBand technologies, which have yet to see widespread adoption outside of high-performance computing (HPC) niches. Will FCoE be more successful because Ethernet is a more familiar interface than InfiniBand? I asked Stevens. “I don’t have a good answer for you there yet,” he said.

[Photo: An open bar and darts – always a great combination.]

Thales sees converging encryption standards

In February, Thales Group was part of a coalition of vendors that submitted a standard for interoperability between key management systems and encryption devices, called the Key Management Interoperability Protocol (KMIP), to the Organization for the Advancement of Structured Information Standards (OASIS). If adopted, KMIP would mean users could attach almost any encrypting device to one preferred key management system, regardless of the vendors involved. Meanwhile, the Institute of Electrical and Electronics Engineers (IEEE) approved a standard in January 2008 for managing encryption on storage devices. Now the vendors are working on bridging the two standards, according to Kevin Bocek, director of product marketing for Thales, so that if product developers want to roll the more-detailed IEEE spec into the more general OASIS spec, the two will be compatible. This interoperability will probably be more valuable to developers than to end users, he said, as the IEEE spec contains very granular details for developing products, down to specifying protocols. If engineers don’t have to re-invent the encryption wheel or ensure interoperability for each of their products, it could get products to market faster or free them to focus on other innovations, he said.

[Photo: Video screens showing 1000 DVD-quality movie streams being served from one of Fusion-io’s ioDrives, part of SNIA’s exhibits focused on SSD technology.]

SNIA SSD initiative finds ‘wide variability’ in SSD performance

The Storage Networking Industry Association (SNIA) had a few booths set up on the show floor focused on its Solid State Storage Initiative (SSSI), including a demo by Calypso Systems of benchmark comparisons between different vendors’ single-level cell (SLC) enterprise SSDs. CTO Easen Ho, on hand for the demonstration, walked me through bar graphs of performance results. The specific manufacturers’ names were not listed (no fun!), but it was easy to see the ‘stairsteps’ between the different results on the graph. Still, as I peered at the screen, it looked like they were all generally in the same ballpark. That is, until Ho pointed out to me that the y-axis of the graphs was actually a logarithmic scale. This was initially done to better compare the results against spinning disk drives, which otherwise “wouldn’t even be visible on these graphs,” Ho said.
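
Ho’s point about the axis is worth pausing on: on a linear scale, a 15K rpm drive’s few hundred IOPS would be a flat sliver next to SSDs posting tens of thousands. A quick illustration with made-up (but plausible) numbers:

```python
# Why the SSSI graphs use a log y-axis: with hypothetical numbers,
# a 15K rpm disk is nearly invisible next to SLC SSDs on a linear scale.
import matplotlib.pyplot as plt

devices = ["15K HDD", "SSD A", "SSD B", "SSD C"]
iops    = [300, 12_000, 25_000, 50_000]   # made-up random-read IOPS

fig, (lin, log) = plt.subplots(1, 2, figsize=(8, 3))
lin.bar(devices, iops)
lin.set_title("linear scale: HDD barely visible")
log.bar(devices, iops)
log.set_yscale("log")                      # the 'stairsteps' reappear
log.set_title("log scale")
fig.tight_layout()
plt.show()
```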

[Photo: Data Mobility Group analyst and StorageMojo blogger Robin Harris (left) looks for a ‘man on the street’ interview during a chat on the show floor with BlueArc director of corporate marketing Louis Gray.]

Here are the news stories filed from balmy Florida this week:


April 7, 2009  7:30 PM

Fusion-io shovels in $47.5M in fresh funding



Posted by: Dave Raffo
solid state drives

How does a startup get $47.5 million in funding in today’s economy?

“By being a leader in a pretty hot sector,” says David Flynn, CTO and one of the founders of Fusion-io, which closed its whopping Series B round today.

That hot sector is flash technology, which is rapidly making its way into enterprise storage. It also helps that Hewlett-Packard is an OEM partner, using Fusion-io’s ioMemory technology in the HP StorageWorks IO Accelerator NAND flash-based storage adapter. Fusion-io also has technology partnerships with IBM and Dell.

Flynn says the Series B funding will help Fusion-io “broadly address the market that’s growing at a break-neck pace, as well as to help us engineer our next-generation product.”

The first of these next-generation products is the ioSAN, due for release over the summer. Fusion-io describes the PCI Express-based ioSAN as server-deployed network-attached solid state storage.

“We’re working on what we call server-deployed network storage – putting components into the server to share high-performance storage over the network,” Flynn said.

Flynn says more than 400 customers are using Fusion-io cards, most of them large organizations running relational databases, web servers, or virtual machines.

Fusion-io also officially announced David Bradford as its CEO, although that move happened at least a month ago. Bradford spent 15 years at Novell, including three years reporting to current Google CEO Eric Schmidt. Bradford also brought Apple co-founder Steve Wozniak to Fusion-io as chief scientist a few months back.

Now that “Woz” is no longer dancing with the stars, he can devote more time to Fusion-io. Flynn says while Wozniak’s name recognition has helped Fusion-io generate attention, it wasn’t much of a factor in attracting new VC Lightspeed Venture Partners and getting more funding from Series A investors New Enterprise Associates (NEA), Dell Ventures and Sumitomo Ventures.

“It may help get attention, but don’t think it was substantial in closing the deal,” Flynn said. “I think our OEM relationship with HP, our deep relationship with IBM, the fact that Dell is an investor, and our huge market potential is the real essence of why we got funded.”


April 6, 2009  4:02 PM

Isilon cuts staff, switches sales VPs



Posted by: Dave Raffo
NAS, storage vendors

Isilon Systems finally took Wall Street’s advice about slashing staff today, revealing it would reduce its worldwide workforce by approximately 10%.

Isilon had 394 employees at the end of 2008. The clustered NAS vendor is also changing its chief of sales, bringing in NetApp and Quantum veteran George Bennett to replace Steve Fitz as SVP of worldwide field operations.

Financial analysts have called on Isilon to cut staff since founder Sujal Patel took over as CEO in September of 2007. The company hasn’t had a profitable quarter since going public in 2006.

Patel gave up his resistance to the cuts, and estimated the reduction will cost $850,000 this quarter and then save the company $4 million annually.

“It’s clear that persistent global economic weakness and uncertainty has led to contraction in many of our customers’ IT budgets,” Patel said in a press release today announcing the restructuring.

Isilon actually met sales expectations despite suffering wider losses than expected, according to the preliminary results it disclosed today. Isilon expects revenue in the range of $26.5 million to $27 million, up approximately 10% to 12% from the same period last year and down approximately 15% to 17% from the fourth quarter of 2008. Although Isilon did not give a previous forecast for the quarter, financial analysts expected around $26.9 million in revenue and a loss of 12 cents per share. Isilon said it expects to lose 14 cents to 15 cents per share.

The company also took an inventory writedown of around $3.8 million on its older products, citing the softening economy and the anticipation of customers moving to the new products it launched last month.

“That means there was faster adoption of new product, but they were unable to get rid of the old solution, so that’s a mixed bag,” Enterprise Strategy Group analyst Brian Babineau said.


April 6, 2009  3:24 PM

3PAR brings 4 controllers to midrange, skips SSD for now



Posted by: Dave Raffo
disk arrays, storage system

3PAR’s new F-Class midrange systems that launched today look a lot like its T-Class enterprise systems, only smaller. So the F-Class inherits its share of enterprise features, as well as the gap in the T-Class platform.

The enterprise features include the ability to scale to four controllers – other midrange systems support two – and what 3PAR calls Mesh-Active controller nodes. The Mesh-Active architecture provides symmetrical access to all LUNs, instead of tying a LUN to one controller.
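
The contrast with a conventional dual-controller midrange box is easiest to see schematically: with asymmetric ownership, a LUN’s I/O always lands on its owning controller, while a mesh-active design lets any node service any LUN. A hypothetical sketch, not 3PAR’s actual code:

```python
# Schematic contrast between asymmetric LUN ownership and mesh-active
# symmetric access. Hypothetical illustration only.
import random

class AsymmetricArray:
    def __init__(self, controllers, luns):
        # each LUN is pinned to one owning controller
        self.owner = {lun: controllers[i % len(controllers)]
                      for i, lun in enumerate(luns)}

    def route(self, lun):
        return self.owner[lun]            # always the same controller

class MeshActiveArray:
    def __init__(self, controllers, luns):
        self.controllers = controllers    # every node can serve every LUN

    def route(self, lun):
        # random stand-in; a real array would pick the least-loaded node
        return random.choice(self.controllers)

ctrls = ["node0", "node1", "node2", "node3"]
asym = AsymmetricArray(ctrls, ["lun1", "lun2"])
mesh = MeshActiveArray(ctrls, ["lun1", "lun2"])
print(asym.route("lun1"))   # node0 every time; a hot LUN stays hot
print(mesh.route("lun1"))   # any of the four nodes can take the I/O
```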

At least one analyst, Evaluator Group managing partner Russ Fellows, gives 3PAR high grades for bringing those features to the midrange.

“By scaling to four controllers, they have twice as many as any midrange-class system out there,” he said. “And the system scales linearly, with symmetric LUN access. All high-end data center systems support symmetric LUN access across controllers, but pretty much nobody in the midrange does.”

However, 3PAR’s new midrange systems share the same missing feature as its enterprise systems. At a time when just about every new storage system rollout includes solid state drive (SSD) support, 3PAR still has none.

“We will support solid state, but the pressure to do so has been muted compared to what we’ve observed with other storage vendors,” 3PAR VP of marketing Craig Nunes says.

Nunes said when 3PAR gets into SSD, it will be with a SATA interface instead of the pricier FC-attached SSDs.

“If you need higher IOPS, we deliver that today with wide striping,” he said. “Fibre Channel-attached SSDs are premium priced and will never cross over the dollar per IOP line. Wide striping is better. The next wave of SATA-interfaced SSD drives promises to drop that IOP per dollar to the crossover point.”
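
Nunes’ argument is straightforward to put numbers on, though every figure below is a placeholder rather than a real quote – where the crossover actually lands depends entirely on the prices you plug in. Note, too, that raw dollars-per-IOPS ignores that a striped pool of spindles delivers capacity along with its IOPS:

```python
# Back-of-envelope dollars-per-IOPS math behind Nunes' argument.
# Every price and IOPS figure here is a hypothetical placeholder;
# plug in real quotes to see where the crossover actually lands.

def dollars_per_iops(price_usd, iops):
    return price_usd / iops

# Wide striping: IOPS scale with spindle count, but so does cost, so
# $/IOPS stays flat -- what you also get is capacity, which this
# single metric ignores.
hdd_price, hdd_iops, spindles = 500, 200, 100
striped = dollars_per_iops(hdd_price * spindles, hdd_iops * spindles)

fc_ssd   = dollars_per_iops(15_000, 20_000)   # premium FC-attached SSD
sata_ssd = dollars_per_iops(2_000, 10_000)    # next-wave SATA SSD

print(f"wide-striped HDDs: ${striped:.2f}/IOPS")   # $2.50
print(f"FC SSD:            ${fc_ssd:.2f}/IOPS")    # $0.75
print(f"SATA SSD:          ${sata_ssd:.2f}/IOPS")  # $0.20
```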

Although competitors offer SSDs in the midrange as well as the enterprise, Fellows says lack of SSD support hurts 3PAR more with its enterprise systems than with the F-Class. “In the midrange, you add more than six SSDs and you have a million-dollar system, and that’s not a midrange system anymore,” he said. “That will change in a year or two. But it is a bit of a drawback now in the high end.”


April 6, 2009  12:50 PM

Double-Take repackages products after emBoot acquisition



Posted by: Beth Pariseau
remote data protection, Strategic storage vendors, Virtualization strategies

Double-Take is consolidating its server-based replication, TimeData CDP, GeoCluster software for Microsoft stretched clusters, LiveWire bare-metal restore, and emBoot’s netboot/i iSCSI boot-from-SAN and sanFly iSCSI target products into four titles:

  • Double-Take Move – supports the conversion of workloads between any combination of physical and virtual servers, while applications remain responsive to end users.
  • Double-Take Flex – Packages netboot/i and sanFly so that servers can be moved to a centrally managed iSCSI SAN and be booted from it.
  • Double-Take Backup – combines TimeData and LiveWire for CDP, granular or full-server recovery of Windows systems.
  • Double-Take Availability – combines previously separate Double-Take titles, for Linux, Windows, VMware Infrastructure and Hyper-V.

Double-Take director of solutions engineering Bob Roudebush says Double-Take Move and Double-Take Flex are available now. Double-Take Backup and Double-Take Availability will be available by summer.

The long-term roadmap for the products is to add automation. “It might be more desirable sometimes to boot from SAN, and other times to boot locally, but they must be in sync,” Roudebush said. “That’s the vision for sure.”

In the meantime, user Paul Hurst, network administrator for the City of Airdrie in Alberta, Canada, said he’s looking forward to replacing Double-Take’s Virtual Recovery Assistant (VRA) with Double-Take Move. “The key is more efficient migrations [between physical and virtual servers],” he said. “With VRA, you could do one move, test it, and then start all over again. With Double-Take Move, I can do a test move without affecting the production box.”

