Storage Soup

April 21, 2009  3:02 PM

Broadcom turns up the heat on Emulex

Dave Raffo

Storage insiders predicted the Oracle-Sun deal would kick off a series of acquisitions, and now today chipmaker Broadcom is making a move on HBA vendor Emulex. Broadcom’s unsolicited offer of approximately $9.25 a share or $764 million is about a 40% premium over Emulex’s closing price of $6.61 yesterday.
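The "about 40%" figure checks out against the numbers quoted above. A quick sanity check (mine, not part of the original offer documents):

```python
# Figures as reported: $9.25/share offer, ~$764M total, $6.61 prior close.
offer_per_share = 9.25
prior_close = 6.61
offer_total = 764e6

premium = offer_per_share / prior_close - 1     # ~0.399, i.e. "about 40%"
implied_shares = offer_total / offer_per_share  # ~82.6 million shares

print(f"premium: {premium:.1%}, implied shares: {implied_shares / 1e6:.1f}M")
```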

Broadcom has actually been after Emulex for a while. When Emulex adopted a poison pill in January to defend itself from unwanted suitors, Broadcom was the unwanted suitor it had in mind. A letter that Broadcom CEO Scott McGregor sent to Emulex’s chairman Paul Folino and its directors today revisited that acquisition attempt:

“We were disappointed when, in early January, you responded that the company was not for sale and abruptly cut off the possibility of further discussions. Even more troubling was the fact that merely one week after that communication, you took actions clearly designed to thwart the ability of your shareholders to receive a premium for their shares. … It is difficult for us to understand why Emulex’s Board of Directors has not been open to consideration of a combination of our respective companies. We would much prefer to have engaged in mutual and constructive discussions with you. However this opportunity is in our view so compelling we now feel we must share our proposal publicly with your shareholders.”

McGregor went on in the letter to lay out Broadcom’s vision for single-chip converged network devices delivering Fibre Channel and Fibre Channel over Ethernet. He also laid out a case why it would benefit Emulex to accept the offer:

“Customers will demand from their suppliers advanced chip technology and supply chain scale and reliability which is not an area of strength for Emulex. Broadcom brings tremendous value in advanced chip technology and supply chain scale and reliability to Emulex’s products—and customers.”

McGregor’s letter also stated that Broadcom is taking legal action to declare Emulex’s poison pill invalid.

Broadcom has tried to make inroads in storage before. It has sold chips for FC switches, and a few years ago it developed a converged network interface controller (C-NIC) that combined a TCP/IP offload engine (TOE), an iSCSI HBA and remote direct memory access (RDMA) technology on one chip – a forerunner of the current FCoE CNAs, minus the Fibre Channel. However, Broadcom hasn’t been successful in storage, and today’s earnings report – it lost $92 million last quarter – shows it hasn’t been successful, period, lately.

The approach of FCoE could prompt more Ethernet companies to look for FC technology, the reverse of Brocade’s acquisition of Ethernet provider Foundry late last year.

“Broadcom doesn’t want to buy Emulex for its embedded switch business, it wants its Fibre Channel stack,” Wedbush Morgan research analyst Kaushik Roy says. “To compete, you’ll need a Fibre Channel stack. And if Juniper has half a brain they will buy QLogic, although Juniper’s never been known for doing a lot of acquisitions.”

Roy says Emulex may use its poison pill to negotiate an even better deal, but he said the time could be right to sell. For years, Emulex and QLogic have had a duopoly for HBAs but there will be greater competition as FCoE takes hold.

“There are a lot of players getting into FCoE, Emulex’s revenues and margins will be under pressure,” Roy said.

In a note to clients today, Stifel Nicolaus Equity Research analyst Aaron Rakers indicated that Emulex has fallen behind QLogic in developing FCoE technology. “We believe [Emulex] would face some strategic and fundamental challenges going forward with regard to its positioning in blade servers, our belief that QLogic is better positioned in FCoE, and continued secular headwinds in its Embedded Storage Product (ESP) division,” Rakers wrote.

April 17, 2009  8:24 AM

04-16-2009 Storage Headlines

Beth Pariseau

All the news that’s fit to read aloud for this week –

Stories referenced:

April 16, 2009  5:32 PM

Samsung adds encryption to consumer SSDs; Dell to ship in notebooks

Beth Pariseau

Samsung is claiming it’s the first to ship a consumer solid state drive (SSD) with full-disk encryption (FDE) through a new partnership with security vendor Wave Systems Corp. The 256GB, 128GB, and 64GB SSDs will be available in both 1.8-inch and 2.5-inch form factors. Dell says it will ship the drives in its Latitude line of desktops and notebooks.

Samsung’s drives generate and store encryption keys and access credentials in the drive hardware; they are never held in the operating system or by application software. When ordered in a new computer, the drives will come bundled with Wave’s Embassy Trusted Drive Manager software for life cycle management of the drive. The software includes pre-boot authentication, enrolls drive administrators and users, and enables backup of drive credentials. Available separately, Wave’s Embassy Remote Administration Server allows an IT administrator to remotely turn on SSDs and adds event logs for compliance.

It probably won’t be long before full-disk encryption also hits the enterprise SSD space. It’s already working its way in on the spinning-disk side, where it’s being pushed by drive maker Seagate, controller maker LSI and systems vendor IBM. Multiple converging standards for key management are also being developed for the enterprise.

April 16, 2009  3:39 PM

FCoE infrastructure coming together

Dave Raffo

Cisco provided more details of its Unified Computing System today, including pieces of its FCoE strategy.

The UCS building blocks include 6100 Series Fabric Interconnects, which Cisco calls “FCoE capable.” The UCS Manager sits on the Fabric Interconnects, which can be clustered for high availability. The Fabric Interconnects use the same ASIC as Cisco’s Nexus switches and connect to Fibre Channel and 10-Gigabit Ethernet switches.

Cisco will also offer FCoE converged network adapters (CNAs) from QLogic, Emulex, and Intel inside its UCS blade servers. Shockingly, Brocade’s recently launched CNAs don’t fit into Cisco’s plans.

Brocade’s recent rollout of CNAs and FCoE switches and the Cisco UCS devices set to roll out around June serve as a further reminder that the FCoE puzzle is coming together. Storage vendors are slowly getting into the act, too. EMC’s new Symmetrix V-Max system has native FCoE support. NetApp has pledged native FCoE support for its arrays and supports the protocol now through a free upgrade to its Data ONTAP operating system.

NetApp and EMC have been the only storage array vendors to address FCoE so far, but Cisco’s director of product management for the Server Access and Virtualization Group Paul Durzan says “We’re working with all the major storage vendors. We don’t intend to be exclusive of other people.” Durzan says that list includes storage systems from Cisco’s new server rivals, IBM and Hewlett-Packard.

From storage vendors’ perspective, however, early support for FCoE amounts mostly to future-proofing their systems.

“The important thing for us is to support it now,” says Dave Donatelli, president of EMC’s storage division. “Typically, these are gradual transitions that take time. I don’t think you’ll see mainstream use before 2010.”

StorageIO Group analyst Greg Schulz says people who actually use FCoE now are “either getting a really good deal, among the Cisco faithful, or like to try things early” but says storage and network admins definitely have the converged protocol on their radar.

“We’re about ready for the real game to begin with FCoE,” Schulz says. “You can make the technology case, but how do you pay for it? Is it something you want to have, or something you need to have? That’s what people are asking now.”

April 14, 2009  5:37 PM

EMC V-Max: V stands for bigger

Dave Raffo

Long before today’s official launch, just about anybody who cares about enterprise storage knew EMC would roll out its new Symmetrix system today during a series of webcasts. Yet EMC never used the word Symmetrix in all its hype about the launch. According to EMC, it was all about the “virtual data center of the future.”

So now we know the new Symmetrix is the V-Max – or virtual matrix – and not the DMX-5. But what makes this a system for the virtual world and the DMX-4 for the “physical world” as EMC’s storage division president Dave Donatelli puts it?

EMC CEO Joe Tucci likens the new Sym to a block-storage version of the object-based Atmos system EMC rolled out last year. In other words, it’s an internal cloud storage system, which is a new way of saying virtualized storage.

According to EMC, V-Max is a storage virtualization system because it makes all its storage look like one pool, automatically migrates data between systems and arrays, and simplifies management with features such as thin provisioning and clustered nodes.

But the biggest difference between V-Max and other systems is really size and scale.

The virtualization features in V-Max aren’t new to the industry. 3PAR’s InServ systems support clustering of eight controllers. Compellent Technologies has software for moving data between solid state and hard drives, and Atrato will have it in a few weeks. Hitachi Data Systems has supported pooled storage in its arrays – and other vendors’ arrays – for years. But EMC says none of those systems scale to the level of V-Max or perform as fast. And that goes for the DMX-4, too.

When asked how it would be positioned against the DMX-4, Donatelli said the V-Max “has up to three times the capacity of DMX-4, and up to three times the performance. Clearly this will take over the high end of the product line.”

So, if you’re considering a V-Max, ask yourself if you need a bigger faster system with a bigger price tag. That’s easier than trying to decide if your data center resides in the virtual or physical world.

April 14, 2009  4:35 PM

EMC launches Symmetrix V-Max, may add spin-down

Beth Pariseau

EMC Corp. had a virtual press conference this morning to announce the new Symmetrix V-Max high-end disk array. DMX-4 will remain on the market, but the new distributed architecture and software updates have EMC claiming V-Max is faster and more scalable.

The president of EMC’s storage division, Dave Donatelli, said during a conference call with press this morning that the vendor is “contemplating spin-down” for the new Symmetrix, though he did not commit to a time frame.

I also asked Donatelli a question I’ve had on my mind for a while with regard to EMC. Back in the fall of 2007, EMC revealed at a customer event that it was developing a universal backup and archiving appliance built on industry-standard components, which would be given a ‘personality’ by EMC’s different software modules. A centralized management GUI for all backup and replication processes was also discussed. The first steps toward this may have come in the form of Avamar / Networker integration; at last year’s EMC World, execs told me they acquired WysDM to make that company’s software a centralized management framework for backup and archiving.

Then came the Clariion CX-4, which added high-availability features and scaled well into Symmetrix range. It wasn’t necessarily cannibalism yet, but the increasing overlap was notable. As EMC has talked more and more about becoming a software company over the last few years – combined with the backup and archiving appliance plans, and other subtle signs of convergence between the systems, like the redesign of disk trays that could fit into either CX or DMX – I began to wonder whether EMC was planning a similar melding and commoditization of primary/secondary storage hardware, with different software to give it different “personalities.”

EMC officials have been on the coy side in talking about this. The picture has gotten a little clearer with the announcement of V-Max, which adds multicore Intel x86 processors – a first in the high-end disk array space, as noted by IDC’s Benjamin Woo, and a first for the Symmetrix line. EMC put a heavy emphasis on the software side of V-Max as well; most of the performance improvements and new features come from a complete reworking of the Enginuity OS software that runs Symmetrix. A software-based approach to pools of devices – i.e., the “VMwareization” of Symm – further commoditizes the hardware and further relies on software to give a machine its “personality”…

During today’s Q&A with Donatelli, I asked if that is, in fact, EMC’s plan – whether what we think of today as Clariion, Celerra and Symm might one day be distinguished by software rather than by different hardware. He said that while CX and V-Max both use x86 processors, they’re different kinds of processors – in fact, they differ across the Clariion models as well as between CX and V-Max. V-Max also uses custom ASICs for its virtual matrix scale-out. “We still see a difference between the high-end world and the midtier world,” he said.

But he didn’t say for how long…

April 14, 2009  2:13 PM

NetApp to pay $128 million to settle GSA probe

Beth Pariseau

In an SEC filing Monday, NetApp disclosed it has paid $128 million to the federal government to settle a Department of Justice probe into its contracting activity with the General Services Administration (GSA).

EMC has also disclosed it’s the target of a similar probe, and late last fall it was also reported that several other IT vendors have been targeted by the DOJ over pricing, including Sun Microsystems, Canon and Cisco.

In exchange for the settlement, according to the filing, “the parties to the Agreement have agreed to release NetApp with respect to the claims alleged in the investigation as set forth in the Agreement. The Agreement reflects neither an admission nor denial by NetApp of any of the claims alleged by the DOJ, and represents a compromise to avoid continued litigation and associated risks.” NetApp “recorded a $128.0 million accrual for this contingency in the third quarter of fiscal 2009.”

April 13, 2009  7:06 PM

SpiderOak offers discount to Carbonite users, says SLAs on the way

Beth Pariseau

When consumer backup SaaS provider Carbonite sued its storage vendor, Promise, over systems Carbonite alleges lost customer data, ESG founder and analyst Steve Duplessie wrote a blog post urging enterprise users to ask tough questions of backup service providers to winnow out those prepared to offer enterprise-level services. In particular: What does your infrastructure look like – what failsafe mechanisms are in place to prevent data loss? And what service level agreements (SLAs), if any, are provided?

When Carbonite backup SaaS rival SpiderOak came along with a pitch for me about how they’re a) more reliable and secure than Carbonite and b) welcoming Carbonite customers with a 20% discount on a year’s service for switching, I decided to try out those questions on them. What followed was an interesting discussion.

SpiderOak CEO Ethan Oberman says SpiderOak, unlike Carbonite, assembles its own storage systems out of commodity servers and disk drives, purchasing individual components and assembling them under the company’s proprietary storage clustering software. “We don’t rely on a third party pre-assembled storage system” as Carbonite did with Promise, Oberman said. But does putting together its own storage systems make SpiderOak’s more reliable? Not necessarily.

(Side note: SpiderOak isn’t alone here. While many storage vendors are betting their future on selling pre-built systems to cloud service providers, the pitch I hear from those service providers is that their service is more reliable / more secure / better performing because they built it themselves.)

So if we take the claim that home-built is better at face value, let’s say I was a Carbonite user who lost data, and now I’m looking to switch providers. Assuming I haven’t been totally turned off on the idea of SaaS in general, I think I’d still like to see something definitive in writing from my new prospective vendor, regardless of that vendor’s data center architecture, about data loss and what it’s prepared to offer me on that front.

It took quite a while before our conversation today progressed to the point where we could concede that although data loss is highly, highly, highly unlikely, it theoretically can happen. One of the reasons SpiderOak doesn’t address that possibility outright is that it doesn’t want that possibility in users’ minds. “We take this very, very seriously,” Oberman said. “Losing customer data in this market basically means going out of business.”

But as Duplessie put it, “I know things break. What I don’t know is how often they break, or why, and most importantly – what you do about it.”

Oberman said SpiderOak would probably do the right thing and give consumers their money back in the event of their data being lost. “It’s just ethical business practices,” he said. “We stand behind our product.”

Would he put that in writing?

Well, that opened up another can of worms. SpiderOak, Carbonite, and other consumer-grade backup SaaS vendors don’t offer SLAs or even formal written guarantees about data loss, in part, Oberman said, because of a fear of predatory lawsuits in the consumer world. Why these are more prevalent among consumers than among businesses remains unclear to me, but SpiderOak claims that’s what its legal counsel says. Also, it’s not as easy to assign a value to consumer data as it is to corporate data attached to billable hours in order to institute hard financial penalties, and SLAs make the whole service more expensive, Oberman claimed.

For its consumer/SOHO service, SpiderOak’s focus is on cost – it charges about $5 to $10 per month. “Those are pretty cheap numbers – so cheap, in fact, that we can’t offer geographic redundancy economically,” he said. To provide SLA-worthy redundancy, the cost of the service would have to go up. This is something SpiderOak is planning to do by this summer with the launch of a new enterprise-focused backup service, which will be about four times as expensive as the current offering.

In the meantime, Oberman suggested that users attracted to the cost but concerned with the reliability of consumer/SOHO services could theoretically treat them like some companies do internet service providers (ISPs), and deploy two or three of the cheaper services for DIY redundancy.  “There does seem to be a gap” between expensive fully-redundant enterprise services and cheaper but less resilient consumer/SOHO services in the market right now, he added.
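The DIY-redundancy idea Oberman floats is simple enough to sketch. This is a toy illustration with made-up provider names, not a real client for any of these services:

```python
# Hypothetical sketch of DIY redundancy across cheap backup services: write
# every backup to all configured services; a restore succeeds if any copy survives.
providers = {"service_a": {}, "service_b": {}, "service_c": {}}  # stand-ins for real SaaS accounts

def backup(path: str, data: bytes) -> None:
    for store in providers.values():
        store[path] = data

def restore(path: str) -> bytes:
    for store in providers.values():
        if path in store:
            return store[path]
    raise FileNotFoundError(path)

backup("photos.zip", b"family photos")
del providers["service_a"]["photos.zip"]          # one cheap provider loses the file
print(restore("photos.zip") == b"family photos")  # True: another copy survives
```

The tradeoff, of course, is paying two or three subscriptions and running multiple backup clients to approximate what an enterprise SLA would guarantee outright.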

April 9, 2009  5:03 PM

Sights and bites from SNW

Beth Pariseau

A band entertains at the SNW welcome reception Monday night.

SNW Orlando 2009 wraps up today, and the economic downturn was the elephant in practically every room at the show. It was the main topic of discussion, a factor in debates over technology trends, and a subliminal part of the background as the vendor presence dropped dramatically compared to previous shows. It was difficult to get a handle on exactly how many attendees there were–a wide range of numbers was floating around– but many of the vendor reps and analysts I ran into at the show exclaimed over how quiet it seemed compared to past years. 

Most striking to me was the noise (or lack thereof) around the press room, in previous years a bustling hub of activity as armies of vendor marketing directors and PR reps briefed an equally large cadre of analysts and press. The room was smaller than usual this year, and empty at times.

However, there were still some interesting discussions going on around the show, including sneak previews of interesting upcoming product announcements.

NEC eyes content-aware dedupe

NEC’s HydraStor backup and archiving grid system will soon put a new twist on its block-level dedupe, according to NEC director of product management and technical marketing Gideon Senderov. He said the vendor is working on content-aware deduplication, believing it can lead to better dedupe ratios for customers. “Backup applications insert their own metadata,” he said. “Depending on how they aggregate files, you may have different metadata within them. Similar chunks can sometimes look different.” Filtering application metadata from files requires integration with multiple backup apps. Sepaton’s DeltaStor VTLs already take this approach.
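As a rough sketch of the idea Senderov describes – not NEC’s actual implementation, and with hypothetical header contents and chunk sizes – stripping application metadata before chunking lets identical payloads from different backup apps dedupe against each other:

```python
import hashlib

def chunks(data: bytes, size: int = 8):
    """Naive fixed-size chunking; real dedupe engines use variable-size chunks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def dedupe_ratio(streams, strip_metadata: bool = False) -> float:
    """Ratio of total chunks to unique chunks across all backup streams."""
    total, unique = 0, set()
    for metadata, payload in streams:
        data = payload if strip_metadata else metadata + payload
        for c in chunks(data):
            total += 1
            unique.add(hashlib.sha256(c).hexdigest())
    return total / len(unique)

# Two backup apps wrap the same file payload in different-length headers,
# which shifts chunk boundaries so identical data hashes differently.
payload = b"identical file contents repeated" * 4
streams = [(b"APP-A-HEADER", payload), (b"HDR-B", payload)]

print(dedupe_ratio(streams))                       # headers misalign the chunks: low ratio
print(dedupe_ratio(streams, strip_metadata=True))  # payloads align: ratio of 8.0 here
```

As the post notes, doing this for real means the dedupe system has to understand each backup application's metadata format, hence the required integration work.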

3PAR’s clever marketing campaign for its new InServ F-class arrays. I wonder whether the system name or the slogan came first. 

HP has blade plans for LeftHand

LeftHand Networks’ SANiQ IP SAN software will soon be ported to HP’s blade servers, according to Lee Johns, marketing director for entry storage, HP StorageWorks. It’s part of an overall “converged infrastructure” trend for HP, which envisions storage as a network service centrally managed by software. The company is also preparing a software framework, based on its 2007 acquisition of Opsware Inc., to centrally manage different kinds of storage devices along with servers.

The relatively sleepy show was not without its amenities, including air hockey at the welcome reception.

Brocade CTO talks FCoE

Brocade’s big announcement at the show was the rollout of its first Fibre Channel over Ethernet (FCoE) products, a top-of-rack switch and a converged network adapter (CNA). The company has taken a less bullish attitude toward the technology than its rival Cisco, releasing its top-of-rack switch several months after Cisco released its Nexus FCoE product line.

CTO David Stevens and product manager Pompey Nagra went over the details of the technology with me, as well as its value proposition, which the cynical among us might see as an attempt for FC vendors to stay relevant as 10 GbE threatens to eat their lunch. Stevens pointed out that though FCoE, like 10 GbE, requires a swap-out of switching equipment in the data center, FC storage assets can remain the same. Even though convergence-enhanced Ethernet will bring the protocol more into line with FC as a lossless network with some flow controls, FC offers services like zoning and multipathing that will still be important to storage administrators, Stevens said. He also dismissed the idea that FC will fall out of favor once it’s slower than Ethernet (currently, FC is at 8 Gbps; Ethernet’s looking to move to 10 Gbps). “Maybe long-term,” he said. “But is that big a technology shift for the traditionally risk-averse storage community really worth two more gigs per second?”

Stevens said the focus for FCoE should really be just on cutting down on the number of wires in the data center. “The first-hop technology can all be combined while preserving assets in the infrastructure.”

However, Stevens also admitted this value proposition is similar to what has been promised by InfiniBand technologies, which have yet to see widespread adoption outside of high-performance computing (HPC) niches. Will FCoE be more successful because Ethernet is a more familiar interface than InfiniBand? I asked Stevens. “I don’t have a good answer for you there yet,” he said.

An open bar and darts–always a great combination.

Thales sees converging encryption standards

In February, Thales Group was part of a coalition of vendors that submitted a standard for interoperability between key management systems and encryption devices to the Organization for the Advancement of Structured Information Standards (OASIS) called the Key Management Interoperability Protocol (KMIP). If adopted, KMIP would mean users could attach almost any encrypting device to one preferred key management system, regardless of the vendors involved. Meanwhile, the Institute of Electrical and Electronics Engineers (IEEE) approved a standard in January 2008 for managing encryption on storage devices. Now the vendors are working on bridging between the two standards, according to Kevin Bocek, director of product marketing for Thales, so that if product developers want to roll the more-detailed IEEE spec into the more general OASIS spec, the two will be compatible. This interoperability will probably be more valuable to developers than end users, he said, as the IEEE spec contains very granular details for developing products, down to specifying protocols. If engineers don’t have to re-invent the encryption wheel or ensure interoperability for each of their products, it could get products to market faster or free them to focus on other innovations, he said.
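To see why a single shared interface matters, here’s a minimal and entirely hypothetical sketch – none of these class names come from the KMIP or IEEE specs – of a device that can unlock against any key manager speaking a common protocol:

```python
from dataclasses import dataclass

@dataclass
class KeyRecord:
    key_id: str
    material: bytes

class KeyManagementServer:
    """Stand-in for any key manager speaking a shared protocol such as KMIP."""
    def __init__(self):
        self._keys = {}

    def create(self, key_id: str) -> KeyRecord:
        record = KeyRecord(key_id, material=bytes(16))  # placeholder key material
        self._keys[key_id] = record
        return record

    def get(self, key_id: str) -> KeyRecord:
        return self._keys[key_id]

class EncryptingDevice:
    """A tape drive, disk array, or switch that only needs the shared interface."""
    def __init__(self, server: KeyManagementServer):
        self.server = server

    def unlock(self, key_id: str) -> bool:
        return len(self.server.get(key_id).material) > 0

server = KeyManagementServer()
server.create("tape-pool-7")
print(EncryptingDevice(server).unlock("tape-pool-7"))  # True, regardless of device vendor
```

The point of the standard is that the `EncryptingDevice` side never needs vendor-specific code for each key manager it might meet.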

Video screens showing 1000 DVD-quality movie streams being served from one of Fusion-io’s ioDrives, part of SNIA’s exhibits focused on SSD technology.

SNIA SSD initiative finds ‘wide variability’ in SSD performance

The Storage Networking Industry Association (SNIA) had a few booths set up on the show floor focused on its SNIA Solid-State Storage Initiative (SSSI), including a demo of benchmark comparisons between different vendors’ single-level cell (SLC) enterprise SSDs by Calypso Systems. CTO Easen Ho, on hand for the demonstration, walked me through bar graphs of performance results. The specific manufacturers’ names were not listed (no fun!), but it was easy to see the ‘stairsteps’ between the different results on the graph. Still, as I peered at the screen it looked like they were all generally in the same ballpark. That is, until Ho pointed out to me that the y axis of the graphs was actually a logarithmic scale. This was initially done to better compare the results against spinning disk drives, which otherwise “wouldn’t even be visible on these graphs,” Ho said.
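Ho’s point about the logarithmic axis is easy to reproduce: on a linear scale, a spinning disk’s bar rounds down to nothing next to the SSDs. The IOPS figures below are illustrative, not Calypso’s measured data:

```python
import math

# Illustrative random-read IOPS figures (not Calypso's measured data).
devices = {"15K RPM HDD": 300, "SLC SSD A": 20_000, "SLC SSD B": 35_000, "SLC SSD C": 50_000}

top = max(devices.values())
for name, iops in devices.items():
    linear_bar = round(40 * iops / top)                       # HDD rounds to a 0-char bar
    log_bar = round(40 * math.log10(iops) / math.log10(top))  # HDD stays clearly visible
    print(f"{name:12} linear |{'#' * linear_bar:<40}| log |{'#' * log_bar}|")
```

The flip side, as I saw on the show floor, is that a log scale also compresses real differences between the SSDs into visually similar bars.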

Data Mobility Group analyst and StorageMojo blogger Robin Harris (left) looks for a ‘man on the street’ interview during a chat on the show floor with BlueArc director of corporate marketing Louis Gray.

Here are the news stories filed from balmy Florida this week:

April 7, 2009  7:30 PM

Fusion-io shovels in $47.5M in fresh funding

Dave Raffo

How does a startup get $47.5 million in funding in today’s economy?

“By being a leader in a pretty hot sector,” says David Flynn, CTO and one of the founders of Fusion-io, which closed its whopping Series B round today.

That hot sector is flash technology, which is rapidly making its way into enterprise storage. It also helps that Hewlett-Packard is an OEM partner, using Fusion-io’s ioMemory technology in the HP StorageWorks IO Accelerator NAND flash-based storage adapter. Fusion-io also has technology partnerships with IBM and Dell.

Flynn says the Series B funding will help Fusion-io “broadly address the market that’s growing at a break-neck pace, as well as to help us engineer our next-generation product.”

The first of these next-generation products is the ioSAN, due for release over the summer. Fusion-io describes the PCI Express-based ioSAN as server-deployed network-attached solid state storage.

“We’re working on what we call server-deployed network storage – putting components into the server to share high-performance storage over the network,” Flynn said.

Flynn says more than 400 customers are using Fusion-io cards, most of them large organizations running relational databases, web servers, or virtual machines.

Fusion-io also officially announced David Bradford as its CEO, although that move happened at least a month ago. Bradford spent 15 years at Novell, including three years reporting to current Google CEO Eric Schmidt. Bradford also brought Apple founder Steve Wozniak to Fusion-io as chief scientist a few months back.

Now that “Woz” is no longer dancing with the stars, he can devote more time to Fusion-io. Flynn says while Wozniak’s name recognition has helped Fusion-io generate attention, it wasn’t much of a factor in attracting new VC Lightspeed Venture Partners and getting more funding from Series A investors New Enterprise Associates (NEA), Dell Ventures and Sumitomo Ventures.

“It may help get attention, but I don’t think it was substantial in closing the deal,” Flynn said. “I think our OEM relationship with HP, our deep relationship with IBM, the fact that Dell is an investor, and our huge market potential is the real essence of why we got funded.”
