Seagate is claiming the world’s first 5900-RPM low-power disk drive today with the 1 TB, 1.5 TB and 2 TB Barracuda LP series. Seagate claims its internal tests show the series draws 3.0 watts of power when idle and 5.6 watts of power when operating.
Seagate positions the drive against Samsung’s Eco-Green and Western Digital (WD)’s Caviar Green hard drives. Seagate’s testing shows the Caviar and Barracuda drives drawing 3.0 watts when idle, while Samsung’s drive tested at 4.0 watts. In Seagate’s testing, the operating power draw for Caviar – 5.72 watts – was roughly equivalent to Barracuda, while Samsung’s drive tested at 5.5 watts during operation.
Another test by Seagate using the PCMark05 performance benchmark shows the 5900-RPM drive with a performance score of 8444 to WD’s 7802 and Samsung’s 6579. (That’s 95 MBps for the Seagate drive, for those of you keeping score at home). Seagate product marketing manager Anne Haggar said the quirky RPM – most desktop drives run at 5400 or 7200 RPM – helps the drive “strike the optimum balance between performance and power.”
Seagate describes WD’s drive as 5400 RPM, but it may be that WD has just been more coy about its spindle speed. When the Caviar product launched in January, Caviar Green product manager Mojgan Pessian said the drive’s exact RPM–somewhere between 5400 and 7200–was not being disclosed.
In any event, consumers and SOHOs will have multiple low-power suppliers in the market. The Barracuda LP is not recommended for enterprise or SMB use; for the enterprise, Seagate markets the 2 TB Constellation product line.
MSRP for the 2 TB Barracuda LP is $358; for 1.5 TB, $156; for 1 TB, $118.
LSI Corp. has updated its Engenio 7900 storage system sold by IBM and others with new support for 8 Gbps FC, boosted capacity, and encryption services that include key management and firmware features to take advantage of full-disk encryption (FDE) drives from Seagate.
LSI, along with Seagate and IBM, has been talking about FDE for a couple of years now, but this is the first product LSI will ship that has the feature. The encryption is done by a specialized chip attached to the hard disk drive itself. Encryption can be used with a subset of drives within the array, which can also mix FC and SATA disks. Up to 448 disks can now be attached to the controller, double the previous capacity limit.
Before encrypted disk arrays are widely deployed, key management will probably need to be developed a little further. With this release, users don’t have to supply their own key management program; LSI is supplying key management through its SANtricity GUI. Every encrypted disk in this release uses the same key. Work is still being done to bring key management standards together so users can manage keys centrally within the data center.
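To illustrate why centralized key management matters (and why a single shared key is a limitation), here is a conceptual sketch of per-drive key wrapping under one master key. This is purely illustrative, not LSI’s or Seagate’s implementation; the class and function names are invented, and the toy XOR “wrap” stands in for a real key-wrap algorithm such as AES-KW (RFC 3394):

```python
import os
from hashlib import sha256

# Conceptual sketch of key wrapping for full-disk encryption management.
# NOT LSI's or Seagate's implementation; names are invented for illustration.
# Real systems use AES key wrap (RFC 3394) and hardware-held keys.

def xor_wrap(key: bytes, wrapping_key: bytes) -> bytes:
    """Toy 'wrap': XOR with a digest derived from the wrapping key.
    Stands in for AES-KW; do not use for real data."""
    stream = sha256(wrapping_key).digest()
    return bytes(a ^ b for a, b in zip(key, stream))

class KeyManager:
    """Holds one master key and wraps a unique data key per drive,
    so each drive is encrypted under its own key (unlike a single
    key shared across the array) while management stays centralized."""
    def __init__(self):
        self.master_key = os.urandom(32)
        self.wrapped_keys = {}  # drive serial -> wrapped data key

    def provision_drive(self, serial: str) -> bytes:
        data_key = os.urandom(32)
        self.wrapped_keys[serial] = xor_wrap(data_key, self.master_key)
        return data_key  # handed to the drive's encryption hardware

    def unlock_drive(self, serial: str) -> bytes:
        # Unwrapping is the inverse of wrapping (XOR is self-inverse here).
        return xor_wrap(self.wrapped_keys[serial], self.master_key)

km = KeyManager()
key = km.provision_drive("ST1000-0001")
assert km.unlock_drive("ST1000-0001") == key
```

The point of the design: rotating or escrowing the one master key covers every drive, which is what centralized, standards-based key management would buy data centers over today’s per-product schemes.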
Meanwhile, LSI has yet to add support for 10 GbE or FCoE to this array, but host interface cards can be swapped out of the 7900 without changing out the whole box. LSI director of product marketing Steve Gardner says FCoE won’t be ready for prime time until next year. “I think technological immaturity coupled with the economic downturn will slow adoption,” he said. He echoed Symantec CEO Enrique Salem in wondering aloud what the economic downturn will do to financial institutions which are normally early adopters for new technology.
“About a year ago, we started seeing interest in InfiniBand storage outside high-performance computing [HPC],” Gardner said. “Unfortunately, many of those interested were financial institutions with requirements for ‘enterprise HPC’,” he said.
As long as we were discussing FCoE, I was also reminded of my discussion with Brocade CTO David Stevens about the technical differences (or relative lack thereof) between the value proposition of InfiniBand vs. FCoE. Engenio’s 7900 already supports InfiniBand natively, so I asked Gardner as well.
“If FCoE has a better chance to succeed, it’ll be because of the [vendors] behind it, Cisco especially,” he said. “I don’t think it’s a technology question.”
IBM sells the LSI Engenio 7900 as the DS5000. Sun and SGI — recent acquisition targets — also sell the system under their brands.
Storage insiders predicted the Oracle-Sun deal would kick off a series of acquisitions, and now today chipmaker Broadcom is making a move on HBA vendor Emulex. Broadcom’s unsolicited offer of approximately $9.25 a share or $764 million is about a 40% premium over Emulex’s closing price of $6.61 yesterday.
Broadcom has actually been after Emulex for a while. When Emulex adopted a poison pill in January to defend itself from unwanted suitors, Broadcom was the unwanted suitor it had in mind. A letter that Broadcom CEO Scott McGregor sent to Emulex’s chairman Paul Folino and its directors today revisited that acquisition attempt:
“We were disappointed when, in early January, you responded that the company was not for sale and abruptly cut off the possibility of further discussions. Even more troubling was the fact that merely one week after that communication, you took actions clearly designed to thwart the ability of your shareholders to receive a premium for their shares. … It is difficult for us to understand why Emulex’s Board of Directors has not been open to consideration of a combination of our respective companies. We would much prefer to have engaged in mutual and constructive discussions with you. However this opportunity is in our view so compelling we now feel we must share our proposal publicly with your shareholders.”
McGregor went on in the letter to lay out Broadcom’s vision for single-chip converged network devices delivering Fibre Channel and Fibre Channel over Ethernet. He also laid out a case why it would benefit Emulex to accept the offer:
“Customers will demand from their suppliers advanced chip technology and supply chain scale and reliability which is not an area of strength for Emulex. Broadcom brings tremendous value in advanced chip technology and supply chain scale and reliability to Emulex’s products—and customers.”
McGregor’s letter also stated that Broadcom is taking legal action to declare Emulex’s poison pill invalid.
Broadcom has tried to make inroads in storage before. It has sold chips for FC switches, and a few years ago it developed a converged network interface (C-NIC) that combined a TCP/IP offload engine (TOE), an iSCSI HBA and remote direct memory access (RDMA) technology on one chip – a forerunner of the current FCoE CNAs, minus the Fibre Channel. However, Broadcom hasn’t been successful in storage, and today’s earnings report – it lost $92 million last quarter – shows it hasn’t been successful lately, period.
The approach of FCoE could prompt more Ethernet companies to look for FC technology, the reverse of Brocade’s acquisition of Ethernet provider Foundry late last year.
“Broadcom doesn’t want to buy Emulex for its embedded switch business, it wants its Fibre Channel stack,” Wedbush Morgan research analyst Kaushik Roy says. “To compete, you’ll need a Fibre Channel stack. And if Juniper has half a brain they will buy QLogic, although Juniper’s never been known for doing a lot of acquisitions.”
Roy says Emulex may use its poison pill to negotiate an even better deal, but he said the time could be right to sell. For years, Emulex and QLogic have had a duopoly for HBAs but there will be greater competition as FCoE takes hold.
“There are a lot of players getting into FCoE, Emulex’s revenues and margins will be under pressure,” Roy said.
In a note to clients today, Stifel Nicolaus Equity Research analyst Aaron Rakers indicated that Emulex has fallen behind QLogic in developing FCoE technology. “We believe [Emulex] would face some strategic and fundamental challenges going forward with regard to its positioning in blade servers, our belief that QLogic is better positioned in FCoE, and continued secular headwinds in its Embedded Storage Product (ESP) division,” Rakers wrote.
All the news that’s fit to read aloud for this week –
Samsung is claiming it’s the first to ship a consumer solid state drive (SSD) with full-disk encryption (FDE) through a new partnership with security vendor Wave Systems Corp. The 256GB, 128GB, and 64GB SSDs will be available in both 1.8-inch and 2.5-inch form factors. Dell says it will ship the drives in its Latitude line of desktops and notebooks.
Samsung’s drives generate and store encryption keys and access credentials in the drive hardware; they are never held in the operating system or by application software. When ordered in a new computer, the drives will come bundled with Wave’s Embassy Trusted Drive Manager software for life cycle management of the drive. The software includes pre-boot authentication, enrolls drive administrators and users, and enables backup of drive credentials. Available separately, Wave’s Embassy Remote Administration Server allows an IT administrator to remotely turn on the SSDs’ encryption and adds event logs for compliance.
It probably won’t be long before full-disk encryption also hits the enterprise SSD space. It’s already working its way in on the spinning-disk side, where it’s being pushed by drive maker Seagate, controller maker LSI and systems vendor IBM. Multiple converging standards for key management are also being developed for the enterprise.
Cisco provided more details of its Unified Computing System today, including pieces of its FCoE strategy.
The UCS building blocks include 6100 Series Fabric Interconnects, which Cisco calls “FCoE capable.” The UCS Manager sits on the Fabric Interconnects, which can be clustered for high availability. The Fabric Interconnects use the same ASIC as Cisco’s Nexus switches and connect to Fibre Channel and 10-Gigabit Ethernet switches.
Cisco will also offer FCoE converged network adapters (CNAs) from QLogic, Emulex, and Intel inside its UCS blade servers. Shockingly, Brocade’s recently launched CNAs don’t fit into Cisco’s plans.
Brocade’s recent rollout of CNAs and FCoE switches and the Cisco UCS devices set to roll out around June serve as a further reminder that the FCoE puzzle is coming together. Storage vendors are slowly getting into the act, too. EMC’s new Symmetrix V-Max system has native FCoE support. NetApp has pledged native FCoE support for its arrays and supports the protocol now through a free upgrade to its Data OnTap operating system.
NetApp and EMC have been the only storage array vendors to address FCoE so far, but Cisco’s director of product management for the Server Access and Virtualization Group Paul Durzan says “We’re working with all the major storage vendors. We don’t intend to be exclusive of other people.” Durzan says that list includes storage systems from Cisco’s new server rivals, IBM and Hewlett-Packard.
From storage vendors’ perspective, however, early support for FCoE amounts mostly to future-proofing their systems.
“The important thing for us is to support it now,” says Dave Donatelli, president of EMC’s storage division. “Typically, these are gradual transitions that take time. I don’t think you’ll see mainstream use before 2010.”
StorageIO Group analyst Greg Schulz says people who actually use FCoE now are “either getting a really good deal, among the Cisco faithful, or like to try things early” but says storage and network admins definitely have the converged protocol on their radar.
“We’re about ready for the real game to begin with FCoE,” Schulz says. “You can make the technology case, but how do you pay for it? Is it something you want to have, or something you need to have? That’s what people are asking now.”
Long before today’s official launch, just about anybody who cares about enterprise storage knew EMC would roll out its new Symmetrix system today during a series of webcasts. Yet EMC never used the word Symmetrix in all its hype about the launch. According to EMC, it was all about the “virtual data center of the future.”
So now we know the new Symmetrix is the V-Max – or virtual matrix – and not the DMX-5. But what makes this a system for the virtual world and the DMX-4 for the “physical world” as EMC’s storage division president Dave Donatelli puts it?
EMC CEO Joe Tucci likens the new Sym to a block-storage version of the object-based Atmos system EMC rolled out last year. In other words, it’s an internal cloud storage system, which is a new way of saying virtualized storage.
According to EMC, V-Max is a storage virtualization system because it makes all its storage look like one pool, it can automatically migrate data between systems and arrays, and simplifies management with features such as thin provisioning and clustered nodes.
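One of those features, thin provisioning, is easy to see in miniature: a volume advertises a large logical size but only consumes backing capacity as blocks are actually written. This is a conceptual sketch, not EMC’s implementation; the class name and block size are invented for illustration:

```python
# Conceptual sketch of thin provisioning: a volume advertises a large
# logical size but only consumes backing blocks as they are written.
# Not EMC's implementation; the names here are purely illustrative.

class ThinVolume:
    BLOCK_SIZE = 512  # bytes; an assumed block size for the sketch

    def __init__(self, logical_blocks: int):
        self.logical_blocks = logical_blocks
        self.allocated = {}  # logical block number -> data

    def write(self, block: int, data: bytes):
        if not 0 <= block < self.logical_blocks:
            raise IndexError("block out of range")
        self.allocated[block] = data  # backing storage grows on demand

    def read(self, block: int) -> bytes:
        # Unwritten blocks read back as zeros, costing no real capacity.
        return self.allocated.get(block, b"\x00" * self.BLOCK_SIZE)

    @property
    def consumed_blocks(self) -> int:
        return len(self.allocated)

vol = ThinVolume(logical_blocks=1_000_000)  # large advertised size
vol.write(42, b"x" * 512)
assert vol.consumed_blocks == 1  # only written blocks consume space
```

The appeal for administrators is the same at array scale: hosts see the capacity they were promised, while the array defers buying and allocating physical disk until data actually lands.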
But the biggest difference between V-Max and other systems is really size and scale.
The virtualization features in V-Max aren’t new to the industry. 3PAR’s InServ systems support clustering of eight controllers. Compellent Technologies has software for moving data between solid state and hard drives, and Atrato will have it in a few weeks. Hitachi Data Systems has supported pooled storage in its arrays – and other vendors’ arrays – for years. But EMC says none of those systems scale to the level of V-Max or perform as fast. And that goes for the DMX-4, too.
When asked how it would be positioned against the DMX-4, Donatelli said the V-Max “has up to three times the capacity of DMX-4, and up to three times the performance. Clearly this will take over the high end of the product line.”
So, if you’re considering a V-Max, ask yourself if you need a bigger faster system with a bigger price tag. That’s easier than trying to decide if your data center resides in the virtual or physical world.
EMC Corp. had a virtual press conference this morning to announce the new Symmetrix V-Max high-end disk array. DMX-4 will remain on the market, but the new distributed architecture and software updates have EMC claiming V-Max is faster and more scalable.
The president of EMC’s storage division, Dave Donatelli, said during a conference call with press this morning that the vendor is “contemplating spin-down” for the new Symmetrix, though he did not commit to a time frame.
I also asked Donatelli a question I’ve had on my mind for a while with regard to EMC. Back in the fall of 2007, EMC revealed at a customer event that it was developing a universal backup and archiving appliance built on industry-standard components, which would be given a ‘personality’ by EMC’s different software modules. A centralized management GUI for all backup and replication processes was also discussed. The first steps toward this may have come in the form of Avamar / Networker integration; at last year’s EMC World, execs told me they acquired WysDM to make that company’s software a centralized management framework for backup and archiving.
Then came the Clariion CX-4, which added high-availability features and scaled well into Symmetrix range. It wasn’t necessarily cannibalism yet, but the increasing overlap was notable. As EMC talked more and more about becoming a software company over the last few years – and given the backup and archiving appliance plans, plus other subtle signs of convergence between the systems, like the redesign of disk trays to fit into either CX or DMX – I began to wonder whether EMC was planning a similar melding and commoditization of primary/secondary storage hardware, with different software to give it different “personalities.”
EMC officials have been on the coy side in talking about this. The picture has gotten a little clearer with the announcement of V-Max, which adds multicore Intel x86 processors – a first in the high-end disk array space, as noted by IDC’s Benjamin Woo, and a first for the Symmetrix line. EMC put a heavy emphasis on the software side of V-Max as well; most of the performance improvements and new features come from a complete reworking of the Enginuity OS software that runs Symmetrix. A software-based approach to pools of devices – i.e., the “VMwareization” of Symm – further commoditizes the hardware, further relies on software to give a machine “personality”…
During today’s Q&A with Donatelli, I asked if that is, in fact, EMC’s plan–if today what we think of as Clariion, Celerra and Symm might one day be distinguished by software rather than different hardware. He said that while CX and V-Max both use x86 processors, they’re different kinds of processors–in fact, different across the different Clariion models as well as different among CX and V-Max. V-Max also uses custom ASICs for its virtual matrix scale-out. “We still see a difference between the high-end world and the midtier world,” he said.
But he didn’t say for how long…
In an SEC filing Monday, NetApp disclosed it has paid $128 million to the federal government to settle a Department of Justice probe into its contracting activity with the General Services Administration (GSA).
EMC has also disclosed it’s the target of a similar probe, and late last fall it was also reported that several other IT vendors have been targeted by the DOJ over pricing, including Sun Microsystems, Canon and Cisco.
In exchange for the settlement, according to the filing, “the parties to the Agreement have agreed to release NetApp with respect to the claims alleged in the investigation as set forth in the Agreement. The Agreement reflects neither an admission nor denial by NetApp of any of the claims alleged by the DOJ, and represents a compromise to avoid continued litigation and associated risks.” NetApp “recorded a $128.0 million accrual for this contingency in the third quarter of fiscal 2009.”
When consumer backup SaaS provider Carbonite sued its storage vendor, Promise, for systems Carbonite alleges lost customer data, ESG founder and analyst Steve Duplessie wrote a blog post urging enterprise users to ask tough questions of backup service providers to winnow out providers prepared to offer enterprise-level services. In particular: what does your infrastructure look like, and what failsafe mechanisms are in place to prevent data loss? And what service level agreements (SLAs) are provided, if any?
When Carbonite backup SaaS rival SpiderOak came along with a pitch for me about how they’re a) more reliable and secure than Carbonite and b) welcoming Carbonite customers with a 20% discount on a year’s service for switching, I decided to try out those questions on them. What followed was an interesting discussion.
SpiderOak CEO Ethan Oberman says SpiderOak, unlike Carbonite, assembles its own storage systems out of commodity servers and disk drives, purchasing individual components and assembling them under the company’s proprietary storage clustering software. “We don’t rely on a third party pre-assembled storage system” as Carbonite did with Promise, Oberman said. But does putting together its own storage systems make SpiderOak’s more reliable? Not necessarily.
(Side note: SpiderOak isn’t alone here. While many storage vendors are betting their futures on selling pre-built systems to cloud service providers, the pitch I hear from those service providers is that their service is more reliable / more secure / better performing because they built it themselves.)
So if we take the claim that home-built is better at face value, let’s say I was a Carbonite user who lost data, and now I’m looking to switch providers. Assuming I haven’t been totally turned off on the idea of SaaS in general, I think I’d still like to see something definitive in writing from my new prospective vendor, regardless of that vendor’s data center architecture, about data loss and what it’s prepared to offer me on that front.
It took quite a while before our conversation today progressed to the point where we could concede that although data loss is highly, highly, highly unlikely, it theoretically can happen. One of the reasons SpiderOak doesn’t address that possibility outright is because it doesn’t want that possibility in users’ minds. “We take this very, very seriously,” Oberman said. “Losing customer data in this market basically means going out of business.”
But as Duplessie put it, “I know things break. What I don’t know is how often they break, or why, and most importantly – what you do about it.”
Oberman said SpiderOak would probably do the right thing and give consumers their money back in the event of their data being lost. “It’s just ethical business practices,” he said. “We stand behind our product.”
Would he put that in writing?
Well, that opened up another can of worms. SpiderOak, Carbonite, and other consumer-grade backup SaaS vendors don’t offer SLAs or even formal written guarantees about data loss, in part, Oberman said, because of a fear of predatory lawsuits in the consumer world. Why these are more prevalent among consumers than among businesses remains unclear to me, but SpiderOak claims that’s what its legal counsel says. Also, it’s not as easy to assign a value to consumer data as it is to corporate data attached to billable hours in order to institute hard financial penalties, and SLAs make the whole service more expensive, Oberman claimed.
For its consumer/SOHO service, SpiderOak’s focus is on cost–it charges about $5 to $10 per month. “Those are pretty cheap numbers–so cheap, in fact, that we can’t offer geographic redundancy economically,” he said. To provide SLA-worthy redundancy, the cost of the service would have to go up. This is something SpiderOak is planning to do by this summer with the launch of a new enterprise-focused backup service, which will be about four times as expensive as the current offering.
In the meantime, Oberman suggested that users attracted to the cost but concerned with the reliability of consumer/SOHO services could theoretically treat them like some companies do internet service providers (ISPs), and deploy two or three of the cheaper services for DIY redundancy. “There does seem to be a gap” between expensive fully-redundant enterprise services and cheaper but less resilient consumer/SOHO services in the market right now, he added.
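Oberman’s DIY-redundancy idea can be sketched in a few lines. This is a conceptual illustration, not any vendor’s client: the “providers” here are plain local directories standing in for hypothetical upload targets, and the function name is invented. A real version would call each service’s own backup client or API:

```python
import hashlib
from pathlib import Path

# Conceptual sketch of DIY backup redundancy across multiple services.
# The "providers" are local directories standing in for hypothetical
# upload clients; real services would need their own APIs or clients.

def backup_to_all(source: Path, providers: list[Path]) -> dict[str, bool]:
    """Copy a file to every provider and verify each copy by checksum,
    so one provider losing data still leaves intact copies elsewhere."""
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    results = {}
    for dest_dir in providers:
        dest_dir.mkdir(parents=True, exist_ok=True)
        dest = dest_dir / source.name
        dest.write_bytes(source.read_bytes())
        copied = hashlib.sha256(dest.read_bytes()).hexdigest()
        results[str(dest_dir)] = (copied == digest)
    return results
```

Two or three $5-to-$10 services run this way still cost far less than an SLA-backed enterprise offering, which is exactly the gap Oberman describes.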