Storage Soup


May 21, 2010  7:46 PM

HP to resell DataDirect Networks for HPC

Beth Pariseau

Hewlett-Packard Co. added another scale-out NAS system to its portfolio yesterday when it announced that DataDirect Networks’ (DDN) S2A9900 disk array will be bundled with the Lustre File System and resold by HP’s Scalable Computing and Infrastructure (SCI) group.

HP began collecting scale-out file systems when it acquired PolyServe in 2007, then saw some false starts with its ExDS9100 product for Web 2.0 and HPC use cases. HP continued its track record of acquiring its partners in the space when it bought Ibrix last July. Yet HP still saw a gap in its scale-out file system portfolio that DataDirect and Lustre fill with this agreement, according to Ed Turkel, manager of business development for SCI.

“Basically, both the X9000 [based on Ibrix] and [the new offering with] DDN are scale-out file systems sold as an appliance model,” Turkel said. But Lustre is geared more toward “the unique demands of HPC users,” in which multiple servers in a cluster read and write to a single file at the same time, requiring very high single-file bandwidth. “The X9000 is more general purpose, with scalable aggregate bandwidth” rather than high single-file performance.

DDN’s VP of marketing Jeff Denworth said the two vendors have “a handful” of joint customers already, but Denworth and Turkel both dismissed the idea that DDN could be HP’s next scale-out acquisition. “If I respond to that question in any fashion, I’m probably going to get my hand slapped, but it’s certainly not the purpose of this announcement,” Turkel said. However, this product will replace a previous offering HP launched in 2006, also based on Lustre, called the Scalable File Share (SFS).

DDN is now partnered for storage with every large HPC OEM vendor there is — it has previously announced reseller and OEM relationships with IBM, Dell and SGI. “This sounds similar to the arrangement that DDN has with IBM, Dell and SGI to provide a turnkey solution to certain niche customers, more likely aligned with the HP server group than the storage group,” wrote StorageIO founder and analyst Greg Schulz in an email to Storage Soup.

May 21, 2010  12:06 PM

05-20-2010 Storage Headlines

Beth Pariseau

(0:26) EMC MozyPro 2.0 adds local backup, remote data backup performance improvements

(2:50) S3 adds Tier 2 for 10 cents per GB

(4:47) Hitachi Data Systems tweaks IT Operations Analyzer resource monitoring application

(7:06) LSI releases 6 Gbps SAS external storage; and other data storage news

(8:32) HP resells DataDirect Networks’ HPC storage system; other data storage news


May 20, 2010  4:21 PM

S3 adds Tier 2 for 10 cents per GB

Beth Pariseau

Amazon Web Services today added a new offering for its Simple Storage Service (S3) called Reduced Redundancy Storage (RRS). RRS offers users the ability to choose fewer “hops” of object replication among Amazon’s facilities for a lower cost per gigabyte. With RRS, objects would survive one complete data center failure, but wouldn’t be replicated enough times to survive two concurrent data center failures. It’s like RAID 6 vs. RAID 5 storage tiering writ large.
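
For developers already writing to S3, RRS is exposed as a per-object storage class rather than a separate service. Here is a minimal sketch of what opting in looks like with the boto3 SDK; the bucket and key names are hypothetical, and on the raw REST API the same choice is expressed as an x-amz-storage-class: REDUCED_REDUNDANCY header on the PUT:

    # Hypothetical sketch: store an object under S3's Reduced Redundancy Storage
    # class instead of the default. Bucket and key names are made up.
    import boto3

    s3 = boto3.client("s3")

    s3.put_object(
        Bucket="example-bucket",            # hypothetical bucket
        Key="backups/archive-2010-05.tar",  # hypothetical key
        Body=open("archive-2010-05.tar", "rb"),
        StorageClass="REDUCED_REDUNDANCY",  # opt into RRS; omit for standard S3
    )

    # The storage class of an existing object is visible via a HEAD request.
    head = s3.head_object(Bucket="example-bucket", Key="backups/archive-2010-05.tar")
    print(head.get("StorageClass", "STANDARD"))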

Some users like the CDN capabilities Amazon offers with S3, and Amazon officials say those capabilities will still be offered with RRS, claiming no difference in performance between RRS and S3. However, the cloud data storage vendors that have introduced gateway and caching devices for S3 will have to update their support to offer users the option of RRS on the back end. I’m sure we can anticipate a flurry of announcements from companies such as Nasuni, StorSimple and TwinStrata in the coming months (ETA: at least where Nasuni is concerned, I stand corrected…).

Ten cents per GB is already raising eyebrows, but that’s actually just the starting price for RRS. According to an emailed statement from Amazon S3 general manager Alyssa Henry, “Base pricing for Reduced Redundancy Storage covers the first 50 TB of RRS storage in a month.  This tier is charged at a price of $0.10 per GB per month.  As customers increase their storage, the price declines to as low as $0.037 per GB per month for customers with more than 5 petabytes of RRS storage.”
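
For readers doing the budget math, here is a minimal sketch of how a graduated schedule like this gets applied. The calculator mechanics are generic; only the two rates quoted above are real, and Amazon's intermediate price bands are not reproduced here:

    # Sketch of a graduated (tiered) monthly pricing calculation. Only the
    # first band and the >5 PB rate come from Amazon's statement; its
    # intermediate bands are deliberately not reproduced.
    TB = 1024  # GB per TB

    RRS_BANDS = [
        (50 * TB,      0.100),  # first 50 TB per month, per Amazon's statement
        # ... Amazon's intermediate bands would go here (not reproduced) ...
        (float("inf"), 0.037),  # rate quoted for customers beyond 5 PB of RRS
    ]

    def monthly_cost(stored_gb: float, bands=RRS_BANDS) -> float:
        """Bill each successive band of capacity at its own rate and sum."""
        cost, remaining = 0.0, stored_gb
        for band_size, rate in bands:
            used = min(remaining, band_size)
            cost += used * rate
            remaining -= used
            if remaining <= 0:
                break
        return cost

    # 10 TB falls entirely inside the published first band, so this figure
    # follows directly from the quoted $0.10/GB rate.
    print(f"10 TB of RRS ~= ${monthly_cost(10 * TB):,.2f}/month")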

Henry was mum on whether Amazon has any more gradations of storage tiering up its sleeve, saying, “The RRS offering was the result of feedback from customers who, for their particular use cases, did not require the level of durability that Amazon S3 provides today.  We’ll continue to listen to feedback from our customers on what’s important to them in terms of future functionality but have no other announcements today.”

S3 customers we’ve gotten in touch with so far seem intrigued by the new offering. Stay tuned for a followup in the coming days about reaction to this announcement in the market.


May 19, 2010  2:46 PM

Coraid picks up ZFS NAS trail

Dave Raffo

Coraid today added a ZFS-based NAS to its platform of Ethernet SANs.

Coraid’s base product is a non-iSCSI IP SAN called EtherDrive based on ATA over Ethernet (AOE), but the vendor has been looking to expand its product line since closing a $10 million funding round and hiring Kevin Brown as CEO in January.

The new EtherDrive Z-Series NAS includes two models. The Z2000 has four cores, 32 GB of RAM and either eight Gigabit Ethernet or four 10-Gigabit Ethernet ports. The Z3000 has eight cores, 48 GB of RAM, level 2 SSD cache, and either eight GigE or four 10-GigE ports.

Coraid relies on ZFS for features such as inline deduplication, replication, unlimited snapshots and automatic tiering.
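
Those features come straight from open-source ZFS rather than anything Coraid-specific, so the underlying plumbing looks like standard ZFS administration. A rough sketch, assuming shell access to a ZFS host and a hypothetical pool named "tank" (this is not Coraid's management interface, just the commands the features map to; the Z3000's SSD cache presumably corresponds to a ZFS L2ARC device):

    # Hypothetical sketch of the ZFS plumbing behind the features above,
    # run against a made-up pool named "tank" on a host with ZFS installed.
    import subprocess

    def run(*args):
        """Run a zfs/zpool command and raise if it fails."""
        subprocess.run(list(args), check=True)

    # Inline deduplication is a per-dataset property.
    run("zfs", "set", "dedup=on", "tank/vols")

    # "Unlimited" snapshots are cheap, named point-in-time copies.
    run("zfs", "snapshot", "tank/vols@nightly-2010-05-19")

    # Replication is a snapshot stream sent to another host.
    run("sh", "-c",
        "zfs send tank/vols@nightly-2010-05-19 | ssh backup-host zfs receive tank/vols-replica")

    # SSD read caching is an L2ARC device added to the pool.
    run("zpool", "add", "tank", "cache", "/dev/sdb")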

The Z-Series replaces Coraid’s Linux-based CLN NAS platform. “ZFS is a better fit for our ECODrive systems,” said Carl Wright, Coraid’s VP of sales and product management. “We’ve had a lot of requests from our customers for open-source ZFS systems.”

Wright described the Z-Series as a scale-out architecture because “as customers need capacity, they add EtherDrive data blocks on back.” The EtherDrive SAN and NAS systems can be managed from the same interface, he said.

Wright says the Z-Series uses the same Intel X25-E SSDs as in the EtherDrive SRX SAN platform it launched in March, but the SSDs serve as cache only for the NAS appliance (read cache is standard, and write cache is optional).

Compellent last month launched a ZFS-based NAS option for its Storage Center SAN system. Wright says the big difference between the Coraid and Compellent NAS offerings is price. He says Coraid’s Z-Series is priced at about $1,000 per TB, while Compellent’s starting price is $84,000 for 8.7 TB for new customers and $36,000 for its current SAN customers.
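
A quick back-of-the-envelope check on those numbers, using only the figures quoted above (and assuming the $36,000 existing-customer price covers the same 8.7 TB configuration):

    # Rough $/TB comparison from the figures quoted in the post.
    coraid_per_tb = 1_000                  # Coraid Z-Series, as quoted

    compellent_new = 84_000 / 8.7          # new-customer bundle, 8.7 TB
    compellent_existing = 36_000 / 8.7     # existing-SAN-customer price, assumed 8.7 TB

    print(f"Coraid:              ~${coraid_per_tb:,.0f}/TB")
    print(f"Compellent (new):    ~${compellent_new:,.0f}/TB")
    print(f"Compellent (add-on): ~${compellent_existing:,.0f}/TB")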


May 17, 2010  8:19 PM

Double-Take to be acquired by Vision Solutions

Beth Pariseau

Double-Take Software’s board of directors has agreed to sell the company to Vision Solutions in a deal valued at $242 million, pending approval of Double-Take’s shareholders. Vision Solutions is a portfolio company of Thoma Bravo, LLC, a private equity firm.

Last month Double-Take disclosed it had received indications of acquisition interest, but did not name any suitors. After issuing a press release saying “The Double-Take board of directors unanimously approved the agreement and has recommended the approval of the transaction to Double-Take’s stockholders,” Double-Take officials declined further comment today.

Vision Solutions specializes in disaster recovery and HA software for IBM System i and AIX servers. Vision Solutions previously had an OEM agreement to rebrand Double-Take’s software when it needed to support x86 Windows or Linux servers. That relationship was subsequently changed to a reseller agreement that remains ongoing.

Industry observers see this deal as an exit strategy for Double-Take’s board, after the first calendar quarter of 2010 finished “weaker than expected” for Double-Take. Said one source familiar with the company and the deal, speaking on condition of anonymity: “Double-Take’s magic sauce was that it could make your Exchange on a server in, say, Chicago run just like it would in Boston. But VMware came along and said, ‘Stick it in a VM, it’s the same thing, and you don’t have to install agents or worry about third-party software.’

“The folks at Double-Take saw it coming, but they couldn’t jump out of the way of that train fast enough.”

It wasn’t just VMware eating away at Double-Take’s market; other backup and DR tools also began to crop up and undercut Double-Take’s offerings on price, our source said.

With the deal not expected to close until after July, it’s officially “business as usual” for Double-Take customers. After the deal closes, Vision Solutions is expected to continue supporting Double-Take’s existing customers. “You have $90 million cash on hand and $40 million a year in software maintenance business — Thoma Bravo has every reason to keep existing customers happy,” said the insider.


May 17, 2010  7:28 PM

More photos from EMC World

Beth Pariseau

Should’ve checked the phone again before hitting ‘publish’ on my EMC World Reporter’s Notebook — there were a few more shots from the show last week I’d overlooked –

Another shot of the main keynote hall in action — this time with dramatic lighting!

The cloud messaging was inescapable anywhere within a mile of the show.

All aboard the Virtual Storage Zone.

Salesy / bizdev types wheeling and dealing in the Westin lobby adjacent to the conference.

Fenway Park stilt walker on the show floor.

Burned out laptop being displayed in the Kroll-OnTrack booth. They claim to have recovered the data from the hard drive that was inside this mess.

Not sure what playing with Matchbox cars has to do with EMC storage, but attendees seemed to enjoy it.

The Centera Virtual Archive.

Clariion man pins. Not pictured: the tiny blinking LEDs on each pin.

The jury’s still out about enterprises’ embrace of cloud storage, but it’s not for lack of slick messaging.

…the private cloud was still haunting Boston this weekend — spotted this cab while I was out in the city Sunday.


May 14, 2010  6:59 PM

05-13-2010 Storage Headlines

Beth Pariseau

(0:26) EMC releases VPlex “active-active” storage

(1:58) EMC previews FAST 2, block-level compression, common management console for Clariion and Celerra

(3:44) Data Domain provides Boost in data dedupe performance

(5:38) Quantum releases two new data deduplication appliances for SMBs, remote offices

(7:33) Caringo CAStor adds ‘Darkive’ spin down, scores Dell OEM deal


May 14, 2010  3:03 PM

Quantum’s survived, now seeks to thrive

Dave Raffo

When EMC’s Data Domain took the first step towards global deduplication with its Global Deduplication Array (GDA) last month, it left Quantum as the only major vendor of disk deduplication backup targets lacking the ability to cluster nodes. And Quantum isn’t saying much about when and if that capability is coming.

During Quantum’s earnings call Thursday evening, CEO Rick Belluzzo said there would be further enhancements to the company’s deduplication in the wake of a rollout of midrange and SMB DXi disk systems over the past six months. When I spoke to Belluzzo after the call he stopped short of addressing global dedupe, except to say Quantum won’t be following Data Domain’s path. Quantum apparently sees its StorNext file system as a piece of its strategy to scale its backup targets.

“We will be saying more about our roadmap over time,” Belluzzo said. “We also have the StorNext platform to build from. I would expect our strategy to differ from Data Domain in terms of their approach to global deduplication. We have a scalable platform in StorNext, and it’s a better platform to deliver a scalable solution. And Data Domain is still pretty limited [for global deduplication].”

Data Domain’s GDA clusters two nodes at this stage, and requires Symantec’s OpenStorage (OST) API and either Symantec NetBackup or Backup Exec to control the placement of data across multiple controllers.

Analyst Greg Schulz of StorageIO says customers who need global dedupe are mostly large enterprises who may already be using Quantum tape libraries and may be willing to wait for the global dedupe.

“Quantum will have to evolve to global dedupe, it’s part of scaling,” he said. “If you have hundreds of terabytes to petabytes of data, you need more robust deduplication going forward. Quantum has more time because people in that category are still using tape. Quantum has to get there, but it can walk to that market and get it right opposed to others who have to sprint there.”

With or without global dedupe, Belluzzo says he expects Quantum to make a big push in disk backup this year, and it has a long way to go to make up ground on Data Domain. Quantum reported $22.9 million of revenue from its disk (DXi dedupe platform) and software (StorNext) last quarter, down from $24.2 million in the same quarter last year.

“It is still very early in the evolution of this technology,” he said. “We believe the deduplication market is growing rapidly. There have been reports on how many people have implemented it and it’s a very low amount. So we believe that the market definitely supports pretty rapid growth, and to be frank, our base is pretty small and we really feel like that we have to be focused on that kind of rapid growth.”

After losing its OEM relationship with EMC following EMC’s acquisition of Data Domain last summer, Quantum has been building up its own channel for branded products. “It’s been a year of working through transitions to strengthen the company,” Belluzzo said. Now he says Quantum is prepared to grow.

Quantum executives say the new dedupe OEM partner they announced in January – believed to be Fujitsu – is shipping product with DXi software but there are no new OEM deals imminent as the vendor concentrates on its branded products.


May 13, 2010  10:12 PM

EMC World Reporter’s Notebook and Photos

Beth Pariseau

This attendee (actually one of the Universal Studios performers who would play the Blues Brothers on the show floor) is wearing one of the blue-horned Brumhilde helmets that were given out at the welcome reception / concert. I never found out the significance of the helmets, though obviously blue corresponds with the “Beat Your Backup Blues” slogan…

VMax to add native VPlex capabilities
According to an EMC whitepaper on the architecture of the VPlex storage virtualization product it launched this week, “VPlex Local supports local federation today. EMC Symmetrix VMax hardware also adds local federation capabilities natively to the array later in 2010 and will work independently of the VPlex family.” One observer’s response: “Why buy [a separate] VPlex?”

***
More shots from the welcome reception and Counting Crows concert:

***

EMC keynote speaker previews future infrastructure management offerings
During a keynote by EMC EVP/COO Pat Gelsinger on Monday, EMC VMware czar Chad Sakac previewed some new “service oriented” features of the Ionix infrastructure management software product line, including the next Ionix Unified Infrastructure Manager (2.0) and Project Redwood. Redwood is a user self-service portal planned as a future capability of VMware. There’s talk that Redwood will also be integrated with EMC’s VPlex to give access to data across geographic distances. UIM 2.0, meanwhile, will add features that Sakac said will make internal IT run more like a service, including multi-tenancy, chargeback and SLA-based provisioning.

Pat Gelsinger’s keynote on the main stage Monday afternoon.

FLARE/DART and midrange array unification – connecting the dots
EMC ended months of speculation about the convergence of its Clariion and Celerra midrange disk array products with an announcement of a unified management console for the two offerings, as well as disclosures by executives that further convergence is on its way. That convergence includes common replication tools and a more consolidated hardware footprint. Today, while Celerra’s gateway can front a Clariion array, the two engines that make a Clariion a Clariion (FLARE) and a Celerra a Celerra (DART) remain separate entities. If further unification is to take place, there are three options: 1) mash the two code bases together into one; 2) run them alongside one another using server virtualization within a single storage server; or 3) run them alongside one another on separate cores of a multicore processor.

While EMC has finally copped to the fact that it is condensing its platforms, officials from the C-suite on down continue to insist that they will make either standalone offering available to people who want it (in other words, no plan to end-of-life one or the other).

Speculation about plans to mash together code, meanwhile, seems to have come from NetApp’s competitive analysis, which concluded the two codebases will be brought together under something called the Common Block File System (CBFS), an EMC answer to WAFL. CBFS already exists, handling thin provisioning and block compression for both arrays, and it’s possible its role could expand.

Mark Sorenson, senior vice president of EMC’s Unified Storage Division, declined to say exactly how the consolidation might work, but seemed to rule out a combined codebase when, after being presented with the three options for unification, he answered (I thought rather pointedly) that server virtualization and multicore processors are “good, new, enabling technologies for consolidation.” Mashing the code together under CBFS also wouldn’t necessarily make sense if EMC plans to continue to sell one-off standalone offerings depending on what users ask for.

Other insiders hinted that the multicore processor route is the one that EMC will take, saying “stuff that’s just coming to maturation will come to the fore.” This would seem to fit with Intel’s continued development of multicore processor chip sets, as well as Gelsinger’s prior experience as an executive at Intel.

EMC CEO Joe Tucci’s opening keynote was piped into the press room.

Centera, Atmos, V-Max – the next place for product convergence?
With Celerra/Clariion coming together, a thought struck me as I stood on the show floor at EMC World Wednesday, surveying the happily humming racks whirring away at several separate booths. There was a Centera Virtual Archive set up, with one box labeled “London” and the other local, to show how Centera data can be federated over distance. In another booth, a Symmetrix V-Max demonstrated how data can be federated over multiple arrays in a mesh architecture for scalability. And finally, there was a tutorial being given on VPlex, a new virtualization product that can be used to virtualize multiple modular arrays and…federate data over distance. To complete the circle, see the item above about V-Max adding native VPlex capabilities. Not shown: Atmos, which is also a scale-out system built on commodity hardware with an object interface that can be used to federate unstructured data…do you see where I’m going with this?

I talked with one of the booth workers about the Centera a little bit. I asked her what the real difference is between Centera, especially the Centera Virtual Archive, and Atmos specifically. She mentioned the number of ISVs that have already written to Centera, the fact that it uses MD5 hashing to demonstrate objects have not been modified, etc. But is there anything about those different features that couldn’t be ported to one of the other boxes? Particularly since Atmos and Centera are object-based already?
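
The MD5 point is the standard integrity trick for content-addressed archives: keep a digest with each object at ingest, recompute it on read, and you can demonstrate the object hasn’t changed. A generic sketch of the technique (not Centera’s actual API):

    # Sketch of digest-based integrity checking as described above: store an
    # object's MD5 alongside it, then verify on read that the content still
    # matches. This illustrates the general technique, not Centera's API.
    import hashlib

    def md5_of(data: bytes) -> str:
        return hashlib.md5(data).hexdigest()

    # At ingest time, the archive records the digest with the object.
    original = b"quarterly-results.pdf contents ..."
    stored_digest = md5_of(original)

    # At read (or audit) time, recompute and compare.
    retrieved = original  # in practice, read back from the archive
    assert md5_of(retrieved) == stored_digest, "object was modified or corrupted"
    print("object verified unmodified:", stored_digest)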

Gelsinger briefs press and analysts.

Experts hash out FAST Cache, potential future features
I already included some of the discussion between Gestalt IT’s Stephen Foskett and DeepStorage.net’s Howard Marks about FAST 2 in my piece about updates to Clariion and Celerra but the two self-described geeks also chewed over FAST Cache pretty thoroughly Tuesday.

Foskett and Marks both pumped EMC’s Sorenson for details on the use cases of FAST 2 vs. FAST Cache. Sorenson said FAST is meant to place data on tiers of storage according to historical performance characteristics, something Foskett described as “driving using the rear-view mirror.” FAST Cache would be for “bursty” unanticipated performance spikes.

Marks said he’d like to see more user control and predictive features added to FAST Cache, so that if users know which data will become hot at a later date, say, the end of a quarter, they can schedule a move up to cache and back out again when the burst is over. This also might allow users to create an “exclude list” of data that shouldn’t be put in cache regardless of access patterns. Sorenson said it was something EMC was considering exposing to ISVs through an API set, ISVs like, say, VMware.

“But,” Foskett pointed out, “previous attempts at user-tunable caches were colossal failures. I can see some use cases, but it’s probably better to turn it on and leave it alone.” But some awareness within the cache of files as well as blocks – the better to put file metadata in cache, for example – was something Foskett said EMC should consider, and something Sorenson also said was possible down the road. “That dream of file-level Flash cache is what Sun has with ZFS and Avere has also described already,” Foskett pointed out.
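
For what it’s worth, the user control Marks is asking for would amount to something like the policy sketched below. Everything here is hypothetical; no such FAST Cache API exists, and the sketch only illustrates the idea of scheduled promotion plus an exclude list:

    # Purely hypothetical sketch of the user-controlled cache policy Marks
    # describes: promote known-hot LUNs ahead of a predictable burst, demote
    # them afterward, and never cache anything on an exclude list.
    from datetime import date

    EXCLUDE = {"lun-archive-logs", "lun-scratch"}       # never cache these

    SCHEDULED_BURSTS = [
        # (lun, promote_on, demote_on), e.g. quarter-end financials
        ("lun-finance-db", date(2010, 6, 25), date(2010, 7, 5)),
    ]

    def cache_decision(lun: str, today: date) -> str:
        if lun in EXCLUDE:
            return "bypass"                             # exclude list wins
        for name, start, end in SCHEDULED_BURSTS:
            if lun == name and start <= today <= end:
                return "promote"                        # pinned for the burst window
        return "default"                                # fall back to access-pattern logic

    print(cache_decision("lun-finance-db", date(2010, 6, 30)))    # -> promote
    print(cache_decision("lun-archive-logs", date(2010, 6, 30)))  # -> bypass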

The Bloggers’ Lounge, seen from the main keynote hall escalator.

Compellent fires back at VPlex; NetApp still snarking
EMC’s VPlex and midrange disk array announcements this week have caused a stir among competitors who have begun trying to pick apart EMC’s news.

I would have expected Compellent to start in right away on FAST 2 given its own sub-LUN tiered storage features, but Compellent reps said they haven’t dug into FAST 2’s inner workings yet to offer a detailed comparison.

In the meantime, Compellent issued a statement pitting its Live Volume data migration feature against VPlex:

There are a few similarities between VPLEX and Compellent’s Live Volume technology. Both solutions tackle the problem of non-disruptive volume migration, which is useful in disaster avoidance, load balancing or maintenance situations. That’s where the similarities end.

The EMC solution requires multiple high-end VPLEX hardware engines and many high bandwidth, low latency network connections. Compellent’s focus is on integrating a software solution that scales. EMC has a significant focus on high-end bandwidth, and all that comes with a hefty price tag (about $77k for a local-only, interconnect system vs. the $5k license for Live Volume).

VPLEX is designed as an add-on set of rack-mounted servers, which requires a change in the data center topology and the addition of a new in-between configuration. Compellent’s Live Volume is a software solution that fits in with Compellent’s Fluid Data architecture, meaning that it supports any Storage Center configuration or network connections, and can leverage existing network connections without impact on write performance.

Meanwhile, in case you’d forgotten how bitter the rivalry remains between NetApp and EMC, look no further than NetApp’s response to EMC’s midrange storage announcement:

Yesterday’s announcements demonstrate that EMC has finally realized the importance of storage efficiency. NetApp is the undisputed industry leader in storage – from measurement to capabilities, from efficiency guarantees to reporting and optimization – across all systems, management products and workloads such as virtualization and database environments. EMC is clearly following our lead, but NetApp, with its V-Series open systems controller able to optimize an EMC system, continues to deliver greater value and storage efficiency than EMC can deliver natively on its own systems.


May 13, 2010  5:23 PM

VCE flips the switch on Cisco Fibre Channel sales

Dave Raffo

The VCE alliance between EMC, Cisco and VMware seems to have made Cisco’s MDS Fibre Channel switches more popular in data centers.

During Cisco’s earnings call Wednesday night, CEO John Chambers said storage product revenue grew 100% year-over-year to $140 million last quarter. One Wall Street analyst says Cisco clearly won market share from its rival Brocade in Fibre Channel switching, and he credits the VCE partnership for that.

“How is it possible that Cisco’s MDS 9513 gained share despite an oversubscribed backplane and generally being considered an inferior product compared to Brocade’s DCX?” Wedbush Securities analyst Kaushik Roy wrote today in a research note. “Our checks indicate that Cisco is seeing a significant uptick in the sales of MDS and Nexus products, largely due to its VCE (VMware-Cisco-EMC) partnership. Customers who in the past have avoided buying Cisco’s MDS products are now willing to accept Cisco’s MDS due to EMC’s blessing as part of the VCE’s Vblock bundle.”

Comments from Chambers on Cisco’s call appear to agree with Roy’s assessment, except for the part about the oversubscribed backplane and the DCX’s superiority.

“Our architectural sales approach is rapidly not only gaining traction but accelerating,” Chambers said. “This is different than the standalone product approach of our competition. … Vblock sales pull Nexus, UCS, VMware and EMC sales. Key takeaway: Cisco’s momentum in the data center is rapidly accelerating.”

Roy also thinks budding Fibre Channel over Ethernet (FCoE) adoption is helping Cisco. He estimates that one-third of Nexus switches are used for FCoE. That’s mostly on the server rather than the storage side. He also believes 10-Gigabit Ethernet is pushing many enterprises to file-based rather than block-based storage.

According to Roy, Brocade will announce new 64-port cards for its DCX switches and next-generation FCoE converged network adapters at its June 9 Technology Day, and is working on a 10-GigE CNA mezzanine card and blade server switch with 10-GigE and 8 Gbps Fibre Channel ports.

He also says Brocade is working on 16 Gbps Fibre Channel switches for 2011 release, and Cisco will upgrade its MDS 9513 director switch with an improved backplane to eliminate the need for oversubscription later this year.

