Saying it’s looking to appeal to larger shops with its online data backup service, Iron Mountain Digital released version 7.0 of its LiveVault SaaS product today with new support for multithreaded applications and larger data sets.
Previously, LiveVault’s “sweet spot” was protecting servers up to 1 TB, according to Jackie Su, senior product marketing manager for Iron Mountain Digital. The new version will protect up to 7 TB thanks to beefier processors and memory in the LiveVault TurboRestore on-site appliance, and the Data Shuttle option becoming a built-in feature. Previously, if users wanted to transport large data sets on portable hard drives, it was done only on request in special circumstances. The new TurboRestore appliance can now hold up to 24 TB of disk, and has a 64-bit memory cache.
Iron Mountain says it has seen growing adoption of cloud data protection among the midsized enterprises in its LiveVault customer base, and cites that shift as the reason for this release's scalability updates. The company did not provide a specific number of midsized customers, the percentage of growth in those customers compared with last year, or an average deal size, though chief marketing officer TM Ravi said deal sizes are growing, which "indicates we're covering larger and larger environments."
Online data backup so far has been among the most popular uses of cloud data storage, particularly among enterprise users, but according to Storage Magazine’s most recent storage purchasing survey, “it’s still more hype than happening”.
Hewlett-Packard Co. added another scale-out NAS system to its portfolio yesterday when it announced DataDirect Networks (DDN)’s S2A9900 disk array will be bundled with the Lustre File System resold by the Scalable Computing and Infrastructure (SCI) group within HP.
HP began collecting scale-out file systems when it acquired PolyServe in 2007, then saw some false starts with its ExDS9100 product for Web 2.0 and HPC use cases. HP continued its track record of acquiring its partners in the space with the purchase of Ibrix last July. Yet HP still found a gap in its scale-out file system portfolio, one this agreement with DataDirect and Lustre is meant to fill, according to Ed Turkel, manager of business development for SCI.
“Basically, both the X9000 [based on Ibrix] and [the new offering with] DDN are scale-out file systems sold as an appliance model,” Turkel said. But Lustre is geared more toward “the unique demands of HPC users” in which multiple servers in a cluster read and write to a single file simultaneously, requiring very high single-file bandwidth. “The X9000 is more general purpose, with scalable aggregate bandwidth” rather than high single-file performance.
DDN’s VP of marketing Jeff Denworth said the two vendors have “a handful” of joint customers already, but Denworth and Turkel both dismissed the idea that DDN could be HP’s next scale-out acquisition. “If I respond to that question in any fashion, I’m probably going to get my hand slapped, but it’s certainly not the purpose of this announcement,” Turkel said. However, this product will replace a previous offering HP launched in 2006, also based on Lustre, called the Scalable File Share (SFS).
DDN is now partnered for storage with every large HPC OEM vendor there is, having previously announced reseller and OEM relationships with IBM, Dell and SGI. “This sounds similar to the arrangement that DDN has with IBM, Dell and SGI to provide a turnkey solution to certain niche customers, more likely aligned with the HP server group than the storage group,” wrote StorageIO founder and analyst Greg Schulz in an email to Storage Soup.
Amazon Web Services today added a new offering for its Simple Storage Service (S3) called Reduced Redundancy Storage (RRS). RRS offers users the ability to choose fewer “hops” of object replication among Amazon’s facilities for a lower cost per gigabyte. With RRS, objects would survive one complete data center failure, but wouldn’t be replicated enough times to survive two concurrent data center failures. It’s like RAID 6 vs. RAID 5 storage tiering writ large.
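Amazon doesn't disclose exact replica counts, so the facility counts below are purely illustrative, but the durability trade-off works like simple combinatorics: an object with copies in k distinct facilities survives any k-1 concurrent facility failures, and no more. A minimal sketch, assuming (hypothetically) three facilities for standard S3 and two for RRS:

```python
from itertools import combinations

def survives(facilities_with_copy, failed_facilities):
    """An object survives if at least one facility holding a copy is still up."""
    return any(f not in failed_facilities for f in facilities_with_copy)

# Hypothetical layout: standard S3 keeps copies in 3 facilities, RRS in 2.
standard = {"dc-a", "dc-b", "dc-c"}
reduced = {"dc-a", "dc-b"}

# Any single data center failure: both tiers survive.
assert all(survives(standard, {f}) for f in standard)
assert all(survives(reduced, {f}) for f in reduced)

# Two concurrent failures: standard still survives, RRS may not.
assert all(survives(standard, set(c)) for c in combinations(standard, 2))
assert not all(survives(reduced, set(c)) for c in combinations(reduced, 2))
```

That last pair of assertions is the RAID 6 vs. RAID 5 analogy in miniature: one extra copy buys tolerance for exactly one more concurrent failure.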
Some users like the CDN capabilities Amazon offers with S3, and Amazon officials say those capabilities will still be offered with RRS, claiming no difference in performance between RRS and S3. However, the cloud data storage vendors that have introduced gateway and caching devices for S3 will have to update their support to offer users the option of RRS on the back end. I’m sure we can anticipate a flurry of announcements from companies such as Nasuni, StorSimple and TwinStrata in the coming months (ETA: at least where Nasuni is concerned, I stand corrected…).
Ten cents per GB is already raising eyebrows, but that’s actually just the starting price for RRS. According to an emailed statement from Amazon S3 general manager Alyssa Henry, “Base pricing for Reduced Redundancy Storage covers the first 50 TB of RRS storage in a month. This tier is charged at a price of $0.10 per GB per month. As customers increase their storage, the price declines to as low as $0.037 per GB per month for customers with more than 5 petabytes of RRS storage.”
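Working only from the two price points Amazon disclosed (the tiers in between weren't published), the arithmetic on the base tier looks like this:

```python
def rrs_base_tier_cost(gb_stored):
    """Monthly RRS cost within the disclosed base tier: $0.10/GB for the
    first 50 TB. Amazon didn't disclose the intermediate tiers between
    50 TB and 5 PB, so this sketch only handles the base tier."""
    base_tier_gb = 50 * 1024  # 50 TB expressed in GB
    if gb_stored > base_tier_gb:
        raise ValueError("pricing above 50 TB declines in undisclosed tiers")
    return gb_stored * 0.10

# A full base tier (50 TB) at $0.10/GB:
print(rrs_base_tier_cost(50 * 1024))  # 5120.0, i.e. $5,120/month

# For comparison, the same 50 TB at the >5 PB rate of $0.037/GB:
print(50 * 1024 * 0.037)  # roughly $1,894/month
```

In other words, the volume discount bottoms out at well under half the headline price.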
Henry was mum on whether Amazon has any more gradations of storage tiering up its sleeve, saying, “The RRS offering was the result of feedback from customers who, for their particular use cases, did not require the level of durability that Amazon S3 provides today. We’ll continue to listen to feedback from our customers on what’s important to them in terms of future functionality but have no other announcements today.”
S3 customers we’ve gotten in touch with so far seem intrigued by the new offering. Stay tuned for a followup in the coming days about reaction to this announcement in the market.
Coraid today added a ZFS-based NAS to its platform of Ethernet SANs.
Coraid’s base product is a non-iSCSI IP SAN called EtherDrive based on ATA over Ethernet (AoE), but the vendor has been looking to expand its product line since closing a $10 million funding round and hiring Kevin Brown as CEO in January.
The new EtherDrive Z-Series NAS includes two models. The Z2000 has four cores, 32 GB of RAM and either eight Gigabit Ethernet or four 10-Gigabit Ethernet ports. The Z3000 has eight cores, 48 GB of RAM, level 2 SSD cache, and either eight GigE or four 10-GigE ports.
Coraid relies on ZFS for features such as inline deduplication, replication, unlimited snapshots and automatic tiering.
The Z-Series replaces Coraid’s Linux-based CLN NAS platform. “ZFS is a better fit for our ECODrive systems,” said Carl Wright, Coraid’s VP of sales and product management. “We’ve had a lot of requests from our customers for open-source ZFS systems.”
Wright described the Z-Series as a scale-out architecture because “as customers need capacity, they add EtherDrive data blocks on back.” The EtherDrive SAN and NAS systems can be managed from the same interface, he said.
Wright says the Z-Series uses the same Intel X25-E SSDs as in the EtherDrive SRX SAN platform it launched in March, but the SSDs serve as cache only for the NAS appliance (read cache is standard, and write cache is optional).
Compellent last month launched a ZFS-based NAS option for its Storage Center SAN system. Wright says the big difference between the Coraid and Compellent NAS offerings is price: Coraid’s Z-Series is priced at about $1,000 per TB, while Compellent’s starting price is $84,000 for 8.7 TB for new customers and $36,000 for its current SAN customers.
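Taking the quoted numbers at face value, a quick back-of-the-envelope comparison of per-terabyte pricing:

```python
coraid_per_tb = 1000.0  # Coraid's quoted figure: about $1,000/TB

# Compellent figures as quoted: $84,000 for 8.7 TB for new customers,
# $36,000 for existing SAN customers.
compellent_new_per_tb = 84000 / 8.7
compellent_existing_per_tb = 36000 / 8.7

print(round(compellent_new_per_tb))       # 9655 -> roughly $9,655/TB
print(round(compellent_existing_per_tb))  # 4138 -> roughly $4,138/TB
```

So even on the discounted existing-customer price, the quoted Compellent figure works out to about four times Coraid's per-TB number.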
Double-Take Software has agreed to be acquired by Vision Solutions in a deal valued at $242 million, pending the approval of the company’s shareholders. Vision Solutions is a portfolio company of Thoma Bravo, LLC, a private equity firm.
Last month Double-Take disclosed it had received indications of acquisition interest, but did not name any suitors. After issuing a press release saying “The Double-Take board of directors unanimously approved the agreement and has recommended the approval of the transaction to Double-Take’s stockholders,” Double-Take officials declined further comment today.
Vision Solutions specializes in disaster recovery and HA software for IBM System i and AIX servers. Vision Solutions previously had an OEM agreement to rebrand Double-Take’s software when it needed to support x86 Windows or Linux servers. That relationship was subsequently changed to a reseller agreement that remains ongoing.
Industry observers see this deal as an exit strategy for Double-Take’s board, after the first calendar quarter of 2010 finished “weaker than expected” for Double-Take. Said one source familiar with the company and the deal, speaking on condition of anonymity: “Double-Take’s magic sauce was that it could make your Exchange on a server in, say, Chicago run just like it would in Boston. But VMware came along and said, ‘Stick it in a VM, it’s the same thing, and you don’t have to install agents or worry about third-party software.’
“The folks at Double-Take saw it coming, but they couldn’t jump out of the way of that train fast enough.”
It wasn’t just VMware eating away at Double-Take’s market; other backup and DR tools also began to crop up and undercut Double-Take’s offerings on price, our source said.
With the deal not expected to close until after July, it’s officially “business as usual” for Double-Take customers. After the deal closes, Vision Solutions is expected to continue supporting Double-Take’s existing customers. “You have $90 million cash on hand and $40 million a year in software maintenance business — Thoma Bravo has every reason to keep existing customers happy,” said the insider.
Should’ve checked the phone again before hitting ‘publish’ on my EMC World Reporter’s Notebook — there were a few more shots from the show last week I’d overlooked —
When EMC’s Data Domain took the first step towards global deduplication with its Global Deduplication Array (GDA) last month, it left Quantum as the only major disk data deduplication backup target lacking the ability to cluster nodes. And Quantum isn’t saying much about when and if that capability is coming.
During Quantum’s earnings call Thursday evening, CEO Rick Belluzzo said there would be further enhancements to the company’s deduplication in the wake of a rollout of midrange and SMB DXi disk systems over the past six months. When I spoke to Belluzzo after the call he stopped short of addressing global dedupe, except to say Quantum won’t be following Data Domain’s path. Quantum apparently sees its StorNext file system as a piece of its strategy to scale its backup targets.
“We will be saying more about our roadmap over time,” Belluzzo said. “We also have the StorNext platform to build from. I would expect our strategy to differ from Data Domain in terms of their approach to global deduplication. We have a scalable platform in StorNext, and it’s a better platform to deliver a scalable solution. And Data Domain is still pretty limited [for global deduplication].”
Data Domain’s GDA clusters two nodes at this stage, and requires Symantec’s OpenStorage (OST) API and either Symantec NetBackup or Backup Exec to control the placement of data across multiple controllers.
Analyst Greg Schulz of StorageIO says customers who need global dedupe are mostly large enterprises who may already be using Quantum tape libraries and may be willing to wait for the global dedupe.
“Quantum will have to evolve to global dedupe, it’s part of scaling,” he said. “If you have hundreds of terabytes to petabytes of data, you need more robust deduplication going forward. Quantum has more time because people in that category are still using tape. Quantum has to get there, but it can walk to that market and get it right opposed to others who have to sprint there.”
With or without global dedupe, Belluzzo says he expects Quantum to make a big push in disk backup this year, and it has a long way to go to make up ground on Data Domain. Quantum reported $22.9 million of revenue from its disk (DXi dedupe platform) and software (StorNext) last quarter, down from $24.2 million in the same quarter last year.
“It is still very early in the evolution of this technology,” he said. “We believe the deduplication market is growing rapidly. There have been reports on how many people have implemented it and it’s a very low amount. So we believe that the market definitely supports pretty rapid growth, and to be frank, our base is pretty small and we really feel like that we have to be focused on that kind of rapid growth.”
After losing its OEM relationship with EMC following EMC’s acquisition of Data Domain last summer, Quantum has been building up its own channel for branded products. “It’s been a year of working through transitions to strengthen the company,” Belluzzo said. Now he says Quantum is prepared to grow.
Quantum executives say the new dedupe OEM partner they announced in January – believed to be Fujitsu – is shipping product with DXi software but there are no new OEM deals imminent as the vendor concentrates on its branded products.
This attendee (actually one of the Universal Studios performers who would play the Blues Brothers on the show floor) is wearing one of the blue-horned Brumhilde helmets that were given out at the welcome reception / concert. I never found out the significance of the helmets, though obviously blue corresponds with the “Beat Your Backup Blues” slogan…
VMax to add native VPlex capabilities
According to an EMC whitepaper on the architecture of the VPlex storage virtualization product it launched this week, “VPlex Local supports local federation today. EMC Symmetrix VMax hardware also adds local federation capabilities natively to the array later in 2010 and will work independently of the VPlex family.” One observer’s response: “Why buy [a separate] VPlex?”
More shots from the welcome reception and Counting Crows concert:
EMC keynote speaker previews future infrastructure management offerings
During a keynote by EMC EVP/COO Pat Gelsinger on Monday, EMC VMware czar Chad Sakac previewed some new “service oriented” features of the Ionix infrastructure management software product line, including the next Ionix Unified Infrastructure Manager (2.0) and Project Redwood. Redwood is a user self-service portal planned as a future capability of VMware. There’s talk that Redwood will also be integrated with EMC’s VPlex to give access to data across geographic distance. UIM 2.0, meanwhile, will add features that Sakac said will make internal IT more like a service, including multi-tenancy, chargeback and SLA-based provisioning.
FLARE/DART and midrange array unification – connecting the dots
EMC ended months of speculation about the convergence of its Clariion and Celerra midrange disk array products with an announcement of a unified management console for the two offerings, as well as disclosures by executives that further convergence is on its way. That convergence includes common replication tools and a more consolidated hardware footprint. Today, while Celerra’s gateway can front a Clariion array, the two engines that make a Clariion a Clariion (FLARE) and a Celerra a Celerra (DART) remain separate entities. If further unification is to take place, there are three options: 1) mash the two code bases together into one; 2) run them alongside one another using server virtualization within a single storage server; or 3) run them alongside one another on separate cores of a multicore processor.
While EMC has finally copped to the fact that it is condensing its platforms, officials from the C-suite on down continue to insist that they will make either standalone offering available to people who want it (in other words, no plan to end-of-life one or the other).
Speculation about plans to mash together code, meanwhile, seems to have come from NetApp’s competitive analysis, which concluded the two codebases will be brought together under something called the Common Block File System (CBFS), an EMC answer to WAFL. CBFS already exists, handling thin provisioning and block compression for both arrays, and it’s possible its role could expand.
Mark Sorenson, senior vice president of EMC’s Unified Storage Division, declined to say exactly how the consolidation might work, but seemed to rule out a combined codebase when, after being presented with the three options for unification, he answered (I thought rather pointedly) that server virtualization and multicore processors are “good, new, enabling technologies for consolidation.” Mashing the code together under CBFS also wouldn’t necessarily make sense if EMC plans to continue to sell one-off standalone offerings depending on what users ask for.
Other insiders hinted that the multicore processor route is the one that EMC will take, saying “stuff that’s just coming to maturation will come to the fore.” This would seem to fit with Intel’s continued development of multicore processor chip sets, as well as Gelsinger’s prior experience as an executive at Intel.
Centera, Atmos, V-Max – the next place for product convergence?
With Celerra/Clariion coming together, a thought struck me as I stood on the show floor at EMC World Wednesday, surveying the happily humming racks whirring away at several separate booths. There was a Centera Virtual Archive set up, with one box labeled “London” and the other local, to show how Centera data can be federated over distance. In another booth, a Symmetrix V-Max demonstrated how data can be federated over multiple arrays in a mesh architecture for scalability. And finally, there was a tutorial being given on VPlex, a new virtualization product that can be used to virtualize multiple modular arrays and…federate data over distance. To complete the circle, see the item above about V-Max adding native VPlex capabilities. Not shown: Atmos, which is also a scale-out system built on commodity hardware with an object interface that can be used to federate unstructured…do you see where I’m going with this?
I talked with one of the booth workers about the Centera a little bit. I asked her what the real difference is between Centera, especially the Centera Virtual Archive, and Atmos specifically. She mentioned the number of ISVs that have already written to Centera, the fact that it uses MD5 hashing to demonstrate objects have not been modified, etc. But is there anything about those different features that couldn’t be ported to one of the other boxes? Particularly since Atmos and Centera are object-based already?
Experts hash out FAST Cache, potential future features
I already included some of the discussion between Gestalt IT’s Stephen Foskett and DeepStorage.net’s Howard Marks about FAST 2 in my piece about updates to Clariion and Celerra, but the two self-described geeks also chewed over FAST Cache pretty thoroughly Tuesday.
Foskett and Marks both pumped EMC’s Sorenson for details on the use cases of FAST 2 vs. FAST Cache. Sorenson said FAST is meant to place data on tiers of storage according to historical performance characteristics, something Foskett described as “driving using the rear-view mirror.” FAST Cache would be for “bursty” unanticipated performance spikes.
Marks said he’d like to see more user control and predictive features added to FAST Cache, so that if users know which data will become hot at a later date, say, the end of a quarter, they can schedule a move up to cache and back out again when the burst is over. This also might allow users to create an “exclude list” of data that shouldn’t be put in cache regardless of access patterns. Sorenson said it was something EMC was considering exposing to ISVs through an API set, ISVs like, say, VMware.
“But,” Foskett pointed out, “previous attempts at user-tunable caches were colossal failures. I can see some use cases, but it’s probably better to turn it on and leave it alone.” But some awareness within the cache of files as well as blocks – the better to put file metadata in cache, for example – was something Foskett said EMC should consider, and something Sorenson also said was possible down the road. “That dream of file-level Flash cache is what Sun has with ZFS and Avere has also described already,” Foskett pointed out.
Compellent fires back at VPlex; NetApp still snarking
EMC’s VPlex and midrange disk array announcements this week have caused a stir among competitors who have begun trying to pick apart EMC’s news.
I would have expected Compellent to start in right away on FAST 2 given its own sub-LUN tiered storage features, but Compellent reps said they haven’t dug into FAST 2’s inner workings yet to offer a detailed comparison.
In the meantime, Compellent issued a statement pitting its LiveVolume data migration feature against VPlex:
There are a few similarities between VPLEX and Compellent’s Live Volume technology. Both solutions tackle the problem of non-disruptive volume migration, which is useful in disaster avoidance, load balancing or maintenance situations. That’s where the similarities end.
The EMC solution requires multiple high-end VPLEX hardware engines and many high bandwidth, low latency network connections. Compellent’s focus is on integrating a software solution that scales. EMC has a significant focus on high-end bandwidth, and all that comes with a hefty price tag (about $77k for a local-only, interconnect system vs. the $5k license for Live Volume).
VPLEX is designed as an add-on set of rack-mounted servers, which requires a change in the data center topology and the addition of a new in-between configuration. Compellent’s Live Volume is a software solution that fits in with Compellent’s Fluid Data architecture, meaning that it supports any Storage Center configuration or network connections, and can leverage existing network connections without impact on write performance.
Meanwhile, in case you’d forgotten how bitter the rivalry remains between NetApp and EMC, look no further than NetApp’s response to EMC’s midrange storage announcement:
Yesterday’s announcements demonstrate that EMC has finally realized the importance of storage efficiency. NetApp is the undisputed industry leader in storage – from measurement to capabilities, from efficiency guarantees to reporting and optimization – across all systems, management products and workloads such as virtualization and database environments. EMC is clearly following our lead, but NetApp, with its V-Series open systems controller able to optimize an EMC system, continues to deliver greater value and storage efficiency than EMC can deliver natively on its own systems.