Amazon Web Services today added a new offering for its Simple Storage Service (S3) called Reduced Redundancy Storage (RRS). RRS offers users the ability to choose fewer “hops” of object replication among Amazon’s facilities for a lower cost per gigabyte. With RRS, objects would survive one complete data center failure, but wouldn’t be replicated enough times to survive two concurrent data center failures. It’s like RAID 6 vs. RAID 5 storage tiering writ large.
Some users like the CDN capabilities Amazon offers with S3, and Amazon officials say those capabilities will still be offered with RRS, claiming no difference in performance between RRS and S3. However, the cloud data storage vendors that have introduced gateway and caching devices for S3 will have to update their support to offer users the option of RRS on the back end. I’m sure we can anticipate a flurry of announcements from companies such as Nasuni, StorSimple and TwinStrata in the coming months (ETA: at least where Nasuni is concerned, I stand corrected…).
Ten cents per GB is already raising eyebrows, but that’s actually just the starting price for RRS. According to an emailed statement from Amazon S3 general manager Alyssa Henry, “Base pricing for Reduced Redundancy Storage covers the first 50 TB of RRS storage in a month. This tier is charged at a price of $0.10 per GB per month. As customers increase their storage, the price declines to as low as $0.037 per GB per month for customers with more than 5 petabytes of RRS storage.”
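Tiered pricing like this is computed by charging each slice of storage at its tier's rate. The sketch below uses only the two rates Amazon disclosed ($0.10/GB for the first 50 TB and $0.037/GB beyond 5 PB); the intermediate tier and its rate are hypothetical placeholders, since Amazon hadn't published the full schedule.

```python
# Illustrative tiered-pricing calculator. Only two rates are published:
# $0.10/GB for the first 50 TB and $0.037/GB beyond 5 PB. The middle
# tier below is a hypothetical placeholder, not Amazon's actual schedule.
TIERS = [
    (50 * 1024, 0.10),        # first 50 TB (in GB): published rate
    (5 * 1024 * 1024, 0.07),  # up to 5 PB: hypothetical intermediate rate
    (float("inf"), 0.037),    # beyond 5 PB: published floor rate
]

def monthly_cost(gb: float) -> float:
    """Charge each slice of stored capacity at its tier's per-GB rate."""
    cost, prev_cap = 0.0, 0.0
    for cap, rate in TIERS:
        slice_gb = min(gb, cap) - prev_cap
        if slice_gb <= 0:
            break
        cost += slice_gb * rate
        prev_cap = cap
    return cost

# 10 TB falls entirely in the first tier: 10 * 1024 GB at $0.10/GB
print(round(monthly_cost(10 * 1024), 2))  # 1024.0
```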
Henry was mum on whether Amazon has any more gradations of storage tiering up its sleeve, saying, “The RRS offering was the result of feedback from customers who, for their particular use cases, did not require the level of durability that Amazon S3 provides today. We’ll continue to listen to feedback from our customers on what’s important to them in terms of future functionality but have no other announcements today.”
S3 customers we’ve gotten in touch with so far seem intrigued by the new offering. Stay tuned for a followup in the coming days about reaction to this announcement in the market.
Coraid today added a ZFS-based NAS to its platform of Ethernet SANs.
Coraid’s base product is a non-iSCSI IP SAN called EtherDrive based on ATA over Ethernet (AOE), but the vendor has been looking to expand its product line since closing a $10 million funding round and hiring Kevin Brown as CEO in January.
The new EtherDrive Z-Series NAS includes two models. The Z2000 has four cores, 32 GB of RAM and either eight Gigabit Ethernet or four 10-Gigabit Ethernet ports. The Z3000 has eight cores, 48 GB of RAM, level 2 SSD cache, and either eight GigE or four 10-GigE ports.
Coraid relies on ZFS for features such as inline deduplication, replication, unlimited snapshots and automatic tiering.
The Z-Series replaces Coraid’s Linux-based CLN NAS platform. “ZFS is a better fit for our EtherDrive systems,” said Carl Wright, Coraid’s VP of sales and product management. “We’ve had a lot of requests from our customers for open-source ZFS systems.”
Wright described the Z-Series as a scale-out architecture because “as customers need capacity, they add EtherDrive data blocks on back.” The EtherDrive SAN and NAS systems can be managed from the same interface, he said.
Wright says the Z-Series uses the same Intel X25-E SSDs as in the EtherDrive SRX SAN platform it launched in March, but the SSDs serve as cache only for the NAS appliance (read cache is standard, and write cache is optional).
Compellent last month launched a ZFS-based NAS option for its Storage Center SAN system. Wright says the big difference between the Coraid and Compellent NAS offerings is price: Coraid’s Z-Series is priced at about $1,000 per TB, while Compellent’s starting price is $84,000 for 8.7 TB for new customers and $36,000 for its current SAN customers.
Double-Take Software has agreed to be acquired by Vision Solutions in a deal valued at $242 million, pending approval of the company’s shareholders. Vision Solutions is a portfolio company of Thoma Bravo, LLC, a private equity firm.
Last month Double-Take disclosed it had received indications of acquisition interest, but did not name any suitors. After issuing a press release saying “The Double-Take board of directors unanimously approved the agreement and has recommended the approval of the transaction to Double-Take’s stockholders,” Double-Take officials declined further comment today.
Vision Solutions specializes in disaster recovery and HA software for IBM System i and AIX servers. Vision Solutions previously had an OEM agreement to rebrand Double-Take’s software when it needed to support x86 Windows or Linux servers. That relationship was subsequently changed to a reseller agreement that remains ongoing.
Industry observers see this deal as an exit strategy for Double-Take’s board, after the first calendar quarter of 2010 finished “weaker than expected” for Double-Take. Said one source familiar with the company and the deal, speaking on condition of anonymity: “Double-Take’s magic sauce was that it could make your Exchange on a server in, say, Chicago run just like it would in Boston. But VMware came along and said, ‘Stick it in a VM, it’s the same thing, and you don’t have to install agents or worry about third-party software.’
“The folks at Double-Take saw it coming, but they couldn’t jump out of the way of that train fast enough.”
It wasn’t just VMware eating away at Double-Take’s market; other backup and DR tools also began to crop up and undercut Double-Take’s offerings on price, our source said.
With the deal not expected to close until after July, it’s officially “business as usual” for Double-Take customers. After the deal closes, Vision Solutions is expected to continue supporting Double-Take’s existing customers. “You have $90 million cash on hand and $40 million a year in software maintenance business — Thoma Bravo has every reason to keep existing customers happy,” said the insider.
Should’ve checked the phone again before hitting ‘publish’ on my EMC World Reporter’s Notebook — there were a few more shots from the show last week I’d overlooked –
When EMC’s Data Domain took the first step towards global deduplication with its Global Deduplication Array (GDA) last month, it left Quantum as the only major disk-based data deduplication backup target vendor lacking the ability to cluster nodes. And Quantum isn’t saying much about when and if that capability is coming.
During Quantum’s earnings call Thursday evening, CEO Rick Belluzzo said there would be further enhancements to the company’s deduplication in the wake of a rollout of midrange and SMB DXi disk systems over the past six months. When I spoke to Belluzzo after the call he stopped short of addressing global dedupe, except to say Quantum won’t be following Data Domain’s path. Quantum apparently sees its StorNext file system as a piece of its strategy to scale its backup targets.
“We will be saying more about our roadmap over time,” Belluzzo said. “We also have the StorNext platform to build from. I would expect our strategy to differ from Data Domain in terms of their approach to global deduplication. We have a scalable platform in StorNext, and it’s a better platform to deliver a scalable solution. And Data Domain is still pretty limited [for global deduplication].”
Data Domain’s GDA clusters two nodes at this stage, and requires Symantec’s OpenStorage (OST) API and either Symantec NetBackup or Backup Exec to control the placement of data across multiple controllers.
Analyst Greg Schulz of StorageIO says customers who need global dedupe are mostly large enterprises who may already be using Quantum tape libraries and may be willing to wait for the global dedupe.
“Quantum will have to evolve to global dedupe, it’s part of scaling,” he said. “If you have hundreds of terabytes to petabytes of data, you need more robust deduplication going forward. Quantum has more time because people in that category are still using tape. Quantum has to get there, but it can walk to that market and get it right as opposed to others who have to sprint there.”
With or without global dedupe, Belluzzo says he expects Quantum to make a big push in disk backup this year, and it has a long way to go to make up ground on Data Domain. Quantum reported $22.9 million of revenue from its disk (DXi dedupe platform) and software (StorNext) last quarter, down from $24.2 million in the same quarter last year.
“It is still very early in the evolution of this technology,” he said. “We believe the deduplication market is growing rapidly. There have been reports on how many people have implemented it and it’s a very low amount. So we believe that the market definitely supports pretty rapid growth, and to be frank, our base is pretty small and we really feel that we have to be focused on that kind of rapid growth.”
After losing its OEM relationship with EMC following EMC’s acquisition of Data Domain last summer, Quantum has been building up its own channel for branded products. “It’s been a year of working through transitions to strengthen the company,” Belluzzo said. Now he says Quantum is prepared to grow.
Quantum executives say the new dedupe OEM partner they announced in January – believed to be Fujitsu – is shipping product with DXi software but there are no new OEM deals imminent as the vendor concentrates on its branded products.
This attendee (actually one of the Universal Studios performers who would play the Blues Brothers on the show floor) is wearing one of the blue-horned Brunhilde helmets that were given out at the welcome reception / concert. I never found out the significance of the helmets, though obviously blue corresponds with the “Beat Your Backup Blues” slogan…
VMax to add native VPlex capabilities
According to an EMC whitepaper on the architecture of the VPlex storage virtualization product it launched this week, “VPlex Local supports local federation today. EMC Symmetrix VMax hardware also adds local federation capabilities natively to the array later in 2010 and will work independently of the VPlex family.” One observer’s response: “Why buy [a separate] VPlex?”
More shots from the welcome reception and Counting Crows concert:
EMC keynote speaker previews future infrastructure management offerings
During a keynote by EMC EVP/COO Pat Gelsinger on Monday, EMC VMware czar Chad Sakac previewed some new “service oriented” features of the Ionix infrastructure management software product line, including the next Ionix Unified Infrastructure Manager (2.0) and Project Redwood. Redwood is a user self-service portal planned as a future capability of VMware. There’s talk that Redwood will also be integrated with EMC’s VPlex to give access to data across geographic distance. UIM 2.0, meanwhile, will add features that Sakac said will make internal IT run more like a service, including multi-tenancy, chargeback and SLA-based provisioning.
FLARE/DART and midrange array unification – connecting the dots
EMC ended months of speculation about the convergence of its Clariion and Celerra midrange disk array products with an announcement of a unified management console for the two offerings, as well as disclosures by executives that further convergence is on its way. That convergence includes common replication tools and a more consolidated hardware footprint. Today, while Celerra’s gateway can front a Clariion array, the two engines that make a Clariion a Clariion (FLARE) and a Celerra a Celerra (DART) remain separate entities. If further unification is to take place, there are three options: 1) mash the two code bases together into one; 2) run them alongside one another using server virtualization within a single storage server; or 3) run them alongside one another on separate cores of a multicore processor.
While EMC has finally copped to the fact that it is condensing its platforms, officials from the C-suite on down continue to insist that they will make either standalone offering available to people who want it (in other words, no plan to end-of-life one or the other).
Speculation about plans to mash together code, meanwhile, seems to have come from NetApp’s competitive analysis, which concluded the two codebases will be brought together under something called the Common Block File System (CBFS), an EMC answer to WAFL. CBFS already exists, handling thin provisioning and block compression for both arrays, and it’s possible its role could expand.
Mark Sorenson, senior vice president of EMC’s Unified Storage Division, declined to say exactly how the consolidation might work, but seemed to rule out a combined codebase when, presented with the three options for unification, he answered (rather pointedly, I thought) that server virtualization and multicore processors are “good, new, enabling technologies for consolidation.” Mashing the code together under CBFS also wouldn’t necessarily make sense if EMC plans to continue selling standalone offerings depending on what users ask for.
Other insiders hinted that the multicore processor route is the one that EMC will take, saying “stuff that’s just coming to maturation will come to the fore.” This would seem to fit with Intel’s continued development of multicore processor chip sets, as well as Gelsinger’s prior experience as an executive at Intel.
Centera, Atmos, V-Max – the next place for product convergence?
With Celerra/Clariion coming together, a thought struck me as I stood on the show floor at EMC World Wednesday, surveying the happily humming racks whirring away at several separate booths. There was a Centera Virtual Archive set up, with one box labeled “London” and the other local to show how Centera data can be federated over distance. In another, a Symmetrix V-Max, demonstrating how data can be federated over multiple arrays in a mesh architecture for scalability. And finally, there was a tutorial being given on VPlex, a new virtualization product that can be used to virtualize multiple modular arrays and…federate data over distance. To complete the circle, see item above about V-Max adding native VPlex capabilities. Not shown: Atmos, which also consists of a scale-out system built on commodity hardware with an object interface that can be used to federate unstructured…do you see where I’m going with this?
I talked with one of the booth workers about the Centera a little bit. I asked her what the real difference is between Centera, especially the Centera Virtual Archive, and Atmos specifically. She mentioned the number of ISVs that have already written to Centera, the fact that it uses MD5 hashing to demonstrate objects have not been modified, etc. But is there anything about those different features that couldn’t be ported to one of the other boxes? Particularly since Atmos and Centera are object-based already?
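The MD5 integrity check she described boils down to recording a fingerprint of an object’s contents when it’s written, then recomputing and comparing it later; any modification changes the digest. A minimal sketch of the idea, not Centera’s actual implementation:

```python
import hashlib

def digest(data: bytes) -> str:
    """MD5 fingerprint of an object's contents."""
    return hashlib.md5(data).hexdigest()

# Record the digest when the object is written to the archive
obj = b"compliance record, retained 7 years"
stored = digest(obj)

# Later: recompute and compare to show the object is unmodified
assert digest(obj) == stored
assert digest(obj + b"!") != stored   # any change alters the digest
```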
Experts hash out FAST Cache, potential future features
I already included some of the discussion between Gestalt IT’s Stephen Foskett and DeepStorage.net’s Howard Marks about FAST 2 in my piece about updates to Clariion and Celerra, but the two self-described geeks also chewed over FAST Cache pretty thoroughly Tuesday.
Foskett and Marks both pumped EMC’s Sorenson for details on the use cases of FAST 2 vs. FAST Cache. Sorenson said FAST is meant to place data on tiers of storage according to historical performance characteristics, something Foskett described as “driving using the rear-view mirror.” FAST Cache would be for “bursty” unanticipated performance spikes.
Marks said he’d like to see more user control and predictive features added to FAST Cache, so that if users know which data will become hot at a later date, say, the end of a quarter, they can schedule a move up to cache and back out again when the burst is over. Such controls might also allow users to create an “exclude list” of data that shouldn’t be put in cache regardless of access patterns. Sorenson said EMC was considering exposing such controls through an API set to ISVs like, say, VMware.
“But,” Foskett pointed out, “previous attempts at user-tunable caches were colossal failures. I can see some use cases, but it’s probably better to turn it on and leave it alone.” But some awareness within the cache of files as well as blocks – the better to put file metadata in cache, for example – was something Foskett said EMC should consider, and something Sorenson also said was possible down the road. “That dream of file-level Flash cache is what Sun has with ZFS and Avere has also described already,” Foskett pointed out.
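Foskett’s “rear-view mirror” point can be illustrated with a toy sketch that assumes nothing about EMC’s actual algorithm: history-based tiering promotes whichever blocks were hottest in the previous period, which by definition reacts after the fact and misses a sudden burst on a cold block.

```python
from collections import Counter

def retier(access_log: list[str], fast_slots: int) -> set[str]:
    """History-based tiering: promote the blocks accessed most often in
    the previous period. Reactive by design ("driving using the
    rear-view mirror"): an unanticipated burst on a cold block is
    missed, which is the gap FAST Cache is meant to absorb."""
    counts = Counter(access_log)
    return {blk for blk, _ in counts.most_common(fast_slots)}

# Last period's accesses decide this period's placement
log = ["a", "a", "a", "b", "b", "c"]
hot = retier(log, 2)   # {'a', 'b'}; block 'c' stays on the slow tier
```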
Compellent fires back at VPlex; NetApp still snarking
EMC’s VPlex and midrange disk array announcements this week have caused a stir among competitors who have begun trying to pick apart EMC’s news.
I would have expected Compellent to start in right away on FAST 2 given its own sub-LUN tiered storage features, but Compellent reps said they haven’t dug into FAST 2’s inner workings yet to offer a detailed comparison.
In the meantime, Compellent issued a statement pitting its LiveVolume data migration feature against VPlex:
There are a few similarities between VPLEX and Compellent’s Live Volume technology. Both solutions tackle the problem of non-disruptive volume migration, which is useful in disaster avoidance, load balancing or maintenance situations. That’s where the similarities end.
The EMC solution requires multiple high-end VPLEX hardware engines and many high bandwidth, low latency network connections. Compellent’s focus is on integrating a software solution that scales. EMC has a significant focus on high-end bandwidth, and all that comes with a hefty price tag (about $77k for a local-only interconnect system vs. the $5k license for Live Volume).
VPLEX is designed as an add-on set of rack-mounted servers, which requires a change in the data center topology and the addition of a new in-between configuration. Compellent’s Live Volume is a software solution that fits in with Compellent’s Fluid Data architecture, meaning that it supports any Storage Center configuration or network connections, and can leverage existing network connections without impact on write performance.
Meanwhile, in case you’d forgotten how bitter the rivalry remains between NetApp and EMC, look no further than NetApp’s response to EMC’s midrange storage announcement:
Yesterday’s announcements demonstrate that EMC has finally realized the importance of storage efficiency. NetApp is the undisputed industry leader in storage – from measurement to capabilities, from efficiency guarantees to reporting and optimization – across all systems, management products and workloads such as virtualization and database environments. EMC is clearly following our lead, but NetApp, with its V-Series open systems controller able to optimize an EMC system, continues to deliver greater value and storage efficiency than EMC can deliver natively on its own systems.
The VCE alliance between EMC, Cisco and VMware seems to have made Cisco’s MDS Fibre Channel switches more popular in data centers.
During Cisco’s earnings call Wednesday night, CEO John Chambers said storage product revenue grew 100% year-over-year to $140 million last quarter. One Wall Street analyst says Cisco clearly won market share from its rival Brocade in Fibre Channel switching, and he credits the VCE partnership for that.
“How is it possible that Cisco’s MDS 9513 gained share despite an oversubscribed backplane and generally being considered an inferior product compared to the Brocade’s DCX?” Wedbush Securities analyst Kaushik Roy wrote today in a research note. “Our checks indicate that Cisco is seeing a significant uptick in the sales of MDS and Nexus products, largely due to its VCE (VMware-Cisco-EMC) partnership. Customers who in the past have avoided buying Cisco’s MDS products are now willing to accept Cisco’s MDS due to EMC’s blessing as part of the VCE’s Vblock bundle.”
Comments from Chambers on Cisco’s call appear to agree with Roy’s assessment, except for the part about the oversubscribed backplane and the DCX’s superiority.
“Our architectural sales approach is rapidly not only gaining traction but accelerating,” Chambers said. “This is different than the standalone product approach of our competition. … Vblock sales pull Nexus, UCS, VMware and EMC sales. Key takeaway: Cisco’s momentum in the data center is rapidly accelerating.”
Roy also thinks budding Fibre Channel over Ethernet (FCoE) adoption is helping Cisco. He estimates that one-third of Nexus switches are used for FCoE. That’s mostly on the server rather than the storage side. He also believes 10-Gigabit Ethernet is pushing many enterprises to file-based rather than block-based storage.
According to Roy, Brocade will announce new 64-port cards for its DCX switches and next-generation FCoE converged network adapters at its June 9 Technology Day, and is working on a 10-GigE CNA mezzanine card and blade server switch with 10-GigE and 8 Gbps Fibre Channel ports.
He also says Brocade is working on 16 Gbps Fibre Channel switches for 2011 release, and Cisco will upgrade its MDS 9513 director switch with an improved backplane to eliminate the need for oversubscription later this year.
CommVault CEO Bob Hammer says his customers can’t get enough of data deduplication, and the vendor will give them a lot more of it when its next version of Simpana launches later this year.
During CommVault’s earnings call Tuesday, Hammer said deduplication was the major driver in the company’s 31% revenue growth last quarter. With Simpana 9, he said, CommVault will increase the scale and functionality of its deduplication while integrating source and target dedupe capability. Hammer says CommVault’s dedupe will go beyond anything on the market, “and there will be no close second.”
I spoke with Hammer after the call, and he clarified a bit.
“This will be our third generation of deduplication, and we will dramatically increase the scale with the addition of source-side dedupe and the ability to deduplicate secondary copies and dedupe directly to the cloud,” Hammer said. “Those are the major areas we’ll expand.”
Hammer says the expanded dedupe in Simpana 9 will take it closer to primary data.
“With source-side deduplication, you’re getting close to that primary layer,” he said. “We’re combining that deduplication with the ability to more intelligently manage snap copies across hardware silos. It’s not primary dedupe, but it’s close to the primary layer. We’re working with a number of hardware vendors that will be part of our release in the fall as well.”
Hammer also says Simpana 9 will deduplicate “virtualization environments across the board – at the source and in some cases at the target, but we don’t want to dedupe just at the target. It becomes an integrated seamless part of all tiers of storage, including the cloud and tape.”
Hammer says deduplication was the most popular feature of its Advanced Data Information Management (ADIM) product group, which also includes replication, virtualization, and archiving. ADIM revenue increased 91% year-over-year and 21% from the previous quarter, and made up 43% of CommVault’s $73.4 million revenue last quarter.
“It’s becoming a requirement for data management because it saves a significant amount of money on storage and reduces network traffic,” he said of data dedupe.
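The savings Hammer describes come from storing each unique chunk of data only once and keeping lightweight references to it. A minimal fixed-size-block sketch of the idea, illustrative only and not CommVault’s implementation:

```python
import hashlib

def dedupe(data: bytes, block_size: int = 8):
    """Split data into fixed-size blocks, store each unique block once,
    and keep a recipe of fingerprints to reassemble the original."""
    store, recipe = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)   # duplicate blocks stored only once
        recipe.append(fp)
    return store, recipe

def restore(store, recipe) -> bytes:
    """Rebuild the original stream from stored blocks."""
    return b"".join(store[fp] for fp in recipe)

data = b"AAAAAAAABBBBBBBBAAAAAAAA"   # first and third 8-byte blocks repeat
store, recipe = dedupe(data)
print(len(recipe), "blocks,", len(store), "unique")  # 3 blocks, 2 unique
assert restore(store, recipe) == data
```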
When CommVault released Simpana 8 with dedupe in January 2009, the vendor maintained many customers would no longer need hardware deduplication products. Hammer says CommVault now competes with more hardware vendors than software vendors for dedupe, with the main competition coming from EMC’s Data Domain.
“Our win rate remains very high, although clearly EMC is doing well with that product in its installed base,” Hammer said. “It’s still the best deduplication appliance on the market.”
While Hammer was talking up CommVault’s plans for dedupe Tuesday, Data Domain launched its DD Boost software at EMC World to speed the dedupe process. Data Domain execs also discussed plans to integrate Data Domain’s dedupe with EMC Avamar source-based dedupe to end the target versus source debate.
Hammer also said on the earnings call that CommVault is developing technology that will bring it into a third major market to go with its current place in the backup and ADIM markets.
He wouldn’t elaborate when I spoke to him. “We’ll talk about that later in the fall,” he said.
CommVault may also reconsider its current strategy of developing all of its own technology rather than picking up IP via acquisition. When asked if that was a possibility during the earnings call, Hammer said: “In the past, we’ve dismissed it. When we plan our strategy for the next 12 months we’ll take a broader look at what we’re going to do with the company, and it may come into play. Now it’s not in the plan, but that may change.”