Hitachi Data Systems has kept a tight lid on any upgrades to its flagship USP-V enterprise storage platform. Everybody in the storage industry expects an upgrade this year, but the HDS folks won’t even confirm that much. They will talk about other developments, though, while keeping details sparse.
One thing HDS is working on is automated tiering software, which HDS VP of storage platforms Robert Basilio says is a key to driving solid state storage adoption. “SSD adoption will be limited until you have a better way of managing storage,” he said. “And prices are not coming down as fast as anybody would like.”
Basilio says he’s not concerned that archrival EMC already has its FAST tiering software out with version 2 on the way. “We have more know-how than anybody in this area,” he said. “EMC is bringing out FAST, but we think we’ll have faster and fastest.”
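Automated tiering of the sort Basilio describes generally works by tracking how often each extent of data is accessed and promoting the hottest extents to a limited pool of SSD. Here's a rough sketch of that idea, with all names hypothetical and no relation to HDS's actual implementation:

```python
from collections import Counter

class TieringPolicy:
    """Toy automated-tiering sketch: promote the most frequently
    accessed extents to a fixed-size SSD tier, leave the rest on disk."""

    def __init__(self, ssd_capacity_extents):
        self.ssd_capacity = ssd_capacity_extents
        self.access_counts = Counter()

    def record_access(self, extent_id):
        # Called on every read/write to build a heat map of extents.
        self.access_counts[extent_id] += 1

    def plan_tiers(self):
        """Return (ssd_extents, hdd_extents) ranked by access heat."""
        ranked = [extent for extent, _ in self.access_counts.most_common()]
        return set(ranked[:self.ssd_capacity]), set(ranked[self.ssd_capacity:])
```

A real array would do this per sub-LUN extent on a schedule, but the core loop — count accesses, rank, migrate — is the same.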
HDS, which partners for backup data deduplication, plans to have dedupe for primary storage as well but “there’s nothing I can share now,” Basilio said.
HDS did break out its revenue results from the Hitachi parent company’s earnings report, and while it doesn’t get as specific with its numbers as most of its competitors, those numbers do show some interesting trends.
HDS said its revenue for last quarter was $804 million, up 13% from the previous year. This compares to 21% year over year growth by EMC, but Stifel Nicolaus financial analyst Aaron Rakers points out EMC’s revenue grew 14% if you exclude the Data Domain platform that EMC acquired after the second quarter of 2009.
Still, EMC said its high-end Symmetrix revenue grew 32% since last year. HDS says its USP-V grew only “high-single digits.”
But HDS is becoming less reliant on USP-V business. Hardware made up 55% of its revenue for last quarter with services contributing 30% and software 15%. And HDS midrange modular storage grew in “strong double-digits.” Basilio says USP-V isn’t losing share to competitors but customers are finding more value in the HDS Adaptable Modular Storage 2000 midrange platform. It is also selling a lot more NAS through its OEM partnership with BlueArc.
“There’s a lot of change in the storage world today,” he said. “The AMS is not our best or fastest system, but it’s the most consistently reliable product in the modular area today.”
Quantum’s strategy for breaking its sales slump is to think small. That means a steady stream of smaller deals rather than relying on a handful of big ones.
Quantum CEO Rick Belluzzo says the vendor made some large deals with its DXi7500 deduplication systems last quarter but many large deals also got pushed back because of spending issues, resulting in “lumpy” results. The bottom line is that Quantum needs to improve sales for the DXi6500 midrange and DXi4500 SMB platforms instead of relying heavily on larger DXi7500 deals in the enterprise.
“The nature of our business is big-deal oriented, and that makes us more susceptible to when people pull back,” Belluzzo said. “We look at bigger deals that we track, and very few – virtually none – are deals we lost. A lot didn’t close, but hopefully slipped into the next quarter. But our smaller deals can’t offset that because we don’t have enough of that business yet. We need a more diverse strategy.”
Quantum this week reported revenue for last quarter of $163 million, well below the $170 million to $180 million it had forecast for the quarter. The vendor lost $3 million for the quarter. “We clearly did not deliver the growth we expected,” Belluzzo said.
Belluzzo said Quantum struggled mightily in Europe and one geographic area of North America. The vendor made some realignments to its sales force, but will concentrate on pushing its DXi disk backup, StorNext software and tape library products through the channel. Quantum hopes to win partners who are looking for an alternative to Data Domain’s dedupe line now that EMC owns it, and who are frustrated with Oracle’s handling of the Sun tape platform.
It now appears unlikely that Quantum will sign any more major deduplication OEMs to replace the deal it lost with EMC after EMC bought Data Domain. Quantum did announce one OEM deal in January (believed to be Fujitsu) but is now more channel focused.
“Don’t expect any imminent changes in what we’re doing today, and that’s mostly driving branded business,” Belluzzo said. “Our disk and software and StorNext platform will have various partners and maybe OEMs associated with them, but we think we have plenty to work with.
“It was a disappointing quarter with the economy, but underlying this we are making progress around the core tenets of our strategy. That’s growing our branded business and taking advantage of channel disruption.”
A longer term goal for Quantum is to expand its data reduction capabilities beyond deduplication for backup. Primary data reduction is a hot topic with storage vendors these days and Quantum has designs on that space.
“We are driving our architecture and technology into a world where deduplication becomes more common in various tiers,” Belluzzo said. “That basically calls for the process to become more about data reduction than deduplication. We can take a stream of data that has a lot of influences, and respond accordingly to get a better overall result. There is a lot of thinking and technical work underway to move our architecture into a world that is different than what we see today.”
IBM made its long-rumored acquisition of primary data compression vendor Storwize today. Word first got out more than a month ago that IBM would pay $140 million for the privately held Storwize as storage vendors are moving to put together their primary data reduction strategies.
IBM expects the deal to close by the end of September. It did not disclose financial terms.
The IBM-Storwize acquisition comes less than two weeks after Dell bought Ocarina Networks, which had been seen as Storwize’s main competitor although the vendors use different methods to shrink data.
The Storwize STN-6000 appliance works with NAS systems, including the IBM N series (rebranded NetApp storage) and Scale Out Network Attached Storage (SONAS).
In a letter emailed to “friends of Storwize” today, Storwize CEO Ed Walsh said IBM will continue selling Storwize’s STN-6000 appliance while expanding the platform. Storwize has been working on adding block storage reduction to go with its traditional file compression, and that apparently will continue under IBM.
“Storwize will continue to sell and deploy its STN-6000 series of products and support CIFS and NFS protocols,” Walsh wrote. “Additionally, the Storwize product will continue to evolve to support additional storage systems and additional protocols. … Under IBM we will continue to deliver capacity optimization without compromise to you, across more storage platforms, and to additional new customers.“
IBM’s press release issued today said it found Storwize attractive because it compresses primary data – files, virtualization images, and databases – and lets customers store up to five times more.
IBM already has backup deduplication technology in its ProtecTIER virtual tape library (VTL) software and Tivoli Storage Manager (TSM) application.
IBM claimed Storwize has more than 100 customers including Mobileye, Polycom Israel, Shopzilla, and Sumitomo Mitsui Construction.
For more on this story, check out SearchStorage.com.
Druva Software is in the process of moving headquarters from India to the U.S. with designs on conquering the laptop backup world.
Druva last week released its inSync 4.0 backup software, which the vendor claims has application-aware data deduplication designed to work at the logical block or object level. Druva founder and CEO Jaspreet Singh compares it to EMC Avamar, but built specifically for laptops (EMC added laptop support for Avamar last year).
“We’re application aware, so we understand the file format,” Singh said. “We can actually go through APIs from Microsoft to understand the PST format, and dedupe at the message level or attachment level. We can dedupe across applications and at the source.”
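The object-level dedupe Singh describes amounts to parsing a container format (such as a PST mailbox) into its components and fingerprinting each one, so an attachment shared by many messages is stored only once. Here's a simplified illustration using plain dicts in place of a real PST parser — the function and data layout are hypothetical, not Druva's actual code:

```python
import hashlib

def dedupe_attachments(messages, store):
    """Store each unique message part once, keyed by content hash.
    `messages` is a list of dicts: {"body": bytes, "attachments": [bytes]}.
    Returns a per-message manifest of hash references."""
    manifests = []
    for msg in messages:
        refs = []
        for blob in [msg["body"]] + msg["attachments"]:
            digest = hashlib.sha256(blob).hexdigest()
            if digest not in store:
                store[digest] = blob   # new content: store it once
            refs.append(digest)        # duplicates become mere references
        manifests.append(refs)
    return manifests
```

If two users receive the same 5 MB attachment, only one copy crosses the wire and lands in the store; everything else is a hash reference.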
inSync 4.0 also has a new embedded storage engine that supports 16 TB of deduped data per server and 200 parallel connections. The product is based on the NoSQL Berkeley DB (BDB) that Druva OEMs from Oracle. BDB uses a small storage library instead of an SQL optimizer layer, according to Singh, making it easier to download and install. Its new WAN optimization engine chooses the best packet size to control the amount of bandwidth used and reduce latency, Singh says.
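Druva hasn't published how its WAN engine picks transfer sizes, but a common approach is a feedback loop: grow the chunk size while the link is responsive, shrink it when latency climbs. A toy version of that heuristic (the policy and thresholds here are purely illustrative):

```python
def next_chunk_size(current, rtt_ms, target_rtt_ms=100,
                    min_size=4_096, max_size=1_048_576):
    """Hypothetical WAN-tuning heuristic: halve the transfer chunk
    when measured round-trip time exceeds the target, double it
    (up to a cap) when the link has headroom."""
    if rtt_ms > target_rtt_ms:
        return max(min_size, current // 2)
    return min(max_size, current * 2)
```

A laptop on hotel Wi-Fi would quickly ratchet down to small chunks, while the same client on an office LAN would ramp up toward the cap.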
“There’s a lot of software for backing up servers that was modified to work with PCs, and then modified to work with laptops,” Singh said. “None were made specifically for laptops. But laptop backup is much trickier than PC backup. A person is either working or has the laptop switched off, so there’s no ideal time to back up a laptop.”
Is there room for another backup software player, even if it does specialize in an underserved market like laptop data protection? Singh says a few large organizations are using inSync, and he’s negotiating OEM deals with two North American partners to achieve wider distribution and product recognition. “The two issues we face are branding and pricing,” he said, an admission that inSync’s price of $55 per laptop license ($65 with support) is not cheap.
Druva recently received $5 million in funding from Sequoia Capital, and Singh said the three-year-old company will work out of Sequoia’s Menlo Park, Calif., office until it sets up a U.S. headquarters. “We’re moving management and key sales people to the U.S.,” he said. “We will be more-or-less a U.S. company.”
In April, Oracle executives promised their largest Sun StorageTek tape customers enhancements that would help them scale their enterprise libraries to keep up with rapid data growth.
Today they started delivering on those promises with scalability and high availability enhancements. The SL8500 Modular Library System – the largest in the platform – now scales to 100,000 tape slots, up from 70,000. The SL8500 also now supports LTO-5 tape cartridges with 1.5 TB of native capacity. With the improvements, the SL8500 can scale to 150 PB of native capacity – more than twice its previous capacity.
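The capacity figure falls straight out of the slot count and cartridge size, assuming every slot holds an LTO-5 cartridge:

```python
# SL8500 capacity check: 100,000 slots of 1.5 TB LTO-5 cartridges.
slots = 100_000
lto5_native_tb = 1.5                       # TB per cartridge, native (uncompressed)
total_pb = slots * lto5_native_tb / 1_000  # TB -> PB
print(f"{total_pb:.0f} PB native")         # 150 PB native
```

Tape vendors often also quote a 2:1-compressed figure, which would double that number.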
Oracle also added redundant hot swappable robotics and library control cards to the SL8500 with automatic failover capabilities.
Oracle product marketing manager Tom Wultich said the focus of the upgrade was helping the largest enterprises that use the SL8500 keep up with data growth.
“Tape drives double in capacity every two years, but data is growing faster than that,” Wultich said. “Our largest customers need to be able to keep up with that growth, so we’re offering nearly three times an improvement in capacity.”
Overland Storage today took the next steps in its rejuvenation plan by rolling out larger versions of its LTO tape library and SnapServer multiprotocol storage system.
The NEO 8000e scales to 3 PB of capacity with 1,000 cartridges and 24 tape drives, and supports LTO-5 and LTO-4 media. The library will eventually replace the NEO 8000, Overland’s previous high-end library. Besides scaling higher, the 8000e has embedded Fibre Channel, SAS and SCSI connectivity while the 8000 requires bridge cards for each protocol, and the 8000e needs no additional hardware for partitioning. Overland’s director of product marketing for tape products, Peri Grover, says the vendor will offer an upgrade kit for 8000 customers who want to move to the 8000e.
The NEO 8000e will compete with enterprise libraries from Quantum and the Oracle Sun StorageTek platform.
“We see a lot of interest from the legacy StorageTek-installed base,” Grover said.
Pricing starts at $47,999 for the NEO 8000e.
The SnapServer N2000 is the new high end of Overland’s NAS platform; it also supports iSCSI through Microsoft VSS (Volume Shadow Copy Service) and VDS (Virtual Disk Service). The 2U unit is available with four or six Ethernet ports and scales to 144 TB. The previous high end of the SnapServer NAS line, the 850, is a 1U model with four Ethernet ports.
“This is the top end of our NAS line,” Overland product marketing manager for network storage products Drew O’Brien said. “It’s for customers who need performance and scalability in simple IT environments. Maybe they bought NAS in the consumer space before and now need something more sophisticated.”
O’Brien says the N2000 will compete with EMC Iomega and NetGear NAS devices. Pricing starts at $4,999 for 4 TB and $5,999 for 8 TB.
While these products are a step up from what Overland already had, they’re hardly enough to turn around a company that has suffered heavy financial losses for years. Considering Overland CEO Eric Kelly has put together a distinguished team including VP of engineering Geoff Barrall and VP of sales and marketing Julian Mansolf, we can expect more product rollouts soon.
Zetta added disaster recovery to its enterprise NAS on-demand service by opening a second data center and giving customers the option of replicating between the two.
Zetta launched its On-Demand Enterprise Storage cloud last October, using its Santa Clara, CA data center to host customer data. On Tuesday it said it has opened a second data center in Secaucus, NJ, giving customers an option to store data on the West or East Coast. Whatever data center they choose for their primary data, customers can replicate to the other data center.
Zetta’s pricing for primary volumes starts at 25 cents per GB per month and it charges another 15 cents per GB per month for a replica copy.
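At those published rates, the monthly bill for a geo-replicated volume is easy to work out. A quick sketch (the function name and defaults are mine, not Zetta's):

```python
def monthly_cost_usd(gb, primary_cents=25, replica_cents=15, replicated=True):
    """Monthly cost at Zetta's published per-GB rates: 25 cents/GB
    for the primary volume plus an optional 15 cents/GB replica."""
    cents_per_gb = primary_cents + (replica_cents if replicated else 0)
    return gb * cents_per_gb / 100

print(monthly_cost_usd(10_000))                    # 10 TB replicated: $4,000/month
print(monthly_cost_usd(10_000, replicated=False))  # primary only: $2,500/month
```

So the DR option adds 60% to the bill for a given volume, in exchange for a read-only copy in the other data center.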
Zetta claims replicated storage volumes appear as fully mounted and accessible read-only volumes, giving customers access to data from both data centers at once. If the primary volume becomes unavailable, the second volume assumes read and write status and customers can treat it as primary data.
Zetta automatically replicates data with no user interaction, and the startup claims replicated data can be mounted for immediate use when required.
“Usually when a company wants a replica of data, it has to sign up a new data center,” Zetta CEO Jeff Treuhaft said. “We give them a one-click route of replicating that data in a separate geo zone and separate facility. We think this will attract people to the use case of primary data in the cloud.”
Treuhaft says the second data center also lets companies in the eastern U.S. stay closer to their data, which he says can improve performance because read and write requests will take less time to complete.
Like other smaller companies trying to make it as a cloud provider, Zetta faces the challenge of taking on providers such as Amazon, AT&T and Verizon. But Zetta also faces competition from primary NAS vendors such as NetApp and EMC. Treuhaft says Zetta is growing its footprint of stored data between 6% and 8% each week, but declined to say how many customers the startup has.
StorageIO analyst Greg Schulz says Zetta has a well focused target market and could get a boost from the DR option.
“These guys have as much of a shot at making it as any of the others, perhaps even an edge in their approach that can be used for public or private deployment with integrated replication and data integrity,” he said. “Some of their competitors are off elephant hunting looking for big game deals while others are trying to play to the for-free market as opposed to having a solution affordable to deploy with their cloud point of presence approach.
“They have some legs, let’s see if and how they can use them now to survive in a market sector about to undergo change and transformation in the not so distant future.”
Remember AppIQ, the SRM startup that Hewlett-Packard acquired in 2005? Well, AppIQ’s founders Ash Ashutosh and David Chang are back with another startup. This time, they’re looking to help manage data in virtual environments.
Their company, Actifio, today closed an $8 million A funding round led by North Bridge Venture Partners and Greylock Partners. CEO Ashutosh, who also served as chief technologist for HP’s StorageWorks after the AppIQ deal, says Actifio will ship its first product around October. He’s not giving much away yet, except to say the market his new company is addressing is Data Management Virtualization. Actifio’s press release says its patent-pending technology “delivers unified data protection, disaster recovery and business continuity across the data lifecycle for virtual and physical IT environments.”
Ashutosh says Actifio will start out completely channel focused for sales, and has been working with five large resellers for months. But he says his startup will partner as much as possible with storage array vendors. “Our goal is to completely change — yet co-exist with — what is out there,” he said.
The 50-person Waltham, MA-based company has been in stealth for 18 months. Its other executives include VP of products Chang, VP of marketing Steven Blumenau (formerly of EMC), VP of sales Rick Nagengast (formerly of EMC, DEC and Compaq) and customer operations manager James Pownell (formerly of EMC and founder of ExaGrid Systems).
Dell bought its primary deduplication OEM partner Ocarina Networks today before it even integrated Ocarina’s technology into Dell storage.
Dell last month hinted at an OEM deal with Ocarina when a Dell storage executive was quoted in a press release Ocarina put out about its OEM product. After today’s acquisition was announced, Dell product manager Brett Roscoe confirmed there had been an OEM deal in the works but said Dell wanted more control over the dedupe technology.
Unlike Dell’s acquisition of EqualLogic in 2008, Dell isn’t getting a mature business with Ocarina. This was a pure technology buy, which highlights the importance storage vendors place on dedupe for primary data.
“We believe that deduplication is a key strategic pillar for storage going forward,” Roscoe said. “We started working with Ocarina some time ago, developing solutions around EqualLogic and other storage products. The more we worked with them, the more interesting they became.”
Dell did not disclose the price of the acquisition.
Although Roscoe wouldn’t discuss specific products, sources familiar with the Dell-Ocarina relationship say Dell was already working on integrating Ocarina’s dedupe in three products: EqualLogic iSCSI SANs, a scale-out NAS product it is developing from IP it picked up from Exanet this year, and a disk backup target. Dell currently OEMs deduplication backup products from EMC Data Domain and software partners CommVault and Symantec.
Roscoe talked about dedupe for EqualLogic and for unstructured data, but wouldn’t get into using Ocarina for backup. “We’re going to look at all our opportunities,” he said. “There’s nothing specific around that now.”
The future of Ocarina’s current shipping products — appliances aimed at reducing unstructured data — is unclear although Dell plans to sell and support the appliances until it can develop its own branded version and move the technology to other platforms.
Storage vendors are moving to incorporate technology in their storage systems to shrink primary data. NetApp has had dedupe for primary data for three years. Hewlett-Packard last month launched its StoreOnce deduplication for backup and primary data. IBM has been linked to a possible deal for data compression vendor Storwize, and EMC is planning to deliver compression for primary storage on its Clariion and Celerra platforms.
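Primary compression of the kind these vendors are adding works inline: data is compressed as it is written and decompressed transparently on read, so applications never see the difference. A bare-bones sketch with Python's standard zlib library (the block-store interface here is invented for illustration):

```python
import zlib

def write_block(store, key, data, level=6):
    """Compress a block on write; the app never sees the compressed form."""
    store[key] = zlib.compress(data, level)

def read_block(store, key):
    """Decompress transparently on read."""
    return zlib.decompress(store[key])
```

How well this pays off depends entirely on the data: databases and VM images with lots of repetition shrink dramatically, while already-compressed media barely budges.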
“We knew having IP in the deduplication space was going to be strategic for all Dell storage products going forward,” Roscoe said. “We believe we had to have the ability to build dedupe into our product set.”
Roscoe wouldn’t say how close Dell is to having dedupe in any storage products, but hinted that it’s not far off. “Let’s just say this isn’t a five-year project.”
Nimble Storage Thursday came out of stealth with a storage system that the startup’s executives said combines primary storage with deduplication for backup in the same device. It makes sense that Nimble would use dedupe, considering its founders were former Data Domain engineers.
But Frank Slootman, president of EMC’s data backup and recovery division and Data Domain’s CEO until EMC acquired the company last year, says there is no dedupe in Nimble’s storage. Slootman saw my story on SearchStorage about Nimble, and sent an email claiming “there is no dedupe in Nimble whatsoever. Read their white paper, or just ask them. We did. They do have local compression.”
I did ask Nimble CEO Varun Mehta when I spoke to him before their launch. He said his storage systems use inline compression for primary data and dedupe for backups. And according to Nimble’s press release on its product launch (emphasis added):
The CS-Series is based on the company’s patent-pending architecture, Cache Accelerated Sequential Layout (CASL™), which enables fast inline data compression, intelligent data optimization leveraging flash memory and high-capacity disk, instant deduped backups, and WAN efficient replication – all in a single device. CASL allows organizations to reduce their capital expenditures for storage and backup by at least 60 percent, while eliminating the need for separate, disk-based backup.
And a data sheet on the Nimble web site states:
Nimble slashes IT costs by converging compressed primary storage, deduped backup storage, and disaster recovery into one solution.
Slootman is correct about the white paper, though. A paper called “A New Approach to Storage and Backup” on the Nimble site does not say it uses deduplication. It claims “Nimble Storage CASL provides in-line compression on all data” and, in a section on its backup technology, says “CASL enables instant, application-consistent backups on the same array with very efficient (up to 20x) backup capacity optimization.”
Capacity optimization could be dedupe or compression. But nowhere in the 15-page white paper does Nimble claim to dedupe backup data.
While Nimble execs said in press interviews that they dedupe, they had a different message at a blogger TechField Day in Seattle where the startup officially launched Thursday. Nimble presenters did not mention deduplication at the blogger event.
I asked Nimble for clarification about its mixed marketing, and its VP of marketing Dan Leary replied via email:
“Sorry if there was any confusion regarding deduplication. Nimble does not deduplicate in the Data Domain sense, where all duplicate blocks are eliminated using a content-based signature. Our snapshot-based block sharing eliminates duplicate blocks across backups like deduplication systems. Nimble compresses, but does not deduplicate, within a primary storage volume. However, we offer better space savings compared with any secondary storage. Secondary storage systems require a baseline copy of the original data to get started. Because converged storage doesn’t require a baseline full backup, Nimble provides even better capacity optimization than secondary storage. Look for an upcoming blog from our CTO who will cover this topic in more detail.”
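The distinction Leary draws is concrete: content-based dedupe fingerprints every block and eliminates duplicates wherever they occur, while snapshot block sharing only avoids re-storing blocks that are unchanged, in place, between successive snapshots of the same volume. A toy comparison with fixed-size blocks (both functions are simplified illustrations, not either vendor's code):

```python
import hashlib

def content_dedupe(volumes):
    """Data Domain-style sketch: keep one copy per unique block,
    no matter where or when the block appears."""
    return {hashlib.sha256(b).hexdigest() for vol in volumes for b in vol}

def snapshot_sharing(snapshots):
    """Snapshot-block-sharing sketch: store only blocks that changed
    (by position) since the previous snapshot of the same volume."""
    stored = 0
    prev = None
    for snap in snapshots:
        for i, block in enumerate(snap):
            if prev is None or i >= len(prev) or prev[i] != block:
                stored += 1
        prev = snap
    return stored
```

The difference shows up when identical data moves: if two snapshots hold the same blocks in swapped positions, content dedupe stores two blocks while snapshot sharing stores four, because nothing lines up position-for-position.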
If Nimble can shrink data enough to make backups and replication for DR more efficient without taking much of a performance hit while compressing, it may not make much difference how it’s doing it. Nimble beta tester Dave Conde, IT director of eMeter, says he’s found performance outstanding and he’s getting a reduction in data, although he hasn’t measured the actual rate.
But if Nimble is deduping, EMC execs probably want to know just how close the startup’s dedupe technology is to the dedupe it paid $2.1 billion for when it acquired Data Domain.
In a follow-up email, Slootman attributed Nimble’s mixed message to “a disconnect with marketing. They probably mean like NetApp that their snapshots use block differentials. They should not be using the term [deduplication] so indiscriminately.”