In April, Oracle executives promised their largest Sun StorageTek tape customers enhancements that would help them scale their enterprise libraries to keep up with rapid data growth.
Today they started delivering on those promises with scalability and high-availability enhancements. The SL8500 Modular Library System, the largest in the platform, now scales to 100,000 tape slots, up from 70,000. The SL8500 also now supports LTO-5 tape cartridges with 1.5 TB of native capacity. With the improvements, the SL8500 can scale to 150 PB of native capacity, more than twice its previous maximum.
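The new maximums are easy to sanity-check. A quick calculation, assuming the previous generation held 1 TB cartridges (T10000B-class media, an assumption not stated above):

```python
# New SL8500 ceiling: 100,000 slots of 1.5 TB native LTO-5 cartridges.
slots = 100_000
tb_per_cartridge = 1.5
native_pb = slots * tb_per_cartridge / 1000  # TB -> PB

# Previous ceiling, assuming 70,000 slots of 1 TB cartridges.
previous_pb = 70_000 * 1.0 / 1000

print(native_pb)                # 150.0
print(native_pb > 2 * previous_pb)  # True -- "more than twice"
```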
Oracle also added redundant, hot-swappable robotics and library control cards to the SL8500 with automatic failover capabilities.
Oracle product marketing manager Tom Wultich said the focus of the upgrade was helping the largest enterprises that use the SL8500 keep up with data growth.
“Tape drives double in capacity every two years, but data is growing faster than that,” Wultich said. “Our largest customers need to be able to keep up with that growth, so we’re offering nearly three times an improvement in capacity.”
Overland Storage today took the next steps in its rejuvenation plan by rolling out larger versions of its LTO tape library and SnapServer multiprotocol storage system.
The NEO 8000e scales to 3 PB of capacity with 1,000 cartridges and 24 tape drives, and supports LTO-5 and LTO-4. The library will eventually replace the NEO 8000, Overland’s previous high-end library. Besides scaling higher, the 8000e differs from the 8000 in two ways: it has embedded Fibre Channel, SAS and SCSI drive connectivity, while the 8000 requires bridge cards for each protocol, and it needs no additional hardware for partitioning. Overland’s director of product marketing for tape products, Peri Grover, says the vendor will offer an upgrade kit for 8000 customers who want to move to the 8000e.
The NEO 8000e will compete with enterprise libraries from Quantum and the Oracle Sun StorageTek platform.
“We see a lot of interest from the legacy StorageTek-installed base,” Grover said.
Pricing starts at $47,999 for the NEO 8000e.
The SnapServer N2000 is the new high end of Overland’s NAS platform; it also supports iSCSI, with Microsoft VSS (Volume Shadow Copy Service) and VDS (Virtual Disk Service) integration. The 2U unit is available with four or six Ethernet ports and scales to 144 TB. The previous high end of the SnapServer NAS line, the 850, is a 1U model with four Ethernet ports.
“This is the top end of our NAS line,” Overland product marketing manager for network storage products Drew O’Brien said. “It’s for customers who need performance and scalability in simple IT environments. Maybe they bought NAS in the consumer space before and now need something more sophisticated.”
O’Brien says the N2000 will compete with EMC Iomega and NetGear NAS devices. Pricing starts at $4,999 for 4 TB and $5,999 for 8 TB.
While these products are a step up from what Overland already had, they’re hardly enough to turn around a company that has suffered heavy financial losses for years. Considering Overland CEO Eric Kelly has put together a distinguished team including VP of engineering Geoff Barrall and VP of sales and marketing Julian Mansolf, we can expect more product rollouts soon.
Zetta added disaster recovery to its enterprise NAS on-demand service by opening a second data center and giving customers the option of replicating between the two.
Zetta launched its On-Demand Enterprise Storage cloud last October, using its Santa Clara, CA data center to host customer data. On Tuesday it said it has opened a second data center in Secaucus, NJ, giving customers an option to store data on the West or East Coast. Whatever data center they choose for their primary data, customers can replicate to the other data center.
Zetta’s pricing for primary volumes starts at 25 cents per GB per month and it charges another 15 cents per GB per month for a replica copy.
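Those rates make the cost of adding DR straightforward to estimate. A quick sketch (the function name is illustrative, not Zetta's):

```python
# Monthly cost at the rates quoted in the article:
# $0.25/GB/month for a primary volume, plus $0.15/GB/month for a replica.
def zetta_monthly_cost(gb, replicated=True):
    cost = gb * 0.25
    if replicated:
        cost += gb * 0.15
    return round(cost, 2)

print(zetta_monthly_cost(1024))         # 1 TB with a replica: 409.6
print(zetta_monthly_cost(1024, False))  # primary only: 256.0
```

So a replicated terabyte runs about $410 a month, a 60% premium over an unreplicated one.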
Zetta claims replicated storage volumes appear as fully mounted and accessible read-only volumes, giving customers access to data from both data centers at once. If the primary volume becomes unavailable, the second volume assumes read and write status and customers can treat it as primary data.
Zetta automatically replicates data with no user interaction, and the startup claims replicated data can be mounted for immediate use when required.
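The behavior Zetta describes can be sketched as a simple state machine: the replica is mounted read-only alongside the primary, and is promoted to read-write when the primary goes away. The class and method names below are illustrative, not Zetta's actual API:

```python
# Minimal sketch of the described replication semantics (hypothetical names).
class ReplicatedVolume:
    def __init__(self, primary_site, replica_site):
        self.primary = {"site": primary_site, "mode": "read-write"}
        self.replica = {"site": replica_site, "mode": "read-only"}

    def read(self, site):
        # Reads can be served from either data center at any time.
        return f"read from {site}"

    def write(self, data):
        # Writes go to the primary and are replicated automatically.
        if self.primary["mode"] != "read-write":
            raise IOError("primary unavailable")
        return f"wrote {len(data)} bytes to {self.primary['site']}"

    def failover(self):
        # The replica assumes read-write status and becomes the new primary.
        self.replica["mode"] = "read-write"
        self.primary, self.replica = self.replica, self.primary

vol = ReplicatedVolume("Santa Clara, CA", "Secaucus, NJ")
vol.failover()
print(vol.primary["site"])  # Secaucus, NJ
```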
“Usually when a company wants a replica of data, it has to sign up a new data center,” Zetta CEO Jeff Treuhaft said. “We give them a one-click route of replicating that data in a separate geo zone and separate facility. We think this will attract people to the use case of primary data in the cloud.”
Treuhaft says the second data center also lets companies in the eastern U.S. stay closer to their data, which he says can improve performance because read and write requests will take less time to complete.
Like other smaller companies trying to make it as a cloud provider, Zetta faces the challenge of taking on providers such as Amazon, AT&T and Verizon. But Zetta also faces competition from primary NAS vendors such as NetApp and EMC. Treuhaft says Zetta is growing its footprint of stored data between 6% and 8% each week, but declined to say how many customers the startup has.
StorageIO analyst Greg Schulz says Zetta has a well-focused target market and could get a boost from the DR option.
“These guys have as much of a shot at making it as any of the others, perhaps even an edge in their approach that can be used for public or private deployment with integrated replication and data integrity,” he said. “Some of their competitors are off elephant hunting looking for big game deals while others are trying to play to the for free market as opposed to having a solution affordable to deploy with their cloud point of presence approach.
“They have some legs, let’s see if and how they can use them now to survive in a market sector about to undergo change and transformation in the not so distant future.”
Remember AppIQ, the SRM startup that Hewlett-Packard acquired in 2005? Well, AppIQ’s founders Ash Ashutosh and David Chang are back with another startup. This time, they’re looking to help manage data in virtual environments.
Their company, Actifio, today closed an $8 million Series A funding round led by North Bridge Venture Partners and Greylock Partners. CEO Ashutosh, who also served as chief technologist for HP’s StorageWorks after the AppIQ deal, says Actifio will ship its first product around October. He’s not giving much away yet, except to say the market his new company is addressing is Data Management Virtualization. Actifio’s press release says its patent-pending technology “delivers unified data protection, disaster recovery and business continuity across the data lifecycle for virtual and physical IT environments.”
Ashutosh says Actifio will start out completely channel focused for sales, and has been working with five large resellers for months. But he says his startup will partner as much as possible with storage array vendors. “Our goal is to completely change — yet co-exist with — what is out there,” he said.
The 50-person Waltham, MA-based company has been in stealth for 18 months. Its other executives include VP of products Chang, VP of marketing Steven Blumenau (formerly of EMC), VP of sales Rick Nagengast (formerly of EMC, DEC and Compaq) and customer operations manager James Pownell (formerly of EMC and a founder of ExaGrid Systems).
Dell bought its primary deduplication OEM partner Ocarina Networks today before it even integrated Ocarina’s technology into Dell storage.
Dell last month hinted at an OEM deal with Ocarina when a Dell storage executive was quoted in a press release Ocarina put out about its OEM product. Following the deal today, Dell product manager Brett Roscoe confirmed there was an OEM deal in the works but said Dell wanted more control over the dedupe technology.
Unlike Dell’s acquisition of EqualLogic in 2008, Dell isn’t getting a mature business with Ocarina. This was a pure technology buy, which highlights the importance storage vendors place on dedupe for primary data.
“We believe that deduplication is a key strategic pillar for storage going forward,” Roscoe said. “We started working with Ocarina some time ago, developing solutions around EqualLogic and other storage products. The more we worked with them, the more interesting they became.”
Dell did not disclose the price of the acquisition.
Although Roscoe wouldn’t discuss specific products, sources familiar with the Dell-Ocarina relationship say Dell was already working on integrating Ocarina’s dedupe in three products: EqualLogic iSCSI SANs, a scale-out NAS product it is developing from IP it picked up from Exanet this year, and a disk backup target. Dell currently OEMs deduplication backup products from EMC Data Domain and software partners CommVault and Symantec.
Roscoe talked about dedupe for EqualLogic and for unstructured data, but wouldn’t get into using Ocarina for backup. “We’re going to look at all our opportunities,” he said. “There’s nothing specific around that now.”
The future of Ocarina’s current shipping products — appliances aimed at reducing unstructured data — is unclear, although Dell plans to sell and support the appliances until it can develop its own branded version and move the technology to other platforms.
Storage vendors are moving to incorporate technology in their storage systems to shrink primary data. NetApp has had dedupe for primary data for three years. Hewlett-Packard last month launched its StoreOnce deduplication for backup and primary data. IBM has been linked to a possible deal for data compression vendor Storwize, and EMC is planning on delivering compression for primary storage on its Clariion and Celerra platforms.
“We knew having IP in the deduplication space was going to be strategic for all Dell storage products going forward,” Roscoe said. “We believe we had to have the ability to build dedupe into our product set.”
Roscoe wouldn’t say how close Dell is to having dedupe in any storage products, but hinted that it’s not far off. “Let’s just say this isn’t a five-year project.”
Nimble Storage Thursday came out of stealth with a storage system that the startup’s executives said combines primary storage with deduplication for backup in the same device. It makes sense that Nimble would use dedupe, considering its founders were former Data Domain engineers.
But Frank Slootman, president of EMC’s data backup and recovery division and Data Domain’s CEO until EMC acquired the company last year, says there is no dedupe in Nimble’s storage. Slootman saw my story on SearchStorage about Nimble, and sent an email claiming “there is no dedupe in Nimble whatsoever. Read their white paper, or just ask them. We did. They do have local compression.”
I did ask Nimble CEO Varun Mehta when I spoke to him before their launch. He said his storage systems use inline compression for primary data and dedupe for backups. And according to Nimble’s press release on its product launch (emphasis added):
The CS-Series is based on the company’s patent-pending architecture, Cache Accelerated Sequential Layout (CASL™), which enables fast inline data compression, intelligent data optimization leveraging flash memory and high-capacity disk, instant deduped backups, and WAN efficient replication – all in a single device. CASL allows organizations to reduce their capital expenditures for storage and backup by at least 60 percent, while eliminating the need for separate, disk-based backup.
And a data sheet on the Nimble web site states:
Nimble slashes IT costs by converging compressed primary storage, deduped backup storage, and disaster recovery into one solution.
Slootman is correct about the whitepaper, though. A paper called “A New Approach to Storage and Backup” on the Nimble site does not say it uses deduplication. It claims “Nimble Storage CASL provides in-line compression on all data” and in a section on its backup technology says “CASL enables instant, application-consistent backups on the same array with very efficient (up to 20x) backup capacity optimization.”
Capacity optimization could be dedupe or compression. But nowhere in the 15-page whitepaper does Nimble claim to dedupe backup data.
While Nimble execs said in press interviews that they dedupe, they had a different message at a blogger TechField Day in Seattle where the startup officially launched Thursday. Nimble presenters did not mention deduplication at the blogger event.
I asked Nimble for clarification about its mixed marketing, and its VP of marketing Dan Leary replied via email:
“Sorry if there was any confusion regarding deduplication. Nimble does not deduplicate in the Data Domain sense, where all duplicate blocks are eliminated using a content-based signature. Our snapshot-based block sharing eliminates duplicate blocks across backups like deduplication systems. Nimble compresses, but does not deduplicate, within a primary storage volume. However, we offer better space savings compared with any secondary storage. Secondary storage systems require a baseline copy of the original data to get started. Because converged storage doesn’t require a baseline full backup, Nimble provides even better capacity optimization than secondary storage. Look for an upcoming blog from our CTO who will cover this topic in more detail.”
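The distinction Leary draws can be made concrete. Content-signature dedupe hashes every block and collapses duplicates wherever they occur, including within a single volume; snapshot-based block sharing only avoids re-storing blocks that are unchanged since the previous snapshot. A toy sketch of the two approaches (function names are mine, not either vendor's):

```python
import hashlib

def content_dedupe(backups):
    # Data Domain-style dedupe: every block is fingerprinted, and any block
    # whose signature has been seen before is stored only once, regardless
    # of which backup or volume it came from.
    store = {}
    for backup in backups:
        for block in backup:
            sig = hashlib.sha256(block).hexdigest()
            store.setdefault(sig, block)  # duplicates share one copy
    return store

def snapshot_sharing(backups):
    # Snapshot-based block sharing: each backup records only blocks changed
    # since the previous snapshot; unchanged blocks are shared by reference.
    # No signatures are computed, so duplicates *within* a volume remain.
    store = []
    prev = None
    for backup in backups:
        for i, block in enumerate(backup):
            if prev is None or i >= len(prev) or prev[i] != block:
                store.append(block)
        prev = backup
    return store

b1 = [b"aaaa", b"aaaa", b"cccc"]   # volume with an internal duplicate
b2 = [b"aaaa", b"aaaa", b"XXXX"]   # next backup: one block changed

print(len(content_dedupe([b1, b2])))   # 3 -- catches the intra-volume dup
print(len(snapshot_sharing([b1, b2]))) # 4 -- stores it twice, shares the rest
```

Both approaches avoid re-storing unchanged data across backups; only the signature-based one eliminates duplicates inside a primary volume, which is exactly the sense in which Leary says Nimble "does not deduplicate."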
If Nimble can shrink data enough to make backups and replication for DR more efficient without taking much of a performance hit while compressing, it may not make much of a difference how it’s doing it. Nimble beta tester Dave Conde, IT director of eMeter, says he’s found performance outstanding and he’s getting a reduction in data, although he hasn’t measured the actual rate.
But if Nimble is deduping, EMC execs probably want to know just how close the startup’s dedupe technology is to the dedupe it paid $2.1 billion for when it acquired Data Domain.
In a follow-up email, Slootman attributed Nimble’s mixed message to “a disconnect with marketing. They probably mean like NetApp that their snapshots use block differentials. They should not be using the term [deduplication] so indiscriminately.”
After a couple of strong quarters with its Simpana backup and storage management software fueled by data deduplication, CommVault sales tumbled last quarter.
CommVault today gave preliminary results of approximately $66.3 million in revenue for last quarter, below the $71.7 million financial analysts expected. The new figure would be about a 10% increase from a year ago and a 10% decrease from the previous quarter.
“We had a big miss here,” CommVault CEO Bob Hammer said on a conference call to discuss the results. Hammer called the results “very disappointing,” “surprising” and “unacceptable” and blamed the problems mainly on a restructured sales force.
“This was not the result of losing deals to a competitor,” he said.
Hammer said the reason for restructuring was to concentrate on more enterprise deals. He said after a strong previous quarter, the company overestimated its ability to close deals last quarter.
“We underestimated the distraction to our sales force and the ability to close forecasted deals by the end of the quarter,” he said. “To put it bluntly, we could’ve managed these processes more effectively.”
Still, he said he did not regret making the changes and he expects the “vast majority” of deals that slipped will close this quarter.
Hammer said market conditions in the U.K. and Europe also added to the problems. He said it is taking longer for companies to make buying decisions in those areas, resulting in an “unprecedentedly low level” of close rates. Government sales were also lower than he expected.
Hammer said he still expects CommVault’s revenue for the year to grow in the double digits, which would likely require strong sales this quarter. He said many deals that slipped past the end of last quarter have closed and CommVault is off to a good start for this quarter.
Hammer said he took a close look at the deals that did not close, and was confident those customers did not buy a competitor’s product. When asked about any changes in the competitive landscape, he said Symantec was weaker and EMC stronger, although he maintains that CommVault partner Dell’s decision to OEM EMC’s Data Domain deduplication target “wasn’t a major issue.” When asked about EMC’s products that compete with CommVault’s Simpana, Hammer said, “Legato [backup software] is still a relatively weak product. They’re doing well with Data Domain and OK with Avamar.”
Coraid has suspended its recently launched EtherDrive Z-Series NAS appliance after receiving a legal threat from NetApp, which claims the Z-Series infringes on NetApp’s ZFS patents.
In a letter to customers notifying them of the situation, Coraid CEO Kevin Brown said he hopes to continue selling the ZFS-based NAS appliance after a long-standing legal dispute between NetApp and Sun (now Oracle) gets settled.
“We hope to reinstate our Z-Series offering in the coming months,” Brown wrote to customers, after noting that the ZFS file system has been downloaded nearly one million times by customers and vendors.
“We made the decision to suspend shipment after receiving a legal threat letter from NetApp Inc., suggesting that the open source ZFS file system planned for inclusion with our EtherDrive Z-Series infringes NetApp patents.”
NetApp filed a lawsuit against ZFS creator Sun in 2007 claiming patent infringement and Sun promptly countersued. The suits were still pending when Oracle acquired Sun, and now Oracle and NetApp are attempting to settle out of court.
Coraid launched its Z-Series May 19, adding the NAS device to its ATA over Ethernet (AoE) SAN platform. Brown said Coraid is still selling the SAN systems, which are not affected by the NetApp patent charge.
In his letter to customers, Brown included a letter dated May 26 that he received from Edward Reines, a patent litigation attorney from Weil, Gotshal and Manges LLP who represents NetApp in its ZFS litigation. Reines’ letter read in part: “Coraid must cease infringement of NetApp’s patents and we reserve our rights to seek all appropriate remedies for any infringement.”
The letter to Brown points out that Coraid uses the term “unified storage” to describe the Z-Series, and NetApp’s patents involved in the litigation with Sun “cover a host of features, including unified storage …”
Brown, who served as VP for NetApp’s Decru security platform for 18 months before joining Coraid, apparently took the NetApp threat seriously.
His letter to customers didn’t say how many customers, if any, have purchased the Z-Series, only that Coraid has received “dozens of customer inquiries.”
Compellent, which is more competitive with NetApp than Coraid is, launched its zNAS system based on ZFS in April. If Compellent has been threatened by NetApp, it hasn’t said so publicly.
Last week’s Congressional hearing on cloud computing served as a condensed version of the cloud debate that has been ongoing for about two years now. Congress heard definitions of different types of clouds, government representatives voiced concerns over security and other issues associated with cloud computing, and vendors extolled the cloud’s virtues while promising their technology can overcome all of its hurdles.
But the hearing made it clear that the federal government — which is forecast to spend about $76 billion on IT this year — is serious about the cloud. Government agencies see the cloud as a method of data center consolidation. According to federal CIO Vivek Kundra, the U.S. government has nearly tripled the number of data centers from 432 to 1,100 over the past decade while many corporations have reduced their data centers.
There wasn’t a lot of specific talk about storage during the hearing, although Nick Combs, CTO of EMC’s Federal division, was part of the vendor panel.
“There’s a whole lot of concern about the number of data centers out there in the federal government today, and what’s the right number,” Combs said in an interview after the hearing.
Much of Combs’ testimony focused on security, which EMC delivers through its RSA division. The security talk is also where the various types of clouds came in.
“There were lots of questions around security in the cloud and where clouds wouldn’t be appropriate for government information,” Combs said. “We talked about the multitenant cloud – are there sufficient protections to put information in the cloud and what level of risk are we talking? How do we provide compliance and meet government regulations? Only public-facing information should be placed on public clouds. Information that is sensitive in nature needs to be protected in more private-type clouds. That seemed to resonate pretty well.”
As part of his prepared remarks, Combs offered the NIST definitions of four types of clouds:
– Private Cloud is infrastructure deployed and operated exclusively for an organization or enterprise. It may be managed by the organization or by a third party, either on or off premise.
– Community Cloud is infrastructure shared by multiple organizations with similar missions, requirements, security concerns, etc. It also may be managed by the organizations or by a third party on or off premise.
– Public Cloud is infrastructure made available to the general public. It is owned and operated by an organization selling cloud services.
– Hybrid Cloud is infrastructure consisting of two or more clouds (private, community, or public) that remain unique entities but that are tied together by standardized or proprietary technology that enables data and application portability.
Oracle upgraded its flagship disk storage platform this week, adding Fibre Channel host connectivity to the Sun Storage 7000 multiprotocol series while doubling down on its SAS disk interface support.
Sun originally launched the 7000 as a ZFS-based Ethernet platform, mainly focused on handling file data with iSCSI thrown in for block storage. That was in late 2008, more than a year before Oracle closed its acquisition of Sun. But Oracle’s senior director of storage products Jason Schaffer says customers wanted Fibre Channel to make the 7000 better suited for primary storage.
“When we first launched the 7000, we had a strong lineup of Fibre Channel with our 6000 series and the gap in our portfolio was NAS,” he said. “Early adopters used [the 7000] mainly for disk-to-disk-to-tape backup. Over time people started to trust it in other environments, like for virtual servers, and it was being brought in more as primary storage for consolidated workloads.”
Schaffer said current customers can download software for Fibre Channel support. He says about 15% of 7000 customers already downloaded software to use it for Fibre Channel over the past few months, even before Oracle officially announced FC support.
The 7000 also features built-in data deduplication, which Sun added to ZFS late last year. Another big part of the 7000 upgrade is support for 2 TB SAS drives, doubling the total capacity of the system to 576 TB. Schaffer says he sees no need for FC drives because the 7000 supports 6 Gbps SAS, solid state drives (SSDs) and SATA – especially with ZFS’ ability to use SSDs as high-speed disk cache.
“DRAM, flash and SAS drives are more cost efficient than 15,000 RPM Fibre Channel drives,” he said.
Oracle severed its OEM deal with Hitachi Data Systems to sell the Sun StorageTek 9000 enterprise SAN systems earlier this year, choosing to concentrate on the 7000 platform. But Schaffer said Oracle also remains committed to the Sun StorageTek 6000 series of Fibre Channel arrays, which consist of LSI Corp. controllers and Sun management software. “We’re still supporting and growing the 6000 platform,” he said, “although the bulk of our engineering will be on the 7000 series going forward.”