The Justice Dept. today said EMC paid $87.5 million to settle a lawsuit charging the vendor with false pricing claims and with taking part in a kickback scheme involving consulting firms that do business with government agencies.
The Justice Dept. claims EMC committed fraud by inducing the General Services Administration (GSA) to enter a contract at inflated prices. The GSA purchases products for the federal government. The Justice Dept. said EMC claimed during contract negotiations that for each government order under the contract, the vendor would conduct a price comparison to ensure that the government received the lowest price provided to any of its commercial customers – claims EMC could not live up to because it could not make such price comparisons.
Under the kickback scheme detailed in the Justice Dept. press release, EMC paid consulting companies fees whenever the consultants recommended that a government agency buy EMC products. EMC is not alone here – the DOJ said it has settled with three other technology companies and other investigations are pending. It did not name the other vendors.
“Misrepresentations during contract negotiations and the payment of kickbacks or illegal inducements undermine the integrity of the government procurement process,” Tony West, Assistant Attorney General for the Civil Division of the Department of Justice, said in the Justice Dept. release. “The Justice Department is acting to ensure that government purchasers of commercial products can be assured that they are getting the prices they are entitled to.”
EMC denied any wrongdoing when the charges were first made public in March of 2009, and an EMC spokesman today emailed a statement to StorageSoup saying the vendor “has always denied these allegations and will continue to deny any liability arising from the allegations made in this case. We’re pleased that the expense, distraction and uncertainty of continued litigation are behind us.”
The EMC spokesman said some of the charges are almost 10 years old.
Saying it’s looking to appeal to larger shops with its online data backup service, Iron Mountain Digital released version 7.0 of its LiveVault SaaS product today with new support for multithreaded applications and larger data sets.
Previously, LiveVault’s “sweet spot” was protecting servers of up to 1 TB, according to Jackie Su, senior product marketing manager for Iron Mountain Digital. The new version protects up to 7 TB, thanks to beefier processors and memory in the LiveVault TurboRestore on-site appliance and to the Data Shuttle option becoming a built-in feature. Previously, if users wanted to transport large data sets on portable hard drives, it was done only on request in special circumstances. The new TurboRestore appliance can hold up to 24 TB of disk and has a 64-bit memory cache.
Iron Mountain claims it’s seen growing adoption of cloud data protection among midsized enterprises in its LiveVault customer base, and cites that shift as the reason for this release’s scalability updates. The company did not provide a specific number of midsized customers, a growth percentage compared with last year, or an average deal size, though chief marketing officer TM Ravi said deal sizes are growing, which “indicates we’re covering larger and larger environments.”
Online data backup so far has been among the most popular uses of cloud data storage, particularly among enterprise users, but according to Storage Magazine’s most recent storage purchasing survey, “it’s still more hype than happening”.
Hewlett-Packard Co. added another scale-out NAS system to its portfolio yesterday when it announced that DataDirect Networks’ (DDN) S2A9900 disk array will be bundled with the Lustre File System and resold by the Scalable Computing and Infrastructure (SCI) group within HP.
HP began collecting scale-out file systems when it acquired PolyServe in 2007, then saw some false starts with its ExDS9100 product for Web 2.0 and HPC use cases. HP continued its track record of acquiring its partners in the space by buying Ibrix last July. Yet HP still found a gap in its scale-out file system portfolio that DataDirect and Lustre fill with this agreement, according to Ed Turkel, manager of business development for SCI.
“Basically, both the X9000 [based on Ibrix] and [the new offering with] DDN are scale-out file systems sold as an appliance model,” Turkel said. But Lustre is geared more toward “the unique demands of HPC users” in which multiple servers in a cluster simultaneously read and write to a single file at the same time, requiring very high single file bandwidth. “The X9000 is more general purpose, with scalable aggregate bandwidth” rather than high single-file performance.
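The single-file bandwidth Turkel describes is what parallel file systems such as Lustre get from striping one file’s byte ranges across many storage targets, so every node in a cluster can read and write different pieces of the same file concurrently. A simplified sketch of round-robin striping (the stripe size and target count here are illustrative, not Lustre defaults):

```python
def stripe_map(file_size: int, stripe_size: int, num_targets: int):
    """Map each stripe of a file to a storage target, round-robin.

    Returns a list of (offset, length, target) tuples. Clients can then
    issue I/O for different stripes against different targets in parallel,
    so bandwidth to a single file scales with the number of targets.
    """
    layout = []
    offset = 0
    stripe_index = 0
    while offset < file_size:
        length = min(stripe_size, file_size - offset)
        layout.append((offset, length, stripe_index % num_targets))
        offset += length
        stripe_index += 1
    return layout

# A 10 MB file in 1 MB stripes across 4 targets: stripes 0, 4 and 8
# land on target 0, stripes 1, 5 and 9 on target 1, and so on.
layout = stripe_map(10 * 2**20, 2**20, 4)
```

A general-purpose scale-out NAS, by contrast, typically places whole files (or large extents) on individual nodes, which scales aggregate bandwidth across many files rather than bandwidth to any single file.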
DDN’s VP of marketing Jeff Denworth said the two vendors have “a handful” of joint customers already, but Denworth and Turkel both dismissed the idea that DDN could be HP’s next scale-out acquisition. “If I respond to that question in any fashion, I’m probably going to get my hand slapped, but it’s certainly not the purpose of this announcement,” Turkel said. However, this product will replace a previous offering HP launched in 2006, also based on Lustre, called the Scalable File Share (SFS).
DDN is now partnered for storage with every large HPC OEM vendor there is — previously it has announced reseller and OEM relationships with IBM, Dell and SGI. “This sounds similar to the arrangement that DDN has with IBM, Dell and SGI to provide a turnkey solution to certain niche customers, more likely aligned with the HP server group than the storage group,” wrote StorageIO founder and analyst Greg Schulz in an email to Storage Soup.
Amazon Web Services today added a new offering for its Simple Storage Service (S3) called Reduced Redundancy Storage (RRS). RRS offers users the ability to choose fewer “hops” of object replication among Amazon’s facilities for a lower cost per gigabyte. With RRS, objects would survive one complete data center failure, but wouldn’t be replicated enough times to survive two concurrent data center failures. It’s like RAID 6 vs. RAID 5 storage tiering writ large.
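The durability tradeoff comes down to copy count: surviving f simultaneous facility failures requires at least f + 1 copies of an object. A minimal sketch of that arithmetic (generic replication logic, not Amazon’s actual internal scheme):

```python
def min_replicas(failures_survived: int) -> int:
    """Minimum number of object copies so that at least one copy
    remains after the given number of facility failures."""
    return failures_survived + 1

def survives(replicas: int, failures: int) -> bool:
    """An object survives as long as the failures don't wipe out every copy."""
    return replicas > failures

# RRS-style storage: survives one data center failure, not two concurrent ones.
assert survives(min_replicas(1), 1) and not survives(min_replicas(1), 2)
# Standard-S3-style durability: survives two concurrent facility failures.
assert survives(min_replicas(2), 2)
```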
Some users like the CDN capabilities Amazon offers with S3, and Amazon officials say those capabilities will still be offered with RRS, claiming no difference in performance between RRS and S3. However, the cloud data storage vendors that have introduced gateway and caching devices for S3 will have to update their support to offer users the option of RRS on the back end. I’m sure we can anticipate a flurry of announcements from companies such as Nasuni, StorSimple and TwinStrata in the coming months (ETA: at least where Nasuni is concerned, I stand corrected…).
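For a gateway vendor, offering RRS on the back end largely means tagging each upload with the reduced-redundancy storage class; in AWS’s S3 API that is the `REDUCED_REDUNDANCY` value of the storage-class parameter on a put. The helper below is a hypothetical sketch that only assembles the request parameters (so it runs without live credentials); the function name and policy knob are illustrative:

```python
def build_put_request(bucket: str, key: str, body: bytes,
                      reduced_redundancy: bool = False) -> dict:
    """Assemble keyword arguments for an S3 put_object-style call.

    When reduced_redundancy is True the object is tagged with the
    REDUCED_REDUNDANCY storage class; otherwise S3's default
    (standard) storage class applies.
    """
    params = {"Bucket": bucket, "Key": key, "Body": body}
    if reduced_redundancy:
        params["StorageClass"] = "REDUCED_REDUNDANCY"
    return params

# A gateway could expose this as a per-volume or per-object policy knob,
# then pass the result to its S3 client, e.g. s3_client.put_object(**req).
req = build_put_request("backups", "vm42.img", b"...", reduced_redundancy=True)
```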
Ten cents per GB is already raising eyebrows, but that’s actually just the starting price for RRS. According to an emailed statement from Amazon S3 general manager Alyssa Henry, “Base pricing for Reduced Redundancy Storage covers the first 50 TB of RRS storage in a month. This tier is charged at a price of $0.10 per GB per month. As customers increase their storage, the price declines to as low as $0.037 per GB per month for customers with more than 5 petabytes of RRS storage.”
Henry was mum on whether Amazon has any more gradations of storage tiering up its sleeve, saying, “The RRS offering was the result of feedback from customers who, for their particular use cases, did not require the level of durability that Amazon S3 provides today. We’ll continue to listen to feedback from our customers on what’s important to them in terms of future functionality but have no other announcements today.”
S3 customers we’ve gotten in touch with so far seem intrigued by the new offering. Stay tuned for a followup in the coming days about reaction to this announcement in the market.
Coraid today added a ZFS-based NAS to its platform of Ethernet SANs.
Coraid’s base product is a non-iSCSI IP SAN called EtherDrive based on ATA over Ethernet (AOE), but the vendor has been looking to expand its product line since closing a $10 million funding round and hiring Kevin Brown as CEO in January.
The new EtherDrive Z-Series NAS includes two models. The Z2000 has four cores, 32 GB of RAM and either eight Gigabit Ethernet or four 10-Gigabit Ethernet ports. The Z3000 has eight cores, 48 GB of RAM, level 2 SSD cache, and either eight GigE or four 10-GigE ports.
Coraid relies on ZFS for features such as inline deduplication, replication, unlimited snapshots and automatic tiering.
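Inline deduplication, the headline feature here, works by hashing each block as it is written and storing only one physical copy per unique hash, with a reference count tracking the logical writes. A toy illustration of the idea (not ZFS’s actual on-disk implementation, which keeps its dedup table at the block layer):

```python
import hashlib

class DedupStore:
    """Toy inline deduplication: one physical copy per unique block."""

    def __init__(self):
        self.blocks = {}    # digest -> block data (physical storage)
        self.refcount = {}  # digest -> number of logical references

    def write(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:  # only new unique blocks consume space
            self.blocks[digest] = data
        self.refcount[digest] = self.refcount.get(digest, 0) + 1
        return digest                  # logical pointer to the block

store = DedupStore()
for block in [b"aaaa", b"bbbb", b"aaaa", b"aaaa"]:
    store.write(block)
# Four logical writes, but only two unique blocks stored physically.
```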
The Z-Series replaces Coraid’s Linux-based CLN NAS platform. “ZFS is a better fit for our ECODrive systems,” said Carl Wright, Coraid’s VP of sales and product management. “We’ve had a lot of requests from our customers for open-source ZFS systems.”
Wright described the Z-Series as a scale-out architecture because “as customers need capacity, they add EtherDrive data blocks on back.” The EtherDrive SAN and NAS systems can be managed from the same interface, he said.
Wright says the Z-Series uses the same Intel X25-E SSDs as in the EtherDrive SRX SAN platform it launched in March, but the SSDs serve as cache only for the NAS appliance (read cache is standard, and write cache is optional).
Compellent last month launched a ZFS-based NAS option to its Storage Center SAN system. Wright says the big difference between the Coraid and Compellent NAS offerings is price. He says Coraid’s Z series is priced at about $1,000 per TB while Compellent’s starting price is $84,000 for 8.7 TB for new customers and $36,000 for its current SAN customers.
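Working out Wright’s comparison on a per-terabyte basis, using only the figures quoted above:

```python
coraid_per_tb = 1_000.0             # Coraid Z-Series, quoted price per TB
compellent_new = 84_000 / 8.7       # Compellent NAS, new customers, $ per TB
compellent_existing = 36_000 / 8.7  # Compellent NAS, existing SAN customers, $ per TB

print(f"Compellent (new customers):      ${compellent_new:,.0f}/TB")
print(f"Compellent (existing customers): ${compellent_existing:,.0f}/TB")
```

By those numbers, Compellent’s entry price works out to roughly $9,700 per TB for new customers and about $4,100 per TB for existing ones, against Coraid’s claimed $1,000 per TB.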
Double-Take Software has agreed to be acquired by Vision Solutions in a deal valued at $242 million, pending approval of the company’s shareholders. Vision Solutions is a portfolio company of Thoma Bravo, LLC, a private equity firm.
Last month Double-Take disclosed it had received indications of acquisition interest, but did not name any suitors. After issuing a press release saying “The Double-Take board of directors unanimously approved the agreement and has recommended the approval of the transaction to Double-Take’s stockholders,” Double-Take officials declined further comment today.
Vision Solutions specializes in disaster recovery and HA software for IBM System i and AIX servers. It previously had an OEM agreement to rebrand Double-Take’s software when it needed to support x86 Windows or Linux servers; that relationship was later changed to a reseller agreement that remains in place.
Industry observers see this deal as an exit strategy for Double-Take’s board, after the first calendar quarter of 2010 finished “weaker than expected” for Double-Take. Said one source familiar with the company and the deal, speaking on condition of anonymity: “Double-Take’s magic sauce was that it could make your Exchange on a server in, say, Chicago run just like it would in Boston. But VMware came along and said, ‘Stick it in a VM, it’s the same thing, and you don’t have to install agents or worry about third-party software.’
“The folks at Double-Take saw it coming, but they couldn’t jump out of the way of that train fast enough.”
It wasn’t just VMware eating away at Double-Take’s market; other backup and DR tools also began to crop up and undercut Double-Take’s offerings on price, our source said.
With the deal not expected to close until after July, it’s officially “business as usual” for Double-Take customers. After the deal closes, Vision Solutions is expected to continue supporting Double-Take’s existing customers. “You have $90 million cash on hand and $40 million a year in software maintenance business — Thoma Bravo has every reason to keep existing customers happy,” said the insider.
Should’ve checked the phone again before hitting ‘publish’ on my EMC World Reporter’s Notebook — there were a few more shots from the show last week I’d overlooked.
When EMC’s Data Domain took the first step toward global deduplication with its Global Deduplication Array (GDA) last month, it left Quantum as the only major disk deduplication backup target vendor lacking the ability to cluster nodes. And Quantum isn’t saying much about when, or if, that capability is coming.
During Quantum’s earnings call Thursday evening, CEO Rick Belluzzo said there would be further enhancements to the company’s deduplication in the wake of a rollout of midrange and SMB DXi disk systems over the past six months. When I spoke to Belluzzo after the call he stopped short of addressing global dedupe, except to say Quantum won’t be following Data Domain’s path. Quantum apparently sees its StorNext file system as a piece of its strategy to scale its backup targets.
“We will be saying more about our roadmap over time,” Belluzzo said. “We also have the StorNext platform to build from. I would expect our strategy to differ from Data Domain in terms of their approach to global deduplication. We have a scalable platform in StorNext, and it’s a better platform to deliver a scalable solution. And Data Domain is still pretty limited [for global deduplication].”
Data Domain’s GDA clusters two nodes at this stage, and requires Symantec’s OpenStorage (OST) API and either Symantec NetBackup or Backup Exec to control the placement of data across multiple controllers.
Analyst Greg Schulz of StorageIO says customers who need global dedupe are mostly large enterprises that may already be using Quantum tape libraries and may be willing to wait for it.
“Quantum will have to evolve to global dedupe, it’s part of scaling,” he said. “If you have hundreds of terabytes to petabytes of data, you need more robust deduplication going forward. Quantum has more time because people in that category are still using tape. Quantum has to get there, but it can walk to that market and get it right opposed to others who have to sprint there.”
With or without global dedupe, Belluzzo says he expects Quantum to make a big push in disk backup this year, and it has a long way to go to make up ground on Data Domain. Quantum reported $22.9 million of revenue from its disk (DXi dedupe platform) and software (StorNext) last quarter, down from $24.2 million in the same quarter last year.
“It is still very early in the evolution of this technology,” he said. “We believe the deduplication market is growing rapidly. There have been reports on how many people have implemented it and it’s a very low amount. So we believe that the market definitely supports pretty rapid growth, and to be frank, our base is pretty small and we really feel like that we have to be focused on that kind of rapid growth.”
After losing its OEM relationship with EMC following EMC’s acquisition of Data Domain last summer, Quantum has been building up its own channel for branded products. “It’s been a year of working through transitions to strengthen the company,” Belluzzo said. Now he says Quantum is prepared to grow.
Quantum executives say the new dedupe OEM partner they announced in January – believed to be Fujitsu – is shipping product with DXi software, but no new OEM deals are imminent as the vendor concentrates on its branded products.