Editor’s note: After this blog was initially published, Atrato alerted us to a misunderstanding about the GA date of certain features. The corrected text is below.
Self-healing array maker Atrato Inc. is finally making its support for solid-state drives and automated tiered storage generally available, a year after it first promised them.
Atrato issued a press release Wednesday announcing general availability of new SSD units and automated tiered storage software, after going back to the drawing board a few times and a series of incremental releases following last year’s similar announcement, according to vice president of marketing Bill Mottram.
Atrato originally aimed to release SSD support last May, “but there was more complexity involved in the product than we anticipated,” Mottram said. Atrato’s hybrid VLUN, which spans solid-state and spinning disks, required some performance tweaks, including a new feature being released this week called “I/O reforming,” which takes blocks of multiple sizes and bundles them into a fixed block size of 256 KB. Mottram said this speeds up moves between SSD and HDD tiers.
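To make the concept concrete, here is a minimal sketch of the general idea behind coalescing variably sized I/O into fixed 256 KB blocks. This is an illustration of the technique as described, not Atrato’s actual implementation; the function name and padding behavior are assumptions.

```python
BLOCK_SIZE = 256 * 1024  # fixed 256 KB block size, per the description above


def reform_io(chunks):
    """Coalesce variably sized byte chunks into fixed-size blocks.

    Incoming writes of mixed sizes are buffered and emitted as uniform
    BLOCK_SIZE units; the final partial block is zero-padded so every
    block handed to the tiering engine is exactly BLOCK_SIZE bytes.
    """
    buffer = bytearray()
    blocks = []
    for chunk in chunks:
        buffer.extend(chunk)
        # Emit a full block every time enough data has accumulated.
        while len(buffer) >= BLOCK_SIZE:
            blocks.append(bytes(buffer[:BLOCK_SIZE]))
            del buffer[:BLOCK_SIZE]
    if buffer:  # pad the tail so the last block is also fixed-size
        blocks.append(bytes(buffer) + b"\x00" * (BLOCK_SIZE - len(buffer)))
    return blocks
```

Moving data between tiers in uniform units avoids per-request bookkeeping for odd-sized blocks, which is one plausible reason a scheme like this would speed up SSD-to-HDD migrations.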
Atrato’s arrays are constructed from enclosures stacked into what it calls a Self-maintaining Array of Identical Disks (SAID). These enclosures, which previously held 10 disks each, are now available with room for 24 drives. Atrato’s SSD enclosures likewise hold 10 or 24 drives, and the company now offers multiple configuration options depending on the level of performance or capacity needed.
The updated Velocity 1000 will also support up to four SSD enclosures, whereas last year support was announced for one.
Last year Atrato said it would support Intel’s X25-E (SLC) and X25-M (MLC) drives, but now says it will support only the SLC version, as well as 150 GB drives from Pliant on customer request.
A redesign of the V1000 backplane has also boosted Atrato’s performance benchmark claims for the array from 16,000 IOPS to 24,000 IOPS. “It has to do with how we move data within the SAID,” Mottram said, declining to disclose further technical detail.
One feature Atrato customers have asked for, a graphical user interface (GUI) to control the box, remains a roadmap item but should be released toward the end of next month, Mottram said. Until then, the V1000 will continue to be managed through a command-line interface (CLI).
Dell is trying to stay vendor neutral in its data deduplication strategy, which isn’t always easy to do when partnering with data deduplication backup rivals EMC, CommVault and Symantec.
The acrimony between CommVault and Data Domain began when CommVault added its data deduplication to its Simpana 8 data backup software, and the former partners quickly became competitors. Those fires were stoked by statements from former Data Domain CEO and current EMC Corp. data backup and recovery division president Frank Slootman following EMC’s acquisition of Data Domain last year. CommVault responded with some strong statements of its own.
Now, both are vying to put their partnerships with Dell front and center. CommVault has added snapshot integration with Dell’s EqualLogic iSCSI arrays similar to that it first announced for NetApp FAS filers and EMC Clariion, Celerra and Symmetrix disk arrays last year. The feature, called SnapBackup, allows array-based snapshots to be managed according to Simpana policies and through Simpana’s catalog. SnapBackup licenses are $4,000 per protected application server; users with the software already in place will be able to add EqualLogic support at no additional cost.
Dell also disclosed today that it will sell Data Domain data deduplication systems as the Dell/EMC DD140, DD610 and DD630 models. The DD140 is for remote offices, while the DD600 models are for midrange and small enterprises. Dell also sells a DL2100 Powered by CommVault Simpana 8 and another powered by Symantec’s Backup Exec 2010 with deduplication.
For its part, Dell is attempting to position the 2100 and Data Domain boxes as targeting different audiences, but not everyone sees it that way — Forrester Research analyst Andrew Reichman described what he saw as “an emerging battle because the two impinge on each other’s value proposition.”
CommVault officials, meanwhile, point out that their product already competes with others in the Dell portfolio, as well as with Dell itself in some cases. “Dell sells a lot of products and we compete with a lot of those products inside Dell distribution, and we continue to win,” said CommVault vice president of marketing Dave West. “Our business has grown through changes and will continue to grow — we’re not concerned about it.”
Added CommVault director of corporate communications Dani Kenison, “We announced an expansion of our relationship with Dell [through the DL2100] way in advance of Data Domain. Being ahead of the game speaks volumes about the strength of our relationship.”
CommVault bills its DL2000 series joint products with Dell as “a centralized data protection engine,” as CommVault senior product manager Don Foster put it. “Customers truly have a platform instead of just another appliance put into the environment,” he said.
Emulex says two men were convicted in China for selling counterfeit versions of its Fibre Channel HBAs. According to the Emulex statement:
Following a complaint by Emulex Corporation (NYSE:ELX), counterfeiters of Emulex Fibre Channel Host Bus Adapters (HBAs) were recently convicted by a criminal court in the city of Shenzhen.
The condemned include the Legal Representative and general manager of Shenzhen Xinfengze Electronics Co. Ltd. (Xinfengze), Mr. Yang Jiaquan and his assistant Mr. Liu Yibin. Both were sentenced to custodial imprisonment and have received a fine. Both defendants were engaged in the counterfeiting of various brands, including Emulex. The defendants had been refurbishing recycled products and attaching forged trademarks to them.
We are encouraged by the outcome of this lawsuit and will continue to work with law enforcement agencies in our crack down of the counterfeit market.
I’ve asked for more details about the nature of this “crackdown,” but an Emulex spokesperson said the company will not have any more to say about it.
Enterprise Strategy Group president Steve Duplessie blogged Wednesday about information he’d received that Symantec Corp. has laid off 600 engineers who worked on VxFS (Veritas File System), VxVM (Veritas Volume Manager) and the VCS One server clustering line of products. Symantec today declined to comment on what it terms “rumors and speculation,” but industry sources have confirmed that number and say development of these products has been outsourced overseas.
There has been speculation among storage industry watchers on Twitter that this is a move toward Symantec spinning off all or part of the Veritas business, but sources close to the company say it’s unlikely. “It smells like a move of downsizing to milk the business rather than spin it out,” said one industry veteran who requested anonymity.
This seems to match Duplessie’s information:
This is a signal that those markets, now 20 plus years old, are finally in maintenance mode and those services are to most likely be off-shored so the company can milk whatever profits are left from the dwindling install base. While acts like this are always sad, it was inevitable.
In the meantime, most sources expect Veritas’s backup products, NetBackup and Backup Exec, as well as its data archiving product, Enterprise Vault, to stay as they are.
It’s unclear how big an impact this move will have on Symantec customers — Duplessie also points out that these products are growing older and the markets they served have largely moved on. Symantec has already unveiled new file systems that will replace the Veritas products.
In my time in the industry, I’ve noticed there are certain companies whose “diasporas” have continued to steer enterprise technology long after they’ve ceased to be dominant players in the market. The two that most often come up in my experience are Digital Equipment Corporation (DEC) and Storage Networks. Now, you may be able to add Veritas to that list.
EMC Corp. is taking two founders of a consulting company to court, claiming they violated an agreement not to compete with EMC after they sold their BusinessEdge company to the vendor for a reported $200 million in 2007.
The lawsuit was filed in U.S. District Court in Massachusetts earlier this week after the founders of BusinessEdge, Emanuel Arturi and Francis Casagrande, started a new company, Knowledgent. According to a story on Boston.com, “EMC alleges that Arturi and Casagrande were ‘amply’ compensated to refrain from competing against EMC while using confidential EMC information under conditions related to the BusinessEdge sale.”
BusinessEdge was acquired for its vertical market expertise in industries such as healthcare, financial services and life sciences, and has since been folded into EMC’s client consulting group. Meanwhile, that consulting business is the subject of renewed marketing focus from EMC — the company emphasized the availability of Microsoft consulting services in its recent announcement of support for Exchange 2010, and execs have indicated the company is looking to light a fire under the consulting business.
“Consulting is one thing that remains a well kept secret at EMC,” said Bob Madaio, director of the Microsoft global alliance at EMC. “We also want to remind the market about the availability of Microsoft-specific consultants.” These consultants became part of EMC through the acquisition of Microsoft gold certified partners in recent years, including Interlink and Internosis.
The major story with Sepaton’s launch of a new midrange data deduplication system and support for EMC Corp.’s NetWorker last week was the shifting competition in the data deduplication market: Data Domain, which made a name for itself catering to midmarket customers, is pushing upmarket now that it’s part of EMC, and Sepaton, which has previously emphasized the enterprise, is gunning for Data Domain’s midmarket turf with the new S2100-MS2.
Along with the new configuration, though, Sepaton also made some modifications to its data deduplication algorithms that have not been as widely discussed in the industry. I followed up with some Sepaton executives to get more details on these updates, and thought they’d be worth throwing into the mix here.
First, a couple of refreshers on Sepaton’s approach to data deduplication. It uses delta differencing to identify duplicates between two sets of backup data, along with content-aware integration with the major data backup applications: Hewlett-Packard Co. (HP)’s Data Protector, Symantec Corp.’s NetBackup, IBM’s Tivoli Storage Manager (TSM) and EMC Corp.’s NetWorker so far. This allows Sepaton’s data deduplication engine to identify objects within the backup stream that may be redundant, such as Oracle databases and Word documents. Sepaton’s deduplication also occurs post-process, rather than as data is ingested into the system, and uses forward referencing, a process that keeps the latest copy of data intact and eliminates duplicates from previous versions, as opposed to eliminating duplicate data from the newest version upon ingestion.
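A toy sketch may help illustrate forward referencing as described above: the newest backup is kept intact, and duplicate objects in older backups are replaced with references to the newest copy. The function and data shapes here are illustrative assumptions, not Sepaton’s API.

```python
def forward_reference(backups):
    """Deduplicate backup generations with forward referencing.

    `backups` is a list of {object_name: content} dicts, oldest first.
    Returns a store in which only the newest generation keeps full
    copies; unchanged objects in older generations become references.
    """
    newest = backups[-1]
    store = [dict(newest)]  # the newest generation stays intact
    for backup in reversed(backups[:-1]):  # walk older generations
        deduped = {}
        for name, content in backup.items():
            if newest.get(name) == content:
                deduped[name] = ("ref", name)  # point at the newest copy
            else:
                deduped[name] = content  # changed data kept verbatim
        store.insert(0, deduped)
    return store
```

The practical consequence of this ordering is that restoring the most recent backup, the common case, never has to chase references, while restores of older generations do.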
There are two tradeoffs to doing delta differencing the way Sepaton does it. First, some applications, particularly those that may make large insertions into a database table as records are modified, such as SAP, don’t necessarily lend themselves well to object comparisons. Second, it’s a challenge to get the most data reduction out of a delta comparison between incremental backups, which by definition don’t contain many duplicate objects.
With version 5.3 of its DeltaStor data deduplication software, Sepaton has updated its algorithms to better support incrementals using a new metadata “scraper” to give the data deduplication engine “hints” about what blocks within incremental backups can be deduplicated. These “hints” were also developed based on field-collected customer data and new heuristics added to the DeltaStor algorithm, according to executive vice president of engineering Fidelma Russo.
This release also adds support for additional application types such as SAP. That involved adding a new process to Sepaton’s deduplication that lets it compare incremental backups to one another more quickly by generating a lightweight “fingerprint” to suggest which portions of an incremental backup might contain duplicate blocks.
This sounds somewhat like the hashing approach used by other data deduplication vendors, including Data Domain, but rather than performing a hash on all data coming into the system as a primary means of locating duplicates, Sepaton uses this “fingerprinting” process to give the delta differencer “hints” about where duplicates might be located. “Its only goal in life is to sort through incrementals for the delta differencer — inline deduplication products use hashing to compare across all data — our process identifies only the probability of common data,” said Dennis Rowland, director of advanced technology for Sepaton. Rowland said Sepaton has a patent pending on its new process.
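A loose way to picture the “fingerprint as hint” idea is a sampled set of chunk hashes: instead of hashing everything, hash only a fraction of the chunks and use the overlap between two fingerprints as a probability signal for the delta differencer. This is an analogy to the behavior described, not Sepaton’s patented process; chunk size and sampling rate are arbitrary assumptions.

```python
import hashlib

CHUNK = 4096  # assumed fixed chunk size for sampling


def fingerprint(data, sample_every=8):
    """Hash every Nth fixed-size chunk to build a cheap fingerprint."""
    hashes = set()
    for i, offset in enumerate(range(0, len(data), CHUNK)):
        if i % sample_every == 0:
            hashes.add(hashlib.sha1(data[offset:offset + CHUNK]).hexdigest())
    return hashes


def likely_overlap(fp_a, fp_b):
    """Score how likely two backups share data, from 0.0 to 1.0.

    A high score would hint the delta differencer to compare the two
    backups closely; a low score lets it skip an expensive comparison.
    """
    if not fp_a or not fp_b:
        return 0.0
    return len(fp_a & fp_b) / min(len(fp_a), len(fp_b))
```

The key contrast with inline hash-based deduplication is that nothing here locates duplicates definitively; the sampled fingerprint only narrows down where the heavier delta comparison should look.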
Backup expert W. Curtis Preston said he doesn’t think this latest update of Sepaton’s and its ramifications — a new alternative to deduplicating SAP, for example, which is a notoriously heavy application when it comes to generating backup data — have been well understood by the market yet. “I think it’s a very important release for [Sepaton],” he said.
EMC Corp. Monday sent out a release saying its Celerra multiprotocol storage systems now officially have a plug-in available for VMware Inc.’s vCenter management console and Site Recovery Manager (SRM) failover and failback.
At first, this seemed like a ho-hum announcement. Plug-ins that integrate with vCenter are part of the new standard APIs VMware is making available for partners — everyone and their brother seemingly has something along these lines already (including EMC, which introduced a management plug-in for its Clariion and Symmetrix disk arrays last year). Ditto integration with Site Recovery Manager — FalconStor Software, for example, launched an SRM plug-in for its Network Storage Server (NSS) last year around the time of VMworld in August.
But EMC director of unified storage marketing Brad Bunce claimed EMC’s integration of automated failback for VMware environments running on NFS is unique. “The difference [between the Celerra plug-in and competitors] is that failback is automated without requiring advanced scripting in the NFS environment,” he said.
At least one competitor acknowledges that NFS support with automated SRM failback is not something it has yet offered — “As an NFS mounted device, Celerra may be the only product with auto failover for SRM,” wrote FalconStor director of marketing Fadi Albatal in an email to Storage Soup. However, he added, “from our side, we can say, welcome to the party — you’re six months behind. FalconStor’s SAN virtualization and DR solutions have a block storage service for SRM, and our plug-in has an auto-discovery feature that eliminates the need for scripting and ensures full integration with VMware SRM.”
EMC’s vCenter plug-in for management also includes automated provisioning features for VMware in Celerra NFS environments, including automatic mounts to ESX servers and clusters, virtual machine cloning, and compression.
We are hearing from sources that NAS caching and monitoring startup Storspeed is already closing its doors, just six months after coming out of stealth.
Reached for comment today, Storspeed founder and vice president of business development Greg Dahl declined comment but said the company may make a public statement next week.
It’s still unclear what the problems were that would have led to the company’s quick demise, but our sources say the appliance didn’t work correctly. “They were in business since 2007 and did not have a stable, working product,” said one source familiar with the company. “The investors lost confidence in their ability to do a re-work.” This source also told Storage Soup the company has already laid off employees. Storspeed did not have paying customers when it announced its SP5000 appliance in October.
Update: An anonymous tipster sent us this hint after this blog was originally published: “Storspeed closed the doors last week on Thursday. The product was indeed stable and did work correctly. One of the investors pulled out and caused a domino effect. All employees were laid off. The IP is for sale.”
Sourcing its own hardware rather than focusing on developing software, as Storspeed did, is an expensive proposition for a startup. Gaining adoption among enterprises with an appliance that sits in the data path is a tough row to hoe for even the best technologies. Combine that with the effects of the economic downturn, and there could be a potpourri of reasons for this early exit. Hopefully we’ll know more soon.