The recent loss of mortgage records in Orleans Parish in New Orleans shows how important it is to have an efficient data protection process and efficient use of storage resources, and how the two are related.
Newspaper articles in the New Orleans area highlight a data loss caused by a double failure. Successful backups stopped after an update to the backup software, despite a message saying the update was successful. No one checked for successful backups after the update. The second failure came when the servers holding the original data crashed with significant data loss. The backups were cyclical — as most are — and the older backups were expired (read: deleted) according to a 90-day retention schedule. There were paper records for most of the transactions, but the index of the paper records, including the location of the documents, was part of the lost data. This is like burying the key to the treasure chest along with the treasure.
The result was that primary digital data was lost and much of the backup data was either unrecoverable or simply did not exist. There are a number of process problems that can be pointed to here, but mainly this shows the need for a sound strategy for protecting records effectively and efficiently. Efficiency in this context is about both cost and the protection process.
Mortgage records are static: they do not change. Additional records for affected property are created and added to the collection, but each record is retained in perpetuity as a historical and legal document. These records should be kept in an online archive with infinite retention and protected with multiple copies at the time of archiving. Keeping the records on a volatile disk system where protection depends on regular backups is extraordinarily inefficient.
Since the records don’t change, continually backing them up wastes time and physical resources. The records are needed only when a real estate transaction affecting the property takes place, so a primary storage system may not be the most cost-effective place for them. And with the “forever” type of retention required, migrating the records every few years when the current storage system is replaced adds extra operational cost.
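As a rough illustration of that archive-at-ingest approach, here is a minimal sketch: each static record is written once to several independent locations, with its checksum recorded in an append-only index, so it never needs to be backed up again. The paths and manifest format are hypothetical, not any particular product's layout.

```python
# Minimal sketch of "protect at archive time": copy each static record to
# several independent locations once, record its checksum and locations,
# and never touch it again. Paths and manifest format are placeholders.
import hashlib, json, shutil
from pathlib import Path

ARCHIVE_COPIES = [Path("/archive/site_a"), Path("/archive/site_b"), Path("/archive/site_c")]
MANIFEST = Path("/archive/manifest.jsonl")  # append-only index: one JSON line per record

def archive_record(source: Path) -> str:
    """Copy a record to every archive location and log its checksum and copies."""
    digest = hashlib.sha256(source.read_bytes()).hexdigest()
    locations = []
    for root in ARCHIVE_COPIES:
        dest = root / digest[:2] / source.name
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, dest)
        locations.append(str(dest))
    with MANIFEST.open("a") as f:
        f.write(json.dumps({"file": source.name, "sha256": digest, "copies": locations}) + "\n")
    return digest
```

Note that the index itself needs the same multi-copy protection; losing it recreates the Orleans Parish problem of paper records with no way to locate them.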
This is one example where looking at the strategy around protecting information and being efficient could have yielded great return – and avoided a disaster. The disaster will be costly in the heroic efforts required to recover lost data, and in the careers or reputations of those involved. The real failing is in not re-examining the strategy around protecting the valuable assets and how to do that efficiently and effectively. I’ve written more about data efficiency here.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
New Orleans is no stranger to natural disasters that emphasize the need for a good disaster recovery plan. But people in the city are struggling to deal with a business continuity situation stemming from crashed servers and a backup service gone bad.
Two servers that hold the Parish of Orleans Civil District Court’s conveyance and mortgage records going back to the 1980s crashed simultaneously on Oct. 25, and the court has been without critical digital real estate records since then. The documents stored on the servers’ database exist on paper, but the computerized index linking them to their physical location was on one of the failed servers.
Members of the court’s IT staff are blaming the problem on Seagate’s i365, according to a series of stories on the incident in The Times-Picayune newspaper. The court is under contract with i365 to use its EVault Remote Disaster Recovery Service, EVault SaaS and SaaS plus, and EVault Express Recovery Appliance. i365 had been backing up records and purging them every 30 days since August 2009. Last July, i365 sent the court a software update to install. The court’s IT staff said it installed the update and received a message saying it was installed correctly.
But no data had been backed up since July, and other records were purged after their 30-day expiration date. The court did recover digital conveyance records from the 1980s up to March 27, 2009, and mortgage data through Aug. 6, 2009. But that left more than 150,000 documents without digital records or indexes to them.
While i365 has refused to comment publicly, an email from one of its executives to the chairwoman of the court’s technology committee obtained by the Times-Picayune put at least some of the blame with the court:
“The continued exposure of this situation hurts all involved — i365, Orleans Parish and the Civil District Court,” Dave Hallmen, head of i365’s Worldwide Sales and Marketing division, wrote to [judge Piper] Griffin on Nov. 5. “We have instructed our staff to refrain from publicizing our service call records which support our position that Civil District Court IT personnel failed to properly maintain the on-site software and backup jobs.”
According to the Times-Picayune, the court’s IT staff also inadvertently lost or corrupted database information when trying to troubleshoot the failed Dell servers.
Earlier this month, the court contracted with a data management firm, The Windward Group, to restore 35,000 missing conveyance records and 119,000 lost mortgage records by Jan. 2 at a cost of hundreds of thousands of dollars, according to the Times-Picayune. About 30,000 records have been restored.
The snafu is taking its toll on real estate sales in the area. According to the Times-Picayune, title insurance companies are reluctant to underwrite home sales and some refinancing deals without up-to-date records.
Regardless of who’s at fault, i365 competitors will be sure to use the incident to push alternative DR solutions. Larry Lang, CEO of QuorumLabs, said the New Orleans incident highlights problems with backing up to the cloud. QuorumLabs sells onQ backup appliances that can provide DR when used in pairs.
“Sometimes your data goes up into the cloud and when you go to pull it back it’s not there anymore,” Lang said. “It’s like freeze drying stuff, you don’t know what will happen when you add water. When [the court’s IT staff] went back to add water, there was nothing there.”
Lang said the incident also shows the importance of DR testing. onQ appliances allow customers to automatically test their restore capabilities. “You should consistently run tests to make sure your snapshots are good,” he said.
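In that spirit, a restore test can be as simple as restoring the latest backup to a scratch area and comparing checksums against a manifest captured at backup time. The sketch below is a generic illustration; the restore-tool command, paths and manifest format are placeholders, not i365's or QuorumLabs' interfaces.

```python
# Minimal sketch of an automated restore test: restore the latest backup to a
# scratch directory and verify file checksums against a manifest written at
# backup time. The "restore-tool" command and paths are hypothetical.
import hashlib, json, subprocess, sys
from pathlib import Path

RESTORE_DIR = Path("/tmp/restore_test")                 # scratch area for the test restore
MANIFEST = Path("/var/backups/latest_manifest.json")    # {relative_path: sha256}

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def main() -> int:
    # Hypothetical restore command; substitute your backup product's restore call.
    subprocess.run(["restore-tool", "--latest", "--target", str(RESTORE_DIR)], check=True)
    manifest = json.loads(MANIFEST.read_text())
    bad = []
    for rel, digest in manifest.items():
        restored = RESTORE_DIR / rel
        if not restored.exists() or sha256(restored) != digest:
            bad.append(rel)
    if bad:
        print(f"RESTORE TEST FAILED: {len(bad)} files missing or corrupt")
        return 1
    print(f"Restore test passed: {len(manifest)} files verified")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Run on a schedule, a check like this would have flagged the stalled backups long before the servers failed.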
The multi-billion dollar acquisitions of 3PAR and Isilon this year made a lot of money for those vendors’ venture capitalists, and opened the spigot for more funding money for storage startups. The biggest beneficiaries of that spending have been startups that can legitimately tie their products and services to the cloud.
Cloud storage gateway startup Nasuni became the latest cloud storage vendor to close a funding round, picking up a $15 million stocking stuffer today from VCs. CEO Andres Rodriguez said the funding will be used to build up sales and partnerships for Nasuni Filer software, which runs as a virtual appliance and serves as a cloud gateway for NAS storage. Nasuni now has $23 million over two funding rounds.
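For readers unfamiliar with the gateway model, the basic pattern is a local cache that fronts a cloud object store: files are served locally and written through to the cloud. The sketch below illustrates that pattern under generic assumptions; it is not Nasuni's implementation or API.

```python
# Minimal sketch of the cloud-gateway pattern: serve files from a local cache
# and write them through to a cloud object store. CloudBucket is a stand-in
# for any object-storage client, not a real vendor SDK.
from pathlib import Path

class CloudBucket:
    """Placeholder object-storage client (put/get by key)."""
    def __init__(self):
        self._data = {}
    def put(self, key: str, data: bytes): self._data[key] = data
    def get(self, key: str) -> bytes: return self._data[key]

class CloudGateway:
    def __init__(self, cache_dir: Path, bucket: CloudBucket):
        self.cache_dir = cache_dir
        self.bucket = bucket
        cache_dir.mkdir(parents=True, exist_ok=True)

    def write(self, name: str, data: bytes):
        # Write-through: land the file in the local cache and in the cloud.
        (self.cache_dir / name).write_bytes(data)
        self.bucket.put(name, data)

    def read(self, name: str) -> bytes:
        # Serve from cache when possible; fall back to the cloud copy.
        local = self.cache_dir / name
        if local.exists():
            return local.read_bytes()
        data = self.bucket.get(name)
        local.write_bytes(data)  # repopulate the cache for next time
        return data
```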
You don’t have to be a weatherman to see that storage funding is blowing to the clouds. Nasuni’s fellow cloud gateway startups StorSimple ($13 million) and Cirtas ($10 million) closed funding rounds in September. Object-based cloud storage vendor Cleversafe grabbed $31.4 million from VCs in October, and cloud storage provider Nirvanix picked up $10 million in November.
IBM is bringing STEC multi-level cell (MLC) solid state drives into the enterprise, and the word from STEC is that others will follow soon.
SSD supplier STEC today said IBM is using its ZeusIOPS MLC flash drives in the IBM DS8800, DS8700 and Storwize V7000 storage arrays. STEC manager for SSD technical marketing Scott Shadley said it marks the first use of STEC’s MLC in enterprise storage systems, but he expects other OEM partners to qualify the MLC flash soon. Until now, more expensive single-level cell (SLC) flash has shipped in enterprise solid state storage arrays.
STEC also sells SSDs through OEM deals with array vendors EMC, Hewlett-Packard, and Hitachi Data Systems as well as IBM and other smaller partners.
“We’ll see many more OEMs going forward with MLC in the first part of next year,” Shadley said. “They’ve been in qualifications.”
Shadley said IBM is replacing the SLC modules it previously sold on the three storage systems with MLC modules. IBM will sell STEC ZeusIOPS MLCs with 6 Gbps SAS and 4 Gbps Fibre Channel interfaces, and at 300 GB and 600 GB capacities.
While pricing varies among vendors, Shadley said MLC drives cost more than 25% less than SLC drives. But that lower price has come at the expense of performance and endurance. Until the last year or so, MLC flash was used exclusively in consumer devices, while only SLC was considered to have the performance and reliability for enterprise applications. MLC can store two or more bits per cell by using multiple levels of charge, giving it at least twice the capacity of SLC but with slower write speeds and higher bit error rates.
However, vendors such as Fusion-io, Micron, Anobit, Violin Memory and STEC have worked to improve the write endurance and reliability of MLC flash.
In STEC’s case, it uses what it calls Secure Array of Flash Element (SAFE) technology to reduce component failures associated with MLC flash and CellCare technology to improve MLC’s endurance. STEC claims its MLC flash endurance can reach five years.
Backing up to the cloud started mainly as an option for smaller companies, but enterprise backup vendors see it moving up into the enterprise.
CommVault and Symantec have made their backup software more cloud friendly for larger companies, and now enterprise virtual tape library (VTL) vendor Sepaton is preparing to help customers back up to private and public clouds.
Sepaton director of advanced technology Dennis Rolland said the vendor will have products that fulfill a cloud model. He’s not yet ready to talk about any specific cloud products, but he said Sepaton is putting together a cloud strategy built around two of its key capabilities.
“The main thing is that deduplication and low-bandwidth replication really make a cloud deployment possible,” Rolland said. “Our customers typically have short backup windows and a large amount of data to move. Deduplication reduces the cost of that storage footprint and replication sends only unique data after the first backup. Those are really the enablers. Cloud solutions today that don’t have dedupe, I don’t know how viable they will be in the enterprise to move large amounts of data.”
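To make the mechanics concrete, here is a minimal sketch of hash-based deduplication combined with replicate-only-unique-chunks behavior, the pairing Rolland describes. Fixed-size chunking and the in-memory stores are simplifying assumptions, not Sepaton's design.

```python
# Minimal sketch of deduplication plus low-bandwidth replication: split data
# into chunks, index chunks by content hash, store each unique chunk once,
# and replicate only chunks the remote site has not already received.
import hashlib

CHUNK_SIZE = 128 * 1024  # fixed-size chunking for simplicity; real systems often use variable-size

class DedupStore:
    def __init__(self):
        self.chunks = {}         # content hash -> chunk bytes (local store)
        self.replicated = set()  # hashes already sent to the remote site

    def backup(self, data: bytes) -> list[str]:
        """Store data, returning the list of chunk hashes that describe it."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # duplicate chunks stored only once
            recipe.append(digest)
        return recipe

    def replicate(self, recipe: list[str]) -> int:
        """Send only chunks the remote site hasn't seen; return bytes transferred."""
        sent = 0
        for digest in recipe:
            if digest not in self.replicated:
                sent += len(self.chunks[digest])   # stand-in for a network transfer
                self.replicated.add(digest)
        return sent
```

After the first full backup, later backups of mostly unchanged data produce recipes full of already-known hashes, so both the stored footprint and the replication traffic shrink accordingly.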
Rolland said organizations backing up to private clouds also need the ability to allocate performance and capacity according to business needs, and a multi-tenant model. He said many companies also want to use the public cloud to seamlessly create another tier of data protection.
Using the cloud for backup isn’t the only change Sepaton sees in enterprise data protection. Rolland said the rise of Ethernet in storage, the proliferation of virtual machines and the emergence of OST for Symantec NetBackup customers are leading to more unified storage devices. This is changing Sepaton’s model because it started as a Fibre Channel-only VTL vendor before adding NAS and file-based interfaces.
“Ten-gigabit Ethernet is driving consolidation,” he said. “Ethernet is easier to manage than Fibre Channel. Users are asking for Ethernet support in every vendor’s roadmap. We can run Ethernet in the same system simultaneously with FC. We can run a VTL emulation over Fibre Channel and OST to a disk device over Ethernet to the same system.”
Dell made it official today. It is buying Compellent for $820 million, and expects the deal to close early next year.
Dell and Compellent said last Thursday that they were in advanced negotiations for a deal, and most industry experts said they did not expect another serious bidder to emerge.
We will have more details on SearchStorage.com later today, following a press briefing. We expect Dell to answer some of the product positioning issues raised last week as well as its relationship with storage partner EMC.
Dell did say in its release this morning that it will keep Compellent’s operations in Eden Prairie, Minn., and will invest in engineering, support, operations and sales capability to grow the Compellent business. The deal is worth $27.75 per Compellent share. That comes to $960 million, or $820 million net of Compellent’s cash.
The deal winds up a year of consolidation among storage system vendors. Hewlett-Packard acquired 3PAR for $2.35 billion in September after a long bidding war with Dell, and EMC is in the process of closing a $2.25 billion deal for clustered NAS vendor Isilon.
Dell today said it is serious about buying Compellent Technologies for $876 million. I guess it’s really serious if the companies already have a price.
Any storage acquisition under $2 billion seems like a steal these days and Dell has clearly been shopping for another platform to go with its EqualLogic iSCSI family. Still, this proposed deal raises some questions about Dell’s storage strategy:
Is Compellent that much different than EqualLogic? Sure, Compellent sells Fibre Channel SANs and EqualLogic is iSCSI only, but they target similar customers. Buying Compellent won’t give Dell a larger potential customer base than it already has with EqualLogic plus its OEM deal with EMC for Clariion Fibre Channel SANs. And Compellent isn’t a replacement for the higher end 3PAR arrays that Hewlett-Packard outbid Dell for in September. “How does Compellent, which has been a repeatedly highlighted competitor of Dell/EqualLogic, address Dell’s desire to move into the higher-end storage market?” Stifel, Nicolaus & Co. analyst Aaron Rakers asked in a note issued to clients today.
Is this the end of the Dell-EMC relationship? That long-standing partnership took a hit when Dell bought EqualLogic in 2008 and another when it tried to buy 3PAR. EMC CEO Joe Tucci has talked about rebuilding the relationship, but only if Dell were serious. Buying another EMC competitor probably isn’t what Tucci had in mind as a Dell show of seriousness. You could argue that the higher end Clariion systems still give Dell something that it doesn’t get from Compellent, but does EMC still want to play ball with a company that has become a prime competitor?
One financial analyst said EMC and Dell will retain their uneasy relationship for now. “It’s obvious that the EMC-Dell relationship is getting worse,” said Kaushik Roy of Wedbush Securities. “But what choice does Dell have in the high end of the midrange? Compellent doesn’t go there. So I think Dell still sells some CX in the midrange.”
By tipping its hand on Compellent, is Dell making it more likely that another suitor will make a bid? And if so, who? Rakers raises the question of whether NetApp would be interested, but Compellent doesn’t fill any gaps in NetApp’s product line.
On the plus side for Dell, this takes it closer to becoming a full-service storage vendor. Dell’s in-house storage portfolio now includes EqualLogic, Exanet’s clustered NAS IP, Ocarina’s data reduction technology, and its DX Object Storage platform. Compellent brings impressive software technologies such as Data Progression automated tiering, Dynamic Capacity thin provisioning, and Live Volume for migrating volumes across arrays. It will be interesting to see if Dell makes a play for its own storage software company, perhaps its backup software OEM partner CommVault.
From a business perspective, Dell will make more money from selling Compellent systems than it makes from EMC Clariion because it owns the products. This is obviously a key reason for Dell broadening its storage portfolio.
If the companies reach an agreement, it will be interesting to hear Dell’s responses to these questions and its plans for this new technology.
While enterprise solid state drive (SSD) adoption is still growing slowly and no single method of using SSDs has moved to the forefront, Fusion-io CEO David Flynn said his company’s Flash-based PCIe cards are making a big splash in a handful of vertical markets.
“We’re growing like gangbusters on revenue and growing our headcount,” he said. “It’s like riding a rocket ship.”
Because Fusion-io is a private company, we can’t verify Flynn’s claim that revenue in the fiscal year that ended in June was five times what it was the year before, and we don’t know how much revenue that would be. But Fusion-io did have a customer go public today when financial services firm Credit Suisse said it is using Fusion-io’s ioMemory technology with its trading platform.
Flynn said Credit Suisse is a typical customer in some ways, but not others. He said Fusion-io has other top trading firms as customers but most won’t publicly admit it. Credit Suisse uses Fusion-io’s cards to improve the speed of its computers to determine what stock trades to make and to execute those trades. “People have been using us for trading for three years,” Flynn said.
But Flynn said financial services is only the third largest vertical market among Fusion-io’s customer base, behind Web companies and government agencies. The most commonly run applications are analytical and transactional databases that require caching tiers, messaging for financial transactions and unstructured text search.
“Awareness of SSD technology is maturing,” Flynn said. “We’ve been in production for three years and people are becoming more aware and comfortable with our technology.”
With that awareness and comfort comes more competition for Fusion-io. Vendors such as STEC and LSI, in partnership with Seagate, have SSD PCIe cards that compete with Fusion-io. And industry heavyweights Dell, EMC, IBM and Intel in October initiated a working group to standardize PCIe-based SSD technology. Fusion-io is among that group, which underscores the value the industry is placing on PCIe SSD cards as well as the increased level of competition.
“They’re bringing PCI express to the drive bay and that’s an admission that we had it right from the start,” Flynn said. “People are recognizing that you have to get the legacy storage infrastructure out of the way if you want to tap into the true potential of NAND Flash. But they’re all three or four years behind us in getting product out.”
The latest external storage system revenue numbers are in from IDC and Gartner, showing that trends we saw in the first half of 2010 continued into last quarter.
Those include the rise of Ethernet in networked storage at the expense of Fibre Channel, and market share gains by NetApp and EMC at the expense of IBM, Hewlett-Packard and Dell.
According to IDC, Ethernet-based NAS and iSCSI SANs outgrew the market again in the third quarter of 2010. NAS increased 49.8% over last year and iSCSI jumped 41.4%. Those numbers compare to overall SAN revenue growth of 18.5% and an external storage increase of 19%.
EMC led the NAS market with 46.6% share followed by NetApp with 28.9%, according to IDC. Both vendors picked up share over last quarter. EMC is also in the process of acquiring clustered NAS vendor Isilon, which had 0.75% of the overall storage share last quarter.
Dell still leads the iSCSI market on the strength of its EqualLogic platform. Dell had 33.8% market share last quarter, well ahead of EMC at 13.8% and HP at 13.7%. Dell increased its iSCSI share from 32.9% in the second quarter.
In the overall vendor horse race, EMC kept its lead in external storage with 26.1% of the market revenue, followed by IBM at 12.9%. NetApp, which edged ahead of HP in the second quarter, stayed third with 11.6%. HP was fourth at 11.1%, followed by Dell with 9.1%.
EMC and NetApp gained total share over the previous quarter and last year while the other three of the top five lost share. NetApp revenue grew a whopping 54.9% since last year while EMC grew 28.3%. HP had the lowest growth at 11.3% over last year. Maybe that’s why HP spent $2.35 billion to acquire SAN vendor 3PAR, which had 0.83% of the market last quarter.
IDC said total external storage sales for last quarter were $5.2 billion.
Gartner reported similar numbers, placing the total external storage revenue at $4.6 billion — up 16% from last year.
Gartner added Fujitsu to the list of vendors who outgrew the market, and shed more light on specific vendor successes and failures. Gartner’s top five was the same as IDC’s, and it listed Hitachi Data Systems, Fujitsu and Oracle/Sun as the next three leaders. Gartner said Oracle/Sun revenue declined 36.3% from last year, attributing it to Oracle’s ending the OEM deal Sun had with HDS for its 9000 high-end monolithic array platform.
Other information in the Gartner report:
While storage vendors are paying a lot of attention to object storage these days, a new report from Forrester Research points to limitations that make the technology the wrong choice for many organizations’ storage requirements.
The report, “Prepare for Object Storage in the Enterprise,” defines object storage as “Storage of data that is broken into distinct segments, each containing a unique identifier that allows for retrieval and integrity verification of the data.”
The report isn’t anti-object storage. It points out object storage systems’ value in the areas of massive scalability, greater custom control over data, the ability to reduce management and hardware costs, and its WORM and shared tenancy features. It also recommends object storage for certain workloads. But it also looks at the downsides that need to be considered before adopting an object storage system.
“Poor performance, high data change rates, capacity sprawl, and lack of standards will prevent object storage from becoming ubiquitous, but it has the potential to significantly improve storage economics, ease of use, and control when mapped to the right workloads,” wrote the report’s lead author Andrew Reichman.
The right workloads, according to the report, include archiving, cloud storage, Web 2.0 and imaging applications. In other words, you shouldn’t be using object storage systems for databases.
As for drawbacks, Reichman writes that object storage focuses on data movement, high scalability and automation but not performance. “Performance, measured in inputs/outputs per second (IOPS), is not the strong suit,” he writes. “Put simply, object storage is just not designed as a replacement for SAN storage, deployed where high transactional performance is paramount.”
He adds that object storage’s use of unique identifiers is “not the most efficient design for data that gets edited frequently.” So while it’s good for picture or audio files that usually aren’t modified after creation, it’s not so good for databases and collaborative files.
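A minimal sketch of the content-addressed model helps show both points: the identifier enables integrity verification on retrieval, and any edit produces a new identifier, which is why frequently modified data is a poor fit. The code below is illustrative only, not any vendor's API; the flat dict stands in for a distributed back end.

```python
# Minimal sketch of a content-addressed object store: each object is stored
# under an identifier derived from its contents, so a reader can verify
# integrity on retrieval. Illustrative only, not a vendor implementation.
import hashlib

class ObjectStore:
    def __init__(self):
        self._objects = {}  # object id -> bytes

    def put(self, data: bytes) -> str:
        """Store an object and return its identifier (here, a SHA-256 of the data)."""
        oid = hashlib.sha256(data).hexdigest()
        self._objects[oid] = data
        return oid

    def get(self, oid: str) -> bytes:
        """Retrieve an object and verify it still matches its identifier."""
        data = self._objects[oid]
        if hashlib.sha256(data).hexdigest() != oid:
            raise IOError(f"integrity check failed for object {oid}")
        return data

# Why frequent edits are a poor fit: any change yields a different identifier,
# so the whole object is rewritten and references to the old id must be updated.
store = ObjectStore()
v1 = store.put(b"scanned deed, version 1")
v2 = store.put(b"scanned deed, version 2")  # an edit produces a brand-new object id
assert v1 != v2
```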
And while file system vendors follow consistent standards and formats, there is no standard for object APIs. “In the end, the benefits of objects may take a back seat to the consistency and familiarity of files, unless the industry can get together on standardization,” Reichman wrote.