Storage Soup

January 10, 2011  5:29 PM

Storage spending: the big picture

Randy Kerns

I’m always interested in reading stories about expectations for spending on storage. There are the continuing prediction stories reporting that next year’s spending will be x% up or down from last year. Those stories are interesting but not always illuminating. They remind me of an article I read about predicting the weather, which compared the past year’s forecasts against the actual weather, and against the accuracy of simply saying that tomorrow’s weather will be just like today’s. Obviously, the latter was statistically much more accurate.

For storage spending, the bigger-picture economic indicators are more interesting and probably much more helpful for truly understanding the market. I found the Jan. 3 Wall Street Journal article “Big Firms Poised to Spend Again” interesting. The article reported that big companies (Fortune 500, in my estimation) had cleaned up their balance sheets and conserved cash over the last year and were ready to invest again in R&D, expand manufacturing and sales, and so on. Basically, the point of the article was that these companies were ready to spend money to make money.

The article cited several companies that had money to invest, and the numbers were in the billions of dollars. These expenditures will create new requirements for IT infrastructure, and certainly for storage. Interestingly, the top three industries with the greatest amount of cash accumulated to invest were information technology, healthcare, and industrials.

From a storage standpoint, the question is “where will the primary areas of spending be?” Given the conservatism that occurred over the last few years when many companies postponed purchases, you can probably figure out where spending will go. It’s likely that companies will look to replace aging or obsolete systems, add new systems to meet expansion and growth demands, and carry out postponed or new projects to improve operations and reduce operational expenditures.

These indicators present opportunities for the storage industry, if the industry is ready to take advantage of them right now. The products to solve problems and meet customer needs must be there. There won’t be time to take requirements and develop or modify a product; by the time that is done, competitors will have captured the business. The near-term opportunity is for sales and marketing to deliver the products customers need when they are ready to buy. Vendors will continue to develop shiny new toys, but these major firms with money to spend to expand their businesses won’t wait for them.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

January 7, 2011  1:34 PM

Panasas ponders iSCSI, looks to follow Isilon path

Dave Raffo

Now that Isilon is part of EMC, the new management team at Panasas is looking to fill a gap in the market for an independent clustered NAS vendor.

Panasas has concentrated on high performance computing (HPC), but is looking to change that. Faye Pairman, who became the Panasas CEO last April and has since added new chiefs of sales, engineering and marketing, said the vendor is looking to capitalize on pNFS and perhaps add iSCSI capabilities to become more of a commercial or enterprise play.

“I don’t see us sitting behind an Exchange server, but the lines between the commercial market and HPC is getting blurrier,” she said.

Panasas has talked about moving to the enterprise for years, but Pairman said the Isilon deal provides its best opportunity while shining a light on parallel NAS.

“[The $2.25 billion EMC-Isilon] deal places a value on scale-out NAS, and indicates a future market,” she said. “It highlights the fact that there’s a ton of unstructured data and we need a new type of file system to support that. It also highlights how hard it is to make that type of file system that works.”

Pairman said there are parallels between Panasas and Isilon, which both began shipping products in 2003. She said the major difference between the two was the markets they pursued: Isilon went after the rich media market, while Panasas concentrated on the energy and simulation markets.

“We tend to be big on performance, scale and next-gen types of standards,” she said. “They tend to be more practical – more usability features, things like that. Their architecture limits them from scaling like we can.”

Along with NetApp and EMC, Panasas was a driving force behind pNFS and Pairman said pNFS will eventually help Panasas broaden its feature set. “But we’re not trying to look like NetApp,” she said.

One of Isilon’s last major enhancements before the EMC deal was to join the multiprotocol storage parade by adding iSCSI capability for block storage. Pairman said block storage hasn’t been a big priority for Panasas in HPC markets, but it’s certainly under consideration.

“The parallel nature of the Panasas ActiveStor architecture is ideally suited for delivering high performance, scalable iSCSI block storage,” she said. “As Panasas starts to address enterprise and commercial market needs, block storage services would be a natural growth path for us.”

January 3, 2011  2:06 PM

New Orleans backup snafu shows inefficient data protection

Randy Kerns

The recent loss of mortgage records in Orleans Parish in New Orleans shows how important it is to have an efficient data protection process and efficient use of storage resources, and how the two are related.

Newspaper articles in the New Orleans area highlight a double-failure data loss. Successful backups stopped after an update to the backup software, despite a message saying the update was successful, and no one checked for successful backups after the update. The second failure came when the servers holding the original data crashed with significant data loss. The backups were cyclical, as most are, and the older backups had expired (read: been deleted) under a 90-day retention schedule. There were paper records for most of the transactions, but the index to the paper records, including the location of documents, was part of the lost data. This is like burying the key to the treasure chest along with the treasure.
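The missing verification step is easy to automate. The sketch below is not the court’s or i365’s tooling; it’s a minimal, hypothetical monitor that alerts when the newest *successful* backup for a job is older than an allowed window, which would have flagged the silent failure months before the servers crashed. The job names, dates, and log structure are illustrative assumptions.

```python
import datetime as dt

# Hypothetical job history; in practice this would come from the backup
# software's job log or reporting API.
backup_log = [
    {"job": "mortgage-records", "completed": dt.datetime(2010, 7, 12), "success": True},
    {"job": "mortgage-records", "completed": dt.datetime(2010, 10, 20), "success": False},
]

def last_good_backup(log, job):
    """Return the most recent successful completion time for a job, or None."""
    times = [e["completed"] for e in log if e["job"] == job and e["success"]]
    return max(times) if times else None

def backup_is_stale(log, job, now, max_age_days=7):
    """True if the newest successful backup is missing or older than the window."""
    last = last_good_backup(log, job)
    return last is None or (now - last).days > max_age_days

# On Oct. 25 the newest successful backup dates from July, so this fires.
if backup_is_stale(backup_log, "mortgage-records", now=dt.datetime(2010, 10, 25)):
    print("ALERT: no recent successful backup for mortgage-records")
```

The point of checking for the last *successful* run, rather than the last run, is exactly the failure mode here: jobs kept executing and reporting, but nothing usable was being written.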

The result was that the primary digital data was lost and much of the backup data was either unrecoverable or no longer existed. There are a number of process problems that can be pointed to here, but mainly this shows the need for a sound strategy for effectively and efficiently protecting records. Efficiency in this context is about both cost and the protection process.

Mortgage records are static; they do not change. Additional records for affected property are created and added to the collection, but each record is retained in perpetuity as a historical and legal document. These records should be kept in an online archive with infinite retention and protected with multiple copies at the time of archiving. Keeping the records on a volatile disk system where protection depends on regular backups is extraordinarily inefficient.

Since the records don’t change, continual backups waste time and physical resources. The records are needed only when a real estate transaction affecting the property occurs, so a primary storage system may not be the most cost-effective location for them. With the “forever” type of retention required, migrating the records every few years when the current storage system is replaced also adds extra operational cost.

This is one example where looking at the strategy around protecting information and being efficient could have yielded great returns and avoided a disaster. The disaster will be costly in the heroic efforts required to recover lost data, and in the careers and reputations of those involved. The real failing is in not re-examining the strategy for protecting these valuable assets efficiently and effectively. I’ve written more about data efficiency here.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

December 23, 2010  3:13 PM

i365 involved in New Orleans backup failure

Dave Raffo

New Orleans is no stranger to natural disasters that emphasize the need for a good disaster recovery plan. But people in the city are struggling to deal with a business continuity situation stemming from crashed servers and a backup service gone bad.

Two servers that hold the Parish of Orleans Civil District Court’s conveyance and mortgage records going back to the 1980s crashed simultaneously on Oct. 25, and the court has been without critical digital real estate records since then. The documents stored on the servers’ database exist on paper, but the computerized index linking them to their physical location was on one of the failed servers.

Members of the court’s IT staff are blaming the problem on Seagate’s i365, according to a series of stories on the incident in The Times-Picayune newspaper. The court is under contract with i365 to use its EVault Remote Disaster Recovery Service, EVault SaaS and SaaS Plus, and EVault Express Recovery Appliance. i365 had been backing up records and purging them every 30 days since August 2009. Last July, i365 sent the court a software update to install. The court’s IT staff said it installed the update and received a message saying it had installed correctly.

But no data had been backed up since July, and other records were purged after their 30-day expiration dates. The court did recover digital conveyance records from the 1980s up to March 27, 2009, and mortgage data through Aug. 6, 2009. But that left more than 150,000 documents without digital records or indexes to them.

While i365 has refused to comment publicly, an email from one of its executives to the chairwoman of the court’s technology committee obtained by the Times-Picayune put at least some of the blame with the court:

“The continued exposure of this situation hurts all involved — i365, Orleans Parish and the Civil District Court,” Dave Hallmen, head of i365’s Worldwide Sales and Marketing division, wrote to [judge Piper] Griffin on Nov. 5. “We have instructed our staff to refrain from publicizing our service call records which support our position that Civil District Court IT personnel failed to properly maintain the on-site software and backup jobs.”

According to the Times-Picayune, the court’s IT staff also inadvertently lost or corrupted database information when trying to troubleshoot the failed Dell servers.

Earlier this month, the court contracted with a data management firm, The Windward Group, to restore 35,000 missing conveyance records and 119,000 lost mortgage records by Jan. 2 at a cost of hundreds of thousands of dollars, according to the Times-Picayune. About 30,000 records have been restored.

The snafu is taking its toll on real estate sales in the area. According to the Times-Picayune, title insurance companies are reluctant to underwrite home sales and some refinancing deals without up-to-date records.

Regardless of who’s at fault, i365 competitors will be sure to use the incident to push alternative DR solutions. Larry Lang, CEO of QuorumLabs, said the New Orleans incident highlights problems with backing up to the cloud. QuorumLabs sells onQ backup appliances that can provide DR when used in pairs.

“Sometimes your data goes up into the cloud and when you go to pull it back it’s not there anymore,” Lang said. “It’s like freeze drying stuff, you don’t know what will happen when you add water. When [the court’s IT staff] went back to add water, there was nothing there.”

Lang said the incident also shows the importance of DR testing. onQ appliances allow customers to automatically test their restore capabilities. “You should consistently run tests to make sure your snapshots are good,” he said.
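The restore test Lang describes boils down to one check: pull data back and prove the bytes match the original. This is a generic sketch, not onQ’s implementation; the file names and the copy standing in for a real restore from a backup target are illustrative assumptions.

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Checksum a file in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original_path, restored_path):
    """A restore is only good if the restored bytes match the original."""
    return sha256_of(original_path) == sha256_of(restored_path)

# Demo with a throwaway file standing in for production data. Here the
# "restore" is just a local copy; a real test would pull the file back
# from the backup target before comparing.
workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "record.txt")
restored = os.path.join(workdir, "record.restored.txt")
with open(original, "wb") as f:
    f.write(b"conveyance record index entry")
shutil.copy(original, restored)  # simulate a successful restore
ok = verify_restore(original, restored)
```

Run on a schedule against a rotating sample of files, a check like this turns “we think the snapshots are good” into evidence.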

December 20, 2010  7:32 PM

Cloud storage gateway startup Nasuni gets a visit from Santa VCs

Dave Raffo

The multi-billion dollar acquisitions of 3PAR and Isilon this year made a lot of money for those vendors’ venture capitalists, and opened the spigot for more funding money for storage startups. The biggest beneficiaries of that spending have been startups that can legitimately tie their products and services to the cloud.

Cloud storage gateway startup Nasuni became the latest cloud storage vendor to close a funding round, picking up a $15 million stocking stuffer today from VCs. CEO Andres Rodriguez said the funding will be used to build up sales and partnerships for Nasuni Filer software, which runs as a virtual appliance and serves as a cloud gateway for NAS storage. Nasuni has now raised $23 million over two funding rounds.

You don’t have to be a weatherman to see that storage funding is blowing to the clouds. Nasuni’s fellow cloud gateway startups StorSimple ($13 million) and Cirtas ($10 million) closed funding rounds in September. Object-based cloud storage vendor Cleversafe grabbed $31.4 million from VCs in October, and cloud storage provider Nirvanix picked up $10 million in November.

December 15, 2010  6:24 PM

IBM, STEC seek to make MLC solid state an enterprise play

Dave Raffo

IBM is bringing STEC multi-level cell (MLC) solid state drives into the enterprise, and the word from STEC is that others will follow soon.

SSD supplier STEC today said IBM is using its ZeusIOPS MLC flash drives in the IBM DS8800, DS8700 and Storwize V7000 storage arrays. STEC manager for SSD technical marketing Scott Shadley said it marks the first use of STEC’s MLC in enterprise storage systems, but he expects other OEM partners to qualify the MLC flash soon. Until now, more expensive single-level cell (SLC) flash has shipped in enterprise solid state storage arrays.

STEC also sells SSDs through OEM deals with array vendors EMC, Hewlett-Packard, and Hitachi Data Systems, as well as IBM and other smaller partners.

“We’ll see many more OEMs going forward with MLC in the first part of next year,” Shadley said. “They’ve been in qualifications.”

Shadley said IBM is replacing the SLC modules it previously sold on the three storage systems with MLC modules. IBM will sell STEC ZeusIOPS MLCs with 6 Gbps SAS and 4 Gbps Fibre Channel interfaces, and at 300 GB and 600 GB capacities.

While pricing varies among vendors, Shadley said MLCs cost at least 25% less than SLCs. But that cheaper price has come at the expense of performance and endurance. Until the last year or so, MLCs were used exclusively in consumer devices, while only SLCs were considered to have the performance and reliability for enterprise applications. MLCs store two or more bits per cell by holding multiple levels of charge, giving them at least twice the capacity of SLCs but with slower write speeds and higher bit error rates.
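The capacity-versus-endurance tradeoff follows directly from the arithmetic: each extra bit per cell doubles the number of charge levels the cell must reliably distinguish, which shrinks the voltage margin between levels. A back-of-the-envelope illustration (general flash math, not STEC-specific figures):

```python
def cell_levels(bits_per_cell):
    """Distinct charge levels a flash cell must reliably distinguish."""
    return 2 ** bits_per_cell

# SLC stores 1 bit per cell, so it only needs 2 levels; MLC stores 2 bits,
# so it must discriminate 4 levels in the same physical cell. Capacity per
# cell doubles, but the narrower margins between levels are why MLC writes
# are slower and raw bit error rates are higher.
slc_levels = cell_levels(1)  # 2 levels
mlc_levels = cell_levels(2)  # 4 levels
```

That narrowing margin is exactly what controller-side techniques like STEC’s CellCare are trying to compensate for.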

However, vendors such as Fusion-io, Micron, Anobit, Violin Memory and STEC have worked to improve the write endurance and reliability of MLC flash.

In STEC’s case, it uses what it calls Secure Array of Flash Element (SAFE) technology to reduce component failures associated with MLC flash and CellCare technology to improve MLC’s endurance. STEC claims its MLC flash endurance can reach five years.

December 13, 2010  9:30 PM

Sepaton prepares enterprise cloud backup model

Dave Raffo

Backing up to the cloud started mainly as an option for smaller companies, but enterprise backup vendors see it moving up into the enterprise.

CommVault and Symantec have made their backup software more cloud friendly for larger companies, and now enterprise virtual tape library (VTL) vendor Sepaton is preparing to help customers back up to private and public clouds.

Sepaton director of advanced technology Dennis Rolland said the vendor will have products that fulfill a cloud model. He’s not yet ready to talk about any specific cloud products, but he said Sepaton is putting together a cloud strategy built around two of its key capabilities.

“The main thing is that deduplication and low-bandwidth replication really make a cloud deployment possible,” Rolland said. “Our customers typically have short backup windows and a large amount of data to move. Deduplication reduces the cost of that storage footprint and replication sends only unique data after the first backup. Those are really the enablers. Cloud solutions today that don’t have dedupe, I don’t know how viable they will be in the enterprise to move large amounts of data.”
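The mechanism Rolland describes — replicating only data the other side hasn’t seen — can be sketched with hash-based chunking. This is a generic illustration, not Sepaton’s implementation; the fixed chunk size and the in-memory index are simplifying assumptions (shipping dedupe engines typically use variable-size, content-defined chunks and persistent indexes).

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking, for simplicity

def chunks_to_replicate(data, store):
    """Hash each chunk; return digests of chunks the store hasn't seen.

    `store` maps digest -> chunk and stands in for the index kept on
    both ends of the replication link.
    """
    new_digests = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk
            new_digests.append(digest)
    return new_digests

store = {}
payload = b"A" * 4096 + b"B" * 4096
first_backup = chunks_to_replicate(payload, store)   # both chunks are new
repeat_backup = chunks_to_replicate(payload, store)  # nothing new to send
```

After the first backup seeds the index, an unchanged dataset generates no replication traffic at all, which is what makes moving large backup sets over limited WAN bandwidth plausible.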

Rolland said organizations backing up to private clouds also need the ability to allocate performance and capacity according to business needs, and a multi-tenant model. He said many companies also want to use the public cloud to seamlessly create another tier of data protection.

Using the cloud for backup isn’t the only change Sepaton sees in enterprise data protection. Rolland said the rise of Ethernet in storage, the proliferation of virtual machines and the emergence of OST for Symantec NetBackup customers are leading to more unified storage devices. This is changing Sepaton’s model because it started as a Fibre Channel-only VTL vendor before adding NAS and file-based interfaces.

“Ten-gigabit Ethernet is driving consolidation,” he said. “Ethernet is easier to manage than Fibre Channel. Users are asking for Ethernet support in every vendor’s roadmap. We can run Ethernet and FC in the same system simultaneously. We can run a VTL emulation over Fibre Channel and OST to a disk device over Ethernet on the same system.”

December 13, 2010  1:02 PM

Dell wraps up Compellent deal, acquires storage array vendor

Dave Raffo

Dell made it official today. It is buying Compellent for $820 million, and expects the deal to close early next year.

Dell and Compellent said last Thursday that they were in advanced negotiations for a deal, and most industry experts said they did not expect another serious bidder to emerge.

We will have more details later today, following a press briefing. We expect Dell to address the product positioning questions raised last week, as well as its relationship with storage partner EMC.

Dell did say in its release this morning that it will keep Compellent’s operations in Eden Prairie, Minn., and will invest in engineering, support, operations and sales capability to grow the Compellent business. The deal is worth $27.75 per Compellent share. That comes to $960 million in total, or $820 million net of Compellent’s cash.

The deal winds up a year of consolidation among storage system vendors. Hewlett-Packard acquired 3PAR for $2.35 billion in September after a long bidding war with Dell, and EMC is in the process of closing a $2.25 billion deal for clustered NAS vendor Isilon.

December 9, 2010  3:15 PM

Dell says it wants Compellent for $876 million

Dave Raffo

Dell today said it is serious about buying Compellent Technologies for $876 million. I guess it’s really serious if the companies already have a price.

Any storage acquisition under $2 billion seems like a steal these days and Dell has clearly been shopping for another platform to go with its EqualLogic iSCSI family. Still, this proposed deal raises some questions about Dell’s storage strategy:

Is Compellent that much different than EqualLogic? Sure, Compellent sells Fibre Channel SANs and EqualLogic is iSCSI only, but they target similar customers. Buying Compellent won’t give Dell a larger potential customer base than it already has with EqualLogic plus its OEM deal with EMC for Clariion Fibre Channel SANs. And Compellent isn’t a replacement for the higher end 3PAR arrays that Hewlett-Packard outbid Dell for in September. “How does Compellent, which has been a repeatedly highlighted competitor of Dell/EqualLogic, address Dell’s desire to move into the higher-end storage market?” Stifel, Nicolaus & Co. analyst Aaron Rakers asked in a note issued to clients today.

Is this the end of the Dell-EMC relationship? That long-standing partnership took a hit when Dell bought EqualLogic in 2008 and another when it tried to buy 3PAR. EMC CEO Joe Tucci has talked about rebuilding the relationship, but only if Dell were serious. Buying another EMC competitor probably isn’t what Tucci had in mind as a Dell show of seriousness. You could argue that the higher end Clariion systems still give Dell something that it doesn’t get from Compellent, but does EMC still want to play ball with a company that has become a prime competitor?

One financial analyst said EMC and Dell will retain their uneasy relationship for now. “It’s obvious that the EMC-Dell relationship is getting worse,” said Kaushik Roy of Wedbush Securities. “But what choice does Dell have in the high end of the midrange? Compellent doesn’t go there. So I think Dell still sells some CX in the midrange.”

By tipping its hand on Compellent, is Dell making it more likely that another suitor will make a bid? And if so, who? Rakers raises the question of whether NetApp would be interested, but Compellent doesn’t fill any gaps in NetApp’s product line.

On the plus side for Dell, this takes it closer to becoming a full-service storage vendor. Dell’s in-house storage portfolio now includes EqualLogic, Exanet’s clustered NAS IP, Ocarina’s data reduction technology, and its DX Object Storage platform. Compellent brings impressive software technologies such as Data Progression automated tiering, Dynamic Capacity thin provisioning, and Live Volume for migrating volumes across arrays. It will be interesting to see if Dell makes a play for its own storage software company, perhaps its backup software OEM partner CommVault.

From a business perspective, Dell will make more money from selling Compellent systems than it makes from EMC Clariion because it owns the products. This is obviously a key reason for Dell broadening its storage portfolio.

If the companies reach an agreement, it will be interesting to hear Dell’s responses to these questions and its plans for this new technology.

December 7, 2010  6:54 PM

Fusion-io sees comfort level rising in PCIe-based SSDs

Dave Raffo

While the enterprise solid state drive (SSD) market is still growing slowly and no single method of deploying SSDs has moved to the forefront, Fusion-io CEO David Flynn said his company’s flash-based PCIe cards are making a big splash in a handful of vertical markets.

“We’re growing like gangbusters on revenue and growing our headcount,” he said. “It’s like riding a rocket ship.”

Because Fusion-io is a private company, we can’t verify Flynn’s claim that its revenue for the fiscal year that ended in June was five times the previous year’s, and we don’t know how much revenue that would be. But Fusion-io did have a customer go public today, when financial services firm Credit Suisse said it is using Fusion’s ioMemory technology with its trading platform.

Flynn said Credit Suisse is a typical customer in some ways, but not others. He said Fusion-io has other top trading firms as customers but most won’t publicly admit it. Credit Suisse uses Fusion-io’s cards to improve the speed of its computers to determine what stock trades to make and to execute those trades. “People have been using us for trading for three years,” Flynn said.

But Flynn said financial services is only the third largest vertical market among Fusion-io’s customer base, behind Web companies and government agencies. The most commonly run applications are analytical and transactional databases that require caching tiers, messaging for financial transactions and unstructured text search.

“Awareness of SSD technology is maturing,” Flynn said. “We’ve been in production for three years and people are becoming more aware and comfortable with our technology.”

With that awareness and comfort comes more competition for Fusion-io. Vendors such as STEC and LSI, in partnership with Seagate, have SSD PCIe cards that compete with Fusion-io. And industry heavyweights Dell, EMC, IBM and Intel in October initiated a working group to standardize PCIe-based SSD technology. Fusion-io is among that group, which underscores the value the industry is placing on PCIe SSD cards as well as the increased level of competition.

“They’re bringing PCI express to the drive bay and that’s an admission that we had it right from the start,” Flynn said. “People are recognizing that you have to get the legacy storage infrastructure out of the way if you want to tap into the true potential of NAND Flash. But they’re all three or four years behind us in getting product out.”
