Storage Soup


May 28, 2008  11:47 AM

Storage experts pan report on tape archiving TCO

Beth Pariseau

The disk vs. tape debate that has been going on for years is heating up again, given technologies like data deduplication that are bringing disk costs into line with tape.

Or, at least, so some people believe.

The Clipper Group released a report today sponsored by the LTO Program which compared five-year total cost of ownership (TCO) for data in tiered disk-to-disk-to-tape versus disk-to-disk-to-disk configurations. The conclusion?

“After factoring in acquisition costs of equipment and media, as well as electricity and data center floor space, Clipper found that the total cost of SATA disk archiving solutions were up to 23 times more expensive than tape solutions for archiving. When calculating energy costs for the competing approaches, the costs for disk were up to 290 times that of tape.”

Let’s see… sponsored by the LTO trade group… conclusion is that tape is superior to disk. In Boston, we would say, “SHOCKA.”

This didn’t get by “Mr. Backup,” Curtis Preston, either, who gave the whitepaper a thorough fisking on his blog today. His point-by-point criticism should be read in its entirety, but he seems primarily outraged by the omission of data deduplication and compression from the equation on the disk side.

How can you release a white paper today that talks about the relative TCO of disk and tape, and not talk about deduplication?  Here’s the really hilarious part: one of the assumptions that the paper makes is both disk and tape solutions will have the first 13 weeks on disk, and the TCO analysis only looks at the additional disk and/or tape needed for long term backup storage.  If you do that AND you include deduplication, dedupe has a major advantage, as the additional storage needed to store the quarterly fulls will be barely incremental.  The only additional storage each quarterly full backup will require is the amount needed to store the unique new blocks in that backup.  So, instead of needing enough disk for 20 full backups, we’ll probably need about 2-20% of that, depending on how much new data is in each full.

TCO also can’t be done so generally, as pricing is all over the board.  I’d say there’s a 1000% difference from the least to the most expensive systems I look at.  That’s why you have to compare the cost of system A to system B to system C, not use numbers like “disk cost $10/GB.” 
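Preston’s back-of-the-envelope dedupe math can be sketched in a few lines. The numbers below are illustrative assumptions, not figures from the Clipper report or from Preston’s post:

```python
# Rough sketch of the dedupe arithmetic (all numbers are assumed for
# illustration): with deduplication, each quarterly full backup adds only
# its unique new blocks, not another complete copy.

def raw_storage_tb(full_size_tb, num_fulls):
    """Storage needed when every full backup is kept in its entirety."""
    return full_size_tb * num_fulls

def deduped_storage_tb(full_size_tb, num_fulls, change_rate):
    """First full stored whole; each later full adds only changed blocks."""
    return full_size_tb + full_size_tb * change_rate * (num_fulls - 1)

full = 10       # TB per full backup (assumed)
fulls = 20      # quarterly fulls kept for long-term retention
change = 0.05   # 5% unique new data per quarter (assumed)

print(raw_storage_tb(full, fulls))              # 200 TB without dedupe
print(deduped_storage_tb(full, fulls, change))  # 19.5 TB with dedupe
```

At an assumed 5% quarterly change rate, 20 retained fulls need roughly a tenth of the raw capacity, which lands inside the 2-20% range Preston describes.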

Jon Toigo isn’t exactly impressed, either:

Perhaps the LTO guys thought we needed some handy stats to reference.  I guess the tape industry will be all over this one and referencing the report to bolster their white papers and other leave behinds just as the replace-disk-with-tape have been leveraging the counter white papers from Gartner and Forrester that give stats on tape failures that are bought and paid for by their sponsors.

Neither Preston nor Toigo disagrees with the conclusion that tape has a lower TCO than disk. But for Preston, it’s a matter of how much. “Tape is still winning — by a much smaller margin than it used to — but it’s not 23x or 250x cheaper,” he writes.

For Toigo, the study overlooks what he sees as a bigger issue when it comes to tape adoption:

The problem with tape is that it has become the whipping boy in many IT shops.  Mostly, that’s because it is used incorrectly – LTO should not be applied when 24 X 7 duty cycles are required, for example…Sanity is needed in this discussion… 

Even when analysts agree in general, they argue.

May 27, 2008  10:15 AM

How should tech companies do business in China?

Beth Pariseau

The Chinese technology market isn’t up-and-coming anymore; it’s already here. And with a billion-plus people looking to participate in a capitalist experiment, combined with cheap (though rising) labor costs and a less regulated manufacturing industry, it’s a force to be reckoned with.

American tech companies these days have few choices when it comes to contending with this market. They can be acquired by Chinese companies, as in the case of Huawei-3Com (or nearly the case with Iomega); they can look to get a slice of the Chinese market by selling products there; or they can have their lunch eaten by the rising superpower. If you’re strictly thinking in business terms, and you’re a sufficiently large company, my guess is option two would be the most appealing.

But the problem is that whenever big companies look to open their technology to the Chinese market, there’s political fallout here at home. Our queasy, at times hypocritical, relationship with China is a tangled web. On the one hand, we are dependent on China for manufactured goods as well as a large market for our own raw materials. On the other hand, some of China’s social and political policies make many Americans cringe.

Increasingly, technology is at the center of these tricky issues. Rolling Stone recently did an interesting story about China’s Golden Shield, an integrated network of physical-security and surveillance technologies being implemented chiefly in Shenzhen to keep an eye on citizens. What was creepy about this article was the part where I started to feel less like I was reading a political article and more like I was listening to a technology briefing:

[Surveillance] cameras…are only part of the massive experiment in population control that is under way here. “The big picture,” Zhang tells me in his office at the factory, “is integration.” That means linking cameras with other forms of surveillance: the Internet, phones, facial-recognition software and GPS monitoring.

Just last week I listened to EMC’s Mark Lewis expound on similar integration between content repositories in the American workplace. But while U.S. businesses clearly see a lucrative market for these technologies, I don’t believe technology vendors set out to let people use their technologies unethically. If anything, I believe they approach it from a decidedly amoral standpoint, neither condoning nor condemning China’s policies, and focusing on the bottom line.

The problem, as companies beginning with Yahoo! and Google found out, is that some Americans — particularly politicians — are saying not so fast. Many a company with its eyes on this emerging-market prize has received negative press and even government inquiries as a result.

Cisco is the most recent example. It was called in along with Yahoo and Google last week for a grilling before a Congressional committee on business practices in China, following the leak of internal slides suggesting it “appeared to be willing to assist the Chinese Ministry of Public Security in its goal of ‘combating Falun Gong evil cult and other hostile elements,’” according to a story in the San Jose Mercury News.

In an AP followup story, Cisco’s director of corporate communications insisted the documents were taken out of context. “Those statements were included in the presentation to reflect the Chinese government’s position,” [Terry] Alberstein said. “They do not represent Cisco’s views, principles or its sales and marketing strategy or approach. They were merely inserted in that presentation to capture the goals of the Chinese government in that specific project, which was one of many discussed in that 2002 presentation.” 

This position has its supporters, including Seeking Alpha columnist Kevin Maney:

This was made worse for Cisco by an unfortunate PowerPoint slide that some employee — probably 123 levels down from CEO John Chambers — used in a pitch that implied that Cisco is cheering for Chinese censorship. Such is the danger of technology that anyone can use…

The political grandstanding and berating executives helps nothing. If U.S. tech companies are going to sell their goods around the world, some of it is going to be used in ways many Americans don’t like. So do we want the business — and the jobs and income? Or do we want to make a point? Let’s decide.

Yes, let’s. We must. This will continue to be an unavoidable issue as time goes on. While it’s easy to point out that by the same standards of responsibility, gun manufacturers might be out of business, a colleague of mine also pointed out last week that gun manufacturers (ostensibly) don’t market their products with the express intention that they be used unethically. Some argue the Cisco PowerPoint shows such intentions.

Personally, I like to think there’s some middle ground here. There’s got to be some way to tap into this burgeoning market without participating actively in some of the more nefarious practices (such as voluntarily divulging information on political dissidents to government authorities or expressly designing technologies to be used to target a particular group, including the Falun Gong); there has to be a way we can draw a line between the pursuit of profit and breaches of our fundamental national principles. At least ostensibly, we do it all the time with the same technologies in U.S. hands, and the European Union is even further along in establishing privacy standards along with new data retention practices. There must also be ways to balance these competing interests when it comes to China.


May 22, 2008  8:47 AM

EMC shares vision of AI storage arrays

Beth Pariseau

“Project Futon” demo area on the show floor at EMC World.
Complete with actual futon.

EMC has made a habit of opening the kimono lately, especially at this year’s EMC World where execs divulged detailed roadmap information around backup and archiving software consolidation.

They also had an exhibit set up as part of the show floor called the Innovation Showcase. The showcase displayed products on tap for much farther down the line, including Project Futon, a consumer appliance being developed at EMC’s R&D facilities in China to store digital photos.

Eventually, the Futon software would automatically upload photos to the “Futon Cloud,” according to EMC senior engineer Hongbin Yin. Yin was one of several engineers on hand to demonstrate their prototypes. The goal would be to make small home appliances a “local cache” for multimedia, with long-term storage taking place in the cloud. Futon is also being developed to automatically collect metadata, including the location where photos were taken, and to integrate that information into other applications like Google Maps.

Also on display was a diagram of “Centera with Data Lineage,” which, according to Burt Kaliski, senior director of EMC’s Innovation Network, would allow Centera to archive not just elements of a workflow but the workflow itself, and link documents to related data in spreadsheets and databases.

Senior consulting software engineer Sorin Faibish was showing off his own diagram of “Application-Aware Intelligent Storage.” This would combine artificial intelligence software capable of being “trained” with hardware-embedded VMware ESX servers to automatically apply services like data migration, encryption and replication to data as it comes into the cache on a storage array. The embedded ESX host would run EMC’s RecoverPoint CDP inside, logging and cataloging I/O and indexing data for input into a modeling engine, which would then decide on the proper way to store and protect the data before flushing it to disk.
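As a thought experiment, the decision step Faibish describes might look something like the toy below. It is entirely hypothetical: nothing here reflects EMC’s actual design, which would use trained models such as neural networks rather than hand-written rules, and every field name is invented for illustration:

```python
# Hypothetical sketch of a modeling engine's decision step: inspect an
# incoming write's attributes and pick the services to apply before the
# data is flushed to disk. Rules, thresholds and field names are made up.

def classify_and_protect(write):
    """Return a list of protection services for an incoming write."""
    services = []
    if write.get("compliance"):             # regulated data: lock it down
        services += ["encryption", "replication"]
    if write.get("io_rate_mbps", 0) > 100:  # hot data: continuous protection
        services.append("cdp_journal")
    if not services:                        # cold default: cheap tier
        services.append("migrate_to_sata")
    return services

print(classify_and_protect({"compliance": True, "io_rate_mbps": 250}))
# ['encryption', 'replication', 'cdp_journal']
```

The interesting part of Faibish’s pitch is that this decision would be learned from observed I/O rather than coded by hand, which is what makes it “still a dream.”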

No time frame was given on any of the prototypes. Faibish’s project would require development of advanced artificial intelligence software using concepts like neural networks, fuzzy-logic modeling and genetic-based learning. “It could prevent commoditization of the storage array,” he said. “But it’s still a dream.”


May 21, 2008  1:36 PM

Archiving news spans the globe

Dave Raffo

Busy week for archiving. While EMC was detailing its archiving roadmap in Las Vegas this week at EMC World, IBM was opening an archiving solutions center in Mexico, and U.K.-based Plasmon was launching its latest UDO-based system.

Plasmon’s Enterprise Active Archive (EAA) allows Plasmon’s UDO systems to work with disk for long-term archiving flexibility. EAA gives Plasmon’s archive appliance the ability to search disk and UDO media together, as well as index, classify, migrate and replicate files across systems.

Mike Koclanes, Plasmon’s chief strategy officer, said it wasn’t enough to have UDO drives that can last for 50 years to store data. “It didn’t fit into the IT ecosystem,” he said. “So we had to come up with an archiving appliance. The first thing we had to do was virtualize access to the application so it’s writing as if it’s a file system.” Koclanes said Plasmon intends to support solid state and holographic storage when they become widely available. He sees those technologies as well as SATA drives as complementary to UDO rather than competitive.

“When you store something on UDO, you have a permanent copy,” he said. “You have a UDO copy for DR and you don’t have to do backup any more. We’re not saying don’t use disk, we’re saying you don’t need three or four copies and have to replicate it around and use all that power.”

Meanwhile, IBM today opened a $10 million executive briefing center in Guadalajara, Mexico, dedicated to its archiving practice. IBM intends to use the Global Archive Solutions Center to help customers with their strategies for long-term data retention.

“Customers can come in and learn about best practices and do simulations of archiving with our products and partners’ products,” said Charlie Andrews, worldwide marketing manager for IBM storage.

“We’ve done a lot of research on what it means to have long-term retention of data. Does the media last long enough, how expensive is it, and when you talk about really long term — over 10 years — what happens with applications?” Andrews added. “Sometimes when you switch applications, you can’t read documents.”

Andrews said the archiving center is IBM’s 11th  global center but the first to address a “specific solution area” instead of a product line.   

Why Guadalajara? “Because we have a strong presence there,” Andrews said. “We’ve been there since 1927. It’s now a very rich high-tech area, it’s called the ‘Silicon Valley of Mexico.’ Also we believe growth in Latin America is significant for us.”

 


May 21, 2008  10:04 AM

Good quarter for HP storage not good enough for CEO Hurd

Dave Raffo

Hewlett-Packard’s storage division has taken its share of heat in recent years. It has often underachieved from a revenue standpoint, been knocked by competitors and analysts for lack of innovation, and been reorganized internally. Now it’s led by Dave Roberson, former Hitachi Data Systems (HDS) CEO and currently VP of HP’s enterprise storage business.

So what was the reaction of the storage group and upper management to a sharp hike in revenue from storage last quarter? Not what you might think.

“If there’s anybody taking a lap around the building, I sure haven’t seen him because we’ve got a lot more work to do to be a participant in the way that HP ought to be a participant in the storage market,” CEO Mark Hurd told analysts on the earnings conference call Wednesday night.

Ok, so no laps around the building. But HP did report storage revenue increased 14 percent year over year to just over $1 billion, led by a 17 percent gain in its midrange EVA and 21 percent improvement with its high-end XP [rebranded HDS] arrays. Still, Hurd says that’s only the start. He expects to attach storage to a higher percentage of ProLiant servers going forward, stronger adoption of new HP storage products and a return on some of the money it’s laid out on storage acquisitions in recent years.

“Listen, we can just do better than this,” he said. “We’ve got new product we’ve announced into the market during the quarter in the storage space that we are excited about. We’ve begun to bring together some of the acquisitions we’ve done and align some of our Storage Essentials software with our platforms. We are doing more work in the channel and it turned into good growth. I mean, for us to get mid-double-digit growth or 14% growth in the quarter is better than we’ve seen.”

I’m guessing the new product Hurd referred to was the ExDS9100 Web 2.0/NAS system HP unveiled with much fanfare this month, although it also launched a new EVA in February. In any case, Hurd emphasized there is still work to be done, and he expects to see it get done: “I don’t want you for one minute to think we are satisfied with it [storage success].”

Hurd has accomplished many of his goals since he arrived at HP three years ago. Now we’ll see if he can complete a storage turnaround.


May 21, 2008  9:08 AM

The incredible Hulk and more: photos from EMC World

Beth Pariseau

Tuckered out after a long couple of days of learning about information-centric data centers.

Plenty more where that came from, after the jump.



May 20, 2008  12:00 PM

Another approach to self-healing storage

Beth Pariseau

Over the last several weeks, I’ve been writing about the self-healing storage arrays announced by startup Atrato Inc. and a more seasoned company, Xiotech Corp. Seeing this, another company stuck its hand up to say, “Hey, we have that too!”

This other company is DataDirect Networks (DDN), and its self-healing product has actually been on the market in one form or another for some time now (DDN has been in business 10 years), but I wasn’t aware of it until the company got my attention last week.

DDN’s product line consists of four hardware models: the S2A6620, 9550, 9700 and 9900. The self-healing stuff resides mostly in the S2A operating system. Like BlueArc’s Titan NAS heads, DDN’s products put some of the heavy-duty processing of data, such as parity calculations, into silicon by way of field-programmable gate arrays (FPGAs). According to DDN’s CTO Dave Fellinger, this, along with parallelization, means that the arrays can write and read at the same rate.

What that sets up is the ability to calculate two-disk parity on every read, as well as every write, without performance degradation. So theoretically the system never goes into rebuild mode, because it already operates in that mode all the time.

Like the systems from Xiotech and Atrato, DDN’s systems perform disk scrubbing, as well as isolation of failed disks for diagnosis and attempted repair. The product can conduct low-level formatting of drives, power-cycle individual drives if they become unresponsive, correct data using checksums on the fly if it comes off the disk corrupt, rewrite corrected data back to the disk, and use S.M.A.R.T. diagnostics on SATA disks to determine if the drives need to be replaced. (Atrato also uses S.M.A.R.T., among other error correction codes.)
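The correct-on-read behavior can be sketched roughly as follows. This is a toy model using a CRC32 per block; DDN’s actual implementation runs in FPGAs and uses its own checksums and parity schemes, so treat every detail here as an assumption:

```python
# Toy sketch of verify-on-read: store a checksum with each block, recompute
# it on every read, and rewrite a corrected copy (recovered from redundancy,
# e.g. parity or a mirror) when the checksums disagree.
import zlib

def write_block(store, addr, data):
    store[addr] = (data, zlib.crc32(data))

def read_block(store, addr, recover):
    data, stored_crc = store[addr]
    if zlib.crc32(data) != stored_crc:  # corruption detected on read
        data = recover(addr)            # rebuild from redundant copy
        write_block(store, addr, data)  # rewrite corrected data to disk
    return data

disk = {}
write_block(disk, 0, b"payload")
disk[0] = (b"p@yload", disk[0][1])      # simulate a corrupted sector
fixed = read_block(disk, 0, recover=lambda addr: b"payload")
print(fixed)  # b'payload'
```

The point of doing this on every read, rather than only during a rebuild, is that the array never has a degraded “rebuild mode” distinct from normal operation, which is the claim in the paragraph above.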

The key difference between the DDN product and the others is that DDN’s disk arrays are not sealed. According to Fellinger, the company’s experience in large environments suggests that sealed arrays are impractical. So admins swap out individual drives at DDN installations, the largest of which is an 8 PB deployment at Lawrence Livermore National Labs.

However, it seems Xiotech and Atrato are going after slightly different markets than DDN. Each of those vendors talks in terms of 16 TB capacities and 3U rack units. Xiotech said it is specifically going after enterprise accounts with its ISE system; Atrato seems to be targeting a market more similar to DDN’s in multimedia and entertainment. DDN claims to already have the big fish, such as AOL/Time Warner, locked up. And, Fellinger added, the company has just released an entry-level system with 60 drives offering up to 2 GBps performance.

For me, anyway, an already interesting and (relatively) new market just got that much more intriguing.


May 20, 2008  9:41 AM

EMC’s Maui “conspicuous by its absence” at EMC World

Beth Pariseau

Celerra Man: just one of the many attractions
at this week’s EMC World.

There’s plenty going on out here in Vegas with EMC World in full swing. It seems like there are attendees, press, analysts and EMC’ers stashed in every hotel on the Strip, and about 10,000 people milling around the corridors of the Mandalay Bay Convention Center.

There’s also been plenty of product news so far. EMC finally unveiled its deduping VTLs based on software from Quantum, one of which will also be the first Clariion array to offer drive spin-down, a feature EMC has promised in multiple products going forward. Also on the docket: incremental updates to Avamar’s software and Data Store hardware, and a new mid-market package for Networker.

But the subject on everyone’s lips in the whispering corridors was the one product EMC didn’t spend much time at all talking about: its Web 2.0 storage system, codenamed Hulk and Maui (looks like Hulk’s real name will be InfiniFlex).

While Hulk is shipping, Maui is, to quote StorageMojo blogger and Data Mobility Group analyst Robin Harris, “conspicuous by its absence.” Others who sat in on CEO Joe Tucci’s Q&A with analysts this morning said he was reluctant to answer questions about Hulk and Maui, and his face seemed to darken for a moment when I asked him afterwards if Maui had joined Hulk in shipping yet. But he also gave a straightforward answer, saying it’s not shipping yet but it is scheduled to do so this summer.

Before I could even ask how one was supposed to deploy one without the other, Tucci added, “Maui doesn’t require Hulk and Hulk doesn’t require Maui. They’re both independent but work together.” Furthermore, when both products arrive, EMC “is going to push them both very heavily.”

But for Harris, something doesn’t quite add up. “Joe promised Hulk/Maui in seven months last November,” he said. “EMC showed the hardware at NAB last month. But the software has always been the hard part.”

Word has it Hulk will ship with clustering software from Ibrix. I’ve heard rumblings about what Maui’s major malfunction might be, but nothing definite. Meanwhile, Harris told me that at the show he encountered a customer who’d been given a demo of Maui and described it as a layer of software that sits above local storage pools, “which could serve as a global data repository for multinational companies, tying multiple data centers together.” If that’s the case, it wouldn’t be surprising if getting things synchronized on that kind of scale took some time. And it would also explain why you could run Hulk without Maui, and Maui without Hulk.

But we’ve got another couple of months (it seems) before we find out for sure if the rumors about Maui are correct.


May 19, 2008  8:39 AM

SME backup products in the spotlight

Beth Pariseau

Backup software vendors BakBone and CommVault share the same code base — both platforms came out of work done at the Unix Systems Laboratory when it was owned by AT&T. But in the past week, I’ve heard different stories from both companies.

CommVault reported strong earnings on its last call and says it’s looking into expanding its product into adjacent markets such as data deduplication and records management. There’s also some evidence it’s winning converts from its larger rivals Symantec NetBackup and EMC Networker, and winning more deals over $100,000 as well as over $1 million. That move upmarket isn’t by accident, either: CEO Bob Hammer said on the earnings call that building higher-end business is CommVault’s goal for the remainder of this year.  Wall Street analysts are still dinging them over operating margin issues, but that’s neither here nor there for those of us more concerned with the technology side of things.

According to analysts, there are a few things making CommVault hot right now – a transition to disk-based backup, the rising popularity of VMware, and a Windows Server OS refresh. CommVault’s got a pretty good story in each of those categories, but the real trump card according to industry observers is its unified software platform that integrates backup with archiving and replication.

It would seem BakBone shares several parts of that story as well, but it has been a much quieter company. Like CommVault, it has an integrated platform, but it has chosen to focus on data protection rather than expanding into adjacent markets. From a business standpoint, the two have gone in different directions: while CommVault has completed an IPO and had success in the public markets over the past two years, BakBone got booted off the Toronto Stock Exchange in 2004 for accounting issues it still hasn’t solved. But BakBone has continued to move forward with its product line, and in the last couple of weeks it added support for SharePoint archiving (something many competitors rolled out last year) and new integration with VMware Consolidated Backup (VCB).

BakBone previously offered integration with VCB, according to senior product marketing manager Gary Parker, but like many of its competitors in data backup, it required scripting based on a publicly available API from VMware. With version 8.1 of its software released today, BakBone has packaged up those scripts into new software agents that sit on the client and VCB proxy server, allowing VCB management through the BakBone NetVault interface.

Pretty nifty, and according to Taneja Group analyst Eric Burgener, not necessarily something everybody’s doing (IBM’s TSM and EMC Networker, for example, don’t). But plenty of competitors have beaten BakBone to the punch, including NetBackup and CommVault. (Symantec’s Backup Exec still has no VMware client. What’s up with that?) And according to Burgener, “there is some value to it, but it’s not rocket science.”

According to BakBone VP of marketing Jeff Drescher, BakBone’s goal isn’t to be the first to market but to do right by its customers with what it does release. And it does have some blue-chip customers to its credit, though they’re either shy about talking to the press or BakBone hasn’t noised them around much – customers like Yahoo! and GE Capital.

“These customers are using Bakbone to deal with departmental issues, in smaller environments,” Burgener said. “Their pitch is simplicity and covering a broad swath of backup functions.”

But right now I have to say my reporterly curiosity is piqued when it comes to these fraternal twins in the backup market. It’s hard for me to tell right now whether BakBone has truly fallen behind CommVault or has just been quieter.

Drescher also pointed out the lag between announcement and adoption of emerging technologies, saying BakBone is following other vendors with some product features, but getting them to customers right on time.

At least one customer I’ve talked to, though, said he’d been waiting for the VCB integration BakBone has just added. “We’ve waited a little bit longer than we would have liked,” said Bryan Vonk, technical specialist for the Vancouver Police Department. But he added that he watched admins at the City of Vancouver’s main data center wrestle with the scripting for VCB and was content to wait for BakBone. Furthermore, he said, now that that issue has been addressed, he has no pressing items remaining on his wish list at the moment. So we’ll just have to wait and see how this tale of two backup products plays out from here.

***

Meanwhile, another vendor marketing to midsize companies and departments, Atempo, also made incremental updates based on a desire to branch out its product, much like CommVault. In Atempo’s case, it has added a Mac and Linux backup client to its LiveBackup PC and laptop CDP software, support for more granular administrative roles than just “admin” or “user,” support for a group management console, and more scalability thanks to a change in how the product’s underlying database arranges data.

Like CommVault, Atempo is looking to take this product further upmarket than it has played in the past. That’s especially interesting when you think about how storage giant EMC and others are trying to figure out how to take their products downmarket at the same time. Which way is the market heading? I’m so confused.


May 16, 2008  10:31 AM

Grass on the roof and data center cooling

Beth Pariseau

(Photo courtesy 416style on Flickr)

Heard a story on the news yesterday and couldn’t help but wonder if it might have implications for green data centers. 

It was a report on green roofs, an emerging practice of placing a layer of soil on the flat roof of a building or house and then planting vegetation. It’s chiefly done to compensate for the effect of deforestation on air quality in cities, and to manage stormwater runoff more effectively in the concrete jungle.

But there’s another effect green roofs have, according to the report: the average flat roof can climb to between 150 and 200 degrees Fahrenheit in summer months, exposed as it is under the hot sun. A green roof can bring that temperature down to the vicinity of 80 or 90 degrees, alleviating some of the stress on the building’s air-conditioning system and burning less energy to cool it.

I’m sure I don’t have to explain the implications of this for helping to cool data centers, provided they’re in the right location. As it turns out, it would seem there are at least a few people out there who got that idea as well.

