Storage Soup


February 28, 2008  12:40 PM

Hard drives coming to a theater near you

Beth Pariseau

At a media event for BlueArc in Boston yesterday, I spoke to Tom Burns, director of post-production infrastructure for Technicolor, about an interesting project going on with movie studios and storage.

It’s called the Digital Cinema Initiatives project, a joint venture of Disney, Fox, Paramount, Sony Pictures Entertainment, Universal and Warner Bros. Studios that aims to convert movie theaters from film projection to digital delivery. Eventually, Burns told me, the studios want to stop shipping heavy, expensive film prints to theaters and instead send movies on FireWire hard drives that are unlocked for playback with license keys.

If you think you’ve got problems with analog tape in the IT world, consider the plight of the movie projectionist, who must splice together film prints and align them just so on a massive platter more than a yard in diameter. Splice it wrong, and part of the movie will be upside down and backwards, and the whole thing will need to be unspooled from the platter and re-spliced. Film also often spontaneously unspools itself from the platter if not fed through the projector at the right speed. I worked at a movie theater in college and have seen this tangled, panicked phenomenon in action. It’s sometimes called a “brain wrap” for the looping patterns the loose film makes on the platter/floor, and it’s not pretty.

A move to digital delivery would make it easier for theaters to transfer movies between screens, as they frequently do when older releases move to smaller auditoriums to make room on bigger screens for the latest premieres. It could also eliminate another custom of the film-projection world that’s costly in terms of the cinema’s time: the tech screening, usually held the night before a movie opens to make sure everything’s in working order. But that’s a downside for theater employees, who are often asked to be the audience at tech screenings and point out any flaws in the print. With those screenings gone, they’d lose their free sneak previews.

Speaking of sneak previews, BlueArc execs gave customers, partners and the media a look at the 2008 overall strategy for the company, which still plans on going public. The SEC-imposed quiet period on BlueArc led to some humor in the form of a revenue-growth slide with no numbers on it. CEO Mike Gustafson gamely posed for a photo in front of it for me:

BlueArc CEO in front of a funny 'quiet period' revenue slide

February 27, 2008  9:07 AM

EMC serves up new pricing for Mozy

Dave Raffo

The Mozy online backup service has gotten considerably more expensive since EMC acquired it from Berkeley Data Systems.

After buying Berkeley Data Systems for $76 million last October, EMC placed Mozy into its EMC Fortress group of services, expanded the product line to include an enterprise version and split its MozyPro for small businesses into two levels: desktop and server. But new customers must pay a heavy premium for their server backups.

In an email to customers, EMC said its MozyPro desktop service retains its price of $3.95 per license plus 50 cents per GB per month, while MozyPro server costs $6.95 per license plus $1.75 per GB per month. That means a one-server license with 10 GB of storage that would have cost $8.95 per month under the old model will be $24.45 under the new one. A 20-server license with 500 GB goes from $329 to $1,014.
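The new math is simple per-license-plus-per-GB pricing, so if you want to see where those numbers come from, here’s a quick back-of-the-envelope sketch (the rates are the ones EMC quoted; the little helper function is just for illustration):

    # Minimal sketch of per-license-plus-per-GB pricing, using the rates above
    def monthly_cost(licenses, gb, per_license, per_gb):
        """Monthly bill: flat per-license fee plus a per-GB charge."""
        return licenses * per_license + gb * per_gb

    # Old single-tier MozyPro rate vs. the new MozyPro server rate
    print(round(monthly_cost(1, 10, 3.95, 0.50), 2))    # 8.95
    print(round(monthly_cost(1, 10, 6.95, 1.75), 2))    # 24.45
    print(round(monthly_cost(20, 500, 3.95, 0.50), 2))  # 329.0
    print(round(monthly_cost(20, 500, 6.95, 1.75), 2))  # 1014.0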

Before EMC acquired Mozy, MozyPro included server backup in its single tier. EMC has added features to the new server tier, such as support for Microsoft Windows Server operating systems and VSS writer backup and restore for Exchange, SYSVOL and Active Directory, but customers will pay a stiff price for them.

MozyPro customer Jason Powell, IT director at Granger (Indiana) Community Church, wrote about the price increase on his blog, concluding: “An approx (sic) quadrupling in price seems ridiculous to me, but what do I know?”

Roy Sanford, vice president of EMC Fortress, defended the price increase by saying it affects only one Mozy service and EMC is grandfathering the price for licenses purchased before March 1. “There’s nothing surreptitious going on here,” Sanford said in a statement emailed to SearchStorage.com. “We’re not trying to institute enterprise pricing across the board. The price changes that were announced to customers affects only one out of five Mozy offers and certainly can’t be misconstrued as a price change across the board.”

EMC also offers a Mozy Enterprise service that was not available from Berkeley Data Systems. The Enterprise service costs $5.25 per month per desktop or laptop plus 70 cents per GB, and $9.25 per Windows Server plus $2.35 per GB protected per month.

Ridiculous pricing or not, Powell is considering an upgrade to Enterprise edition because it lets customers seed a 2 TB USB drive from EMC for their first backup instead of uploading data online.

“Guess I’ll be making some calls to Mozy this week and see what makes the most sense for GCC,” he wrote on his blog.

He may be getting calls from Mozy competitors. Online backup vendor Intronis Technologies already launched a Mozy Migration Plan to induce customers to switch. Intronis says it will charge no license fees for Mozy customers who switch.


February 22, 2008  9:57 AM

Apple’s move away from hardware lock-in to low-cost generic arrays is a shrewd one

Tory Skyers

I’ve been seeing the scuttlebutt about Apple and Promise Technology and couldn’t help but add my two cents about how many Promise arrays I’ve seen pop up lately.

Last week, while installing our IBM N-series, I saw a couple of admins installing a multi-shelf Promise array. Peering through the cages in one of our colo areas, I’ve spotted quite a few Promise and generic arrays. Walking the aisles in the areas I have access to, I’ve seen a rapid uptick in installations of arrays from off-Broadway-brand vendors.

We own a small (5 TB) Promise VTrak array that we use for limited-duty validation and testing (we bought it before the StoreVault was released). I like it — it certainly fills the need and does what it’s supposed to do. I can buy any brand and size of SATA hard drive I want, and the management tools come with the product at no additional charge. I was able to set it up in about 30 minutes, and after the drive initialization (which took close to 24 hours!) I was all set and ready to go, all without a PhD. I even did the guy thing and didn’t read the instructions! I don’t know about you, but I can’t really ask for more, considering the price.

I’ve seen the folks at Apple accused of being stupid or lacking foresight in the past (Steve, I’m still upset about my Newton!!). In recent years, the accusers have usually ended up dining on crow, given that Apple’s products consistently create trends. (Anyone up for an iDog?) I firmly believe Apple knows something about the trend toward lower-cost generic arrays using generic disks in generic trays; otherwise (at least in my mind, anyway), a company that prides itself on solidly locking you into its hardware when you use its software would have gone with a more mainstream storage vendor, or simply rebranded something and inserted a v-chip.

You’ve read me typing this for a couple of blog posts now, but I’ll type it again: small to midsized SANs for under $50,000, with simple software and easy-to-use interfaces, are going to be the market in the coming years. I’ll go a step further and say the days of proprietary drive trays and “enterprise-class” drives are numbered too.

I seem to recall another big vendor that often gets maligned for lacking foresight snapping up a low-cost storage array vendor recently.

More importantly, Apple knows how to make difficult things easy and stylish. Not to mention that people who OEM for Apple (Foxconn, Acer et al.) are quite happy pumping out the iWhatever. It wouldn’t be too far-fetched to see Promise doing the same.

If there was ever a company that could pull off making a product that does easy data migration … see where I’m going with this?

Couple Apple’s really-easy-to-use SAN software with low-cost generic arrays and you could see a quick rise to major-player status in the storage software market… for a company many thought would be out of business by now, bringing in another company that “real” storage vendors look down their noses at.


February 21, 2008  11:13 AM

New SSDs are on their way

Beth Pariseau

Or so says Pliant Technology, a new company that just received $8 million in Series A funding. It’s made up of former execs from storage companies including Maxtor, Quantum, Fujitsu and Seagate.

The cast of characters is as follows:

  • Jim McCoy, Chairman – Co-Founder of Maxtor and Quantum
  • Amyl Ahola, CEO – Former CEO of TeraStor, vice president at Seagate and Control Data
  • Mike Chenery, President/Founder – Former vice president of advanced product engineering at Fujitsu
  • Doug Prins, Founder/Chief Architect – Former consultant for Fujitsu, Emulex, and QLogic
  • Aaron Olbrich, Founder/CTO – Formerly at Fujitsu and IBM

And that’s just about all we know in detail right now about Pliant. I spoke with McCoy this week about the announcement of funding; he said the company has decided to come out of stealth now, but has been working on perfecting the solid-state drive for the last two years.

The new company is aiming to improve the solid-state drive with products due out by the end of this year; alpha and beta testing is scheduled to begin this summer. Pliant’s drives will perform better than current flash drives, “closer to what the DRAM people have,” McCoy claims. The drives have also been “designed for a 24×7 operating environment, with error rates equal to or better than hard drives.” Specifically, the drives are going to tackle an issue McCoy says has been a dirty secret in the solid-state game: read disturb, a phenomenon in which reading data from one portion of a flash drive degrades nearby bits.

Existing solid-state vendors have tried to address this problem, as well as issues with write endurance, using error correction codes (ECCs). But according to McCoy, ECC is not enough. “ECCs are a minimal starting point,” he said. “By themselves, they are not sufficient.”

If that gets you all wound up about the state of solid state, though, you’re going to have to wait to find out how exactly Pliant plans to build a better mousetrap. The specifics of its technical approach are “confidential at this point,” said McCoy.

Will the new and improved Pliant drives be able to do anything about the acquisition costs that are keeping many users away from solid-state drives right now? “There won’t be much of a price penalty over other [SSD products],” McCoy said, which I’ll take as a no. McCoy did point out that long-term, solid state is more cost-effective than over-provisioning hard drives.

The problem is, users rarely start from scratch; many will have over-provisioned hard drives already, and would need to start by adding very expensive SSDs on top of already very expensive assets. “Customers are reaching the end of possible performance with hard drives,” McCoy countered. “And new systems [like EMC’s Symmetrix] are going to start going out with a combination of drives.”

According to research from IDC, performance and mobility-related requirements will propel SSD revenues from $373 million in 2006 to $5.4 billion in 2011, a 71% CAGR. And I’ve heard many in the industry lament that while the capacity of spinning drives has been going up continually, the ability to get data off those drives faster is not keeping pace. Something will obviously need to change.
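For the curious, IDC’s 71% figure checks out as a straightforward compound-annual-growth-rate calculation over the five years from 2006 to 2011:

    # Quick sanity check of IDC's 71% CAGR figure
    start, end, years = 373e6, 5.4e9, 5           # $373M in 2006 -> $5.4B in 2011
    cagr = (end / start) ** (1 / years) - 1
    print(f"{cagr:.1%}")                          # 70.7% -- roughly the quoted 71%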

Meanwhile, the answer to the question of exactly how Pliant’s products propose to be a catalyst in that equation remains in stealth for now.


February 19, 2008  12:03 PM

Open-source grid storage project goes commercial

Beth Pariseau

Some of you may have heard of Cleversafe, until now an open-source research project working to develop the prototype of a system that would automatically spread data over geographically dispersed grids while encrypting it.

Cleversafe has been making slow but steady progress over the last year and a half or so and has been keeping me updated. Its concept is an interesting one: a way to automate the “chunking” of data over geographically dispersed nodes through new algorithms that also make each individual chunk unreadable, essentially combining primary storage with disaster recovery and data security all in one go, as our friends across the pond would say.
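To make the concept concrete, here is a toy sketch of the simplest possible dispersal scheme: split data into n slices such that no single slice is readable on its own and all of them are needed to rebuild it. To be clear, this is not Cleversafe’s algorithm (the company keeps its information dispersal algorithms to itself, and its scheme is designed to rebuild data even when some slices are lost); it just illustrates the chunk-and-scramble idea.

    import os

    def xor_all(chunks):
        """XOR a list of equal-length byte strings together."""
        out = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, b in enumerate(chunk):
                out[i] ^= b
        return bytes(out)

    def disperse(data, n):
        """Toy n-of-n dispersal: n-1 random slices plus one XOR 'keystone' slice.
        Any slice on its own is indistinguishable from random noise."""
        random_slices = [os.urandom(len(data)) for _ in range(n - 1)]
        keystone = bytes(a ^ b for a, b in zip(data, xor_all(random_slices)))
        return random_slices + [keystone]

    def reassemble(slices):
        """Recover the original data by XORing every slice back together."""
        return xor_all(slices)

    slices = disperse(b"storage soup", 4)        # spread across four imaginary nodes
    assert reassemble(slices) == b"storage soup"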

So far, Cleversafe has launched itself as an open-source project, invited developers to play with the Dispersed Storage Network (DSNet) prototype, and signed up 14 internet service provider (ISP) partners to pilot the service. This spring, those partners will begin to sell some actual software and hardware to go with the pie-in-the-sky concept.

The new products, which will be generally available May 31, include a storage node, called the Cleversafe Slicestor; a storage router, called the Cleversafe Accesser; and a software management console called the Cleversafe Manager. Each Slicestor will hold 3 TB raw in a 1U pizza box. There is no formal restriction on the number of Slicestors and Accesser nodes in one grid, but the first products will be offered in groups of 8 and 16 nodes, with a 4:1 ratio of storage to router nodes recommended. The nodes can be kept in a single rack in one location or distributed globally. Cleversafe says its business model will be to offer its grids directly to enterprises, as well as ISPs and managed service providers who can offer Cleversafe storage as an online or hosted service.
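Run the numbers on those launch configurations and the raw capacities are modest. Here’s the arithmetic, assuming the recommended 4:1 split of Slicestors to Accessers (usable capacity would be lower once slices carry redundancy):

    TB_PER_SLICESTOR = 3                        # 3 TB raw per 1U Slicestor

    def grid_capacity(total_nodes, ratio=4):
        """Split a grid 4:1 storage-to-router nodes and total the raw TB."""
        slicestors = total_nodes * ratio // (ratio + 1)
        accessers = total_nodes - slicestors
        return slicestors, accessers, slicestors * TB_PER_SLICESTOR

    print(grid_capacity(8))     # (6, 2, 18)  -> roughly 18 TB raw
    print(grid_capacity(16))    # (12, 4, 36) -> roughly 36 TB raw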

This is the kind of stuff that really intrigues me in the storage market–the kind of stuff that makes me envision Conan O’Brien with a flashlight under his chin singing “In the Year 2000…” The futuristic stuff. As a general, all-around nerd, it’s interesting to me to talk to the people planning the next generation of technology, to learn what the challenges are and what goals their sights are set on. The Cleversafe concept is a particularly interesting one to me given the global-scale DR challenges we’re beginning to face.

When we chatted about it last week, though, Taneja Group founder Arun Taneja tempered my enthusiasm with the reminder that future products are just that: in the future, and the proof is in the pudding. “At the concept level I’ve never had any issue with Cleversafe,” he said. “But while the concept is interesting, provability will take a long time.” Cleversafe must show its product can support multi-tenancy environments reliably, without mixing up data chunks, and must show that its performance and ability to recover data are what it says they are.

And while some of the deepest innovations in technology are happening around storage, Taneja also reminded me that the market for storage products remains more conservative than most. “Even if Cleversafe can prove that this is the best thing since sliced bread, the GMs, Fords and Pepsis of the world would have to test something like this for years before they’d trust it,” he said.

So we might not be looking at The Storage Internet ™ anytime soon. But I’m going to keep watching.


February 15, 2008  3:25 PM

Another service-provider infrastructure gets the hiccups

Beth Pariseau

Amazon’s S3 online storage service suffered an outage this morning for several hours, echoing the outage suffered by email service provider RIM last week. While RIM’s outage affected CrackBerry addicts with alternatives to email, the Amazon outage may have affected Web-based companies relying on S3’s storage to deliver core services. Not good.

However, one S3 user I talked to today, SmugMug CEO Don MacAskill, said his site didn’t feel a thing. “None of our customers reported any issues–we haven’t seen any problems that are customer facing,” he said.

But there’s also an important factor that may have led to SmugMug’s resiliency: the fact that after another outage last year, SmugMug started keeping about 10% of its data in a hot cache on-site. “It could have been that the hot cache was adequate for the 2 or so hours it was going on, or it could have been that for some people the outage was intermittent,” he added.
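SmugMug hasn’t said exactly how its hot cache works, but the general pattern is easy to sketch: check local storage first and only call out to S3 on a miss, so a cloud outage hurts only the requests that fall through. The cache path and bucket below are made up, and the S3 call follows the boto3-style get_object API purely for illustration.

    import os

    CACHE_DIR = "/var/cache/hot"                 # hypothetical on-site hot cache

    def fetch(key, s3_client, bucket="my-photo-bucket"):
        """Serve an object from the local hot cache; fall back to S3 on a miss."""
        path = os.path.join(CACHE_DIR, key)
        if os.path.exists(path):                 # cache hit: S3 never gets touched
            with open(path, "rb") as f:
                return f.read()
        obj = s3_client.get_object(Bucket=bucket, Key=key)   # miss: go to the cloud
        data = obj["Body"].read()
        with open(path, "wb") as f:              # warm the cache for next time
            f.write(data)
        return data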

Meanwhile, some users were still reporting issues as recently as five minutes ago on Amazon’s Web Services Developer Connection message board. According to an Amazon.com official response on the thread about an hour ago, “This morning’s issue has been resolved and the system is continuing to recover. However, we are currently seeing slightly elevated error rates for some customers, and are actively working to resolve this.  More information on that to follow as we have it.”

Their businesses aren’t the same, but I think this ties in with what I was saying in my post about RIM’s Blackberry meltdown–as more and more data “eggs” are put into centralized service-provider “baskets,” more and more of them are going to get broken, especially as the service-provider market ramps up.

Or as TechCrunch put it:

This could just be growing pains for Amazon Web Services, as more startups and other companies come to rely on it for their Web-scale computing infrastructure. But even if the outage only lasted a couple hours, it is unacceptable. Nobody is going to trust their business to cloud computing unless it is more reliable than the data-center computing that is the current norm. So many Websites now rely on Amazon’s S3 storage service and, increasingly, on its EC2 compute cloud as well, that an outage takes down a lot of sites, or at least takes down some of their functionality. Cloud computing needs to be 99.999 percent reliable if Amazon and others want it to become more widely adopted.

Growing pains may have had something to do with it, according to Taneja Group analyst Eric Burgener. “There’s less of this going on than there used to be, but this is one of those things that gives people pause about services,” he said. A focus on secondary storage and storage for small companies has made this crop of service providers more successful than the SSPs of the bubble days, and even where companies are relying on services like this for primary storage, Burgener argued that the services option is still the better bet. “For small internet businesses services are still a perfect play–they allow businesses to start up rapidly without the kind of capital expense or infrastructure they need for an in-house system.”


February 15, 2008  11:59 AM

A storage reporter’s shameful secret comes to an end

Beth Pariseau

I feel the need to make a confession here. Up until yesterday, despite spending a generous portion of my waking hours covering data backup, disaster recovery and data protection, I myself did not have a backup plan.

I do digital photography in my spare time, and creative writing outside work, and I’ve been a digital music addict since the advent of Napster. So I have about 100 GB on two IDE drives inside a Windows XP machine custom-built for me by a highly geeky friend. And it’s just been sitting there, waiting to be snatched away into the ether.

Then another friend of mine told me about how his MacBook hard drive crashed. On his birthday. While he also had the flu.

He told me how his entire visual design portfolio, an important part of his resume for the business he’s in, has been lost, along with all of his digital photographs, many of which he didn’t have posted on Flickr or stored anywhere else.

He went on to tell me that his costs for trying to recover the data from the drive are going to run him upwards of $2,000–if he’s lucky. It could be cheaper, but that would mean less of his data has been recovered, and so now he finds himself in the position of hoping he’ll have to spend more money.

It’s a bittersweet subject for him that so many people he knows, myself included, have credited his experience with finally getting them off their butts and backing up. But that’s the reality.

I ended up going with the 500 GB Western Digital MyBook, because that’s what my friend also ordered once he learned his lesson the hard way, and he’s far more technical than me, so I trust his judgment. The MyBook came with Memeo’s AutoBackup and AutoSync software, of which I’m only using the former. It also came with a bunch of Google software including Google Desktop, which I found rather odd.

Having covered data storage for the enterprise, I’ve had a chuckle whenever I’ve checked on the initial backup job’s progress. Granted, it’s got a QoS feature that cedes system resources to the PC, but let’s just say I’m not seeing the kind of data transfer rates with this thing I’m used to hearing about. It’s been funny, after being immersed in systems that perform at 8 Gbit or 10 Gbit for a few years, to watch my little PC poke along at what seems like 1 MB/hr, if that.

But still. At least I have a backup. Finally. And I can finally rid my closet of that skeleton.

Now my issue becomes off-site disaster recovery. It’s far more likely that my hard drive(s) will crash than that my house will be napalmed or something (knock on wood), but no sooner had I told Tory that he could stop bugging me about backup, than he started bugging me about taking the drive to my office once the data transfer is done.

But the AutoBackup software, like so many low-end and consumer backup offerings, is set to automatically back up changed files, and what I told Tory was that I like having a low RPO over here. And I made that napalm comment, I’ll admit (I can just feel karma coming to get me). So I’m thinking about some kind of backup SaaS for off-site DR, but capacity with those services comes at a much higher premium than it does in 3.5-inch external SATA. And so you know what that means…data classification!
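In practice, my “data classification” will probably amount to something like the sketch below: only irreplaceable files (photos, writing) get pushed to the paid off-site service, while the music collection stays on the local MyBook. The extensions and folder layout are hypothetical, obviously.

    import os

    # Files worth paying off-site capacity for; everything else stays local-only.
    OFFSITE_EXTENSIONS = {".jpg", ".nef", ".raw", ".psd", ".doc", ".txt"}

    def classify(root):
        """Walk a folder tree and split files into off-site and local-only lists."""
        offsite, local_only = [], []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.path.splitext(name)[1].lower() in OFFSITE_EXTENSIONS:
                    offsite.append(path)
                else:
                    local_only.append(path)
        return offsite, local_only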

I may be poking along at 1 MB/hr, but it all feels like a slow-motion, small-scale version of the issues I cover every day. It’s interesting to see firsthand how “Digital Life ™” is, in fact, blurring the boundaries between home and business computing.


February 14, 2008  3:19 PM

Blackberry outage a storage issue?

Beth Pariseau

As approximately the last person in the Western Hemisphere not to own a PDA, I escaped the Great Blackberry Outage of Aught Eight last week, and got to have that much more time to be smug about my lack of dependence on such a thing before I inevitably get one and grow so dependent on it I need Tommy John surgery on my thumbs.

This week, though, the plot thickened for storage folks as it was revealed that the outage was caused by a failure during a systems upgrade. According to Reuters, the outage was caused by an upgrade to a data routing system inside one of the company’s data centers. In the past, RIM suffered an outage to its Blackberry service because of cache upgrades. Drunken Data auteur Jon Toigo thinks they’re still having storage problems, and cites an AP report on MSNBC saying the failure happened during a system upgrade designed to increase capacity.

Meanwhile, Reuters seems to imply that at heart, data growth is what bit RIM. “RIM has been adding corporate, government and retail subscribers at a torrid pace and has had to expand its capacity in step to handle increased e-mail and other data traffic. Its total subscriber base sits at about 12 million according to latest available data.”

The fact of the matter is that no system is failproof–but I think Reuters brings up a good point. We’re opening up new frontiers in massive multi-tenancy and creating new and unprecedented demands on computer systems; we’re also consolidating data into the hands of service providers like RIM. My sense is we’re going to start seeing more of this kind of issue as these trends continue, especially as more and more new services come online. So maybe I’ll just rely on good old dinosaur Outlook for a little while longer.


February 14, 2008  2:49 PM

Of wizards and storage skills

Beth Pariseau

After my posts on militant dolphins and black holes, you could be forgiven for taking that headline literally, but this time I’m referring to the software kind of wizard, not the pointy-hat/Harry Potter kind.

What prompted this post were two stories I saw this week. First, Reldata announced new adaptive software wizards for its storage gateways and I had an in-depth conversation with the company’s CEO, David Hubbard, about that very subject. Second, everyone’s favorite, Storage Magazine, ran a trends story this month headlined “Storage staffing shortage looms.”

Reldata’s adaptive wizards are a little different from some of the others that companies like HP have announced for low-end products, in that they’re not just there for setup. Rather, the adaptive wizards are there for several stages of deployment for the gateway’s iSCSI SAN functions (NAS, replication and clustering wizards are still on the to-do list).

We’re hearing a lot about ease of use these days; even I have been guided through setting up volumes on disk arrays from emerging storage companies by way of proving, “See! Anyone can do it!”

But are we headed toward the point where that will literally have to be true?



February 12, 2008  2:03 PM

Dell keeps it in the family

Dave Raffo

When Dell purchased email archiver MessageOne for $155 million today, the computer giant didn’t have to welcome the small startup into the family. MessageOne has been in the Dell family from the start, literally.

MessageOne was co-founded by Adam Dell, brother of Dell founder Michael Dell. Michael Dell also had a financial interest in MessageOne. The Dell founder, his wife, parents, and a trust for his children are investors in two investment funds that backed MessageOne. Adam Dell manages the funds, and served as MessageOne’s chairman.

So when the smoke clears after the deal, Adam Dell will receive around $970,000; Michael Dell, Susan Dell and their children’s trust will receive a total of around $12 million; and Dell’s parents will receive around $450,000. According to the press release Dell issued announcing the deal, the $12 million paid to Michael and Susan Dell and their children will be donated to charity.

To the Dells’ credit, they disclosed these numbers in the press release. The company also claims Michael Dell was not involved in the negotiations for MessageOne. Dell’s directors – excluding Michael Dell and CFO Don Carty – handled negotiations and received an opinion from Morgan Stanley & Co. that the price was fair to the company.

You can expect Michael Dell to be especially careful, considering the company had some accounting problems with the SEC in recent years that were part of the reason the founder came back to replace Kevin Rollins as CEO. And the acquisition will have to pass muster with regulatory agencies. Still, the results of this deal will be watched especially closely over the next few months. While Dell can easily justify acquiring email archiving and storage software as a service (SaaS), there will be questions about whether the price was right — even if it is merely tip money compared to the $1.4 billion paid for EqualLogic.

So if Dell doesn’t see a quick boost from MessageOne’s products and services, Michael Dell will have to explain more than why the integration is taking longer than expected. He’ll have to convince investors and skeptics that the deal wasn’t just a nice payday and perhaps a lifeline for his brother’s company.

