Storage Soup


September 15, 2008  11:00 AM

Sun’s “Thor” finds a new green friend

Beth Pariseau

A startup called greenBytes emerged from stealth today, bringing with it modifications to Sun Microsystems’ ZFS file system that will add power-saving features to the SunFire X4540 “Thor” storage server.

greenBytes calls its proprietary enhancements to the file system ZFS+. The software bundles in deduplication, inline compression and power management that uses disk drives’ native power interfaces. Drive spin-down is becoming a checklist item as big vendors like EMC and HDS add what was once a bleeding-edge startup feature to their established arrays. However, greenBytes also claims that its enhancements to ZFS heuristically store data on the smallest number of disks possible, freeing up more drives to be put into a “sleepy” spun-down state. (Interesting…Sun had similar things to say about ZFS non-plus when it came to laying out data for solid-state disks.)
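
greenBytes hasn’t published how its placement heuristic actually works, so here is only a rough sketch of the general idea: pack incoming writes onto the fewest already-spinning drives and wake a sleeping drive only when you must. The disk records, capacities and fullest-first policy below are all my assumptions, not greenBytes’ algorithm.

```python
# Rough illustration only; greenBytes hasn't published its placement algorithm.
# Greedy packer: keep writes on the fewest "awake" disks so the rest can stay
# spun down. Disk names and sizes are made up.

def place_block(disks, block_size):
    """Pick a disk for a new block, preferring drives that are already spinning."""
    # Try active disks first, fullest-first, to avoid waking anything else.
    active = sorted((d for d in disks if d["active"] and d["free"] >= block_size),
                    key=lambda d: d["free"])
    if active:
        target = active[0]
    else:
        # No spinning disk has room: wake the sleeping drive with the most space.
        sleeping = [d for d in disks if not d["active"] and d["free"] >= block_size]
        if not sleeping:
            raise RuntimeError("pool is full")
        target = max(sleeping, key=lambda d: d["free"])
        target["active"] = True  # spin it up
    target["free"] -= block_size
    return target["name"]

disks = [{"name": f"d{i}", "free": 1_000_000, "active": i == 0} for i in range(8)]
for _ in range(5):
    print(place_block(disks, 128_000))  # all five writes land on d0; d1-d7 stay asleep
```

The real trick, of course, is doing this without turning the busy drives into hot spots, which is presumably where the “heuristic” part earns its keep.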

This is part of Sun’s efforts to open up its storage technology to developers, in the hopes of exactly this kind of product development. I’ve talked to some users at big companies who are using Thumper for disk-based backup, directly attached to media servers (mostly Symantec NetBackup), but most of them find the product appealing because of its high-density direct-attached hardware, not necessarily its software features. As Dave Raffo covered for the Soup last week, Sun CEO Jonathan Schwartz is painting a cheery picture of the future for open-source storage, but so far the revenue and market share juries are still out.

September 15, 2008  7:35 AM

VMworld: Hope Springs Eternal!

Tory Skyers

I’ve been on an emotional roller coaster recently. I had a slim chance of making it out to VMworld in Las Vegas this week, and I got really excited. Then, crushing blow, I couldn’t go. Then the skies parted, birds chirped and a harpist showed up out of nowhere: I could go again. Then disaster. Alas, it wasn’t meant to be this year. I won’t be able to make the trek out to the show for the product I’m sorely waiting to see released for my toaster so I can ramp up total resource usage making my breakfast (the harpist gave me a dirty look before she packed up and left).

So instead, I decided to write “what I want from Vmworld” here.

1) Non-Windows support for the Virtual Center/Infrastructure stack

Really, why does it HAVE to run on Windows? MySQL and friends run on just about everything, so what about the server stack is so tied to the Windows code base that it couldn’t run on some other OS, or even their OWN OS a la ESX? I’ve been running into more and more folks on my client list who don’t want to manage Windows in order to manage their virtual infrastructure. I’m looking forward to VMware announcing an alternative to running on Windows.

2) Windows 2008 support

For folks running a Windows-centric shop, Windows 2008 is a reality. I have a client who runs it exclusively now: if an app can’t be qualified on 2008, it can’t run in their shop, period, no exceptions. They use 2008, they love it and they aren’t looking back. Funny thing: even though 2008 has been available to the public since early this year, the Virtual Infrastructure stack does NOT run on 2008 without unsupported shoehorning. PLEASE PLEASE PLEASE release (not announce, but release) Windows 2008 support at VMworld; the harpist is counting on you!

3) A new license server mode/technology

I’m not sure if it has any fans, but I can tell you that one of my biggest, and in my opinion the most glaring, issues with the entire VI (Virtual Infrastructure) stack is the license server technology. FlexLM can’t be clustered; it can be made highly available, but only if you know how to make Windows 2003 highly available. Now VMware isn’t the only one to use this technology, Citrix does as well; take a browse through the Citrix support forums to see how many friends Citrix made when it started using it. VMware made a similar number of friends, me included. The technology stinks. Ditch it, please! Announce an alternative that can be clustered and is OS agnostic; an application-server-based license model, something like Tomcat, comes to mind.

4) Updated time keeping

The current ESX server technology is pretty good at dealing with time drift among virtual machines on the same host, but across multiple hosts there is still work to be done. Yes, one can use NTP, but when things like time-sensitive audit data can’t stand even a second of drift, NTP becomes an unworkable solution. VMware, you listening? Help us out: let the hosts sync themselves to each other so VMs on separate hosts have precisely the same time.
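
In the meantime, one way to at least keep an eye on how far apart two hosts’ clocks sit is to query each of them over NTP and compare the offsets. This is strictly a monitoring sketch, not a VMware feature; the hostnames are made up, and it assumes the third-party ntplib package and hosts that answer NTP queries.

```python
# Monitoring sketch only (not a VMware feature). Assumes the third-party
# ntplib package is installed and both hosts respond to NTP; the hostnames
# below are hypothetical.
import ntplib

def clock_offset(host):
    """Return the host's clock offset relative to this machine, in seconds."""
    return ntplib.NTPClient().request(host, version=3).offset

offset_a = clock_offset("esx-host-a.example.com")
offset_b = clock_offset("esx-host-b.example.com")
print(f"inter-host drift: {abs(offset_a - offset_b) * 1000:.1f} ms")
```

It won’t fix anything, but it will tell you when the audit folks are about to come knocking.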

5) Physical and Virtual conversion

I’ll kinda give VMware a pass on this one because there are apps from companies like PlateSpin and Vizioncore, but … physical to virtual and back again is a weak spot. If they announce better conversion tools, or, hey, a takeover of one of those companies, I’d be a happier admin.

6) Capacity planning

There are services surrounding the VMware Capacity Planner that third-party vendors offer, similar to IBM’s CDAT study. I understand the ecosystem it feeds, but I think VMware would be better served if the full suite of measurement tools and methodology available to the consultants conducting a capacity exercise were available to the broader public. I’d be willing to bet that the majority of people would still make use of experienced third parties to conduct the exercises, but those who can’t, or whose shops are so small they’re not on the radar of service providers, would be able to take advantage of a great resource. It would also be nice to have the option of ongoing internal audits using these tools and methodologies. Make me wish I could make it, and the harpist’s dirty look that much more meaningful: announce that all the capacity tools are opening up to the public.


September 12, 2008  12:44 PM

HP to EMC: Drop the SPC smoke screen

Dave Raffo

Hewlett-Packard has fired the latest shot at EMC in the battle over performance benchmarks. HP this week posted records for megabytes per second and price-performance in SPC-2 benchmark testing of its XP24000 enterprise SAN array, and immediately called out EMC for its refusal to submit its products to the Storage Performance Council (SPC) for benchmarking.

According to a blog by Craig Simpson, competitive strategist for HP StorageWorks:

EMC, we’re once again throwing down the gauntlet.  Today the XP24000 put up the highest SPC-2 benchmark result in the world.  The top spot for such demanding workloads as video streaming goes to the XP.  Once again, your DMX is a no show.  And once again we challenge you, this time to put up an SPC-2 number.  Every other major storage vendor is now posting SPC results.  Every other major storage vendor is now starting to give customers standard, open, audited performance results to show what they’ve got.  You remain the only vendor keeping your product performance behind a smoke screen of mysterious numbers and internal testing.  We challenge you join us in the world of openness and let customers quit guessing at how low the DMX’s performance really is!

Interestingly, the XP24000 isn’t HP’s own system. It is sourced from Hitachi Japan, and sold by Hitachi Data Systems and Sun as well as HP. And HP’s SPC-1 mark for random I/O operations (SPC-2 is for sequential data movement) was recently surpassed by 3PAR’s InServ Storage System.

But from HP’s standpoint, this isn’t about HDS, Sun, or 3PAR. It’s about going after EMC, which remains resolute in its refusal to take part in SPC testing.

“An oversimplified performance test that doesn’t accurately predict real-world performance is of little value to customers,” an EMC spokesman said in response to HP’s latest challenge.

Until now, the benchmarking skirmish was mainly between NetApp and EMC. It’s been going on for years, but NetApp took it to another level last February when it published benchmarks for EMC’s Clariion CX3-40 that showed it performing worse than NetApp’s FAS3040.

EMC blogger Chuck Hollis then came up with his own “standardized measure” for storage capacity efficiency last month. He pulled HP into the fray by comparing EMC CX4 against NetApp FAS and HP EVA series. (Spoiler alert: EMC came out on top).

And if EMC’s results shock you, then I’m sure you’re equally stunned to learn that HP and NetApp took exception to EMC’s numbers.

“Capacity utilization is important, but there’s no third-party body out there that measures cap utilization,” Simpson said. “We felt Chuck’s position was very skewed. We would love to see them agree to have an independent third-party to pick up the challenge.”

Does anybody besides vendors care about these things? I asked Babu Kudaravalli, senior director of operations for Business Technology Services at Port Washington, NY-based National Medical Health Card, whether benchmarks were a factor when he buys storage systems. He did say he finds SPC-2 more relevant than many enterprise shops would because he runs large queries that entail sequential data. But Kudaravalli bought his two XP24000s last year, long before the latest SPC-2 numbers were released, so he saw the numbers more as vindication than as a buying guide.

“We pay attention to it, but don’t go purely based on SPC numbers,” he said. “Sometimes benchmarks are not relevant, but I was thrilled when I saw the SPC-2 number. When I saw the results, I said I already bought a winner.”


September 12, 2008  9:34 AM

Rackable to replace RapidScale with NetApp

Beth Pariseau

On August 14, Rackable disclosed it was selling its RapidScale clustered NAS business, which was derived from its acquisition of Terrascale last April. Company executives said they were trying to refocus the company on its core competencies after disappointing forecasts for RapidScale. During the company’s Aug. 4 earnings call, execs hinted that a new partnership with a major storage OEM was coming.

This week, Rackable revealed its partner is NetApp.  According to a press release, “Under terms of the agreement, Rackable Systems is joining NetApp’s Embedded Systems Vendor Program and will integrate NetApp storage into [its]… Eco-Logical data center server and infrastructure offerings.”

It remains unclear exactly how this integration will happen. NetApp has a clustered NAS system, OnTap GX, but it won’t be integrated with its other filers until next year. A Rackable spokesperson wrote in an email to me yesterday that GX will be a part of the companies’ collaboration: “We have access to the entire Net App product portfolio and as part of this relationship we intend to collaborate on technical advances and opportunities.  We still believe that there is a requirement in the market for clustered storage and we fully intend to explore the potential of offering On Tap GX within the solutions we will jointly develop.” But an official rollout announcement and plan are still forthcoming.


September 10, 2008  6:13 PM

Ocarina will pay you 10 grand to beat it at data compression

Beth Pariseau

Ocarina Networks, which came out of stealth in April, claims its compression appliance will reduce file data on primary storage systems. Its main competitor, StorWize, applies standard (2:1) compression to files, but Ocarina claims 10:1 compression and the ability to compress already compressed objects, such as video and photos. The company even claims its algorithms can be used to create a 3-D cube of numeric values to represent a photo or video image, so it can recognize elements that it has “seen” before.  Pretty interesting, albeit ice-cream-headache-inducing stuff.
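
If you want a feel for why standard compression tops out around 2:1 on typical files and does next to nothing for photos and video, run any JPEG through an off-the-shelf compressor. The snippet below is just such an illustration; “photo.jpg” is a placeholder for whatever image you have on hand, and the exact ratios will vary.

```python
# Quick illustration: a generic compressor barely touches an already-compressed
# JPEG, which is the gap Ocarina claims its content-aware approach closes.
# "photo.jpg" is a placeholder for any JPEG on your machine.
import zlib

with open("photo.jpg", "rb") as f:
    raw = f.read()

packed = zlib.compress(raw, 9)
print(f"original: {len(raw):,} bytes")
print(f"zlib -9:  {len(packed):,} bytes ({len(raw) / len(packed):.2f}:1)")
# A JPEG usually lands close to 1:1 here, versus roughly 2:1 for ordinary
# office files, which is why format-aware recompression is interesting for
# primary storage.
```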

So it was puzzling to see the announcement of the Ocarina Compression Prize, a $1 million fund that will be doled out in $10,000 increments for each submission that improves on the current best-scoring compressor by at least 3%. Isn’t the idea supposed to be that Ocarina has the most compression expertise in the market?

“A lot of our compression work is already based on prior art,” CEO Murli Thirumale told me. The idea, he explained, is to make this contest a “category builder,” raising interest in the subject of primary storage compression. “A lot of compression work is focused around transmission of files, rather than reducing them for storage. We want to encourage the best minds in compression to address it in that context.”

So I guess it doesn’t matter how many cool algorithms you can bring to the table if there isn’t really a market yet. “As there’s more widespread adoption [for products], clearly [vendors] with a leadership stance will benefit more,” Thirumale said.

“Good compression has a history of coming from independent researchers, open source or anywhere that can foster easy standardization and non-proprietary code,” Taneja Group analyst Jeff Boles says. “So this seems like a pretty good approach to me. Interesting stunt to boot.”  

The initial prize fund will include awards for three categories: JPEG 2000 recompression, H.264 video recompression and an industry file mix for engineering CAD file types. Maybe Riverbed, Silver Peak and Autodesk will jump in on that last one.


September 9, 2008  12:01 PM

Sun CEO dons rose-colored storage glasses

Dave Raffo

The political conventions are over, but Sun CEO Jonathan Schwartz is spinning Sun storage in a way that would make any candidate proud.

In his latest blog, Schwartz points to recent figures from market research firm IDC as validation of Sun’s open storage strategy. Those numbers, released last week, showed Sun with the greatest increase in overall storage revenue among the major vendors – up 29.2 percent. Gartner also chimed in by placing Sun’s growth at 34.7 percent in external storage revenue, again tops among the large vendors.

But it’s too early to praise Sun for a great turnaround. Sun ranks seventh in external storage in both lists – behind EMC, Hewlett-Packard, IBM, Hitachi Data Systems, Dell, and NetApp, and fifth on IDC’s list of all storage sales. In each case, Sun’s market share is in single digits.

Sun’s own figures aren’t nearly as cheery for storage. Sun reported a modest revenue increase of 3.9% year-over-year in its earnings report for last quarter – the same quarter IDC and Gartner were reporting on.

It’s also hard to attribute any gains to the open storage initiative. Sun is growing storage revenue for the same reason almost every other vendor is: much more data is stored digitally than ever before, and that trend is still accelerating.

IDC and Gartner attributed Sun’s rise to big increases in midrange and enterprise disk products, and those systems don’t reflect increases in open storage use. Those systems aren’t even Sun’s IP — they come from OEM deals with Hitachi, LSI, and Dot Hill.

Sun’s Thumper is tied to open storage, and that only generated $100 million in revenue for the fiscal year that ended June 30 according to Schwartz’s blog. Thumper revenue grew 80 percent over last year, but is still “relatively small” as Schwartz put it on the earnings call.

Perhaps there will be reasons to cheer Sun storage soon, although the jury is still out. Sun seems to be keeping up with the market in its embrace of solid state disk and may soon see the fruits of its Fishworks project, which could help drive open source storage. According to Schwartz:

Now, our view is “OpenStorage” (systems built from commodity parts and open source software) will grow far faster than the proprietary storage market. We plan on driving that growth, and over the next few months, you’ll see a tremendous amount of storage innovation targeting the growing breadth of customers wanting better/faster/cheaper/smaller options. Expect to see flash, zfs, dtrace, and good old fashioned systems engineering play a very prominent role in an aggressive push into the storage market.

Time may prove Schwartz right about open storage. But we’ve seen no evidence of any great success yet.


September 5, 2008  3:24 PM

Plasmon, in need of funding, recommends $25 million private equity offer

Beth Pariseau

Optical archiving vendor Plasmon revamped its management team last December and has since rolled out a new marketing strategy focusing on offering multi-tier archiving packages with partner NetApp (see “NetApp Plasmon’s Trojan Horse in Enterprise Data Centers,” July 16).

But according to a statement released by Plasmon Aug. 8, so far that strategy hasn’t been bearing fruit. The company saw disappointing sales for the first quarter of its fiscal year (which began April 1), 20 percent below earlier predictions. “There have been some encouraging signs, including a fast-growing pipeline, especially for our newest products,” VP of global marketing Patrick Dowling said. “We remain committed to our strategy – it’s just that we’re not getting the results from sales yet.”

Today, the company notified investors that it has been approached by a private equity firm (Dowling declined to name the firm, though some reports say it’s a U.S.-based company) with an offer to take the company private for $25 million, or 0.25 pence per share on the U.K. stock exchange where it’s currently listed. It’s not a done deal yet – there’s still due diligence to be done and shareholder approval to secure, among other things. But “it’s our best viable option,” according to a Plasmon statement.


September 3, 2008  12:51 PM

Adaptec adds power management to RAID controllers

Beth Pariseau

Adaptec’s Series 2 and Series 5 SAS/SATA RAID controllers can now spin down disk drives from several drive manufacturers — Hitachi, Fujitsu, Seagate, Western Digital and Samsung.

Adaptec director of worldwide marketing Suresh Paniker said spin-down is already a part of the serial drive interface specs, so no API or special integration is required for Adaptec’s product to put drives into idle mode or power them off entirely. Sensitive data that may be unexpectedly accessed, such as registry information for Windows apps, can be kept on battery-backed cache within the controller.

The controller sets the drives’ rotation, and therefore power draw, at three levels: normal, standby and power-off. The speed and power draw of the standby stage vary by drive manufacturer, but generally a drive on standby will require 7 to 10 seconds to return to normal operation and will draw between 5 and 7 watts. Power-off requires 20 to 40 seconds for the disk drive to spin back up, and though the platters aren’t spinning, the drive’s electronics still draw about 3 watts.
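
Some back-of-envelope math shows why this matters at shelf scale. The standby and power-off wattages below come from the figures above; the roughly 10 watts for a drive left spinning normally, the 48-drive count and the 16 idle hours a day are my own assumptions for illustration, not Adaptec’s numbers.

```python
# Rough savings estimate. Standby (~6 W) and power-off (~3 W) figures come from
# the post above; the 10 W "normal" draw, 48 drives and 16 idle hours/day are
# illustrative assumptions.
DRIVES = 48
IDLE_HOURS_PER_DAY = 16
NORMAL_W, STANDBY_W, OFF_W = 10.0, 6.0, 3.0  # watts per drive

def daily_kwh(watts_per_drive, hours=IDLE_HOURS_PER_DAY, drives=DRIVES):
    return watts_per_drive * hours * drives / 1000

print(f"idle hours left spinning: {daily_kwh(NORMAL_W):.1f} kWh/day")
print(f"spun down to standby:     {daily_kwh(STANDBY_W):.1f} kWh/day")
print(f"powered off:              {daily_kwh(OFF_W):.1f} kWh/day")
```

Under those assumptions the gap between leaving a shelf spinning and powering idle drives off works out to roughly two thousand kilowatt-hours a year, which is the sort of number that gets “green storage” onto purchase-order checklists.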

The feature will be made available this week through distributors to resellers, OEMs and end users, along with new fields in the controllers’ software interface to manage spin-down policies. Getting the controller and other system parts from a distributor is a cheaper way of building your own storage system, and to some extent, the availability of this feature from Adaptec suggests MAID is going mainstream. But obviously, some assembly is required.


September 2, 2008  11:53 AM

Overland’s latest step towards disk

Dave Raffo

Companies like to try to bury bad news by disclosing it on a Friday, so it’s no surprise that Overland Storage issued a press release about layoffs on the day before the three-day Labor Day weekend. There is really little surprise around Overland’s 53 layoffs, which come to 13 percent of its employees. It’s the next step in the company’s transition from a tape vendor to a disk vendor as it fights for survival. Overland acquired the Snap Server NAS product line from Adaptec for $3.6 million in June, but lost $8.6 million last quarter and had $9.7 million in cash at the end of the quarter. The restructuring is expected to save around $10 million a year.

Overland CEO Vern LoForti said on the last earnings conference call that the company is close to completing financing to support the Snap business. A company spokeswoman says that financing is still in place, which means Friday’s layoff did not come about because financing fell through. But even that financing would not be enough without layoffs.

 “… our recent acquisition of the Snap Server business facilitates our entry into the distributed NAS market, and initial customer response has been very positive,” LoForti said in a statement about the layoffs. “The Snap acquisition did, however, result in a substantial increase to our operating expense base. Having recognized the need to rationalize the newly combined business, we have examined all areas of the company in order to streamline and focus on the geographic regions and product initiatives that offer the most immediate return on investment.”

So basically, it comes down to replacing 53 jobs on the tape end of the business with about the same number acquired with Snap.


August 29, 2008  11:36 AM

IBM’s got some ’splainin to do in storage

Beth Pariseau

IBM. What to make of them these days when it comes to storage?

It’s a question I’ve heard asked a lot this week in my conversations with industry watchers and in my blog reading. Much of it came in the wake of the leak (again) on IBM’s European website of information about an upcoming product announcement.

 “This now makes two ‘new platform’ storage announcements from IBM where they simply post a Web page regarding a completely new storage product on their European site and call it a day,” wrote Chuck Hollis in a blog post that got the word out about the leak. “Has IBM decided to focus its marketing efforts elsewhere, and decided not to bring much attention to their … storage business?”

The “announcement” of the XIV clustered block storage array in similar fashion earlier this month prompted similar head-scratching, and, more worrying if I’m IBM, analysts have begun to sit down and dig through the XIV specs IBM has released to the market without a single PR person or marketer accompanying them with a message.

“Where’s the beef?” is the phrase I’ve heard used at the end of the analysts’ analyzing. Robin Harris’s StorageMojo blog post is a pretty good representation of the questions I’m also hearing from others in the wider market.

“I hope there is a cohesive strategy behind the XIV product. But so far I’m not able to even guess what it might be,” Harris concluded. “Maybe the decades of warfare between geeks and suits has so totally paralyzed the product marketing function that even the normal IBM facade can’t cover the cracks. It must be something.”

I’m no PR expert, but I have to believe this is what you have PR and marketing for – to at least try to counteract speculation like this. I’ve heard differing opinions on the reasons for the leaks this week – some close to Hollis’s, and others who say IBM has always done this kind of pre-release Web posting (other companies, like Hewlett-Packard, have been known to do it, too). The problem is, there are many more people nowadays scouring the Web for every morsel of information they can dig up. And IBM’s competitors can quickly criticize those products via blogs, putting spin on IBM’s products before IBM does.

Perhaps the most perplexing part is that IBM is just letting rivals take their shots. As far as I can tell, they haven’t responded at all to the criticisms levied by competitors and analysts. And I can’t figure out why that would be. The cat’s out of the bag. The specs are out there. Pretending it hasn’t been announced yet and declining comment isn’t going to change that.

This isn’t the first wondering I’ve done about IBM this year. I’ve also wondered what the deal is with their DS6000 array (I’ve been assured it still exists, but not much more information is forthcoming). I’ve wondered what the deal is with thin provisioning for the DS8000. My news director, Dave Raffo, asked them what the deal is with MAID, dedupe and thin provisioning at this year’s SNW, and got a lot of fairly vague answers.

In fairness, IBM has since acquired Diligent Technologies, finally adding dedupe to their backup hardware product line. But in the dedupe wars (which you can bet are still raging), IBM has been relatively silent.

Instead, yesterday, they sent out a press release saying they’ve developed and tested SSDs at 1 million IOPS. The press release is chock-full of verbiage about how much more technical and expert IBM researchers are and what a wealth of knowledge IBM brings to the SSD table, none of which I doubt.

But the thing is, that’s it. They’ve tested these things as part of Project Quicksilver. IBM labs are the studliest and most advanced in the world. The end, except for an intriguing but vague passage about some future products –

IBM Research has developed breakthrough data center provisioning technology that automatically understands and balances the utilization of diverse storage components in the information infrastructure, including solid-state storage. Additionally, to get the most value from high performance system resources in storage, IBM Research patented key technologies that help maintain required quality-of-service for higher priority applications.

I asked an IBM spokesperson when we’ll see product come out based on what was tested for this press release, and got the following response. “To clarify, there is no timeline/commercialization plan to discuss at this time and we’re not announcing a specific product.” As for the management software (I’m assuming), “we’re not going into specifics at this point.”

To be fair, I’ve heard some criticism recently of other vendors coming out with product pre-announcements months before product availability. But everyone in the industry has by now either launched or announced they will launch solid-state support. IBM, with its server business and experience developing memory technology, ought to be ahead of this pack. Instead, despite the fact that it’s clear they wouldn’t be testing such a thing if there were no potential revenue stream attached, they aren’t saying much else about it.

Maybe the folks running IBM storage think they don’t have to say anything. They’re still an established behemoth with a large, loyal customer base. The phrase “no one ever got fired for buying from IBM” is still thrown around, and IBM officials have argued that customers are willing to wait to get whatever technology is fashionable until they can get it in vetted form from IBM. Given its ginormous customer base, IBM says, its testing and QA processes are much more involved than other vendors’, and hence it takes longer for new technologies to hit the streets from IBM – but customers are willing to wait for the extra assurance.

Good points all, and storage buyers are a conservative lot. But IBM spent $300 million on a product it hasn’t yet promoted, except to cast it as the new crown jewel of Big Blue storage. Meanwhile, people in the marketplace are beginning to tear it apart before anyone sees a PowerPoint slide. People are beginning to wonder whether it wasn’t really Moshe Yanai IBM was after, and whether it had to buy his startups to get him. People are starting to speculate about what’s going on internally at IBM – about a battle between geeks and suits, or about IBM being ashamed of its storage products and therefore hiding them. Competitors are having a field day, and IBM’s doing nothing to counteract any of it.

What is the deal?

