Storage Soup


September 18, 2008  2:32 PM

VMworld in Pictures

Beth Pariseau

[Photo] As far as I could tell, they did not actually serve liquid genius here.

[Photo] A busy – and vast – show floor.

[Photo] Psst…roadmap stuff over here…

[Photo] Some of the storage roadmap stuff.

[Photo] Still undecided whether the giant rotoscoped heads were cool or freaky. Or if those two things are necessarily mutually exclusive.

[Photo] Members of the Fourth Estate taking in Paul Maritz’s keynote Tuesday.

[Photo] Paul Maritz keynote.

[Photo] The stampeding herd heads for the casino at the end of sessions. Estimated attendance at the show was 14,000.

[Photo] View from Ghost Bar at the Palms, where VMware held a reception for press, analysts and partners Tuesday night.

[Photo] The vastness of the keynote hall cannot be overstated.

[Photo] Keynote cameramen.

[Photo] New product demos at Wednesday morning’s keynote.

[Photo] HP wins my personal Best Swag of the Show award this year for their custom-printed inside-out Oreo cookie.

September 16, 2008  10:06 AM

Riverbed preps primary storage dedupe device

Dave Raffo

Riverbed took the wraps off what it previously described as its “data center product” Monday, unveiling its Atlas primary data deduplication device at its financial analyst conference well before it will be available for customers.

Atlas will use the deduplication technology Riverbed employs in its Steelhead WAN optimization products to shrink primary data. Although the product is just entering alpha and won’t be available until next year, Riverbed execs have been giving reporters and analysts a peek at the technology.

Unlike current deduplication products on the market, Atlas will be able to dedupe data across files, volumes and namespaces, Riverbed marketing SVP Eric Wolford said. Atlas will originally support CIFS, but Wolford said it will eventually work with all file data and then extend to non-file data via iSCSI two or three years down the road.
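
Riverbed hasn’t published Atlas internals, but fingerprint-indexed block deduplication, the general technique behind products in this class, can be sketched in a few lines of Python. The shared fingerprint index is what would let duplicates be found across files and namespaces rather than only within one; the block size, hash choice and fixed-size segmentation here are my illustrative assumptions, not Riverbed’s design.

```python
import hashlib

BLOCK_SIZE = 8 * 1024  # fixed-size blocks for simplicity; real products
                       # typically use variable, content-defined segments

def dedupe(streams, index=None):
    """Store only blocks whose fingerprints haven't been seen across
    ANY input stream (file, volume, etc.); sharing one index is what
    makes the dedupe global rather than per-file."""
    index = {} if index is None else index
    stored = total = 0
    for data in streams:
        for off in range(0, len(data), BLOCK_SIZE):
            block = data[off:off + BLOCK_SIZE]
            fp = hashlib.sha256(block).digest()
            total += len(block)
            if fp not in index:
                index[fp] = block        # first copy: actually store it
                stored += len(block)
            # later copies become a reference to fp instead of raw data
    return index, stored, total

_, stored, total = dedupe([b"report v1 " * 2000, b"report v1 " * 2000])
print(f"dedupe ratio: {total / stored:.1f}:1")  # identical inputs halve
```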

Atlas sits alongside Riverbed’s Steelhead appliances in the data center, in front of NAS file servers. It would typically be used in high availability clusters. One or more Steelhead devices are required for Atlas.

All WAN optimization devices use deduplication to shrink data, but no other vendor has disclosed plans to use that technology on primary data yet.

“I haven’t heard of any other vendor doing this,” Yankee Group analyst Zeus Kerravala said. “It’s a logical follow-on to what they already do. They probably got themselves a one-to-two year head start.”

Most deduplication products today are used for backing up data, although NetApp licenses its dedupe for free for primary data. Because Atlas can further shrink data that has already been deduped, Wolford says it can either compete with or complement NetApp’s deduplication.

But Atlas may be a few years from mainstream adoption. Wolford admits it might take customers years to get used to the idea of adding another device to the network. While Riverbed may eventually build Atlas’ capabilities right into Steelhead, the first version will be a separate device.

“There are people who are going to be nervous about this and want to wait two or three years,” Wolford said.

He says that by offering separate appliances, Riverbed lets customers scale to different types of workloads by adding Steelheads or Atlases.

No pricing is available yet. “This isn’t a product launch,” Wolford says. “We’re just starting an Alpha program.”


September 15, 2008  11:00 AM

Sun’s “Thor” finds a new green friend

Beth Pariseau

A startup called greenBytes emerged from stealth today, bringing with it modifications to Sun Microsystems’ ZFS file system that will add power-saving features to the SunFire X4540 “Thor” storage server.

greenBytes calls its proprietary enhancements to the file system ZFS+. The software bundles deduplication, inline compression and power management that uses disk drives’ native power interfaces. Drive spin-down is becoming a checklist item as big vendors like EMC and HDS add what was once a bleeding-edge feature offered by startups to their established arrays. However, greenBytes also claims that its enhancements to ZFS store data heuristically on the smallest number of disks possible, freeing up more drives to be put into a “sleepy” spun-down state. (Interesting…Sun had similar things to say about ZFS non-plus when it came to the layout of data for solid-state disks.)
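
greenBytes hasn’t detailed its heuristic, but “smallest number of disks possible” sounds like a bin-packing problem. Here’s a minimal sketch of one plausible approach, first-fit placement, with purely illustrative capacities and sizes; the actual ZFS+ logic is surely more sophisticated.

```python
DISK_CAPACITY = 1_000_000_000  # 1 GB per drive, purely illustrative

def place_blocks(block_sizes, num_disks):
    """First-fit placement: pack each block onto the lowest-numbered
    drive with room, so trailing drives stay empty and can spin down."""
    used = [0] * num_disks
    for size in block_sizes:
        for i in range(num_disks):
            if used[i] + size <= DISK_CAPACITY:
                used[i] += size
                break
        else:
            raise RuntimeError("array is full")
    active = sum(1 for u in used if u > 0)
    return active, num_disks - active

active, sleepable = place_blocks([250_000_000] * 10, num_disks=48)
print(f"{active} drives active, {sleepable} can stay spun down")
```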

This is part of Sun’s effort to open up its storage technology to developers, in the hopes of exactly this kind of product development. I’ve talked to some users at big companies who are using Thumper for disk-based backup, directly attached to media servers (mostly Symantec NetBackup), but most of them find the product appealing because of its high-density direct-attached hardware, not necessarily its software features. As Dave Raffo covered for the Soup last week, Sun CEO Jonathan Schwartz is painting a cheery picture of the future for open-source storage, but so far the revenue and market share juries are still out.


September 15, 2008  7:35 AM

VMworld: Hope Springs Eternal!

Tory Skyers

I’ve been on an emotional roller coaster recently. I had a slim chance of making it out to VMworld in Las Vegas this week, and I got really excited. Then … crushing blow, I couldn’t go. Then the skies parted, birds chirped, and a harpist showed up out of nowhere: I could go again. Then disaster … alas, it wasn’t meant to be this year. I won’t be able to make the trek out to the show for the product I’m sorely waiting to see released in my toaster so I can ramp up total resource usage making my breakfast (the harpist gave me a dirty look before she packed up and left).

So instead, I decided to write “what I want from VMworld” here.

1) Non-Windows support for the Virtual Center/Infrastructure stack

Really, why does it HAVE to run on Windows? MySQL and friends run on just about everything, so what about the server stack is so tied to the Windows code base that it couldn’t run on some other OS, or even on VMware’s OWN OS a la ESX? I’ve been running into more and more folks on my client list who don’t want to manage Windows in order to manage their virtual infrastructure. I’m looking forward to VMware announcing an alternative to running on Windows.

2) Windows 2008 support

For the folks running a Windows-centric shop, Windows Server 2008 is a reality. I have a client who runs it exclusively now: if an app can’t be qualified on 2008, it can’t run in their shop, period, no exceptions. They use 2008, they love it, and they aren’t looking back. The funny thing is that even though 2008 has been available to the public since early this year, the Virtual Infrastructure stack does NOT run on 2008 without unsupported shoehorning. PLEASE PLEASE PLEASE release (not announce, but release) Windows 2008 support at VMworld; the harpist is counting on you!

3) A new license server mode/technology

I’m not sure if it has any fans, but I can tell you that one of the biggest, and in my opinion the most glaring, issues with the entire VI (Virtual Infrastructure) stack is the license server technology. FlexLM can’t be clustered; it can be made highly available, but only if you know how to make Windows 2k3 HA. Now, VMware isn’t the only one to use this technology. Citrix does as well; take a browse through the Citrix support forums to see how many friends Citrix made when it started using it. VMware made a similar number of friends, me included. The technology stinks. Ditch it, please! Announce a new alternative that can be clustered and is OS-agnostic; an application-server-based license model, something running on Tomcat and the like, comes to mind.

4) Updated time keeping

The current ESX server technology is pretty good at dealing with time drift of virtual machines on the same host, but across multiple hosts there is some work to be done. Yes, one can use NTP, but when things like time-sensitive audit data can’t stand even a second of drift, NTP becomes unworkable. VMware, are you listening? Help us out: let the hosts sync themselves to each other so VMs on separate hosts have precisely the same time.
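
For context on why sub-second agreement is hard, this is the standard NTP offset/delay arithmetic that any host-to-host sync scheme would build on. The timestamps below are hypothetical; this is textbook NTP math, not a VMware feature.

```python
def ntp_offset(t0, t1, t2, t3):
    """Standard NTP math: t0 = client send, t1 = server receive,
    t2 = server send, t3 = client receive (all in seconds)."""
    offset = ((t1 - t0) + (t2 - t3)) / 2.0   # client clock error
    delay = (t3 - t0) - (t2 - t1)            # network round trip
    return offset, delay

# Hypothetical exchange: client 150 ms behind, 20 ms round trip
offset, delay = ntp_offset(10.000, 10.160, 10.161, 10.021)
print(f"offset={offset:.3f}s delay={delay:.3f}s")
```

Any asymmetry in that round trip lands directly in the offset estimate, which is exactly why guest clocks on different hosts wander apart.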

5) Physical and Virtual conversion

I’ll kinda give VMware a pass on this one because there are apps from companies like PlateSpin and Vizioncore, but … physical-to-virtual and back again is a weak spot. If VMware announces better conversion tools, or, hey, a takeover of one of those companies, I’d be a happier admin.

6) Capacity planning

Third-party vendors offer services built around the VMware Capacity Planner, similar to IBM’s CDAT study. I understand the ecosystem it feeds, but I think VMware would be better served if the full suite of measurement tools and methodology available to the consultants conducting a capacity exercise were available to the broader public. I’d be willing to bet that the majority of people would still make use of experienced third parties to conduct the exercises, but those who can’t, or whose shops are so small they aren’t on the radar of service providers, would be able to take advantage of a great resource. It would really be nice to have the opportunity to do ongoing internal audits using these tools and methodologies, too. Make me wish I could make it, and make the harpist’s dirty look that much more meaningful: announce that you’re opening all the capacity tools up to the public.


September 12, 2008  12:44 PM

HP to EMC: Drop the SPC smoke screen

Dave Raffo

Hewlett-Packard has fired the latest shot at EMC in the battle over performance benchmarks. HP this week posted records for megabytes per second and price-performance in SPC-2 benchmark testing of its XP24000 enterprise SAN array, and immediately called out EMC for its refusal to submit its products to the Storage Performance Council (SPC) for benchmarking.

According to a blog by Craig Simpson, competitive strategist for HP StorageWorks:

EMC, we’re once again throwing down the gauntlet.  Today the XP24000 put up the highest SPC-2 benchmark result in the world.  The top spot for such demanding workloads as video streaming goes to the XP.  Once again, your DMX is a no show.  And once again we challenge you, this time to put up an SPC-2 number.  Every other major storage vendor is now posting SPC results.  Every other major storage vendor is now starting to give customers standard, open, audited performance results to show what they’ve got.  You remain the only vendor keeping your product performance behind a smoke screen of mysterious numbers and internal testing.  We challenge you join us in the world of openness and let customers quit guessing at how low the DMX’s performance really is!

Interestingly, the XP24000 isn’t HP’s own system. It is sourced from Hitachi Japan, and sold by Hitachi Data Systems and Sun as well as HP. And HP’s SPC-1 mark for random I/O operations (SPC-2 is for sequential data movement) was recently surpassed by 3PAR’s InServ Storage System.
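
For readers who don’t follow the benchmarks: SPC-2 reports aggregate throughput (MBPS) and a price-performance figure, total tested-configuration price divided by that throughput, so the arithmetic behind a “record” is simple. A minimal sketch, with hypothetical figures, NOT HP’s audited XP24000 result:

```python
def spc2_price_performance(total_system_price_usd, spc2_mbps):
    """SPC-2 price-performance: dollars of tested-system price
    per megabyte per second of audited throughput."""
    return total_system_price_usd / spc2_mbps

# Hypothetical numbers for illustration only:
print(f"${spc2_price_performance(1_800_000, 10_000):.2f} per MBPS")
```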

But from HP’s standpoint, this isn’t about HDS, Sun, or 3PAR. It’s about going after EMC, which remains resolute in its refusal to take part in SPC testing.

“An oversimplified performance test that doesn’t accurately predict real-world performance is of little value to customers,” an EMC spokesman said in response to HP’s latest challenge.

Until now, the benchmarking skirmish was mainly between NetApp and EMC. It’s been going on for years, but NetApp took it to another level last February when it published benchmarks for EMC’s Clariion CX3-40 that showed it performing worse than NetApp’s FAS3040.

EMC blogger Chuck Hollis then came up with his own “standardized measure” for storage capacity efficiency last month. He pulled HP into the fray by comparing EMC CX4 against NetApp FAS and HP EVA series. (Spoiler alert: EMC came out on top).

And if EMC’s results shock you, then I’m sure you’re equally stunned to learn that HP and NetApp took exception to EMC’s numbers.

“Capacity utilization is important, but there’s no third-party body out there that measures cap utilization,” Simpson said. “We felt Chuck’s position was very skewed. We would love to see them agree to have an independent third-party to pick up the challenge.”

Does anybody besides vendors care about these things? I asked Babu Kudaravalli, senior director of operations for Business Technology Services at Port Washington, NY-based National Medical Health Card, whether benchmarks were a factor when he buys storage systems. He did say he finds SPC-2 more relevant to him than to many enterprise shops because he runs large queries that entail sequential data. But Kudaravalli bought his two XP24000s last year, long before the latest SPC-2 numbers were released, so he saw the numbers more as vindication than as a buying guide.

“We pay attention to it, but don’t go purely based on SPC numbers,” he said. “Sometimes benchmarks are not relevant, but I was thrilled when I saw the SPC-2 number. When I saw the results, I said I already bought a winner.”


September 12, 2008  9:34 AM

Rackable to replace RapidScale with NetApp

Beth Pariseau

On August 14, Rackable disclosed it was selling its RapidScale clustered NAS business, which was derived from its acquisition of Terrascale last April. Company executives said they were trying to refocus the company on its core competencies after disappointing forecasts for RapidScale. During the company’s Aug. 4 earnings call, execs hinted that a new partnership with a major storage OEM was coming.

This week, Rackable revealed its partner is NetApp.  According to a press release, “Under terms of the agreement, Rackable Systems is joining NetApp’s Embedded Systems Vendor Program and will integrate NetApp storage into [its]… Eco-Logical data center server and infrastructure offerings.”

It remains unclear exactly how this integration will happen. NetApp has a clustered NAS system, Data ONTAP GX, but it won’t be integrated with its other filers until next year. A Rackable spokesperson wrote in an email to me yesterday that GX will be a part of the companies’ collaboration: “We have access to the entire Net App product portfolio and as part of this relationship we intend to collaborate on technical advances and opportunities. We still believe that there is a requirement in the market for clustered storage and we fully intend to explore the potential of offering On Tap GX within the solutions we will jointly develop.” But an official rollout announcement and plan are still forthcoming.


September 10, 2008  6:13 PM

Ocarina will pay you 10 grand to beat it at data compression

Beth Pariseau

Ocarina Networks, which came out of stealth in April, claims its compression appliance will reduce file data on primary storage systems. Its main competitor, StorWize, applies standard (2:1) compression to files, but Ocarina claims 10:1 compression and the ability to compress already compressed objects, such as video and photos. The company even claims its algorithms can be used to create a 3-D cube of numeric values to represent a photo or video image, so it can recognize elements that it has “seen” before.  Pretty interesting, albeit ice-cream-headache-inducing stuff.

So it was puzzling to see the announcement of the Ocarina Compression Prize, a $1 million fund that will be doled out in $10,000 increments for each submission that advances the current best scoring compressor by at least 3%. Isn’t the idea supposed to be that Ocarina has the most compression expertise in the market?
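
Ocarina hasn’t published the scoring mechanics, but a beat-the-champion-by-3% rule presumably reduces to comparing compression ratios on a fixed corpus. A sketch of how such a check might work; the corpus, the stand-in compressors and the exact interpretation of “3%” are my assumptions:

```python
import bz2
import zlib

def ratio(compress, corpus):
    """Compression ratio: original bytes per compressed byte."""
    original = sum(len(f) for f in corpus)
    compressed = sum(len(compress(f)) for f in corpus)
    return original / compressed

def wins_prize(challenger, champion, corpus, margin=0.03):
    """True if the challenger beats the current best by at least 3%."""
    return ratio(challenger, corpus) >= (1 + margin) * ratio(champion, corpus)

# Stand-in corpus; the real contest targets JPEG 2000, H.264 and CAD files.
corpus = [b"engineering CAD data stand-in " * 1000]
print(wins_prize(bz2.compress, zlib.compress, corpus))
```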

“A lot of our compression work is already based on prior art,” CEO Murli Thirumale told me. The idea, he explained, is to make this contest a “category builder,” raising interest in the subject of primary storage compression. “A lot of compression work is focused around transmission of files, rather than reducing them for storage. We want to encourage the best minds in compression to address it in that context.”

So I guess it doesn’t matter how many cool algorithms you can bring to the table if there isn’t really a market yet. “As there’s more widespread adoption [for products], clearly [vendors] with a leadership stance will benefit more,” Thirumale said.

“Good compression has a history of coming from independent researchers, open source or anywhere that can foster easy standardization and non-proprietary code,” Taneja Group analyst Jeff Boles says. “So this seems like a pretty good approach to me. Interesting stunt to boot.”  

The initial prize fund will include awards for three categories: JPEG 2000 recompression, H.264 video recompression and an industry file mix for engineering CAD file types. Maybe Riverbed, Silver Peak and Autodesk will jump in on that last one.


September 9, 2008  12:01 PM

Sun CEO dons rose-colored storage glasses

Dave Raffo

The political conventions are over, but Sun CEO Jonathan Schwartz is spinning Sun storage in a way that would make any candidate proud.

In his latest blog post, Schwartz points to recent figures from market research firm IDC as validation of Sun’s open storage strategy. Those numbers, released last week, showed Sun with the greatest increase in overall storage revenue among the major vendors – up 29.2 percent. Gartner also chimed in by placing Sun’s growth at 34.7 percent in external storage revenue, again tops among the large vendors.

But it’s too early to praise Sun for a great turnaround. Sun ranks seventh in external storage in both lists – behind EMC, Hewlett-Packard, IBM, Hitachi Data Systems, Dell, and NetApp, and fifth on IDC’s list of all storage sales. In each case, Sun’s market share is in single digits.

Sun’s own figures aren’t nearly as cheery for storage. Sun reported a modest revenue increase of 3.9% year-over-year in its earnings report for last quarter – the same quarter IDC and Gartner were reporting on.

It’s also hard to attribute any gains to the open storage initiative. Sun is growing storage revenue for the same reason almost every other vendor is: much more data is stored digitally than ever before, and that trend is still accelerating.

IDC and Gartner attributed Sun’s rise to big increases in midrange and enterprise disk products, and those systems don’t reflect increases in open storage use. Those systems aren’t even Sun’s IP; they come from OEM deals with Hitachi, LSI, and Dot Hill.

Sun’s Thumper is tied to open storage, and that only generated $100 million in revenue for the fiscal year that ended June 30 according to Schwartz’s blog. Thumper revenue grew 80 percent over last year, but is still “relatively small” as Schwartz put it on the earnings call.

Perhaps there will be reasons to cheer Sun storage soon, although the jury is still out. Sun seems to be keeping up with the market in its embrace of solid state disk and may soon see the fruits of its Fishworks project, which could help drive open source storage. According to Schwartz:

Now, our view is “OpenStorage” (systems built from commodity parts and open source software) will grow far faster than the proprietary storage market. We plan on driving that growth, and over the next few months, you’ll see a tremendous amount of storage innovation targeting the growing breadth of customers wanting better/faster/cheaper/smaller options. Expect to see flash, zfs, dtrace, and good old fashioned systems engineering play a very prominent role in an aggressive push into the storage market.

Time may prove Schwartz right about open storage. But we’ve seen no evidence of any great success yet.


September 5, 2008  3:24 PM

Plasmon, in need of funding, recommends $25 million private equity offer

Beth Pariseau

Optical archiving vendor Plasmon revamped its management team last December and has since rolled out a new marketing strategy focusing on offering multi-tier archiving packages with partner NetApp (see “NetApp Plasmon’s Trojan Horse in Enterprise Data Centers,” July 16).

But according to a statement released by Plasmon Aug. 8, so far that strategy hasn’t been bearing fruit. The company saw disappointing sales for the first quarter of its fiscal year (which began April 1), 20 percent below earlier predictions. “There have been some encouraging signs, including a fast-growing pipeline, especially for our newest products,” VP of global marketing Patrick Dowling said. “We remain committed to our strategy – it’s just that we’re not getting the results from sales yet.”

Today, the company notified investors that it has been approached by a private equity firm (Dowling declined to name the firm, though some reports say it’s a U.S.-based company) with an offer to take the company private for $25 million, or 0.25 pence per share on the U.K. stock exchange where it’s currently listed. It’s not a done deal yet: there’s still due diligence to be done and shareholder approval to secure, among other things. But “it’s our best viable option,” according to a Plasmon statement.


September 3, 2008  12:51 PM

Adaptec adds power management to RAID controllers

Beth Pariseau

Adaptec’s Series 2 and Series 5 SAS/SATA RAID controllers can now spin down disk drives from several drive manufacturers — Hitachi, Fujitsu, Seagate, Western Digital and Samsung.

Adaptec director of worldwide marketing Suresh Paniker said spin-down is already a part of the serial drive interface specs, so no API or special integration is required for Adaptec’s product to put drives into idle mode or power them off entirely. Sensitive data that may be unexpectedly accessed, such as registry information for Windows apps, can be kept on battery-backed cache within the controller.

The controller will set the rotation speed, and therefore the power draw, of the drives at three levels: normal, standby and power-off. The speed and power draw of the standby state vary by drive manufacturer, but generally drives on standby will take 7 to 10 seconds to return to normal operation and will draw between 5 and 7 watts per drive. Power-off requires 20 to 40 seconds for the disk drive to spin back up, and though the platters aren’t spinning, the rest of the drive’s internal mechanisms still draw about 3 watts.
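
The policy details live in the management software, but the three-level scheme reduces to an idle-time state machine. A minimal sketch; the timeout thresholds are made-up assumptions, while the wattage and spin-up figures come from the article:

```python
import time

# Illustrative thresholds; real policies are configured per array in
# the controller's management software.
STANDBY_AFTER = 30 * 60        # idle seconds before dropping to standby
POWEROFF_AFTER = 4 * 60 * 60   # idle seconds before powering off

def drive_state(last_io_time, now=None):
    """Map a drive's idle time onto the three power levels."""
    idle = (now if now is not None else time.time()) - last_io_time
    if idle >= POWEROFF_AFTER:
        return "power-off"  # ~3 W residual; 20-40 s to spin back up
    if idle >= STANDBY_AFTER:
        return "standby"    # ~5-7 W; 7-10 s back to normal
    return "normal"

print(drive_state(time.time() - 45 * 60))  # 45 min idle -> "standby"
```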

The feature will be made available this week through distributors to resellers, OEMs and end users, along with new fields in the controllers’ software interface to manage spin-down policies. Getting the controller and other system parts from a distributor is a cheaper way of building your own storage system, and to some extent, the availability of this feature from Adaptec suggests MAID is going mainstream. But obviously, some assembly is required.

