Brocade has new blindingly fast Fibre Channel switches and director blades that integrate almost 100 GB/s (96 GB/s, to be exact) of encrypting bandwidth.
Kidd is a former Brocade guy, and maybe he’s happy for his old colleagues. But it’s more likely that he sees the encryption switch and blade as a boon for his current company. He goes on to say: “NetApp will resell the Brocade products as our next generation FC DataFort.”
DataFort is the encryption device platform that NetApp acquired when it bought Decru for $272 million in 2005. Brocade’s devices support NetApp key management, and Brocade licensed its encryption technology to NetApp to ensure compatibility between its devices and the DataFort platform. That’s why the headline on Kidd’s blog reads: “NetApp and Brocade’s Encryption Partnership.”
Kidd doesn’t discuss NetApp’s plans for DataFort in his blog. Besides the FC version, DataFort supports iSCSI, NAS and legacy SCSI systems. After getting briefed by Brocade last week, I asked NetApp specifically about the future of DataFort. NetApp’s senior director of data protection solutions Chris Cummings sent an email positioning the Brocade news as an expansion of the platform. “… over the past year, NetApp has also added the ability to deliver key management services combined with encryption delivered by existing components of the data center fabric, including application and tape providers, and now switch providers,” he wrote.
Brocade reps and others in the industry expect NetApp to keep DataFort as a lower-end encryption device while selling Brocade’s products for data center encryption. But it also sounds like NetApp sees Brocade rather than DataFort as its encryption platform for the future.
Seagate is creating a new umbrella company for the services businesses the disk drive vendor has been accumulating over the last few years.
In 2006, Seagate acquired ActionFront Data Recovery Labs, which became Seagate Recovery Services. Last year Seagate added backup and recovery software and services vendor EVault, the Open File Manager product line from St. Bernard Software, and eDiscovery service provider MetaLincs. Those services have been known collectively as Seagate Services, but today they were re-branded as i365, a Seagate Company. (The “i” stands for information, 365 for everyday availability.)
Mark Grace, general manager of i365, says the idea is to become a comprehensive group instead of a collection of acquisitions. “We’re committing ourselves to this space,” he says. “It showed good insight on the part of Seagate to chase this market.”
As of now, there isn’t much difference between i365 and Seagate Services besides the branding. The individual services and products will remain, Seagate will not break out i365’s financials, and Grace has already been running the services group. But he says there will be some integration of products “where it makes sense,” such as combining data protection with retention, and archiving with data discovery.
As for immediate changes, the St. Bernard product line will become part of the EVault family, and Seagate Recovery Services becomes part of the Professional Services platform. The current i365 lineup consists of: i365 EVault Data Protection, i365 MetaLincs E-Discovery for review of electronic information for legal needs, i365 Retention Management for recovering, migrating, restoring, and managing data for e-discovery purposes, and i365 ProServe Professional Services for training, consulting and implementing products.
The move separates the services brand from the overall Seagate brand, which is tied to disk drives. “Seagate is a component manufacturing company. i365 is a product company,” Grace says. “The go-to-market approach is totally different. We’re going to bring a different reputation to this space.”
Now i365 will compete as a service company with the likes of Iron Mountain and others looking to expand into the SaaS space, such as Symantec and CommVault. i365 also competes with Seagate disk drive customers, mainly EMC, IBM, and Hewlett-Packard. But Seagate claims its services group grew 39 percent year over year in its fiscal year 2008, while other companies were still plotting a move into services.
Grace says i365 will brace for the expanded competition by rolling out new services and products – bare-metal restore is coming soon – and perhaps a few more acquisitions down the road. “We’ll continue to grow organically and inorganically,” he said.
Last week at VMWorld, IBM announced the Virtual Storage Optimizer (VSO), ESX-based software that reduces virtual desktop storage by creating a “golden image” of the desktop’s operating system and other static files, while saving the changes users might make to that image. It’s a concept akin to NetApp’s space-efficient snapshots, but because it’s delivered in software at the ESX level, IBM said, it can be applied to any storage system.
The next day, at a keynote, VMware officials demonstrated a new concept they’re rolling out in the next version of VMware Infrastructure called LinkedClones. LinkedClones create a “golden image” of virtual desktop files, as well as incremental changes. The golden images, VMware demonstrated, can be updated with patches that automatically proliferate to all virtual machines based on the image to simplify rollouts and updates.
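The “golden image” idea both vendors describe is essentially copy-on-write: every clone reads from a shared, read-only master image, and only the blocks a user actually changes get stored per clone. Here is a minimal sketch of that mechanism; the class and method names are mine for illustration, not VMware’s or IBM’s actual APIs.

```python
# Toy copy-on-write "golden image" -- illustrative only, not the
# LinkedClones or VSO implementation.

class GoldenImage:
    """Read-only master disk image, shared by every clone."""
    def __init__(self, blocks):
        self.blocks = blocks  # block number -> data

class LinkedClone:
    """Stores only the blocks a user changes; reads fall through."""
    def __init__(self, golden):
        self.golden = golden
        self.delta = {}  # only modified blocks live here

    def write(self, block_no, data):
        self.delta[block_no] = data

    def read(self, block_no):
        # Changed block? Serve the clone's copy; otherwise the shared master.
        return self.delta.get(block_no, self.golden.blocks[block_no])

golden = GoldenImage({0: b"bootloader", 1: b"os-files"})
desktop = LinkedClone(golden)
desktop.write(1, b"user-patch")
print(desktop.read(0))  # unchanged block, served from the golden image
print(desktop.read(1))  # modified block, served from the clone's delta
```

The storage savings come from the fact that a hundred desktops share one copy of the static OS blocks, and patching the master updates every clone that hasn’t overridden those blocks.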
From the briefing I’d had with IBM the day before and this keynote demonstration, these products seemed similar. Since the VMware demo last Wednesday, I’ve been trying to assess what might be different about them. VMware officials I spoke with Wednesday and Thursday said they didn’t know enough about IBM’s product to comment on its differentiation, and IBM spokespeople were unavailable. IBM’s press release about VSO had stated that it was “based on an algorithm developed by IBM Research,” but VMware said LinkedClones wasn’t based on anything from IBM.
Today I spoke with VMware director of enterprise desktop Jerry Chen, who told me IBM’s VSO is based on the LinkedClones API. I asked about the algorithm developed by IBM. “There are things our partners can do to further optimize LinkedClone,” Chen said. “For example, there are different settings for the number of LinkedClones each master virtual machine can copy.”
Chen said he wasn’t familiar enough with the IBM product to say what VSO adds on top of LinkedClones. Meanwhile, IBM has been coy on this one. Since last week, I’ve put in numerous requests for comment by phone and email, including a fresh round of requests today after speaking with Chen. So far, no comments have been forthcoming.
A couple of additional storage announcements not captured in our VMWorld preview wrap:
EMC Corp. said EMC Control Center (ECC) 6.1 will now support reporting on thin-provisioned volumes within Symmetrix disk arrays. This was part of a package of updates to various management software components in EMC’s portfolio, which also included Infra service desk software and Application Discovery Manager (ADM). ECC will also support thin-provisioned volumes when they become available for Clariion arrays.
Hitachi Data Systems (HDS) is now offering the Hitachi Storage Replication Adapter, a heterogeneous replication software plug-in certified with Site Recovery Manager. Xiotech Corp. also announced integration with SRM.
The latest release of DataCore SANmelody storage virtualization software has been certified with the newest release of VMware ESX Server 3.x. The certification covers base iSCSI connectivity as well as high-availability configurations.
GlassHouse Technologies consultants have joined up with Tek-Tools Software to offer a new managed service for virtualization environments. GlassHouse developed a management interface integrated with Tek-Tools’ Profiler for VMware to provide a single pane view into the virtual environment.
From the Storage sites:
- IBM software guts the virtual desktop data hog
- Cisco brings virtualization to Fibre Channel and Ethernet
- Users grapple with virtual data protection at VMworld
From the Server Virtualization group:
- VMware unveils cloud computing vision with new Virtual Datacenter OS
- VMworld 2008 kicks off with address from new CEO Paul Maritz
- VMware CEO fields questions at VMworld 2008
- Best of VMworld 2008 awards announced
Some of the storage roadmap stuff.
Riverbed took the wraps off what it previously described as its “data center product” Monday, unveiling its Atlas primary data deduplication device at its financial analyst conference well before it will be available for customers.
Atlas will use the deduplication technology Riverbed employs in its Steelhead WAN optimization products to shrink primary data. Although the product is just entering alpha and won’t be available until next year, Riverbed execs have been giving reporters and analysts a peek at the technology.
Unlike current deduplication products on the market, Atlas will be able to dedupe data across files, volumes and namespaces, Riverbed marketing SVP Eric Wolford said. Atlas will originally support CIFS, but Wolford said it will eventually work with all file data and then extend to non-file data via iSCSI two or three years down the road.
Atlas sits alongside Riverbed’s Steelhead appliances in the data center, in front of NAS file servers. It would typically be used in high availability clusters. One or more Steelhead devices are required for Atlas.
All WAN optimization devices use deduplication to shrink data, but none have disclosed plans to use that technology on primary data yet.
“I haven’t heard of any other vendor doing this,” Yankee Group analyst Zeus Kerravala said. “It’s a logical follow-on to what they already do. They probably got themselves a one-to-two year head start.”
Most deduplication products today are used for backing up data, although NetApp licenses its dedupe for free for primary data. Because Atlas can further shrink data already deduped, Wolford says Atlas can either compete with or complement NetApp’s deduplication.
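The general technique behind products like this is content-based deduplication: split data into chunks, hash each chunk, and store each unique chunk only once, keeping a “recipe” of hashes to reconstruct the original. The sketch below is my illustration of that idea, not Riverbed’s algorithm; real systems use chunk sizes in the kilobytes and far more sophisticated (often variable-size) chunking.

```python
# Toy content-based deduplication -- illustrative, not Riverbed's Atlas.
import hashlib

CHUNK = 4  # tiny fixed-size chunks for the example; real systems use KBs

store = {}  # chunk hash -> chunk bytes; each unique chunk is kept once

def dedupe(data):
    """Split data into chunks, store each unique chunk once, and
    return the list of hashes needed to reconstruct the data."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # store only if unseen
        recipe.append(h)
    return recipe

def restore(recipe):
    """Rebuild the original data from its chunk hashes."""
    return b"".join(store[h] for h in recipe)

r1 = dedupe(b"AAAABBBBAAAA")  # the repeated "AAAA" chunk is stored once
assert restore(r1) == b"AAAABBBBAAAA"
print(len(store))  # 2 unique chunks, despite 3 chunks of input
```

What distinguishes Atlas-style primary dedupe, per Wolford, is applying this across files, volumes, and namespaces rather than within a single backup stream.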
But Atlas may be a few years from mainstream. Wolford admits it might take customers years to get used to the idea of adding another device in the network. While Riverbed may eventually add Atlas’ capabilities right into Steelhead, the first version will be a separate device.
“There are people who are going to be nervous about this and want to wait two or three years,” Wolford said.
He says by offering separate appliances, Riverbed customers can scale to different types of workloads by adding Steelheads or Atlases.
No pricing is available yet. “This isn’t a product launch,” Wolford says. “We’re just starting an Alpha program.”
A startup called greenBytes emerged from stealth today, bringing with it modifications to Sun Microsystems’ ZFS file system that will add power-saving features to the SunFire X4540 “Thor” storage server.
greenBytes calls its proprietary enhancements to the file system ZFS+. The software bundles in deduplication, inline compression, and power management using disk drives’ native power interfaces. Drive spin-down is becoming a checklist item as big vendors like EMC and HDS add what was once a bleeding-edge feature offered by startups into their established arrays. However, greenBytes also claims that its enhancements to ZFS store data heuristically on the smallest number of disks possible, freeing up more drives to be put into a “sleepy” spun-down state. (Interesting…Sun had similar things to say about ZFS non-plus when it came to the layout of data for solid-state disks.)
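The placement claim boils down to a bin-packing policy: instead of striping new writes evenly across all spindles, concentrate them on the disks that are already busy, so the rest stay cold and can spin down. Here is a minimal greedy sketch of that policy; the function and the exact heuristic are my illustration, not greenBytes’ ZFS+ code.

```python
# Greedy "fewest active disks" placement sketch -- illustrative only,
# not greenBytes' actual ZFS+ allocator.

def place(write_size, disks):
    """disks: list of dicts with 'used' and 'capacity' (same units).
    Writes to the fullest disk that can still hold the data, so that
    untouched disks remain eligible for spin-down. Returns disk index."""
    candidates = [i for i, d in enumerate(disks)
                  if d["capacity"] - d["used"] >= write_size]
    if not candidates:
        raise RuntimeError("pool full")
    # Prefer the most-used disk: concentrates data, frees the rest to sleep.
    best = max(candidates, key=lambda i: disks[i]["used"])
    disks[best]["used"] += write_size
    return best

pool = [{"used": 0, "capacity": 100} for _ in range(4)]
for _ in range(5):
    place(10, pool)
idle = sum(1 for d in pool if d["used"] == 0)
print(idle)  # 3 of 4 disks untouched and eligible for spin-down
```

A conventional striping allocator would have touched all four disks after five writes; the trade-off for power savings is that concentrating I/O on fewer spindles can cost throughput.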
This is part of Sun’s efforts to open up its storage technology to developers, in the hopes of exactly this kind of product development. I’ve talked to some users at big companies who are using Thumper for disk-based backup directly attached to media servers (mostly Symantec NetBackup), but most of them find the product appealing because of its high-density direct-attached hardware, not necessarily for its software features. As Dave Raffo covered for the Soup last week, Sun CEO Jonathan Schwartz is painting a cheery picture of the future for open-source storage, but so far the revenue and market share juries are still out.
I’ve been on an emotional roller coaster recently. I had a slim chance of making it out to VMworld in Las Vegas this week, and I got really excited. Then … crushing blow, I couldn’t go. Then the skies parted, birds chirped, and a harpist showed up from out of nowhere: I could go again. Then disaster … alas, it wasn’t meant to be this year. I won’t be able to make the trek out to the show for the product I’m sorely waiting to be released in my toaster so I can ramp up total resource usage making my breakfast (the harpist gave me a dirty look before she packed up and left).
So instead, I decided to write “what I want from Vmworld” here.
1) Non-Windows support for the Virtual Center/Infrastructure stack
Really, why does it HAVE to run in Windows? MySQL and friends run on just about everything, so what about the server stack is so tied to the Windows code base that it couldn’t run on some other OS, or even their OWN OS a la ESX? I’ve been running into more and more folks on my client list who don’t want to manage Windows in order to manage their virtual infrastructure. I’m looking forward to them announcing an alternative to running on Windows.
2) Windows 2008 support
For the folks running a Windows-centric shop, Windows 2008 is a reality. I have a client who runs it exclusively now; if an app can’t be qualified on 2008, it can’t run in their shop, period, no exceptions. They use 2008, they love it, and they aren’t looking back. Funny thing: even though 2008 has been available to the public since early this year, the Virtual Infrastructure stack does NOT run on 2008 without unsupported shoehorning. PLEASE PLEASE PLEASE release (not announce, but release) Windows 2008 support at Vmworld, the harpist is counting on you!
3) A new license server mode/technology
I’m not sure if it has any fans, but I can tell you that one of my biggest, and in my opinion the most glaring, issues with the entire VI (Virtual Infrastructure) stack is the license server technology. FlexLM can’t be clustered; it can be made HA, but only with knowledge of how to make Windows 2k3 HA. Now, VMware isn’t the only one to use this technology, Citrix does as well; take a browse through the Citrix support forums to see how many friends Citrix made when they started using it. VMware made a similar number of friends, me included. The technology stinks, ditch it, please! Announce a new alternative that can be clustered and is OS agnostic; an application-server-based license model, like Tomcat, comes to mind.
4) Updated time keeping
The current ESX server technology is pretty good at dealing with time drift of virtual machines on the same host, but across multiple hosts there is some work to be done. Yes, one can use NTP, but when things like time-sensitive audit data can’t stand even a second of drift, NTP becomes an unworkable solution. VMware, you listening? Help us out, let the hosts sync themselves to each other so VMs on separate hosts have precisely the same time.
5) Physical and Virtual conversion
I’ll kinda give VMware a pass on this one because there are apps from companies like PlateSpin and Vizioncore, but … physical to virtual and back again is a weak spot. If they announced better conversion tools, or hey, a takeover of one of those companies, I’d be a happier admin.
6) Capacity planning
There are services surrounding the VMware Capacity Planner that third-party vendors offer, similar to IBM’s CDAT study. I understand the ecosystem it feeds, but I think VMware would be better served if the full suite of measurement tools and methodology available to the consultants conducting a capacity exercise were available to the broader public. I’d be willing to bet that the majority of people will still make use of experienced third parties to conduct the exercises; however, those who can’t, or whose shops are so small they are not on the radar of service providers, would be able to take advantage of a great resource. It would really be nice to have the opportunity to do ongoing internal audits using these tools and methodologies. Make me wish I could make it, and the harpist’s dirty look that much more meaningful: announce opening all capacity tools up to the public.
Hewlett-Packard has fired the latest shot at EMC in the battle over performance benchmarks. HP this week posted records for megabytes per second and price performance in SPC-2 performance benchmark testing of its XP24000 enterprise SAN array, and immediately called out EMC for its refusal to submit its products to the Storage Performance Council (SPC) for benchmarking.
According to a blog by Craig Simpson, competitive strategist for HP StorageWorks:
EMC, we’re once again throwing down the gauntlet. Today the XP24000 put up the highest SPC-2 benchmark result in the world. The top spot for such demanding workloads as video streaming goes to the XP. Once again, your DMX is a no show. And once again we challenge you, this time to put up an SPC-2 number. Every other major storage vendor is now posting SPC results. Every other major storage vendor is now starting to give customers standard, open, audited performance results to show what they’ve got. You remain the only vendor keeping your product performance behind a smoke screen of mysterious numbers and internal testing. We challenge you to join us in the world of openness and let customers quit guessing at how low the DMX’s performance really is!
Interestingly, the XP24000 isn’t HP’s own system. It is sourced from Hitachi Japan, and sold by Hitachi Data Systems and Sun as well as HP. And HP’s SPC-1 mark for random I/O operations (SPC-2 is for sequential data movement) was recently surpassed by 3PAR’s InServ Storage System.
But from HP’s standpoint, this isn’t about HDS, Sun, or 3PAR. It’s about going after EMC, which remains resolute in its refusal to take part in SPC testing.
“An oversimplified performance test that doesn’t accurately predict real-world performance is of little value to customers,” an EMC spokesman said in response to HP’s latest challenge.
Until now, the benchmarking skirmish was mainly between NetApp and EMC. It’s been going on for years, but last February NetApp took it to another level when it published benchmarks for EMC’s Clariion CX3-40 that showed it performing worse than NetApp’s FAS3040.
EMC blogger Chuck Hollis then came up with his own “standardized measure” for storage capacity efficiency last month. He pulled HP into the fray by comparing EMC CX4 against NetApp FAS and HP EVA series. (Spoiler alert: EMC came out on top).
And if EMC’s results shock you, then I’m sure you’re equally stunned to learn that HP and NetApp took exception to EMC’s numbers.
“Capacity utilization is important, but there’s no third-party body out there that measures cap utilization,” Simpson said. “We felt Chuck’s position was very skewed. We would love to see them agree to have an independent third-party to pick up the challenge.”
Does anybody besides vendors care about these things? I asked Babu Kudaravalli, senior director of operations for Business Technology Services at Port Washington, NY-based National Medical Health Card, if benchmarks were a factor in his buying storage systems. He did say he found SPC-2 more relevant to him than to many enterprise shops because he runs large queries that entail sequential data. But Kudaravalli bought his two XP24000s last year, long before the latest SPC-2 numbers were released, so he saw the numbers more as vindication than as a buying guide.
“We pay attention to it, but don’t go purely based on SPC numbers,” he said. “Sometimes benchmarks are not relevant, but I was thrilled when I saw the SPC-2 number. When I saw the results, I said I already bought a winner.”
On August 14, Rackable disclosed it was selling its RapidScale clustered NAS business, which was derived from its acquisition of Terrascale last April. Company executives said they were trying to refocus the company on its core competencies after disappointing forecasts for RapidScale. During the company’s Aug. 4 earnings call, execs hinted that a new partnership with a major storage OEM was coming.
This week, Rackable revealed its partner is NetApp. According to a press release, “Under terms of the agreement, Rackable Systems is joining NetApp’s Embedded Systems Vendor Program and will integrate NetApp storage into [its]… Eco-Logical data center server and infrastructure offerings.”
It remains unclear exactly how this integration will happen. NetApp has a clustered NAS system, OnTap GX, but it won’t be integrated with its other filers until next year. A Rackable spokesperson wrote in an email to me yesterday that GX will be a part of the companies’ collaboration: “We have access to the entire Net App product portfolio and as part of this relationship we intend to collaborate on technical advances and opportunities. We still believe that there is a requirement in the market for clustered storage and we fully intend to explore the potential of offering On Tap GX within the solutions we will jointly develop.” But an official rollout announcement and plan are still forthcoming.