I was happy to see that someone finally acknowledged the root of some of the evils in the storage business. Your editorial, “Dedupe and virtualization don’t solve the real problem,” spoke to the heart of the matter: “The math is easy: More servers mean more apps, and more apps mean more data.” It can’t be stated any more clearly than that. I have been involved with storage for all of my 27 years in IT, from the early ’80s until now, spanning mainframe and open systems, and have seen the amount of data expand exponentially. I wish my retirement fund had the same growth curve.
In our business, we continue to satisfy our hosted mainframe customers’ needs with relatively small amounts of data (our bread-and-butter apps in z/OS use customized VSAM [Virtual Storage Access Method] files hardly over the “4-gig limit” to provide databases for hospital clinical applications) while similar applications on Windows stretch the imagination – mine, at least. As someone who has lived through this transformation and now has to support the backup processes for our open systems business, the amount of data we handle makes my head spin.
It isn’t unusual for us to process 25 TB of backup data every day (because we use Tivoli Storage Manager, this consists of only new or changed files). We have accumulated over 2 PB of capacity in our backup inventory. I don’t see it getting any smaller, even though we have an active relationship with users and encourage them to look at what they back up and how long they retain the backup data. The volume just keeps growing.
With all the technology at our disposal, the industry does not seem to want to address your basic math problem. I believe we live in an age where both technology and its pricing have brought us to a point where “creating data is cheap” — so cheap that there is no turning back. We seem to have lost the thought processes associated with data management: how many files, file size, other data spawned from these files, where does the data reside, what data should be backed up, etc.
I’m not sure, going forward, how to make it appear as though storage costs are kept relatively level while at the same time incurring new costs for hardware, software and people to manage this growth. In our environment we pass on expenses by using a chargeback system, but pressure from the user base (application development) to reduce their costs from one fiscal year to the next usually translates to lower chargeback pricing while the real problem – too much data – persists. We can try to dedupe and virtualize our way out of this, but somebody will have to pay for it.
To really address this problem will require, as you stated, “an awful lot of manual work,” but it will be difficult for many organizations to cough up the resource costs to do so. Let’s face it, that grunt work doesn’t generate any new revenue through new products. So again, it becomes a storage management issue rather than a data management solution.
My view is this: Twenty years ago we had a modest home with a one-car garage (mainframe) to keep all our stuff in. In the last decade we decided we needed more stuff – newer stuff — and moved to a larger house with a two-, heck, three-car garage (Windows). The reality of the economy and housing market is reshaping the world of real estate. I’m not sure what kind of “housing crunch” will be necessary to have us take a different look at how we create data. Getting people to do that would be a good first step in the right direction.
Finally, on a more humorous note, I think one of the problems is in how we refer to amounts of data. One TB is no big deal, right? How do I sell my problem to those who write the checks when I speak in terms of one or two of something? “So, Jim, you say you can’t manage your 2 PB easily!” or “What is so hard about managing your growth from 1 PB to 2 PB? Come on, you only grew by one!” It is all about perception these days, and by truncating real capacities, we diminish the true state of affairs. Sometimes I try to communicate the reality by simply changing the language: 2,000 TB makes a larger impact than 2 PB. Maybe we all need to begin speaking in larger quantities than single digits.
EHS Storage Management
Siemens Medical Solutions
But with the economy’s problems taking center stage in the U.S. this week, financial analysts dusted off their crystal balls and saw a gloomy future for storage vendors. There were a slew of stock downgrades and even more earnings reduction forecasts for storage and the IT industry in general this week. And almost every one was attributed to the general economy rather than specific company problems. Whether Wall Street or Main Street gets the worst of the fallout, the consensus is that less money will be left to spend on technology.
As RBC Capital Markets analyst Tom Curlin put it in a research note this week:
” … our deceleration stance with respect to U.S. IT spending is evolving to a contraction stance. The credit markets continue to tighten and the flow of credit to consumers and corporations is contracting. The metrics we track to ascertain consumer and corporate buying power are also contracting. In concert, degrading employment and capital spending metrics do not bode well for IT spending over the next 12 months.”
Curlin added his research shows “a neutral tone” regarding business this quarter, “but greater concern about the forward outlook. Naturally, this concern has risen after the collapse of Lehman and the various aftershocks in the financial system. Thus far, enterprise storage demand is steady, whereas we sense server demand has waned in recent weeks.”
Curlin downgraded the stock ratings of QLogic, Xyratex, Voltaire, and VMware, and slashed 2009 earnings estimates and price targets of EMC, Brocade, NetApp, Seagate, CommVault, 3PAR, and Compellent. In each case, he cited global IT spending slowdowns.
In a note to his clients this week, Aaron Rakers of Wachovia reported that around 18 percent to 20 percent of enterprise storage revenue comes from the financial services industry. He estimates systems vendors EMC and Hitachi Data Systems and switch vendor Brocade generate around 20 percent of their revenue from financial services, with HBA vendor Emulex likely higher.
So it follows that serious trouble in the financial services industry threatens a good chunk of storage sales.
Also this week:
OK, it wasn’t all bad news. Calyon Securities upgraded data deduplication specialist Data Domain’s stock based on solid results this quarter. It figures the one storage technology that’s growing these days is one that’s responsible for bringing about reduction.
In addition to office supplies and EVault, Staples also offers reactive online services for small business IT customers for break-fix and network support through a subsidiary called EZMobileTech. However, this is Staples’ first fully managed IT service, and it will use EMC’s Level 4 data center in New England (originally launched as Fortress) to store the data.
Staples was mum on exactly what customizations it has made to the Mozy platform. Mozy COO Vance Checketts also offered no specifics, but said customization options for data security and interface features are built in to the software for service provider partners like Staples so that modifications to the core product aren’t needed.
In the meantime, Checketts said, EMC is dropping the Fortress name and will refer to the whole infrastructure as Mozy for now. “We’re very carefully looking at what to call the next generation of technologies we’re pulling together – stay tuned for a new name,” he said.
My guess? VMware.
Brocade has new blindingly fast Fibre Channel switches and director blades that integrate almost 100 Gbps [actually 96 Gbps] of encrypting bandwidth.
Kidd is a former Brocade guy, and maybe he’s happy for his old colleagues. But it’s more likely that he sees the encryption switch and blade as a boon for his current company. He goes on to say: “NetApp will resell the Brocade products as our next generation FC DataFort.”
DataFort is the encryption device platform that NetApp acquired when it bought Decru for $272 million in 2005. Brocade’s devices support NetApp key management, and Brocade licensed its encryption technology to NetApp to ensure compatibility between its devices and the DataFort platform. That’s why the headline on Kidd’s blog reads: “NetApp and Brocade’s Encryption Partnership.”
Kidd doesn’t discuss NetApp’s plans for DataFort in his blog. Besides the FC version, DataFort supports iSCSI, NAS and legacy SCSI systems. After getting briefed by Brocade last week, I asked NetApp specifically about the future of DataFort. NetApp’s senior director of data protection solutions Chris Cummings sent an email positioning the Brocade news as an expansion of the platform. “… over the past year, NetApp has also added the ability to deliver key management services combined with encryption delivered by existing components of the data center fabric, including application and tape providers, and now switch providers,” he wrote.
Brocade reps and others in the industry expect NetApp to keep DataFort as a lower-end encryption device while selling Brocade’s products for data center encryption. But it also sounds like NetApp sees Brocade rather than DataFort as its encryption platform for the future.
In 2006, Seagate acquired ActionFront Data Recovery Labs, which became Seagate Recovery Services. Last year Seagate added backup and recovery software and services vendor EVault, the Open File Manager product line from St. Bernard Software, and eDiscovery service provider MetaLincs. Those services have been known collectively as Seagate Services, but today were re-branded i365, a Seagate Company. (The “i” stands for information, 365 for everyday availability.)
Mark Grace, general manager of i365, says the idea is to become a comprehensive group instead of a collection of acquisitions. “We’re committing ourselves to this space,” he says. “It showed good insight on the part of Seagate to chase this market.”
As of now, there isn’t much difference between i365 and Seagate Services besides the branding. The individual services and products will remain, Seagate will not break out i365’s financials, and Grace has already been running the services group. But he says there will be some integration of products “where it makes sense,” such as combining data protection with retention and archiving with data discovery.
As for immediate changes, the St. Bernard product line will become part of the EVault family, and Seagate Recovery Services becomes part of the Professional Services platform. The current i365 lineup consists of: i365 EVault Data Protection, i365 MetaLincs E-Discovery for review of electronic information for legal needs, i365 Retention Management for recovering, migrating, restoring, and managing data for e-discovery purposes, and i365 ProServe Professional Services for training, consulting and implementing products.
The move separates the services brand from the overall Seagate brand, which is tied to disk drives. “Seagate is a component manufacturing company. i365 is a product company,” Grace says. “The go-to-market approach is totally different. We’re going to bring a different reputation to this space.”
Now i365 will compete as a service company with the likes of Iron Mountain and others looking to expand into the SaaS space, such as Symantec and CommVault. i365 also competes with Seagate disk drive customers, mainly EMC, IBM, and Hewlett-Packard. But Seagate claims its services group grew 39 percent year over year for its fiscal year 2008 while other companies were still plotting a move into services.
Grace says i365 will brace for the expanded competition by rolling out new services and products – bare-metal restore is coming soon – and perhaps a few more acquisitions down the road. “We’ll continue to grow organically and inorganically,” he said.
The next day, at a keynote, VMware officials demonstrated a new concept they’re rolling out in the next version of VMware Infrastructure called LinkedClones. LinkedClones create a “golden image” of virtual desktop files, as well as incremental changes. The golden images, VMware demonstrated, can be updated with patches that automatically proliferate to all virtual machines based on the image to simplify rollouts and updates.
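The general idea behind linked clones – one shared read-only base image plus a small per-clone delta, so that patching the base reaches every clone – can be sketched roughly as follows. The class and method names here are illustrative assumptions, not VMware’s actual implementation or API:

```python
# Illustrative sketch of the linked-clone idea: many clones share one
# "golden image" and each stores only the blocks it has changed.

class GoldenImage:
    def __init__(self, blocks):
        self.blocks = dict(blocks)  # block number -> data

    def patch(self, block, data):
        # A patch to the base is seen by every clone that has not
        # overwritten that block locally.
        self.blocks[block] = data

class LinkedClone:
    def __init__(self, base):
        self.base = base
        self.delta = {}  # only the blocks this clone has written

    def read(self, block):
        # Reads fall through to the golden image unless overridden.
        return self.delta.get(block, self.base.blocks.get(block))

    def write(self, block, data):
        # Copy-on-write: the shared base stays untouched.
        self.delta[block] = data

base = GoldenImage({0: "os-v1", 1: "app"})
a, b = LinkedClone(base), LinkedClone(base)
a.write(1, "app-custom")        # only clone a diverges on block 1
base.patch(0, "os-v1-patched")  # the patch reaches both clones
```

After the patch, both clones read the updated block 0, while clone a keeps its private change to block 1 and clone b still sees the original.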
From the briefing I’d had with IBM the day before and this keynote demonstration, these products seemed similar. Since the VMware demo last Wednesday, I’ve been trying to assess what might be different about them. VMware officials I spoke with Wednesday and Thursday said they didn’t know enough about IBM’s product to comment on its differentiation and IBM spokespeople were unavailable. IBM’s press release about VSO had stated that it was “based on an algorithm developed by IBM Research,” but VMware said LinkedClones wasn’t based on anything from IBM.
Today I spoke with VMware director of enterprise desktop Jerry Chen, who told me IBM’s VSO is based on the LinkedClones API. I asked about the algorithm developed by IBM. “There are things our partners can do to further optimize LinkedClone,” Chen said. “For example, there are different settings for the number of LinkedClones each master virtual machine can copy.”
Chen said he wasn’t familiar enough with the IBM product to say what VSO adds on top of LinkedClones. Meanwhile, IBM has been coy on this one. Since last week, I’ve put in numerous requests for comment by phone and email, including a fresh round of requests today after speaking with Chen. So far, no comments have been forthcoming.
EMC Corp. said EMC Control Center (ECC) 6.1 will now support reporting on thin-provisioned volumes within Symmetrix disk arrays. This was part of a package of updates to various management software components in EMC’s portfolio, which also included Infra service desk software and Application Discovery Manager (ADM). ECC will also support thin-provisioned volumes when they become available for Clariion arrays.
Hitachi Data Systems (HDS) is now offering the Hitachi Storage Replication Adapter, a heterogeneous replication software plug-in certified with VMware Site Recovery Manager (SRM). Xiotech Corp. also announced integration with SRM.
The latest release of DataCore SANmelody storage virtualization software has been certified with the newest release of VMware ESX Server 3.x. The certification covers base iSCSI connectivity as well as high-availability configurations.
GlassHouse Technologies consultants have joined up with Tek-Tools Software to offer a new managed service for virtualization environments. GlassHouse developed a management interface integrated with Tek-Tools’ Profiler for VMware to provide a single pane view into the virtual environment.
A busy – and vast – show floor.
Psst…roadmap stuff over here…
Some of the storage roadmap stuff.
Still undecided whether the giant rotoscoped heads were cool or freaky. Or if those two things are necessarily mutually exclusive.
Members of the Fourth Estate taking in Paul Maritz’s keynote Tuesday.
Paul Maritz keynote
The stampeding herd heads for the casino at the end of sessions. Estimated attendance at the show was 14,000.
View from Ghost Bar at the Palms, where VMware held a reception for press, analysts and partners Tuesday night.
The vastness of the keynote hall cannot be overstated.
New product demos at Wednesday morning keynote.
HP wins my personal Best Swag of the Show award this year for their custom-printed inside-out Oreo cookie.
Atlas will use the deduplication technology Riverbed employs in its Steelhead WAN optimization products to shrink primary data. Although the product is just entering alpha and won’t be available until next year, Riverbed execs have been giving reporters and analysts a peek at the technology.
Unlike current deduplication products on the market, Atlas will be able to dedupe data across files, volumes and namespaces, Riverbed marketing SVP Eric Wolford said. Atlas will originally support CIFS, but Wolford said it will eventually work with all file data and then extend to non-file data via iSCSI two or three years down the road.
Atlas sits alongside Riverbed’s Steelhead appliances in the data center, in front of NAS file servers. It would typically be used in high availability clusters. One or more Steelhead devices are required for Atlas.
All WAN optimization vendors use deduplication to shrink data, but none have disclosed plans to apply that technology to primary data yet.
“I haven’t heard of any other vendor doing this,” Yankee Group analyst Zeus Kerravala said. “It’s a logical follow-on to what they already do. They probably got themselves a one-to-two year head start.”
Most deduplication products today are used for backing up data, although NetApp licenses its dedupe for free for primary data. Because Atlas can further shrink data already deduped, Wolford says Atlas can either compete with or complement NetApp’s deduplication.
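The mechanism underneath all of these products is the same basic trick: split data into chunks, hash each chunk, and store any given chunk only once no matter how many files contain it. Riverbed’s actual segmentation scheme is proprietary; the fixed-size chunks and names below are simplifying assumptions for illustration only:

```python
# Minimal sketch of hash-based deduplication across files: identical
# chunks are stored once, and each file keeps only a "recipe" of
# chunk hashes. Fixed-size chunking is an assumption for simplicity.
import hashlib

CHUNK = 4  # unrealistically small chunk size, for illustration

store = {}  # chunk hash -> chunk bytes, stored exactly once

def dedupe(data):
    """Store a file's unique chunks; return its recipe of hashes."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # write only chunks not seen before
        recipe.append(h)
    return recipe

def restore(recipe):
    """Rebuild a file from its recipe."""
    return b"".join(store[h] for h in recipe)

r1 = dedupe(b"aaaabbbbcccc")
r2 = dedupe(b"aaaaccccdddd")  # two of its three chunks already exist
```

The two files together contain six chunks, but only four unique ones land in the store; that gap between logical and stored data is the capacity savings dedupe vendors sell.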
But Atlas may be a few years from mainstream. Wolford admits it might take customers years to get used to the idea of adding another device in the network. While Riverbed may eventually add Atlas’ capabilities right into Steelhead, the first version will be a separate device.
“There are people who are going to be nervous about this and want to wait two or three years,” Wolford said.
He says by offering separate appliances, Riverbed customers can scale to different types of workloads by adding Steelheads or Atlases.
No pricing is available yet. “This isn’t a product launch,” Wolford says. “We’re just starting an alpha program.”
greenBytes calls its proprietary enhancements to the file system ZFS+. The software bundles in deduplication, inline compression, and power management using disk drives’ native power interfaces. Drive spin-down is becoming a checklist item as big vendors like EMC and HDS have been adding what was once a bleeding-edge feature offered by startups into their established arrays. However, greenBytes also claims that its enhancements to ZFS store data heuristically on the smallest number of disks possible, freeing up more drives to be put into a “sleepy” spun-down state. (Interesting… Sun had similar things to say about unenhanced ZFS when it came to the layout of data for solid-state disks.)
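The placement idea greenBytes describes can be sketched as a simple greedy allocation: keep filling the lowest-numbered disk with room rather than striping writes across all of them, so untouched disks can stay spun down. This first-fit policy and all the names below are my assumptions for illustration, not greenBytes’ actual heuristic:

```python
# Hypothetical sketch of "fewest disks possible" data placement:
# concentrate writes on as few disks as capacity allows, leaving the
# remaining disks idle so they can be spun down to save power.

DISK_CAPACITY = 100  # capacity units per disk (illustrative)

def place(writes, disks=4):
    """Greedy first-fit placement; returns per-disk usage and idle count."""
    used = [0] * disks
    for size in writes:
        # Always prefer the lowest-numbered disk that still has room,
        # instead of spreading the write across all disks.
        for d in range(disks):
            if used[d] + size <= DISK_CAPACITY:
                used[d] += size
                break
    # Disks holding no data never need to spin up.
    idle = sum(1 for u in used if u == 0)
    return used, idle

used, idle = place([40, 30, 20, 50])
```

With those four writes, the first three fit entirely on disk 0 (90 of 100 units) and the fourth spills to disk 1, leaving two of the four disks empty and eligible for spin-down; a striping layout would have touched all four.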
This is part of Sun’s efforts to open up its storage technology to developers, in the hopes of exactly this kind of product development. I’ve talked to some users at big companies who are using Thumper for disk-based backup directly attached to media servers (mostly Symantec NetBackup), but most of them find the product appealing because of its high-density direct-attached hardware, not necessarily for its software features. As Dave Raffo covered for the Soup last week, Sun CEO Jonathan Schwartz is painting a cheery picture of the future for open-source storage, but so far the revenue and market share juries are still out.