Storage Soup


October 6, 2008  2:42 PM

For all who have ever wanted to throw things at a vendor…

Beth Pariseau

Witness the carnage at VMworld of a booth giveaway gone bad… (VMblog)

October 3, 2008  11:25 AM

HDS: Something self-healing and disk-based is coming…

Beth Pariseau

And if I had to guess, I’d say it’s a new disk array. A self-healing, dynamically performance-optimized disk array.

For one thing, the latest fad is for new disk arrays to be promoted in what public relations pros call a “rolling thunder” fashion, where deliberately mysterious statements are made and glimpses are given of an upcoming product until the moment of its launch. See also: Xiotech’s ISE, Oracle’s Database Machine. HDS’s “to be named” is no exception.

More clues on the HDS preview website: “Hitachi + DLB = agile, no touch, no bottlenecks formula.” My guess is that DLB means dynamic load balancing, especially since, well, everything else on the site is about dynamic load balancing.

For example, click on “View video” and some dude walks up to you, saying:

Get ready. It’s coming. What if you could improve your service level agreements for virtually any storage workload? Like you, I want the perfect formula, minimizing I/O disruption and bottlenecks. But what would that formula be? I believe it includes purchasing the minimum number of required disks to meet the performance criteria of all requests. Automatic workload management and exceptional bandwidth. Now I would like to ask, what if I give you the ability to dynamically shift I/O processing to keep workloads running smoothly? Then, what would your ideal storage environment look like?

At this point three choices appear inside the video screen:

  1. Minimal manual intervention required
  2. Minimize the risk of degradation when shifting I/O processing
  3. Self-healing system to overcome failure of key components
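
Taken at face value, that pitch boils down to two bits of arithmetic: size the spindle count to the aggregate IOPS target, then keep shifting work to whichever controller is least loaded. Here’s a rough back-of-envelope sketch in Python; the per-disk IOPS figure and controller names are my own illustrative assumptions, not anything HDS has said.

    import math

    # Spindle sizing: disks needed to meet an aggregate IOPS target.
    # 180 IOPS per 15K drive is a common rule of thumb, not an HDS figure.
    def disks_needed(target_iops, per_disk_iops=180):
        return math.ceil(target_iops / per_disk_iops)

    # Toy dynamic load balancing: route each new workload to the
    # controller currently carrying the least I/O.
    def assign(workload_iops, controller_load):
        ctrl = min(controller_load, key=controller_load.get)
        controller_load[ctrl] += workload_iops
        return ctrl

    load = {"ctrl-0": 4200, "ctrl-1": 3100}
    print(disks_needed(12000))       # 67 spindles for a 12,000 IOPS target
    print(assign(1500, load), load)  # the new workload lands on ctrl-1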

Meanwhile, a countdown clock on the site reads 9 days, 16 hours, 53 minutes, 52 seconds. In other words, Oct. 13 — the first day of Storage Networking World.

Around here, the scuttlebutt has been strong that HDS is prepping a new AMS (Adaptable Modular Storage) midrange array. The high-end USP has already gotten a couple of recent refreshes, including a mini-version, as well as a software update; it would make sense for HDS’s midrange arrays to be up for a revamp next.


October 2, 2008  10:43 AM

iSCSI’s $360M vote of confidence

Dave Raffo

All the talk about Fibre Channel over Ethernet (FCoE) over the last year has raised questions about the future of iSCSI storage once the convergence of FC and Ethernet takes place.

But Hewlett-Packard’s $360 million acquisition of LeftHand Networks proves that HP agrees with its rival Dell that iSCSI SANs are here to stay. Dell paid $1.4 billion for LeftHand’s iSCSI rival EqualLogic in January, and has ridden a mini-wave of iSCSI adoption: IDC said second-quarter iSCSI revenue grew 93.9 percent over last year.

While the acquisitions bring Dell and HP another storage platform and some product positioning issues, the vendors seem willing to let FC remain the dominant protocol at the high end while iSCSI adoption spikes among SMB and midrange shops due to growing interest in server virtualization and 10-Gig Ethernet.

Representatives of HP and Dell agree that history indicates FCoE adoption will be slow.

“The iSCSI standard was ratified in 2003, and here we are in 2008 just getting traction,” HP StorageWorks CTO Paul Perez says. “I think FCoE will follow a similar adoption curve and adoption will be slow. iSCSI will have a prominent place, especially with 10-Gig Ethernet. FCoE is a performance fabric, while iSCSI is a general purpose fabric.”

Dell vice president of marketing John Joseph, who was with EqualLogic before the acquisition, says iSCSI finally has momentum.

“Migration on and off technologies by storage customers is extremely slow,” he said. “It’s a helluva lot slower than watching paint dry. Typical adoption curves are measured in five-to-seven-year increments. We’re still in the early years of [iSCSI's] adoption phase.”

Joseph says while he expects many FC SANs to migrate to FCoE, Data Center Ethernet and 10-gig Ethernet will erode the FC base and lead more storage shops to iSCSI.

“Ten-gigE makes a lot of objections [to iSCSI] go away, and Data Center Ethernet makes even more objections go away,” he said.


October 2, 2008  10:02 AM

IBM virtual desktop storage update – sort of

Beth Pariseau

Last week I wrote about some confusion I had regarding IBM’s Virtual Storage Optimizer (VSO) for VMware’s Virtual Desktop Infrastructure (VDI), especially after I was told by a VMware official that the IBM product, credited to an internally developed algorithm, was based on VMware’s Linked Clone API.

I wrote to one of the researchers involved and got a response through IBM’s PR spokesperson that:

  • The IBM-developed algorithm is based on a VMware API available in Virtual Infrastructure version 3, not the VMware LinkedClone API. Specifically, the algorithm uses VMware Infrastructure SDK 2.5.0, as documented at http://www.vmware.com/support/developer/vc-sdk/, and file-system-level access on ESX servers.
  • We developed the algorithm based on the API that was publicly available and supported at the time that we began development efforts.
  • VMware can provide detail on the differences between the APIs in Virtual Infrastructure version 3 and the VMware LinkedClone API.

So far no response from VMware.

Regardless of what API was or was not used, what I am trying to get at is the functional difference between these two products, if any. If there is one, it’s important for users to know about. If there isn’t one, it speaks to the growing convergence between VMware’s virtual infrastructure and storage vendors’ value-add software.
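
For readers wondering why the linked-clone question matters at all, the storage math is the point: a full clone copies the whole master image for every desktop, while a linked-clone-style approach stores one master plus a small delta per desktop. A quick sketch with hypothetical numbers (mine, not IBM’s or VMware’s):

    # Hypothetical VDI capacity comparison; the image and delta sizes are
    # made-up illustrations, not vendor figures.
    def full_clones_gb(desktops, image_gb):
        return desktops * image_gb              # every desktop gets a full copy

    def linked_clones_gb(desktops, image_gb, delta_gb):
        return image_gb + desktops * delta_gb   # one master plus per-desktop deltas

    n, image, delta = 500, 20.0, 2.0
    print(full_clones_gb(n, image))           # 10000.0 GB for full clones
    print(linked_clones_gb(n, image, delta))  # 1020.0 GB sharing one master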

The bottom line right now seems to be that IBM’s product is for existing IBM customers, since it requires professional services through IBM Global Services (IGS). There are some shops that need the IBM label before they buy, and VSO could at least be a fit for them.

I’d appreciate weigh-ins from IBM, VDI, and/or VMware experts.


October 2, 2008  8:20 AM

Quantum-Riverbed bout ends with $11M handshake

Dave Raffo

Backup vendor Quantum and WAN optimization specialist Riverbed Technology dropped their respective data deduplication patent lawsuits against each other this week, with Riverbed agreeing to pay Quantum $11 million. Both sides dropped all claims and agreed not to file more data deduplication patent suits against the other.

The legal scuffle began last October when Quantum charged Riverbed with infringing on a dedupe patent granted to Rocksoft in 1999 that later came to Quantum through an acquisition. Riverbed countered in November with its own suit, charging that Quantum’s dedupe products infringe on a Riverbed patent.

From here, the settlement looks like a draw. Quantum got paid, but didn’t get Riverbed to stop using the technology in its WAN appliances, as it tried to do in its suit. Such a judgment would have cost Riverbed a lot more than $11 million.


October 1, 2008  3:23 PM

Unitrends unites backup, DR management

Beth Pariseau

SMB backup and DR vendor Unitrends has released version 4.0 of its RapidRecovery management software for its Data Protection Unit disk-to-disk backup hardware. The new version completes a yearlong effort from Unitrends to bring together what were once separate GUIs for managing backup and offsite vaulting using the DPU devices.

A year ago, the company removed the command-line interface, which CEO Duncan MacPherson described as “a late ’90s level GUI that looked old and slow.” At that time, Unitrends gave the backup and configuration management interfaces a facelift. The current release pulls in offsite vaulting and data recovery. Other new features include customized reporting from the GUI, DR plan testing, single-file recovery from a secondary site, and support for new operating systems, including Novell NetWare. MacPherson said Windows Server 2008 will be supported by the end of the year.

Unitrends’ goal is to package all data protection processes and hardware into one product. Combining operational backup and disaster recovery practices also seems to be an emerging trend. This is also being done through backup service providers whose backups by definition are offsite, and who are beginning to offer more affordable system state recovery of hosts using virtual servers. Stay tuned to the SearchDataBackup.com and SearchDisasterRecovery.com sites for more on this.


September 30, 2008  10:58 AM

Let’s manage data, not just storage

Brein Matturro

(Ed. note: This guest blog comes from Siemens Medical Solutions storage administrator Jim Hood, in response to the editorial in the July issue of Storage magazine, “Dedupe and virtualization don’t solve the real problem.”)

I was happy to see that someone finally acknowledged the root of some of the evils in the storage business. Your editorial, “Dedupe and virtualization don’t solve the real problem,” spoke to the heart of the matter: “The math is easy: More servers mean more apps, and more apps mean more data.” It cannot be said any more clearly than that. I have been involved with storage for all of my 27 years in IT, from the early ’80s until now, spanning mainframe and open systems, and I have seen the amount of data expand exponentially. I wish my retirement fund had the same growth curve.

In our business, we continue to satisfy our hosted mainframe customers’ needs with relatively small amounts of data (our bread-and-butter apps in z/OS use customized VSAM [virtual storage access method] files hardly over the “4-gig limit” to provide databases for hospital clinical applications), while similar applications on Windows stretch the imagination – mine, at least. As someone who has lived through this transformation and now has to support the backup processes for our open systems business, the amount of data we handle makes my head spin.

It isn’t unusual for us to process 25 TB of backup data every day (because we use Tivoli Storage Manager, this consists of only new or changed files). We have accumulated over 2 PB of capacity in our backup inventory. I don’t see it getting any less even though we have an active relationship with users, and encourage them to look at what they backup and how long they retain the backup data. The volume just keeps growing.  
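
For those who haven’t lived with TSM, the “only new or changed files” idea is easy to sketch. TSM’s real progressive incremental tracks state in a server-side database, so this is just the concept, compared by timestamps against a hypothetical /data path:

    import os
    import time

    # Conceptual sketch of incremental-forever selection: pick up only
    # files changed since the last pass. This is not TSM's implementation,
    # which keeps its inventory in a server database rather than mtimes.
    def changed_since(root, last_backup):
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                if os.path.getmtime(path) > last_backup:
                    yield path

    last_run = time.time() - 24 * 3600  # pretend the last pass ran a day ago
    for path in changed_since("/data", last_run):
        print("backing up", path)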

With all the technology at our disposal, the industry does not seem to want to address your basic math problem. I believe we live in an age where both technology and its pricing have brought us to a point where “creating data is cheap” — so cheap that there is no turning back. We seem to have lost the thought processes associated with data management: how many files, file size, other data spawned from these files, where does the data reside, what data should be backed up, etc. 

I’m not sure, going forward, how to make it appear as though storage costs are kept relatively level while at the same time incurring new costs for hardware, software and people to manage this growth. In our environment we pass on expenses by using a chargeback system, but pressure from the user base (application development) to reduce its costs from one fiscal year to the next usually translates to lower chargeback pricing while the real problem – too much data – persists. We can try to dedupe and virtualize our way out of this, but somebody will have to pay for it.
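
The squeeze is easy to show with made-up numbers (a hypothetical rate and growth curve, not our actual figures): cut the rate enough and the bill looks flat even though the data keeps compounding.

    # Hypothetical chargeback math: capacity grows 40%, the per-TB rate
    # gets cut under user pressure, and the bill looks flat even though
    # the underlying data problem got worse.
    def annual_bill(tb_stored, rate_per_tb):
        return tb_stored * rate_per_tb

    print(annual_bill(1000, 1200.0))  # year 1: 1 PB at $1,200/TB -> $1,200,000
    print(annual_bill(1400, 860.0))   # year 2: 1.4 PB at $860/TB -> $1,204,000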

To really address this problem will require, as you stated, “an awful lot of manual work,” but it will be difficult for many organizations to cough up the resource costs to do so. Let’s face it, that grunt work doesn’t generate any new revenue through new products. So again, it becomes a storage management issue rather than a data management solution. 

My view is this: Twenty years ago we had a modest home with a one-car garage (mainframe) to keep all our stuff in. In the last decade we decided we needed more stuff – newer stuff — and moved to a larger house with a two-, heck, three-car garage (Windows). The reality of the economy and housing market is reshaping the world of real estate. I’m not sure what kind of “housing crunch” will be necessary to have us take a different look at how we create data. Getting people to do that would be a good first step in the right direction.  

Finally, on a more humorous note, I think one of the problems is in how we refer to amounts of data. One TB is no big deal, right? How do I sell my problem to those who write the checks when I speak in terms of one or two of something? “So, Jim, you say you can’t manage your 2 PB easily!” or “What is so hard about managing your growth from 1 PB to 2 PB, come on, you only grew by one!” It is all about perception these days, and by truncating real capacities, we diminish the true state of affairs. Sometimes I try to communicate the reality by simply changing the language: 2,000 TB makes a larger impact than 2 PB. Maybe we all need to begin speaking in larger quantities than single digits.

Jim Hood

EHS Storage Management

Siemens Medical Solutions


September 26, 2008  1:08 PM

Financial forecast calls for gloom

Dave Raffo

Until now, the storage industry has held up well this year in the face of economic slowdowns – even those affecting the financial services sector.

But with the economy’s problems taking center stage in the U.S. this week, financial analysts dusted off their crystal balls and saw a gloomy future for storage vendors. There were a slew of stock downgrades and even more reduced earnings forecasts for storage and the IT industry in general this week. And almost every one was attributed to the general economy rather than to specific company problems. Whether Wall Street or Main Street gets the worst of the fallout, the consensus is that less money will be left to spend on technology.

As RBC Capital Markets analyst Tom Curlin put it in a research note this week:

“… our deceleration stance with respect to U.S. IT spending is evolving to a contraction stance. The credit markets continue to tighten and the flow of credit to consumers and corporations is contracting. The metrics we track to ascertain consumer and corporate buying power are also contracting. In concert, degrading employment and capital spending metrics do not bode well for IT spending over the next 12 months.”

Curlin added his research shows “a neutral tone” regarding business this quarter, “but greater concern about the forward outlook. Naturally, this concern has risen after the collapse of Lehman and the various aftershocks in the financial system. Thus far, enterprise storage demand is steady, whereas we sense server demand has waned in recent weeks.”

Curlin downgraded the stock ratings of QLogic, Xyratex, Voltaire, and VMware, and slashed 2009 earnings estimates and price targets of EMC, Brocade, NetApp, Seagate, CommVault, 3PAR, and Compellent. In each case, he cited global IT spending slowdowns.

In a note to his clients this week, Aaron Rakers of Wachovia reported that around 18 percent to 20 percent of enterprise storage revenue comes from the financial services industry. He estimates systems vendors EMC and Hitachi Data Systems and switch vendor Brocade generate around 20 percent of their revenue from financial services, with HBA vendor Emulex likely higher.

So it follows that serious trouble in the financial services industry threatens a good chunk of storage sales.
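
The back-of-envelope math on that exposure: multiply Rakers’ 18 to 20 percent figure by whatever cut you expect in financial-services IT spending. The decline scenarios below are mine, not the analysts’:

    # Rough exposure math using the 18-20% revenue figure from the post.
    # The spending-decline scenarios are hypothetical.
    def revenue_hit(exposure, sector_decline):
        return exposure * sector_decline

    for decline in (0.10, 0.25, 0.50):
        low, high = revenue_hit(0.18, decline), revenue_hit(0.20, decline)
        print(f"{decline:.0%} sector cut -> {low:.1%}-{high:.1%} off storage revenue")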

Also this week:

  • Pacific Crest’s Brent Bracelin cited a forecasted slowdown in data center-related spending while cutting stock ratings for Brocade, Double-Take, CommVault, and QLogic and lowering price targets or estimates on EMC, NetApp, 3PAR, HP, Data Domain, Emulex, and Mellanox.
  • Morgan Stanley lowered estimates on PC hardware stocks because of decreased global demand, and dropped stock price targets for EMC, IBM, Dell, Hewlett-Packard and Cisco among others.

OK, it wasn’t all bad news. Calyon Securities upgraded data deduplication specialist Data Domain’s stock based on solid results this quarter. It figures that the one storage technology growing these days is one that’s responsible for bringing about reduction.


September 25, 2008  11:18 AM

Staples to rebrand Mozy online backup service, EMC drops Fortress name

Beth Pariseau

Office supplies giant Staples is adding a rebranded version of EMC’s Mozy online backup service to its Thrive online backup services, which already include a rebranded version of i365’s EVault for servers. The first release of Staples’ repackaging of Mozy will be targeted at laptop and desktop backup. Jim Lippie, president of Staples network services, says the Mozy Enterprise server edition will be added down the road.

In addition to office supplies and EVault, Staples also offers reactive online services for small business IT customers for break/fix and network support through a subsidiary called EZMobileTech. However, this is Staples’ first fully managed IT service, and it will use EMC’s Level 4 data center in New England (originally launched as Fortress) to store the data.

Staples was mum on exactly what customizations it has made to the Mozy platform. Mozy COO Vance Checketts also offered no specifics, but said customization options for data security and interface features are built into the software for service provider partners like Staples, so that modifications to the core product aren’t needed.

In the meantime, Checketts said, EMC is dropping the Fortress name and will refer to the whole infrastructure as Mozy for now. “We’re very carefully looking at what to call the next generation of technologies we’re pulling together – stay tuned for a new name,” he said.

My guess? VMware.


September 24, 2008  9:22 AM

Last stand for NetApp’s DataFort?

Dave Raffo

 In his latest blog, NetApp chief marketing officer Jay Kidd waxes enthusiastic about Brocade’s new encryption devices:

 Brocade has new blindingly fast Fibre Channel switches and director blades that integrate almost 100 GB/s [actually 96 GB/s] of encrypting bandwidth.

Kidd is a former Brocade guy, and maybe he’s happy for his old colleagues. But it’s more likely that he sees the encryption switch and blade as a boon for his current company. He goes on to say: “NetApp will resell the Brocade products as our next generation FC DataFort.”

DataFort is the encryption device platform that NetApp acquired when it bought Decru for $272 million in 2005. Brocade’s devices support NetApp key management, and Brocade licensed its encryption technology to NetApp to ensure compatibility between its devices and the DataFort platform. That’s why the headline on Kidd’s blog reads: “NetApp and Brocade’s Encryption Partnership.”

Kidd’s blog doesn’t discuss NetApp’s plans for the rest of the DataFort line. Besides the FC version, DataFort supports iSCSI, NAS and legacy SCSI systems. After getting briefed by Brocade last week, I asked NetApp specifically about the future of DataFort. NetApp’s senior director of data protection solutions, Chris Cummings, sent an email positioning the Brocade news as an expansion of the platform: “… over the past year, NetApp has also added the ability to deliver key management services combined with encryption delivered by existing components of the data center fabric, including application and tape providers, and now switch providers,” he wrote.

Brocade reps and others in the industry expect NetApp to keep DataFort as a lower-end encryption device while selling Brocade’s products for data center encryption. But it also sounds like NetApp sees Brocade rather than DataFort as its encryption platform for the future.

