Storage Soup


July 9, 2008  9:53 AM

EMC enters no-spin zone on VMware CEO’s departure

Dave Raffo

A day after the fact, it has become clear that VMware didn’t dump CEO Diane Greene over personal differences between her and EMC executives. The final straw was a business dispute with EMC brass.

Sources close to EMC say the issue that cost Greene her job was that she wanted EMC to spin off VMware into an independent company, while EMC CEO Joe Tucci and his team wanted to keep the majority stake EMC holds. EMC spun out around 15% of VMware’s shares last August in an IPO that raised $1.1 billion.

“Diane wanted a spinout of VMware, and EMC said no we won’t spin out VMware at this time because it is our golden goose,” said a financial analyst who covers EMC.

A full spinoff of VMware has been an issue ever since the first IPO. Analysts have asked Tucci on EMC’s last two earnings calls if a spinoff was in the works for next year (a spinoff would be tax-free if EMC waits until 2009). Tucci gave almost the same answer both times: He said EMC had no plans to spin off VMware “right now” because both companies were performing well, and that creates shareholder value. He added both times that management and the board are focused on creating maximum shareholder value for both sets of shareholders.

Try telling that to investors who have watched VMware’s stock price drop from $125.25 to $40.19 in eight months while EMC shares fell 11.6% Tuesday to a 52-week low of $13.39. Now some analysts are wondering if investors will demand a spinoff.

EMC has shed little light on the situation since yesterday morning’s news release revealing that Greene would be replaced by Paul Maritz, a former Microsoft executive now at EMC. Neither EMC nor VMware held conference calls to discuss the move, although Tucci and Maritz did speak to selected media outlets. Tucci told Reuters and the Wall Street Journal that Greene lacked the experience to run a company as large as VMware has become. Maritz told BusinessWeek he’s not underestimating VMware competitor Microsoft, nor is he awed by his old company.

A few of EMC’s bloggers have weighed in, wondering what all the fuss is about. EMC’s VP of Technology Alliances Chuck Hollis wrote that some people are reacting irrationally to Greene’s departure.

The real surprise for me is that it was such a surprise for so many people. If you think about it, successful companies often go through different phases, and it’s not unusual for different management skills and styles to be required during the journey.

True. But I know one company that has kept the same CEO for seven years while morphing from a storage systems company to a hardware/software/security/services/server virtualization/wannabe network management behemoth. So EMC actually makes the case against change for change’s sake.

In a blog post titled “VMware Greene’s contract not renewed,” EMC’s Mark “Zilla” Twomey points out that Greene’s contract expires in late July. That explains the timing of the move, but not the reason for it.

July 8, 2008  11:41 AM

VMware CEO Greene gets pink slip

Dave Raffo

Last week we ran a story on SearchStorage.com by Beth Pariseau about friction between VMware and storage vendors as more storage gets connected to virtual machines. Because the story focused on technology and not the internal workings of VMware, EMC was not among the storage vendors listed as banging heads with VMware.

But we found out today just how much friction there was between VMware and EMC, which still owns 86% of VMware after spinning out the rest in an IPO 11 months ago. The friction came to a head this morning when VMware’s board, chaired by EMC CEO Joe Tucci, replaced CEO Diane Greene with EMC executive Paul Maritz. Maritz joined EMC in February when it acquired his Pi Corp. and installed him as head of its Cloud Division. Before he started Pi, Maritz spent 14 years at Microsoft and served on its executive committee.

EMC did not say Greene was fired, but its press release did not say she left on her own. Nor did it include any comment from Greene. On behalf of the board, Tucci thanked Greene “for her considerable contributions to VMware” and then predicted that VMware would increase its market share lead in the server virtualization market.

Execs from VMware and EMC have never had a warm relationship, but Greene and her team were allowed to run the company as they wanted as long as revenue kept growing at astronomical rates every year. Today’s release also mentioned that VMware was “modestly” adjusting its 2008 revenue guidance below the 50% year-over-year growth it forecast. But a slight miss is hardly enough reason to fire a CEO who has had nothing but success since EMC bought the company in early 2004. Perhaps it was an opening, though, and an excuse to make a move the board had wanted to make for a while.

One financial analyst who follows both companies attributed the move to “a fundamental difference between VMware and EMC management” more than any failures on Greene’s part. There shouldn’t be much difference now — Greene was the lone VMware executive among VMware’s eight directors. Maritz becomes the sixth EMC rep on the board.

Maritz’s Microsoft background could be another reason for the move. Microsoft’s Hyper-V is now on the market as a competitor to VMware, which may be making some investors nervous. VMware’s stock price has also tumbled from $125.25 last October to $53.19 at today’s opening, but that was largely attributed to the overall market. Today’s news only accelerated the fall: shares dropped more than 24% to $40.19 at the market’s close.

Forrester Research analyst Galen Schreck said the drop before today was likely due to market conditions more than increased competition. “I don’t think the entrance of Hyper-V has made much of an impact in this short period of time,” he said.

Going forward, Schreck said Maritz might want to take more of an industry-leader role than Greene did. “Diane had been characterized as a humble person, not really flashy, not a big evangelist,” he said. “Paul could take on a role with more public presence, be more of a keynote-speaker type.”

EMC hasn’t yet replaced Maritz as head of its Cloud Division, but a company spokesman said he expects a replacement to be named soon.


July 3, 2008  8:59 AM

ILM seems to be MIA, but why?

Tory Skyers

I don’t see the terms ILM or Data Lifecycle Management mentioned much anymore on the Interwebs. Odd that we have this many regulatory pressures, and the one thing that could actually save us some money, time and stress when dealing with those pressures hasn’t been seen in headlines for at least a year, maybe two.

Where did ILM go? Did it morph into something else? Based on all the coverage it was getting two years ago, you’d think we would have progressed to hardware-based products supporting ILM initiatives by now. Yet the storage hardware vendors, with the notable exception of Compellent, still haven’t added ILM features to their hardware. Storage software vendors like Veritas, IBM Tivoli, CommVault, et al., still offer ILM features in their software suites, but today it’s dedupe that has the spotlight. And I’m not so sure I see the reason why.
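For anyone who tuned out the dedupe wave: the idea is simply to store each unique chunk of data once and replace every repeat with a reference. A minimal sketch in Python (deliberately simplified; real products use variable-sized chunking and keep the index on disk):

```python
import hashlib

def dedupe(chunks):
    """Store each unique chunk once; reference duplicates by hash."""
    store = {}  # digest -> the single stored copy of the chunk
    refs = []   # per-chunk references into the store
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk  # first sighting: keep the bytes
        refs.append(digest)        # every sighting: keep only a pointer
    return store, refs

blocks = [b"block-A", b"block-B", b"block-A", b"block-A"]
store, refs = dedupe(blocks)
print(len(blocks), "chunks in,", len(store), "stored")  # 4 chunks in, 2 stored
```

Useful, certainly, for backup data full of repeats. But it only shrinks what you store; it says nothing about whether the data should still be on fast disk, on tape, or gone entirely.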

I understand the marketing behind dedupe: “Hard economic times are ahead, so save money and don’t buy as much disk.” But if you look at the sales figures from the leading storage vendors, they are all meeting their sales estimates, and in some cases exceeding those estimates by a good margin, so businesses apparently haven’t yet gotten into the whole “save money” or the “buy less” aspect of that marketing push.

Managing one’s data seems to me the better way to spend: knowing when to move data to cheap disk, then to commodity tape, and finally through to destruction. It would free up capacity on fast, expensive disk and reduce the effort needed to satisfy policy pressures. I distinctly remember, eons ago, sitting in a conference hall listening to Curtis Preston for the first time, and this topic was the thrust of his talk: Manage your data, figure out where it should live, and put it there.

This message holds true now more than ever. Just think: three or four years ago, 250 GB drives were the largest SATA drives certified for storage arrays. Now, with 750 GB to 1 TB in each slot, we have even more need to know when data was created and when it should be archived or destroyed. With SSDs rapidly making their way into storage arrays, data management and the subsequent movement of data become a crucial cost-saving tool. A policy pass like the sketch below is the basic idea.
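Here’s a minimal sketch of such an age-based tiering pass in Python. The paths and thresholds are invented for illustration, and real ILM products classify data by business value and regulatory class rather than raw file age (mtime here is a crude stand-in for creation date):

```python
import os
import shutil
import time

# Hypothetical tiers and retention thresholds -- in practice these come from
# regulatory and business policy, not hard-coded constants.
FAST_TIER = "/mnt/fc_disk"
CHEAP_TIER = "/mnt/sata_archive"
ARCHIVE_AFTER_DAYS = 90
DESTROY_AFTER_DAYS = 7 * 365

def age_in_days(path):
    """Days since last modification -- a crude proxy for data age."""
    return (time.time() - os.path.getmtime(path)) / 86400

def apply_policy():
    # Demote aging files from fast, expensive disk to cheap capacity disk.
    for name in os.listdir(FAST_TIER):
        src = os.path.join(FAST_TIER, name)
        if age_in_days(src) > ARCHIVE_AFTER_DAYS:
            shutil.move(src, os.path.join(CHEAP_TIER, name))
    # Destroy files whose retention period has expired.
    for name in os.listdir(CHEAP_TIER):
        path = os.path.join(CHEAP_TIER, name)
        if age_in_days(path) > DESTROY_AFTER_DAYS:
            os.remove(path)

if __name__ == "__main__":
    apply_policy()
```

The point isn’t the 20 lines of code; it’s that somebody has to own the policy behind the constants.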

The part about all this that baffles me the most is liability. You’d think that if you were going to be legally liable either to hold onto or destroy files or information, you’d probably want an automated, “people-resistant” system in place to handle all that. At another recent TechTarget event on DR, Jon Toigo talked about building a data map and knowing how valuable your data is in order to best protect the most valuable data. Sounds like a straightforward, common-sense approach, but as far as I know only one vendor is doing it in hardware, and most of the software vendors have gone quiet with their marketing behind it.

The term Information Lifecycle Management conjures up thoughts of managed data, orderly storage environments, documented processes, and responsible governance for me. All these things I’ve seen brought up in blogs (some of my own included) and articles, expressed as concerns for businesses large and small. So why has ILM gone underground?


July 2, 2008  10:10 AM

More server virtualization benchmark drama

Beth Pariseau

Last time on As the Benchmark Turns, we saw NetApp pull a fast one on EMC. This week’s episode brings us into the torrid realm of server virtualization and Hyper-V.

It began with a press release from QLogic and Microsoft lauding benchmark results showing “near-native throughput” on Hyper-V hosts attached to storage via Fibre Channel, to the tune of 200,000 IOPS.

Server virtualization analyst Chris Wolf of the Burton Group took the testing methodology to task on his blog:

The press release was careful to state the hypervisor and fibre channel HBA (QLogic 2500 Series 8Gb adapter), but failed to mention the back end storage configuration. I consider this to be an important omission. After some digging around, I was able to find the benchmark results here. If I was watching an Olympic event, this would be the moment where after thinking I witnessed an incredible athletic event, I learned that the athlete tested positive for steroids. Microsoft and QLogic didn’t take a fibre channel disk array and inject it with Stanzanol or rub it with “the clear,” but they did use solid state storage. The storage array used was a Texas Memory RamSan 325 FC storage array. The benchmark that resulted in nearly 200,000 IOPS, as you’ll see from the diagram, ran within 90% of native performance (180,000 IOPS). However, this benchmark used a completely unrealistic block size of 512 bytes (a block size of 8K or 16K would have been more realistic). The benchmark that resulted in close to native throughput (3% performance delta) yielded performance of 120,426 IOPS with an 8KB block size. No other virtualization vendors have published benchmarks using solid state storage, so the QLogic/Hyper-V benchmark, to me, really hasn’t proven anything.
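Wolf’s block-size point is easy to check with back-of-the-envelope math, since IOPS only become throughput when multiplied by the block size. Plugging in the figures from his post:

```python
def throughput_mb_per_sec(iops, block_bytes):
    """Convert an IOPS figure into megabytes per second."""
    return iops * block_bytes / (1024 ** 2)

# The headline-grabbing 512-byte result moves surprisingly little data...
print(throughput_mb_per_sec(180000, 512))       # ~87.9 MB/s
# ...while the 8 KB result, with "fewer" IOPS, moves more than ten times as much.
print(throughput_mb_per_sec(120426, 8 * 1024))  # ~940.8 MB/s
```

Which is why a 512-byte block size makes for a great IOPS press release and a poor predictor of real-world behavior.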

I talked with QLogic’s VP of corporate marketing, Frank Berry, about this yesterday. He said Wolf had misinterpreted the intention of the testing, which was only meant to show the performance of Hyper-V vs. a native Windows server deployment. “Storage performance wasn’t at issue,” he said. At least one of Wolf’s commenters pointed this out, too:

You want to demonstrate the speed of your devices, the (sic) you avoid any other bottlenecks: so you use RamSan. You want to show transitions to and from your VM do not matter, then you use a blocksize that uses a lot of transitions: 512 bytes…

But Wolf points to wording in the QLogic press release claiming the result “surpasses the existing benchmark results in the market,” which he says “implies that the Hyper-V/QLogic benchmark has outperformed a comparable VMware benchmark.” Furthermore, he adds:

Too many vendors provide benchmark results that involve running a single VM on a single physical host (I’m assuming that’s the case with the Microsoft/QLogic benchmark). I don’t think you’ll find a VMware benchmark published in the last couple of months that does not include scalability results. If you want to prove the performance of the hypervisor, you have to do so under a real workload. Benchmarking the performance of 1 VM on 1 host does not accurately reflect the scheduling work that the hypervisor needs to do, so to me this is not a true reflection of VM/hypervisor performance. Show me scalability up to 8 VMs and I’m a believer, since consolidation ratios of 8:1 to 12:1 have been pretty typical. When I see benchmarks that are completely absent of any type of real world workload, I’m going to bring attention to them.

And really, why didn’t QLogic mention the full configuration when it first reported the test results? For that matter, why not use regular disk in the testing, since that’s what most customers will be using?

On the other hand, QLogic and Microsoft would be far from the first to put their best testing results forward. But does anyone really base decisions around vendor-provided performance benchmarks anyway?


July 1, 2008  10:36 AM

QLogic boss says he’s too busy to quit

Dave Raffo

QLogic CEO H.K. Desai had planned to step down this year, but says there is too much going on in the storage industry for him to leave now.

That’s why his heir apparent left QLogic in March, and QLogic isn’t looking for a replacement. “I’ll be hanging around for awhile,” Desai, who has been QLogic’s CEO since 1995 and its chairman since 1999, told me.

He wouldn’t have said that a year ago. QLogic hired Jeff Benck from IBM as COO and President in April 2007 with the understanding that he would replace Desai in a year. But Desai stayed put and Benck left QLogic – eventually signing on with its HBA rival Emulex as COO.

Desai says QLogic isn’t trying to replace Benck, because it doesn’t need to replace the CEO. “The whole plan was for us to have a long-term transition, and it didn’t work out,” Desai said. “I’ve been fully engaged the last few months and I expect to be fully engaged for awhile. I won’t even talk to the board [about a COO] unless we plan a transition, which I don’t expect to do.”

Desai said his change of heart came because the storage networking world is so busy now. The industry is in a transition to 8-Gbps Fibre Channel, with Fibre Channel over Ethernet (FCoE) and 16-gig FC close on its heels. QLogic is also trying to make inroads in InfiniBand. “There’s so much activity going on now, I don’t want to disturb anything,” he said. “Our team has been working together for so many years, and I don’t want to bring anybody in from the outside to disturb things. There’s a time to do a transition, and I don’t think this is the time.”

Of course, if Desai is waiting for a lull in emerging storage technologies before he retires, he just might work forever.


July 1, 2008  10:33 AM

Ibrix gets a new leader

Beth Pariseau

Yesterday I met with Ibrix, a company I haven’t caught up with in a while. They seem to be doing well with movie production studios, and our meeting featured a screening of Wall-E, the latest picture from Ibrix customer Pixar.

Along the way, I was told that while his business cards still say VP of marketing and business development, Milan Shetti is actually the new president of Ibrix. The handover from former interim CEO Bernard Gilbert has happened over the last few days, according to Shetti. The CEO before Gilbert, Shaji John, remains chairman of the board.

“My gut is that if [Shetti] performs well over the next six months, he’ll end up in that CEO role,” Taneja Group analyst Arun Taneja said.

Generally, companies that undergo frequent reorganizations aren’t doing spectacularly. But Ibrix presented this second transition as a planned one, saying Gilbert had been focused on getting operations moving along, while Shetti has already become the public face of the company. Shetti said Gilbert planned on serving one year as CEO, and that term has been completed.

Or as StorageIO Group analyst Greg Schulz put it, “Ibrix has been shifting from development-focused to that of marketing, business development, partner/reseller/OEM recruitment and sales deployment execution.” In the past few years, Ibrix has signed channel deals with Dell and HP. Shetti played a key role in both deals, according to the company.

Shetti told me yesterday that Ibrix has been winning deals like the ones with Pixar and Disney because its clustered file system is software-only, and customers can choose their own hardware. Ibrix software can also be embedded within HP or Dell servers at the factory before shipment, so the customer doesn’t have to load software agents on every node.

Ibrix’s software is rumored to be shipping with EMC’s Hulk, but Shetti was mum on that subject.


June 27, 2008  10:50 AM

Brocade still a fan (not FAN) of software

Dave Raffo

Although Brocade has a lot on its connectivity plate these days as it transitions to 8-Gbps FC switches, plots its move to FCoE and gets into the HBA game, it still has plans for its fledgling software business.

Brocade’s FAN (file area network) initiative has been a bust so far, but at Brocade’s Tech Day Thursday, Max Riggsbee, CTO of the files business unit, laid out a roadmap for a refocused data management portfolio.

Nobody from Brocade used the FAN acronym, but file management remains a key piece of its software strategy, beginning with the recently released Files Management Engine (FME) product. FME is a policy engine that handles migration and virtualization of Windows files. Brocade will add SharePoint file services, disaster recovery for SharePoint and file servers, content-driven file migration and data deduplication in a series of updates through 2010.

Brocade has been fiddling with its files platform and overall software strategy for months now. It dumped its Branch File Manager WAFS product earlier this year, but kept the StorageX file migration software, and on Thursday revealed plans for new replication products, including one that deduplicates and compresses files moved across the WAN.

While the new lineup looks impressive on paper, it will take time to play out. And Brocade is walking a tightrope between expanding its product line and treading on its storage system partners’ turf with file virtualization and replication. “We are interested to see how this is received by storage/server vendors,” Wachovia Capital Markets financial analyst Aaron Rakers wrote of the replication product in a note to clients.

Brocade execs say they will take great care to work with partners and avoid competing with them — something they say rival Cisco does with many of its products. Ian Whiting, Brocade’s VP of data center infrastructure, said the new products will be developed jointly with its major OEM partners. “Our business model is all around partnerships with bigger systems companies,” he said. “We believe that’s how customers will consume the technology.”

At least Brocade’s not calling its new files strategy FAN 2.0. That’s a good start.


June 26, 2008  1:53 PM

Litigation update: Sun 1, NetApp 0

Beth Pariseau

According to a blog post today from Sun’s general counsel Mike Dillon, at least one of the patent-infringement counts is off the table in court after the US Patent Office (PTO) granted a re-examination request filed by Sun.

With regard to one NetApp patent, the ‘001 patent, the PTO has issued a first action rejecting all the claims of this patent. Based on the positive response we received from the PTO, we asked the trial court to stay a portion of the litigation. Obviously, it doesn’t make sense to go through the expense and time of litigating a patent in court if the PTO is likely to find it invalid. The court agreed with our request and at least one NetApp patent has thus far been removed from the litigation.

NetApp started all this by filing its ZFS lawsuit against Sun with great fanfare last September, but Sun has been the aggressor ever since. Sun countersued, accusing NetApp of violating Sun’s patents, and tacked on another lawsuit in March alleging patent infringement related to storage management technology NetApp acquired when it bought Onaro in January.

This is the first ruling on the six re-examination requests Sun has filed; Dillon said Sun expects to hear more throughout the year.

NetApp refused comment on the latest developments and a survey of NetApp’s many executive blogs hasn’t turned up any further discussion, though some of Dave Hitz’s testimony is now being made available by Sun online.


June 25, 2008  4:01 PM

Nirvanix readying cloud-based NAS

Beth Pariseau

Startup Nirvanix today unveiled CloudNAS, which will combine Nirvanix software agents with Linux or Windows servers at the customer site to offer standard NAS storage in the cloud. Until now, Nirvanix and most other cloud storage services such as Amazon’s S3 required API integration between applications and the cloud service.
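The difference in integration effort is the whole point. With an API-based service, the application has to be coded against the provider’s interface; with CloudNAS, the cloud shows up as an ordinary mount point. A hypothetical sketch (the endpoint, bucket and mount path below are invented for illustration, and authentication is omitted):

```python
import http.client
import shutil

# API model (S3-style): the application speaks the service's HTTP interface.
conn = http.client.HTTPSConnection("storage.example-cloud.com")  # hypothetical
with open("db.dump", "rb") as src:
    conn.request("PUT", "/mybucket/backups/db.dump", body=src)
print(conn.getresponse().status)

# NAS model (CloudNAS-style): the cloud is just a mounted file system, so any
# application that can write a file can use it with no code changes.
shutil.copy("db.dump", "/mnt/cloudnas/backups/db.dump")
```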

Nirvanix has had three large companies alpha testing CloudNAS, and is now starting up an open beta program, according to chief marketing officer Jonathan Buckley. “CloudNAS can run on a laptop,” Buckley said. “We’re looking to bring dozens more companies into the mix.”

As far as I know, this is a first. Rackspace’s Mosso offers cloud-based NAS, but only for Web developers. Nirvanix says CloudNAS will use commonly available interfaces, including NFS on Red Hat Linux 5 or SUSE 10 and CIFS on Windows XP. Customers must provide the server hardware and hook it up to Nirvanix’s cloud. The company charges 25 cents per GB per month for its service, but the NAS software will be free. Customers will have the option of a $200 per month support contract.
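At those rates, the monthly bill is simple arithmetic; the 1 TB figure below is an invented example:

```python
def cloudnas_monthly_cost(gb_stored, per_gb=0.25, support=200):
    """25 cents per GB per month, plus the optional support contract."""
    return gb_stored * per_gb + support

print(cloudnas_monthly_cost(1000))             # 1 TB with support: $450.0
print(cloudnas_monthly_cost(1000, support=0))  # without support: $250.0
```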

Just don’t look for CloudNAS to replace your production NAS boxes any time soon. Buckley said he expects the service will be used mainly for backup, archiving and other applications that can tolerate latency. “We’re not going to restrict what people put on it, but cloud storage can never be as fast as local enterprise storage,” Buckley said.


June 25, 2008  12:17 PM

The enterprise and open-source storage

Beth Pariseau

Jesse at SanGod wrote an interesting post the other day entitled “Enterprise storage vs….not.”

I have a cousin. Very well-to-do man, owns a company that does something with storing and providing stock data to other users. I don’t pretent do know the details of the business, but what I do know is that it’s storage and bandwidth intensive.

He’s building his infrastructure on a home-grown storage solution – Tyan motherboards, Areca SATA controllers, infiniband back-end, etc. Probably screaming fast but I don’t have any hard-numbers on what kind of performance he’s getting.

Now I understand people like me not wanting to invest a quarter-mil on “enterprise-class” storage, but why would someone who’se complete and total livelihood depends on their storage infrastructure rely on an open-source, unsupported architecture?

Jesse goes on to point out the resiliency and services benefits of shelling out enterprise bucks. His post sparked a conversation between me and an end user I know well, whose shops (in the two jobs I’ve followed him through) are as enterprise as they come. This guy knows his way around Symmetrix, Clariion and NetApp filers, and when it comes to the secondary disk storage and media servers he’s building for his beefy Symantec NetBackup environment, he’s going with Sun’s Thumper-based open-source storage.

Obviously it’s a little different from cobbling together the raw parts, and Sun offers support on this, so it’s kind of apples and oranges compared with what Jesse’s talking about. But I’ve also heard similar withering talk about Sun’s open storage in particular, and can only imagine Sun’s open-source push is making this topic timely.

This is the second person I’ve talked to from a big, booming enterprise shop who picked Thumper to support NetBackup.  The first, who had the idea more than a year ago, was a backup admin from a major telco I met at Symantec Vision.

Obviously it’s not mission-critical storage in the sense that Symmetrix or XP or USP are, but I’d venture to guess that for a backup admin, his “complete and total livelihood” does depend on this open-source storage. As for the reasons to deploy it instead of a NetApp SATA array, EMC Disk Library or Diligent VTL? Both users cited cost, and the one I talked to more recently had some pointed things to say about what enterprise-class support often really means (see also the Compellent customer I talked with last week, who found that the dollars he spent made him less appreciative of the support he got from EMC).

This ties in with a recent conversation I had with StorageMojo’s Robin Harris. He compares what’s happening in storage to the relationship between massively parallel systems and the PC in the era of the minicomputer. When the PC arrived, the workstation market was dominated by makers of minicomputers, the most famous being Digital. Minicomputers were proprietary, expensive and vertically integrated with apps by vendors, much like today’s storage subsystems. Just as the PC introduced a low-cost, industry-standard workstation and the concept of a standardized OS, Harris predicts clustered NAS products built on lower-cost, industry-standard components will bring about a similar paradigm shift in enterprise storage.

While there will obviously remain use cases for all kinds of storage (after all, people still run mainframes), I suspect people are starting to think differently about what they’re willing to pay for storage subsystems in the enterprise, regardless of the support or capabilities they’d get for the extra cash. And I do think that on several fronts, whether open-source storage or clustered NAS, it is looking, as Harris put it, like the beginnings of a paradigm shift similar to those that have already happened with PCs and servers.

That’s not to say I think Sun will win out, though. For all Sun’s talk about the brave new world of open-source storage, I haven’t heard much emphasis placed on the secondary-storage use case for it. And that so far is the only type of enterprise deployment for Thumper I’ve come across in the real world.

