Hewlett-Packard’s storage division has taken its share of heat in recent years. It has often underachieved from a revenue standpoint, been knocked by competitors and analysts for lack of innovation, and been reorganized internally. Now it’s led by Dave Roberson, former Hitachi Data Systems (HDS) CEO and currently VP of HP’s enterprise storage business.
So what was the reaction of the storage group and upper management to a sharp hike in revenue from storage last quarter? Not what you might think.
“If there’s anybody taking a lap around the building, I sure haven’t seen him because we’ve got a lot more work to do to be a participant in the way that HP ought to be a participant in the storage market,” CEO Mark Hurd told analysts on the earnings conference call Wednesday night.
OK, so no laps around the building. But HP did report storage revenue increased 14 percent year over year to just over $1 billion, led by a 17 percent gain in its midrange EVA and a 21 percent improvement with its high-end XP [rebranded HDS] arrays. Still, Hurd says that’s only the start. He expects storage to be attached to a higher percentage of ProLiant servers going forward, stronger adoption of new HP storage products, and a return on some of the money HP has laid out on storage acquisitions in recent years.
“Listen, we can just do better than this,” he said. “We’ve got new product we’ve announced into the market during the quarter in the storage space that we are excited about. We’ve begun to bring together some of the acquisitions we’ve done and align some of our Storage Essentials software with our platforms. We are doing more work in the channel and it turned into good growth. I mean, for us to get mid-double-digit growth or 14% growth in the quarter is better than we’ve seen.”
I’m guessing the new product Hurd referred to was the ExDS9100 Web 2.0/NAS system HP unveiled with much fanfare this month, although it also launched a new EVA in February. In any case, Hurd emphasized there is still work to be done, and he expects to see it get done: “I don’t want you for one minute to think we are satisfied with it [storage success].”
Hurd has accomplished many of his goals since he arrived at HP three years ago. Now we’ll see if he can complete a storage turnaround.
Over the last several weeks, I’ve been writing about the self-healing storage arrays announced by startup Atrato Inc. and a more seasoned company, Xiotech Corp. Seeing this, another company stuck its hand up to say, “Hey, we have that too!”
This other company is Data Direct Networks, and its self-healing product has actually been on the market in one form or another for some time (DDN has been in business for 10 years), but I wasn’t aware of it until the company got my attention last week.
DDN’s product line consists of four hardware models: the S2A6620, 9550, 9700 and 9900. The self-healing stuff resides mostly in the S2A operating system. Like BlueArc’s Titan NAS heads, DDN’s products put some of the heavy-duty processing of data, such as parity calculations, into silicon by way of field-programmable gate arrays (FPGAs). According to DDN’s CTO Dave Fellinger, this, along with parallelization, means that the arrays can write and read at the same rate.
What that sets up is the ability to calculate two-disk parity on every read, as well as every write, without performance degradation. So theoretically the system never goes into rebuild mode, because it already operates in that mode all the time.
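To illustrate what that parity math involves (this is my own back-of-the-envelope sketch, not DDN’s actual S2A logic), two-disk parity of the RAID-6 variety pairs a simple XOR parity (P) with a Reed-Solomon-style parity (Q) computed in the Galois field GF(2^8), exactly the kind of repetitive byte-wise arithmetic that maps well onto FPGAs:

```python
# Hypothetical sketch of RAID-6-style dual parity (P and Q).
# Not vendor code; just the textbook GF(2^8) construction.

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) with polynomial x^8+x^4+x^3+x^2+1 (0x11d)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return p

def dual_parity(stripes):
    """Compute P (plain XOR) and Q (Reed-Solomon) parity bytes for a stripe set."""
    n = len(stripes[0])
    p = bytearray(n)
    q = bytearray(n)
    for i, stripe in enumerate(stripes):
        g = 1
        for _ in range(i):          # generator g = 2^i in GF(2^8)
            g = gf_mul(g, 2)
        for j, byte in enumerate(stripe):
            p[j] ^= byte
            q[j] ^= gf_mul(g, byte)
    return bytes(p), bytes(q)
```

Losing any two stripes, data or parity, still leaves enough information to solve for the missing bytes; doing this arithmetic in silicon on every I/O is what would let a system treat “rebuild mode” as its steady state.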
Like the systems from Xiotech and Atrato, DDN’s systems perform disk scrubbing, as well as isolation of failed disks for diagnosis and attempted repair. The product can conduct low-level formatting of drives, power-cycle individual drives if they become unresponsive, correct data using checksums on the fly if it comes off the disk corrupt, rewrite corrected data back to the disk, and use S.M.A.R.T. diagnostics on SATA disks to determine if the drives need to be replaced. (Atrato also uses S.M.A.R.T., among other error correction codes.)
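As a rough sketch of what a scrub-and-repair cycle like that looks like in software (the function names here are hypothetical stand-ins, not any vendor’s API):

```python
import zlib

def scrub_pass(read_block, rebuild_block, write_block, num_blocks):
    """One hypothetical scrub pass: verify every block against its stored
    checksum and, on a mismatch, reconstruct the block from redundancy
    (parity, a mirror copy, etc.) and rewrite it in place."""
    repaired = []
    for lba in range(num_blocks):
        data, stored_crc = read_block(lba)
        if zlib.crc32(data) != stored_crc:           # silent corruption found
            good = rebuild_block(lba)                # recover from redundancy
            write_block(lba, good, zlib.crc32(good))
            repaired.append(lba)
    return repaired
```

The real arrays layer drive power-cycling, low-level reformatting and S.M.A.R.T. data on top of a loop like this, but the verify-reconstruct-rewrite core is the same idea.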
The key difference between the DDN product and the others is that DDN’s disk arrays are not sealed. According to Fellinger, the company’s experience in large environments suggests that sealed arrays are impractical. So admins swap out individual drives at DDN installations, the largest of which is an 8 PB deployment at Lawrence Livermore National Labs.
However, it seems Xiotech and Atrato are going after slightly different markets than DDN. Each of those vendors talks in terms of 16 TB capacities and 3U rack units. Xiotech said it is specifically going after enterprise accounts with its ISE system; Atrato seems to be targeting a market more similar to DDN’s in multimedia and entertainment. DDN claims to have the big fish, such as AOL/Time Warner, already locked up. And, Fellinger added, the company has just released an entry-level system with 60 drives offering up to 2 GBps of throughput.
For me, anyway, an already interesting and (relatively) new market just got that much more intriguing.
There’s plenty going on out here in Vegas with EMC World in full swing. It seems like there are attendees, press, analysts and EMC’ers stashed in every hotel on the Strip, and about 10,000 people milling around the corridors of the Mandalay Bay Convention Center.
There’s also been plenty of product news so far. EMC finally unveiled its deduping VTLs based on software from Quantum, one of which will also be the first Clariion array to offer drive spin-down, a feature EMC has promised in multiple products going forward. Also on the docket: incremental updates to Avamar’s software and Data Store hardware, and a new mid-market package for Networker.
But the subject on everyone’s lips in the whispering corridors was the one product EMC didn’t spend much time at all talking about: its Web 2.0 storage system, codenamed Hulk and Maui (looks like Hulk’s real name will be InfiniFlex).
While Hulk is shipping, Maui is, to quote StorageMojo blogger and Data Mobility Group analyst Robin Harris, “conspicuous by its absence.” Others who sat in on CEO Joe Tucci’s Q&A with analysts this morning said he was reluctant to answer questions about Hulk and Maui, and his face seemed to darken for a moment when I asked him afterwards if Maui had joined Hulk in shipping yet. But he also gave a straightforward answer, saying it’s not shipping yet but it is scheduled to do so this summer.
Before I could even ask how one was supposed to deploy one without the other, Tucci added, “Maui doesn’t require Hulk and Hulk doesn’t require Maui. They’re both independent but work together.” Furthermore, when both products arrive, EMC “is going to push them both very heavily.”
But for Harris, something doesn’t quite add up. “Joe promised Hulk/Maui in seven months last November,” he said. “EMC showed the hardware at NAB last month. But the software has always been the hard part.”
Word has it Hulk will ship with clustering software from Ibrix. I’ve heard rumblings about what Maui’s major malfunction might be, but nothing definite. Meanwhile, Harris told me that at the show he encountered a customer who’d been given a demo of Maui and described it as a layer of software that sits above local storage pools, “which could serve as a global data repository for multinational companies, tying multiple data centers together.” If that’s the case, it wouldn’t be surprising if getting things synchronized on that kind of scale took some time. And it would also explain why you could run Hulk without Maui, and Maui without Hulk.
But we’ve got another couple of months (it seems) before we find out for sure if the rumors about Maui are correct.
Backup software vendors BakBone and CommVault share the same code base — both platforms came out of work done at the Unix Systems Laboratory when it was owned by AT&T. But in the past week, I’ve heard different stories from both companies.
CommVault reported strong earnings on its last call and says it’s looking into expanding its product into adjacent markets such as data deduplication and records management. There’s also some evidence it’s winning converts from larger rivals Symantec NetBackup and EMC Networker, and winning more deals over $100,000 as well as over $1 million. That move upmarket isn’t by accident, either: CEO Bob Hammer said on the earnings call that building higher-end business is CommVault’s goal for the remainder of this year. Wall Street analysts are still dinging the company over operating margin issues, but that’s neither here nor there for those of us more concerned with the technology side of things.
According to analysts, there are a few things making CommVault hot right now – a transition to disk-based backup, the rising popularity of VMware, and a Windows Server OS refresh. CommVault’s got a pretty good story in each of those categories, but the real trump card according to industry observers is its unified software platform that integrates backup with archiving and replication.
It would seem BakBone shares several parts of that story as well, but it has been a much quieter company. Like CommVault, it has an integrated platform, but it has chosen to focus on data protection rather than expanding into adjacent markets. From a business standpoint, the two have gone in different directions: while CommVault has completed an IPO and had success in the public markets over the past two years, BakBone got booted off the Toronto Stock Exchange in 2004 for accounting issues it still hasn’t resolved. But BakBone has continued to move forward with its product line, and in the last couple of weeks it added support for SharePoint archiving (something many competitors rolled out last year) and new integration with VMware Consolidated Backup (VCB).
BakBone previously offered integration with VCB, according to senior product marketing manager Gary Parker, but like many of its competitors in data backup, it required scripting based on a publicly available API from VMware. With version 8.1 of its software, released today, BakBone has packaged those scripts into new software agents that sit on the client and the VCB proxy server, allowing VCB management through the BakBone NetVault interface.
Pretty nifty, and according to Taneja Group analyst Eric Burgener, not necessarily something everybody’s doing (TSM and EMC Networker, for example, don’t do it). But plenty of competitors have beaten BakBone to the punch, including NetBackup and CommVault (Symantec’s Backup Exec still has no VMware client. What’s up with that?). And according to Burgener, “there is some value to it, but it’s not rocket science.”
According to BakBone VP of marketing Jeff Drescher, BakBone’s goal isn’t to be the first to market but to do right by its customers with what it does release. And it does have some blue-chip customers to its credit, such as Yahoo! and GE Capital, though they’re either shy about talking to the press or BakBone hasn’t publicized them much.
“These customers are using BakBone to deal with departmental issues, in smaller environments,” Burgener said. “Their pitch is simplicity and covering a broad swath of backup functions.”
But right now I have to say my reporterly curiosity is piqued when it comes to these fraternal twins of the backup market. It’s hard for me to tell right now whether BakBone has truly fallen behind CommVault or has just been quieter.
Drescher also pointed out the lag between the announcement and adoption of emerging technologies, saying BakBone is following other vendors with some product features but getting them to customers right on time.
At least one customer I’ve talked to, though, said he’d been waiting for the VCB integration BakBone has just added. “We’ve waited a little bit longer than we would have liked,” said Bryan Vonk, technical specialist for the Vancouver Police Department. But he added that he watched admins at the City of Vancouver’s main data center wrestle with the scripting for VCB and was content to wait for BakBone. Furthermore, he said, now that the issue has been addressed, he has no pressing items left on his wish list. So we’ll just have to wait and see how this tale of two backup products plays out from here.
Meanwhile, Atempo, another vendor marketing to midsize companies and departments, also made incremental updates based on a desire to branch out its product, much like CommVault. In Atempo’s case, it has added Mac and Linux backup clients to its LiveBackup PC and laptop CDP software, support for more granular administrative roles than just “admin” or “user,” support for a group management console, and more scalability thanks to a change in how the product’s underlying database arranges data.
Like CommVault, Atempo is looking to take this product further upmarket than it has played in the past. That’s especially interesting when you think about how storage giant EMC and others are trying to figure out how to take their products downmarket at the same time. Which way is the market heading? I’m so confused.
(Photo courtesy 416style on Flickr)
Heard a story on the news yesterday and couldn’t help but wonder if it might have implications for green data centers.
It was a report on green roofs, an emerging practice of placing a layer of soil on the flat roof of a building or house and then planting vegetation. It’s chiefly done to compensate for the effect of deforestation on air quality in cities, and to manage stormwater runoff more effectively in the concrete jungle.
But there’s another effect green roofs have, according to the report: the average flat roof can climb to between 150 and 200 degrees Fahrenheit in the summer months, exposed as it is to the hot sun. A green roof can bring that temperature down to the vicinity of 80 or 90 degrees, alleviating some of the stress on the building’s air-conditioning system and burning less energy to cool it.
I’m sure I don’t have to explain the implications of this for helping to cool data centers, provided they’re in the right location. As it turns out, it would seem there are at least a few people out there who got that idea as well.
IBM hasn’t said much about its roadmap plans for Diligent Technologies since buying the VTL dedupe vendor last month, but Diligent CEO Doron Kempel filled me in on some details this week at Storage Decisions.
Kempel still has no title at IBM, but he’s working on the integration and product development roadmap that stretches through 2009. First on the list is a clustered system. Diligent was close to completing a beta release of a two-node cluster when the deal went down, and those plans continue.
“Nobody has clusters for inline deduplication,” Kempel said. Data Domain is also working on a clustered system and NEC HydraStor dedupes across the nodes in its grid, but Kempel thinks Diligent will come out ahead in performance. “To cluster inline, you need indexes in memory to be in sync,” he said. “It took us 24 months to develop this technology, and I believe we won’t see something similar any time soon.”
The next step is what Kempel calls a “blue wash,” which means IBM will put its GUI on Diligent’s ProtectTier and begin shipping it on IBM hardware. He expects that to happen by the end of 2008. Then in 2009, plans call for IBM/Diligent to add replication followed by a NAS interface and IBM mainframe support for ProtectTier dedupe.
Diligent had planned to hire 30 engineers this year, but Kempel said that number is now 50 after the sale. He also said Diligent had around 200 customers at the time of the acquisition, with about half of them coming through its reseller deal with Hitachi Data Systems. IBM and HDS are saying the partnership will continue post-acquisition.
Brocade is off to a great start in 2008 … if the vendor’s goal is to morph into a services company.
Brocade reported revenue from last quarter of $354.9 million Wednesday night. That’s up 3 percent from last year, and ahead of Brocade’s guidance and Wall Street expectations. But the increase comes largely from services revenue, which grew 32 percent over last year. Brocade’s product revenue actually dropped 2 percent from last year, with edge switches down sharply.
Brocade execs point out that revenues usually fall in the first quarter of the year and this year’s drop was less than usual. But they also gave lower guidance than expected for this quarter, blaming it in part on the macroeconomic conditions that they played down a few quarters ago.
Perhaps that’s why, on the day after its revenue upswing, Brocade’s stock opened 12 cents below the previous close and analyst Shebly Seyrafi of Caris & Company downgraded it from Buy to Average.
Looking past the numbers, as the quarter ended there are questions about nearly all of Brocade’s most important current and future products.
Directors. Brocade says its new DCX Backbone director did better than expected, and director revenue increased 19 percent from last year. Still, Brocade’s short-term success with the DCX could depend on how fast Fibre Channel over Ethernet (FCoE) ramps. Its rival Cisco is betting on customers going to FCoE sooner rather than later, with 8-gig switches not planned before the fourth quarter (Cisco will demo 8-gig cards for its MDS 9513, 9509 and 9506 directors next week at EMC World), while Brocade is counting on widespread adoption of 8-Gbps Fibre Channel well before FCoE. In any case, it might take customers a while to figure out their next move, which would delay sales.
Switches. Revenue from Brocade’s edge switches fell 2 percent from the previous quarter and 16 percent year over year. Did somebody say iSCSI? At least one financial analyst thinks the increase in director sales and the decrease in switch sales show customers are keeping FC for enterprise implementations but switching to iSCSI for smaller storage rollouts.
“We believe the low-end FC switching market could be under pressure from the rapid growth of the iSCSI market,” Kaushik Roy of Pacific Growth wrote in a note to clients today. “The iSCSI protocol seems to be a good enough solution at the present time for the low-end storage connectivity.”
HBAs. Brocade CEO Mike Klayko said one major OEM qualification (probably IBM or HP) of the new HBAs is coming soon, with significant revenue to follow next year. Still, Brocade execs hedged when asked if they still expect HBAs to make up 10 percent of revenue within a few years.
FCoE. Brocade walks a tightrope with FCoE, wanting to be seen as a technology leader without pushing the standard on customers too quickly and risking its 8-gig sales. That’s why Klayko said Brocade had one design win for FCoE, then in the next breath added that FCoE will cost more than FC at the start and everybody is cost-conscious these days. As for the design win, Klayko said, “We just wanted to make a statement we’re clearly in the market with competitive products.” In other words, Cisco/Nuova isn’t the only game in town for FCoE switches.
HP announced this morning that it will pay $13.9 billion for IT services company Electronic Data Systems (EDS) in a transaction expected to close in the second half of the year. EDS will remain a separate business unit known as “EDS, an HP company.”
This acquisition — HP’s largest since Compaq — makes HP the second-largest computer company in the world, behind IBM. The scuttlebutt is that’s no coincidence, that EDS is meant to help HP match IBM Global Services specifically. EDS was the second-place finisher in annual worldwide services revenue with $22 billion, behind only IBM’s $54 billion, according to The New York Times. There’s also a history of HP and IBM jockeying for position in services: HP tried to buy another consulting firm, PricewaterhouseCoopers Consulting (PwCC), in 2000, and lost out to IBM.
Illuminata analyst Gordon Haff, blogging about today’s acquisition, sheds some light on those events and how they may have led to the Compaq merger:
When last we saw this play, it was with Carly Fiorina in the role of HP CEO looking to spend a reported $17 to $18 billion on Pricewaterhouse Coopers Consulting (PwCC) in 2000. A lousy set of quarterly results turned in by HP helped to scotch that deal. Nor did it help that a lot of observers thought that HP was offering way too much for an organization with $6.7 billion in annual revenues (2001) and about 33,000 employees. IBM seemingly provided evidence of this view when it bought PwCC in 2002 for only about $3.5 billion. (A bit of an unfair comparison given the economic and other events of 2001, but still…) Carly went on to get her acquisition kicks by gobbling up Compaq instead.
So what’s different this time around? According to Haff, it’s not the idea but its execution. Rather than looking to pay twice PwCC’s revenue, this time HP is spending less than EDS’ $22 billion in annual revenue. Haff also points out, “Mark Hurd has made remarkably few changes to HP’s strategic direction since he took over. … The difference from times past is that Mark has a track record for keeping things ship-shape.”
Tom Foremski at Silicon Valley Watcher, commenting on rumors of the deal yesterday, pointed out some of the risks that HP still faces, such as a falling stock price for EDS in recent months and the potential for such a large acquisition to be a drain on the company. “HP would still need to gain a high-end IT consultancy business in order to compete with IBM,” he added.
Meanwhile, HP is far from slowing its services campaign, also reportedly in talks to buy billions of pounds worth of data center facilities from British Telecom in the UK, presumably the better to make a matching European services push. I wonder if somewhere in the midst of all this is a plan to take on EMC’s Fortress as well, and redeem Upline at the same time.
The EMC factor is another interesting wrinkle. According to a note sent out by Wachovia analyst Aaron Rakers this morning, EMC shares could come under pressure from the deal because EDS is one of its preferred partners. “Some industry sources suggest that EDS could generate as much as $200 million in EMC revenue. In Apr 08, EMC extended its EDS relationship with new Information-centric Consulting Services (leveraging RSA offerings).” Other members of EDS’ partner program include Cisco, Sun Microsystems, Microsoft, SAP, Oracle and Xerox.
With data deduplication in the news today, I recommend checking out the responses to Jon Toigo’s questionnaire for data deduplication vendors. I found his questions about backing up deduped data to tape and the potential legal ramifications of changing data through dedupe especially interesting. The responses from the vendors so far about hardware-based hashing are also interesting, in that they seem to break down according to whether or not their companies offer a hardware- or software-based product.
It would be pretty disappointing if Hifn’s announcement of hardware-based hashing led to a religious war around software- vs. hardware-based dedupe systems. It’s clear (and has been generally accepted, or so I thought) that hardware performs better than software, meaning it’s in users’ best interest to improve the throughput of data deduplication systems by moving processor-intensive calculations to hardware. And the dedupe market is full of enough FUD as it is.
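For the curious, the processor-intensive step being argued over is essentially this: split incoming data into chunks, hash each chunk, and store only the chunks whose digest hasn’t been seen before. Here’s a toy fixed-size-chunk version in Python (real products use variable-size chunking and far more engineering; the hash call is the part a card like Hifn’s would offload):

```python
import hashlib

def dedupe(stream, chunk_size=4096, store=None):
    """Toy inline dedupe: keep one copy of each unique chunk, keyed by its
    SHA-1 digest, and record a 'recipe' of digests to reassemble the stream."""
    store = {} if store is None else store
    recipe = []
    for off in range(0, len(stream), chunk_size):
        chunk = stream[off:off + chunk_size]
        digest = hashlib.sha1(chunk).hexdigest()    # the offload candidate
        store.setdefault(digest, chunk)             # write only if unseen
        recipe.append(digest)
    return recipe, store

def restore(recipe, store):
    """Rebuild the original stream from the recipe and chunk store."""
    return b"".join(store[d] for d in recipe)
```

Hashing every chunk of every backup stream is exactly the kind of fixed, repetitive computation that tends to benefit from dedicated silicon, which is why the throughput argument keeps coming up.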
Speaking of which, Data Domain and EMC are getting into a slap fight over dedupe, thanks to today’s product announcement from Data Domain (and attendant comparisons to EMC/Avamar), and the fact that EMC is planning to finally roll out deduping tape libraries at EMC World (based on Quantum’s dedupe).
EMC blogger Storagezilla calls the statement by DD in a press release that its new product is 17 times faster than Avamar’s RAIN grid “nose gold” (props for the phraseology, at least), and then points out that Avamar’s back end doesn’t actually do any deduping, which is something I still don’t quite get.
So Data Domain’s box is faster at de-dup than the Avamar back end which doesn’t do any de-dup.
Since the de-dup is host based and only globally unique data leaves the NIC do I get to count the aggregate de-dup performance of all the hosts being backed up?
Yes, I do!
How does Avamar decide what data is ‘globally unique’? If this is determined before data leaves the host, then that processing must be done at the host. ‘Zilla even says he can count the aggregate performance of all the hosts being backed up in the dedupe performance equation… which brings me back to the first point again: Avamar’s back end doesn’t do de-dupe, but it’s faster at dedupe than Data Domain anyway?
Chris Mellor explored this further:
According to EMC, Avamar moves data at 10 GB/hr per node (moving unique sub-file data only). Avamar reduces typical file system data by 99.7 percent or more, so only 0.3 percent is moved daily in comparison to the amount that Data Domain has to move in conjunction with traditional backup software. This equals a 333x reduction compared to a traditional full backup (Avamar has customer data indicating as much as 500x, but 333x is a good average).
‘An EMC spokesperson’ (should we assume it was, or wasn’t, Storagezilla himself?) further stated to Mellor:
“Remember that Data Domain has to move all of the data to the box, so naturally they’re focusing on getting massive amounts of data in quickly. EMC Avamar never has to move all of that data, so instead we focus on de-dupe efficiency, high availability and ease of restore. Attributes that are more meaningful to the customer concerned with effective backup operations.”
Again I ask, where does the determination that data is ‘globally unique’ take place? It’s got to be taking up processor cycles somewhere. The rate at which it makes those determinations, and where it makes those determinations, would be the apples-to-apples comparison with DD, which is making those calculations as data is fed into its single-box system.
All of that overlooks that the real meat and potatoes of dedupe is single-stream performance anyway; total aggregate throughput over groups of nodes (which is really what both vendors are talking about) doesn’t mean as much. For one thing, Data Domain’s aggregate isn’t really aggregate, because it doesn’t have a global namespace yet. For another, I fail to see how EMC can even quote an aggregate TB/hr figure when talking about a group of networked nodes. Doesn’t network speed factor pretty heavily into that equation?
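For what it’s worth, the 333x and 500x figures EMC quotes are just the reciprocal of the fraction of data actually moved, which is easy to sanity-check:

```python
def reduction_factor(percent_reduced):
    """Data-reduction multiplier implied by a percentage reduction:
    99.7% reduced means only 0.3% moved, i.e. a 1 / 0.003 = 333x factor."""
    return 1 / (1 - percent_reduced / 100)
```

A 99.7 percent reduction works out to roughly 333x, and 99.8 percent gets you to 500x, consistent with the customer data EMC cites. The arithmetic is fine; the question is whether the inputs are an apples-to-apples comparison.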
Personally, I don’t think either vendor is really putting it on the line in this discussion (c’mon guys, get MAD out there ;)!). And if Avamar really performs better than Data Domain, why isn’t its dedupe IP being used in EMC’s forthcoming VTLs? (EMC continues to deny this officially, or at least refuses to confirm, but there’s internal documentation floating around at this point that indicates Quantum is the partner.)
Meanwhile, according to EMC via Mellor:
EMC says Data Domain continues to compare apples and oranges because it wants to avoid the discussion that there are a number of different backup solutions that fit a variety of unique customer use cases.
I have to admit this made me chuckle. Most of the discussions I’ve had about EMC over the last year or so have involved their numerous backup and replication products and what the heck they’re going to do with them all long-term. Finally, it seems we have an answer: Turn it into a marketing talking point!
I don’t think Data Domain even really wants to avoid that subject, either. They’re well aware that there are a number of different products out there that fit different use cases, given their positioning specifically for SMBs who want to eliminate tape.
At the same time, it’s interesting to watch the EMC marketing machine fire itself up in anticipation of a new major announcement–the scale and coordination are something to behold. This market has already been a contentious one. It’ll be interesting to see what happens now that EMC’s throwing more of its chips on the table.