Last week I noted that EDS shareholders had filed suit to delay the closing of HP’s acquisition of the IT services company. The Wall Street Journal has since reported that HP and EDS will settle with those shareholders. As part of the settlement agreement, HP and EDS will delay the merger until Aug. 18, allowing investors to reap an additional 5-cent-per-share dividend. The companies have also agreed to turn over additional information, requested by shareholders, about the structure of the deal.
Last week, we saw a good bit of rain falling on cloud storage’s parade. First there was another Amazon outage. Then it came to light that a cloud storage site called The Linkup (née MediaMax) has completely failed because of an apparent problem with data migration. At least that’s what it sounds like from their blog post about going out of business:
It was not possible to satisfactorily complete the move of files from MediaMax to The Linkup as we had expected, and as a result cannot offer a service that meets your expectations and our business requirements. This is a very disappointing outcome for us, and we know it has been a frustrating experience for many of our customers.
Maybe the owners of The Linkup could bounce back by taking Xdrive off AOL’s hands for the bargain price of $5 million and starting over.
Generally, I reserve judgment on the ultimate fate of cloud storage services. I know that online storage had a brief period of interest during the tech bubble but never went anywhere, and some believe this is more of the same. For now, though, I’m willing to believe that this time might be different. Small outfits such as The Linkup get trampled during the gold rush toward any new technology, and perhaps established service providers such as Amazon are going through growing pains. It’s still too early for these events to be anything other than a possible warning sign.
But things have sure looked ugly lately.
Want to avoid having to archive your emails? All you have to do is get elected President. Or, at least, be President Bush.
The overseer of unprecedented government snooping tactics on private citizens has taken umbrage at the suggestion that his email correspondence be similarly vulnerable to prying eyes, saying he’ll veto a bill passed by the House July 8 that would revise the Presidential Records Act and the Federal Records Act to address Presidential email records–specifically, the archiving and preservation thereof.
This all started last year, when a watchdog group claimed that members of the Republican National Committee used their RNC email addresses, which are supposed to be for campaign matters only, to conduct other business with the White House. When asked to turn over those emails, the Bush administration said, “Oops.” Oh, and coincidentally, emails relating to the infamous Scooter Libby/Valerie Plame affair are also among the missing, according to CNN.com.
Now, what do you think would happen if the CEO of even the most powerful corporation attempted to respond that way to an e-discovery request?
That said, I don’t expect this legislation to pass anyway, if the track record of data privacy legislation is any indication.
That’s the question I’ve heard asked in the wake of Brocade’s blockbuster $3 billion acquisition of Foundry this week. Some have suggested that Juniper Networks, which is much more competitive with Cisco in the Internet router market, might have been a better choice.
To get a better sense of where these Ethernet players stand in the market, I talked to some analysts at the Dell’Oro Group, which specializes in tracking the networking market. According to Ethernet analyst Alan Weckel, Juniper has 16% of the total router market, while Foundry has 1% (compared with Cisco’s 65% share). Foundry is also #3, according to Dell’Oro, in the service provider and total Ethernet switching markets, behind Force10 Networks and ProCurve Networking, respectively.
However, Weckel pointed out, Juniper doesn’t register yet in enterprise Ethernet switches, having only announced enterprise products earlier this year; its enterprise-class Ethernet switches aren’t shipping yet. “In routing, Juniper’s a clear number two,” he said. “But on the Ethernet switching side, it’s very early to say.”
Marty Lans, senior director of Brocade’s data center infrastructure group, said that Ethernet is the meat of the product strategy behind the acquisition. “We’re looking to sell from the heart of the data center out,” he said. FCoE and 10 GbE are already areas where Brocade has some products, including FCoE equipment that Lans said will ship when the FCoE standard is ratified, probably later this year.
“Those are within the four walls of the data center,” he said. “This is an extension to our product line meant to go beyond the data center.”
Moreover, Forrester analyst (and, full disclosure, my former news director) Jo Maitland blogged yesterday that
Foundry has all but conceded the enterprise market and has been selling its switches to metro providers building Ethernet MANs. . . . Right now, enterprise networking teams will not buy Brocade (or Foundry) for Ethernet. Period. It’s too risky and operationally foreign. But it’s possible a more robust service provider could do it if there was a competitive angle.
So. Acquire a company that has already shipped product and failed to gain share, or acquire a company with better share in one aspect with a product that could go either way? “It’s a question mark,” said Weckel.
Another clue to the origins of this deal might lie in a name mentioned on Brocade’s conference call: Seth Neiman of Crosspoint Venture Partners. He provided the seed money to found both companies, and just may have had a hand in making the deal happen, according to Maitland.
The news today was about a huge storage networking acquisition by Brocade, but another mammoth merger we covered here earlier is reportedly hitting a snag. The AP reports that some EDS shareholders are trying to pressure EDS into asking HP for more than the $13 billion it’s already been offered. Part of this pressure, according to the AP, is a plan to ask a judge in Collin County, Texas to postpone the company’s annual shareholder meeting. HP declined comment today.
While EDS is a huge player in the IT services industry and the acquisition obviously has tremendous value for HP, these shareholders would do well to reference the recent parable of Carl Icahn and MicroHoo.
The biggest difference between the last time S3 crashed and this time, in my observation, is that there was a much, much bigger chain reaction this time around. Last time, I knew of only a few companies using S3, like photo hosting site SmugMug, and startups that offer online backup services using their own interfaces on the front-end and Amazon’s hardware infrastructure on the back-end.
This time, not only were those types of Web 2.0 companies affected, but much bigger fish also felt the sting: no less than Web 2.0 microblogging phenom Twitter and some iPhone applications crashed along with S3.
The last Amazon outage was attributed to “growing pains” as the service gained popularity. I’d imagine adding popular apps like Twitter and the iPhone constituted another wave of painful growth. This is a new medium, and users of very new storage media accept some level of risk. But two major outages in six months is obviously raising some questions.
“Skype has crashed and stopped responding, Twitter, Tumblr and other major websites are barely working, most aren’t displaying images, widgets or static material that was outsourced to Amazon S3 services,” reported blogger LinkFog as the outage occurred. “It’s kinda funny how this goes against the very nature of the web, in each networks are interconnected in several ways to ensure that a major breakdown won’t happen.”
Others, like a blogger at Web Worker Daily, were not happy with Amazon’s SLAs:
Amazon does offer an SLA for the S3 service, guaranteeing 99.9% uptime or part of your money back. With .1% of a month being around 45 minutes, that means they owe people money. The requirements for claiming a refund, though, are onerous enough that no one except large users will bother (hey, Amazon, how about an automatic refund when you know your servers are down?).
Recent reports suggest that this is actually what will happen.
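For what it’s worth, the blogger’s back-of-the-envelope figure is about right. Here’s a quick sketch of the arithmetic, assuming a 30-day month (the function name is mine, not anything from Amazon’s SLA):

```python
# How much downtime per month a given uptime SLA actually permits.
def allowed_downtime_minutes(uptime_pct: float, days_in_month: int = 30) -> float:
    """Minutes of downtime per month permitted under an uptime SLA."""
    total_minutes = days_in_month * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

print(round(allowed_downtime_minutes(99.9), 1))  # → 43.2
```

Just over 43 minutes a month at “three nines,” so an outage measured in hours blows through the allowance several times over.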
Clearly it’s not a major disaster for people not to be able to Twitter for a few hours. But when it comes to things like the backup services attached to S3, it might be time for people to rethink whether one cloud back-end is the same as another. Amazon’s appeal is that it’s cheap and relatively unrestricted for Web developers–but I hope the backup companies basing their hardware infrastructure on S3 at least inform their end users what the back end is, so they can make an informed decision about service provider reliability.
In the course of observing the festivities on NetApp and EMC blogs, I came across a sneaky little blog post/announcement from EMC about its LifeLine consumer storage product. According to The Storage Anarchist (aka Barry Burke), his post will be the only official announcement from EMC about LifeLine 1.1.
I’ve given up on trying to understand software release-numbering, btw. Sometimes it’s a “dot-oh” release for new OS support. Other times it’s a “dot one” for, say, Linux, Mac and NFS support, Active Directory integration, RAID 0, an embedded search engine, and oh, yeah, drive spin-down in a consumer NAS box.
But from Burke’s perspective as a guinea pig user of the product, maybe this release isn’t so significant. At the very least, it hasn’t checked off all the items on his wish list, including integration with Mozy similar to what was announced for Iomega hard drives last week, better TiVo integration and dedupe.
In case you missed it, there’s been an entertaining exchange going on between EMC, NetApp and even IBM bloggers over a bug in NetApp’s SnapLock software.
It all started when EMC’er Scott Waterhouse of The Backup Blog got his hands on a notification from NetApp to customers urging an upgrade to OnTap 7.2.5 to resolve a vulnerability in SnapLock’s WORM functionality. Waterhouse didn’t go into much detail about exactly what the bug was and quoted selectively from the customer-notification document:
“…versions of Data ONTAP prior to 7.2.5 with SLC have been found to have vulnerabilities that could be exploited to circumvent the WORM retention capability.” They go on to say: “NetApp cannot stand by the SnapLock user agreement unless the upgrade is performed.”
Now this is a really big deal. This is not a trivial little upgrade to OnTAP. This is a big one.
Predictably, he then segued into a sales pitch–“Maybe it is time to explore an alternative?”–without giving much more information about what the problem actually was, or why exactly the upgrade between dot releases of OnTap isn’t trivial.
Waterhouse’s take was then picked up by Mark Twomey, aka StorageZilla, who led with the headline, “NetApp SnapLock Badly Broken.” Twomey also emphasized the fear angle: “Right now none of those who aren’t running 7.2.5 or above are not compliant and it turns out they never were,” without divulging further details about the problem.
This is about where I came in. I tried pinging Twomey to no avail; I also tried hitting up some of the folks on Toasters, the NetApp users forum, to see what they’d heard. I planned to ping NetApp as well, but if the bug was as bad as the EMC’ers were making it out to be, I didn’t expect them to be willing to talk about it.
They surprised me by contacting me before I could get to them, and last Friday chief technical architect Val Bercovici gave me NetApp’s side of the story, telling me, “We expanded our testing on SnapLock to a third class of protection from tampering with the WORM feature.”
The first two classes, which had already been tested, concerned protection against malicious end-user removal of data, as well as protection from malicious administrative actions. The third and most recent class tested against was a case “where knowledge of the source code combined with some other products that are out there could be used for data deletion” inside SnapLock. Bercovici also didn’t want to give all the gory details, saying the vulnerability had not been exploited in the field, and NetApp wanted to keep it that way.
“It’s a highly unusual case, and in any event would be an audited deletion from the system,” Bercovici said. “It’s a level of testing EMC has never done” with Centera, he added.
Not quite “not compliant and never were”. NetApp bloggers were all over the EMC bloggers last week about the tone of their blog posts. It had begun to seem like the EMC-NetApp rivalry had faded a bit, as both companies go up against new competitors and find themselves with bigger fish to fry. But this was just like old times.
Things have gotten so moody so fast that blogger Tony Pearson from NetApp big brother IBM felt the need to tell EMC to pick on someone their own size:
I was going to comment on the ridiculous posts by fellow bloggers from EMC about SnapLock compliance feature on the NetApp, but my buddies at NetApp had already done this for me, saving me the trouble. . . . The hysterical nature of writing from EMC, and the calm responses from NetApp, speak volumes about the cultures of both companies.
But wait, there’s more. Remember how I mentioned heading over to see what was being discussed about this on Toasters? While there I ran across a thread that mentioned OnTap 7.2.5, and contained another message from NetApp to its customers:
Please be aware that we are investigating a couple of issues with quotas in Data ONTAP 7.2.5. As a precautionary measure, we have removed Data ONTAP 7.2.5 from the NOW site as we investigate the issues. We will provide an update as soon as more information is available.
According to Bercovici, OnTap 7.2.5, issued as a bugfix for SnapLock, had its *own* bug, this time one that caused quota panic in some filers. In other words, the bugfix NetApp issued for what it said was an esoteric issue spawned another bug, and this time it caused some filers to ‘blue-screen’, according to the Windows analogy Bercovici used to describe the problem to me.
A follow-up release, which purportedly fixes both bugs, has since come out. As far as I’m concerned, the whole SnapLock bug was a tempest in a teapot, but NetApp still came out of this whole thing with egg on its face, as 7.2.5 introduced a severe and immediate problem in what seems like a well-intentioned effort to protect customers from an obscure corner-case hack. Also, they wound up with multiple EMC bloggers doing the Web equivalent of throwing chairs at them, à la Springer. As they say, no good deed goes unpunished. . . .
Sun yesterday identified Samsung as its SSD supplier, solving at least part of the mystery around the source for its Flash drives. But Sun’s systems group senior director Graham Lovell says that Samsung will be one of several partners, not a sole source. The other partners remain unnamed.
Sun and Samsung also claim that the Flash devices they’ve collaborated on will have five times the durability of other single-level cell (SLC) enterprise Flash drives, such as the ones manufactured by STEC for EMC. Like other SSDs, the NAND devices still have a finite number of write-erase cycles, and single-layered memory cells exacerbate that problem. Lovell said that wear-leveling algorithms will be built into the Flash memory controller on the Samsung SSDs. A certain proportion of memory cells will also be kept in reserve by the drives for wear leveling.
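For readers unfamiliar with the technique, wear leveling is conceptually simple even though vendor implementations like Samsung’s are proprietary: steer each write to the least-worn block, and promote blocks from the over-provisioned spare pool as live ones wear out. A toy sketch of the idea (the class, pool sizes and erase limit here are all illustrative, not Samsung’s actual firmware):

```python
# Toy wear-leveling allocator: each write goes to the least-worn block,
# and exhausted blocks are replaced from an over-provisioned spare pool.
ERASE_LIMIT = 100_000  # illustrative SLC-era endurance figure

class WearLeveler:
    def __init__(self, visible_blocks: int, spare_blocks: int):
        self.erase_counts = {b: 0 for b in range(visible_blocks)}
        self.spares = list(range(visible_blocks, visible_blocks + spare_blocks))

    def write(self) -> int:
        """Pick the least-worn live block; swap in a spare if it's worn out."""
        block = min(self.erase_counts, key=self.erase_counts.get)
        if self.erase_counts[block] >= ERASE_LIMIT and self.spares:
            del self.erase_counts[block]   # retire the worn block
            block = self.spares.pop()      # promote a spare in its place
            self.erase_counts[block] = 0
        self.erase_counts[block] += 1
        return block

wl = WearLeveler(visible_blocks=4, spare_blocks=1)
for _ in range(10):
    wl.write()
# Writes spread evenly across the four blocks rather than hammering one:
print(sorted(wl.erase_counts.values()))  # → [2, 2, 3, 3]
```

Keeping writes evenly distributed, plus holding spare cells in reserve, is what lets the drive as a whole outlive any individual cell’s write-erase limit.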
According to Samsung, these developments mean that the lifespan of its SSDs will be up to five times longer than that of other SLC Flash devices. Unfortunately, Lovell was not able to provide details about what testing Samsung has done to substantiate that claim. He also says that Sun won’t “preannounce” the availability of the drives from Samsung. Samsung’s PR rep told me he’d have to forward those questions to Samsung headquarters in Korea . . . which means the drives might be available before I hear the answers.
There’s been a lot of talk about SSDs lately. It seems this past month has started an inevitable “trough of disillusionment” with the technology, as excitement over its advantages has been balanced by industry observers pointing out the disadvantages of first-generation products (such as their durability).
IDC’s Jeff Janukowicz doesn’t see a problem with that. “This is the kind of improvement and innovations IDC predicts you’ll be seeing as this technology comes of age,” he said. Those predictions were written up in a recent report based on lab testing of SSD performance in PCs.
Storage managers for the most part are holding back on deploying SSDs, which to me is understandable. If I’d rushed out and bought a drive from EMC only to hear that another vendor had a much-improved model, I might be kicking myself. And I’d certainly be wondering what was coming next.
Since he sold Softek to IBM and was appointed CEO of British-based optical storage vendor Plasmon last November, Steven Murphy has had a tough row to hoe. Plasmon is one of the few survivors–if not the only survivor–of the optical storage market, which historically has been stunted by usability and cost concerns.
Because of optical’s past, Murphy is pitching Plasmon as an “archiving solutions provider.” He says Plasmon led too much with its blue-laser optical media in the past. “It’s important, but what’s more important for IT managers is a conversation about managing their data requirements,” he said. “This is a change for Plasmon.”
Plasmon began its transition with software for its Archive Appliance that allows applications to access the optical drives through standard CIFS and NFS interfaces, rather than requiring applications to understand the optical media management going on under the covers. That extended to its Enterprise Active Archive software, which supports multiple media libraries as a grid and offers encryption-key-based data destruction for individual files.
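In practice that means an application archives data with ordinary file I/O against the mounted share, with no awareness of the optical media management underneath. A minimal sketch (the mount point and function are hypothetical examples, not Plasmon’s API):

```python
# With the archive exposed over CIFS/NFS, archiving a file is just a copy
# to the mounted share -- no optical-specific calls involved.
import shutil
from pathlib import Path

def archive_file(src: str, archive_root: str) -> Path:
    """Copy a file into the archive share using plain filesystem calls."""
    dest = Path(archive_root) / Path(src).name
    shutil.copy2(src, dest)  # preserves timestamps, useful for retention
    return dest

# e.g. archive_file("report.pdf", "/mnt/plasmon_archive")  # NFS/CIFS mount
```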
Last month Plasmon introduced RAID disk into its branded systems for the first time through a new partnership with NetApp. The main goal of the resulting integration, called the NetArchive Appliance, was to give users a nearline single-instance NAS-based archive for rapid recovery of archive data, since optical media typically has at least a 10 second response time, according to Plasmon chief strategy officer Mike Koclanes.
Plasmon has several irons in the fire when it comes to a turnaround strategy under Murphy, including licensing its media for resale by other channel partners, and riding the wave of interest in archiving brought on by amendments to the Federal Rules of Civil Procedure in December 2006. But it’s still going to take some doing to get tactically-oriented IT folk thinking on as long-term a strategic scale as Plasmon’s value proposition demands–users worried about putting out fires aren’t going to be moved by talk of reducing data migrations over decades.
This is where the NetApp partnership comes in. Plasmon has a big task in front of it, trying to bring a medium back to some of the same doorsteps where it has already been passed over before. Maybe if it’s disguised as NAS, goes the current strategy, it could help get a foot in the door.
Koclanes gave the example of one customer who said he was interested in Plasmon’s archiving, but wouldn’t get funding for a new capital equipment project. When the NetArchive product was discussed, it occurred to the customer that he did have built-in funding for more NAS space. “We’re trying to focus on the ways the archive can solve other problems, and ways for users not to have to try to get an entirely new platform put in,” he said.
Plasmon’s newly-minted direct sales force will also emphasize that single-instance archiving to an optical jukebox provides backup relief by removing stagnant files from primary storage systems, has inherent WORM capabilities, and can natively be used either as an on-site part of a library or an off-site data copy for DR, Koclanes said.
Still, anybody checking purchase orders carefully at such a company might notice some unusual costs–while Plasmon systems go for anywhere from $50,000 to $800,000 (“That’s not a price range,” one of my colleagues said when given these figures, “that’s the entire pricing spectrum”), the “sweet spot” according to Murphy is around $225,000. Each optical disk costs about $33–cheap in absolute terms, but not when compared with the cost-per-GB ratios available in today’s hard-drive-based systems.
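To put that media cost in perspective, here’s the cost-per-GB arithmetic. Note the 60 GB capacity is my own assumption, based on second-generation UDO media, not a figure from the company:

```python
# Rough media cost-per-GB. The 60 GB UDO capacity is an assumption for
# illustration; actual capacity depends on the media generation.
def cost_per_gb(price_usd: float, capacity_gb: float) -> float:
    return price_usd / capacity_gb

print(round(cost_per_gb(33, 60), 2))  # → 0.55  ($/GB for the optical media)
```

Commodity hard drives were already well below that per raw gigabyte at the time, which is exactly the comparison working against Plasmon.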