Here are some stories you may have missed this week:
As always, you can find the latest storage news, trends and analysis at http://searchstorage.com/news.
Two online data sharing services failed this week: one from a computing giant, the other from a small social bookmarking website.
That’s the trouble with this wild and woolly world of the cloud, especially in its early days. Not every service is going to make it, and if yours fails, you’re going to have to figure out what to do with your data.
Hewlett-Packard pulled the plug on HP Upline, and according to our Australian affiliate, ma.gnolia went under. SearchStorage ANZ reports that “in late January, ma.gnolia experienced a catastrophic data loss event and turned to backups to restore its database of users’ bookmarks. Both the primary and secondary backups failed irrevocably.”
Said a friend of mine who’s a Digg addict (I’m more a del.icio.us woman myself), “Losing my bookmarks would *hurt*.”
In the case of HP’s Upline online backup service, users will at least be able to get their data back. HP confirmed this afternoon that the service will be discontinued as of March 31. In a statement, an HP spokesperson said:
HP continually evaluates product lines and has decided to discontinue the HP Upline service on March 31, 2009.
HP will no longer be backing up customer files to the HP Upline servers as of Feb 26, 2009 at 8 am Pacific time. HP will keep the file restore feature of the Upline service operational through March 31, 2009 Pacific time in order for customers to download any files that have been backed up to Upline.
Blogger AppScout wrote disappointedly, “And so goes the story of one of the slickest online storage and backup services to launch in the past year.” Among Upline’s unique features were the ability to tag content for later search and sharing, and to publish files online through a feature called the Upline Library. However, Upline crashed right out of the gate, drawing opportunistic marketing from competitors.
There are lots of interesting donnybrooks going on in this industry at any one moment, but EMC-NetApp is like the Red Sox-Yankees rivalry: imbued with a sense of historical inevitability, and capable of reaching heretofore undiscovered levels of bickering.
The latest series of skirmishes takes place on one of the most hotly contested battlefields in storage today: VMware. Specifically, the integration with, support of, and general glomming on to the server virtualization giant’s software.
There was a time when I would’ve guessed NetApp was the most-installed storage system with VMware, especially as VMware over NFS took hold in at least some enterprise shops. Server administrators and application admins were already familiar with running databases over NFS, and NetApp’s NFS was generally considered the best. Plus, NetApp has the whole multiprotocol thing going on: iSCSI and NFS in the same system, built-in data protection tools, etc.
Not so, says EMC, triumphantly waving a newly released report from Forrester Research:
EMC…has been cited as “the most prevalent storage vendor in their overall environments” according to a survey of 124 global IT decision-makers currently using x86 server virtualization technology. The January 2009 report titled Storage Choices For Virtual Server Environments also revealed that 98 percent of the 124 survey respondents were using VMware ESX in their virtual server environments, and that 78 percent have virtual server technology in use for production application workloads.
According to the survey, 48 percent of respondents chose EMC as their brand of networked storage for virtual servers, nearly twice as many as the next closest vendor, IBM. Additionally, 63 percent of respondents said they prefer to buy from a single storage vendor.
Furthermore, the report states:
There is little correlation between vendor and protocol selection. Surprisingly, there is not a strong pattern linking the choice of vendor to the preferred protocol. Even NetApp, with its strong heritage in file storage and ability to offer in-depth best practices for NFS in virtual server environments, still shows a prevalence of FC — NFS is the least common option. This is due to the following: 1) VMware did not add support for iSCSI and NFS until ESX Server 3.0; 2) storage vendors are generally protocol-agnostic — they support and recommend all available protocols; and 3) customers are often unwilling to diverge from what they know and use already.
This is also the case when it comes to storage vendors in general: companies generally don’t buy new storage systems or try new technologies just to support VMware, the report finds. More than half (53%) of the users surveyed were using EMC. That’s nearly double the 28% networked storage market share EMC held in IDC’s most recent quarterly storage report.
Besides the disproportionately high number of EMC customers surveyed and the relatively small overall sample of 124 users, the Forrester report also discloses that “in terms of industry, financial services and insurance is the most prevalent, with 41% of respondents.” Both of those verticals tend to favor EMC.
According to Forrester’s survey, 25% of respondents were using IBM with virtual servers, followed by NetApp at 24% and Hewlett-Packard at 23%. I’m curious how many of the systems counted as IBM in the study are N-Series, which is NetApp under the covers.
Meanwhile, NetApp is not pulling any punches. It is continuing its VMware space-efficiency guarantee, this time extending it, as it did its primary storage data deduplication capabilities, to the V-Series. That means NetApp is essentially offering a guarantee on third-party storage from EMC, IBM, HP and Hitachi Data Systems fronted by a NetApp head.
Not everybody in the industry was impressed with the VMware guarantee, which is so hedged with conditions that it’s highly unlikely NetApp would ever have to pay out.
As for applying NetApp services to third-party storage, the Forrester report seems to suggest that most users tend not to have more than one vendor in their environment, let alone attaching one vendor’s system to another’s. Can you imagine the finger-pointing if there were an issue?
So. We’re left with a relatively-small-sample-size report skewed toward one vendor on one side, and hollow guarantees on the other.
Oh! And one spoof of the battle scenes in 8 Mile that has to be seen to be believed.
There’s still a lot more ground to be gained and lost this year in the VMware marketplace, and who knows what events might come along to change the industry completely. In the meantime, though, we know there’ll never be a dull moment between the notorious NTAP and E-squared.
While one vendor’s blogger came to bury SPEC SFS, another came to defend it. So far, the clash between the vendors seems unresolved.
The Standard Performance Evaluation Corporation (SPEC) SFS benchmark measures file server throughput and response time. The latest version, SPECsfs2008, was released last year.
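To make the throughput-vs.-response-time idea concrete, here is a minimal, purely illustrative sketch in Python of the kind of loop such a benchmark runs: drive a fixed operation mix against storage and report operations per second alongside average latency. This is emphatically not the SPEC harness, and the 70/30 read/write mix is an assumed placeholder, not SPEC’s published mix.

```python
import os
import random
import tempfile
import time

# Illustrative only -- not the SPEC SFS harness. SPECsfs2008 drives a fixed
# operation mix against an NFS or CIFS server and reports achieved
# throughput (ops/sec) alongside average response time. This toy version
# exercises a local directory with an assumed 70/30 read/write mix.
READ_FRACTION = 0.7   # assumed mix, not SPEC's published ratio
OP_COUNT = 10_000
IO_SIZE = 4096

def run_benchmark(path):
    data = os.urandom(IO_SIZE)
    target = os.path.join(path, "workfile")
    with open(target, "wb") as f:
        f.write(data)

    latencies = []
    start = time.perf_counter()
    for _ in range(OP_COUNT):
        op_start = time.perf_counter()
        if random.random() < READ_FRACTION:
            with open(target, "rb") as f:
                f.read(IO_SIZE)
        else:
            with open(target, "ab") as f:
                f.write(data)
        latencies.append(time.perf_counter() - op_start)
    elapsed = time.perf_counter() - start

    print(f"{OP_COUNT / elapsed:,.0f} ops/sec, "
          f"{1000 * sum(latencies) / len(latencies):.3f} ms avg response")

with tempfile.TemporaryDirectory() as d:
    run_benchmark(d)
```

The debate below is precisely about what that operation mix should be: pick a mix that no longer reflects real workloads, and the headline numbers stop meaning much.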
But Sun FISHWorks blogger Bryan Cantrill wrote in a post called “Eulogy for a Benchmark” that the workload mix even in the most recent version remains outdated:
The 2008 reaffirmation of the decades-old workload is, according to SPEC, “based on recent data collected by SFS committee members from thousands of real NFS servers operating at customer sites.” SPEC leaves unspoken the uncanny coincidence that the “recent data” pointed to an identical read/write mix as that survey of…now-extinct Auspex dinosaurs a decade ago — plus ça change, apparently!
Moreover, Cantrill argued, the testing parameters lead vendors to design NAS heads to perform well on the SFS test, which he said is at best irrelevant and at worst detrimental in a real-world environment. He also insisted that SPEC benchmark results should come with system pricing disclosures.
Enter NetApp blogger and senior technical director Michael Eisler, who called his response to Cantrill’s post “Chuckle for Today.”
the philosophy of SPEC SFS has always been to model reality as opposed to the idealist…dream where a storage device never has to process a request. P.S., in an earlier blog post, I made the argument that SPEC SFS 2008’s differences from SPEC SFS 3.0 show the caching on NFS clients has improved.
On the pricing disclosure issue:
Like many industries, few storage companies have fixed pricing. As much as heads of sales departments would prefer to charge the same highest price to every customer, it isn’t going to happen. Storage is a buyers’ market. And for storage devices that serve NFS and now CIFS, the easily accessible numbers on spec.org are yet another tool for buyers. I just don’t understand why a storage vendor would advocate removing that tool.
In storage, the cost of the components to build the device falls continuously. Just as our customers have a buyers’ market, we storage vendors are buyers of components from our suppliers and also enjoy a buyers’ market. Re-submitting numbers after a hunk of sheet metal declines in price is silly.
This is where Cantrill appears to take exception to Eisler’s taking exception, responding in a follow-up post that Eisler’s defense of the pricing non-disclosure is an “Alice-in-Wonderland defense.”
Mike’s argument — and I’m still not sure that I’m parsing it correctly — appears to be that the infamously opaque pricing in the storage business somehow helps customers because they don’t have to pay a single “highest price”! That is, that the lack of transparent pricing somehow reflects the “buyers’ market” in storage. If that is indeed Mike’s argument, someone should let the buyers know how great they have it — those silly buyers don’t seem to realize that the endless haggling over software licensing and support contracts is for them!
It’s not just this benchmark being debated in the storage industry; SPC benchmarks have also been a bone of contention between EMC and NetApp, and between HP and EMC. Even in the comments on this blog I’ve heard everything from “Take the time to read the full disclosures, read the specifications…You might learn something” from a defender of SPC to an unimpressed “I really hope nobody uses SPC-1 results as any criteria for buying storage.”
‘zilla emphasizes that the NetWorker VSA (virtual storage appliance) is for demo purposes only, going so far as to say “Thou shalt not use this for production backups.”
But the curious part is that EMC also offers a VSA for its Avamar ROBO/dedupe software that is meant for use in production.
I know there are big differences between Avamar and NetWorker, especially in scale, and performance can also limit scalability in virtual appliances. But other companies have offered VSAs that use scaled-back versions of their software for smaller environments, similar to NetWorker Fast Start (at least according to how it’s described on the product page).
Update: ‘zilla let me know that you *can* run NetWorker in production on a VM, just not this particular time-limited VM.
EMC also has a not-for-production Celerra VSA, and ‘zilla encourages a combo of the NetWorker and Celerra VSAs for a “NetWorker Advanced File Type Device.” But that device would still be not-for-production.
FalconStor offers essentially this kind of configuration as its Network Storage Server (NSS), which is available as a virtual appliance and very much for production use. EMC could have a competitor here with the NetWorker and Celerra VSAs, but it discourages their use in production. I’m not sure what to make of that.
In the meantime, there are more VSAs on the market now, for production use or otherwise, than you can shake a stick at. User/blogger Martin Glassborow (StorageBod) is putting several through their testing paces over at his place.
Scuffling drive vendors Seagate and STEC have called a truce in their solid-state drive (SSD) patent infringement battle, with both vendors saying today that they have dropped their lawsuits against each other.
Seagate filed the first lawsuit last April claiming STEC violated four patents Seagate registered between 2002 and 2006, and STEC countersued. Under terms of today’s settlement, no money was paid and neither company licensed technology from the other. The settlement does not preclude future patent infringement suits between the two.
“Since STEC plays a major role in the proliferation of SSD technology, we view the dismissal as a vindication of our technology,” STEC CEO Manouch Moshayedi said in a statement. “With this case behind us, we can now optimize our resources to take full advantage of the market opportunities at hand.”
Seagate’s position is that the SSD market isn’t so great for STEC. According to the statement Seagate released about the settlement, “The economic conditions today are drastically altered from those that existed when we filed the litigation, and the impact of STEC’s sales of SSD’s has turned out to be so small that the expenditures necessary to vindicate the patents could be better spent elsewhere.”
Emulex today disclosed its next round of convergence products, universal converged network adapters (UCNAs) and a management framework, as well as encryption for its Fibre Channel HBAs.
If you were just getting familiar with converged network adapters (CNAs), you might be thrown by the concept of a universal CNA. The difference, according to Emulex VP of corporate marketing Shaun Walsh, is that a CNA supports only Fibre Channel over Ethernet (FCoE), while a UCNA combines FCoE, 10 Gigabit Ethernet, iSCSI offload and RDMA on one chip. Walsh says he expects Emulex OEM partners to begin selling its OneConnect adapters in the second half of the year.
To manage devices on a converged network, Emulex will replace its HBAnyware HBA management software with a new platform called OneCommand, which will manage UCNAs as well as Emulex LightPulse HBAs.
Storage networking vendors are rushing to position their convergence devices in anticipation of FCoE and other methods of getting Ethernet, Fibre Channel and even InfiniBand to work together on one network. Mellanox launched its BridgeX gateway this week to let customers use any or all of those fabrics together.
It’s still up for debate how soon enterprises will begin consolidating, but Walsh says “convergenomics” will push them there faster than they would normally adopt new technology. Emulex is counting on customers implementing converged networks because doing so will save them money on cabling, power and cooling, and rack space.
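For a sense of where those savings come from, here is a back-of-the-envelope sketch. Every figure in it is a hypothetical placeholder of my own, not an Emulex number; the point is simply that collapsing separate Fibre Channel and Ethernet ports onto a couple of converged ports per server reduces ports, cables and watts roughly in proportion.

```python
# Back-of-the-envelope "convergenomics" arithmetic. All figures are
# hypothetical placeholders, not vendor numbers -- adjust for a real shop.
SERVERS = 100
FC_PORTS_PER_SERVER = 2    # dedicated Fibre Channel HBA ports today
ETH_PORTS_PER_SERVER = 4   # dedicated Ethernet NIC ports today
CNA_PORTS_PER_SERVER = 2   # converged ports after consolidation
WATTS_PER_PORT = 10        # assumed per-port power draw
CABLE_COST = 50            # assumed cost per cable, in dollars

before = SERVERS * (FC_PORTS_PER_SERVER + ETH_PORTS_PER_SERVER)
after = SERVERS * CNA_PORTS_PER_SERVER

print(f"Ports (and cables): {before} -> {after}")
print(f"Cabling savings:    ${(before - after) * CABLE_COST:,}")
print(f"Power savings:      {(before - after) * WATTS_PER_PORT:,} W")
```

Under those assumptions, a 100-server shop drops from 600 ports and cables to 200, which is the kind of math Emulex is betting will overcome buyers’ usual caution about new technology.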
“It will be a phase thing,” Walsh says. “We’ll see first-generation products implemented this year, then the second generation will be when OEMs start to endorse those and put them in their product lines. Normally, I would say three to five years before it becomes mainstream, but given the economic situation, I’d say two to three years in this case.”
In the next few months, we’ll get an idea of whether convergenomics will stimulate storage spending or become storage voodonomics, but you can bet there will be no shortage of convergence product rollouts.
Encryption is another hot topic these days, with two standards proposals for key management being set forth this week. Emulex wasn’t involved with either of those, but its new Secure HBA uses encryption and key management from EMC’s RSA Security.
Last week, Hewlett-Packard launched its first iSCSI SAN product based on its acquisition last year of LeftHand Networks. As part of that announcement, HP made it official that LeftHand’s days as a software-based iSCSI SAN vendor are over. The company’s SAN/iQ software will continue to use commodity servers for hardware, but those servers from now on will only be manufactured by HP and will be pre-packaged into appliances with LeftHand’s software.
Most of LeftHand’s customers bought the products this way anyway, as LeftHand’s software typically met servers in the channel and was integrated by VARs. Still, some customers said they were disappointed that they could no longer use LeftHand’s software to repurpose existing hardware.
But this is not the first time a storage vendor has begun as a software-only play and moved into the appliance world once it was acquired by a larger vendor. That was the case with Avamar, which actually started out delivering its host-based data deduplication software as an appliance. But large organizations could get better economies of scale by purchasing their own hardware from their usual supplier, or had standardized on a particular server build and didn’t want a noncompliant appliance sticking out like a sore thumb.
Thus Avamar went software-only, until it was acquired by EMC Corp. Soon after that acquisition EMC rolled out the Avamar Data Store, saying many of its customers didn’t want to have to assemble their own hardware/software clusters, especially in large environments (the company will still sell the product in a software-only version to users who want it, however). It didn’t hurt that EMC’s relationship with Dell flipped that economies-of-scale equation between Avamar and enterprise customers on its head, just like it doesn’t hurt for LeftHand that somewhere in the world, HP produces a ProLiant server every few seconds.
So I couldn’t help but think about all this when I met a couple of weeks ago with a company that has designs on being the heir apparent to LeftHand. StarWind Software and its eponymous iSCSI target software product are not exactly new. The product has been marketed for years by RocketDivision, a company headquartered in Ukraine, which spun off StarWind Dec. 1.
CEO Zorian Rotenberg said StarWind will charge $2,995 for its most deluxe package, which includes CDP, snapshots, replication, mirroring, thin provisioning and an optional virtual tape library interface. That license fee also covers unlimited capacity in perpetuity.
StarWind is far from alone in making this kind of value proposition; a whole new generation of companies is stepping up to pick up where LeftHand left off. StorMagic, Seanodes, Open-E, DataCore and Double-Take’s emBoot are all on the market touting the benefits of commodity hardware and affordable, flexible software.
DataCore is a good example of a company that started off and remains software-only. And Enterprise Strategy Group founder Steve Duplessie suggested to me last year that server virtualization may change IT pros’ mentality around software-only storage. Meanwhile, the cloud data center has got people thinking about commodity hardware and horizontally scalable architectures. So it’s possible that the current “class” of iSCSI SAN software vendors will blaze a new trail.
But having watched the lifecycle of Avamar and LeftHand, I’m also wondering if it’s all just a little bit of history repeating.
According to a note posted on VMware’s KnowledgeBase website, the server virtualization software maker is recommending that users of NetApp’s FAS arrays in a High Availability System Configuration not upgrade to VMware Site Recovery Manager (SRM) 1.0 Update 1.
The note says these users have a 50% chance of encountering a bug that means “replicated datastores are not detected correctly” within the application. It’s unclear whether the bug is on the NetApp or VMware side of the equation, but the companies are investigating and can assist customers with downgrading to previous versions if they experience problems.
This is not the first time integration between SRM, VMware’s disaster recovery/failover application for virtual servers, and array-based replication from storage vendors has proven tricky. An HP user also told SearchDisasterRecovery.com last week that he experienced some pain while trying to bring his EVA environment up to speed with SRM.