Storage Soup


May 6, 2008  10:54 AM

Mosso in the cloud, Monosphere in the dark

Beth Pariseau

Two small vendors trying to make their way in markets dominated by storage giants made incremental yet interesting offerings this week.  

Mosso, a division of Rackspace, rolled out a cloud computing platform called the Hosting Cloud in February and followed with the release of MailTrust email hosting. Those first two services are intended for users who run websites: the Hosting Cloud bundles the storage space, backup, patching and security that developers can run Web code on top of, while MailTrust provides messaging in that website context.

This week, Mosso disclosed that it will branch out a bit later this year with CloudFS, a cloud-based storage-only service more like Amazon’s S3 than not. As with Amazon’s service, CloudFS will give users a place to put files and objects on the Web and will require developers to come up with their own interfaces. According to Mosso co-founder Jonathan Bryce, one distinction is that CloudFS will come with packages of supported coding libraries for each major language, including .NET, Java, PHP, Ruby and Python.
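Mosso hasn’t published the CloudFS API yet, but if the service really is S3-like, storing an object should boil down to a single authenticated HTTP PUT. Here’s a minimal Python sketch of that pattern; the endpoint, URL layout and auth header are my assumptions, not anything Mosso has announced:

```python
# Sketch of an S3-style object PUT, assuming CloudFS follows the same REST
# pattern. The hostname, path layout and X-Auth-Token header are hypothetical;
# Mosso had not published the CloudFS API at press time.
import http.client

def put_object(token: str, container: str, name: str, data: bytes) -> int:
    conn = http.client.HTTPSConnection("cloudfs.mosso.example")  # hypothetical endpoint
    conn.request(
        "PUT",
        f"/{container}/{name}",
        body=data,
        headers={
            "X-Auth-Token": token,            # assumed auth scheme
            "Content-Length": str(len(data)),
        },
    )
    status = conn.getresponse().status
    conn.close()
    return status  # S3-style services typically return 200 or 201 on success

# put_object("token123", "backups", "notes.txt", b"hello, cloud")
```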

The company is “committed to fanatical support” and consistency for developers, according to Bryce, and is hoping that some ISVs will write a hosted online backup interface for it the way they have with S3. Target pricing for the service will be about 15 cents per GB per month, plus bandwidth costs for non-Rackspace customers (existing Rackspace hosting customers pay no bandwidth fees for CloudFS). The service is in private beta now.

Meanwhile, Monosphere launched version 3.7 of its StorageHorizon SRM software. This version lets customers make fine-grained maps of their storage capacity against VMware deployments, i.e. “the storage relationships between array LUNs, the ESX server, VMware file systems (VMFS), VMware virtual disks (VMDK), guest OSs, and guest OS file systems/raw devices,” according to Monosphere’s press materials.
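Monosphere hasn’t published its data model, but the chain it describes is easy to picture. Here’s a toy Python sketch of the relationships such a tool would have to track; all class and field names are mine, for illustration only:

```python
# Illustrative model of the mapping chain Monosphere describes, from array LUN
# down to guest file system. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class GuestDisk:
    vmdk_path: str                  # VMware virtual disk (VMDK) backing the guest
    guest_fs: str                   # file system or raw device inside the guest OS

@dataclass
class Datastore:
    vmfs_name: str                  # VMware file system (VMFS) volume
    disks: list[GuestDisk] = field(default_factory=list)

@dataclass
class EsxHost:
    name: str                       # the ESX server
    datastores: list[Datastore] = field(default_factory=list)

@dataclass
class ArrayLun:
    lun_id: str                     # the array LUN presented to one or more hosts
    hosts: list[EsxHost] = field(default_factory=list)
```

Being able to walk that graph in both directions is what lets an SRM tool answer questions like “which guest files land on this LUN?” and the reverse.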

But what’s really getting some play in the market lately is Monosphere’s claim that it can identify not just resource allocations but actual resource utilization, to a fine degree, by identifying “dark” storage: capacity that is free for use but unmapped. Monosphere has been making this claim since at least last year (I remember the company talking about it with me in briefings long before this week), but it seems to be getting more attention for it now. Among the blogs commenting on this “dark” approach to SRM is Jon Toigo’s DrunkenData:

I am not sure whether Monosphere came up with this term, but I like it.  Dark Storage refers to storage that is unmapped, unclaimed or unassigned…According to [Monosphere], between 15 and 40 percent of the capacity in the corporate storage infrastructures that they have inspected with their software can be characterized as dark storage.

Could you be sitting on capacity that you didn’t know you had?
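The arithmetic behind the claim is simple enough to sketch. Given what each array has provisioned and what’s actually mapped to hosts, dark storage is the remainder (all figures below are invented):

```python
# Back-of-the-envelope "dark storage" math: capacity that exists on the arrays
# but is unmapped, unclaimed or unassigned. The figures are invented.
arrays = {
    "array-a": {"total_tb": 100.0, "mapped_tb": 72.0},
    "array-b": {"total_tb": 60.0,  "mapped_tb": 31.0},
}

total = sum(a["total_tb"] for a in arrays.values())
mapped = sum(a["mapped_tb"] for a in arrays.values())
dark = total - mapped

print(f"dark storage: {dark:.1f} TB ({dark / total:.0%} of total capacity)")
# dark storage: 57.0 TB (36% of total capacity), inside Monosphere's 15-40% range
```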

Monosphere reports that it’s doing one new installation per week and is looking to make that two in the next few months. Among its claimed customer wins are large companies in networking, insurance, automobiles and business outsourcing, though none of them can be publicly named or interviewed at this point. While some SRM products have caught on – Novus, for example, which was bought by IBM earlier this year – it’s been a tough market for startups. “Nobody’s making any money on SRM right now,” Forrester analyst Andrew Reichman tells me, even though SRM is his area of expertise. When people do buy SRM, it often comes from their storage hardware vendor. It’s still not clear that even the best independent SRM tools will garner much attention from users – we’ll have to watch Monosphere and see.

May 5, 2008  10:23 PM

Even more Riverbed news

Beth Pariseau

While it might seem like we’re about to change our site name to SearchWANOptimizationVendorsFighting.com, I assure you it’s just coincidence. Riverbed and its rivals have had a lot to say lately, and that’s at least in part because of greater competition in their market space, though it’s not getting as much play as other hot markets like SaaS and clustered NAS.

According to Forrester analyst Rob Whiteley, with whom I’ve spoken during the whole Riverbed/AutoCAD debacle, the one incontrovertible point that can be taken out of all the back and forth is that users can no longer just evaluate this gear on price. WAN optimization and wide-area data services have become more strategic markets than when they started, and there’s going to be more fine-grained differentiation between products in this space going forward.

Another trend in this market was identified by Blue Coat’s CEO and by Brocade officials, speaking about the acquisition of Packeteer and Brocade’s discontinuation of WAFS products, respectively: WAN optimization is growing to encompass a number of fields originally thought of as separate disciplines, whether it’s WAFS being combined with network security or TCP/IP acceleration meeting quality of service. As Brocade’s spokesperson put it, “the WAFS and WAN optimization markets are converging and our customers are looking for a much broader set of functionality beyond just WAFS for remote site IT management.”

Sensing this, it would seem, Riverbed has chosen to partner with other companies to expand its capabilities. It disclosed in February that it would be adding the Riverbed Services Platform (RSP), which runs services for remote offices on its Steelhead appliances, and this week it added network optimization services to RSP. Riverbed’s latest additions are what it calls new “visibility partners” to broaden Steelhead’s WAN optimization features. Partners supplying network visibility features such as traffic monitoring, application performance monitoring and policy enforcement include OPNET, Compuware, NetScout, SolarWinds and Opsware.

The Steelhead central management console (CMC) was also brushed up with the release of version 5.0 this week. Despite the dot-oh, it’s an incremental upgrade, adding the ability to create groups of appliances for policy enforcement and more granular access control roles.

Riverbed’s Alan Saldich pointed out that Riverbed’s going the partnering route because customers might already have another product they want to use with Riverbed. This could be seen as a subtle comment on Blue Coat’s plans for Packeteer, which consist of folding Packeteer’s IP into a platform existing Packeteer customers may not be familiar with. Of course it will remain to be seen which approach will win, but this new, wider context for WAN optimization is something users should consider. If they can hear themselves think over all the bickering, that is.


May 5, 2008  11:56 AM

Silver Peak objects to tests by Riverbed

Beth Pariseau

I have yet another story up today about the AutoCAD issue with WAN optimization products. This time, Riverbed did some testing and had Taneja Group validate it. That story is here.

In the meantime, space limitations made it impossible to include the entire response we got from Silver Peak’s director of marketing Jeff Aaron in the story, but here is more of what he had to say:

From: Jeff Aaron
Sent: Thu 5/1/2008 8:38 PM
To: Pariseau, Beth
Subject: RE: Beth Pariseau’s latest article on AutoCAD issue

Hey, Beth.  The numbers that Riverbed quotes in their report for Silver Peak don’t jive [sic] with numbers we’ve seen in-house, in the field, and in tests done with AutoDesk.  I am not sure if they configured our box incorrectly or if there are some other factors at play.  The fact that we show negative data reduction in some examples and that Riverbed comes in first in EVERY single test should be the first indicator that these results are biased and incorrect…

I don’t see much value in going through the results line by line to point out inaccuracies as that will just continue to propagate the “he said” “she said” scenario.  Furthermore, lab results are meaningless as there are dozens of variables that affect performance in live networks – including bandwidth, latency, loss, and whatever other applications are sharing the link with AutoCAD.  That having been said, we only care about how we perform in real customer networks, and are comfortable that our claims will stand up if any end user decides to put us to the test (just like AutoDesk did).  To that end, we encourage anyone concerned about AutoCAD performance to give us a try and see for themselves.

Just wondering – what exactly was Taneja’s role in this?  They have never seen our boxes and have no hands-on experience with WAN optimization, so there is no way that they are capable of “verifying” anything about our product.  Who confirmed that our boxes were correctly deployed (we certainly didn’t)?  Who verified that the same tests were run on each vendors appliances in the exact same environments?  …   Make no mistake – this is a Riverbed report with jacked-up numbers – this is by no means a valid 3rd party verification.

A little context for the section below: Aaron had also pointed out to me that Riverbed will struggle with more recent versions of Microsoft Excel files, which he says also do some bit-scrambling. Riverbed responded that he was referring to an issue with overlapping opens of Excel files which was fixed a long time ago.

Re. Excel – we provided a “diff file” that illustrates how the bytes are being scrambled from one save to the next (without any modifications).  It is clear from that that there is a scrambling issue that is similar to what is happening with AutoCAD.  My point is not to dispute what Riverbed can or cannot do wrt to Excel (even though the problem they said they fixed is a completely different issue).  My only goal was to point out that it demonstrates a data scrambling problem that is similar to AutoCAD, and that we don’t seem to be affected by it.

That is also why I keep referring to other applications, like Citrix and video streaming.  My point is to show that we handle these applications very differently than other dedupe vendors. What are Riverbed’s thoughts on that?  That is the bigger story, in my opinion.  AutoCad is just the latest symptom of a bigger problem – that there are different application types that fundamentally behave differently across the WAN, and you need the right architecture to address ALL of them…

Thanks for giving me a chance to comment.

Jeff
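A little more context on why “scrambling” matters here: dedupe-based WAN optimizers fingerprint chunks of data and only send chunks they haven’t seen before, so an application that rewrites the same bytes in a different order on every save gives them nothing to match. Here’s a toy Python illustration of the principle, using naive fixed-size chunking; real appliances use far more sophisticated, variable-size chunking, so treat this only as a sketch:

```python
# Toy illustration of why byte scrambling hurts deduplication: fingerprint
# fixed-size chunks of a file, then see how many fingerprints survive a save
# that reorders the same bytes. Simplified; not how any shipping product works.
import hashlib
import os
import random

CHUNK = 64

def fingerprints(data: bytes) -> set[str]:
    return {hashlib.sha1(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)}

original = os.urandom(64 * CHUNK)          # a 4 KB "file"
unchanged_save = original                  # same bytes, same order
scrambled = bytearray(original)
random.shuffle(scrambled)                  # same bytes, new order, as in a "scrambling" save
scrambled_save = bytes(scrambled)

base = fingerprints(original)
print("chunks matched, unchanged save:", len(base & fingerprints(unchanged_save)))  # 64
print("chunks matched, scrambled save:", len(base & fingerprints(scrambled_save)))  # ~0
```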


May 5, 2008  10:51 AM

Vendors are buddying up in the cloud

Beth Pariseau

So I come back from a day off and what do I find? IBM and Google, Sun and Amazon reportedly pairing up in the cloud.

I have to say that cloud computing has made the growing IBM/EMC rivalry that much more interesting. EMC threw one of the first punches with the rollout of Fortress and acquisition of Pi–it seems EMC will probably stick to building its own infrastructure rather than partnering. But then IBM went for a partnership with one of the other most recognizable brand names in the world (aside from its own) in Google, which consumers are already comfortable using in the real world. Meanwhile, PiWorx remains in stealth. It will be interesting to see where the next leapfrog move comes from.

Incidentally, how long before Sun acquires Zmanda? Sun has already acquired MySQL, for which Zmanda offers open-source backup, and now it’s buddying up with Amazon, for which Zmanda offers an interface (Amazon still requires you to roll your own GUI or get one from a partner). Such a deal could link Sun’s open storage products–via open source software!–to Amazon’s cloud. It would be just one big happy open-source conflagration…I’m still watching for it.

Meanwhile, the other tizzy lighting up my Google Reader is over the lack of a deal between Microsoft and Yahoo. Rob Enderle has an interesting post up on Google’s role in that situation. I’m wondering, as IT and cloud vendors keep pairing up, if we shouldn’t be looking for familiar faces among those next in line to be Yahoo’s dance partner.


May 5, 2008  9:06 AM

QLogic switching direction on directors?

Dave Raffo

Comments by CEO H.K. Desai on QLogic’s earnings conference call last week raised questions about the future of QLogic’s SANbox 9000 Fibre Channel director switch.

Several times on the company’s earnings conference call, Desai said QLogic was shifting the focus of its switching business to edge and blade switches. That would seem to leave out the director switch that QLogic launched in 2006 as a low-cost alternative to directors from Brocade and Cisco.

“We continue to gain traction with our Fibre Channel edge and blade switches, which is our primary area of focus,” Desai said. Later, he said QLogic is refocusing its switch investments on InfiniBand and blade switches.

To financial analysts on the call, that meant QLogic would leave the SANbox 9000 behind as it begins rolling out 8-Gbit/s HBAs and switches and starts development on 16-gig technology. SANbox 9000 sales have been hurt by lack of any OEM deals with major storage system vendors such as EMC, IBM, and Hewlett-Packard that sell Brocade and Cisco switches under their brand. QLogic’s SANbox 5000 edge switches already support 8-gig connectivity.

“You indicated that edge and blade switches are your primary focus now in Fibre Channel,” analyst Clay Sumner of Friedman, Billings, Ramsey & Co. (FBR) said to Desai on the call. “Just curious, does that mean you no longer expect the tier one [OEM] win for your 8-gig Fibre Channel director?”

“We never give up on anything,” said Desai, refusing to clarify his position on directors for the curious analyst.

Several analysts expect QLogic to dump the directors. “Our checks indicate that going forward QLGC may not invest further in the FC high end Director-switch space but could continue to develop mid-low end FC switches and blade switches,” Pacific Growth analyst Kaushik Roy wrote in a note to clients. Roy told me he doesn’t expect QLogic to do any more development on the SANbox 9000 or build any other directors. In other words, it is getting out of the director switch business.

Not so, says QLogic marketing VP Frank Berry. “The SANbox 9000 lives on!” Berry wrote to me in an e-mail. “We continue to sell it.”

Berry also said the SANbox 9000 will be upgraded with 8-gig blades that can replace the 4-gig in there now. What’s changing, he said, is its go-to market strategy. Instead of its original target of Global 2000 firms, QLogic now sees the director as a small enterprise product.

“We’ve been successful for several years in the SME market with our SANbox 500 line of stackable switches,” Berry wrote. “And we have learned the SME space is where we have been successful selling the SANbox 9000.”

QLogic will make more noise about SME products this summer. Then we’ll see which SANbox it expects SMEs to play in.


May 2, 2008  10:39 AM

Sun still going down in storage

Dave Raffo

When Sun revealed its open source storage push this week, some industry observers wondered about its business model. In other words, how can Sun make money on open source storage products?

Then Sun reported its earnings Thursday night, and it became clear that its storage business isn’t exactly rolling in dough these days anyway.

Sun’s storage products generated $530 million in revenue last quarter, down 5.4 percent from a year ago and $100 million short of its target. Big-ticket items such as tape libraries and high-end disk systems were down in a quarter in which EMC and IBM reported increases. Server revenue also fell short by $100 million, making it a disastrous period for the new combined servers and storage unit.

Overall, Sun lost $34 million in the quarter compared to a profit of $67 million the year before. On the earnings call, Sun execs said they would be restructuring to the tune of 1,500 to 2,500 layoffs.

Can open source save this sinking ship? Sun CEO Jonathan Schwartz seems to think so. Open source was a common theme of his earnings call, with open storage getting its share of attention with statements such as: “We have a great variety of new Open Storage innovations [entering] the market within the next few quarters.”

Schwartz didn’t talk much about how Sun will make money on open storage, except to emphasize that Sun would save money on R&D by having a common open platform for all of its servers and storage systems. Layoffs are expected to save Sun $100 million to $150 million a year, although it’s not clear how much of the reduction will be in storage.

It remains to be seen what the quality of open storage products will be, but Sun has little to lose. It’s tried a lot of things over the years to jumpstart storage sales, including paying $4.1 billion for tape library market leader StorageTek, and nothing has worked. Sun OEMs systems from Hitachi Data Systems, LSI and Dot Hill, and usually has less success than other vendors who sell the same systems. For a while Sun planned its storage future around the 6920 midrange system, which it billed as a virtualization product and an EMC Clariion killer. Customers yawned, and Sun sold the technology to HDS last year.

Now its storage plans revolve around a large DAS system called Thumper and open source software. Considering its track record, things can’t get much worse, can they?


May 2, 2008  8:12 AM

The Storage Admin, DR, and the Down Market

Tory Skyers

The economy has been on the mind of just about everybody recently, and with good reason. With gas at near-record highs, unemployment rising, housing values reportedly dropping, a credit crunch and foreclosures numbering in the bazillions, it is easy to see why people are not exactly upbeat about the state of our economy.

In the storage market, however, it’s looking like a blockbuster year. EMC and others are reportedly on track to meet or beat financial analysts’ estimates, and that leads me to today’s blog.

As it turns out, the impetus for this blog post was my recent attendance at a DR seminar put on by Storage Decisions featuring Jon Toigo. Looking around the room, I couldn’t help but think of what it looked like in the early days of “network administrators,” when people didn’t think of network pros as any different from the server guys. Today, the storage admin is being called on to be part lawyer, part business analyst, part networking guru and all-knowing about all things storage, but there are very few companies with a dedicated storage team (outside the Fortune 500s that have exabytes of storage to manage).

For the most part (and please chime in with your experience) storage folks are still viewed as “server guys.” This is, of course, changing, and I wouldn’t bring it up if there weren’t a bigger point to be made: if you do a quick scan of Monster, Dice or Jobcircle, there are more and more listings specifically calling for a “Storage Administrator.” Storage is fast becoming the segment to be in: the information infrastructure could not function without it, and it is increasingly the focus of planning and resource allocation, in terms of both time and money. Talk to most companies, and they have storage budgets that are going up even in a down market, and they are hiring people dedicated to the task of storage. Storage pros are more highly valued, and their pay is going up.

So what does this have to do with DR? DR is, at its most basic level, moving data from one place to another, on a regular basis, far enough away that you could recover it and continue operations in the face of a disaster. In almost every case I can think of, that requires storage, storage networking technologies, and someone who knows enough about them to set it all up and keep it working in a changing environment. Hence all the storage pros in the room versus the business types who normally involve themselves in DR.
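A quick back-of-the-envelope example of why that’s a storage networking problem, with entirely invented numbers: can a night’s worth of changed data actually cross the WAN before morning?

```python
# Rough feasibility check for periodic off-site replication: how long does a
# day's worth of changed data take to cross the WAN? All figures are invented.
changed_gb_per_day = 500       # daily change rate that must be replicated
link_mbps = 155                # OC-3-class WAN link
dr_share = 0.6                 # fraction of the link DR traffic is allowed

hours_needed = (changed_gb_per_day * 8 * 1024) / (link_mbps * dr_share) / 3600
print(f"replication window needed: {hours_needed:.1f} hours")  # about 12.2 hours
# If the window doesn't fit, change rates, link sizing, compression and dedupe
# all land on the storage admin's desk, which is rather the point.
```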

Toigo put on a great presentation. It was filled with a ton of valuable information and even if you have nothing to do with the DR planning and implementation at your company, I would enthusiastically recommend attending one in your area. I walked in thinking I had a passable grasp of DR best practices and walked out realizing I had barely scratched the surface, and that as a storage professional I needed to understand more about business practices as they relate to DR.

For example, Toigo discussed what a data model is and how to build one, along with suggestions for explaining it to non-technical analysts. The goal is for everyone to use that model together to build a workable DR plan around valuable data, instead of assembling a set of technologies that make our systems highly available but unable to really recover from a disaster. And it’s the storage guy who should be taking the lead on that.

Think of the value you bring to the table when you can not only provide the information infrastructure, but also help develop a DR plan that will keep the business functioning and generating revenue during a disaster. In the process, you can also create things that have intrinsic value to multiple business units; think of what information security can do if it knows what a document or document type is worth compared to other documents. My fellow storage pros, I’m seeing a bright future for us.


May 1, 2008  9:26 AM

CEO says Compellent benefits from Dell / EqualLogic fallout

Beth Pariseau

Compellent joined other storage companies including EMC and Data Domain in reporting a strong first quarter despite a down economy. The company’s revenues more than doubled year over year to $18.3 million, growth of 107% over the first quarter of 2007 and 9% over the previous quarter.

The company also is still a ways from profitability, and lost $1.2 million last quarter despite the increased revenue. CEO Phil Soran said on the company’s earnings call that this is because Compellent is growing and is adding operating expenditures such as salaries for new employees. Soran said he expects Compellent to be profitable by the second half of this year.

With the rest of the country in financial turmoil, how are storage companies staying strong? “Storage is the last thing that gets cut from the IT budget,” was Soran’s answer. I would also imagine it’s because storage has always been a conservative market–it doesn’t have as far to fall as some other markets.

Another thing benefiting Compellent, according to Soran, is Dell’s acquisition of midrange disk array competitor EqualLogic. It’s been well-publicized that EqualLogic channel partners have been wary of the deal, if not downright alienated by it, because of Dell’s poor reputation in the channel. Soran declined to give any specific numbers on how many channel partners have defected or how much new business they account for, but volunteered anecdotally that Compellent is seeing more large EqualLogic channel partners looking its way as a result of the Dell deal.

Still, Soran says the company has a ways to go when it comes to gaining that mind share. Echoing some of NetApp’s statements when it rebranded itself earlier this year, Soran said Compellent does well when companies look at its products but often doesn’t get brought to the table.

I also asked him whether or not Compellent is seeing significant business as a tier-2 disk array in large shops. He said yes, but also declined to break out any numbers.

Soran attributed Compellent’s growth to the attractiveness of its consolidation and thin provisioning features in a down economy, similar to the power and capacity savings that have reportedly kept money flowing in to Data Domain’s coffers. But Soran said Compellent’s chief competitor remains EMC, which doesn’t yet offer many of the features he was referring to–and EMC also reported a stronger-than-expected first quarter.

“They have a good brand,” Soran said.


May 1, 2008  8:17 AM

Symantec CEO: Nothing’s for sale

Dave Raffo

John Thompson must have known the question was coming. The Symantec CEO certainly heard the rumors. So when he was asked Wednesday night during his company’s earnings conference call about selling off parts of his company, Thompson couldn’t have been clearer.

“Contrary to popular rumor, we have no plans to divest of anything,” he said. “None.”

The rumors mainly involved the storage products that Symantec acquired from Veritas three years ago. And they were widely circulated. According to an Associated Press earnings preview story that ran this week:

Analysts are particularly interested in the possible sales of backup and recovery software product NetBackup and the company’s non-Windows Data Center Foundation, which comprises of storage and server management products.

Several technology bellwethers, including IBM, Hewlett-Packard and EMC have been named as potential buyers for Symantec’s storage products, including NetBackup.

AP could have added two other bellwethers who have been mentioned as suitors of all or some of the Symantec storage products – Oracle and Microsoft.

From the tone of Thompson’s voice when he answered the question, he’s not happy with the rumors. Yet Symantec is at least partially to blame. There have been frequent reorganizations since it bought Veritas, usually accompanied by layoffs. Symantec admitted a large layoff in April but would not give details. This left the door open for scared Symantec employees, disgruntled former employees and opportunistic competitors to attempt to fill in the details. And Symantec execs have talked about getting rid of poor performing units on previous earnings calls.

But Wednesday’s call was upbeat. Symantec reported outstanding results all around, and storage was front and center. Email archiving, backup, and storage management were among the product segments that posted double-digit year-over-year growth. Thompson and COO Enrique Salem talked of a bright future for NetBackup 6.5, Backup Exec 12, and Storage Foundation. They emphasized Symantec’s encryption and virtualization capabilities and gushed about three hot storage areas where Symantec has hardly been a pioneer: data deduplication, continuous data protection and software as a service (SaaS).

Symantec’s earnings were impressive in current economic conditions, although with 53 percent of its revenue from international sales, it took advantage of favorable foreign exchange rates against the dollar. Symantec gained share from its major rival EMC on the backup front, with 11 percent year-over-year growth compared to EMC’s 8 percent growth.

The question now is whether the strong storage performance will prompt Symantec execs to forget about spinning off any pieces, or whether it will only add to the value of a possible sale. Thompson’s take is that nothing is for sale. Despite what you might have heard.


April 30, 2008  10:23 AM

RenewData takes a single swipe at tapes

Beth Pariseau

Even as we continue to debate whether or not tape is dead (a debate that at least suggests tape’s salad days are probably behind it), some of the most interesting innovations I’ve seen in tape technology are happening right now.

For example, there’s Index Engines’ tape indexing and search software. If backup administrators had always had the ability to do a keyword search across dozens of backup tapes to identify which tapes should be restored, along with the ability to extract single relevant files from those tapes, we might never have heard of a VTL.
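At heart, that capability is a searchable index over tape contents, so you restore only the tapes that matter. An inverted index is the classic structure for the job; here’s a toy sketch, illustrative only and not Index Engines’ actual implementation:

```python
# Toy inverted index mapping keywords to the tapes and files that contain them,
# the classic structure behind "which tapes should be restored?" Illustrative
# only; this is not Index Engines' implementation.
from collections import defaultdict

index: dict[str, set] = defaultdict(set)

def index_file(tape: str, path: str, text: str) -> None:
    for word in text.lower().split():
        index[word].add((tape, path))

index_file("TAPE001", "/mail/q1.pst", "merger draft with contoso")
index_file("TAPE007", "/docs/memo.txt", "cafeteria lunch menu")

print(sorted(index["merger"]))  # [('TAPE001', '/mail/q1.pst')]
```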

I’d put the latest development from e-discovery services provider RenewData into that category as well. Renew says its tape-processing systems now need to take only a single pass through a given piece of linear media. Renew previously needed two or three passes, which required its admins to mount tapes in the proper order and reassemble data as it was ingested. The single-pass process will reduce the time it takes to find relevant information stored on its clients’ tapes.

The single-pass process is made possible by software that allows that data to be reassembled on the back end. Renew is not selling that software, except as part of the back-end of its hosted services. Renew’s VP of marketing Bob Little says the company doesn’t have any plans to offer it as an on-premise product.
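Renew isn’t saying how that software works, but the general shape of single-pass ingest is easy to sketch: read blocks in whatever order the media delivers them, keep enough sequence metadata to reorder them later, and do the reassembly on the back end. A purely hypothetical Python sketch:

```python
# Hypothetical sketch of single-pass ingest: blocks stream off the media in
# arbitrary order, are stored with sequence metadata, and are reassembled on
# the back end so tapes never have to be mounted in order.
from collections import defaultdict

def ingest(blocks, store) -> None:
    """Single pass: record each (file, sequence, payload) block as it arrives."""
    for file_id, seq_no, payload in blocks:
        store[file_id][seq_no] = payload

def reassemble(store, file_id) -> bytes:
    """Back end: stitch a file together from its blocks, whatever order they arrived in."""
    chunks = store[file_id]
    return b"".join(chunks[i] for i in sorted(chunks))

store = defaultdict(dict)
ingest([("doc1", 2, b"world"), ("doc1", 0, b"hello "), ("doc1", 1, b"there ")], store)
print(reassemble(store, "doc1"))  # b'hello there world'
```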

But I have to wonder whether someone else won’t find a way to develop something similar. I also wonder, if the tape space keeps finding new ways to access data randomly on linear media, whether this disk vs. tape debate could get much more interesting.

