Rackspace Holdings became the first company to complete an IPO since March when it priced its shares at $12.50 Thursday (the price fell to $10.01 Friday on the first day of trading).
The hosting company doesn’t specialize in storage, but it is beta testing its CloudFS Web-based storage service, which competes with Amazon S3 and others. Rackspace also launched a dedicated NAS (DNAS) storage service in June aimed primarily at file-sharing Web sites and companies that deal with rich media files. That’s built on NetApp FAS2000 storage, and Rackspace offers an Unmetered Backup service through a partnership with CommVault.
Only a small piece of Rackspace’s revenue ($130.8 million last quarter and $362 million in 2007) comes from storage products, and growth could be tough in the storage space with competition heating up.
“Storage is tough [for hosting companies],” Forrester Research analyst Stephanie Balaouras said. “If it’s traditional transaction oriented processing, the storage really needs to be in close physical proximity to the servers for performance reasons. If it’s non-transaction processing — you need to store Web 2.0 type of content like photos, audio, and video — you’re less performance sensitive and you can rely on storage services in the ‘cloud.’ But a lot of startups who need this type of storage service seem to be going with Amazon S3.”
Considering Amazon’s track record for outages, maybe Rackspace is getting into storage at the right time.
While EMC generally received high marks in the storage industry for bringing thin provisioning, MAID, and solid state disk into its midrange arrays, the CEOs of smaller competitors 3PAR and Compellent say they’re not worried about the new Clariion CX-4 features stunting their growth.
3PAR and Compellent completed IPOs last year, and have continued to grow in their early quarters as public companies. 3PAR increased revenues 80 percent last quarter and turned a small profit ($2 million) for the first time. Compellent revenues grew 74 percent year over year, and it narrowed its losses to $603,000, down from $1.9 million a year earlier.
Compellent and 3PAR likely owe some of their success to beating the big guys to the punch with thin provisioning and other features. Now EMC comes along with thin provisioning, raises the bar in the midrange with SSD support, and gets the jump with 8-gig FC connectivity. But Compellent CEO Phil Soran points out his company’s Storage Center systems have supported thin provisioning since 2004. He said Compellent has even offered SSDs in “special bid” custom installations.
Soran said EMC’s CX-4 “shows that others have been leading in the innovation space for many years now. It wasn’t long ago that EMC said they don’t recommend thin provisioning, and we’ve had it for almost five years.”
Thin provisioning pioneer 3PAR has had it in its InServ systems since 2002. Over the last year, thin provisioning has also popped up in Hitachi Data Systems’ USP-V platform, EMC’s SAN and NAS boxes, and IBM’s SAN Volume Controller (SVC). IBM also offers thin provisioning in the XIV systems it will rebrand in the coming months.
But 3PAR CEO Dave Scott says not all thin provisioning is created equal. When asked during his company’s earnings call last week what impact he thought EMC’s virtual provisioning would have on InServ sales, Scott said: “Not a lot. It’s becoming clear that customers who do any analysis of the [thin provisioning] capabilities that EMC and Hitachi put together know they don’t reflect the kind of ease of use and automation necessary to implement this efficiently. You end up with what we call chubby provisioning, that is, it enhances operational overhead risk.”
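For readers new to the feature the vendors are sparring over, thin provisioning means presenting a large virtual capacity to the host while allocating physical capacity only when data is actually written. The sketch below is a minimal, vendor-neutral illustration of that allocate-on-write idea — the class name, chunk size, and structure are all hypothetical, not any shipping array’s implementation.

```python
# Illustrative sketch of thin (allocate-on-write) provisioning.
# All names and sizes here are hypothetical, chosen for clarity.

CHUNK = 1 << 20  # 1 MiB allocation granularity


class ThinVolume:
    def __init__(self, virtual_bytes):
        self.virtual_bytes = virtual_bytes  # capacity promised to the host
        self.chunks = {}                    # chunk index -> bytearray, allocated lazily

    def write(self, offset, data):
        pos = 0
        while pos < len(data):
            idx, off = divmod(offset + pos, CHUNK)
            # Physical capacity is consumed only on first touch of a chunk
            chunk = self.chunks.setdefault(idx, bytearray(CHUNK))
            n = min(CHUNK - off, len(data) - pos)
            chunk[off:off + n] = data[pos:pos + n]
            pos += n

    def allocated_bytes(self):
        return len(self.chunks) * CHUNK


vol = ThinVolume(virtual_bytes=100 * (1 << 30))  # host sees 100 GiB
vol.write(0, b"x" * (3 * CHUNK))                 # but only 3 MiB of writes...
print(vol.allocated_bytes() // CHUNK)            # ...consume 3 chunks of real capacity
```

The “chubby provisioning” jab is about what this toy omits: without automation for alerting and growing the physical pool as it fills, the admin trades capacity savings for new operational risk.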
Soran also criticized EMC for having a “model-based architecture” with disparate midrange SAN, enterprise SAN, and NAS systems, while competitors such as Compellent, 3PAR and NetApp have one model that doesn’t require rip-and-replace upgrades. EMC claims that is one path it will not follow its rivals down, although some analysts suspect it might.
Google has revamped the hardware and some aspects of the software for the Google Search Appliance, which can now support 10 million files in a single box, up from 3 million in the previous version.
Google has also added new biasing features which allow corporate administrators to weight search results differently for different end users – for example, the marketing and engineering departments might get different documents returned first for the same keyword. This is termed front-end biasing.
Metadata biasing is also new with this release. That lets admins rank metadata such as create date or author on a sliding scale of importance, so that, for example, documents written by the CEO are returned first.
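Conceptually, both biasing features boil down to multiplying a document’s base relevance score by per-field weights before ranking. The toy below illustrates that idea only — the field names, weights, and function are hypothetical and are not Google Search Appliance configuration or APIs.

```python
# Toy illustration of metadata biasing: adjust a base relevance score by
# per-field weights (e.g., boost documents authored by the CEO).
# Field names and weight values are invented for this example.

def biased_score(doc, base_score, weights):
    score = base_score
    for field, value_weights in weights.items():
        # Unweighted values leave the score unchanged (multiplier of 1.0)
        score *= value_weights.get(doc.get(field), 1.0)
    return score


docs = [
    {"title": "Q3 roadmap (draft)", "author": "intern"},
    {"title": "Q3 roadmap", "author": "ceo"},
]
weights = {"author": {"ceo": 2.0}}  # double the score of CEO-authored docs

ranked = sorted(docs, key=lambda d: biased_score(d, 1.0, weights), reverse=True)
print(ranked[0]["author"])  # -> ceo
```

Front-end biasing works the same way, except a different weight table is applied depending on which group of users issued the query.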
Data growth is making indexing and search necessary, according to Forrester Research analyst Stephanie Balaouras. “I would say, once you can measure your storage in hundreds of TBs,” data indexing will probably be necessary, she said.
“Below 100 TBs, you should have a rough understanding of how that data is broken down between email, unstructured files, and database data,” she added. “And with some basic policies, the most important information should be ingested into a content management system.”
Former Brocade outside counsel Wilson Sonsini Goodrich & Rosati has agreed to pay Brocade $9.5 million to settle a lawsuit the company is bringing against 11 of its former board members and executives in an attempt to recoup what it says are $830 million in costs relating to options backdating charges against former CEO Greg Reyes. The news of the settlement was first reported today by The Recorder.
The original report also shows how Brocade arrived at the $830 million figure:
The voluminous complaint details what the backdating scandal cost Brocade, including $160 million to settle the securities class action, and $7 million to settle SEC charges. The company spent about $7.5 million on its two internal investigations and another $67 million defending Reyes, Jensen and other former executives and $30 million defending itself.
The complaint also said Brocade lost $470 million when the backdating scandal dashed a proposed merger with Cisco.
According to law.com, Wilson Sonsini was named a defendant in an earlier derivative lawsuit filed in the Northern District of California in April, which has been consolidated into the new suit filed last Friday and reported by The Recorder. The original complaint relating to that case cites a 2006 piece by Business Week reporter Peter Burrows, in which the revelation of the scuttled Cisco acquisition was attributed to Reyes.
“A successful sale of the Company potentially could have expunged defendants of their liability to Brocade for the stock option manipulation then under investigation by the Audit Committee,” according to the April court filing.
For those of us out here watching with popcorn, it’s fun to think about how different things might have been if Cisco did buy Brocade back then. Today, Cisco would likely be selling Brocade switches as its midrange platform and McData would probably still be foundering on its own.
On EMC’s conference call this morning officially launching the CX4, storage division president Dave Donatelli told analysts that SSDs will also be qualified for Celerra NAS systems over the next six months. The CX4-960 will support SSDs in October, and Symmetrix DMX systems are already available with SSDs.
Despite the product lines’ increasingly common features, Donatelli said speculation about a merger, given the multiple overlaps among Celerra, Clariion and Symmetrix, is “due to a profound lack of understanding” in the market. “The Symmetrix has 24 boards,” he said. “If you lose a whole board, you lose 1/24th of your performance.” Clariion is a two-board system.
But that hasn’t stopped people from asking this question of EMC. Industry analysts, Wall Street analysts, users…especially the users – a few told me they weren’t sure which way to go between some EMC platforms going forward. A Wall Street analyst asked Donatelli on the conference call today about whether EMC plans to consolidate software and hardware platforms “the way NetApp has.” That had to hurt.
There have been rumors about the CX and Symmetrix DMX merging, communicating, and generally coming together, for years. I remember one rumor that the CX and DMX would be able to share data with the release of ECC 6. Obviously that one never came to fruition. But EMC did standardize hardware components such as disk shelves between the CX and DMX for manufacturing efficiencies a few years ago. And the capacity points, replication and management features have been getting closer and closer together.
“It’s a running joke, every time we update Symmetrix, we get the question about what’s going to happen to Clariion, and every time we update Clariion, we ask what’s going to happen to Symmetrix. We keep making Clariion bigger and Symmetrix smaller,” Donatelli said. But he didn’t say why.
That’s the thing. EMC’s not going to come out and say, “Yes, we’re merging our primary storage products the same way we’re planning to merge our backup products, so please don’t buy anything we have out currently in anticipation of something to come.” So no matter how vociferous their denials that this might be the case, for some in the industry, their actions are speaking louder than their words.
A couple of months ago, my co-blogger Tory Skyers wrote a post questioning the impact of data deduplication on evidence preservation and chain of custody best practices for e-discovery data.
It’s a question many users are asking, apparently, as “Mr. Backup” himself, GlassHouse Technologies vice president of data protection services W. Curtis Preston, recently addressed it on his Mr. Backup Blog. First, Curtis brings up the fact that if today’s legal standards for electronic evidence considered dedupe/compression a change in the data rendering it inadmissible, data from tapes (naturally compressed) wouldn’t be admissible either.
But Curtis also brings up another interesting point, and this is where I think the e-discovery waters have been muddied by everyone and their brother positioning products for that space. He writes:
You have to address the entire chain of custody. Let me give an example. If every email that is sent or received by an email system is immediately archived and stored in an archiving system that can demonstrate for anyone concerned when/where an email came from and how long it has been stored, you could use that system to build a non-repudiatable source of data that could be used in legal proceedings. (It’s not just about the software, of course, as you have to address access and all other kinds of issues, but that would be a start.) BUT, IMHO, non-repudiation requirements have much more to do with proving chain of custody than they do with the content of the data, and dedupe systems are just as good at proving that as any other storage system — [in other words] they don’t. It’s usually up to the system that put the data in there and took it out.
I think Preston raises many good points, but this doesn’t negate some of the points Skyers also raised. Among the biggest: “Are we sure our legislators understand the differences between zip (lossless) and JPEG (lossy) compression?…The answer to these questions, while second nature for us technology folks, may not [be] so second nature for the people deciding court cases.”
If IT pros are mulling and chewing over this question, you can be sure lawyers are, too. And out of the pool of citizens that could make up a jury of your peers, how many would immediately understand Preston’s paragraph above?
Just as its MAID technology did, Copan’s energy-savings deal with Pacific Gas & Electric, which began late last year, is expanding.
PG&E offers rebates for customers who use storage systems it has certified as energy-efficient. The utility started last year with Copan’s Revolution systems and last week qualified 3PAR’s InServ systems.
An initiative called Conserve IT revealed today by storage research group Wikibon and seven storage vendors looks to bring more vendors and more technologies into the PG&E program, as well as help other utilities offer similar programs. Wikibon is measuring the green-friendliness of technologies such as MAID, flash, thin provisioning and virtualization.
The first vendors to sign on are 3PAR, Compellent, DataDirect Networks, EMC, Hitachi Data Systems, Nexsan and Xiotech.
Wikibon founder and principal contributor David Vellante said his group will validate the baseline for energy efficiency, test products, write reports and submit them to PG&E.
“We bring in the last mile,” Vellante said. “We do the dirty work about really understanding why these products are more energy-efficient than some baseline. We determine where does that baseline come in and who measures this.”
Vellante said Wikibon is talking to utility companies in Texas, California, New Jersey, Minnesota and Canada, and he expects to add others to the list. Other technologies, such as data deduplication, will likely be targeted too.
He said Wikibon and Conserve IT have no formal deal with PG&E, and vendors can qualify their technologies without the group’s help. But the group has already helped 3PAR certify InServ. Vellante said that along with storage companies, he is talking to software, server and communications firms. “We feel we can do this more quickly and efficiently than anybody right now, other than PG&E,” he said.
The program comes at the right time, as hype over green computing is being replaced by people actually looking to do something about it — even if it is more to save money than to save the environment in many cases.
NetApp’s gift of a T-shirt the other day made me think about all the other s.w.a.g. I’ve collected during my time in the storage industry – I’ve become something of a collector of odd trade-show tchotchkes. And I realized when I looked at the items I’ve kept, my desk has become a personal museum of now-defunct or acquired companies. So I thought a brief tour of my collection might be a fun Friday post.
Here are some of the things I’ve accumulated:
According to Symantec’s earnings call last night, updates to NetBackup announced at Symantec Vision in June helped keep its storage business strong this quarter. Storage and services revenue increased 20% year over year to $616 million, which COO Enrique Salem attributed in part to new NetBackup features such as continuous data protection (CDP) and integration with PureDisk data deduplication.
The ability to offer customers one throat to choke for storage management, archiving and backup has also paid off, according to Salem, as the sales force focused on selling across product groups. Enterprise Vault sales grew 30% year over year, and the product enjoyed some good publicity this quarter with selection to various analyst product rating lists and customers raving at Vision about its features. The Storage Foundation product line also “posted its best results in years,” according to Symantec, though numbers weren’t given.
Symantec has had a rocky time of it in the recent past, especially over the last year, following frequent management shifts, and its market share slipped in IDC quarterly tracker reports on the storage software market. As recently as last quarter, there was speculation that Symantec would sell off its storage business units.
But during last quarter’s earnings call, Symantec also reported good growth for its storage business units. Email archiving, backup, and storage management were among the product segments that posted double-digit year over year growth for Symantec’s fiscal fourth quarter.
There’s one dark cloud still threatening to rain on Symantec’s parade, however — its sales channel. Earlier this month it was reported that Symantec would be going direct with its largest customers, a report that was later contradicted by top Symantec channel executives.
That hasn’t stopped unrest among channel partners whose feathers were ruffled by the original report, and it hasn’t stopped Symantec competitors from swooping in to try to take advantage of the confusion. Following last night’s earnings call, skepticism over Symantec’s “conflicting channel messages” seemed to have spread to financial analysts, as well. According to a note to investors sent out by TBR:
Although Symantec defends the announcement by explaining that its strategy actually hasn’t changed, but that it only made its customers aware of the option to go direct, TBR believes the damage has already been done in the partner community. Symantec competitors wasted no time in stepping in to try to lure Symantec partners away, as Trend Micro and other smaller players made bids for Symantec’s partners by pushing their own channel programs during the confusion. Although the strategy would give Symantec more control over cross selling its portfolio in its largest accounts and potentially improve margins, TBR does not expect the change to make a big impact on either metric. However, greater involvement in large accounts from the direct sales force will give Symantec more control over cross selling products across its portfolio to drive new license revenue in existing accounts.
Just about all data deduplication vendors make claims about the dedupe ratios their systems provide, with the caveat that the ratios vary by data type and backup frequency.
Sepaton today says it’s willing to guarantee its ratio for Exchange. The VTL vendor said if customers don’t get a 40:1 ratio with its DeltaStor dedupe software in 30 days, it will throw in a free disk shelf with at least 7.5 TB of capacity – a $50,000-plus value.
There are some conditions. First, the customer must use Symantec NetBackup for now, because that’s the only backup software DeltaStor supports. And the customer must do daily full backups, which result in better reduction than incremental backups. The guarantee is part of what Sepaton calls a FastStart Deduplication Package for Symantec NetBackup, consisting of an S2100-ES2 library with 20 TB and DeltaStor.
Analysts who closely follow the dedupe market say Sepaton deserves credit for making the guarantee, but isn’t exactly sticking its neck out. Because Exchange includes a lot of messages sent to multiple recipients with attachments, it tends to have a great deal of duplicated data that can be reduced.
“To guarantee anything takes guts,” Arun Taneja of the Taneja Group said. “It’s a good marketing strategy for them to set the trend and draw a line in the sand. But for full backups for email for 30 days, 40:1 is very achievable. So I would say it’s not a very large risk.”
Glasshouse backup guru Curtis Preston agrees. “I think it’s a great idea and I doubt they would have done it if they hadn’t already done a lot of testing to verify they actually can get more than 40:1 in most Exchange environments,” he said. “There is a lot of duplicate data in Exchange.”
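The analysts’ point — that 40:1 is very achievable for daily fulls — follows from how dedupe works: identical chunks hash to the same fingerprint and are stored only once, so each day’s mostly unchanged full backup costs almost nothing. The sketch below illustrates this with a toy chunk-hashing store; the 4 KB chunk size and ~2% daily change rate are arbitrary assumptions, not Sepaton’s DeltaStor internals.

```python
# A toy sketch of why repeated full backups dedupe so well: a chunk is
# fingerprinted, and only never-seen fingerprints consume capacity.
import hashlib
import os

CHUNK = 4096
NCHUNKS = 1024  # a 4 MiB stand-in for a mailbox store


def dedupe_ratio(backups):
    seen, total, stored = set(), 0, 0
    for backup in backups:
        for i in range(0, len(backup), CHUNK):
            piece = backup[i:i + CHUNK]
            total += len(piece)
            fp = hashlib.sha1(piece).digest()
            if fp not in seen:  # only new chunks land on disk
                seen.add(fp)
                stored += len(piece)
    return total / stored


base = os.urandom(NCHUNKS * CHUNK)
backups = []
for day in range(30):  # 30 daily full backups
    image = bytearray(base)
    for c in range(20):  # ~2% of chunks change each day
        off = (day * 20 + c) * CHUNK
        image[off:off + CHUNK] = hashlib.sha256(f"{day}:{c}".encode()).digest() * (CHUNK // 32)
    backups.append(bytes(image))

print(f"{dedupe_ratio(backups):.0f}:1")  # high teens under these assumptions
```

Crank the retention up and the redundancy factor (here, multi-recipient Exchange attachments in real life) and the ratio climbs quickly, which is why daily fulls of email are such friendly territory for a 40:1 guarantee.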
Sepaton director of product management Jim Shocrylas said the 20 TB system would give a customer with 4 TB of daily full backups a retention period of about half a year. He said the guarantee applies to full backups because Microsoft’s best practice recommendation for backing up Exchange is daily fulls.
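Shocrylas’s half-year figure checks out arithmetically under the guarantee’s own assumptions:

```python
# Retention arithmetic for the FastStart package, using the figures
# stated above: 4 TB daily fulls, a 40:1 dedupe ratio, 20 TB of capacity.
daily_full_tb = 4
dedupe_ratio = 40
capacity_tb = 20

stored_per_day_tb = daily_full_tb / dedupe_ratio   # 0.1 TB lands on disk per day
retention_days = capacity_tb / stored_per_day_tb   # 200 days of fulls fit

print(round(retention_days))  # -> 200, roughly "about half a year"
```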
“This is first of a number of guarantees we’ll be coming up with for specific data,” Shocrylas said. “Others will follow.”