Hardly a day goes by without a new storage service rolling out. On Monday, it was IBM’s turn to launch two storage services as part of its portfolio of services for midsized customers – organizations with 100 to 400 employees and a handful of Windows servers.
The interesting thing about IBM's Remote Data Protection Express and E-Mail Management Express offerings is that they are the first new services IBM has launched from its Arsenal Digital Solutions acquisition in December. The Arsenal brand is gone, but the remote data protection service is the same one Arsenal offered, right down to the data deduplication from EMC's Avamar.
The email archiving service is something Arsenal was working on at the time of the acquisition, pushed to market more quickly with IBM technology.
“Email management is a new offering [for Arsenal],” said Arsenal alum Brian Reagan, now an IBM Information Protection Services executive. “Under the covers we’ve started to adopt and integrate more IBM offerings.”
The email service covers Exchange and Lotus Notes, and Reagan said database and unstructured data archiving services are in the works.
Since Arsenal was into managed storage services long before anybody talked of clouds and SaaS (software as a service), I asked Reagan if he thought IBM should have kept the Arsenal brand. He said Arsenal did have a name for itself and partnerships with AT&T and other large providers, but IBM is a pretty recognizable name too.
“We get the benefit of IBM’s brand,” he said. “As Arsenal, we would have to spend twice as much money to get the attention of customers because they didn’t know who we were.” Reagan pointed out that IBM ran advertisements for its services during the PGA Masters broadcast. “That was something Arsenal could only dream of,” Reagan added.
Then again, you don't have to be IBM to attract attention for storage services these days. Everybody's getting into the cloud act, and Reagan says the glut of offerings has served mostly to confuse customers.
“There’s a tremendous amount of customer interest,” he said. “The downside is, it’s created confusion. Some of the really low end players that only service the consumer end of the market have clouded the picture. They’ve confused people wondering what the difference is between low end service that’s priced too good to be true and real resilient service.”
In other words, it will take awhile before enough sun shines on cloud computing so we can really know what to expect.
Until now, all we've known about Pi Corp., the startup EMC purchased in February, was that it was still in stealth and that its software was meant to provide access to content on multiple types of Web-enabled devices.
While that description makes sense, it also is vague enough to leave open the possibility of several different directions that type of technology could go in. I had imagined it transcoding things like movies and music for streaming delivery to iPhones as well as PCs, for example, or reformatting Word documents to be read on both laptops and Blackberries by corporate mobile workers.
But a little more information has come out about Pi since the acquisition, at PiWorx.com. You can download a product demo (which, unfortunately, my standard-issue laptop's specs don't support), but there's also a walk-through of the software's features, with screenshots.
What it reminds me of most is Flickr — except with music, documents and other content types. Rather than making a “photostream,” it looks like it can create multimedia personal sites or collections for sharing that bring together those different types of content.
It also looks like the product would offer services like those available with consumer network-enabled external hard drives that allow content to be accessed from the Web, but on a much bigger scale. Those external devices give whoever holds the admin password access to every piece of data in the repository; this would let people share content selectively, with more granular authentication.
So, let’s just say you’re a news writer at a storage conference and you need to see that PowerPoint from last week, but you’re in the middle of the show floor and your laptop’s upstairs. Let’s also say you don’t own an iPhone or PDA. The idea behind PiWorx, if I’m understanding correctly, is that you could go to one of the stations set up for people to check their Webmail, log in and call up the PowerPoint online.
I still wonder whether this software will be sold to corporations for internal use as well as deployed as part of EMC's Fortress. Despite an emphatic PiWorx message about data security, I suspect storage admins would be keener on this type of thing if the content delivery networks and storage repositories were their own, and access to the information were limited to the company's employees or authorized partners.
Then again, I’m not a storage admin. Any thoughts from those in the peanut gallery?
In a bit of a Friday surprise, storage systems vendor Nexsan Technologies today filed for an initial public offering (IPO) with the SEC.
The move is surprising not because of Nexsan's financial situation, but because storage IPOs are so 2007. IPOs in general are down this year, with Wall Street fearing recession and waiting to see whether the problems financial services firms are having will spread to other sectors.
On the storage front, no company has gone public since 3Par last November. NAS vendor BlueArc and consulting firm Glass House Technologies filed for IPOs late last year, but neither appears close to actually going public. Neither company has updated its S-1 filing with its latest earnings, which suggests neither is in a hurry to test the market.
Financial analysts and other industry sources say companies such as DataDirect Networks, Copan, and LeftHand Networks are eager to go public and probably would have filed by now if market conditions were better.
But Nexsan is brave enough – if brave is the word – to give it a shot. The systems vendor, which specializes in SATA arrays for archiving, probably isn't as well positioned as some of the above-mentioned vendors. Nexsan lost $3 million on revenue of $49.8 million for the year that ended June 30, 2007, and dropped another $2.3 million on $30 million in revenue during the last six months of last year. According to its filing, Nexsan has lost a total of $35.1 million.
A history of losses hasn't stopped other storage companies from going public. Those that took the plunge in 2006 and 2007 – 3Par, CommVault, Compellent, Data Domain, Double-Take, Isilon, and Riverbed – had at most one profitable quarter behind them, and almost all of them had never recorded a profitable quarter before their IPOs. The same goes for EqualLogic, which had its IPO short-circuited by a $1.4 billion acquisition offer from Dell.
And while Nexsan's finances don't look much different from those of Compellent or Isilon before they went public, at least those two had a bullish IPO market to go out into.
So Nexsan's progress is worth keeping an eye on. If it does go public at a decent price, it could spark another rush like the one the storage market rode from 2006 until late 2007.
The term “vendor lock-in” is rarely used in a good way by storage buyers. It usually means you’re stuck with products from one vendor, making it difficult to switch if you’re unhappy or something better comes along.
Still, with probably more storage product options than ever before, most companies buy all their primary storage from one vendor. That's according to a Forrester report, "Consolidate Storage Vendors to Reduce Complexity," released this week.
A Forrester survey of 170 companies ranging from SMBs to large enterprises in North America and Europe found that more than 80 percent bought their primary storage from one vendor over the last year. That includes 64 percent of the companies with more than 500 TB of raw storage.
The report, written by analyst Andrew Reichman, says using more than one primary storage vendor can make it more complex to manage, provision and support the storage environment. And while using multiple vendors can often bring better pricing, buying from one vendor can result in volume discounts.
“You may have tried to contain costs by forcing multiple incumbent vendors to continuously compete against each other, with price as the primary differentiator,” Reichman writes. “This strategy can reduce prices and limit vendor lock-in, but it can also lead to management complexity and poor capacity utilization.”
The report recommends keeping things simple by using fewer vendors when possible. However, that advice comes with several caveats: buying all storage from one vendor means taking the bad with the good, and some vendors' product families differ so much "they may as well come from different vendors."
Of course, I'm sure there are horror stories out there from organizations that have had bad experiences with lock-in, as well as from those that have hit incompatibility issues with products from multiple vendors.
My daily rounds of the storage industry web this morning brought me to The Storage Anarchist, a blog by an EMCer that I often find interesting. As it turns out, one of my articles was in his sights yesterday following EMC’s earnings call.
Most of the reaction to the first-quarter earnings announcement was rather more negative than I think EMC would have liked, considering the company posted record revenues. All the financial analysts on the call, wild-eyed from the fog of battle as the economy sinks further into the doldrums, seemed not to believe that EMC's forecasts for the year were really remaining unchanged. And they did ask plenty of pointed questions.
TSA’s description is rather more dramatic: “Several of the participating financial analysts inquired about the potential impact that the newly-delivered virtual provisioning for Symmetrix might have on future capacity demands. From the tone of the questions, you could easily imagine a pride of lions circling their prey.”
But I have to say the next sentence surprised me. “And sure enough, by noon Beth Pariseau had her coverage posted on SearchStorage, under the headline EMC’s Tucci: Thin provisioning mandatory but overrated.”
After that there's some discussion of a Byte and Switch article, and no further mention of my article, so I'm still not precisely sure why it was brought up. A little way down in the post, though, there's a reference to a bear that recently killed its trainer that I can't help but wonder about:
And all I have to say about the bear is: remember, these are wild animals, and they’re driven by instinct and not logic or trust.
Any resemblance between wild animals and industry experts is purely coincidental!
Again, it’s hard to tell exactly where that comment was directed, but I think he compared Mary Jander, Wall Street analysts and me to wild animals? That would certainly be a first for me!
So here’s the perspective from the other side of the coin (or cage, as it were). When the CEO of a major storage company explains to the folks on Wall Street exactly how his company is going to continue to make money on a feature billed by many in the industry as a way to not give vendors like EMC quite so much money in the long run, I think it’s probably important for users to hear that perspective on the technology. I think it’s also probably important for users to have a realistic sense of the benefits of a given technology, one they’re not getting from most vendor marketing. That’s the logic and trust I care about.
Meanwhile, TSA saves most of his thinly veiled critiques for IBM, though of course he never names names. This in turn prompted IBM blogger Barry Whyte to respond with…the news that IBM is planning thin provisioning for SVC, under the name "Space Efficient Volumes/Vdisks (SEV)."
So lets think about this, if for example you had an appliance that could front all storage types, provide you with online data migration between said storage types, let you manage copy services across them all, soon provide Space Efficient characteristics, natively support any SATA or flash device you decided you wanted, provide many thousands of disks behind a single management interface and integrate with all the ‘Israeli’ products you could imagine… why would you care that just one of your products that has its largest footprint as a Mainframe box didn’t have all of those features, when according to Mr Burke, everything the Mainframe does well it does itself, and by his own admission won’t need or use features like Thin Provisioning.
Interesting. But what’s odd there is that the mainframe box IBM sells is the DS8000, and last I heard, IBM’s planning thin provisioning for that too. Or maybe it will be getting thin provisioning by way of SVC?
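For readers newer to the term: the core idea behind thin provisioning, whatever each vendor calls it, is that a volume advertises a large capacity to the host but draws physical extents from a shared pool only when a region is first written. Here's a minimal sketch of that allocate-on-first-write mapping; all names and sizes are illustrative, not IBM's or EMC's actual implementation:

```python
# Toy sketch of thin provisioning (allocate-on-first-write).
# All names here are illustrative, not any vendor's implementation.

EXTENT_SIZE = 4  # blocks per extent; real arrays use much larger extents

class ThinVolume:
    def __init__(self, virtual_blocks, pool):
        self.virtual_blocks = virtual_blocks  # capacity advertised to the host
        self.pool = pool                      # shared pool of free physical extents
        self.map = {}                         # virtual extent -> physical extent

    def write(self, block, data):
        extent = block // EXTENT_SIZE
        if extent not in self.map:
            self.map[extent] = self.pool.pop()  # allocate only on first touch
        # a real array would now write `data` at the mapped physical address

    def allocated_blocks(self):
        return len(self.map) * EXTENT_SIZE

pool = list(range(1000))  # physical extents shared by all thin volumes
vol = ThinVolume(virtual_blocks=10**6, pool=pool)
vol.write(0, b"x")
vol.write(1, b"y")     # same extent as block 0: no new allocation
vol.write(5000, b"z")  # first write to a distant region: one more extent
print(vol.allocated_blocks())  # 8 physical blocks backing a million-block volume
```

The gap between advertised and allocated capacity is exactly why the technology is pitched as a way to defer disk purchases, and why vendors selling the disk have to explain how they still make money on it.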
HP Upline crashed this week, just a few days after it was launched. As Chris Rock once said, “Grand opening, Grand closing.”
The crash of such a brand-new service doesn't hit end users as hard as a crash at a more established player would, but it's still got to hurt for HP, especially given how important it is for storage vendors to establish competitive cloud computing and SaaS offerings sooner rather than later.
According to Sheila Watkins, spokeswoman for HP’s Personal Systems Group, “HP chose to temporarily suspend the Upline service to investigate what we believe is an isolated technical issue.” She said HP expects Upline will be available again by the end of the week.
EMC, which has made cloud computing a top priority, went on the offensive with this right away. “HP Upline continues the long tradition of screwing HP customers,” trumpeted EMC employee Storagezilla, who revealed he’s not only a critic of HP, he’s also (technically) one of those customers. Part of his post also includes a copy of the letter HP sent to its customers apologizing for the crash and promising refunds. No way obtaining such a letter was what he was hoping for when signing up for the account…
Meanwhile, type in the words ‘HP Upline’ in Google, and you might see a tasteful advisory from EMC’s Mozy, asking: “Shafted by Upline?”
Carbonite has Upline-ified its own search engine marketing with a similar, if less bluntly worded, ad.
Elsewhere, hosted storage service provider Nirvanix has mounted its strongest attack on rival Amazon S3 yet, offering a 30-day “fee holiday” for all uploads from any source to a new account on its Storage Delivery Network (SDN). If the free 30 days aren't enough, Nirvanix, which uses a Web content-delivery infrastructure to speed storage transfers over the wire, also unveiled an “Amazon S3 Migration Tool,” specifically meant to get users off S3 and onto the Nirvanix service.
“I say always pick on the biggest guy,” Nirvanix chief marketing officer Johnathan Buckley said. “If we can show we’re 300 to 400 times as fast as Amazon, why can’t we steal those customers?”
Especially interesting, in light of all this catfighting, is something Storagezilla also pointed out:
Wired wrote a puff piece on Amazon Web Services, the story of which I’ve heard at every web get together from where I’m sitting now, around the world and back again. But what’s interesting is that AWS’s total revenue for 2007 was $100M.
Lets face it $100M in anyone’s language is good money but when you consider that Amazon is the undisputed leader in that space that’s a piddling amount of revenue and a clear sign that this market hasn’t even started moving yet.
In a conversation today with new SRM vendor ArxScan, CEO Mark Fitzsimmons mentioned a use case for the startup's product that had me raising my eyebrows: basically, keeping data deduplication systems honest.
According to Fitzsimmons, a large pharma company wanted the ArxScan product to migrate data identified as redundant by its data deduplication system to another repository and present it for review through a centralized GUI, so that the customer could sign off on which data was to be deleted.
“So you’re replacing an automated process in the data center with a manual one?” was the confused reaction from one of my editors on the conference call.
“Well, we’re working on automating it,” was the answer. “But the customer found dedupe applications weren’t working so well, and wanted a chance to look at the data before it’s deleted.”
I’ve heard of some paranoia at the high end of the market about data deduplication systems, particularly when it comes to virtual tape libraries or large companies in sensitive industries like, well, pharmaceuticals. One question I’ve heard brought up more than once by high-end users is about backing up the deduplication index on tape, the better to be able to recover data from disk drives should the deduplicating array fail. But breaking apart the process for better supervision? That’s a new one for me.
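That index anxiety makes more sense if you consider the basic shape of a dedupe store: unique chunks kept once, keyed by their hashes, with each backed-up file reduced to a "recipe" of chunk references. Lose the recipes and the chunks alone can't be reassembled, which is why some users want the index on tape too. A toy sketch (illustrative only, not any particular vendor's design):

```python
import hashlib

# Toy content-addressed dedupe store: each unique chunk is stored once,
# keyed by its hash; a "file" becomes a list of chunk hashes (its recipe).
# Illustrative only -- not how any particular product works.

CHUNK = 8  # bytes per chunk; real products use kilobyte-scale chunks

chunks = {}  # hash -> chunk bytes (the deduplicated pool)
index = {}   # file name -> list of chunk hashes (the recipes)

def store(name, data):
    hashes = []
    for i in range(0, len(data), CHUNK):
        piece = data[i:i + CHUNK]
        h = hashlib.sha256(piece).hexdigest()
        chunks.setdefault(h, piece)  # duplicate chunks are stored only once
        hashes.append(h)
    index[name] = hashes

def restore(name):
    # without `index`, the pool of chunks is unrecoverable soup
    return b"".join(chunks[h] for h in index[name])

store("a.txt", b"hello worldhello world")  # repeated content within the file
store("b.txt", b"hello world!!")           # shares a chunk with a.txt
print(len(chunks))  # 4 unique chunks stored for 5 chunk-sized pieces
assert restore("a.txt") == b"hello worldhello world"
```

The review workflow ArxScan describes would sit between the "identified as redundant" step and the actual delete, which is unusual precisely because dedupe is normally an automated, inline or post-process affair.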
Anyone else heard of anything like this? Or is the customer going overboard?
Until now, IBM's VTL partner has been FalconStor, while Diligent supplies Hitachi Data Systems and Overland Storage. IBM and HDS say they're still committed to HDS selling Diligent's ProtecTIER software even though it's now owned by IBM.
“We don’t see any change with Diligent; the agreements we’ve had with them will continue,” HDS CTO Hu Yoshida said today. He compared the situation to EMC buying VMware. “VMware works well for us, we drive a lot of business from VMware,” he said. “This is a new world, we’re in an era of coopetition.”
Still, you don’t have to squint hard when reading the statement HDS issued today to see plenty of wiggle room:
The ProtecTIER software from Diligent Technologies offers very tailored data de-duplication technology that addresses only a fraction of the overall business continuity and disaster recovery capabilities that our customers require. This product comprises a single component of the broader portfolio of market-leading back-up and data protection solutions that Hitachi Data Systems offers its customers.
In other words, HDS is saying it could get by fine without ProtecTIER. Where else can HDS go if it wants to switch? FalconStor is certainly available now that IBM has Diligent and EMC is partnered with Quantum for dedupe. Sun and Copan sell FalconStor data dedupe software, but Sun and Copan don't exactly equate to EMC and IBM for disk backup market share. There's also Sepaton, which has a VTL OEM deal with Hewlett-Packard — although HP has yet to offer Sepaton's dedupe software.
One dedupe vendor not looking for an OEM partner is Data Domain, which rode deduplication backup products from stealth to IPO in four years and is generally considered the dedupe market leader. Data Domain CEO Frank Slootman says his channel partners wouldn’t appreciate competition from OEMs.
“That’s a decision you have to make early on as a company,” Slootman said. “If you start off as a channel company like we are, it’s difficult to run an OEM model right alongside it because they are incompatible. OEM usually means death to your channel.”
Two press releases caught my eye this week that aren’t exactly earth-shattering, but got me thinking about the way the storage market is changing and widening.
First, SanDisk revealed that its flash cards are recording footage of an excursion to Everest by a three-member climbing team sponsored by Dell, Windows Vista, MSN and MSNBC. Here’s a media gallery of the chilly-looking expedition so far.
Then there was also an announcement from RAID, Inc. of its compact Razor RAID array using 2.5-inch SAS drives, billed as “ideal for small spaces such as cockpits, tanks, submarines and other civilian applications with specific space constraints.” The ‘cockpits’ idea got my imagination going.
Between flash memory, with no moving parts and lower power requirements, and small-form-factor hard disks, not to mention the continued growth in the content we store digitally, enterprise-level data storage is worming its way into previously unheard-of environments. As such, many in the industry have been predicting an increasing focus on edge devices, mobile computing environments and the mobile workforce for the storage market. Hopefully, enterprise storage managers are paying attention to these new frontiers while architecting storage at headquarters.
Also, it's Friday, and who couldn't use a laugh? Check out this priceless Gizmodo post on an internal Microsoft sales video that recently made its awkward YouTube debut. Key line: "You've gotta wonder how, in a company the size of Microsoft, there's not a single person who [can] step up and say 'Hey, you know what? This Vista music video we're making for the sales department, complete with a cheesy Bruce Springsteen impersonator and horrible music, damages the dignity of not only everyone involved in its production, but everyone who watches it.'"
Disk vs. tape is not a new argument, but over time it takes on different permutations, especially as disk-based backup in its various forms gains popularity and new technologies get introduced like data deduplication that bring some of the economics of disk closer to those of tape.
One theme I've heard cropping up in this discussion among high-end vendors lately is the idea of people in large enterprises deploying vast amounts of disk for backup, then realizing disk's cost inefficiencies and its space and power requirements, and finally running back to tape, either alongside disk or as a replacement for it.
This back-and-forth popped up again in a post written by IBM's Tony Pearson in response to a post written by Hitachi Data Systems' Hu Yoshida. Yoshida's post referred to a conversation with a storage admin at SNW who said his robotic tape libraries were actually drawing more power than his enterprise VTL.
This idea makes Pearson sputter:
I am not disputing [the] approach. It is possible that [the user] is using a poorly written backup program, taking full backups every day, to an older non-IBM tape library, in a manner that causes no end of activity to the poor tape robotics inside. But rather than changing over to a VTL, perhaps Mark might be better off investigating the use of IBM Tivoli Storage Manager, using progressive backup techniques, appropriate policies, parameters and settings, to a more energy-efficient IBM tape library. In well tuned backup workloads, the robotics are not very busy. The robot mounts the tape, and then the backup runs for a long time filling up that tape, all the meanwhile the robot is idle waiting for another request.
The weird thing is, I’ve heard plenty of vendors debating this of their own accord, usually taking sides along product lines with tape-centric vendors taking the position Pearson did, and vendors who sell disk for secondary storage taking the opposite view.
But I'm curious. I'm sure there's some middle ground where the advantages and disadvantages just come down to preference. But might there really be a trend here? Are users finding problems with disk-based systems and re-integrating tape? How many organizations really even left tape totally behind to begin with? And how do new data reduction/power reduction technologies change the equation? One thing not addressed by either Pearson's or Yoshida's post is where MAID might come into this argument, as well as the potential combination of MAID and dedupe.