This morning we published a Q&A with EMC backup and recovery division president and former Data Domain CEO Frank Slootman on our SearchDataBackup.com site. However, not all of our conversation with Slootman made it into the final piece. Following are a few of the more interesting tidbits from the cutting room floor, including shifting competitive dynamics following the EMC/Data Domain acquisition, whether Dell will sell EMC’s backup product portfolio, and general trends in data deduplication.
EMC said that Data Domain gained 600 new customers on its fourth-quarter earnings call. Is there anything different about those customers from Data Domain’s existing customer base?
Slootman: I think it’s more concentrated on the global enterprise account side. As you can imagine, with EMC’s global channel and account presence, that part of the business grew disproportionately fast. That’s one. The other is that our international business grew much faster than it historically has. A third thing, which kind of surprised me, is that of these 600 customers who were new to Data Domain, 300 were also new to EMC. That’s very significant, because how many companies have not bought products from EMC in their lifetime?
Storage tends to be a trench war – everybody has their accounts and it’s an inch to the left, an inch to the right but it’s relatively static. But this market is not like that – it’s wide open and there’s a tremendous opportunity to just sort of move the boundaries.
Who would you say is EMC’s biggest competition in backup right now?
Slootman: It’s IBM by a considerable margin. Our competitive dynamic obviously changed because of being in the EMC orbit. Number one, we don’t have EMC to kick around anymore. [laughs] Which changed a lot, of course … but secondly, it puts us much more in the enterprise game than we already were. We look at our CRM statistics in that type of engagement, and IBM is very prominent, number one, and then there’s a few others coming after that – NetApp, Symantec, that sort of thing.
What are you seeing from NetApp?
Slootman: They’re getting rid of the VTL. I think that’s a wise move because they pretty much telegraphed to the world that they didn’t think much of their own product, so it’s kind of hard to keep selling it at that point. NetApp is retrenching to their core platform, which I think is a logical way of doing things. They were really fragmenting their platform before by having a separate VTL. So now they’re saying we’re going to sell our core platform and it has data protection built into it, and so that’s what we’re going to represent to the world. Makes sense to me.
It seems pretty obvious Dell might be inclined to expand its relationship with EMC to cover the backup products. There are also some rumors that Dell has already started selling Data Domain. Is that something that you could clarify?
Slootman: They’re already able to sell it through a brokerage relationship, and they did in the last several quarters. That is not terribly unique or surprising because just about any vendor is capable of taking business down through a brokerage arrangement. So I wouldn’t necessarily take that as a huge revelation – that’s fairly common; sometimes people just have procurement relationships when they need to drive a contract this way or that way. … The other relationships are still awaiting announcement. There’s a lot of anticipation around it, but I think that will get cleared up pretty soon.
It seems like deduplication has filtered its way into backup software in the last year. Do you see dedupe moving up the stack, or is there a home for it at different levels?
Slootman: It’s kind of interesting, because the two original products that pioneered data deduplication are Avamar and Data Domain. One is storage, the other is software. So from the very beginning this has been true. The other thing I’ll tell you in terms of relative share – at the time Avamar was acquired by EMC, three years ago now, Data Domain was sort of 3:1 relative to Avamar, and it still is to this day. That tells me that the relative proportion in terms of how customers deploy the technology hasn’t really changed. I think that an overwhelming amount of dedupe is still deployed as storage, not as software, and that’s a sore topic for CommVault and Symantec, which I can understand, but there are reasons for it. Compelling technology reasons, not just because we’re good salespeople here.
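Whether dedupe ships as a storage appliance (the Data Domain model) or as software (the Avamar model), the underlying mechanism is the same: fingerprint chunks of data and store each unique chunk only once. Here is a minimal sketch using fixed-size blocks and SHA-256 fingerprints; real products use more sophisticated variable-length segmentation, so treat this purely as an illustration of the idea.

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and keep only the unique ones,
    indexed by their SHA-256 fingerprint."""
    store = {}   # fingerprint -> block contents (unique blocks only)
    recipe = []  # ordered fingerprints needed to rebuild the data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)  # store the block only if unseen
        recipe.append(fp)
    return store, recipe

def restore(store, recipe):
    """Reassemble the original data from the unique-block store."""
    return b"".join(store[fp] for fp in recipe)
```

Given 15 blocks of which only two are distinct, `store` holds just those two blocks while `recipe` preserves the full sequence, which is where the space savings come from.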
Copan has finally found a buyer, at the fire-sale price of $2 million.
SGI said Tuesday night it picked up Copan’s assets in a foreclosure sale, including its product portfolio and “select” employees, and will maintain the MAID vendor’s Longmont, CO, headquarters.
Copan has been on the block for at least a year, and has been without a CEO since Mark Ward stepped down last July following layoffs in late 2008. There were frequent rumors of Copan’s demise in recent months but the vendor kept a skeleton staff until it could complete a sale.
SGI’s press release announcing the acquisition said it will now “be able to offer an expanded portfolio of purpose-built, innovative, massively scalable high performance storage products. Specifically, with the COPAN platform, SGI will be able to offer new storage technologies that target Persistent Data storage.”
The product page of SGI’s web site lists three Copan products: Data Archive 300A, Native MAID 300M, and 300T/TX Virtual Tape Library.
Copan’s investors took a beating on the sale. The vendor took in more than $100 million in funding, including an $18.5 million round a year ago. The MAID disk spin-down technology that Copan pioneered was hot a few years back, but larger vendors such as EMC and Hitachi Data Systems have since incorporated it (without generating many sales), and competitors such as Nexsan have added intelligence to spin-down that was lacking in Copan’s products.
SGI has its own history of financial problems, and last year was acquired by Rackable for $25 million. Rackable then assumed the SGI name. SGI’s storage platform includes InfiniteStorage NAS and RAID systems, and Rackable Storage Servers.
This past Sunday, a friend of mine and I took a trip to a staple of most New England childhoods (mine included): Boston’s Museum of Science. As we explored the exhibits, a surprising number of which remain unchanged since I was a kid, an exhibit called “The Computing Revolution” caught my eye for the first time.
This exhibit was kind of the inverse of the rest of the museum — while the main exhibit halls contained relics unchanged since my childhood, the computer retrospective introduced machines I remember using as a child to the museum. I’ve never had the experience of seeing things I’ve actually used displayed in a glass case as historic artifacts — but that’s computer time for you.
Luckily I also had my camera with me, and the Museum allows photography — so I can share some of this trip down memory lane with the people in my audience I know will appreciate it.
Photos after the jump.
Tandberg Data is getting into the data deduplication game for SMBs with a new application for NAS and removable disk systems, with the help of a little-known dedupe developer.
Tandberg today launched AccuGuard dedupe software, which is available with its new DPS2000 NAS and all of its RDX removable disk systems. Eventually, Tandberg will sell AccuGuard as a standalone application. Tandberg product manager McClain Buggle says AccuGuard comes from an OEM deal with Colorado-based Data Storage Group (dataStor). dataStor customized its dS Shield deduplication software to work with Tandberg’s storage, Buggle says.
AccuGuard is source-based global dedupe for physical and virtual Windows servers. Tandberg claims dedupe ratios of up to 20:1 in its press release, but Buggle says much larger reduction ratios are possible on some data sets.
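A reduction ratio like 20:1 is simply the total data protected divided by the unique data actually stored, and it climbs quickly when backups repeat mostly unchanged data. The back-of-envelope sketch below uses entirely hypothetical numbers (a 1 TB server, 20 full backups, 2% nightly change) just to show how the arithmetic works, not to model any vendor's product.

```python
def reduction_ratio(logical_bytes: int, stored_bytes: int) -> float:
    """Dedupe ratio = total data protected / unique data actually stored."""
    return logical_bytes / stored_bytes

# Hypothetical example: 20 full backups of a 1 TB server where only 2%
# of the data changes between backups.
full = 1_000      # GB per full backup
backups = 20
change_rate = 0.02

logical = full * backups                             # 20,000 GB ingested
stored = full + full * change_rate * (backups - 1)   # 1,380 GB unique
print(f"{reduction_ratio(logical, stored):.1f}:1")   # prints 14.5:1
```

The same arithmetic explains why ratios vary so much by data set: highly redundant data (repeated fulls, VM images) dedupes far better than data that changes heavily between backups.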
While the DPS2000 can serve as a general storage system (it also supports iSCSI), Buggle says he expects customers to use it for backup. “Our focus is on data protection,” he says. The DPS2000 is available in 4 TB desktop and rackmount and 8 TB rackmount configurations.
Pricing for the DPS2000 NAS with AccuGuard starts at $2,290 for 4 TB systems and around $3,800 for 8 TB.
Buggle says AccuGuard is available in a single server edition now with the ability to scale to thousands of servers planned for the next release. That’s consistent with the dS Shield product, which comes in single server and enterprise editions.
Buggle says deduplication was a necessary feature for Tandberg Data to expand from its legacy tape business to disk backup, less than a year after its parent company went into bankruptcy. “To be successful, we had to grow our product portfolio for end-to-end data protection for SMBs,” he said.
It’s been a little over a year since Brocade completed its $2.6 billion acquisition of Foundry Networks, and the Ethernet thing isn’t working so well so far.
Brocade reported $97.1 million in revenue from Ethernet switches last quarter, down 26% from the previous quarter. That caused Brocade’s overall revenue of $539.5 million to fall below its previous forecast, despite a 16% increase in revenue from its core Fibre Channel storage equipment business.
The Ethernet sales dip came in a quarter when competitors Cisco, Juniper, and Hewlett-Packard’s ProCurve platform increased revenue in network switches.
Brocade executives blamed the decline on lower sales to the federal government and weak sales through its new Ethernet OEM deals with IBM and Dell. They said they will put more salespeople on the Ethernet side to help drive demand, rather than leaving it to the OEMs.
“We don’t need to do a research project on what happened and why,” Brocade CEO Mike Klayko said during the vendor’s earnings call. “We know what to do and we’ve taken immediate actions to get our Ethernet business back on track. … Experience is a valuable teacher, and we’ve learned a valuable lesson here.”
Several Wall Street analysts downgraded Brocade’s stock today, citing both the results and a lack of confidence in the vendor’s plan to improve.
“We are frustrated with Brocade’s results, not just government Ethernet switching, but also the clear market share losses in enterprise and persistent declines in service provider Ethernet switching, as well as what we consider a lack of definitive color with the company’s strategic direction toward a recovery going forward,” Aaron Rakers of Stifel Nicolaus wrote today in a note to clients explaining his downgrade of Brocade.
For storage customers, the big issue is whether a concentration on Ethernet will cause a lapse of concentration on the Fibre Channel side. Brocade took share on the FC side from Cisco last quarter, but its storage growth was likely a bit below the industry at large, and its HBA revenue is negligible more than a year after it moved into that product area. Klayko said on the earnings call that a concentration on Ethernet sales will likely cause “greater [than] normal seasonal declines in our SAN business over the next few quarters.”
Still, Klayko says having an Ethernet switching portfolio has helped the storage business by letting customers lay a foundation for the converged networks expected to emerge over the next five years or so. He also says it’s mandatory for a storage networking vendor to have both Ethernet and Fibre Channel “or you are just going to get put into a box as a point solution.”
Overall, Brocade’s revenue increased 3.4% sequentially and 25% year-over-year, and it earned $51.1 million in profit for the quarter.
Klayko says he has no regrets about the decision to spend billions on Foundry.
“Would I make the same decision today?” Klayko said. “The answer is yes. The customers we talk to today do want to have an end-to-end solution. There is a tremendous amount of change going on in the data center right now as customers are trying to figure out how to handle this explosive growth not only in just data, but in networking traffic. And if you don’t have the entire product portfolio, I think you are disadvantaged. And so strategically, it is the right decision.”
Not all financial analysts are down on Brocade. According to a note issued today by Wedbush analyst Kaushik Roy: “It may not happen overnight but we believe that the management will be able to fix the sales issues and put Foundry/IP business back on track. While it is true that Brocade lost market share in the IP/Ethernet market in the past quarter, we believe that the market share gain story is yet to be played out.”
Iron Mountain’s $112 million acquisition of Mimosa Systems today is an admission by Iron Mountain that the concept of cloud archiving is not yet ripe. Iron Mountain bought Mimosa as an on-premise alternative to the cloud strategy it has been pursuing.
“We cannot wait for data to come to us in the cloud,” Iron Mountain Digital president Ramana Venkata told me this morning, predicting that it will be another few years yet before enterprise cloud data storage adoption picks up.
For cloud- and services-focused vendor Iron Mountain to cite slow enterprise cloud data storage adoption in its decision to buy Mimosa seems to signal an end to the wall-to-wall hype that dominated the industry discussion around the cloud last year.
Signals about this had begun to filter through in the form of two analyst reports released last month from Forrester Research and TheInfoPro that concluded enterprise data storage and IT pros are not as interested in the cloud as their vendors have been in recent months.
Taneja Group founder and consulting analyst Arun Taneja said Iron Mountain’s outlook could signal a shift to a more balanced view of cloud and on-premise data storage going forward, one that has a longer adoption timeframe than many predicted in 2009. “Whether this is a revelation on Iron Mountain’s part that the cloud isn’t happening fast enough, I don’t think so,” Taneja said. “It’s more like they don’t really care – it’s clear across the industry that customers are going to need both. No broad-based supplier like this can afford to ignore that.”
According to Enterprise Strategy Group analyst Brian Babineau, ESG’s research also indicates companies aren’t moving to the cloud as fast as some in the industry had originally anticipated. A soon-to-be-published ESG survey found that only 17% of respondents plan to investigate the cloud in 2010, Babineau said. “I don’t think we’re abandoning the cloud story, of course,” he said. “But certain applications have to evolve into the cloud, and that evolution isn’t happening as fast as some people want to see it happen.”
Iron Mountain Inc. today said it intends to acquire data archiving software vendor Mimosa Systems for $112 million in cash, subject to closing adjustments.
Iron Mountain already partners with U.K.-based Mimecast for email archiving software as a service (SaaS). That partnership is ongoing, according to Iron Mountain, which indicates an on-premise focus for Mimosa under the otherwise services-focused company. “Customers wanting to archive email can now choose either NearPoint for onsite archiving or Iron Mountain’s Total Email Management Suite, powered by Mimecast technology, for archiving email in the cloud,” Iron Mountain said in its release announcing the Mimosa acquisition.
Iron Mountain claims files can be easily transferred from Mimosa on-premise archives to its Stratify Legal Discovery Service “for larger litigation matters.” The announcement also emphasizes Mimosa’s integration with files and SharePoint as well as Microsoft Exchange email.
The Mimosa team, including CEO TM Ravi, will be retained by Iron Mountain. Ravi becomes Iron Mountain Digital’s new Chief Marketing Officer.
Stay tuned to SearchStorage.com/news for more on this story.
It’s no secret that the relationship between Hewlett-Packard and Cisco has deteriorated now that Cisco is selling its own server product, the Unified Computing System (UCS). Like IBM and Dell, HP has been lining up other Fibre Channel and Ethernet switch partners, including a $2.7 billion acquisition of 3Com.
So today’s news that HP is now selling QLogic 8 Gbps 5800V and 5802V stackable switches under the HP brand as the HP SN6000 certainly is no surprise. One Silicon Valley blog directly linked the announcement with Cisco’s decision to drop HP as a certified channel partner in April.
While the rift between Cisco and HP is real, the direct link between QLogic and Cisco here is probably exaggerated a bit. The SN6000 is a FC edge switch, and Cisco doesn’t even have an 8 Gbps FC edge switch. Cisco’s bread-and-butter in the FC space is the MDS9000 director, and QLogic doesn’t sell director switches. HP’s new QLogic switches are really an alternative to edge switches from Brocade, which HP continues to offer through its long-standing OEM deal.
But the OEM deal opens the door for QLogic – predominantly a FC HBA vendor – as a switch player. HP StorageWorks product manager Charles Vallhonrat says the QLogic switches have been qualified on all HP storage systems. The 20-port switches can be stacked without requiring dedicated ports for inter-switch links (ISLs), making it easier and less expensive to expand. Vallhonrat says he expects customers who want to start small and grow their SANs will prefer the QLogic switches.
“I think it will be driven by customers’ growth needs,” he said. “They don’t have to buy everything up front. We have Brocade customers who want to buy 40 ports or 80 ports to start, and we have a switch for that. If they want to grow as they go along, this [QLogic] is an ideal product for that.”
As for Cisco’s dumping HP as a partner, Vallhonrat said “from the storage level, we’re moving forward with Cisco as well as other partners.”
HP’s new FC switch follows its release of two low-end storage systems earlier this week – the next generation of its entry level MSA and LeftHand iSCSI platforms. But HP had bad news for storage when it reported earnings Wednesday. During an otherwise good quarter, storage sales declined 3% year-over-year and sequentially, including what CEO Mark Hurd called “very mediocre” sales of its midrange EVA systems.
Hurd says HP did well with its LeftHand and direct attached storage, but not its midrange and higher-end systems. But he insists storage is a priority.
“We have our top guys working on it,” Hurd said when asked what he’s doing to jump start the storage business. “We believe we now have a better lineup than we have had before and we believe we have a team that’s capable of helping us build the answer.”
That storage team is now led by Dave Donatelli, who jumped from EMC to HP last year. HP is expected to upgrade the EVA this year, and its XP enterprise storage platform is also due for a refresh. All of which means it’s worth keeping a close watch on HP in 2010.
NetApp appears to be the big winner in storage sales at the end of 2009 as spending picked up after a slow year. NetApp Wednesday reported $1.01 billion in revenue for last quarter. Its product revenue increased 17% over the last year, while larger rivals EMC and Hewlett-Packard had year-over-year declines in storage product revenues.
NetApp’s increases are more impressive when you consider its last quarter included January, which means one-third of the quarter came after companies flushed their 2009 budgets.
NetApp execs say low-end systems had the biggest increase, which likely reflects a surge in organizations turning to networked storage as a result of adding virtualized servers. NetApp also sold its largest mix of multiprotocol storage systems ever. In fact, for the first time it sold more systems with SAN and NAS protocols than NAS alone. Multiprotocol systems rose from 34% in the previous quarter to 42%, while NAS-only systems fell from 48% to 42% and SAN-only systems dropped from 19% to 15%.
NetApp CEO Tom Georgens says that’s likely a sign that organizations want more flexibility to run multiple applications on a system.
“As customers seek to build an infrastructure that can run multiple applications, typically those applications have the need for multiple access methods, both file and block,” he said. “As a result, as we see more of this virtualized shared infrastructure rollout, I see more and more customers that are interested in products that can run both at the same time. The other thing is, it gives them an option that if it’s NAS today, it could be SAN tomorrow or vice versa. … I think single-protocol products are going to become obsolete over time.”
Interestingly, there was no mention on NetApp’s earnings call of any impact from its scale-out NAS clustering capability. Last year at this time, NetApp frequently talked about how the integration of its GX clustered technology with its Data ONTAP operating system was coming soon, and it finally began shipping a converged OS with Data ONTAP 8 last August. Yet the lack of a fully integrated scale-out NAS product hasn’t hurt it. Maybe the market for scale-out NAS is overrated – either NetApp customers don’t care about it or are willing to wait.
Georgens maintains automated tiering is definitely overrated, even as it is being hailed as another “must-have” feature due to the rise of Flash solid state drives (SSDs) in enterprise storage. NetApp’s smaller competitor Compellent has been trumpeting its Data Progression software, while EMC is pushing its burgeoning Fully Automated Storage Tiering (FAST) technology. When asked about FAST, Georgens downplayed both the technology and the broader concept of tiering.
“FAST is a collection of things, not a specific capability,” he said. “FAST on Symmetrix is different from what FAST is on a Clariion and different from what FAST is on Celerra and different from what FAST is on Atmos. FAST is an umbrella name for a bunch of point technologies that are different on every platform. But whatever NetApp does, it’s going to be consistent across all SAN and NAS, high-end and low-end.
“Second of all, I think the entire concept of tiering is dying. The simple fact of the matter is, tiering is a way to manage migration of data between Fibre Channel-based systems and SATA-based systems. With the advent of Flash, basically these systems are going to go to large amounts of Flash, and that will be dynamic with SATA behind them, and the whole concept of having tiered storage is going to go away.”
Toshiba today rolled out the first product to come from the hard drive business it acquired from Fujitsu last year: a 600 GB 2.5-inch SAS hard disk drive. Last week, Seagate launched a 600 GB 2.5-inch drive of its own, a 10,000 RPM offering with the option of Fibre Channel or SAS drive interfaces.
At 600 GB capacity, small form factor (2.5-inch) SAS moves into closer competition with Fibre Channel disk drives in the external storage market. Analysts say the transition to small form factor SAS is largely complete in internal storage, but the conversion of external storage from FC to SAS disks has been a long process.
It was only about 18 months ago, pointed out IDC analyst John Rydning, that the SAS-2 spec was ratified, boosting SAS throughput speeds to 6 Gbps (Fibre Channel is now at 8 Gbps). That fairly recent spec also supported cable lengths between drives of up to 10 meters, more suitable for external disk arrays than the previous limit of six meters.
Meanwhile, until these announcements from Seagate and Toshiba, areal density on SAS drives, particularly in small form factors, also lagged behind the FC gear the enterprise is accustomed to, Rydning said. “These announcements bring 2.5-inch capacity parity with 3.5-inch Fibre Channel drives that are still primarily used [in external storage systems], creating a migration path for external storage players [to small form factor SAS drives],” he said.
This is a transition Rydning said he expects to continue over the next three years, provided external storage products using small form factor SAS drives work as expected out of the gate. Even then, however, the transition will also depend on the comfort level for storage pros charged with managing systems day to day. “People have built up a knowledge base and comfort level with Fibre Channel — there’s hesitancy to make sure SAS is as robust,” Rydning said.