Quantum completed what its CEO called a “challenging” fiscal year at the end of March, and the fourth quarter was similar to the entire year for the backup vendor. Quantum continued to increase year-over-year disk backup and software sales around its deduplication products while its tape sales declined. But its disk backup sales decreased from the previous quarter, leaving Quantum with a long way to go to accomplish its goal of becoming a market leader.
“I think that the emphasis you heard on the [earnings] call is that it’s very much about getting through [fiscal] ’09 while making a lot of changes in the company,” CEO Rick Belluzzo said on the company’s earnings call Wednesday afternoon. “We think our business model was demonstrated last quarter that this can be a very solidly profitable business. There is a lot of cash generation potential. But we really need to focus on building revenue with our new model, with new products focused on tape of course, but as well aggressively on our disk systems and software business.”
Quantum lost $356 million for the year — including a $339 million one-time, non-cash charge for goodwill impairment — and its revenue for the fourth quarter and full year were down substantially from the previous year. It did show a $4 million non-GAAP profit for last quarter, discounting amortization of intangibles, stock-based compensation charges and restructuring costs. But while its $24 million revenue from disk and software last quarter was nearly double the previous year, it’s a far cry from the $79 million recorded by dedupe leader Data Domain.
Belluzzo said over the next year Quantum will have two major software releases and a new hardware platform for its DXi deduplication VTL family. He didn’t get specific, but emphasized the importance of replication and increasing the scale of the systems. He said Quantum also plans a “significant” new release of its StorNext software that moves data between storage tiers.
“[Our] vision includes our ability to deliver a single scalable disk-based architecture with deduplication and replication that can scale from protecting and managing a terabyte of data at a remote office to more than 200 terabytes at a data center, and is also compatible with solutions from multiple vendors such as EMC,” Belluzzo said.
EMC has been selling Quantum’s deduplication software in its disk libraries since around the middle of last year. Dell has also said it will OEM Quantum’s dedupe software, although it has yet to announce any products.
“We are working with Dell on our deduplication technology. When are they going to launch a Dell-branded product? I can’t say,” Quantum CFO Jon Gacek said when pressed about Dell on the earnings call.
You can scratch Dave Donatelli’s name off the list of possible successors to Joe Tucci at EMC.
There are no signs that Tucci’s departure as CEO is imminent, but people in the storage world occasionally play the “who’s next at EMC” guessing game, and Donatelli’s name is almost always on the short list. But news came last night that Donatelli has bolted from his position as president of EMC’s storage division to become executive vice president of Hewlett-Packard’s servers, storage and networking division.
Apparently, EMC isn’t letting Donatelli go without a fight. According to a Reuters story that moved late this afternoon, EMC has filed a lawsuit in Massachusetts seeking to enforce a non-compete clause in Donatelli’s contract and Donatelli has filed a suit in California trying to break the non-compete deal. Reuters quotes EMC spokesman Michael Gallant confirming the lawsuits.
Donatelli’s departure was abrupt. Donatelli joined Tucci on conference calls with press and analysts for the much-hyped launch of EMC’s Symmetrix V-Max system two weeks ago. (He almost certainly was deep into negotiations with HP at the time). He also has been one of EMC’s most visible execs. Another EMC veteran, Frank Hauck, has been named the interim head of EMC’s storage division.
As a 22-year veteran of the company who ran a $14.9 billion division, Donatelli will obviously be missed at EMC but the move will probably have a greater impact for HP. EMC will come up with an adequate replacement, either from its deep roster of seasoned vets or by going outside for an experienced executive. Don’t be surprised to see Hauck get the job permanently – he’s already been CIO, EVP of global marketing, and VP of products and offerings in his 18 years at EMC.
Donatelli likely found an offer from HP that he couldn’t refuse. He will report to Ann Livermore, HP’s EVP of the Technology Solutions Group and a top lieutenant to CEO Mark Hurd. Donatelli’s new division brought in $19.4 billion in revenue last year, and HP is expanding it to include ProCurve switching. Donatelli takes over as HP faces increased competition in the server business with Cisco moving in and Sun possibly getting a boost from the Oracle acquisition. It also comes as the storage and networking worlds begin to converge around Fibre Channel over Ethernet (FCoE) and converged server platforms such as Cisco’s Unified Computing System (UCS) and HP’s BladeSystem Matrix.
Donatelli obviously brings some secrets from EMC and perhaps from EMC’s close ally Cisco with him to his new post. The move should make for some interesting competition over the coming months.
IBM today confirmed one of the worst kept secrets in IT – it will begin rebranding Brocade’s Foundry Ethernet switches under an OEM arrangement next month.
The move is seen as IBM retaliation against Cisco, the Ethernet switch market leader that recently launched a move onto IBM’s turf with its Unified Computing System (UCS) server. IBM will continue to sell Cisco Ethernet and Fibre Channel switches, but adding Foundry gear intensifies the rivalry between Brocade and Cisco. Brocade acquired Foundry late last year for $2.6 billion to add Ethernet to its Fibre Channel product platform.
“This is not a resale relationship,” Brocade CTO Dave Stevens said. “This is a move by IBM to take our products, test our products, label our products, and sell them as IBM products.”
IBM will sell the Brocade NetIron MLX Series as IBM m-series Ethernet routers, and three families of Ethernet switches: the Brocade NetIron CES 2000 (IBM c-series), Brocade FastIron SX (IBM s-series), and Brocade FastIron GS (IBM g-series).
Jim Comfort, IBM VP of enterprise initiatives, said IBM will OEM more Foundry products down the road but not its entire portfolio. IBM will also add Brocade FCoE gear, although Comfort says IBM won’t favor any one vendor.
“Brocade has an FCoE strategy, which it was developing on its own before Foundry,” he said. “We’re working with Cisco, Brocade, Juniper and others to make sure those [FCoE and enhanced Ethernet] standards are in fact standards. As the standard stabilizes, we’ll bring forth whoever’s products are consistent with those standards.”
With its IBM deal sealed, Brocade is talking to Hewlett-Packard, the other major vendor that Cisco irked by getting into the server business. HP has its own line of ProCurve Ethernet switches, but Stevens says there are Foundry products that do not directly compete with ProCurve.
“If you take ProCurve and take our Ethernet portfolio, there are some areas of overlap but there are other areas with no overlap,” Stevens said.
Products that don’t overlap also include the FCoE switch and converged network adapters (CNAs) Brocade launched earlier this month.
Tandberg Data, the Norwegian company that sells tape libraries and removable disk drives, filed for bankruptcy in Norway and has been taken over by one of its creditors. Day-to-day operations continue for Tandberg’s U.S. subsidiary, Tandberg Data Corp., and its other subsidiaries as the parent company restructures. Tandberg went into bankruptcy because it failed to repay a loan to Cyrus Capital, which then acquired Tandberg’s assets and became its biggest shareholder after the bankruptcy.
According to a press release put out by Tandberg,
Tandberg Data has been unsuccessful in repaying a lapsed loan from Cyrus Capital. As a result Cyrus Capital had no other option other than to enforce their pledges of their loan. As Tandberg Data did not have sufficient capital to repay the loan, it had no alternative than to file for bankruptcy for the holding company, Tandberg Data ASA, and Tandberg Storage.
The Board of Directors of Tandberg Data made the decision to file for bankruptcy after consideration of all other alternatives, including a rights issue, which was unsuccessful. This process will allow Tandberg Data to deal with its cost and debt burden, to effectively restructure its operations and to continue its strategic direction of broadening its focus from being a tape company to a company that provides data protection solutions, including tape, disk, software and services.
The press release attributed at least some of the financial woes to “the global financial crisis,” which it said “impacted the company’s ability to successfully deal with its debt burden.”
Tandberg CEO Pat Clarke, who took over in early 2008, said in the press release that the company will live to fight another day. “The difficult steps we are taking now will enable us to build a company that can be successful in providing data protection solutions and support to our valued customers, suppliers, and business partners for a long time to come,” Clarke said.
Clarke took over with the goal of restructuring the company, whose storage products have mostly been based on tape (including IP from the 2006 acquisition of Exabyte). Last year Tandberg added more disk products like the ProStor RDX removable disk cartridge to its portfolio, and refreshed its message around archiving and tiered storage workflows rather than differentiating its products based on hardware features.
“Do-it-yourself” infrastructure is a competitive differentiator among providers of storage services, I’ve learned in conversations with providers over the last two weeks. While not every Web 2.0 service is storage-focused, these discussions make me wonder what the results will be for third-party storage vendors looking to supply prepackaged configurations to service-provider data centers.
Following Carbonite’s lawsuit against its former storage supplier, its competitors such as SpiderOak have pounced on the opportunity to tout their own internal infrastructures in an attempt to lure worried Carbonite customers.
SpiderOak CEO Ethan Oberman told me that SpiderOak assembles its own storage systems out of commodity servers and disk drives, purchasing individual components and assembling them under the company’s proprietary storage clustering software. “We don’t rely on a third party pre-assembled storage system” as Carbonite did with Promise, Oberman said.
Shortly after I posted about Oberman’s statements, Carbonite CEO David Friend invited me to see Carbonite’s infrastructure. I took him up on that last Friday, and it turns out Carbonite’s setup isn’t much different from what SpiderOak described.
Carbonite has between 10 PB and 12 PB of storage in two data centers in the Boston area. While the vendor is suing Promise for products it deployed several years ago, Carbonite has already completely swapped out the Promise storage in favor of a self-integrated system of Dell PowerVault MD1000 and MD3000 arrays packed with 15 one-terabyte SATA disks, configured for RAID 6. Four of these units are attached to each server node that runs the company’s internally written parallel file system.
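The layout described above implies some simple capacity arithmetic. A minimal sketch, using only the figures from the article (15 one-terabyte SATA disks per enclosure in RAID 6, four enclosures per node); the function names are illustrative, not Carbonite's:

```python
def raid6_usable_tb(drives: int, drive_tb: float) -> float:
    """RAID 6 gives up two drives' worth of capacity to parity."""
    return (drives - 2) * drive_tb

def node_usable_tb(enclosures: int, drives: int, drive_tb: float) -> float:
    """Usable capacity behind one file-system node."""
    return enclosures * raid6_usable_tb(drives, drive_tb)

per_enclosure = raid6_usable_tb(15, 1.0)   # 13.0 TB usable per enclosure
per_node = node_usable_tb(4, 15, 1.0)      # 52.0 TB usable per node
print(per_enclosure, per_node)
```

At roughly 52 TB usable per node, a 10 PB to 12 PB footprint works out to a couple of hundred such nodes, which squares with Friend's point that very few data centers operate at this scale.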
SpiderOak’s Oberman said his company assembles the disk drives and RAID controllers internally. Friend said he’s still content to let a third-party vendor assemble the RAID arrays despite the experience with Promise.
“The software is what we worry about,” he said. Promise’s arrays had firmware bugs, he said, something that might not have changed if Carbonite had done more of the hardware assembly. “Even if you buy a disk drive from somewhere, it has firmware in it – we’re not going to get into that kind of stuff,” Friend said.
Carbonite chose Dell to replace Promise based on a discounted price and its willingness to work with Carbonite to design a customized hardware system, according to Friend.
The more I talk to online storage service providers, the more there seems to be a disconnect between what they’re deploying and what storage vendors are marketing in an effort to reach Web 2.0 shops. While new “cloud” storage systems such as EMC’s Atmos and HP’s ExDS are built on industry-standard hardware components, the vendors also supply software to tie those components together.
Friend said he’s learned that a fully prepackaged software-hardware system from a third-party vendor won’t fit his business. “Every piece of software we’ve bought along the way has broken,” he said.
But this also may be because Carbonite is an outlier in terms of its workload. “There aren’t a lot of 10 petabyte data centers out there,” Friend said. He estimated some 95% of the processing time in Carbonite’s data center is spent on write, rather than read operations. “There [also] aren’t a lot of data centers out there that are ‘mostly write,'” he added.
Carbonite also designed its parallelized distributed file system to treat data in its data center and on users’ PCs as part of one big geographically distributed pool. Friend claims this is a differentiator, providing speedier restores than competitors such as Mozy, which must reassemble files before restoring data.
For those reasons, Friend said he doesn’t anticipate that online services focused primarily on storing customer data will be fertile ground for existing storage vendors. This hasn’t stopped third-party storage vendors from making regular sales calls to Carbonite’s data center, according to senior director of operations Kai Gray. Gray said he listens to most of the pitches, but he echoed Friend on the issues with prepackaged software, and said the cost comparison equation has yet to change.
“By the time [a storage vendor] puts stuff together and marks it up, it’s too expensive,” he said. Storage product competition in this data center is at the disk-drive level rather than systems. “We’re eagerly awaiting two terabyte disk drive shipments,” Gray said. Right now Carbonite has mostly Western Digital disk drives deployed, but “we are very drive agnostic.”
While Carbonite has yet to go for a third-party “cloud” storage system, Friend also points out it’s a different animal from many other Web 2.0 companies. “Most data centers are a cost center, not the business itself,” he said. “This is our factory – everything has to be customized because it’s a competitive advantage. It’s worth it to spend money designing our own file system, but if you’re, say, Fidelity, you don’t want to do that.”
Digital archiving the next frontier?
The data center I saw was very impressive – it’s in one of the newest facilities in the Boston area, complete with ultrasonic humidifiers and state-of-the-art security. But it’s not too far from Carbonite’s other data center, bringing to mind what ESG founder and blogger Steve Duplessie wrote after Carbonite announced the Promise lawsuit. The analyst cautioned that enterprise users should ask online backup services about things like SLAs and geographic redundancy to distinguish between consumer/prosumer and enterprise services before signing over their backups.
I asked Friend about this. Carbonite sees itself as a consumer/prosumer offering, he said, and does not offer SLAs or redundancy outside the Boston area. “Because we’re offering a backup service, there’s already geographic redundancy between the user’s PC and our data center,” he said. “No one [in our market] seems to want to pay double for a backup of a backup.”
However, “if we get into archiving, where we might have the only copy of a document, geographic redundancy would come into play,” he said. Is Carbonite planning that move? “We’re thinking about it,” he said. “It would be a logical product line extension.”
Fears of decreased storage budgets proved real in the first quarter of 2009, as EMC and IBM suffered large dropoffs from their 2008 revenue. Yet smaller and more focused vendors Data Domain and Riverbed reported their revenues increased more than 20% from the same period last year.
So why didn’t the budget freezes and uncertainties that stopped customers from buying EMC and IBM storage blow a hole in the business of Data Domain and Riverbed?
One reason may be that Data Domain (data deduplication for backup) and Riverbed (WAN optimization) are considered market leaders in the one market they’re in. But EMC and IBM are leaders in more markets and bigger markets than Data Domain and Riverbed, and their revenues declined in those segments.
More likely, the success of the smaller vendors has more to do with what they sell.
Perhaps Riverbed CEO Jerry Kennelly put it best on Riverbed’s earnings conference call: “You’re either selling capacity, or you’re selling efficiency. People don’t need capacity now, they’ve got it. But everybody needs efficiency.”
In other words, Riverbed and Data Domain help people get more value from the storage they already own. Storage admins and analysts have been saying that’s where money would be spent during these poor economic times. Now we know that’s the case. The bigger question is how long that will continue to be the case after the economy improves.
Is it just me, or has this been an insanely busy week in IT news? Here are some highlights in case you had some trouble keeping up with the fire hose.
(0:25) Analysts see Oracle-Sun deal as storage ‘game changer’
(2:13) VMware extends storage features with vSphere 4
(4:21) EMC revenue down, employees asked to take pay cut
(5:45) Ocarina partners take on NetApp in primary storage dedupe
(7:42) HP carves up blade storage with LeftHand software
Parallel clustered NAS vendor Panasas is the latest vendor to put solid state drives (SSDs) in its storage arrays.
Panasas will include SSDs in the highest end of the three ActiveStor Series systems it launched today. Series 7 and Series 8 – with no SSD support – are available today, while Series 9, with SSDs, is expected in the second half of the year.
Series 9 will have the highest IOPS and lowest latency of Panasas systems, and is aimed at bringing the vendor beyond its high performance computing (HPC) niche into financial services, media and entertainment, and life sciences.
Panasas Series 9 tiers consist of DRAM cache, SSD, and SATA drives. “We hate Fibre Channel,” Panasas marketing VP Larry Jones says.
Those three non-FC tiers are placed in “turbo” blades on the Panasas Series 9. Each blade has 40 GB of cache, 36 GB of SSDs and 2 TB of SATA. Each shelf holds 11 blades, and Jones says there is no limit on shelves in a system.
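The per-blade figures above can be rolled up per shelf. A quick sketch using only the numbers in the article (40 GB cache, 36 GB SSD, and 2 TB SATA per blade; 11 blades per shelf); the dictionary layout is illustrative, not a Panasas interface:

```python
# Per-blade resources as described for the Series 9 "turbo" blades
BLADE = {"cache_gb": 40, "ssd_gb": 36, "sata_tb": 2}
BLADES_PER_SHELF = 11

def shelf_totals(blade: dict, blades: int) -> dict:
    """Scale each per-blade resource up to a full shelf."""
    return {resource: amount * blades for resource, amount in blade.items()}

totals = shelf_totals(BLADE, BLADES_PER_SHELF)
print(totals)  # 440 GB cache, 396 GB SSD, 22 TB SATA per shelf
```

The 440 GB of cache per shelf matches the maximum cache figure quoted for the Series 8 below, which suggests the blade counts line up across the product line.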
Panasas uses Intel X-25E single-level cell (SLC) SSDs. Jones says no pricing is set yet but he expects the SSDs to have a 40% premium over SATA.
ActiveScale 3.4 software includes automatic tiered storage capability to migrate data to the right tier without requiring customers to set policies. “We put data in the right spot automatically,” Jones says.
The Series 9 can generate 120,000 IOPS, according to Panasas.
As with Series 9, the Series 8 model supports 10-Gigabit Ethernet and InfiniBand, and up to 440 GB of cache. The entry-level Series 7 is GigE only and maxes out at 50 GB of cache. The two higher models also include volume snapshots.
Although SSDs are all the rage in storage now, it’s unlikely that SSD support alone will make Panasas more popular outside of the HPC world. Panasas is also counting on the Parallel NFS protocol (pNFS) to make its systems more accessible to the average NAS shop. pNFS, which will likely replace Panasas’s proprietary DirectFlow protocol, isn’t expected in shipping products until 2010.
Seagate is claiming the world’s first 5900-RPM low-power disk drive today with the 1 TB, 1.5 TB and 2 TB Barracuda LP series. Seagate claims its internal tests show the series draws 3.0 watts of power when idle and 5.6 watts of power when operating.
Seagate positions the drive against Samsung’s Eco-Green and Western Digital (WD)’s Caviar Green hard drives. Seagate’s testing shows the Caviar and Barracuda drives drawing 3.0 watts when idle, while Samsung’s drive tested at 4.0 watts. In Seagate’s testing, the operating power draw for Caviar – 5.72 watts – was roughly equivalent to Barracuda, while Samsung’s drive tested at 5.5 watts during operation.
Another test by Seagate using the PCMark05 performance benchmark shows the 5900-RPM drive with a performance score of 8444 to WD’s 7802 and Samsung’s 6579. (That’s 95 MBps for the Seagate drive, for those of you keeping score at home). Seagate product marketing manager Anne Haggar said the quirky RPM – most desktop drives run at 5400 or 7200 RPM – helps the drive “strike the optimum balance between performance and power.”
Seagate describes WD’s drive as 5400 RPM, but it may be that WD has just been more coy about its spindle speed. When the Caviar product launched in January, Caviar Green product manager Mojgan Pessian said the drive’s exact RPM–somewhere between 5400 and 7200–was not being disclosed.
In any event, consumers and SOHOs will have multiple low-power suppliers in the market. The Barracuda LP is not recommended for enterprise or SMB use; for the enterprise, Seagate markets the 2 TB Constellation product line.
MSRP for the 2 TB Barracuda LP is $358; for 1.5 TB, $156; for 1 TB, $118.
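Those MSRPs hide an odd pricing curve. A quick price-per-terabyte comparison, using only the list prices above (the per-TB math is illustration, not Seagate's figures):

```python
# Capacity in TB -> MSRP in USD, as listed for the Barracuda LP line
msrp = {2.0: 358, 1.5: 156, 1.0: 118}

# Dollars per terabyte for each model
per_tb = {tb: round(price / tb, 2) for tb, price in msrp.items()}
print(per_tb)  # 2 TB: $179/TB, 1.5 TB: $104/TB, 1 TB: $118/TB
```

Notably, the 1.5 TB model is the cheapest per terabyte, while the flagship 2 TB drive carries a steep capacity premium.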
LSI Corp. has updated its Engenio 7900 storage system sold by IBM and others with new support for 8 Gbps FC, boosted capacity, and full-disk encryption (FDE) services that include key management and firmware features to take advantage of FDE drives from Seagate.
LSI, along with Seagate and IBM, has been talking about FDE for a couple of years now, but this is the first product LSI will ship that has the feature. The encryption is done by a specialized chip attached to the hard disk drive itself. Encryption can be used with a subset of drives within the array, which can also mix FC and SATA disks. Up to 448 disks can now be attached to the controller, double the previous capacity limit.
Before encrypted disk arrays are widely deployed, key management will probably need to be developed a little further. With this release, users don’t have to supply their own key management program; LSI supplies key management through its SANtricity GUI. Every encrypted disk in this release would have the same key. Work is still being done to bring key management standards together so users can manage keys centrally within the data center.
Meanwhile, LSI has yet to add support for 10 GbE or FCoE to this array, but host interface cards can be swapped out of the 7900 without changing out the whole box. LSI director of product marketing Steve Gardner says FCoE won’t be ready for prime time until next year. “I think technological immaturity coupled with the economic downturn will slow adoption,” he said. He echoed Symantec CEO Enrique Salem in wondering aloud what the economic downturn will do to financial institutions, which are normally early adopters of new technology.
“About a year ago, we started seeing interest in InfiniBand storage outside high-performance computing [HPC],” Gardner said. “Unfortunately, many of those interested were financial institutions with requirements for ‘enterprise HPC’,” he said.
Since we were discussing FCoE anyway, I was also reminded of my discussion with Brocade CTO Dave Stevens about the technical differences (or relative lack thereof) between the value propositions of InfiniBand and FCoE. The Engenio 7900 already supports InfiniBand natively, so I asked Gardner about it as well.
“If FCoE has a better chance to succeed, it’ll be because of the [vendors] behind it, Cisco especially,” he said. “I don’t think it’s a technology question.”
IBM sells the LSI Engenio 7900 as the DS5000. Sun and SGI — recent acquisition targets — also sell the system under their brands.