Tandberg Data, the Norwegian company that sells tape libraries and removable disk drives, filed for bankruptcy in Norway and has been taken over by one of its creditors. Day-to-day operations continue for Tandberg’s U.S. subsidiary, Tandberg Data Corp., and its other subsidiaries as the parent company restructures. Tandberg went into bankruptcy because it failed to repay a loan to Cyrus Capital, which then acquired Tandberg’s assets and became its biggest shareholder after the bankruptcy.
According to a press release put out by Tandberg,
Tandberg Data has been unsuccessful in repaying a lapsed loan from Cyrus Capital. As a result Cyrus Capital had no other option other than to enforce their pledges of their loan. As Tandberg Data did not have sufficient capital to repay the loan, it had no alternative than to file for bankruptcy for the holding company, Tandberg Data ASA, and Tandberg Storage.
The Board of Directors of Tandberg Data made the decision to file for bankruptcy after consideration of all other alternatives, including a rights issue, which was unsuccessful. This process will allow Tandberg Data to deal with its cost and debt burden, to effectively restructure its operations and to continue its strategic direction of broadening its focus from being a tape company to a company that provides data protection solutions, including tape, disk, software and services.
The press release attributed at least some of the financial woes to “the global financial crisis,” which it said “impacted the company’s ability to successfully deal with its debt burden.”
Tandberg CEO Pat Clarke, who took over in early 2008, said in the press release that the company will live to fight another day. “The difficult steps we are taking now will enable us to build a company that can be successful in providing data protection solutions and support to our valued customers, suppliers, and business partners for a long time to come,” Clarke said.
Clarke took over with the goal of restructuring the company, whose storage products have mostly been based on tape (including IP from the 2006 acquisition of Exabyte). Last year Tandberg added more disk products like the ProStor RDX removable disk cartridge to its portfolio, and refreshed its message around archiving and tiered storage workflows rather than differentiating its products based on hardware features.
“Do-it-yourself” infrastructure is a competitive differentiator among providers of storage services, I’ve learned in conversations with providers over the last two weeks. While not every Web 2.0 service is storage-focused, these discussions make me wonder what the results will be for third-party storage vendors looking to supply prepackaged configurations to service-provider data centers.
Following Carbonite’s lawsuit against its former storage supplier, its competitors such as SpiderOak have pounced on the opportunity to tout their own internal infrastructures in an attempt to lure worried Carbonite customers.
SpiderOak CEO Ethan Oberman told me that SpiderOak assembles its own storage systems out of commodity servers and disk drives, purchasing individual components and assembling them under the company’s proprietary storage clustering software. “We don’t rely on a third party pre-assembled storage system” as Carbonite did with Promise, Oberman said.
Shortly after I posted about Oberman’s statements, Carbonite CEO David Friend invited me to see Carbonite’s infrastructure. I took him up on that last Friday, and it turns out Carbonite’s setup isn’t much different from what SpiderOak described.
Carbonite has between 10 PB and 12 PB of storage in two data centers in the Boston area. While the vendor is suing Promise for products it deployed several years ago, Carbonite has already completely changed out the Promise storage in favor of a self-integrated system of Dell PowerVault MD1000 and MD3000 enclosures, each packed with 15 one-terabyte SATA disks configured for RAID 6. Four of these enclosures are attached to each server node that runs the company’s internally written parallel file system.
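Some back-of-the-envelope math on that layout (a rough sketch only: it assumes textbook RAID 6, where two drives' worth of capacity per group goes to parity, and ignores formatting and filesystem overhead):

```python
def raid6_usable_tb(drives: int, drive_tb: float) -> float:
    """Usable capacity of one RAID 6 group: two drives' worth is parity."""
    if drives < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (drives - 2) * drive_tb

# One enclosure as described: 15 x 1 TB SATA drives in RAID 6
per_enclosure = raid6_usable_tb(15, 1.0)   # 13.0 TB usable
# Four enclosures hang off each server node
per_node = 4 * per_enclosure               # 52.0 TB usable per node
print(per_enclosure, per_node)
```

At roughly 52 usable TB per node, a 10 PB to 12 PB footprint implies on the order of 200 such nodes, which squares with Friend's point that not many data centers operate at this scale.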
SpiderOak’s Oberman said his company assembles the disk drives and RAID controllers internally. Friend said he’s still content to let a third-party vendor assemble the RAID arrays despite the experience with Promise.
“The software is what we worry about,” he said. Promise’s arrays had firmware bugs, he said, something that might not have changed if Carbonite had done more of the hardware assembly. “Even if you buy a disk drive from somewhere, it has firmware in it – we’re not going to get into that kind of stuff,” Friend said.
Carbonite chose Dell to replace Promise based on a discounted price and its willingness to work with Carbonite to design a customized hardware system, according to Friend.
The more I talk to online storage service providers, the more there seems to be a disconnect between what they’re deploying and what storage vendors are marketing in an effort to reach Web 2.0 shops. While new “cloud” storage systems such as EMC’s Atmos and HP’s ExDS are built on industry-standard hardware components, the vendors also supply software to tie those components together.
Friend said he’s learned that a fully prepackaged software-hardware system from a third-party vendor won’t fit his business. “Every piece of software we’ve bought along the way has broken,” he said.
But this also may be because Carbonite is an outlier in terms of its workload. “There aren’t a lot of 10 petabyte data centers out there,” Friend said. He estimated some 95% of the processing time in Carbonite’s data center is spent on write, rather than read operations. “There [also] aren’t a lot of data centers out there that are ‘mostly write,'” he added.
Carbonite also designed its parallelized distributed file system to treat data in its data center and on users’ PCs as part of one big geographically distributed pool. Friend claims this is a differentiator, providing speedier restores than competitors such as Mozy, which must reassemble files before restoring data.
For those reasons, Friend said he doesn’t anticipate that online services focused primarily on storing customer data will be fertile ground for existing storage vendors. This hasn’t stopped third-party storage vendors from making regular sales calls to Carbonite’s data center, according to senior director of operations Kai Gray. Gray said he listens to most of the pitches, but he echoed Friend on the issues with prepackaged software, and said the cost comparison equation has yet to change.
“By the time [a storage vendor] puts stuff together and marks it up, it’s too expensive,” he said. Storage product competition in this data center is at the disk-drive level rather than the systems level. “We’re eagerly awaiting two terabyte disk drive shipments,” Gray said. Right now Carbonite has mostly Western Digital disk drives deployed, but “we are very drive agnostic.”
While Carbonite has yet to go for a third-party “cloud” storage system, Friend also points out it’s a different animal from many other Web 2.0 companies. “Most data centers are a cost center, not the business itself,” he said. “This is our factory – everything has to be customized because it’s a competitive advantage. It’s worth it to spend money designing our own file system, but if you’re, say, Fidelity, you don’t want to do that.”
Digital archiving the next frontier?
The data center I saw was very impressive – it’s in one of the newest facilities in the Boston area, complete with ultrasonic humidifiers and state-of-the-art security. But it’s not too far from Carbonite’s other data center, bringing to mind what ESG founder and blogger Steve Duplessie wrote after Carbonite announced the Promise lawsuit. The analyst cautioned that enterprise users should ask online backup services about things like SLAs and geographic redundancy to distinguish between consumer/prosumer and enterprise services before signing over their backups.
I asked Friend about this. Carbonite sees itself as a consumer/prosumer offering, he said, and does not offer SLAs or redundancy outside the Boston area. “Because we’re offering a backup service, there’s already geographic redundancy between the user’s PC and our data center,” he said. “No one [in our market] seems to want to pay double for a backup of a backup.”
However, “if we get into archiving, where we might have the only copy of a document, geographic redundancy would come into play,” he said. Is Carbonite planning that move? “We’re thinking about it,” he said. “It would be a logical product line extension.”
Fears of decreased storage budgets proved real in the first quarter of 2009, as EMC and IBM suffered large dropoffs from their 2008 revenue. Yet smaller and more focused vendors Data Domain and Riverbed reported their revenues increased more than 20% from the same period last year.
So why didn’t the budget freezes and uncertainties that stopped customers from buying EMC and IBM storage blow a hole in the business of Data Domain and Riverbed?
One reason may be that Data Domain (data deduplication for backup) and Riverbed (WAN optimization) are each considered the leader in the one market they play in. But EMC and IBM are leaders in more markets and bigger markets than Data Domain and Riverbed, and their revenues declined in those segments.
More likely, the success of the smaller vendors has more to do with what they sell.
Perhaps Riverbed CEO Jerry Kennelly put it best on Riverbed’s earnings conference call: “You’re either selling capacity, or you’re selling efficiency. People don’t need capacity now, they’ve got it. But everybody needs efficiency.”
In other words, Riverbed and Data Domain help people get more value from the storage they already own. Storage admins and analysts have been saying that’s where money would be spent during these poor economic times. Now we know that’s the case. The bigger question is how long that will continue to be the case after the economy improves.
Is it just me, or has this been an insanely busy week in IT news? Here are some highlights in case you had some trouble keeping up with the fire hose.
(0:25) Analysts see Oracle-Sun deal as storage ‘game changer’
(2:13) VMware extends storage features with vSphere 4
(4:21) EMC revenue down, employees asked to take pay cut
(5:45) Ocarina partners take on NetApp in primary storage dedupe
(7:42) HP carves up blade storage with LeftHand software
Parallel clustered NAS vendor Panasas is the latest vendor to put solid state drives (SSDs) in its storage arrays.
Panasas will include SSDs in the highest end of the three ActiveStor Series systems it launched today. Series 7 and Series 8 – with no SSD support – are available today, while Series 9, with SSDs, is expected in the second half of the year.
Series 9 will have the highest IOPS and lowest latency of Panasas systems, and is aimed at bringing the vendor beyond its high performance computing (HPC) niche into financial services, media and entertainment, and life sciences.
Panasas Series 9 tiers consist of DRAM cache, SSD, and SATA drives. “We hate Fibre Channel,” Panasas marketing VP Larry Jones says.
Those three non-FC tiers are placed in “turbo” blades on the Panasas Series 9. Each blade has 40 GB of cache, 36 GB of SSDs and 2 TB of SATA. Each shelf holds 11 blades, and Jones says there is no limit on shelves in a system.
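Multiplying the per-blade figures out to a full shelf (simple arithmetic on the numbers above, not vendor-published shelf specs):

```python
# Per-blade figures as reported; per-shelf totals are straight multiplication
# and are illustrative only.
BLADES_PER_SHELF = 11
blade = {"cache_gb": 40, "ssd_gb": 36, "sata_tb": 2}

shelf = {tier: amount * BLADES_PER_SHELF for tier, amount in blade.items()}
print(shelf)  # {'cache_gb': 440, 'ssd_gb': 396, 'sata_tb': 22}
```

Notably, the resulting 440 GB of cache per shelf matches the cache maximum Panasas quotes for the Series 8.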
Panasas uses Intel X25-E single-level cell (SLC) SSDs. Jones says no pricing is set yet but he expects the SSDs to carry a 40% premium over SATA.
ActiveScale 3.4 software includes automatic tiered storage capability to migrate data to the right tier without requiring customers to set policies. “We put data in the right spot automatically,” Jones says.
The Series 9 can generate 120,000 IOPS, according to Panasas.
As with Series 9, the Series 8 model supports 10-Gigabit Ethernet and InfiniBand, and up to 440 GB of cache. The entry-level Series 7 is GigE only and maxes out at 50 GB of cache. The two higher models also include volume snapshots.
Although SSDs are all the rage in storage now, it’s unlikely that SSD support alone will make Panasas more popular outside of the HPC world. Panasas is also counting on the Parallel NFS protocol (pNFS) to make its systems more accessible to the average NAS shop. pNFS, which will likely replace Panasas’s proprietary DirectFlow protocol, isn’t expected in shipping products until 2010.
Seagate is claiming the world’s first 5900-RPM low-power disk drive today with the 1 TB, 1.5 TB and 2 TB Barracuda LP series. Seagate claims its internal tests show the series draws 3.0 watts of power when idle and 5.6 watts of power when operating.
Seagate positions the drive against Samsung’s Eco-Green and Western Digital (WD)’s Caviar Green hard drives. Seagate’s testing shows the Caviar and Barracuda drives drawing 3.0 watts when idle, while Samsung’s drive tested at 4.0 watts. In Seagate’s testing, the operating power draw for Caviar – 5.72 watts – was roughly equivalent to Barracuda, while Samsung’s drive tested at 5.5 watts during operation.
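To put the idle-power gap in perspective, here's a rough sketch of annual idle energy use per drive. The 24/7-idle assumption is mine and is unrealistic for an active drive; duty cycles and electricity prices vary widely.

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_kwh(watts: float) -> float:
    """Energy drawn over a year at a constant power level, in kWh."""
    return watts * HOURS_PER_YEAR / 1000

# Idle figures from Seagate's tests, as reported above
for name, watts in [("Barracuda LP", 3.0),
                    ("WD Caviar Green", 3.0),
                    ("Samsung Eco-Green", 4.0)]:
    print(f"{name}: {annual_kwh(watts):.1f} kWh/yr at idle")
```

The one-watt idle gap works out to under 9 kWh per drive per year, which matters far more at data-center drive counts than on a single desktop.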
Another test by Seagate using the PCMark05 performance benchmark shows the 5900-RPM drive with a performance score of 8444 to WD’s 7802 and Samsung’s 6579. (That’s 95 MBps for the Seagate drive, for those of you keeping score at home). Seagate product marketing manager Anne Haggar said the quirky RPM – most desktop drives run at 5400 or 7200 RPM – helps the drive “strike the optimum balance between performance and power.”
Seagate describes WD’s drive as 5400 RPM, but it may be that WD has just been more coy about its spindle speed. When the Caviar product launched in January, Caviar Green product manager Mojgan Pessian said the drive’s exact RPM–somewhere between 5400 and 7200–was not being disclosed.
In any event, consumers and SOHOs will have multiple low-power suppliers in the market. The Barracuda LP is not recommended for enterprise or SMB use; for the enterprise, Seagate markets the 2 TB Constellation product line.
MSRP for the 2 TB Barracuda LP is $358; for 1.5 TB, $156; for 1 TB, $118.
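Working those list prices out per decimal gigabyte (a rough sketch: 1 TB is taken as 1,000 GB, and street prices will differ from MSRP):

```python
# Capacity in decimal GB -> MSRP in USD, as quoted above
prices = {2000: 358, 1500: 156, 1000: 118}

for gb, usd in sorted(prices.items()):
    print(f"{gb / 1000:.1f} TB: ${usd / gb:.3f}/GB")
```

At these list prices the 1.5 TB model is the cheapest per gigabyte, with the 2 TB flagship carrying a substantial capacity premium.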
LSI Corp. has updated its Engenio 7900 storage system, sold by IBM and others, with support for 8 Gbps FC, boosted capacity, and full-disk encryption (FDE) services that include key management and firmware features to take advantage of FDE drives from Seagate.
LSI, along with Seagate and IBM, has been talking about FDE for a couple of years now, but this is the first product LSI will ship that has the feature. The encryption is done in a specialized chip attached to the hard disk drive itself. Encryption can be used with a subset of drives within the array, which can also mix FC and SATA disks. Up to 448 disks can now be attached to the controller, double the previous capacity limit.
Before encrypted disk arrays are widely deployed, key management will probably need to be developed a little further. With this release, users don’t have to supply their own key management program; LSI is supplying key management through its SANtricity GUI. However, every encrypted disk in this release would have the same key. Work is still being done to bring key management standards together so users can manage keys centrally within the data center.
Meanwhile, LSI has yet to add support for 10 GbE or FCoE to this array, but host interface cards can be swapped out of the 7900 without changing out the whole box. LSI director of product marketing Steve Gardner says FCoE won’t be ready for prime time until next year. “I think technological immaturity coupled with the economic downturn will slow adoption,” he said. He echoed Symantec CEO Enrique Salem in wondering aloud what the economic downturn will do to financial institutions which are normally early adopters for new technology.
“About a year ago, we started seeing interest in InfiniBand storage outside high-performance computing [HPC],” Gardner said. “Unfortunately, many of those interested were financial institutions with requirements for ‘enterprise HPC’,” he said.
As long as we were discussing FCoE, I was also reminded of my discussion with Brocade CTO David Stevens about the technical differences (or relative lack thereof) between the value proposition of InfiniBand vs. FCoE. Engenio’s 7900 already supports InfiniBand natively, so I asked Gardner as well.
“If FCoE has a better chance to succeed, it’ll be because of the [vendors] behind it, Cisco especially,” he said. “I don’t think it’s a technology question.”
IBM sells the LSI Engenio 7900 as the DS5000. Sun and SGI — recent acquisition targets — also sell the system under their brands.
Storage insiders predicted the Oracle-Sun deal would kick off a series of acquisitions, and today chipmaker Broadcom is making a move on HBA vendor Emulex. Broadcom’s unsolicited offer of approximately $9.25 a share, or $764 million, is about a 40% premium over Emulex’s closing price of $6.61 yesterday.
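A quick check of the numbers in the offer (the implied share count here is my own back-of-the-envelope figure derived from the quoted deal value, not a number from Broadcom; the actual diluted share count may differ):

```python
offer_per_share, prior_close = 9.25, 6.61
premium = offer_per_share / prior_close - 1
print(f"premium over close: {premium:.1%}")   # ~39.9%, i.e. the "about 40%"

deal_value_millions = 764
implied_shares_millions = deal_value_millions / offer_per_share
print(f"implied shares outstanding: {implied_shares_millions:.1f}M")
```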
Broadcom has actually been after Emulex for a while. When Emulex adopted a poison pill in January to defend it from unwanted suitors, Broadcom was the unwanted suitor it had in mind. A letter that Broadcom CEO Scott McGregor sent to Emulex’s chairman Paul Folino and its directors today revisited that acquisition attempt:
“We were disappointed when, in early January, you responded that the company was not for sale and abruptly cut off the possibility of further discussions. Even more troubling was the fact that merely one week after that communication, you took actions clearly designed to thwart the ability of your shareholders to receive a premium for their shares. … It is difficult for us to understand why Emulex’s Board of Directors has not been open to consideration of a combination of our respective companies. We would much prefer to have engaged in mutual and constructive discussions with you. However this opportunity is in our view so compelling we now feel we must share our proposal publicly with your shareholders.”
McGregor went on in the letter to lay out Broadcom’s vision for single-chip converged network devices delivering Fibre Channel and Fibre Channel over Ethernet. He also laid out a case why it would benefit Emulex to accept the offer:
“Customers will demand from their suppliers advanced chip technology and supply chain scale and reliability which is not an area of strength for Emulex. Broadcom brings tremendous value in advanced chip technology and supply chain scale and reliability to Emulex’s products—and customers.”
McGregor’s letter also stated that Broadcom is taking legal action to declare Emulex’s poison pill invalid.
Broadcom has tried to make inroads in storage before. It has sold chips for FC switches, and a few years ago it developed a converged network interface (C-NIC) that combined a TCP/IP offload engine (TOE), iSCSI HBA and remote direct memory access (RDMA) technology on one chip – a forerunner of the current FCoE CNAs, minus the Fibre Channel. However, Broadcom hasn’t been successful in storage, and today’s earnings report – it lost $92 million last quarter – shows it hasn’t been successful lately, period.
The advent of FCoE could prompt more Ethernet companies to look for FC technology, the reverse of Brocade’s acquisition of Ethernet provider Foundry late last year.
“Broadcom doesn’t want to buy Emulex for its embedded switch business, it wants its Fibre Channel stack,” Wedbush Morgan research analyst Kaushik Roy says. “To compete, you’ll need a Fibre Channel stack. And if Juniper has half a brain they will buy QLogic, although Juniper’s never known for doing a lot of acquisitions.”
Roy says Emulex may use its poison pill to negotiate an even better deal, but he said the time could be right to sell. For years, Emulex and QLogic have had a duopoly for HBAs but there will be greater competition as FCoE takes hold.
“There are a lot of players getting into FCoE, Emulex’s revenues and margins will be under pressure,” Roy said.
In a note to clients today, Stifel Nicolaus Equity Research analyst Aaron Rakers indicated that Emulex has fallen behind QLogic in developing FCoE technology. “We believe [Emulex] would face some strategic and fundamental challenges going forward with regard to its positioning in blade servers, our belief that QLogic is better positioned in FCoE, and continued secular headwinds in its Embedded Storage Product (ESP) division,” Rakers wrote.
All the news that’s fit to read aloud for this week –
Samsung is claiming it’s the first to ship a consumer solid state drive (SSD) with full-disk encryption (FDE) through a new partnership with security vendor Wave Systems Corp. The 256GB, 128GB, and 64GB SSDs will be available in both 1.8-inch and 2.5-inch form factors. Dell says it will ship the drives in its Latitude line of desktops and notebooks.
Samsung’s drives generate and store encryption keys and access credentials in the drive hardware, and they are never held in the operating system or by application software. When ordered in a new computer, the drives will come bundled with Wave’s Embassy Trusted Drive Manager software for life cycle management of the drive. The software includes pre-boot authentication, enrolls drive administrators and users, and enables backup of drive credentials. Available separately, Wave’s Embassy Remote Administration Server allows an IT administrator to remotely turn on SSDs and adds event logs for compliance.
It probably won’t be long before full-disk encryption also hits the enterprise SSD space. It’s already working its way in on the spinning-disk side, where it’s being pushed by drive maker Seagate, controller maker LSI and systems vendor IBM. Multiple converging standards for key management are also being developed for the enterprise.