Brocade cleared its final hurdle to its $2.6 billion acquisition of Ethernet networking firm Foundry Networks today when Foundry’s shareholders approved the deal.
In Brocade’s release announcing the approval, the Fibre Channel switch vendor said it expects to close the deal Thursday. Brocade also said it will probably not require more financing than the $1.1 billion loan it secured in October.
Brocade has been trying to wrap up the deal since July, when it revealed it would pay $3 billion for Foundry. With financing tough to come by in the current economic climate, the companies renegotiated the price downward Oct. 30. That prompted Foundry shareholders to push back their approval vote.
Under final terms, Brocade will pay $16.50 in cash for each Foundry share. Foundry shares closed at $16.62 today, up from their opening price of $15.96.
Who you gonna call when your computer’s toast?
Guess which IT discipline the experts expect to be the most resilient in the face of the recession? You guessed it: good ol’ backup. Perhaps the least glorified job in the data center, at times among the most poorly understood–and still chugging along, as the growth of corporate data waits for no man or stock market.
As we begin to look forward to time off for the holidays and the New Year, then, what better perspective to take on the storage industry than a careful look at backup: the technical advances that have made it vastly more complicated (but potentially vastly more efficient as well), and the people who are making it work in different environments? This week we’re running a feature piece by yours truly at our SearchDataBackup.com site that examines all of the above…and more.
Part 1 is up today, covering everybody’s favorite topic: disk vs. tape. There’s even a little NAS disk vs. VTL disk for those who like things a little spicier. Tomorrow will see Part 2, focused on software. Thursday’s Part 3 will examine outsourcing and the cloud. Friday’s finale will address the ways parts 1-3 still won’t get on top of the data growth rate at big companies any time soon. Yay!
Hope everyone who made the venerable disk vs. tape debate so lively on this blog will tune in, and offer their own views.
A while back I did a riveting post for this blog covering my collection of storage s.w.a.g. (stuff we all get) picked up in my travels through the storage industry. I thought I’d follow up with a notice about the latest showpiece in my collection:
It’s a T-shirt, from EMC/Mozy. Occupying a place of honor (and a place of not-fitting) in my work area at the office, it doubles as a conversation piece and warning to intruders.
While most organizations are likely planning to trim their IT budgets next year, a lot of them are also no doubt finding that cutting storage capacity will be difficult if not impossible.
Take Victaulic Company. The pipe-joining manufacturer purchased 30 TB of usable capacity with its new Sepaton S2100-ES2 VTL in September, and infrastructure manager Fred Railing says he’s already ordered 10 TB more because of an increase in data being backed up. And that’s with a 39:1 deduplication ratio from Sepaton’s DeltaStor software.
“[The VTL] was sized appropriately when we bought it, but the amount of data we have to back up has increased 30 percent already,” Railing says. “We keep six weeks’ worth of backup, and we’re close to capacity right now. We have to keep an eye on that to make sure it doesn’t fill up.”
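To get a rough sense of what a 39:1 ratio buys, here’s a back-of-the-envelope sketch. The capacities and ratio are the figures quoted in the article; the arithmetic is purely illustrative, and real-world dedupe ratios vary widely by data type and retention policy.

```python
# Back-of-the-envelope: logical backup data a deduplicating VTL can
# hold, using the figures quoted in the article.
usable_tb = 30      # usable capacity purchased in September
added_tb = 10       # additional capacity already on order
dedupe_ratio = 39   # 39:1 reduction reported from DeltaStor

physical_tb = usable_tb + added_tb
logical_tb = physical_tb * dedupe_ratio

print(f"{physical_tb} TB of physical disk holds roughly "
      f"{logical_tb} TB of logical backups at {dedupe_ratio}:1")
```

In other words, even a modest 10 TB physical expansion buys room for hundreds of terabytes of logical backup data at that ratio, which is why the dedupe option ends up mattering more than the raw disk purchase.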
Railing says the increase in data stored hasn’t come from an acquisition or any unusual situation, and he doesn’t see it slowing down much soon.
“We have quite a few projects going on now, and we’ve added a lot more servers in the last nine months,” he said. “It always will increase, maybe not as dramatically as it has been lately, but our engineering and email data keeps growing and giving us more to back up.”
Victaulic dedupes everything it backs up, Railing says, although reducing data wasn’t the original reason for going to the VTL. He set out to reduce backup windows by using disk, and his backups have gone from 24 hours to 12.
“At first we were looking at just getting a VTL to shrink our backup windows,” he said. “We thought it was worth getting the dedupe option because we would end up buying so much more disk. Now we’re deduping everything we back up.”
Bringing up the rear among major vendors pledging support for solid-state drives (SSD), Hitachi Data Systems today said it would ship SSDs in its enterprise Universal Storage Platform and midrange USP-VM disk arrays in the first quarter of next year.
HDS did not reveal any partners for the SSDs, which it will ship in 73 GB and 146 GB capacities next year. The press release said HDS also plans to offer SSDs being developed by Hitachi GST and Intel, but those products are not expected to ship until 2010. STEC is the other manufacturer of enterprise FC and SAS SSDs currently on the market.
HDS is ringing in the new year with a different tune than the one it sang in early 2008. Shortly after EMC announced support for SSDs in Symmetrix in January, HDS sought out reporters to throw water on the idea, saying there was no market for SSDs. (Though chief scientist Claus Mikkelsen also mentioned that HDS could support the drives and would “jump right in” if EMC had in fact “created a market” for SSDs).
Add this as a point in the ‘tape’ column if you’re scoring the ancient debate at home.
The Shoah Foundation, founded by Steven Spielberg after Schindler’s List to preserve Holocaust survivors’ narratives and now a part of the University of Southern California, has conducted interviews with thousands of survivors in 56 countries. The Foundation has 52,000 interviews that amount to 105,000 hours of footage.
CTO Sam Gustman says the footage was originally shot on analog video cameras, then converted to Digital Betacam and MPEGs for distribution online. It currently amounts to 135 TB. However, the Foundation is converting the footage to Motion JPEG 2000, which will create much bigger files–about 4 PB of data, Gustman estimated. Each video will be copied twice, bringing the total to 8 PB.
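The scale of that conversion is worth a quick back-of-the-envelope pass. The hours, terabytes and petabytes come straight from the article; the binary (1 PB = 1024 TB) conversion is my assumption, and the article’s figures are round estimates to begin with.

```python
# Rough sizing of the Shoah Foundation archive, per the article's figures.
hours_of_footage = 105_000   # 52,000 interviews
current_tb = 135             # today's MPEG/Digital Betacam-derived archive
converted_pb = 4             # estimated size after Motion JPEG 2000 conversion
copies = 2                   # each video copied twice

gb_per_hour = current_tb * 1024 / hours_of_footage   # footprint today
growth_factor = (converted_pb * 1024) / current_tb   # MJ2K vs. current format
total_pb = converted_pb * copies                     # with both copies

print(f"~{gb_per_hour:.1f} GB per hour of footage today")
print(f"Motion JPEG 2000 grows the archive ~{growth_factor:.0f}x")
print(f"Total to store with two copies: {total_pb} PB")
```

The takeaway: the Motion JPEG 2000 conversion inflates the archive by roughly a factor of 30, which goes a long way toward explaining why tape economics win here.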
Gustman says the Foundation received a $2 million donation of SL8500 tape libraries, Sun STK 6540 arrays and servers from Sun Microsystems in June. The Foundation runs an automated transcoding system on the servers, which uses the 140 TB of 6540 disk capacity as workspace. Sun’s SAM-FS software will automate the migration of data within the system, first to the 6540 and then to the SL8500 silo for long-term storage.
We’re hearing a lot in the industry these days about rich content applications such as this one moving to clustered disk systems, but Gustman said disk costs too much for the Foundation’s budget. He sees the potential for an eventual move to disk storage, but “disk is still too expensive–four to five times the total cost of ownership, mostly for power and cooling.”
Another advantage of the T10000 tape drives the Foundation plans to use is that they eliminate the need to migrate the entire collection to disk during copying, transcoding and technology refreshes. A T10000 drive can make copies or do conversions directly between drives in the robot, and the virtualization layer provided by SAM-FS means that can happen transparently.
However, as an organization charged with the historic preservation of records, Gustman agreed with others I’ve talked to about this subject in saying that there’s still no great way to preserve digital information in the long term. “The problem with digital preservation right now is that you have to put energy into it–you can’t just stick it in a box and hope it’s there 100 years from now,” he said. “Maybe there’ll be something eventually that you don’t have to put energy into, but it doesn’t exist yet.”
Last week, Tory Skyers wrote a post about the unforeseen complexities of disaster recovery after his PC’s motherboard fried. This week, I had an interesting discussion with somebody working with a much bigger enterprise infrastructure who also found that people, process, and sometimes luck–good and bad–can influence disaster recovery planning more than any technology.
Mark Zwartz, manager of information technologies for privately held real estate conglomerate JMB Companies, has been signed on with SunGard Availability Services for virtual server-based DR since August. Last week Zwartz gave me a deeper look behind the scenes at his disaster recovery planning process, and the ways it wasn’t so simple.
For one thing, it might not have happened at all without a contract renegotiation with SunGard. “Our original contract was a wacky deal with [one subsidiary] that expanded to the other entities, but the contracts were so goofy and webbed into each other that if we called with a problem or needing to fail over, the people at SunGard might not have any idea which company or machines we were talking about,” he said. “We saved money on lawyers and negotiations, but quite honestly, if we had to fail over, nobody at SunGard might’ve known what to turn on.”
The renegotiation of that contract happened to coincide with the beta program for the SunGard service. “That was the foot in the door to change things,” Zwartz said. Because it was a beta program, the companies got three free months of testing, which caught the attention of Zwartz’s management.
Meanwhile, JMB was in the midst of a hardware refresh, as well as rolling out virtualization. Still, “one of the hardest parts was selling virtualization–these are highly intelligent people who are handling millions if not billions of dollars, and it’s hard to explain the concept that they don’t actually ‘own’ anything [with virtual servers],” he said.
A broker at a hedge fund firm in the conglomerate was highly reliant on Outlook contacts and notes in each contact file to do her daily business. Meanwhile, the company still worked with traditional tape backup, which wouldn’t offer the granular protection needed to recover all of those contacts in the event of an outage.
I’m sure most of you out there in blogland know what happened next. “She was synching her BlackBerry herself, and blew out her contacts,” according to Zwartz. The only option for restoring Exchange backups was to restore the entire Exchange database from tape to a separate server, which the company had declined to buy. “They lost a significant part of a trade before that,” he said. “Nobody realized such a small thing would make her unproductive.”
Of course, without a fully staffed and replicated secondary environment, Zwartz acknowledged, it’s impossible to be disaster-proof. But the incident also had a silver lining when it came to convincing management to participate in the new SunGard program. “It would’ve cost $1,500 to get a new backup device,” he said. “It wound up taking six weeks to get one contact back and cost between $15,000 and $20,000. It was a selling point when it came to virtual servers with SunGard.”
NetApp today quietly pulled the plug on its SnapMirror for Open Systems (SMOS) heterogeneous data replication software, acquired from startup Topio for $160 million in 2006.
In a press release posted on the NetApp Web site – but not distributed – the vendor said it would discontinue SMOS and close the former Topio development facility in Haifa, Israel on Jan. 15. According to the release, NetApp “has not made final employment decisions” on the 51 employees in Haifa.
NetApp acquired Topio for its Data Protection Suite, at least partly in response to EMC’s purchase of Kashya six months before. But while EMC built Kashya’s replication and CDP into RecoverPoint – a staple of its replication platform – Topio’s heterogeneous replication offering never caught on, even after NetApp re-released it as ReplicatorX and then SMOS.
NetApp’s release blames the product’s failure on a lack of interest in replication between multiple vendors’ products. “Our decision to terminate SMOS product development was based on customer priorities and actual purchase histories,” the release said. “The market for replication products for disaster recovery purposes is dominated by homogeneous, rather than multivendor, solutions. Our ‘any-to-any’ solution with SMOS was never adopted by customers in the way we anticipated.”
NetApp added that it remains committed to its other SnapMirror versions for “any-to-NetApp” data protection. SMOS customers will get three years of maintenance and technical support.
Pillar CEO Mike Workman dropped by our office today and said that while Pillar retains its earlier reservations about whether SSDs are best utilized behind a network loop, the company will support them next year. The systems vendor will not have an exclusive partner for the drives, Workman said, though he mentioned Intel as one supplier.
Pillar’s Axiom arrays separate disk capacity from the storage controller with components called bricks (disk) and slammers (controllers). Workman says the Axiom will support SSDs in the bricks, and the arrays’ QoS features will be updated to support moving workloads to SSDs. This can happen either according to policy or automatically (with prior user approval) when the system is under intense workload.
This is definitely a change in tune, though Workman has always said Pillar’s systems were capable of supporting SSDs and probably would. He just thought network latency was too great, and he hasn’t retreated from that position. “It’s there,” he said. “There’s no way to get around that.”
But Workman says the biggest obstacle to SSD adoption now is price. “When we show people how they work, they say, ‘Fine,'” he said. “Then we tell them how much it costs, and that’s when they keel over.”
Despite offering an 80% utilization guarantee earlier this year, Workman said only about 15% of Pillar’s customers are at 80% utilization. But the company hasn’t been paying out much guarantee money, either. The details of the offer are vague to begin with: the terms are negotiated on a case-by-case basis, “to remediate any issue, as well as financial pain,” and would have to be negotiated as part of the original sale.
Analysts also said users might be wary of pushing utilization that high, given that it requires precise capacity planning. “I can’t make [customers] write data to the system,” Workman said. “The guarantee was not that they will but that they can.” He added that the average Pillar customer’s utilization is currently 62%.
With sales of EMC storage systems sold through Dell on the decline, EMC CEO Joe Tucci declared last October “there’s a lot more we could and should be doing together” to strengthen the EMC-Dell relationship.
Today, Dell and EMC say they’ve extended their agreement to co-brand Clariion midrange SAN systems to 2013 and added EMC’s Celerra NX4 to the deal. Whether that’s doing “a lot more” or not will depend on how well Dell does with the NX4, which gives Dell another NAS product to sell along with its Windows-based NAS systems. Dell will begin selling the NX4 early next year.
Dell and EMC are calling the new deal a five-year extension, although it’s really a two-year extension because their previous deal was to run through 2011.
“The EMC-Dell relationship has been extremely successful,” says Peter Thayer, director of marketing for EMC’s multi-protocol group. “If you look at the storage industry, you won’t find any relationship as successful as this.”
He’s probably right, considering how many Clariion systems Dell has sold in the seven years since the co-marketing alliance began. But the relationship hasn’t been the same since Dell acquired EqualLogic for $1.4 billion in January, giving it its own midrange SAN system.
EqualLogic’s products are iSCSI only, so Dell still relies on Clariion for Fibre Channel SANs. But it’s probably no coincidence that Dell has sold fewer Clariion systems since picking up EqualLogic. Overall, EMC’s revenues from Dell declined 26% year over year last quarter, and Dell has gone from 15.8% of EMC revenue to 10.4% in a year. Dell accounted for 35% of Clariion revenue a few years back, but less than 30% last quarter.
Wachovia Capital Markets financial analyst Aaron Rakers calls the extension good news because it could end speculation about the two vendors’ commitment to each other. “There have clearly been increased questions surrounding this partnership throughout 2008, or rather since Dell has increasingly focused its attention on driving its EqualLogic business,” he wrote in a note to clients.
Financial analyst Kaushik Roy of Pacific Growth agrees that investors have been concerned about the direction the EMC-Dell relationship is going and that the extension should help, but he isn’t sure how much.
“We will have to wait to see if the relationship bears any fruit,” Roy said of the extension. “While we are not expecting EMC’s revenues from Dell to ramp up materially, investors would be happy if revenues do not decline precipitously.”
EMC is also involved with Dell’s plans to add data deduplication next year. While it hasn’t disclosed any products yet, Dell last month said its dedupe platform will include Quantum software and be compatible with EMC disk libraries.