When Foundry Networks delayed its shareholders meeting last week to vote on the proposed Brocade deal, there was speculation that A) Brocade wanted to renegotiate the price, or B) it had problems raising $400 million in high-yield bonds to help finance the deal.
Turns out the likely answer was C) both. It’s now clear Brocade did want to renegotiate, and the two companies said Wednesday night they agreed to reduce the price. The new price – $16.50 per share – comes to a total of $2.6 billion. That just happens to be $400 million less than the original purchase price of $19.25 per share, or $3 billion. Now Brocade doesn’t have to come up with the extra financing.
Foundry shareholders are now scheduled to vote on the deal Nov. 7.
About three weeks ago, Sun general counsel Mike Dillon posted about the results of some pre-trial maneuvering between his company and NetApp over the patent-infringement suit brought by NetApp against Sun over ZFS. Dillon was jubilant over the Patent and Trademark Office rejecting three of NetApp’s claims, and the trial court agreeing to pull one of those claims off the table for consideration in the suit. Not a decisive victory for Sun, but not necessarily good news for NetApp, which is seeking damages and an injunction against Sun’s distribution of ZFS and other products that it claims infringe on its patents.
NetApp co-founder Dave Hitz put up a brief response to Dillon’s comments this past Sunday on his blog, in which he attempts to introduce some nuance into the PTO aspect of this issue. “The Patent Office has issued a preliminary rejection of claims in 3 of our patents (out of 16),” Hitz writes. “Such a ruling is not unusual for patents being tried for the first time, and there are two ways to resolve the issue.” One of them is to wait for the PTO to make a ruling on each case, which Hitz calls “the slow way.”
The fast way would be to just proceed with the trial, which Hitz pushes for in his post. “Dillon mentioned issues with three patents, but NetApp currently has 16 WAFL patents that we believe apply to ZFS, with more on the way,” he wrote. “We believe that we have a strong case, and we want to get it resolved.”
He says Sun’s push for the slow way of resolving the dispute indicates the weakness of its position in the case: “To me, the best indicator of strength is to look at which party wants to get on with the case (the one with a strong position), and which party consistently drags its feet and tries to delay (the one with the weak position).”
Hitz’s post doesn’t offer much in the way of new raw information on the proceedings.
According to a new IDC Enterprise Disk Storage Consumption Model report released this week, transaction-intensive applications are giving way as the main source of enterprise data to a broader range of applications, along with a tendency to create more copies of data and records for business analytics, including data mining and e-discovery.
The report estimates that unstructured data in traditional data centers will eclipse the growth of transaction-based data that until recently has been the bulk of enterprise data processing. While transactional data is still projected to grow at a compound annual growth rate of 21.8%, it’s far outpaced by a 61.7% CAGR predicted for unstructured data in traditional data centers.
“In the very near future, the management and organization of file-based information will become the primary task for many storage administrators in corporate datacenters,” the report reads. “And this shift will have a significant impact on how companies assess storage solutions in terms of systems’ performance, operational efficiency, and file services intelligence.”
The IDC report also builds on research first highlighted in an IDC blog last week concerning the cloud. According to the report, the sharpest growth in storage capacity will come from new organizations described as “content depots.” IDC estimates storage consumption from these organizations will grow at a compound annual growth rate of 91.8% through 2012. Examples of content depots include the usual cloud suspects: Google, Amazon, Flickr, and YouTube.
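Taken together, these growth rates compound quickly. As a rough illustration (the 100 PB baseline below is hypothetical, not a figure from the report), here is how the three CAGRs play out over a four-year horizon:

```python
def project_capacity(base_pb, cagr, years):
    """Project storage capacity after `years` of compound annual growth.

    base_pb: starting capacity in petabytes; cagr: growth rate as a decimal.
    """
    return base_pb * (1 + cagr) ** years

# Hypothetical 100 PB baseline for each segment:
transactional = project_capacity(100, 0.218, 4)  # ~220 PB after four years
unstructured = project_capacity(100, 0.617, 4)   # ~684 PB after four years
content_depot = project_capacity(100, 0.918, 4)  # ~1,353 PB after four years
```

From the same starting point, the content-depot segment ends up more than six times larger than the transactional one, which is why IDC flags it as the sharpest source of new capacity demand.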
These content depots have different IT requirements and infrastructures than traditional enterprise data centers. We’re seeing examples of these new infrastructures pop up in the market, including systems with logical abstraction between the hardware and software elements; the use of commodity servers as a hardware basis for storage platforms; and the use of clustered file systems.
Some in the industry have compared this “serverization” of storage to the transition between proprietary workstations and PCs in the 1980s. But IDC analyst Rick Villars says this isn’t a zero-sum game. “This isn’t going to replace traditional IT,” he said. “Ninety-five percent of what people are developing and building in the storage industry today is irrelevant to what the cloud is building. You could take that as a negative, but it also translates into opportunity. These are new market spaces and new storage consumers that weren’t around five years ago.”
There’s been a lot of discussion lately about the role the cloud will play as the global economy softens. There is a difference of opinion between those who see a capital-strapped storage market as an even more conservative and risk-averse one and those who argue the opportunity to avoid capital expenditures will nudge traditional IT applications into the cloud. Still others point out the hurdles to cloud computing that remain, including network scalability and bandwidth constraints.
For example, when it comes to storage applications such as archiving, analyst reports from Forrester Research this year cited latency in accessing off-site archived messages and searching them for e-discovery as major barriers to adoption for archiving software-as-a-service (SaaS) offerings.
Cloud computing “definitely exposes weaknesses in networking,” Villars said, but “the closest point to the end user is the cloud, if you want to distribute content to end users spread around the world.”
Other challenges include the growing pains major cloud infrastructures such as Amazon’s S3 have experienced over the last 18 months, and the potential risk of putting more enterprise data eggs in one service provider’s cloud data center basket. Villars points out, “I doubt Amazon has had more problems than a typical large enterprise, and they offer backup with geographic distribution for free.”
However, geographic distribution brings with it its own challenges, such as varying regulations among different countries. “There are regulatory problems with Europe,” Villars said. “Laws there say that if you have data on a European customer, you can’t move it out of Europe. If you want your cloud provider to spread copies between the U.S., Asia and Europe for global redundancy, that becomes an issue.”
Data protection appliance vendor STORServer has a new management team, replacing president and CEO John Pearring with a president and a separate CEO, both promoted from inside the company.
Chief operating officer Laura Buckley takes over as president and Bob Antoniazzi moves up from VP of business development to CEO. Buckley will continue as COO for the Tivoli Storage Manager (TSM)-based backup appliance maker while working with the company’s directors. Antoniazzi will be responsible for exploring new market opportunities and business directions for the company, according to a STORServer press release that said Pearring left for personal reasons.
Antoniazzi told me today that the most likely way for the company to expand its products is to flesh out a line of virtual server appliances. STORServer added a virtual instance of its TSM-based backup software inside its hardware appliance a few months ago, but has yet to make it available without hardware. “Right now, we’re being cautious,” he said. “We’re shipping a physical appliance with a virtual appliance inside, tweaked and optimized according to customer requirements for performance and reliability–we can’t just say, ‘Here you go, put this on your own ESX server and good luck.’ I don’t think it’s responsible to do that now.” But that’s the goal eventually.
STORServer also recently announced support for email archiving, but Antoniazzi said “I don’t see us going and doing more new technologies for new technologies’ sake. Whatever we ship, we support, and we have to make sure our support organization is prepared on anything we add.”
Customers have also inquired about remote replication, data deduplication and support for cloud computing. “These are ideas that we will be investigating, but they’re not going out the door anytime soon,” he said.
Enterprise Strategy Group analyst Lauren Whitehouse said STORServer’s got the right idea by adding virtual servers and email archiving features into the mix, but “I’m not sure they have the opportunity to adopt an all-virtual-appliance strategy. I don’t know what limitations might exist for distributing the OS their applications rely on. But it would be a good step for the lower end of the market. They may also have an opportunity to package up a solution for ROBOs, maybe using IBM’s FastBack.”
She added, “The other thing that is missing for them is just general awareness. They are a great self-sustaining company with a decent channel, but have relatively low visibility in a crowded market. Unfortunately, now it’s a tough economy to make big investments in that way.”
NetApp was supposed to hold its first-ever user conference, called NetApp Accelerate, in February, but yesterday put out a press release saying the conference has been cancelled.
“We had more customer interest in NetApp Accelerate than we anticipated,” said Elisa Steele, senior vice president, Corporate Marketing, in a statement. “But those same customers told us their travel budgets were being cut and it was difficult to commit to attending in today’s climate of economic uncertainty. For those reasons, we decided to cancel this year’s program.”
Wachovia financial analyst Aaron Rakers wonders if NetApp cancelled the conference to trim its own budget.
“While it is clear that economic conditions are resulting in more stringent expense controls at enterprises, we do find this as interesting; we believe possibly a result of NetApp’s own focus on operating expense control,” Rakers wrote in a note to clients.
NetApp said it will release technical content that had already been prepared for the show between February and May next year.
Today, NetApp said its long-awaited data deduplication feature for its virtual tape library product has finally arrived. The feature, like NetApp’s primary storage dedupe, will be free for new and existing customers. NetApp has taken a contrarian approach to dedupe. It was the first major storage vendor to offer dedupe for primary data — building the capability into its operating system — but the last of the VTL vendors to add dedupe for backup.
IBM and partner Effigent have released a co-developed product for backing up Mac desktops and laptops. Called CDP4Mac, it is an Apple OS X version of IBM’s CDP for Files desktop/laptop data backup software. Like the earlier Windows version of CDP for Files, CDP4Mac tracks changes to workstation files and can upload them to a USB device, centralized server or a designated URL when connected to a network. Effigent added the Mac interface and the ability to recognize the Mac file system structure.
Apple has its own near-CDP backup product for OS X, called Time Machine, but an IBM spokesperson said Effigent and IBM had Apple’s support, including testing assistance, because CDP4Mac can also be used to back up Windows data when a Mac runs both operating systems, without the need for separate clients. CDP4Mac can also do single instancing across files from both OSes.
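Single instancing of this kind is typically done by keying stored file contents on a content hash, so identical files from Mac and Windows clients occupy one physical copy. A minimal sketch of the idea (an illustration of the general technique, not IBM’s or Effigent’s actual implementation; the class and store layout here are hypothetical):

```python
import hashlib

class SingleInstanceStore:
    """Keep one physical copy of identical file contents across clients."""

    def __init__(self):
        self.blobs = {}    # sha256 hex digest -> file contents
        self.catalog = {}  # client file path -> digest

    def backup(self, path, data):
        digest = hashlib.sha256(data).hexdigest()
        # Identical contents from any client hash to the same digest,
        # so the bytes are stored only once.
        self.blobs.setdefault(digest, data)
        self.catalog[path] = digest

    def restore(self, path):
        return self.blobs[self.catalog[path]]

# The same document backed up from a Mac path and a Windows path:
store = SingleInstanceStore()
store.backup("/Users/anna/report.doc", b"quarterly numbers")
store.backup("C:\\Users\\anna\\report.doc", b"quarterly numbers")
print(len(store.blobs))  # 1 -- one stored copy serves two catalog entries
```

The catalog keeps per-client paths, so each client still restores its own files even though the underlying bytes are shared.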
This puts IBM into fresh competition with EMC, which offers both Retrospect and versions of its Mozy backup SaaS that support Mac, as well as Atempo’s LiveBackup CDP product. “There aren’t many solutions out there that support both Mac and PC,” said Enterprise Strategy Group analyst Lauren Whitehouse. “Apple is very tuned to the Apple user.”
She added, “In the corporate environment, users first look to a storage vendor or a familiar partner for backup, rather than Apple, even if they’re running Macs.”
Foundry Networks today abruptly postponed its shareholders vote on its pending acquisition by Brocade, raising questions about whether the $3 billion deal will go through.
Brocade said on July 21 it would buy Ethernet switch vendor Foundry to expand its data center presence. Foundry shareholders were scheduled to vote on the deal today, but the company issued a press release saying the meeting was pushed back to next Wednesday because of “recent developments related to the transaction.”
Foundry did not say what those developments were, and Brocade spokesman John Noh said he could not comment. In a note to his clients, financial analyst Aaron Rakers of Wachovia Capital Markets wrote that Foundry investors are worried that Brocade either hasn’t been able to raise the $400 million in funding to go with the $1.1 billion loan it secured two weeks ago, or is trying to renegotiate terms of the deal.
“It is very hard for us to judge the outcome at this point, but we do believe Brocade has been very committed to the transaction and we believe investors could have meaningful questions on Brocade’s long-term growth story without this acquisition,” Rakers wrote.
During a conference call with storage reporters today to discuss the future for data center networking, Dell senior storage manager Eric Endebrock pointed to the convergence of Ethernet and Fibre Channel as inevitable. “Change is afoot,” he said. “FCoE is a more straightforward management infrastructure–the next generation of intercommunication for Fibre Channel.”
No surprise there. Practically every FC storage vendor is saying that. But where it gets tricky with Dell is, it dropped $1.4 billion on iSCSI SAN vendor EqualLogic less than a year ago. And where EqualLogic’s PS Series iSCSI SAN arrays fit into the converged picture isn’t clear yet.
“Protocols will not necessarily be the top factor in choosing the next storage system for customers,” Endebrock said. “We get caught up in the latest cool technology trend on [the vendor and press] side, but customers don’t necessarily care about that.” He added that lossless Ethernet “will float all storage boats” and that “customers see a place for all protocols.”
Also, “linking EqualLogic to iSCSI is probably not the best way to think about it–we also provide a scaling architecture and solve higher customer needs–it’s far more than just a protocol discussion.”
So far, Dell spokespeople aren’t willing to go into further detail about what its exact plan is for EqualLogic. “We continue to investigate our options and will support 10 Gigabit Ethernet as well as Data Center Ethernet with EqualLogic. We’re going to watch our customers’ needs and what the customers want,” Endebrock said.
A presentation at Storage Networking World titled “Yes, Fibre Channel and iSCSI Can Coexist” by Dell director of global storage and network marketing Praveen Asthana offered some clues about how Dell sees it all fitting together. “Mixed is in,” Asthana said. But he identified Ethernet as the glue–whether it’s providing the base layer of the unified network or providing a simple management and monitoring interface for all endpoints on an IP network.
Traditional Fibre Channel offers better performance than traditional iSCSI not only for business applications but also for streaming applications and high-performance computing (HPC) workloads, Asthana pointed out. But he also projected that scale-out iSCSI, especially over 10 GbE, will surpass the performance offered by both earlier protocols.
Bottom line: Dell will support FC as long as it supports CLARiiON. Endebrock was mostly mum when it came to the relationship with EMC, as addressed by EMC CEO Joe Tucci in the company’s third-quarter earnings call on Wednesday. “Joe actually laid out that we have a great relationship and we’re actively working together on how to go to market on the best way possible, working on fitting our product lines together. We’re going back to basics and at the ground level refocusing on where we’ve seen success in the past.”
With the global economy crumbling, Data Domain is the rare company that not only exceeded its financial expectations for last quarter but actually raised its forecast for this quarter.
Data Domain’s third-quarter revenue of $75 million was up 134 percent from a year ago, and the company earned $3.2 million in net income. Data Domain expects revenue this quarter to be between $80 million and $84 million, and its estimate for the full year is between $269 million and $273 million, up from the previous estimate of $250 million to $255 million.
The data dedupe specialist proved that curbing data growth is a high priority in data centers these days even if companies are looking to trim storage budgets. Quantum also cited sales of its deduplication products as a highlight in an overall disappointing quarter, and EMC execs said on their earnings call this week that their Avamar host-based dedupe is selling well although they made no mention of the re-branded Quantum products they sell.
But while Quantum, EMC and others say they saw a spending slowdown in September and October, Data Domain execs say it’s full speed ahead. Data Domain CEO Frank Slootman said his company’s new bigger – and more expensive – DD690 system has been well received, and customers are moving beyond just using dedupe for backup. Slootman said Data Domain systems are increasingly being used for nearline (archive) data.
“We saw normal spending patterns and behavior, even in the last week of September when all hell was breaking loose,” Slootman said. “If you didn’t watch CNBC, you wouldn’t know something was wrong with the world.”
Data Domain reported 10 deals of more than $1 million and two of more than $5 million, and the average deal grew to $131,700 from $108,000 the previous quarter. Slootman said even financial services companies are buying “because of the much bigger faster product we’re selling.”
Quantum also has a bigger dedupe product out – the DXi7500 – and CEO Rick Belluzzo said it spurred an increase in disk and software revenue despite an overall loss of $3 million due to declining tape sales.
“The DXi7500 has tended to take us into bigger accounts and bigger deals, but those become a little harder to close,” Belluzzo said. “It’s clear the global financial crisis impacted our ability to close business at end of the quarter. We saw numerous sizeable deals fall out of the quarter, particularly on the DXi7500.”
Belluzzo said he hopes most of those big deals will close, and several already have. Quantum is clearly betting its future on disk backup fueled by deduplication and replication rather than its legacy tape business. Belluzzo said the vendor is also looking for more partners to license its dedupe IP as EMC has. One financial analyst says Quantum has a deal in the works with Dell. Dell hasn’t done much with dedupe yet, but senior manager of Dell storage Eric Endebrock said on a conference call with reporters today there will be a dedupe announcement “in the near future.”
Several weeks after Plasmon recommended that its investors approve a takeover bid from a private equity company, the struggling maker of optical storage media has been bought by an unidentified U.S. firm.
As a result of the deal, which earlier reports valued at $25 million, Plasmon has become Plasmon Holdings LLC, and will move its headquarters from the U.K. to the U.S. Plasmon will continue its strategy under CEO Steven Murphy of fitting its products into users’ overall long-term archiving strategies (bolstered by partnerships with NetApp and IBM-FileNet) rather than focusing solely on the speeds and feeds of its optical media.
Murphy has been here before. He was CEO of Softek when it spun out of Fujitsu and went private in 2004. Last year, IBM acquired Softek for an undisclosed amount, and folded Softek’s host-based Transparent Data Migration Facility (TDMF) data migration software into its IBM Global Services (IGS) division.