After 15 years as CEO – practically an eternity in the storage business – NetApp’s Dan Warmenhoven stepped down today and named Tom Georgens his successor.
The move was anticipated since NetApp promoted Georgens to COO and president in February 2008, yet Warmenhoven had given no timeframe for his retirement. He will stay on as executive chairman “to help build and expand relationships with certain strategic partners around the world, including service providers and key technology partners,” according to NetApp’s news release.
“I am honored to follow in Dan’s footsteps,” Georgens said in the release. “In just 15 years, NetApp has grown from a $14 million startup with 45 employees into a recognized market leader in networked storage and data management with $3.4 billion in annual revenues and approximately 8,000 employees around the world. Dan also helped to cultivate a unique corporate culture, which has resulted in NetApp consistently being recognized as a great place to work.”
Warmenhoven’s final months were a bit rocky: NetApp was outbid by its chief rival EMC for data deduplication backup specialist Data Domain. NetApp agreed in May to buy Data Domain for $1.5 billion, but EMC eventually acquired the company for $2.1 billion. Warmenhoven did get a rally on the sales front at the end, though: NetApp today reported better-than-expected revenue of $838 million for last quarter despite the rough financial climate.
Georgens joined NetApp as head of its Enterprise Storage Systems group in Oct. 2005. He was previously CEO of LSI’s Engenio storage systems division for two years and spent 11 years at EMC.
Yesterday, we posted a story about Dell’Oro Group’s prediction that Fibre Channel over Ethernet (FCoE) sales growth would outpace that of Fibre Channel by 2011. We got lots of great feedback while preparing that story, not all of which fit into the article, so here are some raw “deleted scenes” — additional points of view from analysts, users and financial experts to go with that piece.
Jeff Boles, director of validation services and senior analyst for the Taneja Group —
Right now, we have a scattering of fabrics and technologies, and while the promises of FCoE are interesting (if not compelling) for the day to day practitioner, transitioning to this new fabric is a bit more complex than filling your shopping cart on Amazon.com.
What I fully expect to happen is a multi-year integration of converged Ethernet as a broad fabric that joins together the multiple fabric domains in the enterprise data center. Those separate domains – FC, InfiniBand, and even traditional Ethernet – may rapidly become converged in a 10 Gb core, but will likely keep growing at a steady pace, or at least be maintained with regular equipment replacements. Once a converged core is in place (over years), we’ll likely see new equipment deployments taking place on the converged fabric where it is justified (high I/O demands, cable simplification in large infrastructures).
But a full-tilt shift to FCoE as the new fabric is likely out beyond the three-year mark for aggressive businesses, and well beyond the five-year mark for less aggressive ones. The problem, plain and simple, is that many, many businesses are well served by their current fabrics and skillsets, and the transition to converged Ethernet, and FCoE, will only see near-term adoption where it is fully justified. In many cases, existing fabrics and skillsets will outweigh the battle over port prices and power utilization. While CEE/FCoE will change the computing landscape, my expectation is that this will happen over the long term.
Andrew Reichman, senior analyst for Forrester Research —
I’m seeing vendors like Brocade, Cisco, QLogic and NetApp move toward greater support for FCoE. The benefits often include reduced cabling complexity and a longer-term desire to simplify SAN and LAN networking through network convergence. That said, it is likely to take a long time to see the benefits, and require a fairly significant investment in new equipment and re-architecting. I do believe that storage traffic will be on Ethernet at some point; the question is how soon. The FCoE standard has been slow to emerge, which has delayed adoption, but early adopters seem to be getting started now. 2011 seems a bit ambitious for broad adoption beyond FC, but I think it might not be too far off. You do have to remember that storage buyers are extremely conservative and like to see very mature products and architectures before making a big change, but once the momentum gets going, it’s likely to grow rapidly.
Mark Kelleher, Managing Director, Equity Research, Brigantine Advisors —
Dell’Oro isn’t really going out on a limb with its prediction that FCoE will supplant Fibre Channel by 2011 – that’s a common assumption in the storage industry. One converged fabric for all enterprise communications makes a lot of sense. The Fibre Channel switch and HBA vendors are moving in that direction, the Ethernet providers are moving in that direction; there’s really no reason it wouldn’t happen. The key difference between FC and Ethernet is that Ethernet can lose packets and take its time to recover, while FC guarantees delivery and does not drop packets. To port the upper layers of the FC stack onto Ethernet, the Ethernet protocol itself has to be augmented to allow ‘lossless’ transmission of data under certain circumstances. That is all incorporated in FCoE, and the technology is just now reaching the market. Deployment starts now through next year, with widespread adoption by 2011.
Keep an eye on the core FC vendors: Brocade, Emulex and QLogic. Brocade sells switches (though it is moving into the host-bus adapter market), while Emulex and QLogic are known for selling the I/O offload engines that connect servers to FC (host-bus adapters, or HBAs). To connect to Ethernet, servers use network interface cards, or NICs. With the new FCoE protocol, those two functions are combined into a “converged network adapter,” or CNA. Sell-through of CNAs will tell us how the adoption of FCoE is progressing.
Reinoud Reynders, IT manager at University Hospitals Leuven in Belgium —
I believe very strongly in FCoE. Cisco is pushing this very hard and indeed, they have a strong story. Just one plug for all your I/O (network and SAN) on 10 Gb, 1 switch that separates client access (IP network) [from the] storage network: it’s a great plan.
I will replace my FC SAN switches [around] Q2 2011. Personally, I believe 2011 is a little bit too early for the [broader industry] crossover, but maybe 2012.
Feel free to add your own perspective in our comments section below!
Remember the 2007 stock option backdating trial that ended in the conviction of former Brocade CEO Greg Reyes and cost Brocade hundreds of millions of dollars in legal fees? Well, get ready for the rematch.
The 9th U.S. Circuit Court of Appeals on Tuesday ordered a new trial for Reyes, citing prosecutorial misconduct. Reyes was convicted of fraud and other counts, and in January 2008 was sentenced to 21 months in prison and fined $15 million. The appeals court ruling said a prosecutor falsely claimed the Brocade finance department was unaware that Reyes was granting backdated stock options to lure employees to the company.
“We reverse Reyes’ conviction because of prosecutorial misconduct in making a false assertion of material fact to the jury in closing argument,” the three-judge panel said in its decision.
The appeals court found that prosecutor Timothy Crudo knew employees of Brocade’s finance department had told the FBI they were aware of the backdating scheme, yet he told the jury the finance department did not know about it.
“We do not conclude the prosecutor’s conduct was so egregious as to require dismissal of the prosecution,” the appeals court wrote. “Reyes’ case must be remanded for a new trial.”
There is no word yet on when a new trial will take place.
The appeals court upheld the conviction of former Brocade VP of human resources Stephanie Jensen but ordered that she be given a new sentence for falsifying corporate records. Jensen was sentenced to four months in prison and a $1.25 million fine. That sentence included an obstruction of justice charge, but the appeals court ruled that was her counsel’s fault and she should not be penalized for obstruction.
Reyes and Jensen have been free pending their appeals.
Reyes left Brocade in 2005 after the first hint of the backdating charges was made public. Brocade paid $160 million to settle shareholder lawsuits and $7 million to settle an SEC suit.
Recommind and Clearwell Systems expanded their e-discovery and regulatory compliance records management product lines this week with support for more areas of the e-discovery Reference Model (EDRM).
Recommind, which already has its Axcelerate eDiscovery and Insite Legal Hold products on the market for preservation, collection, processing, culling, review and production of data, added support for information management, collection and classification with the new MindServer Categorization software module.
The new search, indexing and classification module is based on the same underlying search and index engine as the rest of the Recommind product line. Recommind uses an algorithm devised at MIT that can be “taught” to derive meaning and relevance from content, and perform “concept searches” that don’t rely on keyword matches.
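The idea behind concept search can be illustrated with a toy sketch. Note that Recommind’s actual MIT-derived algorithm and trained models are proprietary and not shown here; the hand-built concept map and document text below are invented purely for illustration. The point is that a query and a document can match on shared concepts even with zero literal keyword overlap:

```python
# Toy illustration of "concept search" vs. keyword search.
# The CONCEPTS map is hand-built for this example; a real system would
# learn concept associations from the corpus rather than use a fixed table.
from collections import Counter
import math

CONCEPTS = {
    "merger": "deal", "acquisition": "deal", "buyout": "deal",
    "lawsuit": "litigation", "subpoena": "litigation", "deposition": "litigation",
}

def concept_vector(text):
    """Map each word to its concept (or itself) and count occurrences."""
    return Counter(CONCEPTS.get(w, w) for w in text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

doc = "the buyout closed after the subpoena arrived"
query = "acquisition lawsuit"

# No word in the query appears literally in the document,
# yet the concept vectors overlap on "deal" and "litigation":
print(cosine(concept_vector(query), concept_vector(doc)) > 0)
```

A plain keyword match would return nothing here; the concept mapping is what lets the query surface the document.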
Recommind VP of marketing Craig Carpenter says there’s little difference in the underlying technology, but each of Recommind’s modules is used for a different purpose. While Axcelerate eDiscovery is generally used by law firms, and Legal Hold by internal counsel, MindServer is mostly targeted at corporate records management teams or enterprise end users for “search and index for knowledge management” rather than strictly litigation support. The product began shipping this week. Pricing is offered on a per-seat or per-server licensing model and varies according to the size of the deployment, but Carpenter said enterprise deals are typically $50,000 to $100,000.
Clearwell Systems added new modules to its e-discovery framework for pre-processing, review and production. “Clearwell had been in early stage processing, but now they can perform full review, including redaction and auto-redaction in preparation for formal production,” said Brian Babineau, senior analyst with Milford, Mass.-based Enterprise Strategy Group (ESG).
Babineau says the product launches represent “a natural progression for both companies,” as e-discovery software makers across the board look to broaden their reach across the full EDRM spectrum. Babineau said the number of small companies looking to create “one-stop-shops” for compliance and litigation support is an indicator of how strong the market is right now.
“Not every vendor can survive being all things to all people,” he said. “At least right now, there’s enough money in the market with things like the [Bernie] Madoff investigation and new regulations [following last year’s financial crisis] to keep all of these players alive and fund their R&D efforts.”
Amazon now supports data export from its S3 storage cloud onto customers’ removable hard drives.
Amazon first opened up this “sneakernet” for import/upload to the Amazon cloud earlier this spring, allowing customers with large data sets to send the data to Amazon on removable media rather than trying to migrate the data over an Internet connection. This most recent announcement means users can extract data from the cloud using this method, too.
At the time of the first announcement, Amazon bloggers referenced the quote that immediately jumped to my mind reading about the export feature: “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.”
Amazon is far from the first or only cloud storage vendor to use seeding devices to get large data sets into the cloud rather than trying to squish terabytes through the average broadband Internet connection. Indeed, this network bottleneck is considered one of the biggest barriers to cloud computing adoption to date, and cloud backup vendors including EMC’s Mozy already send out seeding devices to upload or restore terabytes of data.
Companies such as NetEx are also offering software that promises to cut down on the bandwidth consumed between service providers and consumers downloading large files, such as video, from centralized data centers. Others, including Cleversafe, propose splitting data into chunks spread among multiple sites to cut down on bandwidth and preserve data security.
So far, however, for the largest data sets — as this Amazon announcement demonstrates — nobody’s quite beaten the highway.
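The arithmetic behind that claim is easy to check. The figures below are illustrative assumptions (a 10 TB data set, a 10 Mbps broadband uplink, roughly one day of shipping time), not Amazon’s published numbers:

```python
# Back-of-envelope: effective speed of shipping a disk vs. uploading.

def transfer_days(data_tb, mbps):
    """Days to move data_tb terabytes over a link of mbps megabits/sec."""
    bits = data_tb * 1e12 * 8          # decimal terabytes -> bits
    seconds = bits / (mbps * 1e6)      # megabits/sec -> bits/sec
    return seconds / 86400

# 10 TB over an assumed 10 Mbps broadband uplink:
upload_days = transfer_days(10, 10)    # roughly 93 days

# The same 10 TB on a drive shipped overnight:
ship_days = 1.0

print(round(upload_days, 1), ship_days)
```

At those assumed rates the shipped drive wins by roughly two orders of magnitude, which is why the station-wagon quote keeps coming back.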
Akorri’s cross-domain reporting software, BalancePoint, is getting deeper into VMware analytics with version 3.0, which also contains some storage updates including support for more vendors and SAN switch performance analysis.
The version launched today includes virtual machine-level granularity for identifying resource contention, CPU and memory utilization, and CPU efficiency within the physical host; previously, Akorri performed analytics by physical server rather than by VM.
On the storage front, this new release adds support for 3PAR storage arrays to existing support for HDS and NetApp. BalancePoint can also now map storage switch infrastructures made up of Brocade and Cisco switches, analyze storage switch performance, and identify overutilized and underutilized SAN switch ports.
The most comparable product to BalancePoint that storage users would be familiar with is NetApp’s Onaro, which is also deployed without host agents and also offers SAN switch performance analysis, as well as mapping virtual machine-level relationships to the underlying storage infrastructure. However, Akorri includes more server-specific analytics, while Onaro focuses on the storage infrastructure. Another storage-focused monitoring tool, Symantec’s CommandCentral Storage, also offers virtual machine-level analytics, but requires host agents; it can be rolled up into Symantec’s overall data center management framework for cross-domain support.
In recent years, widespread VMware deployment has created demand for better analytics to help smooth out performance bottlenecks and resource contention within the physical infrastructure. Until recently, however, deployment of storage resource management (SRM) tools has been sluggish, although analyst research this year suggests the economic downturn has more users looking to analyze and improve existing assets for better storage efficiency using these tools. The research also suggests that storage teams are increasingly cooperating with other IT departments, especially networking, to better optimize data center performance.
Open source backup software vendor Zmanda Inc., among the first to offer customers a direct link between its backup software and cloud storage, is opening up its API for connecting backup software with cloud backup service providers to other vendors, including competitors.
CEO Chander Kant says Zmanda is sponsoring a new open source project called ZCloud, which offers an API to show how backup software connects to storage cloud providers to avoid duplicating work as more backup vendors offer the option. “Today every backup software has to talk a different language,” Kant said. “This will get rid of those idiosyncrasies and make interoperability easier.”
The subject of standard cloud APIs and interoperability is a hot one in the still-nascent cloud computing market. While some are already calling for or developing standardized cloud interfaces, others say it’s too early to establish industry-wide standards without quashing differentiation.
Kant points out that an API specifically for backup tools isn’t the same as industry-wide, homogenizing standardization. “This is how standardization is actually going to happen — not an overall set of specs, but specific ones based on specific use cases,” he said.
He added that while the API specifies common aspects of connecting backup to the cloud, there’s no reason a backup software vendor can’t add its own differentiators to the integration. “The API allows for discovery of the underlying storage clouds, keeping developers from having to repeat basic stuff,” he said. At this early stage, he says “most people are first trying to get to a basic level of functionality.”
Admittedly late to the data deduplication game, Hewlett-Packard Co. is brewing new dedupe offerings to compete with the market’s new 800-pound gorilla — EMC/Data Domain.
“We welcome the competition and the fact that our competitors have shown that owning IP in this space is important,” said Kyle Fitze, HP’s marketing director for storage platforms. HP partners with Sepaton for high-end VTLs and Ocarina for primary storage data reduction, but also develops deduplication software for its entry-level disk backup devices.
Fitze was mum on whether the relationship with Sepaton will change given EMC’s $2.1 billion buyout of Data Domain, but HP does have a track record of acquiring partners after a few years if things go well. Fitze said HP is focused on the higher-volume SMB market with its dedupe products, and is seeing more demand for the midrange/entry-level D2D products. “One of the things we’re seeing develop is very large enterprise accounts with multiple sites consolidating their backup operations, and there’s been a lot of interest in the D2D portfolio.”
Currently, D2D cannot be used to replicate to the higher-end Sepaton-based VLS product. One HP shop, the Mohegan Tribe in Uncasville, Conn., is a midsized operation with about 8 TB capacity on its EVA 4100 primary SAN, but chose VLS over D2D because of Sepaton’s content-aware dedupe. “We felt content-aware deduplication was more efficient,” said David Shoup, technology manager for the tribe.
RenewData announced today that it has bought privately held Digital Mandate for an undisclosed sum, and plans to add Digital Mandate’s Vestigate legal review software to its eDiscovery software as a service (SaaS) offerings.
RenewData CEO Steven Horan says law firms and large corporations use Digital Mandate software to do “first-pass review” of electronically stored information (ESI) to determine its relevance to litigation. The application culls data sets before they are submitted for more granular legal review. “If I’m a customer, I don’t want to invest a million dollars just to know if I have a problem,” he said. Horan maintains Vestigate can provide about 85% validity of the final data set, with the goal of cutting down the legal fees that would be incurred by submitting the full data set for granular review.
RenewData provides services for planning, preservation and collection, processing, review, and production of electronic evidence, risk management, and data archiving. Horan said Vestigate was also offered as a service, but RenewData will use it to offer something that can be put “behind the firewall” in customer environments where compliance or security concerns make a service less appealing.
Enterprise Strategy Group senior analyst Brian Babineau said RenewData, which had been partnering with Attenex (now owned by FTI), had to make this move in response to wider industry consolidation in eDiscovery. “This is a simple e-discovery market roll up,” he said. “[RenewData] had a legal service provider business by using Attenex software. However, when FTI bought Attenex, FTI also has a legal service provider business, [and] Renew was then using a direct competitor’s software.”
Babineau called the merger with Digital Mandate a “good hedge, but it doesn’t propel them forward” in the market.
Socha Consulting founder George Socha said he’d had a “limited degree of exposure” to Digital Mandate’s product, but RenewData’s approach of bringing more parts of the e-discovery process in-house is “consistent with what I hear consumers and law firms say they want – one throat to throttle. RenewData has expanded further across the eDiscovery spectrum.”