Brocade last Friday said its earnings last quarter fell short of its forecast, raising questions about whether this is a sign of an overall slump in storage sales. Brocade is the market leader and one of two major vendors of Fibre Channel (FC) SAN switches (Cisco is the other), and FC switches are a staple of SAN implementations.
Brocade’s preliminary results call for storage gear revenue of about $333 million to $336 million, a five percent to six percent year-over-year decline instead of the three percent to five percent increase the vendor originally projected. The new figures also represent a drop of 14 percent to 15 percent from the previous quarter. Brocade’s FC sales usually decline no more than two percent for the quarter ending in July. Brocade’s Ethernet sales also missed expectations, although they will still be up at least 12 percent from last year.
It’s hard to say exactly why Brocade missed its sales goals, but there are three possibilities: FC SAN sales have dropped in recent months, Cisco has picked up market share, or Brocade has internal problems that caused it to miss its forecast.
Part of the problem is that because Brocade sells its products through OEM deals with storage vendors, it has less visibility into coming sales than vendors who sell directly. That makes it tougher to accurately forecast its revenue. Still, Brocade said its sales last quarter were hurt by “weaker-than-expected storage end-user demand, which was down slightly from the previous quarter.” But Brocade’s biggest storage partner, EMC, beat its expectations last quarter, and other storage vendors did about as well as expected.
Brocade’s quarter runs through July while most others end in June, so perhaps sales fell off during July. We’ll get a better idea of this when Cisco (Wednesday), NetApp (Aug. 17) and Brocade (Aug. 18) report their earnings in the coming weeks. Those earnings reports could also help clarify if Cisco picked up share in FC switching.
Wall Street analyst Kaushik Roy of Merriman Capital maintains that the FC storage market remains strong, and that iSCSI and Fibre Channel over Ethernet (FCoE) haven’t made much of a dent in SANs.
“Considering the healthy SAN sales from EMC, IBM, NetApp/Engenio, Hewlett-Packard/3PAR, Dot Hill and others, we do not believe that the end markets for Fibre Channel SANs are converting to iSCSI or FCoE faster than expected,” Roy wrote in a research note issued today.
In the statement released last Friday, Brocade CEO Mike Klayko said he would give details on plans to grow revenue and “manage expenses” during the company’s earnings call. By “manage expenses,” does he mean Brocade will follow Cisco’s recent heavy layoffs?
Industry sources say Brocade has been for sale for several years, with Hewlett-Packard and Dell looking at it before deciding to buy Ethernet switch vendors – HP bought 3Com in 2009 and Dell recently said it would acquire Force 10. Brocade has also had a lot of management turnover since it acquired Ethernet vendor Foundry in 2009, most recently losing CFO Richard Deranleau in June.
“Management’s credibility has sunk to the bottom and some current (and past) investors are wondering why the board is not acting on it,” Roy said. However, he added, “Brocade is still an attractive acquisition target for companies who want to enter the datacenter market” and listed Oracle and private equity firms as candidates to acquire the switch vendor.
Fusion-io just completed its first quarter as a public company, and already it’s acting like a grown-up vendor. The PCI-based solid state storage vendor today acquired caching software startup IO Turbine for around $95 million.
The deal comes as competition heats up for Fusion-io from other flash vendors. STEC today brought out its first Kronos PCIe SSD card as well as its EnhanceIO SSD Cache Software. And OCZ Technology Group Wednesday upgraded its Z-Drive PCIe SSD card.
IO Turbine, which had $8 million in venture funding, came out of stealth in May with Accelio, software designed to mitigate I/O latency problems in VMware environments. Accelio installs on VMware servers, identifies the highest-priority data on virtual machines that use locally attached solid-state or flash storage, and offloads IOPS from primary storage to flash.
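IO Turbine hasn’t published Accelio’s internals, but the host-side read-caching idea described above can be sketched in a few lines. The LRU eviction policy, block-level granularity, and all of the names below are assumptions for illustration, not IO Turbine’s actual design:

```python
from collections import OrderedDict

class HostSideReadCache:
    """Toy host-side read cache: hot blocks are served from local
    flash, so those read IOPS never reach the primary storage array."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()   # block_id -> data, in LRU order
        self.hits = 0                # reads absorbed by local flash
        self.misses = 0              # reads that went to primary storage

    def read(self, block_id, primary_read):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark most-recently-used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = primary_read(block_id)         # I/O to the array
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least-recently-used
        return data

# Simulate a skewed VM workload: most reads target a few hot blocks.
cache = HostSideReadCache(capacity_blocks=8)
workload = [0, 1, 2, 3] * 25 + list(range(100, 120))
for block in workload:
    cache.read(block, primary_read=lambda b: f"data-{b}")
print(cache.hits, cache.misses)  # most read IOPS stay off the array
```

With 120 reads, only 24 ever touch primary storage; the other 96 are absorbed by the flash cache, which is the effect Accelio aims for on skewed virtual-machine workloads.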
Accelio software is still in beta, but Fusion-io CEO David Flynn said his company tested the IO Turbine software with its own products at customer sites and was happy with the results. He said Fusion-io will integrate IO Turbine software with its ioMemory Virtual Storage Layer (VSL) software that virtualizes memory and storage. Fusion-io has a DirectCache software plug-in for ioMemory VSL but that does not focus on virtual machines.
Flynn said the ability to deal with VMs is crucial to his product line.
“We’re taking storage from the mainframe era of big proprietary centralized systems to the virtual era,” he said on a conference call to discuss the acquisition and Fusion-io’s earnings. “We do with data what VMware does with compute.”
There have already been three major acquisitions and mergers involving vendors that produce SSDs and hard drives since March. Western Digital said it will acquire hard drive and SSD vendor Hitachi Global Storage Technologies (HGST) for $4.3 billion, SanDisk acquired SSD vendor Pliant Technology for $327 million, and Seagate and Samsung formalized a strategic alliance involving Seagate’s NAND flash.
Storage Strategies Now analyst James Bagley said he expects to see more deals.
“This is further evidence that consolidation is coming,” Bagley said. “All the SSD vendors out there won’t survive, but clearly the ones that have a good story are getting bought.”
Bagley said IO Turbine helps Fusion-io’s software story, but he said it doesn’t solve what he sees as its biggest problem — increased competition from newer products such as those from Micron, Texas Memory Systems and STEC that offload the flash management from the CPU.
“Fusion-io’s hardware is getting long in the tooth,” Bagley said. “Having the server do all of the flash management is not a good idea.”
Flynn said he’s not concerned about the increasing competition or about the age of his company’s products. He claims Fusion-io’s biggest advantage is the reliability of its devices. He also said competitors focus on IOPS while Fusion-io concentrates on bandwidth and latency optimization.
“Nobody has really been able to compete with us on performance metrics that ultimately matter for accelerating applications,” he said. “It’s easy to make IOPS look good by putting eight SandForce controllers on a RAID controller and say, ‘Look at all the IOPS we have.’ But aggregation of SSDs on a card is problematic for reliability. What matters more is how the software is integrated with the operating system.”
Fusion-io’s first quarter as a public company was successful. The company raised $218.9 million in its June IPO, and it said today that it slightly beat expectations with $71.7 million in revenue and $5.8 million of net income last quarter. That compares to $10.9 million in revenue and a loss of $11.9 million a year ago. Its revenue in the previous quarter was $67.3 million.
Facebook and Apple remain Fusion-io’s largest customers. Flynn said they both accounted for more than 10% of the vendor’s revenue for the quarter, but he didn’t say exactly how much of Fusion-io’s products they bought.
As we see more companies begin Virtual Desktop Infrastructure (VDI) deployments, we spend more time explaining storage issues related both to server virtualization and VDI.
VDI deployments depend heavily on the intelligent use of storage and on high-capability storage systems. The typical VDI project begins with a pilot program using existing physical servers and storage systems. The next phase usually involves scaling the deployment to a threshold of close to 30% of the desktops, and it is during this phase that the storage issues become apparent. The success of the VDI project then hinges on acquiring the right high-capability storage system – usually as an emergency purchase. The final phase is to scale to more virtualized desktops, if the emergency uncovered in the 30% phase can be dealt with satisfactorily.
The features most important with the high-capability storage systems include:
• Intelligent caching and data placement,
• Tiering of data – usually with solid state devices,
• Thin provisioning,
• Wide striping, and
• Cascadable read/write snapshots.
Evaluator Group provides a document that discusses the requirements for a storage system used with VDI.
Our advice to IT includes a method for avoiding the emergency purchase by proactively understanding whether a storage system will meet requirements for a VDI project. The most effective way is to use a benchmark for VDI that focuses specifically on storage issues, and can show the right configuration to support the number of virtual desktops in use and the response time required.
The IT team needs to understand the types of workloads that specific users — such as a knowledge worker (using office tools), a developer or a financial analyst — generate on their virtual desktops, and have the ability to dial the right type of workload into the VDI benchmark. Acquiring a storage system that can handle the correct number of I/O streams requires a benchmark that gives IT enough information and confidence during planning to make an informed selection.
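To make the sizing arithmetic concrete, here is a back-of-envelope sketch of the calculation a VDI storage benchmark formalizes. The per-desktop IOPS figures, the write fraction, and the RAID write penalty below are illustrative assumptions, not Evaluator Group or vendor numbers:

```python
# Back-of-envelope VDI storage sizing. All workload figures below are
# illustrative assumptions for the three user types named in the text.
WORKLOAD_IOPS = {           # assumed steady-state IOPS per desktop
    "knowledge_worker": 10,
    "developer": 25,
    "financial_analyst": 40,
}

def required_iops(desktops, write_fraction=0.8, raid_write_penalty=2):
    """Return (front-end IOPS, back-end IOPS). VDI workloads are
    write-heavy, and each write costs extra back-end I/O once the
    RAID write penalty is applied (RAID 10 -> 2)."""
    front_end = sum(WORKLOAD_IOPS[kind] * count
                    for kind, count in desktops.items())
    writes = front_end * write_fraction
    reads = front_end - writes
    back_end = reads + writes * raid_write_penalty
    return front_end, back_end

fe, be = required_iops({"knowledge_worker": 800,
                        "developer": 150,
                        "financial_analyst": 50})
print(f"front-end: {fe} IOPS, back-end: {round(be)} IOPS")
```

For this hypothetical 1,000-desktop mix, the array must sustain far more back-end IOPS than the front-end number suggests, which is exactly the kind of gap an emergency mid-project storage purchase is meant to close.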
Stay tuned for more information from Evaluator Group on storage benchmarks for VDI environments.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Quantum has a lot riding on its new DXi Accent deduplication software and DXi6701 and 6702 backup appliances that launched Wednesday.
During the vendor’s earnings conference call Wednesday night, executives made it clear they are counting on the new midrange releases to revitalize its slumping disk backup business. Quantum’s disk and software revenues dropped last quarter, causing it to fall short of overall revenue forecasts.
Quantum added client-side dedupe with DXi Accent, and increased the scalability and connectivity options of its midrange appliances. Quantum CEO Jon Gacek said he is optimistic that these improvements will make the DXi platform more competitive with EMC’s Data Domain.
“With the changes we made, it’s a damn good product,” he said in an interview with Storage Soup. “If my sales team can’t sell this, I’ve got big problems.”
This isn’t the first time Quantum has revamped its disk backup products. Quantum refreshed its entire DXi hardware line following a rocky start that included an OEM deal with EMC that was terminated in 2009. It also rolled out DXi 2.0 software in March. But that didn’t do the trick.
Quantum’s total revenue last quarter was $153.5 million, below its $160 million forecast. Disk and software revenue – the DXi and StorNext platforms – came in at $27.6 million, down from $34.7 million the previous year. DXi revenue reached only 70% of the vendor’s internal plans, with the biggest shortfall on the DXi 6000 midrange platform.
Gacek said part of the sales slump came because Quantum rolled out DXi 2.0 software on the 6500 system but not on the 6700, causing confusion for his sales team and channel partners. Now Quantum is collapsing five 6500 and 6700 midrange configurations into two, and the 6701 can be upgraded in the field to the 6702. However, Quantum is bringing out its new DXi Accent software in phases – it is not expected to be available on the enterprise DXi8500 platform until late this year.
“I’d like to have it on 8500 as well, but we have to make some tradeoffs,” Gacek said.
Quantum is looking for the new releases to bring it into more deals with Data Domain. Gacek said when Quantum runs into Data Domain now, EMC will drop its price to win the deal.
“They dismiss our technology as being inadequate, yet they’re fighting so hard when we’re in deals with them,” he said. “I don’t think they price like that when we’re not in deals. We need to get in front of more customers.”
He said Quantum’s DXi win rate last quarter dropped to 45% from 55% the previous quarter.
Revenue from Quantum’s StorNext file archiving software also declined, dropping nine percent from last year. Quantum is counting on a recently signed reseller deal with NetApp and a new StorNext appliance for rich media customers to spark sales of that product.
Quantum’s tape automation business did grow eight percent over last year. “Tape is far from dead,” Gacek said on the earnings call.
Overall, Gacek said, “We are not pleased with our revenue” for the first quarter of Quantum’s fiscal year. “This start puts us a little behind for the year, but we plan to make it up.”
Storage tiering, where data is automatically placed and migrated between different storage media, improves the performance of a system by exploiting the access characteristics of data. One net effect of tiering that sometimes gets overlooked in discussions about maximizing storage system performance is that it enables consolidation onto fewer, more capable storage systems. Consolidation brings several benefits:
• Increased utilization by reducing the number of storage systems with reserved or unused capacity.
• Reduced requirements for management and administration of storage systems. Newer storage systems bring more advanced management software, and reducing the number of systems reduces the time required for administration.
• Reduction in power, cooling, and physical space is a common result of implementing new technology. Consolidating systems where a new storage system can support larger workloads typically has a greater impact on the environmental reductions.
• Reduced maintenance costs/support contracts from fewer storage systems.
What role does storage tiering play in consolidation? Tiering can maximize the performance of a storage system and may be the most important enabler for consolidation from an economic standpoint.
Implementing storage tiering on solid state drives (SSDs) and high capacity disks requires capabilities built into the embedded storage system software to intelligently and automatically move data for optimal performance.
Results from deployments in customer environments verify the effectiveness of storage tiering, even when SSDs make up only four percent or less of the total capacity. This brings a new economic calculation to bear for storage tiering and the return on investment for a storage consolidation project.
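The economics of a small SSD tier follow from the skew in access patterns: a few hot extents receive most of the I/O. A minimal simulation illustrates the effect; the extent count, the Zipf-like skew, and the promotion policy are illustrative assumptions, not any vendor’s tiering algorithm:

```python
import random

# Sketch: with skewed access, promoting only the hottest extents to an
# SSD tier sized at 4% of capacity captures a majority of the I/O.
random.seed(42)
NUM_EXTENTS = 1000
ssd_slots = NUM_EXTENTS * 4 // 100          # SSD tier = 4% of extents

# Build a skewed access histogram: extent i gets weight ~ 1/(i+1).
accesses = {i: 0 for i in range(NUM_EXTENTS)}
weights = [1.0 / (i + 1) for i in range(NUM_EXTENTS)]
for extent in random.choices(range(NUM_EXTENTS), weights=weights, k=100_000):
    accesses[extent] += 1

# Tiering pass: promote the most-accessed extents to the SSD tier.
hot = sorted(accesses, key=accesses.get, reverse=True)[:ssd_slots]
io_on_ssd = sum(accesses[e] for e in hot)
print(f"SSD tier holds {ssd_slots / NUM_EXTENTS:.0%} of extents, "
      f"absorbs {io_on_ssd / 100_000:.0%} of I/O")
```

Under this assumed skew, the 4% SSD tier absorbs well over half the I/O, which is the economic calculation that makes a small flash tier pay for itself in a consolidation project.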
Vendors are increasingly focusing on the automation and effectiveness of their tiering implementations, especially with the emergence of SSD as a storage tier. These are not esoteric elements in a storage system but critical, high-value functions with the potential for storage efficiency improvements and relatively quick economic payback. That means buyers need to understand how the tiering works, how effective it is, and how product costs vary with the amount of SSD required for optimization, which requires evaluation and independent information before making a decision. Tiering has a huge payback and needs to be included in the strategy for IT operations.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Object storage has been hailed as a better way to store files than traditional NAS, and perhaps even a long-term replacement for file storage. Now open-source software vendor Gluster has integrated object and NAS capabilities in the same file system.
The GlusterFS 3.3 beta that became available this week lets users store and access the same data as an object and a file. They can store objects and then access those objects as files, or view and access files as objects. The idea is to make it easier to migrate file-based applications to object storage so they can be used in the cloud.
GlusterFS is an open source file system that scales to petabytes with global namespace. Version 3.3 has an object interface integrated into that file system.
“Probably 95 percent of enterprise applications haven’t been able to leverage object storage and move to the cloud,” Gluster director of product marketing Tom Trainer said. “Integrating it in one system will accelerate the integration to object storage.”
Gluster customers can access data as objects from any Amazon S3 compatible interface and access files from its NFS and CIFS interfaces. Trainer said service providers can use GlusterFS to build Amazon-like storage for customers. It can also be used to migrate legacy applications to the cloud and scale NAS across the Internet to a public cloud.
There have been other approaches to integrating object and file storage, although Trainer maintains that GlusterFS has the deepest integration of object and file interfaces.
Most object storage products such as EMC Atmos, Scality, OpenStack and Amazon S3 don’t have file systems. Caringo, which began as object storage, added an NFS and CIFS interface for its object store and Nirvanix’s CloudNAS makes its object storage look like NAS to an application.
EMC’s earnings today revealed few surprises with revenue of $4.85 billion and income of $793 million at or slightly above expectations, but the results include a few interesting product trends:
• Isilon scale-out NAS is EMC’s fastest growing software platform, doubling in revenue over Isilon’s performance as a standalone company last year. EMC said its other Big Data products — Atmos and Greenplum — also doubled revenue year-over-year, but they were from smaller bases. EMC acquired Isilon for $2.25 billion last November. Isilon’s product revenue for the second quarter of 2010 was $64.9 million.
• More flash solid-state capacity shipped with VMAX and unified storage systems in the first half of this year than in all 2010.
• FAST VP automated tiering is EMC’s fastest selling storage software product, and more than 90% of systems with FAST are shipped with flash and SATA drives.
• Vblock integrated stacks sold by the VCE partnership between EMC, Cisco and VMware had more revenue for the first half of 2011 than in all of 2010.
• EMC took a charge worth $66 million – lowering earnings per share by 2.5 cents – for “remediation” to retain customer loyalty related to a breach of RSA’s security tokens at defense contractor Lockheed Martin.
• The Information Intelligence Group, which includes Documentum and EMC’s compliance products and services, had a 5.1% drop in revenue from last year. EMC CEO Joe Tucci said the vendor remains behind its IIG products and is determined to get results back on track. “We’re going to stick with it,” he said. “We’re convinced that by the end of this year or early next year, we’ll return this business to growth.”
Dell’s acquisition of Ethernet switch maker Force10 today should end the expectations that Dell will buy Brocade, which sells Fibre Channel and Ethernet gear.
Financial analysts have speculated about and even prodded Dell to acquire Brocade as it tries to become an enterprise-class vendor instead of a PC and server specialist. There have been whispers for at least a year that Dell was considering Brocade, but it instead followed Hewlett-Packard’s lead in going just for Ethernet switching. HP also looked at Brocade before buying 3Com in 2009.
Apparently, system/storage vendors prefer to own their Ethernet technology while getting FC connectivity from Brocade and Cisco. The move to Ethernet for Dell and HP is motivated at least in part by Ethernet switch market leader Cisco’s getting into the server business with its Unified Computing System (UCS).
Price may also have been a factor for Dell. It did not disclose how much it will pay for Force10, but it is believed to be around $600 million to $800 million while Brocade would command billions of dollars.
Passing on Brocade doesn’t mean Dell won’t buy another storage company, though. It has picked up EqualLogic, Exanet, Ocarina and Compellent since 2008, and Dell executives including Michael Dell see storage as a major part of the company’s future.
Not all new storage technology comes from startups, although you might get that impression by reading about industry acquisitions.
The reasons most often listed for acquisition of a start-up company are:
• Technical infusion (technology acquisition)
• Expansion into a new business area (new technology and staff)
• Complementary solutions (filling in a product line hole).
These reasons would lead to the conclusion that startups bring new technology to customers more effectively than large established companies. Considering that popular technologies such as data deduplication, thin provisioning and iSCSI storage were originally brought to market by startups, there is merit to this line of thinking. But it is not such a simple issue. Large vendors do have brilliant and dedicated people, but developing and bringing a product to market in these companies can be a complicated process. That’s because their corporate structures often make it difficult to take a new idea or approach and bring it to reality.
Large companies have processes that their people are required to follow, making it difficult to innovate. Any initiative or idea must conform to their interpretation of the company process, and there are organizations and people inside each company that can create enough resistance to hinder realization of the new ideas. I call these people and processes the Department of Revenue Prevention.
If a large company has an entrenched Department of Revenue Prevention, it is easier for people with ideas to take them through the startup route. That route has less resistance, and innovators’ time and efforts are not spent battling the department but actually moving the innovation to market. Unfortunately, the rewards may be limited based on what must be given up to get the funding necessary to take the technology innovation to a product stage. Ultimately a startup may not be successful for a variety of reasons, including:
• A bad board assigned by investors who do not understand the market, the technology, or what is required to bring the technology to fruition.
• Missing or subpar key people in areas such as strategy, marketing, and sales.
• Technology that does not meet customer needs at the right time – either arriving too early or too late.
Large companies that understand how to nurture and develop the ideas of their talented people will be more successful than those that succumb to the bureaucratic sprawl and paralyzing Department of Revenue Prevention structure. Even inside these great companies, things change over time and bureaucracy spreads. To re-invigorate a company requires periodic review and change to enable innovation. It’s either that, or continue to acquire other people’s ideas.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
While EMC formally launched its VMAXe enterprise storage system to compete with IBM’s XIV (as well as Hewlett-Packard’s 3PAR) this week, IBM was giving XIV an overhaul.
IBM launched what it calls XIV Gen 3 with InfiniBand connectivity between modules, 2.4 GHz quad-core Nehalem CPUs, 2 TB native SAS disks, and 8 Gbps Fibre Channel support. By next year, IBM also expects to offer up to 500 GB of solid-state drive (SSD) capacity per module for a total of 7.5 TB in a fully configured 15-module system. According to the Inside System Storage blog of Tony Pearson, senior managing consultant for IBM System Storage, XIV will use SSDs as an extension of DRAM cache, similar to NetApp’s Performance Acceleration Module (PAM) – a product IBM resells as its N Series.
None of the XIV enhancements are ground-breaking, but IBM claims to get a two- to four-times boost over Gen 2 for workloads such as transaction processing, sequential reads and writes, and file and print services, and applications such as Microsoft Exchange and Hyper-V, Oracle Data Warehouse, and SAS Analytics Reports.
IBM will keep XIV Gen 2 around for at least a year for customers who don’t need the new system’s performance or capacity (Gen 2 uses 1 TB drives).
In case you’re wondering, Gen 2 was the first version of the product that IBM launched in Sept. 2008 after acquiring XIV the previous January. Gen 2 had different disks, controllers, interconnects and software enhancements over the Gen 1 product that it bought from XIV.
While IBM characterized XIV as a Web 2.0 system when it first purchased it – the same label EMC used to describe it during the VMAXe launch – Pearson wrote that XIV is a full-blown enterprise system that competes with EMC’s high-end VMAX. “As if I haven’t said this enough times already, the IBM XIV is a Tier-1, high-end, enterprise-class disk storage system, optimized for use with mission critical workloads on Linux, UNIX and Windows operating systems, and is the ideal cost-effective replacement for EMC Symmetrix VMAX, HDS USP-V and VSP, and HP P9000 series disk systems,” Pearson wrote.
He did point out, though, that the DS8000 remains IBM’s platform for mainframe connectivity.
The XIV launch was low-key and played second fiddle to Big Blue’s zEnterprise 114 mainframe server rollout this week, as Enterprise Strategy Group analyst Mark Peters pointed out on his The Business of Storage blog. Peters was generally impressed by the new XIV, though.
“The third generation of XIV is all about adding performance – and plenty of it,” Peters wrote. “Besides more cache, more/faster ports, and a change to SAS drives, there’s also Infiniband connectivity within the XIV (helping, surprise surprise, with performance) and ‘spare’ CPU and DRAM slots for ‘future software enhancements’ … IBM is keen to point out that the SSD is transparent caching, with no tiers per se to manage. Of course, it would be, since XIV has always proclaimed there’s no need to tier. But, pragmatically, as a user I’d only worry if it economically makes the system better and still does it without me needing to manage things. Assuming so, then I’ll give it a thumbs up and leave the semantic debate to others.”