NetApp’s earnings report this week was similar to EMC’s report last month. Like EMC’s, NetApp’s revenue came in a tad below expectations while its forecast missed by a larger margin. The problems are the same – IT shops are taking a long look at storage systems before they buy, and they are buying less. Companies are considering more ways to use the cloud and flash, and that is disrupting traditional storage sales.
NetApp executives are looking to win market share in this new world by helping customers move into the cloud with its OnTap storage operating system – which they now commonly refer to as software-defined storage – and gain performance through a myriad of flash products.
NetApp CEO Tom Georgens said during Wednesday’s earnings call that it is inevitable that the cloud will cut into the amount of storage people buy. The key for storage vendors, he said, will be to help customers better manage the data they move into the cloud.
“We’re not going to deny that data has gone to the cloud,” he said. “So I believe that will depress somewhat the growth rate of the industry or some of the historical norms. But our challenge is to recognize that data is going to go to cloud and the needs of customers to manage that data well and protect that data always gets more acute. So our point of view is we have Clustered OnTap and Data OnTap, and manage that data in our hardware and other people’s hardware.”
“Is the cloud mainstream? No, it’s not. And I think the [storage company] that enables customers to make it mainstream is going to be the winner in this race.”
NetApp’s long-time cloud strategy has been to sell storage to cloud providers. Now it is trying to tie on-premises use of OnTap to public clouds, with NetApp Private Storage for Amazon Web Services (AWS).
Instead of trying to match EMC’s software-defined storage strategy of bringing out a new platform (ViPR), NetApp pledges to do the job with OnTap – especially Clustered OnTap. Georgens said the vendor will soon bring out its first cluster-optimized FAS arrays. Presumably, the next step would be to tie clustered capabilities into its offering for cloud providers.
With flash, NetApp faces intense competition from the large storage vendors as well as startups that beat the big guys into the market with all-flash enterprise arrays. NetApp said it has shipped nearly 75 PB of flash storage, including roughly 25 PB on its all-flash EF series and FAS arrays fully loaded with flash. NetApp has yet to announce the ship date of its FlashRay all-flash system, which will have storage management features such as data reduction that the EF series lacks. NetApp also sells flash as a cache and server-side flash.
“It’s a jungle out there,” Georgens said of the flash market. As for NetApp’s strategy, he said, “We believe that customers will deploy flash at every layer in the stack to solve a wide variety of challenges. This market is clearly not one-size-fits-all.”
NetApp reported revenue of $1.61 billion, up four percent from the previous quarter but down one percent year-over-year. Wall Street analysts expected $1.63 billion. Its income of $158 million beat expectations, but its guidance of $1.6 billion to $1.72 billion for this quarter was below analyst expectations of $1.73 billion.
Reasons for the low guidance include cautious IT spending, particularly by the U.S. federal government, and uncertainty at IBM, which sells NetApp FAS and E-Series storage through OEM deals. IBM sold its x86 server business to Lenovo, which could have an impact on storage sales, and it has been emphasizing its internally developed storage over OEM systems.
Server-flash aggregation software provider PernixData Inc. this week added native support for VMware Inc. vSphere 5.5 and the vSphere web client with its newest product release at the VMware Partner Exchange 2014 in San Francisco, Calif. The startup also launched the PernixDrive technology storage alliance for interoperability with flash hardware devices.
PernixData’s FVP software aggregates server-side flash across storage and computing devices and decouples flash capacity from performance. PernixData calls FVP a flash hypervisor because it lets users cluster flash resources while retaining support for VMware features such as vMotion and high availability (HA), and create a scale-out flash strategy. FVP is deployed within the VMware hypervisor, not as separate software, so no changes are made to deployed virtual machines (VMs), servers, or primary storage.
Jeff Aaron, PernixData’s vice president of marketing, said the FVP software now supports any version of vSphere from version 5.1 to version 5.5. “What’s exciting about that is that we’re one of the few vendors, if not the only vendor, that does true native support,” Aaron said. “What we mean is that you literally are a tab within vSphere for setting up our clusters, pulling reports, and managing everything. We’re not launching a separate application.”
Dave Russell, a vice president and distinguished analyst at Gartner Inc., a technology research and consulting firm, said adding vSphere 5.5 support will allow PernixData to reach more flash storage users. “That opens up a lot more of the market where otherwise they could roadmap, but they couldn’t sell product to anyone looking for shared flash,” Russell said.
Supporting vSphere’s web client extends FVP’s ease of management by allowing users to use vSphere to manage the software within a web browser.
Aaron said PernixData has a roadmap to add support for other server hypervisors, especially Microsoft’s Hyper-V and the open-source KVM hypervisor.
According to Aaron, his company created the PernixDrive technology alliance program to advance collaboration and interoperability between PernixData and flash hardware vendors. Initial alliance members include Intel Corp., Kingston Technologies Corp., and Toshiba America Electronic Components Inc.
Russell said PernixData is facing two types of competitors right now — business-as-usual thinking and the new breed of flash-storage supporting companies. He said many organizations still throw more server-side flash at situations where they need more performance, without considering aggregating existing server flash resources.
PernixData is also one of the new companies supporting server-side flash implementation with software tools that make deployments more efficient and more flexible. Other companies in this group include SanDisk Corp. with its FlashSoft aggregating and caching software; Atlantis Computing’s ILIO software; and Proximal Data’s AutoCache software.
With an eye on hyper-scale virtualization and solid-state storage, the Fibre Channel Industry Association (FCIA) today laid out the roadmap for the Gen 6 Fibre Channel (FC) industry standard protocol, which allows speeds of up to 128 Gbps for storage area networks (SANs).
Gen 6 is 32-Gbps FC, but it will reach 128-gig through four striped lanes. Until now, each generation of FC technology has doubled bandwidth from the previous generation. Gen 5, which has been available since 2011 but is still in the relatively early days of adoption, supports 16-Gbps bandwidth. Gen 6 will be the first time an FCIA standard includes specifications to stripe four lanes.
“What people see is one connector from the host side and underneath it is made up of four lanes,” said Mark Jones, President of FCIA and director of technical marketing at Emulex. “Gen 6 is comprised of both 32 Gbps and 128 Gbps in parallel speeds.”
Gen 6 is expected to hit the market in 2016. It will provide 6,400 MBps full-duplex speeds, twice that of Gen 5 FC.
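The throughput figures above follow the convention used in FC roadmaps: roughly 100 MBps of payload per Gbps of line rate, counted in both directions for full duplex. A quick sketch of that arithmetic (the 100 MBps-per-Gbps factor is the roadmap convention, not a measured value):

```python
# Sketch of the FC roadmap throughput arithmetic.
# Assumption: quoted MBps figures are full-duplex payload rates,
# approximately (line rate in Gbps) x 100 MBps per direction.

def fc_full_duplex_mbps(lane_gbps: float, lanes: int = 1) -> float:
    """Approximate full-duplex throughput in MBps for an FC link."""
    per_direction = lane_gbps * 100   # ~100 MBps of payload per Gbps of line rate
    return per_direction * 2 * lanes  # both directions, across all striped lanes

gen5 = fc_full_duplex_mbps(16)              # single-lane 16GFC -> 3200.0 MBps
gen6_serial = fc_full_duplex_mbps(32)       # single-lane 32GFC -> 6400.0 MBps
gen6_parallel = fc_full_duplex_mbps(32, 4)  # four striped lanes ("128GFC")
```

This matches the doubling described above: Gen 6’s single-lane 6,400 MBps is twice Gen 5’s 3,200 MBps, and the four-lane variant quadruples it again.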
“We have never been able to come to an agreement of four set of lanes,” said Skip Jones, chairman of the FCIA and director of technical marketing at QLogic. “With four lanes, you are aggregating each lane and put them in sequence. It’s not four trunked lanes. It looks like a small 128-port all the way up to the APIs.”
Emulex’s Jones said Gen 6 also includes several features that go beyond speed, such as error code correction (ECC) to maintain the quality of the links, keep the error rate low and ensure data integrity. The Gen 6 Fibre Channel protocol is also backward compatible and offers better encryption, supporting the SP 800-131A information security standard of the National Institute of Standards and Technology (NIST).
“[Backward compatibility] is not new but we want to emphasize this because we thought we might lose people when we talk about 128 Gbps,” said Jones, of Emulex.
Gen 6 also includes N-Port ID Virtualization (NPIV), which is an ANSI T11 standard that describes how a single Fibre Channel physical HBA port can register with a fabric using several worldwide port names (WWPNs) that might be considered Virtual WWNs.
“We are finding [NPIV] use is expanding among our user base,” said Emulex’s Jones.
George Crump, president and founder of Storage Switzerland, said it will be interesting to see how Gen 6 affects the server-side flash market because customers in this area generally are concerned with network latency.
“Gen 6 could cause people not to do server-side flash,” said Crump. “This has the potential to forestall some from going to server-side flash. Gen 6 will also drive down the price of Gen 5.”
I’m surprised to hear from IT people that storage vendors are still using “speeds and feeds” in their sales pitches. Salesmen for these companies talk about how fast and how big the storage systems could be.
When I asked what specific details were being emphasized, the list included:
- Bandwidth – The maximum total bandwidth the system could support was presented.
- IOPS – The aggregate IOPS number for a fully configured system was given with no information about response time.
- Type of processor, clock rate of processor, and number of processor cores for the controller.
Maybe the presentations included more about the function and value of the storage system being presented, but this was the information the customers relayed.
I thought storage sales had moved beyond that. Most customers are looking to solve specific problems or address some complex workload needs. The most basic for traditional IT environments include:
- Capacity growth. There is a need for more storage, but not at the sacrifice of getting the same relative amount of work completed. This means not just adding capacity, but retaining the same ability to access that capacity. This is usually measured as access density, which is the number of IOPS possible divided by the capacity.
- Workload requirements. Some workloads need improvement. The most commonly cited needs are improving transaction processing, increasing virtual machine density (the number of VMs per physical server), and increasing the number of virtual desktops supported per physical server and storage system. These have performance needs, but they are much more complex than the speeds and feeds numbers presented. Necessary improvements include reducing the latency per I/O so write-dependent transactions can move ahead. An aggregate IOPS number can be very misleading in this case.
- Consolidation of storage with a technology upgrade. This is usually a generational change for storage that can be caused by the end of the financial life of the storage system (usually dictated by increased cost of maintenance) or perceived technical obsolescence. The expectation is the new system will provide greater capacity and performance to allow consolidation of multiple older storage systems. This brings improvements in the amount of power, cooling, and physical space required. Consolidation is really a workload discussion as well.
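The access-density metric mentioned in the capacity-growth point above is simple to compute. A sketch with hypothetical numbers, showing why adding capacity alone degrades the metric:

```python
# Access density = IOPS / capacity (hypothetical numbers for illustration).

def access_density(iops: float, capacity_tb: float) -> float:
    """IOPS available per TB of usable capacity."""
    return iops / capacity_tb

# A system delivering 50,000 IOPS across 100 TB:
before = access_density(50_000, 100)   # 500.0 IOPS per TB

# Double the capacity without adding performance, and access
# density is halved -- the same data is now harder to reach:
after = access_density(50_000, 200)    # 250.0 IOPS per TB
```

This is why the point above stresses adding capacity together with the ability to access it, not capacity alone.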
The simple speeds and feeds sales approach is a throwback. Most sales have moved beyond this, recognizing the sale is all about solving a problem or meeting a need for the customer. In solving a problem or meeting a need, the salesmen must understand the customer and not just present the speeds and feeds attributes. Proposing a product with a focus on those attributes can only short-circuit that understanding. It pushes the responsibility for finding the correct solution onto the customer.
This brings to mind the recent Super Bowl commercial from Radio Shack, with stars from the 1980s and the message “The 80s called. They want their store back.” Within days of the ad airing, Radio Shack announced the closing of 500 stores. Maybe this should be a hint about the speeds and feeds sales approach.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
DataGravity, the storage startup founded by EqualLogic veterans Paula Long and John Joseph, is in the whisper stage about its product that is scheduled to launch this year.
The startup is still keeping product details under wraps, and won’t get any more specific on a launch date than to say it will be generally available in 2014. But DataGravity will have a team headed by president Joseph at VMware Partner Exchange this week to woo channel partners. Joseph said DataGravity’s unstructured data management system is in beta following an alpha program that started last June.
Joseph said the DataGravity product will sell as an appliance, with the hardware serving as a delivery mechanism for the software that will manage, analyze and unlock the business value of file data.
“We want to get out of being a box pusher and into offering data solutions,” Joseph said of the system. “I would like to bring insight to our customers’ file-oriented business content.”
Joseph said he expects the appliance to be installed by a storage administrator, but will be a tool for business units rather than merely for storing data.
“The value starts in IT and motions out to the line of business,” he said. “We have the capability that IT people will love, but people in the legal department, finance, HR and marketing are going to say ‘Holy smokes, my time to answer has been cut in half by using this product and it’s so user-friendly I can get what I need in a fraction of the time.’”
Long, who was responsible for the engineering expertise behind the EqualLogic iSCSI SAN startup that Dell acquired for $1.4 billion in 2008, isn’t talking technology yet regarding DataGravity. In a YouTube video produced to drop hints about the product, CEO Long says DataGravity is “out to change table stakes for storage,” is “turning the light on to your data” and “will bring color and flavor and language to your storage.”
So if your storage looks and tastes bland and doesn’t say much, DataGravity will change all that.
VMware historically has had a good relationship with storage vendors, especially those who focus on storage and data protection for virtual machines. The annual VMworld conference draws more storage vendors than any storage-dedicated show. The VMware Partner Exchange (PEX) show has also been storage vendor-friendly, although less so this year.
As first reported by CRN, VMware asked Nutanix and Veeam Software to stay away from PEX next week. Those two vendors have had success with products that help organizations running VMware hypervisors. Nutanix sells hyper-converged systems that include storage, servers and VMware hypervisors in one box, and Veeam sells backup for virtual machines.
But VMware has also become more competitive with Nutanix and Veeam as it expanded its capabilities and products over the past couple of years. VMware’s Virtual SAN (vSAN), currently in beta, is a software-only version of what Nutanix does. Nutanix also competes with Vblocks, sold by the VCE joint venture made up of VMware, its parent EMC and Cisco. And VMware’s vSphere Data Protection (VDP) backup product, an OEM version of EMC’s Avamar software, directly competes with Veeam’s Backup & Replication.
Still, VMware isn’t telling all hyperconverged systems and VM backup vendors to stay away from PEX. SimpliVity, which also sells hyper-converged systems that bundle VMware hypervisors, will be at the show and its CEO Doron Kempel will be a speaker. Unitrends PHD, which also handles VM backups, will also be there.
Why Nutanix and Veeam? Maybe because of their success. They are among the fastest growing storage vendors, according to the numbers released by the private companies. Nutanix claims it has gone over $100 million in revenue in barely two years of selling products, and forecasts more than $80 million for 2014. Veeam claims its annual revenue passed $100 million in 2012, and that its software protects more than 5.5 million VMs. These vendors can also be seen as growing threats to EMC and VMware’s larger storage partners, including NetApp, Dell, Hewlett-Packard and IBM.
Although Veeam now protects Microsoft Hyper-V VMs too, much of its early success came from customer referrals from VMware. Doug Hazelman, Veeam vice president of product strategy, said he would not speculate on why Veeam is no longer welcome at PEX but said his company still considers VMware an ally.
“We still have a good relationship with VMware,” he said. “The vast majority of the more than 80,000 customers we have are running on VMware. In the software industry there is always overlap between vendors like VMware and Microsoft and their partners, and Veeam is no different.”
Hazelman wouldn’t say if he thought being officially absent from PEX will hurt business, but he said Veeam will send a team to the show for meetings with VMware partners. “We have a strong and vocal customer base and partner base,” he said. “Just because we may not be on the show floor or at the partner exchange, doesn’t mean we won’t be out there.”
Despite all the hype around Fibre Channel over Ethernet (FCoE) a few years ago, old-fashioned Fibre Channel (FC) remains the dominant SAN protocol.
A report released today by technology research firm Evaluator Group shows there is good reason for that. Evaluator Group testing found FC significantly faster than FCoE with far less CPU utilization. FC also required fewer cables and less power than FCoE, according to the report.
Before we get into the numbers, I want to point out that FC-centric Brocade funded the testing. Brocade sells FCoE gear too, but has been more bullish on FC while its rival Cisco has been FCoE’s chief evangelist. That doesn’t mean the results were skewed – Evaluator Group senior partner Russ Fellows said his group conducted the tests at its labs without vendor interference – but Brocade may not have released the results if FC did not come out a clear winner.
Evaluator Group used Hewlett-Packard BladeSystem c7000 chassis and 16 Gbps FC switching and HBAs on the FC side. For FCoE, Evaluator Group used Cisco UCS 5108 blade chassis and 10-Gigabit Ethernet (GbE) switching. In both cases, the storage was a 16 Gbps FC solid-state array.
The difference in response times for FC and FCoE didn’t show up until workloads surpassed 70% SAN utilization. However, FC response times were two to 10 times faster than FCoE as workloads surpassed 80% SAN utilization. FC also used 20 percent to 30 percent less CPU power than FCoE according to the report.
Speed and low latency aren’t FCoE selling points, so those results were no big surprise. Less cabling and lower power consumption are supposed to be FCoE’s advantages, however, so it was a surprise that FC required 50% fewer cables for LAN and SAN connectivity. “This highlights and confirms the inaccuracy of the FCoE claims of fewer cables and connections,” the report states.
The tests also found the Cisco UCS required 50% more power and cooling than the HP blade with FC equipment.
The tests also determined that FC has more predictable performance than FCoE: at 50% utilization, FCoE’s standard deviation relative to its average response time was twice as great as FC’s, and at 90% workload utilization the difference was 10 times as great.
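The predictability comparison above boils down to how large the standard deviation of response times is relative to the average. A sketch of that calculation, using made-up latency samples rather than the report’s actual data:

```python
# Predictability as standard deviation relative to the mean
# (coefficient of variation). Latency samples below are hypothetical,
# not figures from the Evaluator Group report.
import statistics

def variability_ratio(samples):
    """Standard deviation divided by mean -- lower means more predictable."""
    return statistics.stdev(samples) / statistics.mean(samples)

fc_latencies = [1.0, 1.1, 0.9, 1.0, 1.05]    # tightly clustered (ms)
fcoe_latencies = [1.0, 1.6, 0.5, 1.3, 0.7]   # same average, wider swings (ms)

# The FC samples vary far less around their mean than the FCoE samples,
# even though both sets average about 1 ms.
more_predictable = variability_ratio(fc_latencies) < variability_ratio(fcoe_latencies)
```

Two links can share the same average latency while one delivers far less consistent response times, which is what the ratio captures.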
“If you have a high-performing application and use solid state storage, Fibre Channel is the better way to go,” Fellows said. “There is less overhead and better performance. I was surprised that Fibre Channel looked as much better than it did. The cabling and power advantage was a bit of a surprise, too.”
Fellows added that CPU utilization was almost identical when using a hardware initiator for FCoE. The test results for the report used a software initiator because that is the standard configuration for UCS, but FCoE performed better in subsequent tests using hardware initiators.
FCoE adoption for storage has been slow, for several reasons. Fellows said that while FCoE performance is good enough for many workloads, he doesn’t expect it to supplant FC any time soon. “It will continue to roll out, but I don’t think adoption will be that strong,” he said. “I think FCoE will be similar to iSCSI – it will work, people will use it and it will expand, but iSCSI hasn’t taken over the world yet.”
Violin Memory today named Kevin DeNuccio as its CEO, and he must decide if he wants to try and turn around the struggling all-flash array vendor or sell it off.
DeNuccio replaces interim CEO Howard Bain, who held that position since Violin’s board fired Don Basile in December. Bain remains Violin’s chairman. Basile cut his remaining tie with Violin when he officially resigned from the board last Friday. DeNuccio, who also has the title of president, replaced Basile on the Violin board.
Basile left as CEO less than three months after Violin went public and saw its stock price plummet from its initial public offering price of $9 to $2.68. It didn’t help Basile that Violin’s first earnings report as a public company was disappointing, as it lost $34.1 million and missed its revenue and guidance targets.
Clinton Group, a Violin investor, has been pushing the board to sell the company. Clinton Group president Gregory Taxin wrote a letter to the Violin board in December urging it to sell the company. Last week he told Bloomberg at the Activist Investor Conference that Violin has received informal inquiries from five suitors.
DeNuccio won’t be making any public statements for at least a few days according to a Violin spokesperson, who said the new CEO will be tied up in meetings with employees, partners, investors and customers.
Although he is a director for solid-state drive (SSD) vendor SanDisk, DeNuccio’s background goes far deeper in telecommunications than storage. He most recently managed angel investor Wild West Capital, which he founded in 2012. He also served as CEO of Metaswitch Networks from 2010 to 2012 and Redback Networks from 2001 to 2008. He took Redback through Chapter 11 bankruptcy before selling it to Ericsson. He also held executive positions with Bell Atlantic Network Integration, Cisco, Wang Laboratories and Unisys Corp.
Data protection appliance vendor Quorum enters 2014 coming off what it claims are record sales with expectations to grow more this year, a fresh $10 million funding round and a still-emerging cloud DR market in front of it. At the same time, the company is in transition as it searches for a new CEO to manage its expected expansion.
Walter Angerer, who works for one of Quorum’s venture capitalists and sits on its board, became interim CEO in November when the company announced its latest funding. He replaced Larry Lang, who served as CEO since 2010.
“As you go through growth cycles, you grow out of the startup phase and enter a growth phase,” Angerer said, explaining the reason for the CEO change. “It was appropriate for us to hand things over to new leadership to accelerate growth.”
Angerer said he hopes to find a new CEO soon. He has reason to find somebody quickly. Besides his role as a venture partner with Quorum VC Toba Capital, Angerer is also founder and CEO of Parsec Labs, a NAS virtualization startup preparing to roll out its first products.
Quorum has already begun revamping its executive team with the addition of VP of marketing John Gallagher and VP of products Kemal Balioglu. Gallagher has storage experience with DataDirect Networks, LSI and EMC Isilon. Like Angerer, Balioglu spent time at Symantec.
Quorum bills its onQ appliances as “one-click” backup and recovery. Customers can use an appliance on-site and replicate to a second appliance at a DR site. Or, the second appliance can be hosted by a VAR or cloud provider.
Angerer said Quorum has close to 500 customers and it has doubled its revenue every year for the past three years. He predicts that Quorum will more than double revenue this year.
“There’s a lot more awareness and appreciation of the need for DR rather than just backup,” he said.
Gallagher said it will also be an active year for product releases. “Our engineering team is quite busy,” he said. “We’ll have core technology advances late in the first half of the year, and some more technologies that will expand our market in the second half.”
IBM’s sale of its x86 server business to Lenovo raises questions about IBM storage.
Lenovo did not buy any storage systems in the $2.3 billion transaction, but it will OEM and license IBM Storwize storage arrays, tape systems and General Parallel File System (GPFS) software as part of the deal.
I see much potential impact on the storage business from this deal. Certainly it is too soon to draw any conclusions, or even to speculate much, at this point. But there are some areas worth considering:
- Storage sales are often server-led, meaning customers often buy new storage systems when they purchase servers. The x86 server line did have storage drag along with it. That’s part of the reason why Lenovo will OEM IBM storage, which may mitigate some of the disruption for customers. But will the sales drag continue for the Storwize systems? Lenovo is a low-cost provider and there will be pricing pressure. And Lenovo may want a more commodity-priced offering.
- Will the pricing pressure drive down the prices of storage overall and make the storage business less profitable? With less profit, there will be less motivation to invest in development and the competitive pressures may be too great.
- A possible upside: Lenovo’s success as a low-cost provider could drive more volume and increase profit, allowing greater investments in the storage business by IBM.
- IBM’s service and support for enterprises has been a factor for customers buying IBM systems. Will that change after the acquisition? This could be a hidden impact that will take some time to surface.
- What about the IBM storage products not in the OEM deal? This is less of an issue because these systems are primarily sold with the server systems that IBM has held on to (Power and System z) or through independent storage sales. The Storwize systems should continue to be sold by IBM and its resellers, both as independent storage sales and with the Power servers. But with Lenovo selling the Storwize and tape systems, will there be conflict with IBM? There may be engagement rules established to handle this situation, but we don’t know that yet.
- What does the big picture look like for IBM? IBM is moving away from low-margin devices and systems. The threshold for selling off a business is not clear, and the decision was probably made for good business reasons. But the question comes down to whether the investment and resulting innovations in storage will produce enough added value to continue the investments. IBM has an excellent portfolio of storage products currently and some great people in sales, support, and in the channel and distributors. There is great potential to continue to be competitive and move forward with technology and solutions. This is what is expected of IBM.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).