QLogic’s acquisition of Ethernet controller assets from Broadcom this week won’t have much immediate effect on storage, but it could become important in a few years.
The $147 million deal will give QLogic 40-gigabit and 100-gigabit Ethernet, RDMA and virtualization technologies it currently lacks. QLogic also gets about 170 engineers in the deal, and claims the Broadcom portfolio makes it No. 2 in the Ethernet controller market behind Intel.
What makes the deal interesting is QLogic’s long-term roadmap. During a conference call to explain the deal, QLogic executives laid out plans to add services such as caching, replication, deduplication, encryption and monitoring on converged Ethernet-Fibre Channel controllers. That would turn QLogic from a network connectivity vendor to a platform vendor.
The first step is to port the Mt. Rainier caching technology used in QLogic’s FabricCache FC host bus adapters to Ethernet cards. FabricCache serves as a caching SAN adapter; a cluster of FabricCache cards can access all the combined caches in that cluster.
The other features on QLogic’s roadmap – all important in storage and data protection – will follow. Completing the entire list will likely take at least three years, and a lot can happen in that time. For one, features such as replication, dedupe and encryption may already be common in storage products by then.
Vikram Karvat, QLogic VP of marketing, admitted those features are likely to be built into high-end storage arrays by then, but he said they will fit into other platforms.
“Some things that are considered high-end features are migrating into other markets, such as private or public clouds, or appliances,” he said. “This will allow non-traditional providers to add them.”
Based on a review of all-solid state storage systems, it may be time for IT professionals to re-think their strategy regarding the lifespan of storage systems.
The storage system lifespan is a critical part of a storage strategy for scheduling replacements, planning acquisition costs and calculating total cost of ownership (TCO). The lifespan determines the amortization or depreciation schedule for the purchased asset.
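The arithmetic behind that planning is straightforward. A minimal sketch, using hypothetical example figures rather than any vendor’s pricing:

```python
# A minimal sketch of how the planned lifespan feeds depreciation and TCO.
# All numbers are hypothetical illustrations, not vendor pricing.

def straight_line_depreciation(purchase_price, lifespan_years):
    """Annual depreciation expense over the planned lifespan."""
    return purchase_price / lifespan_years

def total_cost_of_ownership(purchase_price, annual_maintenance, lifespan_years):
    """TCO over the lifespan: acquisition cost plus cumulative maintenance."""
    return purchase_price + annual_maintenance * lifespan_years

# A $500,000 system planned for 4 years vs. 7 years: the longer lifespan
# cuts the annual depreciation charge but accumulates more maintenance.
print(straight_line_depreciation(500_000, 4))        # 125000.0 per year
print(total_cost_of_ownership(500_000, 60_000, 4))   # 740000
print(total_cost_of_ownership(500_000, 60_000, 7))   # 920000
```

This is why a longer supportable lifespan changes the TCO math: the acquisition cost is spread over more years, provided maintenance costs do not climb prohibitively in the later years.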
Two major technology factors are important to consider when changing the strategy for a storage system lifespan. The first is the use of scale-out storage systems and the ability to replace individual nodes transparently for a technology upgrade. I will deal with this case in a future article. The second factor for consideration is the use of solid state storage.
The lifespan of a storage system is not the point at which the system stops being usable or reaches its “use by” date. It is really about planned replacement, with several elements playing into the replacement decision:
- Wear out. The system may have characteristics, such as mechanical wear, that cause the failure rate to increase over time, which leads to more costly service and potential disruption.
- Maintenance costs. Storage systems carry a warranty period, and maintenance costs begin to increase after that period ends. The costs become more prohibitive due to the increased likelihood that service will be required.
- Migration of data. Many storage systems do not have transparent or seamless migration of data to a new system. Scheduled replacement does not alleviate the problem but does make it a planned activity.
Solid state storage – today in the form of NAND flash – has different wear-out mechanisms than spinning disk. Most storage vendors have made tremendous strides in managing NAND flash wear, by changing how page erases are done and by using shadow RAM to minimize the number of erases. These improvements have continued to increase the lifespan of systems from vendors that have made the technology investment.
Vendors of solid state systems are now quoting longer lifespans for their systems. There are two notable examples:
Pure Storage has changed its upgrade and maintenance model with an approach called Pure Storage Forever Flash. The approach includes a five-year maintenance plan with a free controller upgrade every three years when renewing the maintenance. With every upgrade, the maintenance terms for the storage system can be reset to the current pricing, which is expected to be lower.
Nimbus Data offers up to a 10-year end-to-end warranty for its system.
The warranty period and upgrade plans will become competitive issues among vendors. Customers will see the economic benefit of the better offerings in TCO calculations, and other vendors will have to react. The change can improve the long-term economics for customers using solid state storage systems. It is also an indicator of how NAND flash technology is evolving: flash is getting less expensive, and data reduction adds to the cost savings. The change also reflects how solid state is generally purchased today – for a specific purpose or application workload, which leads to sizing that matches those needs. As prices change and application capacity needs increase, another system is purchased, increasing the amount of solid state storage in use.
For vendors, longer warranties can be costly if the systems do not live up to the promises. Service cost estimates for storage systems are calculated on intrinsic failure rates, or on demonstrated failure rates where field data exists. The warranty period requires a financial reserve taken from sales revenue to cover the costs. Vendors do not make the decision to extend a warranty lightly, but competitive pressures do factor in.
It should be expected that more all solid state storage system vendors will extend the warranty for the system in some manner. The bigger issue for the customer is the change in the strategy for acquiring storage — and planning for replacement.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Tintri CEO Ken Klein says the vendor will use its $75 million funding haul collected this week to grow the company substantially in hopes of going public next year.
Tintri, which sells arrays designed specifically for virtual machines, will use its latest funding to invest in “an aggressive product roadmap, double our sales force, triple our marketing spend and build out our infrastructure to be a public company,” Klein said.
“Our plan is to become the next great storage company.”
Tintri claims revenue from its VMstore storage arrays increased 115 percent last year over 2012, and that its customers are running more than 100,000 virtual machines in full production.
Klein became CEO last October when founder and former CEO Kieran Harty moved into the CTO role. Klein said no change of direction was required and he was brought on to oversee the company’s growth.
One change in Tintri is it now bills itself as “application-aware” storage instead of “VM-aware storage.” Klein said that’s because Tintri will add features such as application-specific quality of service and other capabilities that focus on applications rather than VMs.
“I believe VMs are proxies for applications,” he said. “We will be adding more capabilities for applications, that’s the direction of the company. We upleveled the messaging.”
As for the aggressive product roadmap, Klein said Tintri will add more enterprise features and cloud functionality. He added, “We’re always looking at designing new platforms” to complement its VMstore arrays.
The E series funding brings Tintri’s total to $135 million, and added Insight Venture Partners to its list of investors. Previous investors Lightspeed Venture, Menlo Ventures and NEA also participated.
NetApp’s earnings report this week was similar to EMC’s report last month. Like EMC, NetApp saw revenue come in a tad below expectations, while its forecast missed by a larger margin. The problems are the same – IT shops are taking a long look at storage systems before they buy, and they are buying less. Companies are considering more ways to use the cloud and flash, and that is disrupting traditional storage sales.
NetApp executives are looking to win market share in this new world by helping customers move into the cloud with its OnTap storage operating system – which they now commonly refer to as software-defined storage – and gain performance through a myriad of flash products.
NetApp CEO Tom Georgens said during Wednesday’s earnings call that it is inevitable that the cloud will cut into the amount of storage people buy. The key for storage vendors, he said, will be to help customers better manage the data they move into the cloud.
“We’re not going to deny that data has gone to the cloud,” he said. “So I believe that will depress somewhat the growth rate of the industry or some of the historical norms. But our challenge is to recognize that data is going to go to cloud and the needs of customers to manage that data well and protect that data always gets more acute. So our point of view is we have Clustered OnTap and Data OnTap, and manage that data in our hardware and other people’s hardware.”
“Is the cloud mainstream? No, it’s not. And I think the [storage company] that enables customers to make it mainstream is going to be the winner in this race.”
NetApp’s long-time cloud strategy has been to sell storage to cloud providers. Now it is also trying to tie on-premises use of OnTap to public clouds, via NetApp Private Storage for Amazon Web Services (AWS).
Instead of trying to match EMC’s software-defined storage strategy of bringing out a new platform (ViPR), NetApp pledges to do the job with OnTap – especially Clustered OnTap. Georgens said the vendor will soon bring out its first clustered-optimized FAS arrays. Presumably, the next step would be to tie clustered capabilities into its offering for cloud providers.
With flash, NetApp faces intense competition from the large storage vendors as well as startups that beat the big guys into the market with all-flash enterprise arrays. NetApp said it has shipped nearly 75 PB of flash storage, including roughly 25 PB on its all-flash EF series and FAS arrays fully loaded with flash. NetApp has yet to announce the ship date of its FlashRay all-flash system, which will have storage management features, such as data reduction, that the EF series lacks. NetApp also sells flash as a cache and as server-side flash.
“It’s a jungle out there,” Georgens said of the flash market. As for NetApp’s strategy, he said, “We believe that customers will deploy flash at every layer in the stack to solve a wide variety of challenges. This market is clearly not one-size-fits-all.”
NetApp reported revenue of $1.61 billion, up four percent from the previous quarter but down one percent year-over-year. Wall Street analysts expected $1.63 billion. Its income of $158 million beat expectations, but its guidance of $1.6 billion to $1.72 billion for this quarter was below analyst expectations of $1.73 billion.
Reasons for the low guidance include cautious IT spending, particularly by the U.S. federal government, and uncertainty at IBM, which sells NetApp FAS and E-Series storage through OEM deals. IBM sold its x86 server business to Lenovo, which could have an impact on storage sales, and it has been emphasizing its internally developed storage over OEM systems.
Server-flash aggregation software provider PernixData Inc. this week added native support for VMware Inc. vSphere 5.5 and the vSphere web client with its newest product release at the VMware Partner Exchange 2014 in San Francisco. The startup also launched the PernixDrive technology alliance for interoperability with flash hardware devices.
PernixData’s FVP software aggregates server-side flash across storage and computing devices and decouples flash capacity from performance. PernixData calls FVP a flash hypervisor because it lets users cluster flash resources to support VMware features such as vMotion and high availability (HA), and create a scale-out flash strategy. FVP is deployed within the VMware hypervisor, not as separate software, so no changes are made to deployed virtual machines (VMs), servers or primary storage.
Jeff Aaron, PernixData’s vice president of marketing, said the FVP software now supports any version of vSphere from version 5.1 to version 5.5. “What’s exciting about that is that we’re one of the few vendors, if not the only vendor, that does true native support,” Aaron said. “What we mean is that you literally are a tab within vSphere for setting up our clusters, pulling reports, and managing everything. We’re not launching a separate application.”
Dave Russell, a vice president and distinguished analyst at technology research and consulting firm Gartner Inc., said adding vSphere 5.5 support will allow PernixData to reach more flash storage users. “That opens up a lot more of the market where otherwise they could roadmap, but they couldn’t sell product to anyone looking for shared flash,” Russell said.
Supporting vSphere’s web client extends FVP’s ease of management by letting users manage the software from within a web browser.
Aaron said PernixData has a roadmap to add support for other server hypervisors, especially Microsoft’s Hyper-V and the open-source KVM hypervisor.
According to Aaron, his company created the PernixDrive technology alliance program to advance collaboration and interoperability between PernixData and flash hardware vendors. Initial alliance members include Intel Corp., Kingston Technologies Corp., and Toshiba America Electronic Components Inc.
Russell said PernixData is facing two types of competitors right now — business-as-usual thinking and the new breed of flash-storage supporting companies. He said many organizations still throw more server-side flash at situations where they need more performance, without considering aggregating existing server flash resources.
PernixData is also one of the new companies supporting server-side flash implementation with software tools that make deployments more efficient and more flexible. Other companies in this group include SanDisk Corp. with its FlashSoft aggregating and caching software; Atlantis Computing’s ILIO software; and Proximal Data’s AutoCache software.
With an eye on hyper-scale virtualization and solid-state storage, the Fibre Channel Industry Association (FCIA) today laid out the roadmap for the Gen 6 Fibre Channel (FC) industry standard protocol, which allows raw speeds of up to 128 Gbps for storage area networks (SANs).
Gen 6 is 32-Gbps FC, but it will reach 128-gig through four striped lanes. Until now, each generation of FC technology has doubled bandwidth from the previous generation. Gen 5, which has been available since 2011 but is still in the relatively early days of adoption, supports 16-Gbps bandwidth. Gen 6 will be the first time an FCIA standard includes specifications to stripe four lanes.
“What people see is one connector from the host side and underneath it is made up of four lanes,” said Mark Jones, President of FCIA and director of technical marketing at Emulex. “Gen 6 is comprised of both 32 Gbps and 128 Gbps in parallel speeds.”
Gen 6 is expected to hit the market in 2016. It will provide 6,400 MBps full-duplex speeds, twice that of Gen 5 FC.
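The quoted figures line up with Fibre Channel’s naming convention, in which an N-gig FC link delivers roughly N × 100 MBps of data per direction after encoding overhead. A quick sanity check:

```python
def fc_full_duplex_mbps(gfc_speed):
    """By FC convention, an N-gig FC link moves about N * 100 MBps of data
    per direction after encoding overhead, so full duplex is N * 200 MBps."""
    return gfc_speed * 200

assert fc_full_duplex_mbps(16) == 3200   # Gen 5 (16-gig FC) full duplex
assert fc_full_duplex_mbps(32) == 6400   # Gen 6 single lane: twice Gen 5
assert 4 * 32 == 128                     # four striped 32-gig lanes = 128 Gbps
```

So the 6,400 MBps figure applies to a single 32-gig lane; the 128 Gbps headline number comes from striping four of those lanes behind one connector.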
“We have never been able to come to an agreement of four set of lanes,” said Skip Jones, chairman of the FCIA and director of technical marketing at QLogic. “With four lanes, you are aggregating each lane and put them in sequence. It’s not four trunked lanes. It looks like a small 128-port all the way up to the APIs.”
Emulex’s Jones said Gen 6 also includes several features that go beyond speed, including error correction coding (ECC) to maintain link quality, keep the error rate low and ensure data integrity. The Gen 6 Fibre Channel protocol is also backward compatible and adds better encryption, supporting the SP 800-131A information security standard from the National Institute of Standards and Technology (NIST).
“[Backward compatibility] is not new but we want to emphasize this because we thought we might lose people when we talk about 128 Gbps,” said Jones, of Emulex.
Gen 6 also includes N-Port ID Virtualization (NPIV), an ANSI T11 standard that describes how a single physical Fibre Channel HBA port can register with a fabric using several worldwide port names (WWPNs), which can be considered virtual WWNs.
“We are finding [NPIV] use is expanding among our user base,” said Emulex’s Jones.
George Crump, president and founder of Storage Switzerland, said it will be interesting to see how Gen 6 affects the server-side flash market because customers in this area generally are concerned with network latency.
“Gen 6 could cause people not to do server-side flash,” said Crump. “This has the potential to forestall some from going to server-side flash. Gen 6 will also drive down the price of Gen 5.”
I’m surprised to hear from IT people that storage vendors are still using “speeds and feeds” in their sales pitches. Salesmen for these companies talk about how fast and how big the storage systems could be.
When I asked what specific details were being emphasized, the list included:
- Bandwidth – The maximum total bandwidth the system could support.
- IOPS – The aggregate number for a fully configured system, given without information regarding response time.
- Processor – The type of processor, its clock rate, and the number of processor cores in the controller.
Maybe the presentations included more about the function and value of the storage systems, but this was the information the customers relayed.
I thought storage sales had moved beyond that. Most customers are looking to solve specific problems or address some complex workload needs. The most basic for traditional IT environments include:
- Capacity growth. There is a need for more storage, but not at the sacrifice of getting the same relative amount of work done. This means not just adding capacity, but maintaining the same ability to access that capacity. This is usually measured as access density: the number of I/Os possible divided by the capacity.
- Workload requirements. Some workloads need improvement. The most commonly cited needs are improving transaction processing, increasing virtual machine density (the number of VMs per physical server), and increasing the number of virtual desktops supported per physical server and storage system. These have performance requirements that are much more complex than the speeds-and-feeds numbers presented. Necessary improvements include reducing the latency per I/O to allow write-dependent transactions to move ahead. An aggregate IOPS number can be very misleading in this case.
- Consolidation of storage with a technology upgrade. This is usually a generational change for storage that can be caused by the end of the financial life of the storage system (usually dictated by increased cost of maintenance) or perceived technical obsolescence. The expectation is the new system will provide greater capacity and performance to allow consolidation of multiple older storage systems. This brings improvements in the amount of power, cooling, and physical space required. Consolidation is really a workload discussion as well.
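The access density measure from the first bullet can be made concrete with a small sketch (the IOPS and capacity figures are hypothetical):

```python
def access_density(iops, capacity_tb):
    """Access density: I/Os per second available per TB of capacity."""
    return iops / capacity_tb

# Doubling capacity without adding I/O capability halves access density,
# which is why "just add capacity" can degrade relative performance.
print(access_density(100_000, 50))    # 2000.0 IOPS per TB
print(access_density(100_000, 100))   # 1000.0 IOPS per TB
```

That ratio, not raw capacity or aggregate IOPS alone, is what tells a customer whether a bigger system can still complete the same relative amount of work.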
The simple speeds and feeds sales approach is a throwback. Most sales have moved beyond this, recognizing the sale is all about solving a problem or meeting a need for the customer. In solving a problem or meeting a need, the salesmen must understand the customer and not just present the speeds and feeds attributes. Proposing a product with a focus on those attributes can only short-circuit that understanding. It pushes the responsibility for finding the correct solution onto the customer.
This brings to mind the recent Super Bowl commercial from Radio Shack with stars from the 1980s and the message “The 80’s called. They want their store back.” Within days of the ad airing, Radio Shack announced the closing of 500 stores. Maybe this should be a hint about the speeds and feeds sales approach.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
DataGravity, the storage startup founded by EqualLogic veterans Paula Long and John Joseph, is in the whisper stage about its product that is scheduled to launch this year.
The startup is still keeping product details under wraps, and won’t get any more specific on a launch date than to say the product will be generally available in 2014. But DataGravity will have a team headed by president Joseph at VMware Partner Exchange this week to woo channel partners. Joseph said DataGravity’s unstructured data management system is in beta following an alpha program that started last June.
Joseph said the DataGravity product will sell as an appliance, with the hardware serving as a delivery mechanism for the software that will manage, analyze and unlock the business value of file data.
“We want to get out of being a box pusher and into offering data solutions,” Joseph said of the system. “I would like to bring insight to our customers’ file-oriented business content.”
Joseph said he expects the appliance to be installed by a storage administrator, but will be a tool for business units rather than merely for storing data.
“The value starts in IT and motions out to the line of business,” he said. “We have the capability that IT people will love, but people in the legal department, finance, HR and marketing are going to say ‘Holy smokes, my time to answer has been cut in half by using this product and it’s so user-friendly I can get what I need in a fraction of the time.’”
Long, who was responsible for the engineering expertise behind the EqualLogic iSCSI SAN startup that Dell acquired for $1.4 billion in 2008, isn’t talking technology yet regarding DataGravity. In a YouTube video produced to drop hints about the product, CEO Long says DataGravity is “out to change table stakes for storage,” is “turning the light on to your data” and “will bring color and flavor and language to your storage.”
So if your storage looks and tastes bland and doesn’t say much, DataGravity will change all that.
VMware historically has had a good relationship with storage vendors, especially those focused on storing and protecting virtual machines. The annual VMworld conference draws more storage vendors than any storage-dedicated show. The VMware Partner Exchange (PEX) show has also been storage vendor-friendly, although less so this year.
As first reported by CRN, VMware asked Nutanix and Veeam Software to stay away from PEX next week. Those two vendors have had success with products that help organizations running VMware hypervisors. Nutanix sells hyper-converged systems that include storage, servers and VMware hypervisors in one box, and Veeam sells backup for virtual machines.
But VMware has also become more competitive with Nutanix and Veeam as it expanded its capabilities and products over the past couple of years. VMware’s Virtual SAN (vSAN), currently in beta, is a software-only version of what Nutanix does. Nutanix also competes with Vblocks, sold by VCE, the joint venture of VMware, its parent EMC and Cisco. And VMware’s vSphere Data Protection (VDP) backup product, an OEM version of EMC’s Avamar software, directly competes with Veeam’s Backup & Replication.
Still, VMware isn’t telling all hyperconverged systems and VM backup vendors to stay away from PEX. SimpliVity, which also sells hyper-converged systems that bundle VMware hypervisors, will be at the show and its CEO Doron Kempel will be a speaker. Unitrends PHD, which also handles VM backups, will also be there.
Why Nutanix and Veeam? Maybe because of their success. They are among the fastest growing storage vendors, according to the numbers released by the private companies. Nutanix claims it has gone over $100 million in revenue in barely two years of selling products, and forecasts more than $80 million for 2014. Veeam claims its annual revenue passed $100 million in 2012, and that its software protects more than 5.5 million VMs. These vendors can also be seen as growing threats to EMC and VMware’s larger storage partners, including NetApp, Dell, Hewlett-Packard and IBM.
Although Veeam now protects Microsoft Hyper-V VMs too, much of its early success came from customer referrals from VMware. Doug Hazelman, Veeam vice president of product strategy, said he would not speculate on why Veeam is no longer welcome at PEX, but said his company still considers VMware an ally.
“We still have a good relationship with VMware,” he said. “The vast majority of the more than 80,000 customers we have are running on VMware. In the software industry there is always overlap between vendors like VMware and Microsoft and their partners, and Veeam is no different.”
Hazelman wouldn’t say if he thought being officially absent from PEX will hurt business, but he said Veeam will send a team to the show for meetings with VMware partners. “We have a strong and vocal customer base and partner base,” he said. “Just because we may not be on the show floor or at the partner exchange, doesn’t mean we won’t be out there.”
Despite all the hype around Fibre Channel over Ethernet (FCoE) a few years ago, old-fashioned Fibre Channel (FC) remains the dominant SAN protocol.
A report released today by technology research firm Evaluator Group shows there is good reason for that. Evaluator Group testing found FC significantly faster than FCoE with far less CPU utilization. FC also required fewer cables and less power than FCoE, according to the report.
Before we get into the numbers, I want to point out that FC-centric Brocade funded the testing. Brocade sells FCoE gear too, but has been more bullish on FC while its rival Cisco has been FCoE’s chief evangelist. That doesn’t mean the results were skewed – Evaluator Group senior partner Russ Fellows said his group conducted the tests at its labs without vendor interference – but Brocade may not have released the results if FC did not come out a clear winner.
Evaluator Group used a Hewlett-Packard BladeSystem c7000 chassis with 16 Gbps FC switching and HBAs on the FC side. For FCoE, it used a Cisco UCS 5108 blade chassis and 10-Gigabit Ethernet (GbE) switching. In both cases, the storage was a 16-gig FC solid-state array.
The difference in response times between FC and FCoE didn’t show up until workloads surpassed 70% SAN utilization. However, FC response times were two to 10 times faster than FCoE as workloads surpassed 80% SAN utilization. FC also used 20 percent to 30 percent less CPU power than FCoE, according to the report.
Speed and low latency aren’t FCoE selling points, so those results were no big surprise. Less cabling and power are supposed to be FCoE’s advantages, however, so it was a surprise that FC required 50% fewer cables for LAN and SAN connectivity. “This highlights and confirms the inaccuracy of the FCoE claims of fewer cables and connections,” the report states.
The tests also found the Cisco UCS required 50% more power and cooling than the HP blade with FC equipment.
The tests also determined that FC has more predictable performance than FCoE: at 50% utilization, FCoE’s standard deviation relative to its average response time was twice as large as FC’s, and at 90% workload utilization the difference was 10 times as great.
“If you have a high-performing application and use solid state storage, Fibre Channel is the better way to go,” Fellows said. “There is less overhead and better performance. I was surprised that Fibre Channel looked as much better than it did. The cabling and power advantage was a bit of a surprise, too.”
Fellows added that CPU utilization was almost identical when using a hardware initiator for FCoE. The test results for the report used a software initiator because that is the standard configuration for UCS, but FCoE performed better in subsequent tests using hardware initiators.
FCoE adoption for storage has been slow, for several reasons. Fellows said that while FCoE performance is good enough for many workloads, he doesn’t expect it to supplant FC any time soon. “It will continue to roll out, but I don’t think adoption will be that strong,” he said. “I think FCoE will be similar to iSCSI – it will work, people will use it and it will expand, but iSCSI hasn’t taken over the world yet.”