Two of the storage startups with large bankrolls will have to spend a big piece of their cash on lawsuits rather than on business.
EMC Corp. fired a legal salvo at all-flash startup Pure Storage Inc., and NetApp Inc. has sued hybrid storage vendor Nimble Storage Inc. In both cases, the established vendors allege that former employees now working for the two startups stole trade secrets and customer lists, and solicited other employees in violation of employment agreements.
Both industry heavyweights are seeking monetary damages, injunctive relief to stop the defendants from using allegedly stolen materials, and the return of alleged company secrets. They are also making sure everyone knows how they feel about their upstart competitors.
Pure completed a $150 million funding round in August, and has a total of $245 million in venture funding. Nimble has $98 million in funding. Both startups plan to go public, and Nimble already filed its S-1 registration for an initial public offering. They also both face serious legal bills now.
EMC claims the theft of confidential information by former employees “arises out of a deliberate scheme advanced by Pure Storage through a nationwide pattern of collusion ….”
NetApp’s complaint describes Nimble as “a company built on unlawful hiring and business practices.”
EMC v. Pure Storage
In its complaint filed in the U.S. District Court in Massachusetts on Nov. 4, EMC said the theft of “tens of thousands of proprietary, highly confidential, and competitively sensitive EMC materials” by former employees now at Pure Storage violates the Key Employee Agreements (KEAs) each of those employees signed when they joined the company.
The agreements require employees to return any EMC materials in their possession when they leave the company, not to divulge any “company secrets” after leaving the company, not to solicit any EMC customers as employees of Pure Storage, and not to solicit any current EMC employees to leave the company.
The EMC complaint alleges that “These claims arise from conduct apparently orchestrated by or known to the highest executive management levels of Pure Storage.”
Most of the former EMC employees named in the suit are in sales, and the lawsuit is weighted heavily toward allegedly stolen sales trade secrets, including customer lists and “sensitive pricing solutions and strategies custom-tailored for each individual customer.”
Pure Storage CEO Scott Dietzen returned fire at EMC in a blog post Nov. 5, claiming EMC’s charges have “no merit whatsoever,” that Pure Storage will defend itself vigorously, and that it has the resources to do so – citing the company’s recent funding round.
Dietzen also criticized EMC’s own hiring practices, claiming that “in general more mature companies risk forgetting the golden rule—they are happy to recruit great people to join their companies from competitors (indeed they aggressively solicit such hires), but then resort to onerous non-compete agreements and lawsuits to deter the same employees from exercising their freedom to seek employment elsewhere.”
NetApp v. Nimble
NetApp filed its lawsuit against Nimble and three former employees Oct. 29 in the U.S. District Court for the Northern District of California. It claims that two of the three former employees violated the Computer Fraud and Abuse Act by using unauthorized access to NetApp’s computer systems to acquire confidential and proprietary information and pass the information on to Nimble.
NetApp also alleges that the three former employees violated their NetApp employment agreements by taking or keeping proprietary NetApp materials, and soliciting NetApp employees to join Nimble.
Generally, lawsuits against former employees that involve non-compete and employment agreements that last after an employee has left a company are hard to win because the courts view such agreements as restraint of trade that could hinder a person’s ability to gain employment.
But these cases center more on people who joined direct competitors immediately after leaving the plaintiff companies, and on whether they took sensitive information with them that is helping their new companies gain competitive advantages.
Ultimately, the question probably won’t be how “onerous” the EMC and NetApp employee agreements are. The key legal questions are whether the courts uphold the agreements, if the former employees breached the agreements, and whether EMC and NetApp suffered harm.
NetApp unveiled a controller and memory upgrade to its EF all-flash array system today, less than a week after EMC finally made its XtremIO flash platform generally available.
The EF550 replaces the EF540 that NetApp launched in early 2013. George Kurian, NetApp’s executive VP of product operations, said the vendor’s other flash platform – the FlashRay – will go into beta before the end of the year but won’t be generally available until 2014.
NetApp claims the EF550 delivers more than 400,000 sustained IOPS, around 100,000 IOPS more than the EF540. The new system uses 800 GB multi-level cell (MLC) SSDs and scales to 96 TB. A base system holds 12 or 24 drives, and can scale to 10 12-drive enclosures or five 24-drive enclosures.
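The quoted 96 TB ceiling is consistent with the drive math. A quick sketch using the article’s figures (decimal terabytes assumed):

```python
# Sanity-check the EF550's quoted 96 TB ceiling against its drive counts.
# All figures come from the article; TB here means decimal terabytes.

DRIVE_GB = 800  # 800 GB MLC SSDs

def max_capacity_tb(enclosures, drives_per_enclosure):
    """Raw capacity in TB for a fully populated configuration."""
    return enclosures * drives_per_enclosure * DRIVE_GB / 1000

# Both maximum configurations land on the same 120-drive, 96 TB ceiling.
print(max_capacity_tb(10, 12))  # 10 x 12-drive enclosures -> 96.0
print(max_capacity_tb(5, 24))   # five 24-drive enclosures -> 96.0
```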
NetApp claims it has shipped more than 550 EF540 arrays this year. “We believe that puts us in the number one or two market position for all-flash arrays,” Kurian said.
NetApp likens the performance of one EF550 enclosure to that of two full racks of traditional spinning drives. Kurian said database and virtual desktop infrastructure (VDI) acceleration are the major use cases for the EF flash platform.
Unlike the FlashRay, which will have a new operating system designed specifically for flash, the EF550 uses the same SANtricity operating system as other E-Series systems. During NetApp’s earnings report call last week, CEO Tom Georgens said the EF series “should lay to rest the canard” that flash storage systems need new disk controller technology to work.
The EF550 was part of an E-Series launch that also included the E2700 for remote offices and the E5500 high-performance midrange system. Those block storage systems replace the E2600 and E5400. The E2700 supports 12 Gbps SAS and can scale to 768 TB with 4 TB drives. The E5500 supports 16 Gbps Fibre Channel along with 10-Gigabit Ethernet iSCSI and InfiniBand, and can scale to 1.5 PB. The E2700 and E5500 can support SSDs for hybrid configurations.
Hyper-converged storage startup SimpliVity’s executives were in hyper-funding mode the last few months. SimpliVity closed a whopping $58 million funding round today, bringing its total to $101 million over three rounds.
CEO Doron Kempel said SimpliVity will use the cash to significantly grow the size of the company and its sales. SimpliVity’s OmniCube stack includes storage, server, and VMware hypervisor in one box, with the ability to cluster 40 units.
“This gives us a lot of dry powder and we plan to triple the size of our organization next year and multiply our sales by five times,” Kempel said of the funding round.
He said SimpliVity has around 130 employees now. He won’t disclose revenue but said the startup has more than 100 customers, many with more than one OmniCube. He said one customer bought six systems in 17 days.
Kempel said SimpliVity will add more form factors and capabilities next year. You can expect a smaller system in the 2 TB to 3 TB range for remote offices and support for the KVM hypervisor early in the year and Microsoft Hyper-V to follow.
When asked how he raised so much money, Kempel said the investors agreed with him that SimpliVity can take over the data center. “The IT stack has 12 products,” he said. “VMware virtualizes the servers, and we virtualize everything else.”
Nutanix, Scale Computing, and Pivot3 also sell hyper-converged hardware stacks, and software players are getting into the game. Last week Maxta came out of stealth with software that pools capacity and processing power on virtual machines. VMware’s Virtual SAN (vSAN) – currently in beta – behaves similarly.
Kleiner Perkins Caufield & Byers (KPCB) Growth and DFJ Growth venture companies led the SimpliVity funding round, with Meritech, Swisscom Ventures, Accel and Charles River Ventures participating.
As expected, NetApp took a big financial hit from the U.S. federal government shutdown in October. That hit caused the vendor to miss its revenue target for the quarter, and the continued uncertainty prompted a lower-than-expected forecast for this quarter.
NetApp’s $1.55 billion in revenue for last quarter – which ended Oct. 31 – was below the $1.6 billion Wall Street expectation. Its forecast for this quarter of between $1.575 billion and $1.675 billion fell short of the $1.69 billion consensus expectation.
EMC also missed its expectations last quarter, blaming it largely on the government shutdown. NetApp relies even more on government sales than EMC, and usually cashes in when the government fiscal year ends in September and agencies spend the remainder of their budgets. NetApp CEO Tom Georgens said federal government revenue fell $85 million short of NetApp’s expectation last quarter, leading to the $50 million revenue miss.
“Usually there’s a lot of [government] money sloshing around at the end of the fiscal year, and it generates a very, very frothy September,” Georgens said on NetApp’s earnings call.
“You put that $85 million back … and this would have been a blowout quarter across every metric.”
That $85 million isn’t likely to be put back soon, though. The resolution to the shutdown left the door open to another one in February, leading to the soft forecast. “Really nothing’s been resolved, right?” Georgens said. “We just pushed the continuing resolution out to January and the debt ceiling to February. It’s likely that the sequester spending levels will remain intact no matter what happens, and this could just be kicked down the road another 90 days.”
Georgens said one thing that did not hurt NetApp last quarter is the emergence of all-flash arrays on the market. He said NetApp’s strategy of selling hybrid flash arrays with its Data OnTap-based systems and the all-flash EF540 high-performance platform is working. The vendor is also expected to launch its FlashRay all-flash platform in 2014.
NetApp’s main rival EMC launched its first all-flash array today.
“I think the success and the raw performance of the EF540 should lay to rest the canard that there’s something magical about flash drives and that technology built around prior disk technology is somehow irrelevant,” he said. “The fact that the EF540 can bring high performance and mature HA [high availability] to that environment is a key differentiator. And that pretty much knocks out a lot of the startup companies and the immature products from the other mature vendors out of that category.”
He said a Data Ontap-based all-flash array would be less successful, but not because of the controller technology.
“Ontap is really built around a broad feature set around data management, which are not necessarily suited to an all-flash array,” Georgens said. “The all-flash array is about delivering performance at low latency to applications and that’s really the optimized design point for the EF540.”
Georgens said around 60% of NetApp storage is sold with some type of flash, including 6 PB of flash as a cache shipped last quarter.
A few days ahead of EMC’s ballyhooed official XtremIO launch, Hitachi Data Systems (HDS) made flash news of its own this week. HDS bumped up the performance, capacity and number of arrays supported by the Hitachi Accelerated Flash (HAF) modules that are the heart of its flash strategy.
HDS first brought out its home-grown flash modules for its enterprise flagship Virtual Storage Platform (VSP) array in November 2012. Last July, it added them for the Hitachi Unified Storage (HUS) VM enterprise unified storage system, which HDS pushes as its preferred all-flash platform.
Today it added 3.2 TB HAF modules, double the capacity of the original modules. The 1.6 TB HAF trays are still available. HDS also rolled out new flash optimization software for HUS VM that it claims can deliver more than 1 million IOPS in one system. The optimization software previously delivered 500,000 IOPS. The flash modules are also now available for the HUS 150 midrange array.
To get 1 million IOPS, customers must use at least four HAF trays plus Hitachi’s flash controller. The code is a separate license.
Each HAF is a 2U tray with a controller and up to 12 flash drives. The 3.2 TB drives bring the total capacity per tray to 38.4 TB. An all-flash HUS VM holds up to eight HAF modules for 308 TB with 3.2 TB drives.
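The per-tray figure follows directly from the drive count. Working in gigabytes keeps the arithmetic exact; note that the exact product is 307.2 TB, so the 308 TB figure appears to be rounded up:

```python
# Check the HAF tray and system capacities quoted in the article.
DRIVES_PER_TRAY = 12
DRIVE_GB = 3200  # 3.2 TB modules, expressed in GB to keep the math integral

tray_gb = DRIVES_PER_TRAY * DRIVE_GB   # 38,400 GB = 38.4 TB per 2U tray
system_gb = 8 * tray_gb                # eight trays in an all-flash HUS VM

print(tray_gb / 1000)    # 38.4 (TB per tray)
print(system_gb / 1000)  # 307.2 (TB) -- the quoted 308 TB rounds this up
```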
Bob Madaio, HDS senior director of product marketing, said HAF gives HDS an advantage in the flash wars over rivals using traditional third-party SSDs.
“We built a lot of smarts right into the device, such as flash management, wear leveling and garbage collection,” he said. “That’s a big differentiator for us.”
“We missed you at the [name deleted] Institute for Sports Medicine.” This was the title of an email I received, reminding me of when I blew out a knee skiing years ago, and the subsequent rehab process.
The knee injury was not a positive experience. It was one of those events that you use to gauge different points in your life. Among other things, to me it meant no more runs on black diamond trails. But the email didn’t only bring up personal memories. The introductory paragraph also reminded me of marketing messages I’ve seen recently in the storage industry:
“Do you have something that has been bothering you for a while? Sometimes early intervention of an issue can be addressed conservatively through a variety of treatment options. Don’t let a little problem right now become a big problem later!”
This had several negative implications. The first was that I’d hurt myself again and did not seek treatment. This is playing on the probability that I’ve been injured once so I’m likely to do it again. That’s either a reflection on the activities I’m involved in or that I have a tendency to get hurt. The other is that I have not addressed the problem that results. The final sentence is a thinly veiled prediction that it will get worse.
This type of marketing is not much different than what we see in the storage industry – not to mention super paranoia information security marketing. Many approaches around storage data protection try to get the potential customer to identify with a recent real-world disaster. If the customers have experienced a data loss or unavailability incident themselves, even better.
There are several approaches taken, depending on the focus and product being sold. The easy association is with data protection and the need to guard against high-profile disasters. Headlines involving IT usually involve bad news and protecting IT from the calamities experienced by others is a great sales approach.
There are also several negative sales approaches used for storage systems. One is to avoid the type of failures that result in data unavailability. Purchasing a more reliable, mature system with the best support available is the answer. There are enough examples to remind potential customers of failures with inadequate storage systems or storage software because their functions were not mature (meaning they lacked extensive field experience in critical environments).
Arguing from the negative works with those who have been injured previously. (I can feel my knee twitch a bit now.) But it may not work so well with those who have not. A better approach may be to explain the value a solution brings. In most cases, that value must be explained in economic terms.
Unavailability of information has an economic impact. Looking at the potential impact and what it means requires understanding the customers and their businesses. From there, you have to show the prevention alternatives provided by the product and the economics – cost vs. value. This requires more homework and evaluation but provides a better solution with better understanding to the customer. It may not be as quick a sales opportunity as associating with a negative, potentially painful, event but it is probably the best for the customer and leads to building trust and follow-on business.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Flynn and Fusion-io co-founder Rick White haven’t given many specifics on their new company. The funding press release said they plan to bring a product to market in 2014 that will solve problems caused by a “new era of storage that spans from flash to cloud.”
I spoke with Flynn recently and he didn’t get much deeper into product specifics than that, but said “I want to be a vendor to help solve problems coming our way. We want to deal with the hard problems of managing distributed data, performance and capacity.”
He added that these management problems are leading to trends such as putting data in the cloud and on bring-your-own-devices (BYOD). “I think we’re at a tipping point,” Flynn said. “Hardware is either being sent off to the cloud or to a person’s own devices. Running a business, you don’t put in your own server infrastructure now, you send it off to somebody in the cloud.”
He said this is changing the way people buy storage, and who they buy it from. Flynn said he expects a lot more organizations to take the lead from the way companies such as Amazon and Facebook are building their architectures.
“The cloud is forcing a distributed model for storage,” Flynn said. “It’s not centralized any more, it’s distributed in an infrastructure. It’s commodity off-the-shelf industry standard platform storage, not proprietary systems from EMC. You think Amazon uses NetApp or EMC? No way, that’s crazy. And anybody who wants to compete with Amazon is not going to use NetApp or EMC. It’s not cost competitive, and you can’t scale it big enough.
“Facebook is having servers built to their specs by Chinese ODMs, and not the major system vendors anymore.”
As for BYOD, Flynn said IT may still consider it a nuisance but will grow to love it. “Didn’t PCs come about in the same way?” he said. “Administrators held on to their mainframes and they hated PCs. I think the same is true for BYOD. It’s inevitable because of productivity improvements. People are doing most of their work on their own devices — tablets, smartphones, and laptops.”
Flynn said flash is in a similar position as the cloud – it’s here to stay but brings a new set of management issues.
“I think there are still significant challenges with how to get the manageability of traditional storage with the performance of a distributed flash architecture and the capacity of distributed cloud object storage,” Flynn said. “There are still significant challenges with how to deploy and use flash and we are still in the early innings with that. But that problem is not just about flash. It’s about distributed storage architectures. That’s a big problem. Doing something centralized makes it easy. Managing a mainframe was easy compared to managing PCs on everybody’s desktop.”
How will Primary Data try to solve these problems? That’s one of the secrets that will be revealed in 2014.
Twitter isn’t the only technology IPO this week. Security and backup vendor Barracuda Networks began trading on the New York Stock Exchange Wednesday with promising initial results.
While Barracuda is still more security than storage, CEO BJ Jenkins said backup makes up about one-third of Barracuda’s new business, and is increasing year-over-year at a faster rate than the 26% overall market growth.
He attributes that to Barracuda’s end-to-end data protection approach. While it still sells its backup software standalone, most deals are for integrated appliances along with cloud subscriptions for backup and disaster recovery. Barracuda maintains its own multi-petabyte cloud, and Jenkins said most of its backup appliance customers also use it.
“If you back up into the cloud and have an issue locally, you can spin up a virtual server in our cloud and run your business off a deduped backup copy,” Jenkins said. “This end-to-end offering has made a big difference. Customers used to buy Symantec and some kind of disk and tape, and rotate tapes and do replication for DR.”
Barracuda sells mostly to SMB and mid-range companies, competing primarily with Symantec Backup Exec.
Jenkins, who ran EMC’s backup division before becoming Barracuda CEO in November 2012, said one reason Barracuda went public is to gain more credibility with customers who want to know their security and data protection vendors are stable companies. Unlike flash vendor Violin Memory, whose shares fell on its first day as a public company, Barracuda saw its price rise in the hours after its IPO. Barracuda began trading at $18 – the low end of its projected range – but its shares closed at $21.55 Wednesday.
“I feel good about the first day of trading,” Jenkins said. “We were fortunate to get out before Twitter. They’ve taken a lot of oxygen out of the air.”
Coho Data this week pulled in $25 million in funding to expand and market the scale-out storage platform it launched into beta last month.
CTO Andy Warfield said new funding will be used to add features to the Coho DataStream series in 2014. The DataStream is a hybrid storage system that combines a software-controlled switch with PCI flash and hard drives that Coho sees as a building block for companies who want Amazon-style storage.
The original product is file-based. Warfield said the vendor has seen little demand for Fibre Channel or iSCSI but there have been requests for FC over Ethernet (FCoE), so FCoE support will likely follow next year. There will also probably be SMB protocol support coming to go with NFS-only in the original version, and deduplication and replication are on the roadmap list.
Warfield said Coho Data will also announce a product upgrade path next year. “You can expect a more continuous and dynamic approach to upgrades than has historically been the case for storage products.”
As Coho Data prepares to make its product generally available, Warfield said the target audience has shifted from the original plan of marketing to small-to-medium businesses (SMBs).
“We found SMBs had no need for the performance we get from our box,” he said. “But we found larger storage environments, in the three petabyte to 10 petabyte range, often had performance pain with their existing enterprise storage.”
Coho DataStream micro arrays ship with 3.2 TB of raw flash (Intel SSD 910 PCI cards) and 36 TB of spinning disk. Warfield said the startup uses a pricing model similar to the Amazon AWS provisioned IOPS model. He said 40,000 IOPS with a three-year support contract costs around $2.50 per GB.
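The per-gigabyte figure implies a rough system price. This sketch assumes – the article doesn’t say – that the $2.50/GB rate applies to total raw capacity, flash plus disk:

```python
# Rough price implied by Coho's quoted $2.50/GB figure.
# Assumption (not stated in the article): the rate applies to total raw
# capacity, i.e. PCIe flash plus spinning disk, for one micro array.

FLASH_GB = 3200    # 3.2 TB Intel SSD 910 PCIe flash
DISK_GB = 36000    # 36 TB of spinning disk
PRICE_PER_GB = 2.50

raw_gb = FLASH_GB + DISK_GB
print(raw_gb)                  # 39200 GB raw
print(raw_gb * PRICE_PER_GB)   # 98000.0 -- roughly $98,000 under this assumption
```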
The B funding round was led by new investor Ignition Partners with previous investor Andreessen Horowitz participating, and brings Coho Data’s total funding to $35 million.
If you’ve never heard of Load DynamiX, that’s probably because until today the start-up was known as SwiftTest. And if you’ve never heard of SwiftTest, that’s probably because until today it only sold its storage validation software directly to storage vendors.
Along with the name change, Load DynamiX today launched a series of infrastructure and application performance validation appliances for IT organizations. The appliances generate massive loads to stress enterprise storage systems, simulate production workloads and validate new devices before putting them into production.
The appliance models include the 10G Base Series with two 10 Gigabit (10 GigE) ports and support for iSCSI and NAS protocol emulation; the 10G Advanced Series with support for NFS 4, SMB 3, HTTP/S, CDMI and OpenStack Swift protocol emulation on top of the Base Series; FC Series with two 8 Gbps Fibre Channel (FC) ports and FC and iSCSI emulation; and the Unified Series with support for two 10 GigE and two FC ports and iSCSI, NFS, SMB 2, FC and SCSI emulation. List prices are $130,000 for the 10G Base, $225,000 for the 10G Advanced, $95,000 for the FC and $180,000 for the Unified appliances.
All of the appliances include Workload Insight Manager, the software the vendor has made available to storage vendors since 2009.
Load DynamiX VP of marketing Len Rosenthal said EMC, Dell, NetApp and Hitachi Data Systems (HDS) use Workload Insight Manager to test their storage arrays.
Rosenthal said each 2U Load DynamiX appliance has the load generation capabilities of 20 servers, and the appliances emulate the I/O profile of applications. He said the appliances are an alternative to using Iometer with a bank of servers. Unlike Iometer, Load DynamiX simulates metadata.
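The difference from raw Iometer-style load is the emphasis on matching an application’s I/O profile. As a toy illustration only – the parameters are invented, and none of this reflects Load DynamiX’s actual implementation – a workload emulator starts from a target operation mix:

```python
import random

# Toy sketch of workload emulation: generate an operation stream that
# matches a target read/write mix and block size. Real load-testing
# appliances do this at vastly larger scale and replay it against storage.

def make_workload(n_ops, read_pct, block_size, span_bytes, seed=0):
    """Return a list of (op, offset, length) tuples for a synthetic profile."""
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    ops = []
    for _ in range(n_ops):
        op = "read" if rng.random() < read_pct else "write"
        # Align offsets to the block size, as I/O benchmarks typically do.
        offset = rng.randrange(0, span_bytes // block_size) * block_size
        ops.append((op, offset, block_size))
    return ops

workload = make_workload(n_ops=1000, read_pct=0.7, block_size=4096,
                         span_bytes=1 << 30)
reads = sum(1 for op, _, _ in workload if op == "read")
print(f"{reads / len(workload):.0%} reads")  # close to the 70% target
```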
“We’re about understanding changing workloads,” Rosenthal said. “We get people to simulate workloads before going live.”
Rosenthal said GoDaddy.com used Load DynamiX to validate a hybrid solid-state drive (SSD) storage array and significantly reduced its cost before putting it into production, and that the Healthcare.gov site fiasco was caused at least in part by a lack of load testing before going live.
If you’ve never heard of the Healthcare.gov fiasco, that’s probably because you’ve been spending too much time trying to get your SAN or NAS up to speed.