Tintri, which sells storage built for virtual machines, today closed a $125 million funding round that it plans to use to step up its attack on NetApp’s installed base and take the startup public in 2016.
Tintri’s Series F round brings its total funding to $260 million. Founder and CTO Kieran Harty said Tintri will use the money to grow its sales and marketing operations and continue making inroads in a storage market in transition. He sees Tintri as one of the hot newcomers hurting the established storage vendors. As a file storage vendor built for virtual environments, Tintri’s main target is NetApp, which has struggled for more than a year.
“We’re going squarely against NetApp,” Harty said. “We will be going up against NetApp aggressively, and you’ll see that in our marketing. You can’t avoid EMC in the market, but we don’t go after EMC specifically. We are seeing the market separate newer companies who are large enough and have enough capital to be players in the market while others are falling behind. We see Nimble, Pure and Nutanix, and we don’t see much of anybody else.”
Harty said Tintri’s goal is to become cash-flow positive in 2016 and then move toward an IPO. Tintri executives said last year that the company would hold an initial public offering (IPO) in 2015, but market conditions have proven unfavorable. Investors want to see profitability, not only growth.
Industry sources say Tintri generated around $50 million in revenue in 2014 with plans to double that in 2015. Tintri said it grew revenues more than 100 percent in 2014. The company has not had a profitable quarter, however.
“We want to be a public company. We expect to be in position to start that process at the end of this year,” Harty said. “Our focus is on going public when we’re ready and when the market is ready. There is no longer just a focus on growth. The market cares about being cash-flow positive and seeing that you will be profitable. That’s the new reality of the market.”
He said Tintri plans to add “a few hundred people” to its current 450 employees with the new funding. Along with beefing up sales and marketing, Tintri is also planning a new product launch for later this month.
Silver Lake Partners led the funding round with previous investors Insight Venture Partners, Lightspeed Ventures, Menlo Ventures and NEA participating.
Amazon this week made enhancements and added features to its Amazon Simple Storage Service (S3) and its Amazon Aurora relational database. S3 has been upgraded with more event notifications and support for bucket-level Amazon CloudWatch metrics, while Amazon Aurora now comes with zero-downtime migration.
Customers now can get notifications when an object has been deleted from an S3 bucket. The company launched its event notification model last year with support for notifications when objects are created through Put, Post and Copy operations.
Amazon has been reducing prices on its cloud storage and has added features such as a reduced redundancy storage class, VPC endpoints and cross-region replication. The expanded event notifications are its latest improvements.
“You can now arrange to be notified when an object has been deleted from an S3 bucket,” according to the AWS blog. “Like the other types of notifications, delete notifications can be delivered to an SQS queue or an SNS topic or used to invoke an AWS Lambda function. The notification indicates that a Delete operation has been performed on an object, and can be used to update any indexing or tracking data that you maintain for your S3 objects.”
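The delete notifications described in the quote above are enabled per bucket. A minimal sketch of what that configuration looks like, in the shape boto3’s `put_bucket_notification_configuration()` expects; the bucket and SNS topic names are hypothetical placeholders:

```python
# Notification configuration routing S3 delete events to an SNS topic.
# The topic ARN and bucket name below are hypothetical placeholders.
notification_config = {
    "TopicConfigurations": [
        {
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:object-deleted",
            # "s3:ObjectRemoved:*" fires whenever a Delete operation
            # is performed on an object in the bucket.
            "Events": ["s3:ObjectRemoved:*"],
        }
    ]
}

# With AWS credentials configured, this would be applied with:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_bucket_notification_configuration(
#       Bucket="example-bucket",
#       NotificationConfiguration=notification_config,
#   )
```

The same structure accepts `QueueConfigurations` (for SQS) or `LambdaFunctionConfigurations` instead, matching the three delivery targets the AWS blog mentions.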
Amazon CloudWatch is designed to track metrics for AWS services and for customers’ applications. Customers now can monitor and set alarms on their S3 storage usage; CloudWatch triggers an alarm when a metric exceeds a specified threshold.
“Available metrics include total bytes (Standard and Reduced Redundancy Storage) and total number of objects, all on a per-bucket basis. You can find the metrics in the AWS Management Console,” according to the AWS blog.
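An alarm on the per-bucket metrics quoted above might look like the following sketch, shaped for boto3’s `put_metric_alarm()` call. The bucket name, 1 TB threshold and SNS topic are hypothetical placeholders:

```python
TB = 1024 ** 4  # one terabyte in bytes

# Alarm parameters for a bucket's total size. All names and the
# threshold are hypothetical placeholders.
alarm_params = {
    "AlarmName": "example-bucket-too-big",
    "Namespace": "AWS/S3",
    "MetricName": "BucketSizeBytes",
    "Dimensions": [
        {"Name": "BucketName", "Value": "example-bucket"},
        {"Name": "StorageType", "Value": "StandardStorage"},
    ],
    "Statistic": "Average",
    "Period": 86400,             # S3 storage metrics are reported daily
    "EvaluationPeriods": 1,
    "Threshold": float(1 * TB),  # alarm once the bucket passes 1 TB
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:storage-alerts"],
}

# With AWS credentials configured:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```

Swapping `MetricName` to `NumberOfObjects` (with `StorageType` set to `AllStorageTypes`) covers the object-count metric mentioned in the quote.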
Amazon also unveiled enhancements to Amazon Aurora, which is designed for high performance and scales up to 64 TB of storage. Aurora is Amazon’s MySQL-compatible database engine. The database was launched last year, and Amazon is enhancing the product based on customer input. Amazon has added zero-downtime migration to Aurora.
“Immediately after you migrate, you will begin to benefit from Amazon Aurora’s high throughput, security, and low cost,” according to the blog. “You will be in a position to spend less time thinking about the ins and outs of database scaling and administration, and more time to work on your application code.”
Amazon Aurora has also been enhanced with replication capabilities. Each Aurora instance can have up to 15 replicas that add read capacity.
“It means that Amazon Aurora is able to handle far more concurrent queries (both read and write) than other products,” according to the blog. “Amazon Aurora’s unique, highly paralleled access to storage reduces contention for stored data and allows it to process queries in a highly efficient fashion.”
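The read scaling described above typically means an application sends writes to the primary and spreads reads across replicas. A minimal sketch of that routing, assuming hypothetical endpoint hostnames (a real application would open MySQL connections to these hosts):

```python
import itertools

# Hypothetical Aurora endpoints: one writer plus two read replicas.
WRITER = "mycluster.cluster-abc.us-east-1.rds.amazonaws.com"
REPLICAS = [
    "replica-1.abc.us-east-1.rds.amazonaws.com",
    "replica-2.abc.us-east-1.rds.amazonaws.com",
]
_replica_cycle = itertools.cycle(REPLICAS)


def endpoint_for(sql: str) -> str:
    """Route SELECT statements round-robin across the read replicas;
    send everything else (INSERT, UPDATE, DDL, ...) to the writer."""
    if sql.lstrip().upper().startswith("SELECT"):
        return next(_replica_cycle)
    return WRITER


# Reads land on replicas, writes on the primary:
read_host = endpoint_for("SELECT * FROM orders")          # a replica
write_host = endpoint_for("INSERT INTO orders VALUES ()")  # the writer
```

This is only an illustration of the pattern; production setups usually rely on connection-pool or driver-level read/write splitting rather than string inspection.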
Amazon is launching these features in its East and West regions in North America and in its Europe regions, and intends to expand to other locations over time.
Amazon’s announcement comes on the heels of Google’s launch of its Google Cloud Storage Nearline archiving service, a sign that Google is making an aggressive run at AWS and Microsoft Azure. Google Cloud Storage Nearline currently is in beta, and the low-cost archive cloud boasts a data-retrieval time of about three seconds, while Amazon Glacier has a restore time ranging from three hours to five hours.
Add Quantum to the list of storage vendors that struggled last quarter.
Quantum Thursday reported revenue of $110.9 million, down around $17 million from last year and below its original forecast. The company lost $10.1 million, compared with a loss of $4.3 million last year. The declines were especially sharp in Quantum’s core tape business.
Quantum CEO Jon Gacek said organizations seemed reluctant to buy, and Quantum refused to give deep discounts at the end of the quarter to close deals.
“The overall storage environment was particularly challenging at the end of the quarter as customers seemed to pull back on purchases,” Gacek said. “I’m not willing to panic and do dumb things [to make sales].”
The challenges are expected to continue this quarter. Quantum’s forecast of between $120 million and $130 million for this quarter is an increase over last quarter’s results, but down from the $135 million the vendor reported in the same quarter last year.
“I’m not confident that the market is going to bounce back,” Gacek said, “although I do feel better about how [this quarter] has started. It just feels choppy to me.”
Quantum did not give guidance for the entire fiscal year, which started last quarter.
Gacek sees a silver lining in what Quantum calls scale-out storage, which is primarily its StorNext file system, Lattus object storage and new Artico NAS archiving appliance. Those products grew 54 percent year-over-year to $28 million, a hike of around $10 million, due to strong sales in media and entertainment and video surveillance. But disk backup revenue was flat at $17.3 million and tape revenue fell about $28 million.
Because tape still accounts for the majority of Quantum’s business, the company can’t grow overall on scale-out gains if tape keeps slipping. In the previous quarter, Quantum beat expectations when tape sales stabilized and DXi disk backup and scale-out storage sales increased.
“I like how we’re positioned in scale-out, and there is a lot of opportunity there,” Gacek said. “I would like to be more confident in the data center stuff.”
EMC, which has downplayed the significance of industry benchmarks, today published its first Storage Performance Council (SPC) block benchmarks for its unified storage systems.
Results for the VMAX 400K and VNX 8000 arrays were strong, but not dominant. The VMAX 400K achieved the highest SPC-2 (bandwidth) result at 55,643.78 MBps and finished second behind the Hewlett-Packard XP7 (based on Hitachi technology) for price/performance at $33.58 per SPC-2 MBps.
The VNX 8000 finished second in SPC-1 (transactional) performance and fourth in price per SPC-1 IOPS.
The big surprise is that EMC published benchmark numbers at all. The vendor has long insisted that customers should do application performance testing instead of relying on SPC numbers, although it has published benchmarks for file storage.
In an EMC Pulse blog about the benchmark results, authors Jeff Boudreau and Fidelma Russo say they still favor application performance testing. “We continue to believe that application performance testing is the best predictor of real-world performance, especially for critical workloads,” they wrote.
However, customer requests have prompted EMC to benchmark.
“The world’s changing and our customers are changing,” said Jonathan Siegal, VP of marketing for EMC Core Technologies. “Customers have more choices, and there’s a lot of noise out there.”
The Pulse bloggers added: “By publishing these SPC results, customers can use these standard assessments to help simplify their initial high-performance storage evaluations and eliminate much of the ‘noise’ in their screening process.”
The VMAX 400K system EMC tested was an all-flash model with eight 512 GB engines and 32 200 GB solid-state drives (SSDs) per engine for 4 TB of global memory and 264 SSDs. The storage connected to the host system with 128 Fibre Channel links.
The list price for the system tested was $4.9 million, although SPC used a discount price of $1.9 million for its price/performance calculation.
SPC-2 uses three workloads executed separately to gauge system performance. The workloads are large file processing, large database query and video on demand. Besides the VMAX and HP XP7, other enterprise systems that have run the same benchmark include the Fujitsu Eternus DX8700 S2, IBM DS8870 and XIV Gen3, and Hitachi Data Systems VSP.
The VNX 8000’s 435,067.33 SPC-1 maximum IOPS finished second behind the Huawei OceanStor Dorado 5100’s 600,000 IOPS. Its price/performance rating of $0.41 per SPC-1 IOPS finished fourth, behind the Infortrend EonStor DS 3024B ($0.17 per IOPS), the second-place X-IO ISE 820 G3 and the third-place Dell SC 4020. The SPC-1 test measures a system’s performance on an OLTP I/O workload.
The VNX tested was also an all-flash system, with 40 100GB eMLC SSDs for 3.9 TB of capacity and four 300GB SAS disk drives for the VNX operating system and other system information.
The system had 256 GB of DRAM and was connected to 16 host systems over 32 8 Gbps Fibre Channel links. The list price was $317,000, but a street price of $177,000 was used for price/IOPS.
FalconStor’s 2015 revenue has been paltry at the midway point of the year, but the vendor is crowing about new OEM deals for its FreeStor data services software.
FalconStor said it signed three OEM agreements and four service provider deals for FreeStor last quarter. The OEMs include storage vendors X-IO Technologies and Kaminario.
X-IO is using FreeStor for software services in its Iglu SAN platform launched Tuesday. X-IO did not advertise FalconStor as the OEM technology for Iglu, but did not deny it and gave FalconStor permission to disclose the deal.
All-flash vendor Kaminario is expected to use FreeStor to add more storage management features to its K-2 arrays, which were originally designed with performance in mind more than manageability. Kaminario added thin provisioning, inline deduplication and compression, and encryption in 2014, but replication remains a roadmap feature. Replication is covered by FreeStor, which was originally developed for all-flash vendor Violin Memory and is used with Violin Concerto arrays.
FalconStor CEO Gary Quinn said cloud service providers Egenera and Telefonica have signed on to resell FreeStor. He said FalconStor now has six OEMs for FreeStor along with four service providers.
FalconStor can use the sales help. Revenue for last quarter was $9.6 million, down from $11.3 million last year and $10.1 million in the first quarter of this year. The vendor lost $2.2 million, an improvement from its $3 million loss last year. With $18.8 million in cash, FalconStor needs to reverse historical trends and start turning a profit soon.
Quinn is trying to spin FreeStor and its partnerships as a turning point for the company.
He began FalconStor’s earnings call Wednesday by saying, “The first half of 2015 is a tale of two cities story. It’s the beginning of a new era and winding down of the past.”
He ended with: “Past performance and future opportunities are on the opposite ends of the spectrum” for FalconStor.
None of FalconStor’s OEM partners are market leaders, however, so it will need a lot more deals and enterprise customers to turn things around.
Few people would dispute that information is important. Its value varies and changes over time, but information is the most critical resource for most organizations.
Yet, we see storage products where the importance of storing, accessing, and managing information is not addressed effectively or is seemingly trivialized.
There is complexity in managing information, especially as value changes. Requirements must be met when storing and managing information. It must be available, 100 percent valid, secure from an access standpoint, and protected from disasters, hardware failures and human errors.
People in IT often forget about these requirements, as do vendors. We see storage products that emphasize only moving data to execute a program against it, assuming there is no real issue regarding storage beyond that. They ignore that, given the high value of information, information resides with storage and is only transient on servers and networks. The stewardship required for processing and analysis is the responsibility of the storage where the data resides.
Another important consideration is that information is stored for a long time, typically for decades. The real concerns are about storing, managing, and administering the information over that period. The infrastructure will change over that time. Think of how many servers will be replaced over the information’s lifespan. This is also the area where major costs are incurred. The costs for storing and managing information over its lifespan can be far more significant than other technology costs.
Systems and solutions must make allowances for the cost of storing information for the lifespan of the data. When solutions do not address that concern, someone (the customer who understands the value of the information) must incur greater cost and effort to add those capabilities. To not do so adds an unacceptable measure of risk. The priority of information must be covered effectively when evaluating and making decisions about storage.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Commvault’s revenue keeps sliding while data protection software rival Veeam makes steady gains. Coincidence, or is Veeam taking deals that Commvault used to win?
Commvault’s revenue of $139 million was down nine percent from last year and eight percent from the previous quarter. Software revenue of $56.5 million fell 22 percent from last year and 19 percent from the previous quarter. Revenue from enterprise deals ($100,000 or more) dropped 29 percent from the previous quarter.
Commvault executives forecast revenue for the fiscal year, which ends in March 2016, to be roughly the same as last year — $608 million.
Veeam, which is a private company but discloses partial financial results, said its revenue bookings increased 22 percent last quarter over the previous year. Veeam claims its enterprise revenue (Veeam considers this deals of more than $50,000) increased 64 percent year-over-year.
Veeam said its total 2014 revenue bookings were $389 million, and it has grown more than 20 percent in each of the first two quarters of 2015. That would bring its 2015 revenue close to $500 million if it keeps up this pace.
“The headlines for the quarter are that we had a more challenging quarter than expected,” Commvault CEO Bob Hammer said on his company’s earnings call.
Commvault has actually had five consecutive challenging quarters as it rebuilds its sales organization and shifts its product strategy. Hammer said the company and the industry are in transition, but he doesn’t see Veeam as the cause of Commvault’s problems.
Hammer said Commvault, which handles virtual and physical backup and recovery, archiving and compliance and cloud data protection, goes far beyond Veeam’s technology. Veeam’s specialty is virtual data protection, but it has added replication and cloud capabilities in its move from SMB play to the mid-market and enterprise.
“As far as Veeam and the enterprise, Veeam does not have a platform,” Hammer said. “If you want to talk about enterprise, you can incrementally improve scale, but the scale we are talking about is nowhere near where Veeam’s is in terms of big enterprise scale.”
Commvault COO Al Bunte added: “It’s hard to do big enterprise scale and operational automation without a platform … And to my knowledge, the Veeam folks aren’t there yet.”
Doug Hazelman, Veeam’s VP of product strategy, sees it differently. He said Commvault has been the main competitor for Veeam as it moves into the enterprise.
“Commvault is definitely the one we’re competing with most as we go up market,” Hazelman said. “Look at the results we’ve had. We’re growing, and they’re not.”
Both companies have major upgrades coming.
Veeam is preparing to launch Veeam Availability Suite 9 with image-based VM replication to the cloud to enable disaster recovery as a service and greater storage array snapshot support.
Commvault also plans a 2015 launch of the next version of its Simpana platform, although it might not be called Simpana. Commvault executives never said the word Simpana on the earnings call Tuesday. They repeatedly referred to their “platform” and emphasized the point products they have added in the past year in areas such as virtual data protection, cloud and endpoint protection. After all the changes Commvault is going through, it would not be surprising if it rebranded the platform.
Cisco finally figured out what to do with its Invicta all-flash array acquired from startup Whiptail for $415 million. It killed it off.
Cisco put out an end-of-life announcement last Friday for Invicta and has stopped taking orders for the array. If you’re one of the few who bought an Invicta array, your final day to renew your service contract is Oct. 19, 2019, with July 31, 2020 designated as the last day of support.
Cisco bought Whiptail in Sept. 2013, but the deal had problems from the start. With close storage partners such as EMC and NetApp pushing into flash storage at the same time, Cisco hesitated to position the Whiptail arrays as storage products. They were rebranded as Invicta, sold by the Cisco UCS server group, and omitted from the Vblocks sold through Cisco’s VCE alliance with EMC and the FlexPod reference architectures with partner NetApp.
Quality issues prompted Cisco to take Invicta off the market last September with plans to fix the problems and bring it back out. That never happened; Cisco confirmed last week that the product is finished.
Not only did Google offer new incentives with its Cloud Storage Nearline service to tempt customers to switch from Amazon and Microsoft Azure last week, the company also beefed up its ecosystem to make it easier to change cloud platforms.
Google added Actifio, CloudBerry Lab, Pixit Media and Unitrends as Google Cloud Platform partners. They join earlier partners Veritas/Symantec, NetApp, Iron Mountain and Geminare.
The partners have integrated Cloud Storage Nearline with disaster recovery, backup, archiving and hybrid cloud solutions using Google’s open APIs. For example, copy data management vendor Actifio has mostly focused on selling into enterprise-level environments but now has set its sights on cloud platforms.
Actifio customers will be able to add a Vault profile that lets them move an application directly into Nearline.
Actifio CEO Ash Ashutosh said it makes sense to join the Google Cloud Platform Premier Partner program because “50 percent of our business comes from these cloud service providers.”
Cloud backup vendor CloudBerry allows its managed service provider (MSP) partners to integrate Nearline with all other Google Cloud Platform services using the same unified API.
Unitrends Free for Google Cloud platform will also support Nearline. Unitrends Free is free backup software that deploys as a virtual appliance in VMware vSphere and Microsoft Hyper-V, backing up data locally and connecting to the cloud.
Pixit Media, which sells storage for broadcast companies, has object plug-ins to Google Cloud Storage and Nearline.
EMC NetWorker and Avamar, and CommVault Simpana backup software also allow users to move data to Nearline, as does Egnyte’s file sharing application.
Google is trying to make an aggressive run at Amazon Web Services (AWS) and Microsoft Azure with its Nearline archiving service, plus new services such as the Cloud Storage Transfer Service and the Switch and Save program. Switch and Save offers 100 PB of free storage in Nearline for up to six months for customers who switch from any other cloud provider or on-premises environments.
Google’s Nearline Storage is the answer to Amazon Glacier for cheap, cold storage. A new on-demand I/O service works with Storage Nearline to allow faster recovery for customers with large amounts of data.
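Nearline is selected per bucket via a storage class. A minimal sketch of the bucket resource an application would POST to the Google Cloud Storage JSON API to create a Nearline bucket; the bucket name is a hypothetical placeholder:

```python
# Bucket resource for the Google Cloud Storage JSON API
# (POST https://www.googleapis.com/storage/v1/b?project=<project-id>).
# The bucket name is a hypothetical placeholder.
bucket_resource = {
    "name": "example-archive-bucket",
    "location": "US",
    # NEARLINE is the low-cost archive class with ~3 second retrieval,
    # versus the hours-long restore times of Amazon Glacier.
    "storageClass": "NEARLINE",
}
```

Objects written to such a bucket inherit the Nearline class, so existing tools that speak the Cloud Storage API can archive data without code changes.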
EMC executives reduced their 2015 revenue forecasts for the second time this year following a quarter of tepid growth. They also said the vendor will implement plans to cut costs by $850 million a year and shift investment from traditional storage products such as its VMAX and VNX arrays to emerging technologies including flash and software-defined storage.
EMC CEO Joe Tucci also continued to defend keeping the EMC Federation intact instead of spinning off VMware or other significant pieces.
The forecast reduction and spending cuts came out during EMC’s quarterly earnings call. The vendor reported revenue of $6.1 billion last quarter, up three percent year over year. The storage business grew only one percent to $4 billion. Following those results, EMC now expects $25.2 billion in 2015 revenue, $500 million below its previous guidance and $900 million below its original 2015 forecast.
“The results were mixed. We fell a bit short of revenue expectations,” Tucci said, noting profit of $487 million was a bit better than expected.
The plan to save $850 million annually in cost cuts will be in place by the end of 2016, with $50 million in cuts coming this year, according to CFO Zane Rowe. Tucci and Rowe said some of those savings will be shifted to growth products such as flash and ViPR, ScaleIO and Elastic Cloud Storage.
EMC’s emerging storage products, which include Isilon clustered NAS along with flash and software-defined storage, increased 49 percent year over year to $718 million. XtremIO grew more than 300 percent.
On the downside, VMAX revenue fell 13 percent to $892 million, and backup and data recovery dropped nine percent to $1.43 billion.
David Goulden, CEO of EMC Information Infrastructure (storage), said he expects traditional storage – VMAX and VNX – to grow two percent annually until 2018, and only about one percent this year.
“We believe the traditional storage market will not improve this year,” he said, adding that EMC will invest in flash, software-defined storage, big data and the cloud “to remain ahead of the market. We will rebalance resources to self-fund growth initiatives.”
Goulden said he expects a new Isilon release and the general availability of the DSSD flash system in the second half of 2015 to help sales. Tucci said, “I’ve never seen a product with as much demand for betas as DSSD.”
Tucci repeated that he is opposed to breaking up the EMC Federation, which includes VMware, Pivotal and RSA Security. Tucci said EMC II and VMware realize twice as much revenue from deals where both companies are involved as when each pursues deals alone. He maintains that in the shift to convergence and cloud computing, the combination of companies makes EMC stronger.
“Splitting this federation or spinning off VMware is not a good idea,” he said. “One of the biggest transitions every company has to do is move to the cloud. Data centers are moving to cloud technology, both private and managed clouds. If you are doing that, would you rather do that as just VMware, as just EMC, as just Pivotal? Or are you much stronger doing it together?”
Tucci, who is also the EMC chairman, would not speculate on when EMC would name his successor as CEO. “I don’t want to comment on the timing,” said Tucci, who has postponed his retirement several times. “I am committed to giving the board the time they need to make sure the succession process works terrifically. I don’t want to put a deadline on the board, but they are actively engaged [in the succession process].”