Nearly one-third of cloud service providers spend more than 10% of their revenue on storage, while 46% spend between 5% and 10%, according to a survey conducted by Tintri.
Twenty-three percent of the CSPs surveyed held their storage spending to less than five percent.
“Storage can make or break a CSP’s business,” according to the report. “It represents an area of significant investment, both money and time. The highly virtualized environments that CSPs operate depend on storage to serve as an enabler rather than a bottleneck.”
Tintri, which sells storage systems for virtualized and cloud environments, surveyed 78 CSPs in December 2015. Forty percent of those participants were from companies with more than 1,000 employees and 38 percent were from organizations with less than 100 employees.
The CSPs identified performance as the top priority when evaluating storage. Eighty-six percent of the respondents ranked it among their most important criteria, while 69% cited reliability, availability and serviceability, 58% cited cost, 41% manageability and 38% scalability.
“The fourth criterion, manageability, has a huge impact on performance and reliability that CSPs may underestimate,” the report stated. “In an open-ended question, we asked respondents to describe their problems with existing storage.”
The survey found that respondents frequently cited performance, scalability, management, monitoring, reporting and troubleshooting as problems.
“[These] point to manageability as a pain point sitting right below the surface of more obvious performance pains,” according to the report.
The survey also found that smaller CSPs generally provide a more diverse set of services to customers because they are competing in a much more crowded market. Eighty-four percent of respondents provide infrastructure as a service (IaaS), while 67% provide private cloud hosting and 48% provide traditional managed services.
“More granular data show that larger CSPs have a much stronger holding in managed services while smaller CSPs have marched into disaster recovery as a service (DRaaS),” according to the survey. “Larger CSPs likely went through a journey from VAR to MSP to CSP while smaller CSPs entered the cloud market offering newer, differentiated services.”
Larger CSPs don’t come close to matching Amazon’s billions of dollars in annual revenue from its cloud business. The survey found that 21% of the respondents that are from companies with more than 1,000 employees have over $500 million in annual revenue.
“In contrast, 56 percent of respondents’ companies have annual revenue under $50 million,” the report stated. “This percentage is greater than the 38 percent of respondents coming from small companies that [have] fewer than 100 employees.”
Michael Dell and other Dell executives have told employees the $67 billion acquisition of EMC will go through despite reports that his group is having trouble securing financing for the deal.
Dell will incur approximately $57 billion in debt to complete the deal. Reports in the financial press last week said the deadline for securing the first $10 billion in financing had to be extended because of market conditions.
During a Field Readiness Seminar held Feb. 9 with Dell employees, Michael Dell called the reports “click-bait” from a dying media business.
“You may have read a story that questions if this deal is going to happen. If you have, you’re wasting your time,” Dell said to applause from employees, according to a transcript released by the company. “The media business is under a lot of stress and their business model is sort of cratering. And what they do to survive in those tough times is they create something called click bait. They create an inflammatory headline. So and so was impregnated by aliens, or whatever, click on here to read about this story, see some ads, try to get some money. So don’t fall for that, OK?
“We’re absolutely moving forward with the transaction under the original timeline, the original terms, at full steam ahead. And it’s not contingent on the share price of EMC or VMware. It is subject to a shareholder vote and regulatory approvals. But we expect to close in the same time frame that we announced before, May to October.”
Dell’s comments were followed by a letter from the company’s chief integration officer Rory Read to all employees a week later.
“I want to address some of the chatter over the past few weeks about possible financing headwinds with the transaction,” Read wrote. “I can assure you any suggestions our debt financing is in jeopardy are off-target and do not reflect our financing terms and the progress of our financing to date. The debt financing is fully-committed and is being underwritten by many of the leading global banks. The process of syndicating and placing the debt for a transaction of this nature frequently encompasses a time period of several months from start to finish. That process currently is underway and remains on track, as planned. We anticipate closing the transaction sometime in the May – October timeframe, as originally communicated, subject to achieving customary closing conditions.”
Compellent seems safe post-merger
Michael Dell’s comments and Read’s letter were disclosed in documents EMC filed with the SEC.
Michael Dell also addressed storage product overlap during the Field Readiness Seminar, and said the Compellent SC array platform will survive the merger. “We have a great vision for how the SC Series is a key part of the combined storage portfolio with EMC,” he said, adding that Dell has five times as many customers and 10 times as many installed SC systems as Compellent did before Dell bought that company five years ago.
Dell and EMC received better news this week with reports that the European Union will give its antitrust approval for the merger next week.
Negotiation timeline revealed
EMC’s SEC filings included a timeline of negotiations that led to the deal. The process was straightforward for a $67 billion acquisition: there were no other serious bidders after the first conversation between Dell and EMC CEO Joe Tucci, and the original offer was close to the final price.
It was widely reported at the time that Elliott Management investment group pushed EMC to divest pieces or sell itself soon after buying shares in the company in mid-2014. It is also well known that Hewlett-Packard, referred to in the filing as “Company X,” talked to EMC about buying VMware or perhaps all of EMC in 2013 through 2014 but nothing came of those talks.
Michael Dell first contacted Tucci Sept. 24, 2014 about a “potential transaction” between the companies, according to the SEC filing. Representatives of Dell’s holding company Denali and EMC continued from there. Dell and Tucci held several conversations by phone and face-to-face, including a meeting at the World Economic Forum in Davos, Switzerland in January 2015.
On July 15, 2015, Dell made its first offer, suggesting a price of $33.05 per share to EMC shareholders for all of EMC, including VMware. That offer consisted of $24.69 per share in cash and $8.36 per share in a non-voting tracking stock for VMware. That offer was slightly revised Sept. 1 to $24.92 in cash and $8.13 in tracking stock, which still came to a total of $33.05 per share.
The final offer of $33.15 per share – roughly $67 billion in total – came on Sept. 23, although EMC’s share price had actually dropped since the previous offer. The sides continued to discuss issues such as the allocation of the per-share total between cash and tracking stock until agreeing Oct. 11 on $24.05 per share in cash and $9.10 in tracking stock, for $33.15 per share. The deal was formally announced the following day.
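The per-share figures in the filing are easy to verify: each offer's cash and tracking-stock components sum to the quoted total. A quick sketch using the numbers above:

```python
# Per-share components of each Dell offer from the SEC filing:
# (cash, VMware tracking stock), in dollars per EMC share.
offers = {
    "July 15, 2015": (24.69, 8.36),   # initial offer
    "Sept. 1, 2015": (24.92, 8.13),   # revised mix, same total
    "Oct. 11, 2015": (24.05, 9.10),   # final deal terms
}

for date, (cash, stock) in offers.items():
    total = round(cash + stock, 2)
    print(f"{date}: ${cash:.2f} cash + ${stock:.2f} stock = ${total:.2f} per share")
```

The first two offers both total $33.05; only the final terms raise the total, to $33.15.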
Go-shop period drew no shoppers
Dell gave EMC a 60-day window to shop itself to other potential buyers. EMC contacted 15 potential buyers but none made an offer. Those contacted included “Company Y” – identified as a “global provider of servers, storage and networking solutions” and most likely Cisco. “Company Y” declined to participate in discussions.
“Company X” – HP – was not contacted during the go-shop period “due to changes in [its] structure and business …” HP was completing its split into two companies during that time.
X-IO Technologies today expanded its iglu blaze fx enterprise platform with an all-flash version that scales from 4.3 TB to 466 TB in an array.
The iglu 800 series uses all Toshiba enterprise MLC solid-state drives (SSDs) and includes X-IO software features such as a new stretched cluster technology with synchronous mirroring, snapshots, replication, and data-at-rest encryption.
The vendor claims the 800 can deliver 600,000 IOPS with 366 TB of flash capacity. The stretch clustering provides disaster recovery for data centers up to 100 kilometers (62 miles) apart. Stretch clustering can be used with any iglu storage system.
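A limit of around 100 km is typical for synchronous mirroring because every write must be acknowledged by the remote site before it completes. As a rough back-of-the-envelope estimate (not an X-IO specification), the fiber propagation delay alone adds about a millisecond per write at that distance:

```python
# Minimum round-trip propagation delay for synchronous mirroring.
# Light travels roughly 200,000 km/s in optical fiber (about 2/3 of c).
def sync_mirror_rtt_ms(distance_km, fiber_speed_km_per_s=200_000):
    """Round-trip propagation delay in milliseconds (fiber only)."""
    return 2 * distance_km / fiber_speed_km_per_s * 1000

print(sync_mirror_rtt_ms(100))  # 1.0 ms added to every mirrored write at 100 km
```

Real-world latency is higher once switch hops and protocol overhead are included, which is why synchronous replication distances are capped while asynchronous replication is not.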
Customers can federate 32 pairs of iglu controllers.
For the all-flash version, X-IO upgraded the iglu controllers with double the number of CPUs and cores and up to three times as much memory as its hybrid arrays. The 800 series controllers use Intel Xeon E5-2680 v3 processors with 12 cores and 64 GB of memory.
The iglu 800 all-flash arrays start at around $120,000 for 28 TB of capacity.
One feature missing is data deduplication, which has become popular in all-flash arrays because it expands the amount of effective data that can be stored on a system and reduces price per GB. Ellen Rome, X-IO’s vice president of marketing, said the vendor is concentrating on performance and dedupe can slow it down.
“We’re positioning this for really high performance environments where they might not benefit from dedupe,” Rome said. “We’re running high performance databases, small block loads, things like that.”
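The trade-off Rome describes is straightforward to quantify. The sketch below uses the 800 series' $120,000/28 TB entry price and a hypothetical 4:1 dedupe ratio purely for illustration; X-IO publishes no such figure:

```python
# Effective price per GB with and without deduplication.
def price_per_effective_gb(system_price, raw_tb, dedupe_ratio=1.0):
    """Dollars per usable GB; dedupe_ratio of 1.0 means no reduction."""
    effective_gb = raw_tb * 1000 * dedupe_ratio
    return system_price / effective_gb

raw = price_per_effective_gb(120_000, 28)           # no dedupe
reduced = price_per_effective_gb(120_000, 28, 4.0)  # hypothetical 4:1 ratio
print(f"${raw:.2f}/GB raw vs ${reduced:.2f}/GB effective at 4:1")
```

This is why dedupe matters so much to all-flash economics, and also why it is worth little for the workloads X-IO targets: high-performance databases and small-block loads tend to contain little duplicate data, so the ratio stays near 1:1 anyway.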
The latest object storage systems are often misunderstood by potential buyers. This may be because these systems, capable of storing extremely large numbers of objects and files, serve more than one type of environment. The messaging for one environment can leave the impression that an object storage system is not applicable to another.
Based on Evaluator Group’s work with clients, we have discovered several ways to apply object storage. It helps to understand that in a majority of IT environments, a parallel IT organization has evolved. So you have the traditional IT group tasked with continuing to run current operations – keeping the lights on, if you will. Proficient in current operations, this group struggles with demands for more capacity and greater productivity. The second IT group in the parallel IT organization is the one charged with changing the delivery of services by creating and deploying a private or hybrid cloud.
Object storage can be used in both of these groups. For the traditional environment, object storage serves as a content repository to directly access information that may have been on primary or secondary systems before.
The system can meet growing capacity demands and, by using replication and versioning, alter and simplify the data protection model. This type of system adds value by also serving as a target for retained copies of backups and an online archive. The following diagram illustrates the traditional IT usage of object storage systems.
Use of object storage in private or hybrid clouds is understood but is somewhat hard to depict in a diagram. In general, object storage in a private cloud is a separate system used for many of the same purposes as traditional IT but with different access methods. The major difference is that object storage in private/hybrid clouds is the target for newly written applications that often deal with mobile devices and distributed access. A content repository is another major use case and many times is coupled with file sync and share software. As with traditional IT environments, object storage can serve as online archive and retained backup targets in the cloud. The following diagram shows use of object storage in private/hybrid clouds, representing it as a separate system (logically or physically) from the compute/storage node instances federated together for creating the cloud environment.
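At the interface level, the "different access methods" come down to a flat namespace of keys with per-object metadata, typically addressed over HTTP rather than through a file system tree. A toy sketch of the semantics (illustrative only, not any vendor's API):

```python
# Toy model of object-store semantics: flat key namespace, immutable
# versions and per-object metadata -- no directories, no in-place edits.
class ObjectStore:
    def __init__(self):
        # bucket name -> object key -> list of (data, metadata) versions
        self._buckets = {}

    def put(self, bucket, key, data, metadata=None):
        versions = self._buckets.setdefault(bucket, {}).setdefault(key, [])
        versions.append((data, metadata or {}))  # versioning, not overwrite
        return len(versions) - 1                 # version id

    def get(self, bucket, key, version=-1):
        """Latest version by default; older versions stay retrievable."""
        return self._buckets[bucket][key][version]

store = ObjectStore()
store.put("archive", "2016/q1/report.pdf", b"draft")
store.put("archive", "2016/q1/report.pdf", b"final", {"retention": "7y"})
data, meta = store.get("archive", "2016/q1/report.pdf")
print(data, meta)  # latest version: b'final' {'retention': '7y'}
```

The versioning-instead-of-overwrite behavior is what lets object storage double as a backup and archive target: old copies are retained automatically rather than being destroyed by updates.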
Object storage systems are really multi-dimensional with many uses in different environments. In the parallel IT groups that have evolved in organizations, object storage systems can be applied as solutions for growing capacity demands and solving data protection issues from the growth. As this dual applicability becomes better understood, expect more deployments.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
NetApp’s latest earnings report shows the vendor is still in a hole with a lot of digging to do before it returns to revenue growth.
NetApp reported another quarter of declining revenue Wednesday and laid out plans to reduce its workforce by 12%. NetApp’s overall revenue of $1.39 billion was down 11% from last year and four percent from the previous quarter. Its product revenue of $750 million was down 19% year-over-year and eight percent from the previous quarter. The vendor did turn a profit of $153 million, but that was down from $177 million a year ago. NetApp revenue was below its previous guidance of $1.4 billion to $1.5 billion. Its guidance of $1.35 billion to $1.5 billion for this quarter also fell short of expectations, as financial analysts projected roughly $1.51 billion.
George Kurian, who took over as NetApp CEO in June 2015, said he has completed a formal review of the company and developed a turnaround strategy. The plan includes concentrating on growth areas of the market, cutting costs, and trying to build value in the company through share repurchases, dividends and long-term investments.
On the plus side, NetApp is making progress selling the clustered Data ONTAP (CDOT) operating software, even though it requires customers to perform a disruptive migration. It also reported a strong uptick in all-flash arrays, not including the SolidFire all-flash systems NetApp acquired for $870 million last December.
“NetApp does not need to completely reinvent itself …” Kurian said, adding the vendor is in a transformation that will require taking “significant steps to streamline the business and further advance our pivot to the growth areas of the market.”
The growth segment of NetApp’s products consists largely of CDOT, its all-flash arrays, E-Series performance arrays and OnCommand Insight management software. NetApp is phasing out its OEM business and the Data ONTAP 7-Mode product, which is being replaced by CDOT.
The plan is to return to growth by late 2017.
Kurian said NetApp is looking to reduce costs by $400 million annually, with half of that coming from cutting around 1,500 jobs. He said most of the layoffs will occur this quarter.
Kurian said CDOT is now running on 24% of NetApp’s installed FAS arrays, including nearly 80% of FAS arrays bought last quarter. The number of customers who bought CDOT last quarter increased by about 60% over last year.
NetApp executives said their all-flash revenue increased about 60% last quarter from the previous quarter, to around $150 million.
The focus on growth areas at the same time as NetApp makes cuts might not leave much opportunity to jump into new technologies. For instance, NetApp apparently has no plans to come out with a hyper-converged system. Kurian said NetApp can solve customers’ problems with its current products, such as SolidFire and its FlexPod reference architecture program with Cisco.
“We see what customers really want [from hyper-convergence] is essentially simplified provisioning and operational management, like our relatively simple pay-as-you-go building block architecture,” Kurian said. “And you will see [us] address those customer needs with both the SolidFire scale-out architecture, as well as exciting new innovations in the FlexPod lineup.”
With 2015 in the rear-view mirror, FalconStor CEO Gary Quinn says the storage software vendor has completed its transition phase and is ready to reverse its years-long streak of losing money.
“We believe that FalconStor has moved from its transition phase from when I first took over the company in July 2013, and we are now on a normal operating pattern in 2016 and beyond,” Quinn said Tuesday during FalconStor’s earnings call.
“Normal” for FalconStor means its FreeStor data protection and storage management software is fully in the market and subscriptions are coming in. It doesn’t mean the vendor is flush with sales or cash, though. For the final quarter of 2015, FalconStor reported revenue of $9.4 million, down from $11.8 million the previous year. The vendor lost $1.3 million in the quarter, compared with a loss of $2.1 million a year ago.
For the full year of 2015, FalconStor’s $48.6 million in revenue was up from $46.3 million in 2014 and its loss of $1.3 million compared to a $6.1 million loss for 2014. FalconStor finished the year with $13.4 million in cash.
Quinn and CFO Lou Petrucelly said the company’s goal is to at least break even for this year. FalconStor has suffered through years of losses and turmoil, including the 2011 suicide of founder ReiJane Huai in the wake of fraud charges.
FalconStor executives claim more than 170 customers are using FreeStor in production. Quinn said FalconStor has 0.1 percent of the software-defined storage market as defined by IDC. But he expects that to grow significantly and predictably as FalconStor moves from perpetual licensing to a subscription model. Because the subscription pricing is deferred revenue, the switch resulted in lower revenue over the last few quarters but Quinn said it will bring growth in the long run.
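The revenue effect of that licensing switch is mechanical: a perpetual license is recognized up front, while a subscription of equal value is recognized ratably over its term, so reported revenue dips before the recurring stream compounds. A sketch with illustrative numbers (not FalconStor's actual deal sizes):

```python
# Quarterly recognized revenue for one deal: perpetual vs. subscription.
def recognized_by_quarter(total, quarters, ratable):
    if ratable:
        return [total / quarters] * quarters        # spread over the term
    return [float(total)] + [0.0] * (quarters - 1)  # all up front

perpetual = recognized_by_quarter(100_000, 8, ratable=False)
subscription = recognized_by_quarter(100_000, 8, ratable=True)
# Same total contract value, but the subscription reports far less early on.
print(perpetual[0], subscription[0])  # 100000.0 12500.0
```

Over the full term both deals recognize the same amount, which is why Quinn frames the near-term revenue drop as a timing effect rather than lost business.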
However, FalconStor is losing a steady revenue stream from an OEM deal with Hitachi Data Systems (HDS). HDS sales of FalconStor virtual tape library backup software regularly accounted for more than 10% of FalconStor revenue, and came to 34% in the fourth quarter of 2014. HDS is now selling its own disk backup product from its 2014 Sepaton acquisition, which will relegate FalconStor software to occasional deals through HDS.
“I would not view them as a contributor going forward,” Quinn said of HDS. “It’s really more opportunistic on a one-off deal here and there where our technology is better than Sepaton or we have existing installed base customers who like our technology and are renewing it.”
Violin Memory’s latest all-flash arrays haven’t received the warm market reception the vendor hoped for, so it is releasing bundled systems to make them easier and cheaper to deploy.
Violin launched its Flash Storage Platform (FSP) 7700 with data reduction and protection features last year, hoping to make a renewed push into the all-flash world. But sales have been slow with only $12.5 million in total revenue and $6.3 million in product revenue in the third quarter of 2015. So this week Violin came out with what it calls Starter Kits to simplify sales.
Violin calls these the Violin Scalable Starter Kit and the Stretch Cluster Starter Kit. The Scalable Starter Kit includes two FSP 7700 array controllers, two Brocade Fibre Channel switches, two all-flash arrays with 35 TB each, and Violin Concerto OS7 and Symphony Management Kit. The Stretch Cluster kit adds a Stretch Cluster license that automates recovery of critical applications and data by using two data centers.
Customers can scale beyond 70 TB by adding flash drive shelves.
This is the first time Violin has bundled its arrays with switches. “Before, you would have to buy a 7700 system, switches and a storage shelf as discrete purchases,” said Keith Parker, Violin’s director of product marketing. “Now we’ve packaged everything together in a kit.”
The starter kits can save customers considerably. The list prices are $470,000 for the Scalable Starter Kit and $840,000 for the Stretch Cluster Starter Kit. That’s about half what it would cost to buy everything separately, Parker said.
Asigra is packaging its Cloud Backup software on the Oracle ZFS Appliance for cloud providers who would rather not purchase and maintain their own hardware.
Toronto-based Asigra has for decades sold backup software only to managed service providers (MSPs). As cloud backup grows in popularity, MSPs find their hardware infrastructure costs rising to keep up with capacity demands, Asigra executive vice president Eran Farajun said.
Farajun said Asigra has been looking to reduce costs for service providers for several years, beginning with a recovery license model. “Our partners said, ‘We like that but we have other costs in the stack, and that’s the hardware,’” he said. “We said, ‘We’re not a hardware company, we’re a software company.’ They said, ‘Go into your cave in Toronto and try to figure out a way to lower costs.’”
Farajun said Asigra looked at low-cost hardware and found the ZFS Storage Appliance. “We didn’t know Oracle even made storage,” he said. “We did trials and got good performance. We joined their OEM program. That gives us access to Oracle products and aggressive prices, and we can pass that on to our partners.”
He said for providers, buying Asigra backup on the Oracle appliances is “cheaper than cobbling together your own system. Some try using commodity hardware with CentOS, fully open-sourced, then they run into trouble and there’s nobody to call for support.”
Asigra actually does cobble together its own appliance for smaller partners. Last year it began bundling Cloud Backup on Supermicro appliances with FreeBSD open-source software. Farajun said the Asigra appliance scales to 100 TB, while the Oracle ZFS appliance starts at 100 TB.
Is Commvault back on track?
The backup vendor snapped a string of four quarters of year-over-year revenue declines when it reported revenue of $155.7 million last quarter, up two percent from the previous year. More impressively, its software revenue of $71.4 million increased 24% from the previous year.
While Commvault continued to make money during its sales declines, its $13.2 million income last quarter was its highest take in a year.
Now we’ll see if Commvault can gain momentum with its new platform launched in October. Wall Street analysts expect revenue of $156.3 million this quarter, a small uptick from last year.
Commvault spent the past year revamping its sales and marketing teams before the new data protection and management platform launch. Discussing the earnings Wednesday, Commvault CEO Bob Hammer said that transformation period is over.
Hammer said the latest software platform was a controlled release to select customers, with broader distribution expected this quarter. But he said the changes made to that release played a key role in attracting customers. Commvault reacted to claims that its software was too expensive and complicated by altering its pricing model and making it available in smaller bundles.
“As we went into this transformation, we had to segregate the product line, pricing, packaging …” he said. “We had to make some really significant underlying changes to the platform to align to the cloud.
“We experienced significant increases in the amount of large enterprise deals and had higher than normal close rates,” he added.
Commvault’s enterprise deals of more than $100,000 in a quarter represented 54% of its software revenue, and increased 33% from the previous quarter.
Hammer said more enterprises are moving backups to the cloud, which hurts storage hardware vendors but is a positive for Commvault because its software helps with the process. Commvault software uses the cloud as a repository, just as it would if the backups were going to disk or tape.
“As customers migrate to the cloud, adopt new IT infrastructures and deploy newly architected applications, they are looking for a strategic partner who has technology and services that can help them make the transition,” he said.
“AWS might have hundreds of different data centers, so we can move the data in those clouds, orchestrate those infrastructures, spin up compute, network and storage in the cloud, tie to a given application and manage and index all that data. So now customers have a complete picture of what the heck is going on with the data no matter where it resides. And we can federate across the different AWS silos or [Microsoft] Azure silos or between Azure and AWS or Azure and OpenStack, et cetera.”
Another thing helping Commvault is the largest backup software vendors are going through their own changes. EMC is getting acquired by Dell, Veritas just spun out from Symantec, Hewlett-Packard split up and IBM has been in a storage revenue freefall.
“I’m not going to get into a specific commentary on Veritas or what’s going on with EMC or some of the other larger competitors of ours, HP or IBM,” Hammer said. “But, clearly we’ve gotten significantly out in the front of every one of those competitors from a technical standpoint, a services and support standpoint, and we’re going to create additional distance between us and these competitors. We are on a much firmer foundation right now … and every one of the competitors I just mentioned has what I call significant architectural underlying issues that will take time and money to address.”
Commvault still has to worry about Veeam Software, which has been cutting into Commvault’s business from below. Veeam, a private company, said it had a 55% increase in bookings revenue in the fourth quarter of the year. Veeam said its bookings revenue for 2015 totaled $474 million. Commvault’s revenue for calendar 2015 was $581 million.
While waiting to become part of Dell, EMC is planning a massive flash injection into its storage systems.
David Goulden, CEO of EMC Information Infrastructure, teased new product launches for the VMAX, VNX, DSSD and Data Domain product lines Wednesday during EMC’s earnings conference call. Flash will play a major role in all of the primary storage arrays, including at least one new all-flash platform.
“Flash is one of the megatrends that’s changing the infrastructure business forever,” said Goulden’s boss, EMC CEO Joe Tucci.
Goulden didn’t give away too many details, but he said VMAX enterprise and VNX midrange arrays would be “re-architected” for flash. The VMAX and VNX are legacy arrays developed for hard disk drives but were retrofitted to accommodate solid-state drives (SSDs) when they became available for primary storage. Goulden said new all-flash versions will include substantial changes to take full advantage of new flash technologies.
In an attempt to set a record for using the word “flash” the most times in a sentence, Goulden said this quarter EMC will introduce “a new flash-optimized all-flash VMAX that will significantly change the way flash is deployed in high-end primary storage.”
It’s unclear if EMC will rename VNX or add another midrange product. While Goulden referred to VNX by name several times on the call, he also talked about a “mid-tier storage family” coming in the second quarter, “which will change the use cases for flash in the mid-tier.” Whether that is VNX or another family remains to be seen.
DSSD is a new product, with technology acquired when EMC bought Andy Bechtolsheim-founded startup DSSD in 2014. EMC has previewed server-based DSSD at events over the past year but the system has not been released. Goulden said DSSD will launch this quarter, calling it a “quantum leap” in flash. He said DSSD will deliver “mind-blowing performance, bandwidth and latency for high-performance business applications like Hadoop analytics and ultra high-performance databases.”
Goulden did not mention any updates to XtremIO, which is the market leader in all-flash systems with $1 billion worth of sales in 2015. He did mention integrated copy data management (ICDM) added to XtremIO last year that puts database copies on primary storage.
EMC will also release a software-only version of the Data Domain disk backup platform and new converged products from VCE. The vendor has been developing the virtual Data Domain for several years but wasn’t sure if it should cannibalize its popular hardware backup system.
Goulden said EMC and VMware have developed “a new next-generation hyper-converged appliance family” and VMware will make an announcement in February.
Despite any backup or hyper-converged developments, 2016 is shaping up as the year of all-flash for EMC. Or, as Goulden said, a shift to “all-flash, all the time” in the midrange and high end.
“Flash is not about a single product,” Goulden said. “It’s a key technology across our portfolio. We really think that the technology has advanced to the stage with the latest 3D NAND technology and things like 3.8 terabyte drives. Of course you need to architect your system to optimize to use something that big and that fast, which is why we talked about the re-architecting. We can really come to market with a complete family of VNX, VMAX, XtremIO, DSSD, leveraging this latest technology and basically use all-flash all the time for primary storage.”
Product overlap is nothing new to EMC, but how will it explain to customers which all-flash platform to use?
“DSSD is going to address a whole new class of workloads,” Goulden said. “XtremIO and VMAX are playing in broadly similar markets, but with different attributes. The mid-tier line fits underneath that. So we really have the market exceptionally well covered and of course we’re leading with an all-flash agenda.”