When technology executives discuss initial public offerings these days, they usually say the goal has shifted from rapid growth to fiscal responsibility. In other words, investors want to see a path to profitability more than wild revenue increases.
That has particularly been the case in storage. The recent initial public offerings (IPOs) of flash pioneer Pure Storage and hyper-converged vendor Nutanix showed that companies growing revenue significantly often spent so much money doing so that they racked up hundreds of millions of dollars in losses. Amid a weak IPO market, investors began preaching that companies needed to pay more attention to the bottom line than to top-line growth.
Then along comes the Tintri IPO. Tintri filed its S-1 with the SEC last week to become a public company. Its finances look a lot like those of Pure and Nutanix, and of Nimble Storage and Violin Memory before them, when they went public. Storage array vendor Tintri has grown revenue significantly in recent years, only to see its losses accelerate anyway. In Tintri’s case, it has lost $339 million since inception.
That means either investors have decided they don’t mind losses so much after all, or Tintri will have trouble raising the $100 million or so it is looking for from its IPO.
Tintri started shipping its VMstore arrays in 2011, but the bulk of its revenue and losses have come over the past three years. The Tintri IPO filing reports revenue of $50 million for its fiscal year ending Jan. 31, 2015, $86 million for the following year and $125 million last year. That comes to revenue growth of 73% and 45% over the past two years. But annual losses over that period were $70 million, $101 million and $106 million, showing that more revenue only produced greater losses. Tintri raised $260 million in five venture funding rounds to pay off most of the deficit.
As with Pure, Nutanix and other smaller storage companies, Tintri faces the problem of having to pour more money into sales and marketing than it generates from sales to compete with IT giants. Only in the past year has Tintri’s revenue exceeded its sales and marketing budget, which came to $109 million. But it pumped another $53 million into research and development. That is a small sum compared to the budgets of Dell EMC, NetApp, Hewlett Packard Enterprise (HPE), Hitachi Data Systems and IBM, but enough to keep Tintri in the red.
Tintri IPO filing details spending increases
The Tintri IPO filing made it clear the vendor remains in growth mode, which means its expenses will rise.
“We anticipate that our operating expenses will increase substantially in the foreseeable future as we continue to hire additional employees, develop our technology and enhance our product and service offerings, expand our sales and marketing teams, make investments in our distribution channels, expand our operations and prepare to become a public reporting company,” Tintri said in its S-1 filing.
Tintri’s revenue growth has come from its tapping into flash and enterprise cloud storage. It began selling virtual machine-centric storage for VMware customers. That has evolved into a private cloud platform through Tintri’s Connect services that include analytics, automation and self-service. To boost performance, Tintri added an all-flash platform in 2015. With the help of Tintri’s VMstore T5000 all-flash systems, the vendor’s average selling price rose from $111,000 to $160,000 over the past two years.
Tintri’s customer base quadrupled over the past three years to 1,273, while its employee headcount rose from 177 to 527 in that span.
If the Tintri IPO goes forward as planned, it will be in the same situation as Pure and Nutanix, trying to grow its way to profitability. Or it could end up like Nimble Storage, which sold itself to HPE after it became clear it would not reach its break-even goal anytime soon.
Recent Hewlett Packard Enterprise (HPE) storage acquisitions will play a key role in “reengineering” the company, CEO Meg Whitman said.
During HPE’s earnings call Wednesday night, Whitman called out hyper-converged startup SimpliVity and flash array vendor Nimble Storage among a list of “strategic acquisitions in key growth segments.”
“These were all the right strategic moves for HPE’s long-term success, but they were not done in a vacuum,” she said. “We’ve been reengineering our company while facing challenging market conditions, including stiff competition.”
For now, 3PAR all-flash arrays remain the highlight of HPE storage. Whitman said 3PAR all-flash revenue last quarter increased 33% year over year compared to an overall 14% decline of HPE storage sales. That tells you HPE isn’t doing a lot of business with its other storage products, and needs Nimble and SimpliVity to fill gaps.
Nimble, which sells all-flash and hybrid flash and disk arrays, also brings cloud-based predictive analytics to HPE storage. “We are seeing a rapid shift to all-flash,” Whitman said. “We’re extremely well-positioned here given our 3PAR portfolio and recent Nimble and SimpliVity acquisitions.”
All-flash and hyper-converged systems are high growth areas that still have relatively low adoption with room to grow.
“With Nimble, we now have a complete world-class flash storage portfolio from entry-level to the high end in a market growing around 17% per year,” Whitman said. “Nimble also brings a simple user experience platform based on predictive analytics that we plan to roll out across our storage portfolio.”
CFO Tim Stonesifer said HPE flash revenue is still negatively impacted by NAND shortages but he expects that to “loosen up” in the second half of 2017.
Neither SimpliVity nor Nimble has driven much revenue yet for HPE. HPE closed its $650 million SimpliVity deal Feb. 17, and its $1.2 billion Nimble purchase closed April 17. A Nimble product was part of an HPE storage launch last week, however. HPE said it would ship the Nimble Storage Secondary Flash Array SF-Series, as well as a new 3PAR StoreServ all-flash array and an MSA entry-level hybrid flash array.
HPE is selling SimpliVity software on ProLiant servers. Whitman said the SimpliVity product line is more integrated than Nimble’s, but she expects the weight of HPE to increase sales of both new product platforms. Nimble research and development will merge with the 3PAR R&D team.
The 3PAR product line came to Hewlett-Packard in a $2.35 billion 2010 deal.
As for future acquisitions, Whitman didn’t list any specific companies or technologies she would like to add but gave general guidelines for HPE deals.
“We have to buy it right,” she said of HPE’s acquisition strategy. “It has to be complementary technology that can leverage our distribution channels.”
She said HPE’s first option is to innovate organically, and it is also funding startups through its Pathfinder venture investment group. HPE helps fund storage startups Cohesity, Coho Data, Hedvig and Scality.
“I’m not looking for venture returns there,” she said. “I’m looking for companies that further our strategy of powering hybrid IT, powering the intelligent edge and the services that we can weave into our solutions.”
Seagate this week launched its highest capacity 10,000 rpm 2.5-inch, 12-gigabit per second SAS hard disk drive (HDD), which includes a dash of flash. The Enterprise Performance 10K HDD is designed for enterprise workloads such as databases, online transaction processing, virtual desktop infrastructure, and file and print servers.
The new Seagate SAS HDD comes in 2.4 TB, 1.8 TB, 1.2 TB and 600 GB capacity models, and infuses 16 GB of flash cache to speed reads. Barbara Craig, Seagate senior product marketing manager of enterprise HDDs, said the Enterprise Performance 10K HDD also adds firmware-based advanced write caching that can improve random writes by approximately 60% over Seagate’s prior generation.
“It’s kind of a poor man’s SSD,” Craig said.
Craig said the flash cache makes the new Seagate SAS HDD three times faster than previous 10K drives without flash. The advanced write caching technology uses enhanced algorithms and 8 MB of non-volatile cache and media cache, she said.
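The reason non-volatile write caching helps random writes can be sketched generically. The toy model below is purely illustrative and has nothing to do with Seagate's actual firmware: it absorbs random writes into a small cache, then flushes them in address order, so the disk head sweeps once instead of seeking for every write.

```python
# Toy model of write-back caching: random writes land in a small
# non-volatile cache and are later drained in sorted, near-sequential
# order. A generic illustration, not Seagate's implementation.

class WriteCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.pending = {}   # block address -> data

    def write(self, addr, data):
        """Absorb a random write; flush when the cache fills."""
        self.pending[addr] = data
        if len(self.pending) >= self.capacity:
            return self.flush()
        return []

    def flush(self):
        """Drain pending writes in ascending address order, turning
        scattered random writes into one near-sequential pass."""
        ordered = sorted(self.pending.items())
        self.pending.clear()
        return ordered

cache = WriteCache(capacity_blocks=4)
for addr in [90, 10, 55, 20]:
    flushed = cache.write(addr, b"x")
print([a for a, _ in flushed])   # flushed in address order: [10, 20, 55, 90]
```

The win comes entirely from reordering: the drive acknowledges each write as soon as it hits the non-volatile cache, and pays the mechanical seek cost only once per batch.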
The Seagate SAS HDD has a five-year warranty, a mean time between failures of 2 million hours, and supports the company’s ninth generation of magnetic recording technology and enterprise firmware.
Can HDDs hold off SSDs in enterprise?
The high-speed SAS HDD market tends to be one where enterprises consider flash solid-state drives (SSDs), but Seagate claims its new HDD holds appeal across a range of enterprise workloads.
“They may be turned off by availability of flash right now. It’s hard to get, and the prices are high,” Craig said. “Maybe some small- to medium-sized businesses or even large data centers, the performance on the 10K drive is close enough, and the cost is that much more impressive for some customers.”
John Rydning, an IDC research vice president for HDDs, said he does not envision enterprise SSDs reaching price-per-GB parity with 10,000 rpm HDDs over the next five years. Rydning predicted sustained demand for 10,000 rpm HDDs for storage workloads where they provide “good enough” performance. He wrote in an email that 10,000 rpm HDDs continue to provide a “good balance of performance and value for several storage workloads.”
The new Seagate SAS HDD ships with a variety of different model numbers. The 2.4 TB HDD is model ST2400MM0129, without encryption. The self-encrypting model that supports the Federal Information Processing Standard is model ST2400MM0149.
The HDDs support Seagate’s FastFormat technology to enable customers to switch between applications with block sizes of 512 bytes and 4 KB.
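As rough arithmetic on what the two block sizes mean (generic math, not a description of FastFormat internals), a drive exposes eight times fewer addressable sectors at 4 KB than at 512 bytes, which cuts per-sector addressing and ECC overhead; the 2.4 TB capacity below is the drive's top model from earlier in the article.

```python
# Sector counts for 512-byte vs. 4 KB blocks on a 2.4 TB (decimal) drive.
def sector_count(capacity_bytes, block_size):
    return capacity_bytes // block_size

capacity = 2_400_000_000_000          # 2.4 TB, decimal bytes
print(sector_count(capacity, 512))    # 4,687,500,000 sectors at 512 B
print(sector_count(capacity, 4096))   # 585,937,500 sectors at 4 KB
```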
The new Seagate SAS HDD is currently shipping to major OEMs such as Super Micro and Huawei for qualification, according to Craig. She said channel shipments will begin in the middle of August.
Nutanix’s success selling hyper-converged software depends largely on how well it both competes with and partners with the large server vendors.
Nutanix revenue beat expectations last quarter, after the vendor gave a disappointing forecast three months ago. Its revenue of $192 million increased 67% and smashed its guidance of $180 million to $190 million for the quarter. The hyper-convergence vendor lost $112 million – up from $46.8 million a year ago – despite the increase in sales, but retains $350 million in cash and investments. Nutanix also forecast higher than Wall Street expected, guiding for $215 million to $220 million in revenue this quarter.
On the earnings call, Nutanix CEO Dheeraj Pandey said the vendor picked up 790 new customers in the quarter including 50 new Global 2000 customers. He credited OEM partners Dell EMC and Lenovo for helping it land bigger deals.
Nutanix executives spent much of the Thursday night earnings call discussing their relationships with the large server vendors Nutanix both competes and partners with.
Here’s a scorecard of Nutanix server relationships:
Dell EMC. The largest storage and server vendor is determined to be No. 1 in hyper-converged, and is looking to knock Nutanix from that perch with its VMware vSAN hyper-converged software and VxRail HCI appliances. Yet, Dell EMC also rebrands Nutanix software on its PowerEdge servers in a deal that Dell struck before it acquired EMC. Dell EMC XC Series appliances account for approximately 10% to 15% of Nutanix revenue each quarter. Nutanix did not give the exact figure for last quarter, but CFO Duston Williams said it was below 15%.
“We compete and cooperate with Dell on a deal-by-deal basis,” Pandey said.
Nutanix revenue through Dell declined slightly last quarter from the previous quarter.
Lenovo. Lenovo doesn’t have a homegrown HCI product, and makes several vendors’ software available on its servers. Nutanix is its preferred partner, though, with an OEM deal similar to the one Nutanix has with Dell EMC. Nutanix executives said Lenovo sales rose last quarter, making up for the Dell declines.
“Lenovo is actually a great sign up for us,” Pandey said.
“Our Lenovo bookings increased sharply,” Williams said.
IBM. Nutanix and IBM last week said IBM would make Nutanix software available on RISC-based Power servers. Nutanix doesn’t have any revenue through IBM yet, but Pandey said the deal had great potential.
“I think IBM could be a dark horse,” Pandey said. “What’s interesting is, for the first time, a single control plane, a single data plane, a single hypervisor runtime can now span Intel x86 and Power microprocessor hardware.”
Cisco. Like Dell EMC, Cisco has its own HCI platform. Unlike Dell EMC, Cisco has no official partnership with Nutanix. But Nutanix and Cisco channel partners bundle Nutanix software on Cisco UCS servers. Pandey said he hopes to turn Cisco into a willing partner, even if Cisco has its own HyperFlex product.
“It’s perilous to predict what will happen in these situations,” Pandey said. “But one thing I’ve learned about the art of negotiation is that what was non-negotiable yesterday could probably become negotiable tomorrow.
“We’re hoping to have this process play out where Cisco understands what HyperFlex is, and Cisco also understands the value that we can bring to their rackmount servers. So I think there is something between us.”
Hewlett Packard Enterprise. Nutanix software is certified to run on HPE ProLiant servers, and sold in channel bundles similar to Nutanix on UCS. While Cisco has been mostly silent on Nutanix encroachment, HPE makes it clear it does not appreciate Nutanix piggybacking on ProLiant. HPE marketing VP Paul Miller made that clear in a blog post titled, “Don’t be misled … HPE and Nutanix are not partners.” The blog urged customers to buy HPE’s SimpliVity software.
Nutanix executives said little about the HPE relationship on the call, except that it is early and they hope to build a relationship with HPE through successful channel sales.
Pandey made it clear Nutanix wants to provide its software on as many platforms as possible.
“We continue to build with ubiquity by offering customers choice of hardware, choice of hypervisor and choice of public cloud providers for secondary storage, all managed by Prism,” he said. “Building an operating system is a journey, and no more than one or two are successful each decade. It requires an immense focus on applications, interoperability, performance, security, automation and reliability, and to make it all ubiquitous, that is, location agnostic, is the biggest engineering challenge.”
A DataCore IT survey on the state of software-defined storage, hyper-converged systems and cloud storage showed a gradual uptake of some of the most heavily promoted new technologies.
For instance, flash use is growing, yet it will represent only a small percentage of overall storage capacity in 2017, according to the DataCore IT survey of 426 customers and prospective customers conducted from late 2016 through April 2017 via Survey Monkey.
The majority (76%) of the surveyed IT professionals indicated flash would represent less than 20% of their storage capacity in 2017. Among that group, 14% weren’t using flash at all, and 32% projected less than 10% of their storage capacity would be flash-based.
However, the IT survey respondents listed all-flash arrays as their top preference to overcome performance problems, followed by software acceleration on the host machine and switching to in-memory databases.
Flash also factored into the responses to the question: “What technology disappointments or false starts have you encountered in your storage infrastructure?” Flash failed to accelerate applications for 16% of the IT survey respondents.
The most cited disappointments were “cloud storage failed to reduce costs,” selected by 31%, and “managing object storage is difficult,” mentioned by 29%.
Confusion over hyper-converged
The DataCore survey indicated there’s confusion over what the term “hyper-converged” means. A plurality (41%) of survey respondents think hyper-converged means software that is tightly integrated with the hypervisor but hardware agnostic. Another 27% view hyper-converged as an “integrated appliance” with “hardware and software locked together.”
The majority of the IT survey respondents (67%) have not deployed hyper-converged infrastructure (HCI), although 34% said they are strongly considering it. The primary reason for ruling out hyper-converged was lack of flexibility, followed by expense and vendor lock-in.
Among those who have deployed HCI, 6% have standardized on it, 7% have a few major deployments, and 20% have a few nodes. Top reasons for deploying or evaluating hyper-converged systems were simplifying management (48%), ease of scale-out (39%), and reducing hardware costs (35%). Leading use cases were databases, data center consolidation, enterprise applications such as customer relationship management (CRM) and enterprise resource planning (ERP), and virtual desktop infrastructure (VDI).
Surveyed IT pros noted the following business drivers for implementing software-defined storage: simplify management of different models of storage (55%), future-proof infrastructure (53%), avoid hardware lock-in from storage manufacturers (52%), and extend the life of existing storage assets (47%).
When DataCore conducted the IT survey in 2015, a lower percentage (45%) indicated they were trying to simplify management of different storage classes by automating frequent or complex storage operations.
Top use cases that IT pros noted for public cloud storage were long-term archive (35%), back up to cloud and restore on premises (33%), and disaster recovery (33%). A small percentage (11%) said they use the public cloud for primary storage, but a substantial percentage (40%) are not currently evaluating or using the cloud for storage.
For IT pros unwilling to move applications to the public cloud, the primary reasons are security (57%), sensitive data (56%) and regulatory requirements (41%). The applications they’re most willing to move to public or hybrid cloud infrastructure were select enterprise applications such as Salesforce (33%), data analytics (22%), databases (21%), and virtual desktop infrastructure (VDI), according to the survey.
DataCore said the 426 IT professionals who responded to the survey are using or evaluating software-defined storage, hyper-converged systems and cloud storage. The majority of the respondents (84%) were from North America and Europe.
NetApp quietly slipped an acquisition of storage memory software startup Plexistor into an earnings call otherwise noteworthy for strong results last quarter and a disappointing forecast for this quarter.
NetApp CEO George Kurian disclosed the Plexistor acquisition during the Wednesday night earnings call. NetApp did not include the acquisition in its press release or filing with the SEC, and provided no financial details.
Plexistor, which developed software that uses nonvolatile memory as primary storage, fits into NetApp’s strategy of trying to dominate in flash and other emerging storage technologies.
Plexistor came out of stealth in late 2015 with SDM – a software-defined memory product designed to deliver high-capacity nonvolatile storage at near-memory speed. The vendor chased customers running big data analytics and in-memory database processing.
SDM talks directly to a physical memory device, presenting DRAM and persistent storage in one namespace. It uses dual-inline memory module (NVDIMM) memory cards, NVMe flash and a spinning disk tier.
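The single-namespace idea can be sketched as a toy tiered key-value store. The tier names and promote-on-read policy below are illustrative assumptions for the sake of the sketch, not Plexistor's design: callers see one get/put API, while data actually lives on whichever tier holds it, with hot data promoted toward memory.

```python
# Toy sketch of a tiered storage namespace: one API over several tiers
# of decreasing speed. Purely illustrative; not Plexistor's SDM.

class TieredStore:
    def __init__(self):
        # fastest to slowest; real tiers would be DRAM, NVDIMM,
        # NVMe flash and spinning disk rather than plain dicts
        self.tiers = [("dram", {}), ("nvdimm", {}), ("nvme", {}), ("disk", {})]

    def put(self, key, value, tier="disk"):
        dict(self.tiers)[tier][key] = value

    def get(self, key):
        """One namespace: search tiers fastest-first, promote on hit."""
        for name, store in self.tiers:
            if key in store:
                self.tiers[0][1][key] = store[key]   # promote to DRAM tier
                return name, store[key]
        raise KeyError(key)

store = TieredStore()
store.put("row:42", b"payload", tier="nvme")
print(store.get("row:42"))   # first read is served from the nvme tier
print(store.get("row:42"))   # repeat read is served from dram after promotion
```

The point of the sketch is that the application never addresses a tier directly; placement and promotion happen behind the single namespace.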
In late 2016, Plexistor bundled SDM on Supermicro servers and Micron NVDIMM cards in a product called Persistent Memory over Fabric Brick (PMoF Brick). PMoF Brick was aimed at big data analytics and high-performance NoSQL databases.
It’s unclear whether NetApp will sell Plexistor as a separate product or embed the technology into its flash storage. Kurian referred to Plexistor as “a company with technology and expertise in ultra-low latency persistent memory. This differentiated intellectual property will help us further accelerate our leadership position and capture new application types and emerging workloads.”
SDM runs on servers, making it a candidate for incorporation in NetApp’s coming hyper-converged product that will use SolidFire’s all-flash technology. NetApp has yet to formally launch the hyper-converged appliance.
The Plexistor acquisition is an example of how NetApp is trying to use flash – as well as the cloud – to bounce back after a tough two-year stretch. Kurian said the vendor has turned the corner, offering its results last quarter as proof.
NetApp’s revenue of $1.48 billion last quarter increased 7.3% over the previous year and beat the consensus Wall Street analyst expectation by about $40 million.
Kurian called the last fiscal year “a pivotal year for NetApp. We started the year with bold commitments, and we delivered against all of them. We did what many said could not be done: return the company to growth while simultaneously expanding operating margins. With each successful step in our transformation, my confidence in our ability to create new opportunities and execute against those opportunities grows.”
NetApp’s problems may not be completely behind it, though. Its forecast for this quarter fell short of expectations. NetApp guided for $1.24 billion to $1.39 billion this quarter, which amounts to around 2% growth at the midpoint and falls below analysts’ expectations.
When Kurian became CEO two years ago, NetApp struggled with its flash strategy and with a disruptive process that slowed customer migrations from its Data OnTap 7-Mode operating system to Clustered Data OnTap (CDOT). The vendor was stuck in a cycle of flat or declining revenue, which it didn’t snap until the final quarter of 2016.
NetApp now stands second behind Dell EMC in all-flash revenue and Kurian said most of the capacity on NetApp FAS arrays has moved to CDOT. He said 95% of FAS systems that shipped last quarter had CDOT installed. “The transition from 7-Mode to Clustered OnTap is now behind us,” he said.
The transition from disk to flash continues. Kurian called it the “early innings” in flash, and said NetApp’s all-flash revenue grew almost 140% last quarter. He said NetApp’s All-Flash FAS (AFF), EF Series and SolidFire are on pace for $1.4 billion in revenue over the next year.
“We are winning with flash and expanding our intellectual property in this market, positioning us for success in the multiyear transition from disk to flash,” he said.
As for the low guidance, Kurian said NetApp would rather err on the side of caution. “We really are giving realistic and, in some cases, conservative estimates,” he said. “We want to make sure we meet or beat every commitment we made, as we have the last four quarters.”
Pure Storage executives claim the flash pioneer is on pace for $1 billion in revenue for 2017, plus its first profitable quarter by the end of the year.
Neither goal is assured, but both seem possible following Pure’s first-quarter earnings report Wednesday. The vendor reported revenue of $183 million, up 31% from last year. It cut losses only slightly, to $62 million from $64 million a year ago, but executives said spending decreases in the second half of the year, along with revenue growth, should bring it past break-even for the first time.
Pure forecast revenue of $214 million to $222 million this quarter and $975 million to $1.025 billion for the year. That annual prediction includes a second-half surge in revenue, and Pure will have to hit the midpoint of its annual guidance to achieve the $1 billion goal.
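The arithmetic behind the $1 billion target is simple, using the guidance figures above in millions of dollars:

```python
# Midpoint of Pure's full-year revenue guidance range, in $ millions.
low, high = 975, 1025
midpoint = (low + high) / 2
print(midpoint)   # 1000.0, i.e. exactly the $1 billion mark
```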
CEO Scott Dietzen cited the $1 billion and profitability goals on the Wednesday night earnings call, gushing that “all of that is setting up 2017 to be Pure’s best year yet.”
Of course, there is still a lot of 2017 left. Dietzen said he is counting on Pure Storage flash products taking advantage of hot industry trends. He expects to cash in on the emergence of nonvolatile memory express (NVMe) solid-state drives (SSDs) with new Pure Storage FlashArray//X systems, continued growth of Pure Storage FlashBlade unstructured data storage, and the need for storage for the emerging private cloud, artificial intelligence and machine learning markets.
Dietzen said FlashBlade, which became generally available at the start of 2017, is selling at twice the rate FlashArray did when it first launched nearly six years ago. “FlashBlade is transforming the unstructured data market in the same way FlashArray revolutionized structured data,” he said.
Pure Storage FlashArray//X is another key to Pure’s achieving its goals. While Pure currently trails Dell EMC, NetApp and Hewlett Packard Enterprise in all-flash revenue, the vendor is looking to pick up organizations that want the improved speed of NVMe over current SSDs. Neither Dell EMC, NetApp nor HPE has an all-NVMe system yet, although all major flash storage vendors are expected to add NVMe eventually.
“We see a new set of use cases that NVMe opens up,” said Matt Kixmoeller, Pure’s vice president of products. “Certainly, faster database type workloads … but we’re also really going after consolidation of cloud providers. A lot of the cloud vendors out there have really consolidated on server DAS over the past few years, and now we have an opportunity to go in there with NVMe and take the flash out of each of those servers and consolidate it at the top of the rack to drive more efficiencies for them.”
Dietzen said Pure is looking to become the storage of choice for “roughly 80% of enterprise workloads not currently a candidate for the public cloud.” He said enterprises are turning to private cloud in great numbers.
“While we occasionally compete with the big three public clouds … our customers use Pure’s data platform in conjunction with the public cloud, particularly for datasets that are too large to move across the internet,” he said.
NEW ORLEANS – Veeam Software opened its user conference with the launch of the latest version of its data protection software, moving deeper into cloud and physical device support.
Veeam Availability Suite v10, rolled out today at VeeamON, adds continuous data protection, support for network-attached storage and native object storage. The object storage support includes data on Amazon Web Services (AWS) and Microsoft Azure. Veeam Availability Suite v10 frees up primary backup storage with policy-driven automated data management for long-term retention and compliance. The major upgrade to Veeam’s flagship product is in technical preview.
“V 10 takes [Veeam] to a multi-cloud world full speed,” said co-founder Ratmir Timashev. “V 10 is where everything comes together.”
Veeam Continuous Data Protection (CDP) replicates data to private or managed public clouds. The default recovery point objective (RPO) setting in Veeam Availability Suite v10 is 15 seconds. CDP is commonly found in data protection products, especially those emphasizing rapid recovery.
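What an RPO actually bounds can be shown with a minimal sketch (generic, not Veeam's implementation): the RPO is the maximum number of seconds of changes that may be lost between the last replicated point and a failure.

```python
# Toy RPO check for continuous replication. Generic illustration only.

RPO_SECONDS = 15   # the article's default setting

def data_loss_window(last_replicated_at, failure_at):
    """Seconds of changes lost if the source fails at failure_at."""
    return failure_at - last_replicated_at

def meets_rpo(last_replicated_at, failure_at, rpo=RPO_SECONDS):
    return data_loss_window(last_replicated_at, failure_at) <= rpo

print(meets_rpo(last_replicated_at=100, failure_at=112))  # True: 12 s lost
print(meets_rpo(last_replicated_at=100, failure_at=130))  # False: 30 s lost
```

In practice this is why CDP products replicate near-continuously: the shorter the gap between replicated points, the smaller the worst-case loss window.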
Other enhancements and features in Veeam Availability Suite include:
- Veeam Availability for AWS. Delivered through a partnership between Veeam and cloud backup and disaster recovery provider N2WS, the feature offers cloud-native, agentless backup and availability to protect and recover AWS applications and data. Availability for AWS is geared toward helping enterprises migrate to and manage a multi-cloud or hybrid cloud environment.
- Veeam Agent for Microsoft Windows. Veeam announced this feature in August, but it became generally available Wednesday. It is designed to provide availability for Windows-based physical servers, workstations and endpoints, as well as Windows workloads running in public clouds.
Veeam previously released its Agent for Linux, which provides availability for public cloud and physical workloads hosted by Linux-based servers and workstations running on premises or in the public cloud.
“You need an availability strategy that’s going to extend beyond your virtualized workloads,” John Metzger, vice president of product marketing, said during the general session at VeeamON Wednesday. “Protecting workloads is important, but ensuring availability of those workloads is critical.”
Veeam Availability Suite includes Veeam Backup & Replication and Veeam ONE.
General availability of Veeam Availability Suite v10 is projected for late 2017. Pricing is not available at this time.
Veeam earlier this week announced changes to its executive team.
NEW ORLEANS — Veeam Software has changed CEOs for the second time in less than a year.
On the eve of the VeeamON user conference this week, the data protection software vendor elevated two executives into co-CEO positions as it strives to become a billion-dollar company.
Peter McKay, previously COO and president, and Andrei Baronov, co-founder and CTO, will serve as co-CEOs. McKay will retain his title as president and Baronov will continue as CTO. Former Veeam CEO William Largent moves into a new role as chairman of the company’s Finance & Compensation Committees.
The moves come 11 months after Largent replaced Veeam’s other founder, Ratmir Timashev, as CEO, and former VMware executive McKay joined the company as COO/president. Timashev remains with Veeam as a director of the private company.
McKay will lead Veeam’s “go-to-market,” finance and human resources functions, and work with Baronov to drive future growth, according to the company. The go-to-market strategy will specifically focus on the company’s continued expansion into the enterprise and cloud segments, as well as accelerating growth into the Americas and Asia/Pacific markets.
Baronov will oversee Veeam’s research and development, market strategy and product management functions. Largent will be responsible for the oversight of all corporate governance matters, tax structure, investment management and internal audits.
Founded in 2006, Veeam has a goal of becoming a $1 billion revenue company by 2018 and a $1.5 billion company by 2020, McKay said today at VeeamON.
Veeam recently reported its 2016 revenue bookings at $607 million.
“As we continue to grow and scale our business, we need to do it the right way,” McKay said.
Veeam reported about 2,500 global employees at the end of 2016 and is looking to add 800 over the next year, McKay said. The company plans to invest $126 million in marketing in 2017, which is about 20% of its revenue.
Veeam is looking to expand in four specific areas: geographic, platform (physical, virtual and cloud), segment (increased investment in SMB, commercial and enterprise markets) and partners.
Before joining Veeam, McKay was senior vice president and general manager of the Americas at VMware. He was also CEO of startups Desktone, Watchfire and eCredit.
“Peter took the company to the next level,” Timashev said of McKay’s first year at Veeam.
Veeam is growing faster and is more innovative now, Timashev said.
Veeam claims a total of 242,000 customers, and says it is adding 4,000 customers each month.
“There is an unbelievable opportunity in front of us,” McKay said. “We have to be bold.”
Nutanix is making its hyper-converged infrastructure (HCI) software available on another server platform, this time with the server vendor’s full cooperation.
Nutanix and IBM today disclosed an OEM deal for IBM to sell Nutanix HCI software on Power Systems servers. The deal gives IBM an HCI system and brings Nutanix beyond the x86 platform where hyper-convergence is dominant.
Greg Smith, Nutanix senior director of technical marketing, said IBM will sell Nutanix HCI software on IBM-branded turnkey appliances beginning sometime in 2017. This is different from Nutanix’s recent initiatives to make its HCI software available through channel partners on Cisco and Hewlett Packard Enterprise servers. Neither Cisco nor HPE was a willing partner, as both sell competing products.
“This allows our software to run on a different class of server,” Smith said. “We have done well on x86 platforms, and this allows us to venture into a different market segment. Power systems are used for more advanced big data, machine learning and AI cognitive workloads. These are demanding applications that demand high performance.”
Unlike x86 systems running Nutanix software, the IBM HCI systems will only support the Nutanix Acropolis Hypervisor (AHV). Nutanix started out supporting only VMware hypervisors, and most of its customers still use VMware virtualization. But Smith said AHV will be incorporated on all of the IBM Nutanix systems. Smith said AHV is designed for cloud-native applications and running microservices and containers.
“The objective is for [IBM] customers to run the Nutanix AHV hypervisor,” he said.
IBM sold off its x86 server platform to Lenovo and sees no need to re-enter the x86 market with HCI, according to IBM storage general manager Ed Walsh. In an interview with TechTarget in February, Walsh said IBM’s converged infrastructure platform provides the same benefits to customers as x86-based HCI.