I can’t tell you exactly how many storage products launched in the past year, but I know it was in the hundreds, more than I can count. That’s because hardly a day goes by when I don’t receive a briefing, a press release or a pitch for a briefing from a storage vendor. The rest of the TechTarget storage editorial staff can tell you the same.
I do know how many storage products won Storage Magazine/SearchStorage Storage Products of the Year awards last year: 14. That’s not many, considering all the new products that ship in a year.
Now it’s time to start judging the hundreds of products that came out in 2018, and pick the 14 or so that deserve the honor this year. If one of those products is yours, you can enter it for consideration by our judging panel, made up of our editors, independent storage analysts and end users. You can find the entry form here. The deadline has been extended to Friday, Sept. 28, 2018, at 5:00 p.m. The form also includes the judging criteria and tips for completing the form if you’re new to this. This is the 17th year we’ve been giving these awards, so many of you have been through this before and know how prestigious they are.
Check out this year’s categories: Storage Arrays, Software-Defined/Cloud Storage, Storage Management Tools, Backup and Disaster Recovery Software/Services and Backup and Disaster Recovery Hardware. That pretty much covers the gamut of storage products, so you should be able to find your category. But you have to be in it to win it, so make sure you don’t miss the deadline and get shut out.
Besides proving there is plenty of growth still going on in the hyper-converged infrastructure market, Nutanix’s last earnings report showed HCI has moved well beyond VDI and niche applications.
Nutanix closed a $20 million deal last quarter, its largest ever. The unnamed Department of Defense agency uses the Nutanix platform “to power combat edge clouds around the world,” according to Nutanix CEO Dheeraj Pandey. He said the customer will use Nutanix in 15 remote sites.
Then there was a financial services firm that spent $5 million on a Nutanix deal last quarter. The HCI pioneer claimed 23 deals worth more than $1 million in the quarter.
It’s not only Nutanix that’s seeing larger HCI deals. VMware also closed its largest deal for its vSAN HCI platform last quarter – a 1,200-retail store server refresh involving Dell EMC VxRail appliances.
“It’s part of the journey of maturation of hyper-convergence,” Pandey said in an interview after the earnings call. “We’re solving big problems for the enterprise. And there are different approaches. The approach we take is, we don’t bundle anything, we don’t have the luxury to bundle things with larger deals that are beyond just hyper-converged. For us, it’s all product and quality and customer service, and how we handle the highest-end workload.”
Nutanix is now nine months into its shift to a software-centric business model. It still sells branded appliances but has flexible licensing options. Customers can choose to spread their software licenses across Nutanix appliances or servers purchased through OEM deals with Dell EMC, Lenovo and other hardware partners.
The change to a software model hasn’t altered Nutanix’s pattern of growing revenue while losing money. Nutanix reported $304 million in revenue last quarter, up 20% year over year, and lost $87.4 million compared with a $66.1 million loss in the same quarter a year ago. For the full year, Nutanix generated $1.16 billion in revenue, up from $845.9 million the previous year. Its full-year loss of $297.2 million was smaller than the $379 million loss the year before.
Nutanix customers now number more than 10,000, including 1,000 added last quarter.
“The biggest change is around consumption,” Pandey said of the move to a software model. “When people have portability, they can take these software licenses and run them on different hardware platforms. Many companies don’t want to buy hardware up front because of Moore’s Law and the commoditization of hardware. They want to buy more software and less hardware up front.”
Nutanix’s consumption model will likely change again over the next year as more cloud-based subscriptions come online. Its Xi Leap – a cloud-based disaster recovery service with one-click failover and failback – is due to be generally available by the end of the year.
Beam, Nutanix’s multi-cloud management dashboard, is available now. And in August, Nutanix spent $165 million to acquire Frame, a startup developing desktop-as-a-service technology that Nutanix intends to make available as part of Xi. Pandey said the goal is to make Frame available in early 2019.
So now that Nutanix has $1 billion in annual revenue and projects to hit $3 billion in three years, what’s its timeframe for profitability? Pandey refers to the Nutanix strategy as “measured growth,” and the company funds its spending through free cash flow. Nutanix had $22.7 million in operating cash flow last quarter.
“Right now we believe Nutanix customers are willing to pay us more, and it’s important that we create the foundation of a customer base that continues to buy from us,” he said. “We’re going to spend a lot of money this year, but we don’t want to touch the bank. How do we do this at cash-flow break-even rather than touching the bank? That’s one guardrail that helps balance the two paradoxes.”
Quad-level cell (QLC) flash technology is leading to high-capacity flash devices suited to archival storage and content repositories where access is primarily read. Currently at 96 layers, QLC devices are different from the flash devices used in primary storage systems. The opportunity for this technology in archival storage and content repositories has led to the term “archival flash.”
Toshiba, Samsung and Intel will most likely be the first vendors to deliver archival flash devices, and other flash device vendors may follow with their own. Archival flash devices are projected to have capacities in the 100 TB range – far higher than almost any SSD on the market now. Nimbus Data already ships 100 TB SSDs, but those do not use QLC and may not match the much lower cost of the newer technology.
The large capacity is warranted given the primary use case of storing archival data. Some in the industry remember the rebuild-time concerns raised in primary storage as disk capacities increased. Those familiar with the protection against device failures used in object storage systems — the most likely systems to use archival flash — will understand why the circumstances are different. Object storage systems protect against device failures through information dispersal algorithms and erasure codes within a node. Node failures are handled with N+1 node protection using data distributed across those nodes. A site is protected with either replication or geographic dispersion, adding another level of protection to the immutable data stored in object storage systems.
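To make the protection math concrete, here is a minimal Python sketch of the overhead and fault tolerance of erasure-coded dispersal. The stripe widths match the illustrative defaults used in the model below, not any vendor’s actual configuration:

```python
# Minimal erasure-coding overhead sketch. Stripe widths are the
# illustrative defaults from the TCO model below, not vendor specifics.

def dispersal_profile(stripe_width: int, parity_segments: int) -> dict:
    """Usable fraction and fault tolerance for data dispersed across
    `stripe_width` devices with `parity_segments` redundancy segments."""
    data_segments = stripe_width - parity_segments
    return {
        "devices": stripe_width,
        "tolerated_device_failures": parity_segments,
        "usable_capacity_fraction": data_segments / stripe_width,
    }

print(dispersal_profile(12, 3))  # archival flash example: 75% usable
print(dispersal_profile(8, 2))   # standard device example: 75% usable
```

A failed high-capacity device is repaired by rebuilding only its segments from the surviving members of each stripe, which is part of why the disk-era rebuild concerns don’t map directly onto archival flash.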
Archival flash economics
Unfortunately, some still evaluate the economics of storing data through a one-dimensional, data-at-rest lens focused on acquisition costs. Data-at-rest economics effectively says all data is equal and low cost is the only value a user would achieve from acquiring a system. It ignores the many different attributes of different device types. The performance value already demonstrated by current (non-QLC) SSDs should have discredited data-at-rest economics. For archival flash devices, the main value for customers will be the longevity of QLC devices: with an expected lifespan of 12 to 15 years, longevity changes the value equation.
Data has gravity, meaning that data stored for a period of time tends to persist and incurs costs to move to new systems. Archival flash longevity will invalidate one-dimensional acquisition-cost economics and require a TCO evaluation that includes technology lifespan.
Longevity requires a storage system that can disaggregate the storage devices in an enclosure from the controller function. Disaggregation allows data to remain in place on the longer-lived devices while the controller is updated at the technology change rate of its processors, adapters and other components. Many vendors have already accomplished this and promote the capability through “Evergreen” programs, which allow the economics of the different technology change rates to be optimized.
To show the effect of the longevity of archival flash in an archival storage system, let’s look at an economic model of total cost of ownership over a 15-year span. The model includes a wide variety of parameters and what-if scenarios (a simplified sketch of such a model follows the list):
- Initial capacity
- Annual capacity growth rate
- Lifespan of archival flash device and lifespan of standard storage device
- Capacity of archival flash device and capacity of standard storage device
- Costs and requirements used in TCO calculations for administration, deployment, space, power, rack and cables
- Average yearly price declines for archival flash device and standard storage device
- Average yearly price decline for controller/node
- Number of devices attached to controller/node
- Hardware and software average discounts
- Controller/node costs
- Software costs – capex and software capacity-based license charges
- Initial per GB price of archival flash and standard storage devices
- Device-level protection – the number of devices data is distributed across and the number of data protection segments (for example, an archival flash default of 12 devices tolerating three device failures, in addition to the protection of data distributed across N nodes; for standard devices, eight devices tolerating two failures)
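Here is a heavily simplified sketch of how such a model accumulates costs, reduced to the two factors that dominate the results: device lifespan (replacement purchases) and device capacity (node count). All input values are hypothetical placeholders, not the figures behind the diagrams that follow.

```python
# Simplified TCO sketch comparing archival flash with standard devices.
# All inputs are illustrative placeholders, not the model's actual values.
import math

def device_tco(horizon_yrs, lifespan_yrs, device_tb, price_per_tb,
               initial_tb, growth_rate, devices_per_node, node_cost):
    """Sum device and node purchase costs over the horizon, replacing
    the installed base whenever the device lifespan expires."""
    total = 0.0
    capacity = initial_tb
    for year in range(horizon_yrs):
        if year == 0:
            buy_tb = capacity                 # initial deployment
        else:
            buy_tb = capacity * growth_rate   # annual growth purchase
            if year % lifespan_yrs == 0:
                buy_tb += capacity            # replace expired devices
        devices = math.ceil(buy_tb / device_tb)
        nodes = math.ceil(devices / devices_per_node)
        total += devices * device_tb * price_per_tb + nodes * node_cost
        capacity *= 1 + growth_rate
    return total

# Hypothetical inputs: archival flash at a 50% per-TB price premium but a
# 15-year lifespan and 100 TB devices, vs. 14 TB disks on a 5-year lifespan.
flash = device_tco(15, 15, 100, 45, 1000, 0.20, 12, 15_000)
disk = device_tco(15, 5, 14, 30, 1000, 0.20, 12, 15_000)
print(f"Archival flash 15-year TCO: ${flash:,.0f}")
print(f"Standard device 15-year TCO: ${disk:,.0f}")
```

The real model layers in price declines, discounts, data reduction, software licensing and operational costs, but even this crude version shows standard devices incurring replacement cycles over the 15 years that archival flash avoids.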
The first diagram shows the comparison using the capacity and longevity values shown on the right side. In this diagram, the acquisition cost of archival flash is set 50% higher than that of the standard devices, which in this case are large-capacity disks. For simplicity, price declines are set to zero in this example. An object storage system capable of retiring a node and automatically redistributing data is assumed, which avoids the migration costs that would be evident in other storage systems. Data reduction was enabled for both types of devices with equivalent effectiveness. Discounts were set at 30% off list price. Premium support services were not added to the costs. Technology transitions occur in the final year of the lifespan, accounting for overlap between the technologies.
It should be clear that the longevity of archival flash has a dramatic effect compared to the shorter lifespan of standard devices. This also illustrates the inadequacy of using a one-dimensional, data-at-rest cost of storage measure. Savings come from having to purchase fewer replacement devices with archival flash over the 15-year span because of the longer lifespan and from the higher capacity with archival flash requiring fewer nodes as the system scales. Increasing capacity of devices over time was not included in the projection because of the limited history with archival flash devices.
The following diagram shows where the economic savings are realized:
The chart shows significant cost differences, with the largest contribution coming from the avoidance of acquiring replacement devices.
Looking at the “what-ifs,” the following chart includes changes in some parameters. Specifically, the device prices were set to be equal and the lifespan of archival flash was set to 15 years.
Another “what-if” adds the price decline experienced over the last 5 years. Those results show another 8% improvement in costs for archival flash over standard devices.
Given that a large amount of data is retained “forever” in archive and content repository systems, the economics of those large capacity requirements will work in archival flash’s favor.
It will take time to overcome entrenched thinking, such as one-dimensional data-at-rest economics and the limited understanding of data protection for large-capacity devices in object storage systems. But that will only delay, not prevent, the rise of archival flash for storing and managing data.
The long-awaited convergence of the Dell EMC midrange storage into one platform will happen in 2019, according to Dell Technologies’ chief storage honcho.
Jeff Clarke, vice chairman of products and operations, said on Dell’s earnings call today that the engineering teams from the current Dell EMC midrange platforms are hard at work on the next-generation midrange array. Midrange storage has been a sore spot for Dell’s storage sales since the EMC merger closed in 2016. Dell EMC storage did post its second straight quarter of strong growth, however, increasing 13% year over year to $4.2 billion. Dell EMC now has consecutive quarters of double-digit year-over-year growth following a series of share losses during the merger transition.
“We’re pleased but not satisfied,” Clarke said of Dell EMC’s storage performance.
Clarke pointed to increased demand in high end storage, unstructured data and data protection products as storage highlights. Dell also reported triple-digit growth in hyper-converged infrastructure, but Clarke spent a lot of time talking about his midrange plans.
“I’m pleased with the progress we’ve made, and there’s more to do,” he said. “I’ve been very clear that we still have more midrange product than we’d like long-term. It takes a while to develop a new midrange product, which we are focused on and committed to have next year. In the interim, we’re going to increasingly make the portfolio we have more competitive. Behind the curtains, there’s more developers working together on new technologies and innovating as one single team than we had a year ago.”
He referred to recent upgrades to Dell EMC’s major midrange platforms, the Unity (legacy EMC) and SC (legacy Dell) arrays, as steps to make them more competitive. The vendor made the products’ UIs look more alike and now supports CloudIQ predictive analytics on both, foreshadowing the move to a single platform.
Dell executives did not take questions on the privately held company’s plans to sell shares on the public market. However, Dell filed an amended S-4 registration notice this week with the Securities and Exchange Commission (SEC) and is expected to set a date for an initial public offering within a few weeks.
Sometimes a storage startup makes a splash, fades into the background, and you forget about it – until the company snares a passel of new investors. That was the case with NAS controller specialist Infinite io, a newcomer that wants to shake up traditional file storage.
The Austin, Texas-based network virtualization vendor this week said it has raised $10.3 million to speed its advance into NAS and cloud, with a special focus on NetApp shops. The money was provided by a combination of institutional and private investors. Former Motorola CEO and early Cleversafe backer Chris Galvin led the round with his son, David Galvin, who runs San Francisco-based Three Fish Capital.
Cleversafe launched in 2005 and helped pioneer the concept of object storage. IBM acquired the company for $1.3 billion in 2015 and has adapted the technology as its IBM Cloud Object Storage platform.
Infinite io also obtained institutional funding from Chicago Ventures, Dougherty & Company, Equus Holding and PV Ventures, a venture firm run by X-IO Technologies CEO Bill Miller. Chicago Ventures is a repeat investor, having furnished Infinite io with $3.4 million in seed funding in 2015.
Another storage industry notable to invest is Dean Drako, the founder and former CEO of data protection specialist Barracuda Networks. Drako now runs cloud-based security vendor Eagle Eye Networks.
Infinite io CEO Mark Cree said the funding will be used to hire engineers, sales reps and operations staff. Six newcomers have been brought aboard this week, Cree said.
“A lot of it will be just getting more feet on the street in sales. The other part of it is that our device is so foreign to anything else out there in storage. We aren’t a file system or additional mount point. What we really are is a big flash meta-database,” Cree said.
Using a hardware gateway to offload data to the cloud is not a new idea. Avere Systems (now part of Microsoft), Nasuni Corp. and Panzura have offered NAS file gateways in the past. There is still demand for such hardware products, but enterprise preferences are changing. More and more data centers prefer to run cloud-based NAS software on industry-standard gear.
Scale-out NAS vendor Qumulo has added a cloud-spanning file fabric to its software-defined storage appliance, and a number of object storage vendors position their products as low-cost, low-latency, high-capacity archives.
Infinite io eschews a native file system while still tackling the growing demand for native scale-out storage. The 2U Infinite io Network Storage Controller (NSC) white box serves metadata from DRAM and includes 5 TB of flash to handle software code and large file systems. The product uses standard x86 hardware and off-the-shelf packaging.
Customers have the option to purchase it solely as a NAS accelerator or bundle it with Infinite io’s cloud tiering software for back-end object storage. Cree said his company plans to introduce a software-only version on prequalified commodity servers in 2019.
Cree said the product is used by organizations in genomics, government, media and entertainment.
The NSC appliance sits as a bump in the wire, encrypting each payload before it is sent to storage. The in-band device fronts a NAS filer, but application clients see the NSC as local storage. The transparent control plane serves as a proxy connecting servers and storage. Three nodes are required for failover, and a single cluster can scale to 12 nodes. The cluster connects to any back-end object storage for cloud tiering.
Infinite io NAS software inspects all data traffic in the NAS head. That helps it build a metadata library of commonly accessed files. The cloud software automatically shuttles inactive data to the cloud, based on user-defined policies.
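As a rough illustration of how policy-based tiering from a metadata library typically works (the field names and 90-day threshold here are hypothetical, not Infinite io’s actual schema or defaults):

```python
# Hypothetical sketch of policy-based cloud tiering driven by file metadata.
# Field names and thresholds are illustrative, not Infinite io's schema.
import time

INACTIVE_DAYS = 90  # user-defined policy: tier files idle for 90+ days

def select_for_tiering(metadata_library):
    """Yield paths of NAS-resident files idle longer than the threshold."""
    cutoff = time.time() - INACTIVE_DAYS * 86400
    for entry in metadata_library:
        if entry["tier"] == "nas" and entry["last_access"] < cutoff:
            yield entry["path"]

library = [
    {"path": "/vol1/genome.bam", "last_access": time.time() - 200 * 86400, "tier": "nas"},
    {"path": "/vol1/active.mov", "last_access": time.time() - 2 * 86400, "tier": "nas"},
]
print(list(select_for_tiering(library)))  # ['/vol1/genome.bam']
```

Because the metadata lives in the appliance’s DRAM and flash, a policy sweep like this can run without touching the back-end filer’s data path.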
Cree has been down this road before. He launched NAS caching startup StorSpeed in 2007, and it amassed $13 million before investors turned off the spigot. The company was renamed CacheIQ, and NetApp paid $90 million for the CacheIQ technology in 2012, folding it into its unified FAS arrays.
Cree started Infinite io with Jay Rolette and Dave Sommers. Rolette is vice president of engineering and formerly was chief technologist at HP TippingPoint, which was sold to Trend Micro when Hewlett-Packard split into two companies in 2015. Sommers is Infinite io’s vice president of operations and a former vice president of engineering at Adaptec, now part of Microsemi.
In “The State of IT Resilience,” analyst firm IDC estimates that as many as 50% of organizations could not survive a disaster event. A co-author of the report says the long-time threats of hardware failure and human error have been minimized by advanced technologies, only to be replaced by more sinister threats such as ransomware.
“Many organizations do not have properly protected and staged offsite data, have not tested the [disaster recovery] environment, or do not have automated DR processes as part of documentation and planning,” said the report, written by analysts Phil Goodwin and Andrew Smith. “The reasons for this are complex, but principal among them are typically cost, time and training.”
The report, commissioned by DR and resilience provider Zerto, defines IT resilience as “an organization’s ability to protect data in the event of any unplanned or planned disruption and, simultaneously, support data-oriented initiatives for business modernization and digital transformation.”
Forty-nine percent of organizations have suffered a data loss event in the last three years, said Goodwin, an IDC research director of storage systems and software. Ransomware and other types of malware are top drivers of data loss.
“The cardinal sin of data protection is not being able to recover data,” Goodwin said.
Goodwin said an air gap – creating a physical disconnect between primary and backup data – is critical in protection from ransomware. He noted that ransomware hitting backups is prevalent now. Replication, for example, doesn’t have an air gap. The cloud and tape, though, are two possible platforms for creating that disconnect.
Damages rise quickly when you’re down
For the report, IDC received responses from 500 senior-level IT and business managers. Ninety-three percent reported experiencing a tech-related business disruption in the past two years. In addition, 20% of respondents experienced major reputational damage and permanent loss of customers as a result of disruptions.
IDC has determined that the average cost of downtime is $250,000 per hour, including indirect costs of lost revenue and lost productivity.
“The survey results indicate that most respondents have not optimized their IT resilience strategy, evidenced by the high levels of IT and business-related disruptions,” the report said. “However, the majority of organizations surveyed will undertake a transformation, cloud or modernization project within the next two years.”
On the plus side, organizations are taking advantage of the cloud for data protection, Goodwin said. Disaster recovery as a service (DRaaS) has helped make DR more affordable and easier, especially for businesses that might have previously balked at the cost and complexity.
“DRaaS has fundamentally changed the economics of disaster recovery,” Goodwin said.
In addition, 85% of organizations plan to hire and/or train more staff, and 94% expect to spend more on IT resilience in the next 24 months.
However, if cost is still an issue, Goodwin recommended an organization figure out how to more effectively use people and technology resources.
“Take a look at process and improve that first,” Goodwin said.
After weeks of uncertainty, the Tintri bankruptcy saga has been resolved.
DataDirect Networks (DDN) said it will pay nearly $60 million for Tintri and plans to reveal the product roadmap by December. According to a published report, DDN’s bid topped an offer from Austin, Texas, hedge fund ESW Capital. U.S. Bankruptcy Judge Kevin Carey reportedly approved the terms of DDN’s asset purchase following an auction that spanned six rounds of bidding over two days.
DDN said its immediate plan is to restart service and support for existing Tintri customers as soon as this week. DDN and Tintri signed a letter of intent in July.
Considering Tintri’s financial woes, industry observers wondered if Tintri could be had for pennies on the dollar. Legal news website Law360, citing court documents, said DDN’s final offer of $53.8 million was nearly $14 million higher than its stalking horse bid of $40 million. According to Law360, Santa Clara, Calif.-based DDN reportedly will provide $35 million in cash and $15 million in guaranteed royalty payments to be spread over a three-year period, followed by nonguaranteed royalties in the ensuing years.
DDN subsequently disclosed its acquisition price at $60 million. DDN officials must have had an inkling a favorable decision was forthcoming. The vendor is attending VMworld this week, a trade show where it has never had much of a presence. Company representatives at Tintri’s trade booth cracked open champagne to celebrate the news.
Tintri had already bought a booth at VMworld before it went belly up. The signage on that booth read, “Tintri by DDN.” As late as Tuesday, however, reports circulated that a private equity fund was making a last-minute play for Tintri and the judge pushed back his decision one day.
Aside from the Tintri virtualization technology, DDN will inherit Tintri’s 1,500 customers, which include 21 Fortune 1000 firms. The customer list includes AMD, Avaya, Chevron, Comcast, NASA, Sony and Toyota.
“This is a great opportunity for us to expand into the enterprise space,” DDN president Paul Bloch said. He said Tintri’s “fantastic file system” provides a counterpoint to DDN’s Lustre-based parallel file system flagship storage for high-performance computing.
Bloch said DDN plans to boost Tintri headcount to about 200 employees within the next year. “We plan for Tintri to be in the black within a year,” Bloch said.
DDN said it plans “substantial investments” in Tintri’s roadmap in areas that include analytics, databases, NVMe flash and server virtualization.
DDN said Tintri will operate as a separate engineering, sales and support division. As the Tintri bankruptcy process unfolded, the vendor’s customers expressed fears it would interrupt maintenance and support. DDN said it would honor those existing agreements.
According to Tintri bankruptcy filings, the Mountain View, Calif., vendor owes about $8.8 million to creditors and has a cash balance of about $200,000. Tintri laid off about 75% of its workforce to cut costs, although that alone was not sufficient to outrun mounting losses.
The Tintri bankruptcy capped a stunning fall for the once-promising company, which carved a niche in the all-flash market by selling virtualization-optimized arrays to VMware shops. The company completed an ill-fated initial public offering in June 2017, a step it took when private funding dried up.
Tintri filed to raise $109 million, but its IPO was postponed due to lackluster interest. When the stock finally debuted, shares opened at $7, well off the $11 target. By June 2018, Tintri shares had dropped to penny-stock status.
CEO Thomas Barton resigned in June, less than two weeks before Tintri sought Chapter 11 protection. Barton took the reins in April but stepped down when he could not cobble together a funding deal.
Tintri claims to have between 1,000 and 5,000 creditors. Flextronics International, Tintri’s chief manufacturing partner, ceased shipping products this year, citing unpaid bills. According to court documents, Flextronics is owed $4.48 million, which accounts for about half of Tintri’s outstanding debt.
(Dave Raffo contributed to this story)
The record $94 million funding round that Cloudian trumpeted today suggests investors have plenty of interest in highly scalable object storage and distributed file systems.
Cloudian sells turnkey HyperStore object storage and HyperFile NAS appliances. Customers also can install Cloudian storage software on industry-standard servers.
The San Mateo, Calif.-based startup’s fifth round of financing exceeded all prior rounds combined and lifted its overall total to $173 million since Cloudian launched in late 2011.
The latest Series E round isn’t all new money. It includes a previously announced $25 million contribution from Digital Alpha, a private equity firm started by former Cisco Systems executives. Additional investors include Eight Roads Ventures, Goldman Sachs, INCJ, Japan Post Investment Corp. (JPIC), NTT DoCoMo Ventures and Wilson Sonsini (WS) Investments.
Cloudian plans to use about two-thirds of the money to expand its global sales, marketing, service and support efforts, and the rest will bolster its engineering work, according to Jon Toor, the company’s chief marketing officer.
Toor said Cloudian currently employs about 165 people in North America, EMEA and a recently opened office in Australia. He said the company plans to add local staff in regions such as Eastern Europe, Spain, and possibly Dubai.
Cloudian storage plans
Toor said the Cloudian storage engineering team’s areas of focus will include expanded Amazon S3 API functionality, additional partner qualifications and certifications, and further integration of the HyperStore and HyperFile technology. Cloudian in March completed the acquisition of Milan, Italy-based Infinity Storage, which previously worked with the vendor on HyperFile.
“There’s always things that customers are looking for. For instance, in media and entertainment, they’re always looking for functionality that makes information easier to find,” Toor said. “As we go into more and more use cases, we identify new and different opportunities to improve the product and make it more suitable for those use cases.”
Cloudian had significant momentum heading into the year. The startup finished 2017 with 3x revenue growth and its customer base growing past 200. CEO Michael Tso said the vast majority of sales went through value-added resellers by the end of the year, indicating to him that the product was “ready for a broader channel.”
Substantial Cloudian partnerships include an OEM deal with Lenovo and an EMEA-based reseller agreement with Hewlett Packard Enterprise. Others include Cisco, Microsoft Azure, Google Cloud Platform and Rubrik.
The customer base for Cloudian storage tends to come from industries that need to store large files and data sets, including health care, media and entertainment, and manufacturing. Toor said the company is also seeing growth in internet of things (IoT) use cases that require highly scalable, distributed storage systems to store data cost-effectively.
What does a vendor do when storage sales lag projections? If you’re Hewlett Packard Enterprise, you blame it on the vagaries of currency.
On Tuesday, the Palo Alto, Calif., hardware maker reported $887 million in storage revenue for the July quarter. In dollar terms, HPE storage revenue gained 1% from the same period a year ago, but it declined 2% after adjusting for currency.
The decline interrupts HPE’s string of strong storage earnings stemming mostly from the 2017 acquisition of Nimble Storage and, to a lesser extent, hyper-converged pioneer SimpliVity Corp.
HPE CEO Antonio Neri downplayed the slip and pointed to growth in HPE hyper-converged systems. He said that segment grew 130% year over year and reached an annual run rate of $1 billion. Neri said HPE Synergy composable infrastructure is deployed by more than 1,600 customers.
He said demand for HPE storage is rising in analytics, edge environments and high-performance computing. HPE plans to invest $4 billion in its Intelligent Edge segment, which grew revenue 10% to $785 million last quarter. Year to date, the Intelligent Edge-Aruba software-defined WAN product portfolio has gained 12% to $2.1 billion.
“At the same time, we saw 70% growth in big data storage. We expect improved organic growth (next quarter) as we drive increased sales productivity and as our latest storage offerings gain customer traction,” Neri said.
HPE storage revenue is not broken out by product category. The HPE storage flagship is the all-flash 3PAR StoreServ family of SAN arrays. The vendor also sells entry-level MSA Series SAN arrays, StoreVirtual systems and StoreEasy NAS, along with HPE ProLiant SAS enclosures for server-side storage.
HPE acquired Nimble hybrid and all-flash SAN arrays mainly for the Nimble InfoSight cloud-based predictive analytics, which it has gradually been implementing on 3PAR arrays.
Still, HPE overcame weak storage performance to notch overall gains. Top-line revenue of $7.76 billion was up 4% and beat the consensus of $7.68 billion. Non-GAAP earnings of 44 cents a share also beat estimates by 7 cents.
HPE hybrid IT accounted for 78% of overall revenue during the last quarter. Sales of HPE compute servers generated $3.5 billion, HPE storage accounted for $887 million and data center networking gear produced $59 million. HPE’s consolidated revenues for the nine months through July climbed 8% to $22.9 billion.
“From a portfolio mix perspective, we continue to drive good growth in our value offerings and our core volume business continues to grow better than expected,” HPE CFO Tim Stonesifer said.
This was Stonesifer’s final earnings call with HPE. The company said Stonesifer is stepping down at the end of October. Stonesifer has been CFO since the old Hewlett-Packard split into two companies in 2015. Neri said former Sprint Corp. CFO Tarek Robbiati has been hired and will take over for Stonesifer on Sept. 17.
LAS VEGAS — Unlike many of its data protection software rivals, Veeam Software has resisted putting its applications on a branded integrated appliance. Instead, it partners with large and small backup target vendors to compete with integrated appliances from the likes of Veritas, Dell EMC, Commvault, Cohesity and Rubrik.
Today, Veeam landed a partnership with Cisco to bundle Veeam High Availability on Cisco HyperFlex hyper-converged infrastructure (HCI) appliances. Cisco represents a large reseller channel for Veeam, although HyperFlex is far from the leader in the HCI market.
Disclosing the deal today at VMworld, Veeam CEO Peter McKay said the Veeam-HyperFlex partnership has been a year in the making. He called it “step one in a journey” and said the relationship will expand. It will begin with a single SKU that Cisco will start selling around October. Cisco will also handle support for the system.
Siva Sivakumar, senior director of data center solutions at Cisco, said the first HyperFlex appliance with Veeam will scale to around 200 TB of usable data, but more models will follow. Veeam software was already available to protect data stored on Cisco HyperFlex, but Sivakumar said the vendors tuned the software to optimize it for HyperFlex.
“Veeam already worked with HyperFlex,” he said. “Now Veeam works on HyperFlex.”
Having Cisco as a partner could help Veeam in its quest to move deeper into the enterprise. Veeam already partners with Cisco HCI rival Nutanix, and last month added the ability to protect data on the Nutanix AHV hypervisor. Veeam added a partnership with software-defined storage startup Hedvig in July. Veeam also has partnerships of varying degrees with NetApp, Hewlett Packard Enterprise and Dell EMC.
Cisco works with other data protection vendors, and has an OEM deal with Commvault to sell Commvault’s HyperScale software on Cisco UCS servers, rebranded as ScaleProtect with Cisco UCS. UCS is also the hardware platform for Cisco HyperFlex.
“One thing we do well is work with many partners, and we make it work well for both,” Sivakumar said. “Veeam goes after highly virtualized customers, while Commvault goes after both bare metal and virtualized customers and legacy migrations.”