Commvault’s retiring CEO Bob Hammer said the data protection vendor’s ongoing transition will produce long-term gains, although they will likely follow short-term pains.
Commvault unveiled a series of product and management changes in recent months, following a letter last April from unhappy investor Elliott Management that was highly critical of Commvault's management. The changes, under the banner of Commvault Advance, included Hammer's announced retirement pending the hiring of a replacement, two new board members and a simplification of its product line from 20 products to four.
The early results have had no positive impact on Commvault's financials. Its product revenue of $75.1 million last quarter was flat from a year ago. Services revenue increased 11%, and overall revenue of $176.2 million ticked up 6% but came in $1.74 million below Wall Street consensus. Commvault also lowered its forecast for the current quarter and the full year during its earnings report Tuesday. It now expects revenue of $179 million this quarter and $745 million to $750 million for the year. Financial analysts had forecast revenue of $182.5 million for the quarter and $770 million for the year.
Commvault lost $8.6 million last quarter, although $11.4 million of that came from restructuring and other one-time expenses.
“We still have much work to do to translate what we have done into better, more predictable revenue growth,” Hammer said on Commvault’s earnings call. “This transformation we’re going through is massive. When you’re in the middle and you’re moving hundreds of people around to different functions and different roles within the company, and you’re expanding your partnership engagement, you go through a period of disruption. But fundamentally, we feel we put a plan together and we’re executing that plan, and we feel really good about it.”
Hammer said Commvault Advance had three objectives: simplify the business, drive improved and consistent revenue growth, and improve profitability. He said the product changes were phase one of Commvault Advance, with cost reductions planned for phase two. Commvault reduced its workforce by 6% last quarter, finishing with 2,679 employees. That was a deeper cut than its 4% reduction goal.
Commvault hired a search firm in May to find and interview CEO candidates to replace Hammer, who is stepping down after leading the company for 20 years. Hammer said he is personally involved with the search, but added there is no timetable for hiring a replacement.
Elliott called for a complete review of Commvault’s management in its April letter. Elliott, which owns more than 10% of Commvault stock, outlined plans for changes including four new board members. Last week Commvault added Martha Bejar and Chuck Moran to the board, replacing long-time directors Robert Kurimsky and Armando Geday.
Hammer said Commvault's HyperScale scale-out storage appliance is a key to its revenue growth, although the appliance has not generated significant revenue since its late 2017 launch. He admitted competitors such as Veeam – which said its bookings grew 20% year-over-year last quarter – and relative newcomers Rubrik and Cohesity have put pressure on Commvault. But Hammer said he expects HyperScale and new partnerships with Cisco, IBM, Hewlett Packard Enterprise, Microsoft and AWS to improve Commvault's competitive position.
“All the chips are on the table now,” Hammer said. “Now, it’s just [a matter of] pure execution. We’re going to start taking the pole position relative to these new upstarts, technologically. This is all opportunity now, it’s just how high up that is, because the elements are in place now.”
Veeam Software has launched updates to its product line to increase platform support in recent weeks while continuing on a trajectory to becoming a $1 billion company by the end of the year.
Veeam said the second quarter of 2018 was its 40th in a row of double-digit bookings growth. Bookings grew 20% year-over-year and Veeam now claims 307,000 customers. Veeam executives point to alliance partners as a key to that growth.
Veeam product updates include support for the Nutanix Acropolis Hypervisor (AHV) and a new version of Backup for Microsoft Office 365.
New for Nutanix
Veeam Availability for Nutanix AHV provides the same backup and recovery capabilities that the vendor offers for users of the VMware vSphere and Microsoft Hyper-V hypervisors. Those features include multiple restore options, ranging from recovery of an entire VM to individual files and application items. Backups are taken from Nutanix VM-level snapshots.
In addition, Veeam designed the web-based user interface to look and feel like the management UI for the Nutanix infrastructure stack.
Edwin Yuen, senior analyst at Enterprise Strategy Group, said he's impressed with the "breadth and depth of Veeam's capabilities." If the product offers all the capabilities of the Hyper-V and vSphere backup, it will be a comprehensive and valuable protection option, he said.
“It was important for Veeam to have an AHV solution,” Yuen said. “It rounds it out for them.”
Veeam first announced its intention to support AHV in July 2017 at Nutanix’s .NEXT conference. The backup and recovery vendor originally said the support would be available by the end of last year.
Availability for Nutanix AHV has been in beta throughout 2018, said Rick Vanover, director of product strategy at Veeam. While there wasn't a problem with the beta, he said, the release a few weeks ago of the most recent Veeam updates to its Backup and Replication product "unlocked" the capability for the Nutanix support.
“We don’t want anyone to have a false sense of security with this product,” Vanover said.
Yuen said he doesn’t think there are concerns with the support coming out later than anticipated.
“It’s about getting it right,” Yuen said.
Several other data protection vendors have recently launched support for AHV, including Veritas, Commvault, Cohesity, Rubrik, Arcserve and Unitrends. HYCU sells software built specifically for Nutanix AHV backup.
General availability for Veeam’s Nutanix AHV product launched July 26.
Veeam updates Office 365 backup
The Veeam updates to its Backup for Microsoft Office 365 include data protection for OneDrive for Business and SharePoint.
That support was “a really important piece of the puzzle,” Vanover said.
The data protection for SharePoint includes backup for SharePoint Online and on-premises.
Version 2 of Backup for Microsoft Office 365 is available in one- to five-year annual subscriptions. Veeam recommends a three-year subscription, billed annually at $1.28 per user, per month.
More than 35,000 organizations have downloaded Backup for Microsoft Office 365, representing 4.1 million mailboxes, according to Veeam.
Closing in on $1 billion
Veeam did not give a revenue figure for last quarter, but said the 20% year-over-year growth keeps it on track to hit $1 billion in annual bookings by the end of 2018. The vendor pointed to the cloud as its fastest growing segment with 64% year-over-year growth.
Vanover said product expansion and partnerships will help Veeam reach its bookings goal.
“The whole company is aware of this goal of reaching $1 billion,” Vanover said. “We’re definitely not going to do it with one product. And we’re definitely not going to do it alone.”
Veeam also revealed plans to add 300 positions in a new research and development office in Prague.
Rubrik today added an application to its Polaris SaaS platform, with the goal of automating protection against ransomware attacks.
Polaris Radar is the second Polaris application from the converged secondary storage vendor, following Polaris GPS that launched in April. Polaris is a SaaS framework for managing secondary data. Polaris GPS provides policy management for data in multiple clouds and on-premises. Radar analyzes that data to detect threat behavior.
Chris Wahl, Rubrik's chief technologist, said the vendor will continue to roll out new apps for Polaris roughly every few months. "We will continue at an energetic pace," he said.
Polaris Radar monitors all data on-premises and in the cloud under management by the Rubrik Cloud Data Management platform, and generates alerts for suspicious behavior. It uses machine learning algorithms to analyze all metadata from backups and snapshots, checking for anomalies such as massive encryption or deletion of files. It then helps users identify and find impacted applications and files. After users find and select impacted data, Radar automates recovery by restoring to the most recent clean state.
Wahl said Rubrik already helped customers restore to a clean state after ransomware attacks, but before Radar, its Cloud Data Management platform could not detect threats retroactively.
“The onus was on the customer to identify and put their arms around the scope of it,” he said. “Radar will do all that for them. We then give them a push-button approach to recovering. You can wipe all this out, and then select these apps or the files or folders you want to restore. Pick the clean state you want to go to. Radar will then contact all data centers and handle all the orchestration of replacing encrypted files. We can restore to the most recent state with a few clicks.”
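Rubrik has not published Radar's detection internals, but the behavior described above – flagging snapshots whose file-change activity deviates sharply from history – can be sketched with a simple statistical baseline. This is a hypothetical illustration, not Rubrik's actual algorithm:

```python
# Hypothetical sketch of metadata-based ransomware detection. Rubrik has
# not published Radar's algorithms; this only illustrates flagging a
# backup snapshot whose file-change count is a statistical outlier.
from statistics import mean, stdev

def flag_anomalies(changed_counts, threshold=3.0):
    """Return indices of snapshots whose changed-file count sits more than
    `threshold` standard deviations above the preceding history."""
    anomalies = []
    for i, count in enumerate(changed_counts):
        history = changed_counts[:i]
        if len(history) < 5:  # need a baseline before judging
            continue
        mu, sigma = mean(history), stdev(history)
        sigma = sigma or 1.0  # avoid division by zero on flat history
        if (count - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Daily changed-file counts from backup metadata; the spike on the final
# day resembles mass encryption or deletion by ransomware.
daily_changes = [120, 135, 110, 140, 125, 130, 118, 50000]
print(flag_anomalies(daily_changes))  # → [7]
```

A production system would model many more signals (entropy of file contents, file-type changes, per-application baselines), but the core idea of comparing each snapshot against learned history is the same.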
Third-party developers can use Polaris APIs to integrate Radar into monitoring dashboards and other data protection and security products.
Ransomware was the top variety of malware found in 2017, according to the 2018 Data Breach Investigations Report. High-profile ransomware attacks such as the 2017 WannaCry virus and the 2018 Atlanta attack have raised awareness, leading to data protection vendors adding ransomware protection to their products.
New 96-layer 3D NAND flash memory is starting to roll out, storing more data per chip and potentially lowering per-bit storage costs compared with 64- and 32-layer technologies.
Toshiba Memory America and Western Digital are sampling their 96-layer 3D NAND bit column stacked (BiCS) flash – formerly known as bit cost scalable – that can store four bits per cell. The capacity of a single quad-level cell (QLC) flash chip is 1.33 terabits (Tb), and a stacked 16-die package can store 2.66 terabytes (TB) of data.
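As a quick sanity check on the arithmetic above, a 1.33-terabit QLC die stacked 16 dies to a package works out to the quoted 2.66 terabytes:

```python
# Verifying the capacity math quoted above: terabits per die, times dies
# per package, divided by 8 bits per byte, gives terabytes per package.
die_terabits = 1.33
dies_per_package = 16
package_terabytes = die_terabits * dies_per_package / 8
print(f"{package_terabytes:.2f} TB")  # → 2.66 TB
```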
Western Digital and Toshiba developed their proprietary 96-layer BiCS flash at their joint manufacturing facility in Yokkaichi, Japan. They expect to ramp up volume shipments of the 96-layer QLC 3D NAND chips later this year and eventually ship products designed for enterprise and consumer use cases requiring high-density data storage.
This week, Toshiba unveiled its first solid-state drive (SSD) based on the 96-layer, fourth-generation BiCS flash that can store three bits per cell, known as triple-level cell (TLC) flash. Toshiba’s new NVMe-based PCIe XG6 SSD primarily targets PCs, mobile computing, gaming applications and embedded systems but could also see limited use in data-center servers for boot, log and cache purposes.
The XG6 is a single-sided 22 mm by 80 mm M.2 SSD that offers capacity options up to 1 TB and is not hot swappable. Toshiba manufactures other SSDs designed to address a broad range of enterprise and data center use cases.
The XG6 has roughly 40% higher capacity per chip than its predecessor, the XG5, which uses 64-layer TLC 3D NAND BiCS flash. The XG6 also improves power efficiency with 1.2-volt I/O, offers higher interface speeds of 667 to 800 megatransfers per second, and delivers slightly faster read and write speeds, according to Doug Wong, a senior member of the technical staff at Toshiba Memory America.
“As die density increases over time, that will improve the likelihood that pricing will improve as well,” said Grant Van Patten, product line manager for client, OEM and data center SSDs at Toshiba. “But I think the biggest impact the 96 layer really has long term is the density. In a data center, you can have fewer racks that do just as much work. So you start to get into some of those TCO arguments.”
Toshiba is shipping initial samples of its XG6 SSD to select OEM customers and plans to showcase the new drive at next month’s Flash Memory Summit in Santa Clara, California. Toshiba also plans to show off a packaged prototype that uses the 96-layer QLC 3D NAND BiCS flash technology.
Alternative to HDDs for cold storage
Scott Nelson, senior vice president of Toshiba’s memory business unit, predicted the 96-layer QLC 3D NAND technology will have an especially significant impact in the area of cold storage, where its higher density and lower cost per GB could be a “game changer in the industry.” He said certain types of data do not require the higher performance that TLC flash can provide.
Data reads are about two to three times slower, and data writes, or programs, are about five times slower with QLC 3D NAND in comparison to TLC 3D NAND, according to Nelson.
“We’re going to see a migration from TLC to QLC. Now that’s not to say that QLC is going to replace TLC, because it will not. But there is a niche that QLC can fill,” Nelson said.
Nelson said 96-layer QLC 3D NAND could become a high-density alternative to cheaper hard disk drives (HDDs), providing faster access to cold storage.
The 96-layer QLC BiCS flash technology could account for 10% to 15% of Toshiba’s NAND chip shipments by 2019 or 2020, but 64-layer TLC 3D NAND will continue to dominate the market at that point, according to Nelson. He added that “long-tail demand” would also remain for lower density planar, or two-dimensional (2D), NAND flash. Nelson expects shipments of planar NAND based on floating gate technology to continue for the next three to five years, representing perhaps 10% of the market. Toshiba’s BiCS flash uses a charge trap memory cell.
NAND manufacturers moved from planar NAND to 3D NAND when they faced challenges shrinking planar cells further. The cost of planar technology drops as the die size shrinks, while the price of 3D NAND falls with the addition of layers that increase the density of the chip.
Wong said the performance of TLC 3D NAND BiCS flash has been similar or better than 15-nanometer devices using planar multi-level cell (MLC), or two bits per cell, technology. He noted that Toshiba did research years ago on planar QLC NAND flash, but it didn’t make sense from a timing standpoint. He said it made more economical sense to do QLC with the BiCS flash technology because the cell size isn’t shrinking as fast and the inter-cell interference effect is lower. The 96-layer TLC and QLC 3D NAND use a similar architecture, but QLC requires stronger error correction code (ECC), Wong said.
Toshiba plans to begin shipping samples of its 96-layer QLC 3D NAND flash chips to SSD and SSD controller vendors in September. Nelson said he also expects demand from hyperscalers and tier 1 data centers that build their own SSDs.
Western Digital was unavailable for further comment, but the company has said it is sampling the 96-layer QLC BiCS flash and expects volume shipments to start later this year, beginning with consumer products under the SanDisk brand. Western Digital eventually plans to use the QLC BiCS technology in a wide range of applications, from retail to enterprise SSDs.
Like the other major server vendors, Lenovo proclaims it is “all in” on hyper-convergence and software-defined storage. It just goes about it in a different way.
Unlike the Dell EMC VxRail, Cisco HyperFlex and Hewlett Packard SimpliVity and Synergy platforms, Lenovo does not develop its own hyper-converged or composable infrastructure software. It relies on partners to provide that on top of Lenovo ThinkAgile servers. In the hyper-converged space, Lenovo partners with VMware (ThinkAgile VX Series), Nutanix (ThinkAgile HX Series) and Microsoft (ThinkAgile SX for Microsoft Azure Stack).
This week Lenovo added a newer, smaller partner to the mix when it launched ThinkAgile CP Series powered by software from startup Cloudistics. Lenovo bills the ThinkAgile CP as a composable private cloud platform, although the underlying architecture is similar to that of hyper-converged. The difference, according to Lenovo and Cloudistics, is Cloudistics software was designed from the start for private clouds.
"We call it 'cloud in a box,'" said Shekar Mishra, Lenovo's director of product marketing for software-defined data center. "We have a robust hyper-converged portfolio through partnerships with Nutanix and VMware. Now we're looking more towards cloud models. We have a hybrid cloud platform with Microsoft. Now this [CP] brings the cloud model within the customer's firewall. It's everything they love about the Amazon public cloud model, but brought within their data center and within their control."
Does Lenovo need another platform for that? VMware and Nutanix will happily tell you they bring the public cloud model inside an organization’s data center, too.
“We are all in,” Mishra said when asked about any overlap.
So what does Cloudistics bring? The Lenovo ThinkAgile CP is managed by the Cloudistics Cloud Controller (Cloudistics brands it as the Ignite Cloud Controller). Cloud Controller is a SaaS application that orchestrates storage, compute (running the Red Hat Enterprise Linux operating system) and networking resources. Cloud Controller also includes the Cloudistics Application Marketplace, with templates for quick deployment of applications and operating systems.
Cloudistics software also includes backup and replication, and microsegmentation.
“We modeled this after the cloud from day one,” said Todd Frederick, a Cloudistics founder and its COO. “It’s like Amazon, when you log into VPC (virtual private cloud), you have your own virtual private network. We’ve done the same thing. You log in as a tenant, and you get your own virtual network. That’s how we carve up this infrastructure and deliver services.”
Lenovo is launching the ThinkAgile CP Series with two hardware models. The ThinkAgile CP4000 includes two to four 2U compute nodes and a storage block. The compute nodes use Intel Xeon processors and each node has either 128 GB or 256 GB DDR4 memory. The storage blocks include 4.8 TB to 28.8 TB of usable capacity.
The larger ThinkAgile CP6000 consists of one to 10 compute node enclosures with up to four nodes per enclosure, and from one to five storage blocks. Each storage block has 9.6 TB to 115.2 TB of usable capacity. The architecture lets customers scale compute and storage blocks independently in an enclosure.
List price starts at $180,000 for an entry-level enclosure. Lenovo, which handles all support, has begun customer trials in North America. Mishra said he expects the CP platform to be generally available by late August.
Unlike most of its large competitors, Veeam does not sell branded integrated appliances. Instead, it partners with as many hardware vendors as possible.
Hedvig is the latest Veeam partner, now that the vendors have teamed up to integrate Hedvig’s software-defined storage with Veeam’s data protection. Hedvig Distributed Storage Platform customers can use Hedvig as a backup target for Veeam Backup and Replication software.
The Hedvig and Veeam integration enables the vendors to go head-to-head with converged secondary storage startups Rubrik and Cohesity, at one-third of the price, said Angela Restani, Hedvig’s executive vice president of marketing.
“With Hedvig, [Veeam] can provide a complete solution,” Restani said.
Other competing integrated backup appliances include Veritas NetBackup appliances, the Dell EMC Integrated Data Protection Appliance and Commvault HyperScale.
Hedvig storage can be integrated into an existing Veeam workflow using any Linux- or Windows-based Veeam repository server.
The Hedvig cluster can be configured according to the customer's backup target needs. Hedvig provides storage capacity configurations of 24 TB, 48 TB and 96 TB, along with different disk types such as HDD and SSD. Customers can also deploy the software in public clouds such as Amazon Web Services, Microsoft Azure and Google Cloud Platform.
The Veeam integration enables the creation of scalable, pay-as-you-grow backup storage.
“Hedvig’s software-defined storage capabilities and enterprise focus make them an ideal addition to our [partner] program,” Ken Ringdahl, Veeam’s vice president of global alliance architecture, wrote in an email.
Veeam recently laid the groundwork for its new “hyper-availability” platform strategy that includes backup, aggregation, visibility of data, orchestration and automation. That platform, Ringdahl said, helps enterprises evolve data management from policy-based to behavior-based and ensure workloads are available across any application or cloud infrastructure.
“Enterprises are looking to software-defined solutions that have the ability to simplify data management not only in the private and public cloud but across public clouds,” Ringdahl wrote. “Companies like Hedvig are gaining traction in the unstructured data market and therefore an important alliance partner as enterprises continue along the path of digital transformation.”
The Hedvig Distributed Storage Platform allows businesses to create an elastic storage system built from commodity server hardware to support any application, hypervisor, container or cloud.
The vendors have certified Veeam Backup and Replication 9.5 and above, and Hedvig Distributed Storage Platform 3.0 and above to work together.
Veeam has partnerships ranging from reseller deals to product validations with storage vendors NetApp, Hewlett Packard Enterprise, Pure Storage, Dell EMC, Cisco and others.
This is Hedvig’s first partnership with Veeam, said Gaurav Yadav, Hedvig product manager. Hedvig is eyeing a deeper Veeam integration, for example in using Veeam’s recently released Universal Storage API. Pure Storage recently teamed up with Veeam on an integration initiated through the API.
Hedvig launched an integration with NetBackup last year, Yadav said.
Object storage startup Cloudian partnered with UK-based Storage Made Easy on file sync-and-share capabilities designed to target enterprises facing compliance requirements under the European Union’s General Data Protection Regulation (GDPR).
On-premises Cloudian storage sits at the back end, and Storage Made Easy (SME) supplies the collaboration software that lets users securely share data in Dropbox-like fashion. The SME software is compatible with the Windows, Mac, Linux, iOS and Android operating systems.
“It’s getting increasingly challenging to meet compliance requirements and use the cloud as a repository. So keeping data in the data center behind your firewall and making sharing possible in a controlled way becomes very appealing,” said Jon Toor, Cloudian’s chief marketing officer.
Toor noted that the EU’s GDPR and the newly enacted California Consumer Privacy Act require strict control of personally identifiable information, as well as the ability to delete data on request. The SME software can detect more than 60 types of personal data, including social security, passport and credit card numbers.
With the SME software, IT managers gain oversight of regulated personal information through functionality such as file event auditing, versioning and locking, configurable geographic boundaries and download limits, and policy-based data synchronization. They can restrict access permissions through the Lightweight Directory Access Protocol (LDAP) and Microsoft’s Active Directory and set time limits to disable password-protected links to shared information.
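Storage Made Easy has not published its detection rules, but pattern-based personal-data scanning of the kind described can be sketched with regular expressions plus a checksum to cut false positives. The labels and patterns below are illustrative assumptions, not SME's implementation:

```python
# Hypothetical sketch of pattern-based personal-data detection. Storage
# Made Easy's actual detection rules are not public; the labels and
# regexes here are illustrative only.
import re

PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to cut false positives on card-like digit runs."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def scan(text):
    """Return (label, match) pairs for each suspected piece of personal data."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            if label == "credit_card" and not luhn_valid(match.group()):
                continue  # digit run fails the card checksum; skip it
            hits.append((label, match.group()))
    return hits

print(scan("SSN 123-45-6789, card 4111 1111 1111 1111"))
```

Detecting 60-plus data types, as SME claims, would extend this with locale-aware patterns (passport numbers, national IDs) and context checks to keep false-positive rates manageable.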
Cloudian storage enables data replication
Cloudian’s HyperStore scale-out object storage software enables data replication to another HyperStore instance in a different location or to a secure cloud location for disaster recovery purposes. Cloudian has a HyperFile NAS controller to enable enterprise file services from its HyperStore object storage.
Toor said Cloudian became aware of Storage Made Easy through customers who were using the British company's software and suggested the two companies work together. The Cloudian storage and SME collaboration software have supported the Amazon S3 API from the beginning, so the two systems can connect seamlessly, Toor said.
Cloudian has a reseller agreement with SME enabling it to offer the SME software as part of a complete package with its storage software and support. Toor said Cloudian launched the agreement with SME in February, but new functionality to track personal information didn’t become available until June.
Customers would typically run the SME software on a virtual machine in a server co-located in the same vicinity as the Cloudian storage and point it at the HyperStore cluster, according to Toor. He said they also have the option to run the SME software on the same box as the Cloudian storage software, which is typically sold pre-configured with hardware.
Greg Schulz, a senior advisory analyst at Server and Storage IO, said he expects to see more vendors develop new functionality in-house or strike partnerships to get to market quicker to help customers address GDPR, the California Consumer Privacy Act and additional regulations that will likely emerge.
“Now we’re going to see more about how you go about implementing, managing, and figuring out what to do in actually rolling out and deploying GDPR as well as taking care of it on an ongoing basis,” Schulz said. “The vendors are coming out with tools. But likewise the customers are wondering how they go about detecting what they have and how they go about implementing this.”
Until last year, Dell EMC data protection involved separate offerings of Avamar backup software and Data Domain deduplication appliances. The vendor changed that strategy in May 2017 with the launch of the Integrated Data Protection Appliance (IDPA) family.
On Tuesday, Dell EMC brought out IDPA 4400, extending the disk-based product for backup and disaster recovery at satellite offices. The new hardware is aimed at companies with roughly 2,000 employees, 1,000 virtual machines and 200 TB of storage.
The 4400 is the fifth version in the IDPA product line, and it offers the lowest usable capacity at 24 TB. It carries a list price of about $80,000.
“The IDPA 4400 is for customers that want a dense platform that they can rack themselves and grow in place,” said Ruya Barrett, a vice president of marketing at Dell EMC data protection.
"They are definitely the [competitors] we're watching," Barrett said. "They have checked the box for simplicity, but they do so at the cost of performance and efficiency."
The 2U box is designed on Dell PowerEdge 14th generation storage servers. IDPA 4400 is available as a turnkey appliance preconfigured with 96 TB of disk capacity. Customers can start as low as 24 TB and scale capacity in 12-terabyte increments with additional license keys.
The software is based on the Avamar code and embeds a backup application, storage analytics and an application configuration manager. Dell EMC guarantees up to 55:1 data deduplication with Data Domain Boost on all IDPA hardware.
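Data Domain uses proprietary variable-length segmenting, but how repeated backup data produces large deduplication ratios can be illustrated with a minimal fixed-size-chunk sketch. This is a toy model, not the actual Data Domain algorithm:

```python
# Minimal fixed-size-chunk deduplication sketch; real systems such as
# Data Domain use variable-length chunking and other optimizations.
import hashlib

def dedupe(data, chunk_size=4096):
    """Store each unique chunk once; return (logical, physical) byte counts."""
    store = {}
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        store[hashlib.sha256(chunk).hexdigest()] = chunk
    physical = sum(len(c) for c in store.values())
    return len(data), physical

# Ten nightly "full backups" of an identical 1 MB dataset dedupe down to
# a single stored copy of each unique chunk.
logical, physical = dedupe((b"x" * 1_000_000) * 10)
print(f"ratio {logical // physical}:1")
```

The toy data here is pathologically redundant, so the resulting ratio far exceeds real-world figures like the 55:1 guarantee, but the mechanism – storing each unique chunk once and referencing it from every backup – is the same.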
The HTML5 management interface is integrated with VMware vSphere, Oracle RMAN and SQL Management Studio.
The vendor claims IDPA 4400 protects about 5 PB of usable storage. With optional Cloud Disaster Recovery software, an enterprise could natively tier up to 14 PB of deduplicated data from IDPA 4400 to an Amazon Web Services S3 target.
For long-term retention, IDPA supports AWS and Microsoft Azure, as well as Dell EMC-branded Elastic Cloud Storage and Virtustream.
Selling more Dell EMC data protection gear is part of the vendor's larger goal to grow its storage business. Dell EMC closed the April quarter with $4.1 billion in storage sales, up 10%, marking its first quarter of storage growth since the Dell-EMC merger closed in 2016.
Dell EMC data protection ranked third last quarter with nearly 13% of the worldwide market, according to IT research firm IDC. That placed it ahead of Commvault (9%), but behind market leaders Veritas (17%) and IBM (16%).
IDC ranked Dell first in sales of purpose-built backup appliances with $520 million, good for 9% growth. Dell EMC represented more than half of the $842 million quarterly total reported by IDC.
DataDirect Networks Inc. (DDN) is seeking a bargain from Tintri's fire sale.
The DDN storage portfolio is poised to add the storage assets of failing hybrid vendor Tintri, which filed for Chapter 11 protection this week. The two vendors on Tuesday said they have signed a nonbinding letter of intent, with the transaction to be administered by a U.S. bankruptcy court in Delaware.
Proposed financial terms were not disclosed. However, a Tintri-DDN storage transaction is far from guaranteed. DDN’s offer is subject to several contingencies, including a court-sanctioned bidding process and final court approval. Should a deal be finalized, Tintri stockholders are not expected to receive a return on their shares.
However, Tintri employees waiting for back wages could be in luck thanks to new interim financing from TriplePoint Capital, one of the vendor’s principal debt-holders.
According to a securities filing, Tintri is seeking court approval to obtain debtor-in-possession financing to advance a possible acquisition. TriplePoint Capital has offered to lend $5.5 million of working capital and allow Tintri to roll up $25 million in TriplePoint debt, which DDN (or other potential bidders) presumably would take on as a condition of any deal.
The TriplePoint debt would be subordinated to that of Silicon Valley Bank, which negotiated a $12.5 million credit facility with Tintri earlier this year. TriplePoint would require Tintri to establish a $1.9 million payroll reserve fund for back pay, benefits, withholdings and commissions owed to employees and contractors.
For Tintri, the bankruptcy filing marks the latest sad chapter in its slow demise. Perhaps handing its storage off to DDN can help burnish the technology, which has a strong installed base that includes about two dozen Fortune companies.
Tintri went public in a $60 million initial public offering on June 30, 2017. But investors were lukewarm on the shares from the start. After hitting a high of $7.75 – well off the company’s initial $11-per-share IPO target – the price has been in freefall ever since.
Nasdaq last month sent a delisting notice to Tintri after TNTR traded at less than $1 for more than 30 consecutive sessions. The equity remained in penny-stock territory on Tuesday, closing at 17 cents a share, with the DDN announcement pushing trading volume to 14.3 million shares.
Tintri’s struggles were mostly due to execution, said Eric Burgener, a research vice president at Framingham, Mass.-based IT analyst firm IDC.
“The Tintri array technology is specifically architected for running in a virtualized environment with a high IOPS profile. Tintri gave you that level of performance out of the box. Their challenges as a company were more along the go-to-market side, not their storage technology,” Burgener said.
DDN storage gear fills a niche in the high-performance computing market. Tintri will add real-time data analytics, virtualization and VM automation to DDN scale-out storage, making it particularly useful with the rapid emergence of AI-driven workloads across many industry sectors. DDN executives did not immediately respond to requests for additional comment.
Tintri products include the hybrid VMstore virtualization platform and flagship EC8000 Series all-flash cloud arrays. This marks the second DDN storage addition in less than two weeks, following its acquisition of the Lustre parallel file system from Intel in June.
It may not set off fireworks, but Hewlett Packard Enterprise heads into the July 4 break with a finalized portfolio of Cisco-based StoreFabric Fibre Channel switching gear for its all-flash arrays.
The latest HPE SAN switch hardware is based on a Cisco chipset and includes the eight-port StoreFabric C-Series SN6610C 32 Gbps Fibre Channel (FC) switch and the StoreFabric SN8500C FC director module. Cisco customers can upgrade existing director chassis by inserting the new blades.
“Upgrading from 8 Gbps to 16 Gbps gave a huge performance difference, and moving to 32-gig is another performance leapfrog. Cisco customers have been waiting for this product because they knew it would make it easier and less expensive to upgrade,” said Marty Lans, HPE’s general manager of storage connectivity.
Lans said the latest HPE SAN products are designed to serve as a department-level SAN, top-of-rack switch, or native end-of-row FC SAN extension. Customers can light up all 32 ports on the SN6610C by incrementally licensing eight additional ports at a time. Onboard telemetry analyzes storage traffic from all ports at line rate.
The StoreFabric 48-port director module allows customers to scale to 384 ports per chassis, or 1,152 FC ports in a single rack. The module's backplane, HPE claims, can provide 1.5 Tbps of throughput per slot. Aggregated duplex performance on the SN8500C is rated at up to 1,536 Gbps.
Smart SAN storage automation software enables HPE SAN arrays to handle physical orchestration and volume management on behalf of the networking switch.
HPE released SAN switches based on Cisco rival Brocade’s Gen 6 Fibre Channel (FC) technology two years ago. The Brocade product line was acquired by Broadcom in a deal worth nearly $6 billion in 2017.
The advent of powerful servers, coupled with the performance improvements native to NVMe flash, heightens the need for networking gear that can deliver faster storage. Experts expect initial NVMe over Fabrics deployments to use legacy FC or move to FC over Ethernet to accommodate real-time data analytics.
Although data in backup, hyper-converged and secondary systems accounts for about 80% of storage, Lans said the remaining 20% relies on primary storage arrays. He said HPE SAN customers want help to integrate NVMe flash with existing FC investments.
“We expect a lot of customers to go in that direction. They already have the infrastructure from their primary array. When they want to go NVMe natively, the storage infrastructure is ready to go,” Lans said.