Hyper-converged vendor Pivot3 said it more than tripled its revenue in the fourth quarter of 2016 from the previous year, and grew total revenue 84% in 2016 from 2015.
Pivot3’s growth came in part from two new products launched last year: the Pivot3 Edge Office for SMBs and the Pivot3 vSTAC SLX, which incorporates PCI Express flash technology acquired from NexGen Storage in January 2015. Pivot3 is also incrementally integrating quality-of-service (QoS) technology acquired from NexGen into the Pivot3 vSTAC OS, and CEO Ron Nash said new products combining QoS and flash will ship in 2017.
Quality of service may hold the key to the vendor’s future success. Pivot3’s Dynamic QoS includes a policy engine that prioritizes workloads and manages data placement and protection. Nash said coming Pivot3 vSTAC products will extend QoS to the cloud and legacy storage. The vendor will add another NexGen storage flash system in 2017.
“Instead of just having our policy engine work on hyper-converged infrastructure, we’re extending it out to the cloud and backward to legacy systems,” Nash said.
He said the engine will be able to look at characteristics, such as service-level agreements and required response times, and find the best storage tier for application data. For instance, if you’re looking for cheap storage and don’t need fast response times, the data can go to a cold storage public cloud service.
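The tier-matching logic Nash describes can be sketched as a simple policy lookup: find the tiers that satisfy the response-time SLA, then prefer the cheapest one when cost matters. This is a hypothetical sketch, not Pivot3’s actual engine; the tier names, latency thresholds and cost ranks are illustrative only.

```python
# Hypothetical SLA-driven data placement policy, in the spirit of the
# QoS engine described above. All tier definitions are illustrative.
TIERS = [
    # (name, worst-case latency in ms it can satisfy, relative cost rank)
    ("nvme-flash", 1, 3),
    ("hybrid-hci", 10, 2),
    ("legacy-san", 50, 1),
    ("cloud-cold", 5000, 0),
]

def place(required_latency_ms, cost_sensitive=False):
    """Return the tier that meets the response-time SLA;
    prefer the cheapest qualifying tier when cost matters."""
    qualifying = [t for t in TIERS if t[1] <= required_latency_ms]
    if not qualifying:
        return TIERS[0][0]  # fall back to the fastest tier
    if cost_sensitive:
        return min(qualifying, key=lambda t: t[2])[0]
    return min(qualifying, key=lambda t: t[1])[0]
```

An archive workload with a relaxed SLA lands on cold cloud storage, matching Nash’s example: `place(5000, cost_sensitive=True)` returns `"cloud-cold"`, while `place(1)` returns `"nvme-flash"`.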
Nash said the Pivot3 vSTAC SLX system, integrating NexGen technology, launched in 2016 and appealed to enterprises because of its flash performance. “It kind of surprised us where it was sold,” he said. “We thought it might [sell] in the midrange, but we found high-end people are using it, too. If you have an application that needs low latency, it gives you the low latency an NVMe-PCI-type storage device gives.”
‘Little guy’ moving up to compete with ‘the big boys’
Nash said Pivot3’s long-term goal is to follow Nutanix’s 2016 initial public offering with an IPO of its own to become a public company. He said Pivot3 takes a different approach than Nutanix, though. He is trying to avoid the heavy losses Nutanix has suffered, even as it racks up impressive revenue growth.
“We’re much more disciplined about financial performance,” he said. “We want to grow fast, but if the difference between 80% and 150% growth is [that] I have a massive loss at 150%, I’ll stick to 80%. We’re still losing money, but not hemorrhaging.”
He said Pivot3’s $55 million funding round in March 2015 should get the company to profitability, although it may raise another round ahead of an IPO.
For now, Pivot3 is the little guy in the land of hyper-convergence giants. Its competitors are newly public Nutanix, Dell EMC (including VMware), Hewlett Packard Enterprise (with its new SimpliVity acquisition) and Cisco.
“A year ago, I was competing with startup companies; now I’m competing with big companies. I count Nutanix as a big company now,” he said. “The big boys are moving in. But I think we can compete against them.”
Stratoscale acquired database-as-a-service provider Tesora Inc. in a move aimed at strengthening its AWS database services.
The hyper-converged software startup added Tesora’s open platform for NoSQL databases Monday. The same day, Stratoscale launched a homegrown relational database service in its Amazon-compatible cloud storage stack.
The Tesora technology will be phased in with future rollouts of Stratoscale Symphony hyper-converged software. Symphony supports block and object storage capabilities by turning x86 servers into hyper-converged compute clusters.
Symphony builds an Amazon-like private cloud behind a firewall to help enterprises reduce VMware licensing costs. Customers can connect their legacy storage to a Symphony cluster and have Stratoscale orchestrate the compute, networking, storage and virtualization resources.
Tesora Database as a Service (DBaaS) is an enterprise-hardened version of OpenStack Trove, the native database service for OpenStack-based clouds. The Tesora acquisition hastens the delivery of relational AWS database services, a feature already on Stratoscale’s roadmap.
“This is a big expansion for us. It allows us to engage with customers who have been waiting for this type of capacity,” Stratoscale CEO Ariel Maislos said. “Going into production with database as a service is very complex, so this will save us about a year of development time.”
Tesora DBaaS enables self-service management and provisioning for Cassandra, Couchbase, DataStax Enterprise, DB2 Express, MariaDB, MongoDB, MySQL, Percona, Redis and Oracle. Stratoscale said it will use the Tesora platform to augment its AWS-compatible database services, which include relational and NoSQL database offerings.
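Self-service provisioning in this style typically reduces to validating a request against the supported engines and emitting an instance spec for the backend (Trove, in Tesora’s case) to act on. A hedged sketch; the engine list comes from the article, everything else is hypothetical:

```python
# Database engines supported by the Tesora platform, per the article.
SUPPORTED_ENGINES = {
    "cassandra", "couchbase", "datastax", "db2-express", "mariadb",
    "mongodb", "mysql", "percona", "redis", "oracle",
}

def provision_request(engine, size_gb, replicas=1):
    """Validate a self-service DBaaS request and return the spec a
    provisioning backend would act on. Field names are illustrative."""
    if engine.lower() not in SUPPORTED_ENGINES:
        raise ValueError(f"unsupported engine: {engine}")
    return {"engine": engine.lower(), "size_gb": size_gb, "replicas": replicas}
```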
Maislos said enterprises want Stratoscale’s help with large-scale deployments that mirror AWS database services such as Amazon RDS.
“People want the ability to run their applications either in Amazon or inside their data center,” he said. “If you want to do a hybrid cloud, we give you an on-premises environment that is compatible with the [public] cloud. That’s the Holy Grail that customers love.”
Since its launch in 2015, Stratoscale has expanded its Amazon compatibility to include Simple Storage Service, DynamoDB, Elastic Block Store, ElastiCache in-memory cache, Redshift and Virtual Private Cloud services. Symphony 3.4 is currently shipping with support for Kubernetes as a service and a one-click Application Catalog for deploying more than 140 prepackaged applications.
Stratoscale did not disclose terms of the deal. Tesora’s Cambridge, Mass., office will be added to Stratoscale locations in Israel, New York City and Sunnyvale, Calif. Maislos said approximately 20 Tesora employees are now part of Stratoscale.
Want to manage your Tintri storage the same way you turn on lights, set an alarm, or choose music with an Amazon Echo or Dot device?
Tintri Inc. launched a proof of concept that lets customers ask Amazon’s Alexa voice service to initiate tasks such as provisioning virtual machines (VMs), taking snapshots and applying quality of service.
Tintri storage engineers used Amazon’s software development kit to map its application programming interfaces (APIs) to the Alexa service to enable Echo and Dot devices to recognize and execute storage commands.
Chuck Dubuque, vice president of product marketing at Tintri, said Tintri will use feedback on the proof of concept to gauge the potential to turn the “cool demo” into a product.
A video demonstration shows a Tintri employee instructing Amazon Alexa to ask the system to provision a VM. Alexa prompts the user with questions such as “What type of VM would you like to create?” and “How many VMs would you like to create?”
Dubuque admitted that using Amazon Echo beyond home use cases might be “a little further out” in the future. But the proof of concept gives Tintri experience using Amazon’s voice recognition and natural language capabilities and making its self-service APIs more responsive to human commands, he said.
“It’s relatively easy to write an admin interface for the storage administrator or the VM administrator who already thinks about things at the low level around VMs and vdisks and other things,” Dubuque said. “But for people who aren’t experts on the infrastructure and just want to say, ‘Hey Alexa, create a test environment,’ what does that mean? Underlying all of the assumptions, a test environment means this set of 100 virtual machines is created from this template, put into this network with these characteristics. That’s more complicated.”
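The gap Dubuque describes, turning “create a test environment” into concrete infrastructure, amounts to expanding a named profile into explicit provisioning parameters. A minimal sketch under that assumption; the profile names and contents below are entirely hypothetical:

```python
# Hypothetical environment profiles: each maps a human-level request
# to the concrete VM template, count and network it implies.
PROFILES = {
    "test": {"template": "qa-linux-small", "count": 100, "network": "qa-net"},
    "demo": {"template": "demo-win", "count": 5, "network": "dmz"},
}

def expand_request(phrase):
    """Resolve a spoken or typed request like 'create a test environment'
    into an explicit provisioning plan, or None if unrecognized."""
    for name, spec in PROFILES.items():
        if name in phrase.lower():
            return dict(spec, environment=name)
    return None
```

With these profiles, `expand_request("Hey Alexa, create a test environment")` yields a 100-VM plan from the QA template, which is exactly the kind of implicit assumption Dubuque says the interface has to encode.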
Chat option lets developers manage Tintri storage
At VMworld last August, Tintri demonstrated a text-based chat option to enable developers to collaborate with each other and manage Tintri storage. Dubuque said a customer in Japan used Tintri’s REST APIs to put together a simple robot to respond to system commands from within the Slack chat environment.
Developers in the virtual chat room could call out to a Tintribot — which appears as another “person” in the chat window — to tell the system to execute a command, such as firing up VMs to test new software.
“The Tintribot will acknowledge the command, maybe ask a few questions, and then once all of the VMs are up and running, reply back into the same chat window: ‘Hey, the 100 VMs are now ready. You can run your test,'” Dubuque said.
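The Tintribot exchange can be sketched as a small command parser that acknowledges the request and later posts a completion reply into the same channel. The command grammar and reply text are illustrative, not Tintri’s:

```python
import re

def handle_chat_command(message):
    """Parse a chat command such as 'fire up 100 VMs from qa-template'
    and return the bot's acknowledgment plus a completion reply.
    A real bot would call the storage REST API and reply asynchronously
    once provisioning actually finishes."""
    m = re.match(r"fire up (\d+) vms? from (\S+)", message.lower())
    if not m:
        return ["Sorry, I didn't understand that command."]
    count, template = int(m.group(1)), m.group(2)
    return [
        f"OK, creating {count} VMs from {template}...",
        f"Hey, the {count} VMs are now ready. You can run your test.",
    ]
```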
“It’s a way to enable self-service. In this case, it’s aligned to the developers who don’t really care about the details. They want to be able to do things on their own when they need to without having to hand it off to a third party [to launch VMs],” Dubuque said.
Because the Slack-based ChatOps interface requires a username and password to log in, the system can control what any given user is permitted to view, and it creates a time-stamped chat audit trail for troubleshooting problems later.
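The per-user permission check and time-stamped audit trail can be sketched in a few lines. The user names and permission sets here are hypothetical; a real ChatOps deployment would take identity from the chat system’s authenticated session:

```python
from datetime import datetime, timezone

# Hypothetical per-user permissions, keyed by chat username.
PERMISSIONS = {"alice": {"create_vm", "snapshot"}, "bob": {"snapshot"}}

audit_log = []

def run_command(user, action):
    """Check the user's permission, record a time-stamped audit entry,
    and report whether the action was allowed."""
    allowed = action in PERMISSIONS.get(user, set())
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "allowed": allowed,
    })
    return allowed
```

Every command, permitted or not, lands in the log, which is what lets administrators later see “all the humans who were involved” alongside what succeeded and what didn’t.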
“You get to see all the humans who were involved in the decision, as well as what the environment was telling you: what was successful and what wasn’t,” Dubuque said.
Tintri is still gathering customer feedback and has not determined a general availability date for the Slack-based ChatOps that performs operations from within a chat.
“It’s definitely something that has sparked a lot of interest,” Dubuque said.
Dubuque said the Tintri storage architecture is conducive to plug-in integration with systems such as Slack and Amazon Alexa. He said the company’s key differentiator is a web services model “where the fundamental unit that we manage is around the virtualized or containerized application.
“Our file system, our I/O scheduler, all of our storage operations are at that same level that virtualization and cloud management systems use to control compute and networking,” Dubuque said. “You can think of us as finishing the trinity of network, compute and storage being all aligned to the same abstraction level, which is a virtual machine, or a container, not around physical constructs.”
Dubuque said Tintri exposes REST APIs and interfaces with PowerShell and Python through a software development kit. He said other storage vendors use REST APIs that focus on storage constructs such as LUNs and volumes and don’t directly map to an individual application. That causes complexity when trying to automate the storage component of an application.
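The complexity Dubuque points to shows up in the shape of the automation code. In a LUN-centric model, a per-application snapshot first requires resolving which VMs live on which volume, and the snapshot sweeps up everything sharing that LUN; a VM-centric API takes the application object directly. A schematic comparison, with all names hypothetical:

```python
# LUN-centric model: VMs are hidden inside volumes, so automation
# must maintain and consult a VM-to-LUN mapping first.
VM_TO_LUN = {"app-vm-1": "lun-7", "app-vm-2": "lun-7", "db-vm-1": "lun-9"}

def snapshot_lun_centric(vm_name):
    lun = VM_TO_LUN[vm_name]     # extra resolution step
    return f"snapshot of {lun}"  # captures every VM sharing that LUN

# VM-centric model: the VM itself is the managed unit, so the
# snapshot request maps one-to-one onto the application.
def snapshot_vm_centric(vm_name):
    return f"snapshot of {vm_name}"
```

Note that in the LUN-centric sketch, snapshotting `app-vm-1` and `app-vm-2` produces the same volume-level snapshot, which is the mismatch between storage constructs and individual applications that Dubuque describes.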
The processes for keeping data safe when employees leave a company are fundamental data protection best practices: backup, archive and encryption. Yet barely half of the organizations that took part in a recent survey have a plan that ensures data can be recovered if an employee changes or deletes it on the way out the door.
Osterman Research conducted a survey of 187 IT and human resources professionals in October 2016 and released the findings this month. The results show organizations are generally not prepared for data theft protection issues with departing employees, said Osterman Research president Michael Osterman. The report found that fewer than three in five organizations have a backup and recovery platform that ensures data can be recovered if an employee maliciously changes or deletes data before giving notice to leave.
“They know what to do, they’re just not doing it very much,” Osterman said.
Osterman suggested organizations should develop a plan for this issue and nail down who’s in charge of ensuring sensitive data is protected.
The report found that 69% of the business organizations surveyed had suffered significant data or knowledge loss from employees who had left.
Those employees may not have taken data mischievously. According to the report, there are three reasons employees leave with corporate data: They do it inadvertently; they don’t feel that it’s wrong; or they do it with malicious intent.
Mobilizing mobile protection
The BYOD movement has complicated matters. For example, an employee can create content on a personal mobile device and store it in a personal Dropbox account or another cloud-based system. That content never hits the corporate server.
“Get control over that kind of content,” Osterman said. One way to do that is to replace personal devices with ones managed by IT.
Virtual desktops can also help with data theft protection. Because they store no data locally, virtual desktops make it more difficult for employees to misappropriate data, the report said.
The report stressed it is important that “every mobile device can be remotely wiped” so former employees don’t have access to the content.
“Enterprise-approved apps and any associated offline content can be remotely wiped, even if the device is personally owned,” the report said.
Backup, archive, encrypt
A proliferation of cloud applications also makes it harder to recover employee data.
“While IT has the ability to properly back up all of the systems to which it has access, a significant proportion of corporate content, when stored in personally managed repositories, is not under IT’s control,” the report said. “Office 365, as well as most cloud application providers, do not provide backup and recovery services in a holistic manner, and so organizations can have a false sense [of] security about the data that is managed by their end users.”
To maintain complete visibility of sensitive corporate data across all endpoints, cloud applications and other storage repositories, the report suggests deploying a content archiving system.
“Email archiving is the logical and best first place to start the process of content archiving, but other data types — such as files, social media content, text messages, web pages and other content — should also be considered for archiving as well,” the report said.
The data theft protection report advocates encrypting data in transit, at rest and in use, regardless of its location. In addition to manual encryption, Osterman Research recommends encryption that automatically scans content based on policy and then encrypts it appropriately.
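The “scan by policy, then encrypt” flow the report recommends can be sketched as a two-step pipeline: classify content against sensitive-data patterns, then encrypt only what matches. The patterns below and the toy XOR “cipher” are placeholders; a real deployment would use a vetted cipher such as AES with managed keys.

```python
import re

# Placeholder policy: treat anything resembling a U.S. SSN or the
# word 'confidential' as sensitive. Real policies would be richer.
SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    re.compile(r"confidential", re.I),
]

def needs_encryption(text):
    return any(p.search(text) for p in SENSITIVE)

def protect(text, key=0x5A):
    """Encrypt sensitive content; pass through the rest.
    XOR here is a stand-in for a real cipher, not a secure one."""
    if not needs_encryption(text):
        return ("plain", text)
    return ("encrypted", bytes(b ^ key for b in text.encode()))
```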
“Encryption alone can prevent much of the data loss that occurs when employees leave a company,” the report said.
Report ‘hit a nerve’
In a fairly decent economy, approximately one in four employees will leave a company in a year, Osterman said.
An Osterman Research client originally suggested the organization undertake the data theft protection report.
“I think it hit a nerve with a lot of companies,” Osterman said.
The sponsors of the report were Archive360, Druva, Intralinks, OpenText, Sonian, Spanning, SyncHR and VMware.
The fundamental goals of the report were to make people more aware of the issue and what can happen if they are not careful with data, and to raise awareness about backing up data and archiving, Osterman said.
Quantum’s scale-out storage business is growing like a weed, with the help of a large weed grower.
While Quantum’s DXi disk backup library increased the most of all its product lines last quarter, the StorNext scale-out storage business excites CEO Jon Gacek the most.
You have to love a market where deals include tape plus disk, and range from law enforcement to legal marijuana merchants. The Quantum video surveillance storage business last quarter included all of that.
Gacek said Quantum closed the most video surveillance deals ever last quarter. Running through a list of large wins, he included police departments in Canada and India, as well as smaller law enforcement agencies and “a company focused on [the] emerging cannabis growth market, where surveillance of the facility is critical.”
Each large Quantum video surveillance deal included StorNext software, disk plus tape, “reinforcing the power of our tiered storage value and expertise,” Gacek said on Quantum’s earnings call Wednesday.
Flash, dense drives push disk backup deals
Quantum’s disk-based backup revenue grew 17% year over year to $22.9 million. That success came after the release of the enterprise DXi6900-S deduplication library, which uses flash to speed up data ingest. The 6900-S also includes Seagate 8 TB self-encrypting hard disk drives. Gacek said DXi libraries won seven-figure deals at an Asian taxation department and a European insurance company, and other large deals at a U.S. telecom and a European supermarket chain.
“It’s a combination of flash that handles metadata and 8 terabyte drives that give it density. Nothing else looks like it,” Gacek said of the DXi6900-S.
Scale-out (StorNext) revenue increased 12% to $39.8 million, including Quantum video surveillance deals. Scale-out storage also covers media and entertainment, and technical workflows such as unstructured data archiving. Quantum claimed more than 100 new scale-out customers and a nearly 70% win rate in scale-out tiered storage for the quarter.
Total data protection revenue, including tape, increased 3% to $83.1 million despite a small drop in tape automation.
Overall, Quantum’s revenue of $133.4 million for the quarter increased $5.4 million over last year, and its $5 million profit follows a slight loss a year ago.
Gacek forecasted revenue of $120 million to $125 million this quarter, which is Quantum’s fiscal fourth quarter. “We are teed up for a good one next quarter, but I am not using superlatives like great and fantastic yet, which I think we have potential for,” he said.
Quantum video surveillance, archiving deals include tape
Part of Gacek’s reason for optimism is new uses for tape in cloud archiving.
“We believe there is a shift in tape usage to the archive scale-out, cloud-like architectures,” Gacek said. “And I think you are going to see tape media usage go up quite dramatically as an archive use case.”
More legalized marijuana might help as well.
Following a quarter of solid revenue growth to end 2016, Commvault Systems Inc. plans a string of product enhancements throughout 2017. The additions are designed to improve Commvault’s performance in the cloud, and with software-defined storage and business analytics.
Commvault Wednesday reported $167.8 million in revenue last quarter, a 7% increase from last year. Software revenue of $77.3 million increased 8% year over year, while service revenues of $88.5 million increased 5%. Commvault broke even for the quarter following two straight quarters of losses.
During the earnings call Wednesday, CEO Bob Hammer laid out plans for a Commvault products rollout that will culminate in the Commvault GO 2017 user conference in November.
Hammer said the company plans to add capabilities for business analytics, search and business process automation as part of its strategy to become a full-scale data management player for on-premises and cloud deployments.
“Next month, we will further enhance our offerings with new solutions with industry-leading web-based UIs and enhanced automation to make it easy for customers to extend data services across the enterprise [with] Commvault solutions,” Hammer said of the Commvault products roadmap. “[We will deliver] some of the key enhancements tied to the journey to the cloud and converged data management.”
The enhancements include new data and application migration capabilities for Oracle applications and the Oracle cloud, big data, fast data and SAP HANA. Commvault already supports Hadoop, Greenplum and IBM’s General Parallel File System.
Products for the AWS cloud
Commvault will also add tools for migrating and cloning data resources to the cloud. These include automated orchestration of compute and storage services for disaster recovery, quality assurance, development and testing, optimizing cloud protection, and recovery offerings inside and across clouds to secure data against ransomware risks.
Earlier this week, Commvault added optimized cloud reference architectures for Amazon Web Services (AWS) that will make it easier for customers to implement comprehensive data protection and management in the AWS cloud.
Commvault customers will have the ability to direct data storage to specific AWS services — such as Amazon Simple Storage Service (Amazon S3), Amazon S3 Standard-Infrequent Access and Amazon Glacier for cold storage.
Hammer said the amount of data stored using the Commvault software within public environments increased by 250% during 2016.
“When you look at our internal numbers, in both cases, we’ve had strong pull from both AWS and Microsoft Azure,” Hammer said. “The pull from AWS has been stronger, so there’s a higher percentage of customers’ data in AWS, but I will also say that we are gaining a lot of momentum and traction with Microsoft and Azure.”
Hammer said Commvault continues to make progress on its software-defined data service offerings that are in early release.
“More and more of our customers are replacing or planning to replace their current IT infrastructure, with low-cost, flexible, scalable infrastructures, similar to those found in the public cloud,” he said.
“Our teams have been hard at work to embed those cloud-like capabilities directly into the Commvault data platform, so we can ensure the delivery of a new class of active, copy management and direct data usage services across an infrastructure built with low-cost, scale-out hardware,” Hammer said.
Other upgrades to Commvault products include new and enhanced enterprise search, file sync-and-share collaboration, cloud-based email and endpoint protection in mid-2017.
Growth dependent on new products
Commvault has been working to dig itself out of a sales slump that began in 2014. Hammer said the company still faces some critical challenges, and continued growth depends on its ability to win more large deals. A lot of its success will turn on releases of new Commvault products.
“Our ability to achieve our growth objectives is dependent on a steady flow of $500,000 and $1-million-plus deals,” he said. “These deals have quarterly revenue and earnings risk due to their complexity and timing. Even with large funnels, large deal closure rates may remain lumpy. In order to achieve our earnings objectives, we need to prudently control expenses in the near-term without jeopardizing our ability to achieve our software growth objectives for our critical technology innovation objectives.”
Commvault added 600 new customers during the quarter, bringing its total customer base to 240,000. Revenue from enterprise deals, defined as sales of more than $100,000 in software, represented 57% of total software revenue, and the number of enterprise deals increased 22% year over year.
While much of the storage market is stagnant or down, data protection vendor Veeam Software said it grew revenue 28% in 2016 by expanding its business into enterprises and the cloud.
Veeam, a privately held company, this week reported its financial results for 2016. It claimed $607.4 million in bookings in 2016, which included new license sales and maintenance revenue, compared to $474 million in 2015.
Doug Hazelman, Veeam Software’s vice president of product strategy and chief evangelist, said the bulk of the growth came from its flagship Veeam Availability Suite. The suite handles backup, restore and replication through Veeam Backup & Replication, along with monitoring, reporting and capacity planning through Veeam ONE, for VMware vSphere and Microsoft Hyper-V deployments.
The Veeam Cloud and Service Provider (VCSP) program, which offers disaster recovery as a service (DRaaS) and backup as a service (BaaS), also contributed to the revenue growth, Hazelman said.
VCSP generated 79% year-over-year growth in 2016 as Veeam pushed upstream into the enterprise, and license bookings from enterprise-level customers grew 57% annually.
Veeam reported the VCSP program expanded to more than 14,300 service and cloud providers. The vendor claims 230,000 customers worldwide, and its Veeam Availability Suite protects up to 13.3 million virtual machines, with 1 million virtual machines using the VCSP management product. The company added 50,000 paying customers last year.
“They are not all [enterprise] customers,” Hazelman said. “It’s [still] a lot of SMB commercial accounts, [but] we added 761 enterprise customers in 2016.”
Hazelman said the cloud portion of Veeam’s business helped close many deals. Veeam has four business segments — SMB, commercial accounts, enterprise-level accounts and the cloud.
“The VCSP product is the fastest growing,” Hazelman said. “It’s one of the fastest growing segments. It’s not the biggest in revenue but it’s the fastest growing.”
Last year, Veeam added a fully functional physical server backup product. Veeam started as a virtual machine backup specialist but moved into physical backup at customer request as it expanded into enterprise accounts.
“The physical server did help a lot on closing deals but it didn’t add a lot to the total year number,” he said.
In-memory storage startup Alluxio has struck a partnership with Dell EMC.
The news marks Alluxio’s first formal alliance with a North American storage vendor. In September, Alluxio integrated its software with Chinese vendor Huawei Technologies’ FusionStorage elastic block storage.
The San Mateo-based vendor’s Alluxio Enterprise Edition software will be available on Dell EMC Elastic Cloud Storage (ECS) appliances. ECS is the successor to EMC Atmos object storage.
The exabyte-scale ECS arrays are built with commodity servers and EMC ViPR virtualized storage. Since acquiring EMC, Dell has gradually started moving EMC software-defined storage products to its PowerEdge server line.
The ECS private cloud uses active-active architecture to support Hadoop Distributed File System data analytics. Dell EMC sells ECS on turnkey appliances and also as a managed service.
Alluxio Enterprise Edition is a commercial version of the startup’s open source-based virtual distributed in-memory storage software. It allows applications to share data at memory speed across disparate storage systems.
The chief attributes are high availability and high performance. Alluxio allows data to persist in host memory to speed real-time data analytics.
Alluxio software is designed to accelerate Apache Spark and other workloads that process data in memory. Storage from disparate data stores is presented in a global namespace.
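The global-namespace idea, one path space spanning disparate backing stores, can be illustrated with a mount-table lookup: each virtual prefix maps to a backing store, and a path resolves to the store behind its longest matching mount. The mount points and store names are hypothetical, not Alluxio’s actual configuration syntax:

```python
# Hypothetical mount table: virtual namespace prefix -> backing store.
MOUNTS = {
    "/data/warm": "ecs-object-store",
    "/data/archive": "s3-bucket",
    "/data/hot": "memory-tier",
}

def resolve(path):
    """Map a virtual path to (backing store, path within that store),
    choosing the longest matching mount prefix."""
    best = max((m for m in MOUNTS if path.startswith(m + "/")),
               key=len, default=None)
    if best is None:
        raise KeyError(f"no mount covers {path}")
    return MOUNTS[best], path[len(best):]
```

An application reads `/data/hot/table1` and `/data/archive/2016/log.gz` through one namespace, while the data actually lives in different systems, which is the sharing model described above.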
EMC is not expected to bundle Alluxio in its software stack. Alluxio CEO Haoyuan Li said the partnership allows EMC to better recommend the in-memory storage to ECS customers that need high performance.
“The benefit we bring is allowing EMC ECS customers to pull data from other storage systems. Previously, you had to move the data manually into the new compute cluster. We automate data movement,” Li said. “We also accelerate end-to-end performance of your applications with our memory-centric architecture, which manages the compute-side storage.”
Partnering with EMC is a feather in the vendor’s cap, although perhaps not as noteworthy as if Alluxio were qualified to run on Dell EMC Isilon scale-out NAS. Big data jobs tend to use HDFS as the underlying substrate, not EMC ECS.
“ECS as a storage platform does not have a huge share of the market at this point, so this partnership won’t have a material impact on Alluxio’s top line. But it’s an interesting partnership and definitely a win for them that could lead to other partnerships with Dell EMC,” said Arun Chandrasekaran, a research vice president at Gartner Inc.
Cloud NAS vendor Panzura raised $32 million in Series E equity funding this week to expand its product and give organizations an alternative to what CEO Patrick Harr calls a “dying on-premises model.”
Harr said Panzura will add support for block, Internet of Things and Hadoop interfaces for data analytics to go with its original NAS protocols. He also plans expansion outside the U.S., into the U.K. and Europe.
“We are focusing on scaling our business in two areas,” Harr said. “One is on the channel side, and the second is the continued expansion of our product portfolio. We are adding [additional] protocols to consolidate what we view [as] the dying on-premises model.”
Harr said the on-premises-only storage model will collapse, and he positions Panzura as a cloud-first model for building a hybrid cloud. Since he became Panzura’s CEO in May 2016, the vendor has expanded its hybrid cloud storage controllers and added archiving capabilities. He said he has also hired 18 engineers during his eight months at Panzura.
“We are very much in growth mode,” he said.
Harr said in 2016 Panzura added 100 new enterprise-level customers and expanded its partnerships to include Amazon Web Services, Google, IBM and Microsoft Azure. It also added 26 petabytes of customer enterprise storage.
Panzura’s new Freedom Archive software, which moves infrequently accessed data to public clouds or low-cost on-premises storage, could bring the vendor into new markets. Target archive markets include healthcare, video surveillance, oil and gas seismic exploration, and media and entertainment. Freedom Archive is a separate application from Panzura’s flagship Cloud NAS platform, which caches frequently used primary data on-site and moves the rest to the cloud.
Last summer, Panzura launched a new series of cloud storage controllers with more capacity and the ability to expand to handle multiple workloads. The new 5000 hybrid cloud storage controllers replace Panzura’s previous 4000 series.
The Series E round brings Panzura’s total funding to around $90 million. The investment was led by Matrix Partners and joined by Meritech Capital Partners, Opus Capital, Chevron, Western Digital and an undisclosed strategic investor.
All-flash array pioneer Kaminario kicked off 2017 with a cash infusion of $75 million to accelerate global expansion and fuel product support for non-volatile memory express (NVMe) technologies.
Kaminario’s fifth funding round increased its overall total to $218 million since the company launched in 2008. Waterwood Group, a private equity firm based in Hong Kong, led the latest financing effort. Additional investors included Sequoia, Pitango, Lazarus, Silicon Valley Bank and Globespan Capital Partners. Kaminario’s most recent previous funding came in January 2015.
Founder and CEO Dani Golan said Kaminario has more than doubled revenue in each of the last four years. Global expansion and go-to-market efforts will focus on eastern and western Europe, Asia Pacific and the Middle East.
Kaminario will concentrate on incorporating NVMe technologies into the company’s K2 all-flash array. Kaminario currently uses SATA-based 3D TLC NAND flash drives. NVMe-based PCI Express (PCIe) solid-state drives (SSDs) can lower latency and boost performance, and NVMe over Fabrics (NVMe-oF) can extend the benefits across a network.
Golan said Kaminario does not support NVMe SSDs yet because “the price is too high.” He added that the NVMe-oF technology “is not mature enough to run in mission-critical and business-critical environments.”
A handful of new companies are starting to ship products with NVMe drives, but Golan said Kaminario’s NVMe support will probably wait until 2018.
“The ecosystem is not there yet,” he said.
Golan said startups that currently support NVMe use drives directly attached to servers. But, with a mature array platform on the market, Kaminario needs “to drive a full storage software stack over NVMe Fabrics,” he said.
“The big gain is [going to be with] NVMe over Fabrics, because NVMe drives are just media. That’s not interesting. The interesting part is NVMe over Fabrics and NVMe shelves,” Golan said.
Kaminario’s architecture allows customers to add controllers or shelves in any combination, scaling compute separately from storage, Golan said.