Storage Soup


March 1, 2018  7:29 AM

Komprise NAS migration leaves no data behind

Carol Sliwa

Startup Komprise added NAS migration to its list of data management features in the new 2.7 release of its software.

Komprise Intelligent Data Management customers can now migrate from one NAS system to another NAS system, leaving no data behind on the source filer. The new feature targets customers who want to replace or decommission one NAS system in favor of a new NAS system, according to Krishna Subramanian, chief operating officer at Campbell, California-based Komprise.

Subramanian said a future product release would support the migration of data from NFS- or SMB/CIFS-based file storage to on-premises or public cloud-based object storage.

Earlier versions of the Komprise software enabled customers to move data from one NFS- or SMB/CIFS-based NAS system to another NAS system, or to Amazon S3- or Microsoft REST API-compliant object storage, for archival or disaster recovery (DR) purposes. In those scenarios, the source NAS system retained the hot data, and users set policies merely to shift the cold data to another NAS system or to on-premises or public cloud-based object storage, Subramanian said.

She said customer demand spurred the addition of the new NAS-to-NAS migration capabilities in the 2.7 product release.

“Every three to five years, storage is becoming end of life. So companies always have some NAS migration activity happening,” Subramanian said. “They may be retiring a filer, and they have to move that data somewhere else. Or, maybe they are moving it altogether from on premise to cloud. Or, they are moving from cloud back to on premise.”

The NAS migration feature removes the need for Komprise customers to buy third-party tools, hire migration services or undertake a laborious manual process to move data. Subramanian said Komprise can migrate data between different vendors’ NAS systems and works with any NAS array that supports the NFS or CIFS/SMB protocols.

How Komprise NAS migration works
Komprise’s new task-based NAS migration works similarly to long-running activities such as data archiving and data replication for DR. Customers set a policy, the Komprise software makes a copy of the data on the new NAS system and, after confirming the copy is accurate, deletes the data from the source NAS filer. The Komprise software retries in the event the storage is unavailable or the network is down, Subramanian said.
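
Komprise has not published implementation details, but the copy-verify-delete sequence with retries that the company describes maps to a familiar pattern. The Python sketch below is a hypothetical illustration of that pattern for a single file; the function names, checksum-based verification and backoff values are assumptions, not Komprise’s actual code.

```python
import hashlib
import shutil
import time
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def migrate_file(src: Path, dst: Path, retries: int = 5, backoff: float = 30.0) -> None:
    """Copy src to dst, verify the copy, then delete the source file.

    Retries with exponential backoff if the storage is unavailable or
    the network is down, mirroring the behavior described above.
    """
    for attempt in range(retries):
        try:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)                # copy data plus timestamps
            if checksum(src) != checksum(dst):    # confirm the copy is accurate
                raise OSError(f"checksum mismatch for {dst}")
            src.unlink()                          # leave no data behind on the source
            return
        except OSError:
            time.sleep(backoff * (2 ** attempt))  # wait, then retry the whole step
    raise RuntimeError(f"migration of {src} failed after {retries} attempts")
```

A real NAS-to-NAS migration would run this over NFS or SMB mounts at scale and would also have to carry over permissions, ACLs and extended attributes, which is where purpose-built tooling earns its keep.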

The Komprise Intelligent Data Management software runs as a virtual machine and updates automatically. The user interface has a new migration tab that customers can choose to turn on or off.

Komprise’s new NAS migration capabilities are available to customers at no additional cost. The company charges for its software based on the amount of data under management. List price is $150 per TB for a perpetual license, and annual subscriptions and volume discounts are available, Subramanian said.

Also this month, the Komprise Intelligent Data Management software was certified to operate with Spectra Logic’s BlackPearl Converged Storage System. BlackPearl is an object storage gateway for long-term storage, including tape-, disk- and cloud-based options.

Subramanian said Komprise has about 65 to 70 customers spanning industries such as genomics, health care, media and entertainment, gaming, finance and insurance.

February 28, 2018  11:38 AM

IT resiliency report: Downtime holding businesses up

Paul Crocetti

The intro to a recent IT resiliency report asks the question: “Is IT turbulence the new norm?”

In a year marred by ransomware attacks large and small, catastrophic natural disasters and other IT outages, that’s a valid question. But how are organizations handling the disruptions and IT resiliency planning?

Generally, not that well. “Overall, IT departments exceeded their maximum tolerance for downtime during a failure, a weakness that must be addressed,” said the Syncsort report, “The 2018 State of Resilience,” which surveyed 5,632 global IT professionals between January 2017 and July 2017 and came out last month.

“It continues to surprise us that organizations feel underprepared to deal with disasters,” said Terry Plath, vice president of global services at Syncsort, an enterprise software provider.

Only half of respondents met their recovery time objectives after a failure, according to the report. Eighty-five percent of professionals had no recovery plan or were less than 100% confident in their plan.

Among the IT professionals whose organizations suffered data loss, 28% said they lost a few hours of data in their most significant incident, and 31% said they lost a day or more.

The IT resiliency report found that many of the reasons for data loss came down to the lack of a quality backup. In order, the top primary reasons for data loss were an old backup copy, human error, lost data that was in memory and never backed up, a malfunction in the data protection platform, and a data protection platform that was not configured to back up the specific data.

Thankfully, when IT professionals look ahead to the next two years, 45% say high availability/disaster recovery (HA/DR) is a chief IT initiative, second only to security at 49% and just above cloud computing at 43%. And in choosing the top IT issues of concern in the coming year, 47% said business continuity/high availability, 46% said the ability to recover from disaster and 45% said security/privacy breaches.

Comprehensive IT resilience planning needed

Specifically regarding high availability or disaster recovery, businesses’ top initiatives in the coming year are tuning or reconfiguring the current HA/DR platform, expanding the current HA/DR platform to cover additional servers or data, adopting new technology to augment the current HA/DR, and incorporating cloud or hosting technology into the HA/DR strategy.

“Appropriate staffing, workforce training, better recovery planning and testing are needed to ‘bulletproof’ company systems,” the report said. “This is especially true, since a considerable majority of companies have HA/DR initiatives planned for the coming year.”

Surprisingly, when asked which technologies their organizations use for data protection and archiving, just under 50% said tape backup, the second most popular answer behind hardware and storage replication at slightly over half.

Plath recommends several steps for a solid IT resiliency strategy. It starts with defining and documenting the disaster recovery process. Organizations then need to have the right tools in place and make sure their recovery time objectives (RTOs) and recovery point objectives (RPOs) are acceptable. They must also ensure that IT staff or third-party support providers are trained and up to speed on the DR platform.

And then there’s resilience testing. Organizations should take the time to schedule HA/DR switch tests. Companies should also have run books in place that define what needs to happen in a disaster, but many don’t, Plath said.
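
To make the objectives concrete: a recovery plan can be treated as data and validated against the results of each switch test. The minimal Python sketch below illustrates the idea; the class names, fields and sample figures are hypothetical and are not drawn from the Syncsort report.

```python
from dataclasses import dataclass

@dataclass
class DrPlan:
    system: str
    rto_minutes: int    # recovery time objective: maximum tolerable downtime
    rpo_minutes: int    # recovery point objective: maximum tolerable data loss

@dataclass
class DrillResult:
    recovery_minutes: int    # measured time to restore service in the test
    data_loss_minutes: int   # measured age of the last recoverable copy

def meets_objectives(plan: DrPlan, drill: DrillResult) -> bool:
    """True only if a test recovery stayed within both objectives."""
    return (drill.recovery_minutes <= plan.rto_minutes
            and drill.data_loss_minutes <= plan.rpo_minutes)

# Example: a 4-hour RTO, 1-hour RPO plan against a 5-hour measured recovery.
plan = DrPlan("billing-db", rto_minutes=240, rpo_minutes=60)
drill = DrillResult(recovery_minutes=300, data_loss_minutes=45)
print(meets_objectives(plan, drill))  # False: the RTO was exceeded
```

Run after every drill, a check like this turns vague confidence in a plan into the kind of pass/fail answer that only half of the survey’s respondents could give.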

The IT resiliency study is a continuation of the annual survey conducted for the last 10 years by Vision Solutions, now part of Syncsort.


February 23, 2018  9:33 AM

More Nimble, HPE storage revenue jumps 24%

Dave Raffo

Hewlett Packard Enterprise storage sales rebounded last quarter, thanks to the addition of Nimble Storage and improved sales of the 3PAR SAN platform.

HPE storage revenue of $948 million grew 24% year over year last quarter. That’s not quite as impressive as it sounds, because most of that growth came from Nimble, which HPE had not yet acquired a year ago. Still, HPE storage increased 11% in an apples-to-apples comparison with a year ago – matching the growth of HPE’s overall revenue in the quarter.

That follows 5% growth in HPE storage revenue the previous quarter. In the first two quarters of last year, HPE storage revenue declined year over year.

HPE’s flagship 3PAR storage array line rebounded after the vendor brought the 3PAR and Nimble sales teams together into one group. 3PAR revenue increased a few percentage points last quarter. In the previous earnings report, outgoing CEO Meg Whitman called 3PAR sales “soft.”

Antonio Neri stepped up from HPE president to replace Whitman as CEO this month.

“This was a good quarter for us in storage,” Neri said Thursday night on the HPE earnings call. “Obviously we have the Nimble numbers, but our organic storage grew 11 percent. In previous calls we talked about execution challenges in our go-to-market, particularly in the United States. So this was all about focus and execution.”

Besides bringing in more revenue, the Nimble deal gave HPE storage the InfoSight predictive analytics technology. HPE now uses InfoSight in all of its storage arrays, bringing artificial intelligence into systems monitoring and management.

Neri called InfoSight “a game changer for our storage business” and “a major step on our journey to an autonomous data center.”

Flash was another driver for HPE storage growth. HPE execs said all-flash array revenue increased 16% year-over-year. While that’s less of a spike than other large vendors are gaining from flash – NetApp’s all-flash sales increased 50% last quarter – it shows flash is increasingly driving enterprise storage sales.

“We expect to see solid continued performance in storage,” Neri said.

HPE also said its hyper-converged infrastructure revenue grew more than 200% over last year. That’s also misleading, because HPE hyper-convergence is based on its acquisition of SimpliVity in January 2017. HPE did have a hyper-converged product before SimpliVity but had little success with it, so most of the revenue comes from a product HPE did not have a year ago. Neri said HPE is now placing a great deal of focus on the hyper-competitive hyper-convergence market.


February 22, 2018  6:40 PM

Pivotal move: Chad Sakac says Dell EMC in ‘rear-view window’

Garry Kranz

The question ‘Where is Chad Sakac?’ got answered this week: by Chad Sakac. The former president of the Dell EMC converged infrastructure division (formerly known as converged platforms) said he will join Dell’s Pivotal Software subsidiary in April to guide development of the Pivotal Container Service.

Sakac appeared to be the odd man out following a February reshuffling of the Dell Infrastructure Solutions Group (ISG). He informed readers of his blog this week that Dell EMC and the converged products are “in my rear-view window.” He did not specify his new job title, saying only he would be the “glue” among Dell EMC storage, Pivotal Cloud Foundry (PCF) and VMware product development for DevOps customers.

Pivotal Container Service (PKS) stems from VMware’s partnership with Google. The “K” refers to Pivotal’s reliance on the Google Kubernetes orchestration framework.

“My role is to be part of the team that is laser-focused on driving the success of Pivotal Container Service (and the rest of PCF) together with the teams at VMware and Dell EMC. The job is simple – helping make our aligned Dell Technologies developer platform come together…” Sakac said.

Dell Technologies in February realigned ISG, placing VCE converged infrastructure under the Servers unit, headed by Ashley Gorakhpurwalla. The converged and hyper-converged infrastructure products, including VxRack/VxRail and the Nutanix-based XC Series, shifted to the Dell EMC storage division, led by EMC veteran Jeff Boudreau.

Dell EMC’s management moves emerged against a backdrop of other potential changes under consideration. Dell Technologies CEO Michael Dell has acknowledged his company is exploring strategic options to unload debt related to the $60 billion-plus EMC merger in 2016.

Options reportedly being mulled include an initial public offering of the Dell EMC storage business, or allowing VMware to acquire Dell Technologies in a reverse merger. It’s also possible – some experts say it’s likely, in fact – that Dell will keep its present structure intact. The vendor has not publicly disclosed a timetable for its decision.

Reorganizing ISG appeared to leave Sakac without a clearly defined role. Matt Baker, a Dell EMC senior vice president of strategy and planning, said at the time that Sakac would “continue to be a valuable part of the organization” who was “working hand in glove with the executive team to carve out his next role.”

That role now appears to be defined. On his blog, Sakac said he supports the Dell Technologies reorganization as a way to make converged storage a more useful tool to support development of cloud-native applications.

“The CI business is finding a new gear. In 2017, we lost some ground – but maintained our No. 1 position. That trend changed in the second half – and the CI business and primary storage show that. CI is essential to the primary storage business of Dell EMC. There are thousands of customers who depend on Dell EMC CI to be the foundation of their data center(s) – and have moved up to consume infrastructure” rather than building blocks, he wrote.


February 15, 2018  3:52 PM

Intel P4510 U.2 SSD ships with 64-layer TLC 3D NAND

Carol Sliwa

Intel launched a new P4510 Series of U.2 solid-state drives (SSDs) equipped with its 64-layer triple-level cell (TLC) 3D NAND flash and enhanced firmware, enabling greater storage density and lower random read latency than the prior P4500 model.

The enterprise Intel P4510 Series is the first datacenter SSD to use Intel’s latest 64-layer 3D NAND designed for bulk storage. The company began shipping client SSDs with the denser 64-layer flash technology last year.

The Intel P4510 PCI Express (PCIe) SSDs began shipping at 1 TB and 2 TB capacities last year to cloud service providers (CSPs), and Intel is now making 4 TB and 8 TB drives available to CSPs and channel customers. Intel expects to ship the new P4510 SSDs to OEM partners later this year.

The prior P4500 model used Intel’s 32-layer TLC 3D NAND technology. The highest capacity available for the P4500 SSDs in the U.2 form factor was 4 TB. Intel also lists 8 TB P4500 options in the new “ruler” form factor, named for its long, thin shape.

The latest Intel P4510 PCIe SSDs use the 2.5-inch, 15-mm U.2 form factor. Intel plans to add lower-power 110-mm M.2 and 7-mm U.2 P4511 datacenter SSD options later this year.

Intel claimed the new P4510 SSD boosts sequential write bandwidth by up to 90% over the older P4500 model and improves quality of service by up to 10 times. Firmware enhancements allowing granular I/O prioritization help the Intel P4510 SSD cut mixed workload latency by up to two times and read workload latency by up to 10 times, according to Intel.

The Intel P4510 Series supports non-volatile memory express (NVMe) 1.2, with four PCIe 3.1 lanes, as well as the NVMe Management Interface (NVMe-MI) for operational insight.

Hot-pluggable U.2 SSDs
Intel enabled additional management and serviceability features in the P4510 SSDs through its Volume Management Device (VMD) and Virtual RAID on CPU (VROC). The VMD and VROC platform-connected technologies facilitate hot-pluggability for U.2 SSDs, LED management to help users locate failed drives, and simplified, accelerated RAID configuration, according to Intel.

Industry-wide, U.2 SSDs currently account for approximately 25% of the unit volume of NVMe PCIe SSDs, and they will become the majority in 2019, according to Greg Wong, founder and principal analyst at Forward Insights.

But Greg Matson, director of SSD strategic planning and product marketing at Intel, said Intel will push the Enterprise & Datacenter SSD Form Factor (EDSFF) 1U Long and 1U Short as the “preferred and optimized” form factors for 3D NAND SSD bulk storage this year. Intel introduced the early version of the EDSFF SSDs as the ruler form factor at last year’s Flash Memory Summit and then worked to get EDSFF standardized.

Matson said 1U Long SSDs allow massive capacity scaling, and the EDSFF SSDs are PCIe 4.0- and PCIe 5.0-ready and support up to 16 lanes.

“While we think U.2 is a pretty darn good form factor for storage, it’s not as good as EDSFF,” Matson said. “We can make much more thermally efficient platforms requiring about half the airflow, and about half the airflow also means less power than the U.2 form factors.”

Matson said three of the four major drive suppliers and several “tier 1” ODMs and OEMs, including Quanta and Supermicro, support EDSFF. He noted that Intel has already shipped ruler SSDs to IBM Cloud and Tencent, one of the largest cloud service providers in China.


February 15, 2018  10:48 AM

NetApp cloud-flash pivot brings cheers

Garry Kranz

NetApp attributed strong product revenue growth last quarter in part to two-a-days – it’s averaging two displacements of rivals’ all-flash SAN systems every day.

Success in NetApp cloud and flash sales fueled a strong quarter, as revenue increased 8% year over year to $1.52 billion. Product revenue of $920 million increased 17% over last year.

On NetApp’s earnings call Wednesday night, CEO George Kurian said the vendor made solid gains with its all-flash arrays, including NetApp FAS, EF and SolidFire storage. Annualized net revenue from all-flash arrays jumped nearly 50% to $2 billion.

Demand for all-flash FlexPod – sold with partner Cisco’s compute and networking – helped to boost converged infrastructure sales by 50%.

“Our growth in all-flash has helped us gain strength in both the SAN and converged infrastructure markets,” Kurian said. “Through our competitive take-out program, we average two SAN displacements per day. That enables us to gain share in the SAN market, and expand wallet share with our existing customers.”

Central to the NetApp cloud strategy is an integrated Data Fabric that allows customers to more easily manage data across local storage and multiple hybrid clouds. NetApp extended the Data Fabric last year to add NFS file storage as a service in the Microsoft Azure public cloud.

Products introduced last quarter are in preview with selected customers, including NetApp Cloud Volumes for Amazon Web Services (AWS) and support for VMware on AWS. Those offerings are expected to become generally available in 2018.

Other product rollouts included a software upgrade for NetApp AltaVault backup that added support for Microsoft Azure Archive Blob storage, plus the introduction of SnapMirror support for the SolidFire Element OS.

Kurian dodged questions on reports that rival Dell EMC is considering strategic options to pay down debt. Although he did not name Dell EMC directly, Kurian said NetApp’s pivot from hardware to cloud infrastructure makes it a more formidable competitor against “our largest competitor.”

Dell EMC has to figure out how to “rationalize a completely confusing product portfolio. They lack a competitive flash offering with a road map to the future, and they’ve got to get a cloud story,” he said.

NetApp took a $506 million loss, the result of an $856 million one-time charge on repatriated capital due to new tax laws.

Kurian said the new tax law will provide “added flexibility” as a result of corporate rates getting slashed to 22%. NetApp plans to bring back an additional $4 billion parked offshore during the next 12 months.

NetApp closed the quarter with $5.6 billion in cash and short-term investments. The higher domestic cash balance is being used to pay down $800 million in bonds it issued last year.

NetApp’s revenue guidance for the fourth quarter ranges between $1.525 billion and $1.675 billion, an 8% increase year over year at the midpoint.


February 8, 2018  5:29 PM

NVMe flash hopeful E8 Storage goes software-only route

Garry Kranz

All-flash array startup E8 Storage has expanded into reference architectures with the launch of a software-only version of its product.

The new product, E8 Storage Software, runs on rack servers from Dell EMC, Hewlett Packard Enterprise (HPE) and Lenovo. Customers can buy SKUs through channel partners, or purchase the software-defined flash directly from E8 Storage as an integrated nonvolatile memory express (NVMe) appliance.

E8 Storage Software is qualified with the Dell EMC PowerEdge R740xd and PowerEdge R640, Hewlett Packard Enterprise ProLiant DL360 and DL380 Gen10, and Lenovo ThinkSystem SR630. The validated hardware systems need to run Red Hat Enterprise Linux or CentOS 7.3 or higher.

The reference stack includes a server chassis with 32 GB to 64 GB of memory, Intel Skylake processors, 24 hot-swappable NVMe U.2 SSDs, two 128 GB M.2 boot SSDs in a RAID 1 configuration, and two 100 Gigabit Ethernet (GbE) Mellanox ConnectX-4 remote direct memory access (RDMA) network interface cards.

The E8 software-defined flash allows 96 clustered hosts to read and write to shared storage. The vendor’s flagship E8-D24 rack-scale system has dual controllers and scales to 140 TB of effective storage with high-capacity SSDs. E8 Storage also is previewing its E8-X24 block arrays with customers running the IBM Spectrum Scale parallel file system and Oracle Real Application Clusters environments.

The recent addition of host-level mirroring enables E8 Storage to market its software-only flash storage on the entry-level S10 appliance to enterprise customers. The S10 has a single controller and has been used mostly for proofs of concept.

“Customers in database environments want us to fit into their existing disaster recovery environment, rather than running an additional layer. Larger customers may want to do their own integration. We think smaller customers will still want a (turnkey) appliance,” said Julie Herd, E8 Storage director of technical marketing.

The NVMe standard is based on the Peripheral Component Interconnect Express (PCIe) protocol. It is designed to squeeze the most performance from software-defined flash storage. Rather than running traffic through network host bus adapters, an application uses PCIe to talk directly to storage.

NVMe flash storage is maturing to the point that some industry observers predict an uptick in mainstream adoption in 2018. The NVM Express organization, a consortium of industry partners, is expected to help advance NVMe over Fabrics technology this year.

E8 Storage and other NVMe flash startups are jockeying for position, while established vendors Hitachi Vantara, IBM and Pure Storage are bringing systems to market built with custom flash modules.

Herd said the reference architecture stacks will help E8 Storage take on more workloads. “We are a block system, so this will help us tackle file-based workloads, and it also broadens the market for our channel partners.”


February 7, 2018  4:03 PM

Nutanix leads first hyper-converged Magic Quadrant

Dave Raffo

Gartner now gives hyper-convergence a Magic Quadrant of its own, and places Nutanix as the leader in the upper right-hand corner.

Dell EMC, VMware and Hewlett Packard Enterprise also sit in the leaders’ quadrant with Nutanix in the hyper-converged Magic Quadrant Gartner released this week. So Dell Technologies is also sitting pretty as owner of Dell EMC and VMware, and an OEM partner of Nutanix.

Previously, Gartner included hyper-converged systems as part of its Magic Quadrant for Integrated Systems.

Gartner defines hyper-converged infrastructure (HCI) as “a category of scale-out software-integrated infrastructure that applies a modular approach to compute, network and storage on standard hardware, leveraging distributed, horizontal building blocks under unified management.” It adds in the hyper-converged Magic Quadrant report that HCI vendors can build their own appliances with off-the-shelf infrastructure, or sell HCI software in partnership with system vendors or resellers/integrators. They can also sell HCI software directly to end users, or as HCI-as-a-service on-premises or in a public cloud.

Gartner credits HCI pioneer Nutanix with overcoming the IT world’s reluctance to invest in a new vendor and with raising confidence that its product’s performance can scale as deployments grow. Gartner puts Nutanix’s customer base at more than 7,800.

Nutanix also won points for its robust management and self-service interface and choice of its KVM-based AHV hypervisor as an alternative to VMware ESXi. Nutanix’s negatives include lack of broad appeal to remote offices, departments, edge implementations and SMBs, according to Gartner.

Dell EMC is ranked high mainly due to its VxRail appliance and VxRack rack-scale system. Those run on Dell PowerEdge servers and integrate with VMware technology and Dell EMC products such as Avamar, Data Domain, RecoverPoint and CloudArray. But Gartner points out that VxRail uses a different software release cycle than VMware and often lags behind the latest version of VMware’s vSAN HCI software.

Gartner ranks VMware vSAN separately from the Dell EMC HCI products, because vSAN is also sold as standalone software and packaged with other vendors’ servers. Gartner said VMware sells the broadest set of hyper-converged systems, but customers must pay extra for features such as deduplication, compression and erasure coding, and vSAN customers have reported performance and stability issues.

HPE bolstered its hyper-converged platform with the February 2017 acquisition of SimpliVity, whose software is now sold on HPE ProLiant servers. Gartner says HPE has doubled its HCI customer count to around 2,000 since the acquisition. HPE scores points for SimpliVity’s data services, which include backup and disaster recovery capabilities, but is cited for a lack of flexibility because it supports only VMware hypervisors and all-flash configurations.

Gartner lists Cisco, Pivot3 and Huawei as challengers in the hyper-converged Magic Quadrant, Stratoscale and Microsoft as visionaries, and Scale Computing, DataCore and HTBase as niche players.


January 30, 2018  10:10 AM

Caringo Swarm sales jumped 40% in 2017

Sonia Lelii

Despite all the talk about object storage over the years, it has yet to push scale-out NAS out of the enterprise for storing files that take up hundreds of terabytes to petabytes of capacity. But early object storage vendor Caringo reports progress, with a 40% year-over-year sales increase in 2017 driven by existing customers heavily expanding their footprints, along with an intake of new customers.

Caringo also reported 50% growth in the fourth quarter compared to the previous year. Adrian Herrera, vice president of marketing at Caringo, said most of the increase is due to previous customers adding capacity to their Caringo Swarm object storage implementations.

“We are seeing customers start with hundreds of terabytes and expanding to multiple petabytes,” he said.

Herrera said Caringo Swarm scale-out hybrid cloud object storage is picking up steam with media and entertainment companies. Caringo has partnered with Reach Engine by Levels Beyond, Pixit Media and CatDV to serve that market. He said as companies become more familiar with the Amazon S3 API, they warm to object storage.

“It’s really because of the Amazon S3 API acceptance,” Herrera said. “There are some asset managers that we have been certified with and their adoption of the S3 API makes it easy for us to plug into their solutions.”

Herrera said Caringo Swarm sales are also growing in local and federal government and high performance computing markets.

Still, with its target customer storing such large data sets, the sales process remains lengthy for object storage deals.

“It’s not uncommon to see a deal take about a year,” Herrera said. “Object storage deals take a long time. But it is compressing. The sales process is accelerating because people are a lot more comfortable with object storage.”

Jon Toigo, CEO and managing partner at Toigo Partners International, credited Caringo with helping to lead the wave of object storage vendors embracing Amazon Web Services’ public cloud.

“Many object-level storage companies, citing client cloud storage preferences, started emulating Caringo by adding Amazon Web Services storage compatibility to their kit,” Toigo wrote in a December 2017 Storage Magazine article. “Some added file system-like interfaces to help users who understood the hierarchical file systems better than mystical object storage and access methods.”

Caringo Swarm also supports Microsoft Azure’s Blob storage for customers who want an alternative to AWS.


January 29, 2018  5:42 PM

Kaminario storage jettisons hardware for software-only cloud model

Garry Kranz

Kaminario is the latest vendor to deemphasize hardware in favor of a solely software-defined approach.

Under its new strategy, customers will buy Kaminario storage as a reference stack from global distributor Tech Data Corp., which will integrate the software on standard appliances. The companies inked a distribution deal in January.

Kaminario on Wednesday released the first product under the new software-only model: Kaminario Cloud Fabric, a usage-based utility aimed at midsized IT services providers. Cloud Fabric licenses customers to access composable infrastructure on demand with all-flash K2 storage arrays, the Kaminario flagship.

Prior to the deal with Tech Data, Kaminario relied on contract manufacturers to build K2 all-flash systems, but it owned the hardware inventory and the associated financial and forecasting risk.

Tech Data will capture the hardware revenue, while Kaminario storage revenue going forward will come solely from software licenses. Josh Epstein, Kaminario’s chief marketing officer, said Tech Data will handle asset tracking and inventory.

“All of our IP historically has been in software. We don’t do custom hardware engineering. To date, we have shipped our arrays as a fully integrated appliance, but we are moving to a software-only operational model. This move positions us for better operational and financing efficiency, and we’ll pass those efficiencies on to our customers,” Epstein said.

Amazon, Facebook, Google and other hyper-scale cloud data centers run on infrastructure built from white-box servers and proprietary stacks. Epstein said Kaminario Cloud Fabric gives midrange service providers a similar advantage.

Kaminario Cloud Fabric is an enterprise-wide software utility licensed per consumed storage, regardless of where users are located. The goal is to qualify Kaminario storage with general purpose servers. K2 all-flash arrays to date have exclusively used Supermicro enclosures and SAS SSDs.

Epstein said many of Kaminario’s larger storage customers want to buy IT as a service. He said cloud and SaaS customers account for roughly 85% of Kaminario’s business.

“They want to move to a hyper-scale environment, but there is a lot of risk associated with vendor lock-in, regulatory concerns and overall pricing. We want to help them mitigate that risk.”

The Cloud Fabric license incorporates the standard Kaminario storage software stack, including the VisionOS operating system and Kaminario Clarity analytics and monitoring. Integration of Kaminario Flex automation and orchestration will be added upon general availability later this year.

