Storage Soup


December 8, 2016  5:42 PM

Primary Data contributes to NFS 4.2

Dave Raffo
Primary Data, Software-defined storage

Software-defined storage vendor Primary Data’s parallel NFS (pNFS) contributions have made it into the NFS 4.2 standard, which could help the startup make inroads with scale-out storage customers.

Primary Data’s contributions to NFS 4.2 include enhancements to the pNFS Flex Files layout that allow clients to report statistics on how data is used and how the underlying storage performs.
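
A rough sense of what those client-reported statistics look like, sketched in Python. This is illustrative and simplified, not the NFS 4.2 wire format; the real exchange is the LAYOUTSTATS operation in the spec, and the field names below are assumptions.

    # Illustrative sketch: the kind of per-layout I/O statistics a pNFS
    # Flex Files client can report back to the metadata server, which can
    # then place data on the right storage tier.
    from dataclasses import dataclass

    @dataclass
    class LayoutIOStats:
        read_count: int        # completed reads against this layout
        read_bytes: int        # bytes read from the storage device
        write_count: int       # completed writes
        write_bytes: int       # bytes written
        busy_duration_ns: int  # time the device spent on this client's I/O

    def is_hot(stats: LayoutIOStats, threshold: int = 1 << 30) -> bool:
        """Toy placement policy: a file whose recent I/O volume crosses
        1 GiB becomes a candidate for the fastest tier."""
        return stats.read_bytes + stats.write_bytes > threshold

    print(is_hot(LayoutIOStats(1000, 2 << 30, 200, 1 << 28, 5_000_000)))  # True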

NFS 4.2 enables clients and application servers to natively support data virtualization and mobility features. That plays well with Primary Data’s DataSphere software that virtualizes different types of storage into tiers using a single global data space.

“Data virtualization sits between hardware and storage arrays below us and the virtual machine space above us, virtualizing compute resources,” Primary Data CEO Lance Smith said. “We separate the logical view of data from where it’s physically stored. Now we can put data on the right type of storage without bothering or interrupting the application. To do that we need a touch point on the client, and that’s what this is about.  When you put our metadata software into the infrastructure, that’s where virtualization comes alive.”

DataSphere supports SAN and DAS as well as NAS, but the integration of Primary Data technology into NFS 4.2 fits with scale-out NAS customers. The NFS 4.2 spec was completed in November.

“We are heavily engaged with media and entertainment companies,” Smith said. “They can now do clustering of their storage, even if it’s from different vendors. Oil and gas is right behind, looking for performance and scale-out. Financial services firms have about 20 percent of their data that’s super hot and needs to be on the highest-performance tier; they want stuff migrated to a cheaper tier and use cloud and object storage. But that migration has to be seamless and not disrupt the application.”

December 8, 2016  11:32 AM

Western Digital shows gains from acquisitions

Dave Raffo
SanDisk, Western Digital

Western Digital told financial analysts its business will be better than expected this quarter, and it added solid-state drives (SSDs), hard-disk drives (HDDs) and an all-flash array platform to its portfolio.

WD updated its quarterly forecast to $4.75 billion from $4.7 billion at its analyst day this week. It also gave details on NVMe products from its SanDisk and HGST acquisitions and expanded its helium HDD line.

New NVMe all-flash array coming

WD previewed a 2U all-flash platform that the vendor claims will deliver 18 million IOPS using NVMe over PCIe fabric. The first system is due to ship in the first half of 2017. WD pledged to contribute software supporting the platform to the open source community.

WD will position the NVMe array for real-time and streaming analytics applications such as credit card fraud detection, video stream analysis, location-based services, advertising servers, automated systems, and solutions built on artificial intelligence or machine learning (ML).

The new system is part of the InfiniFlash brand WD gained in its SanDisk acquisition. Dave Tang, GM of WD’s data center systems, said WD will eventually expand the platform to include NVMe over Ethernet, which supports longer distances than PCIe but has more latency. Scalability of NVMe over PCIe is limited to either a single rack or an adjacent rack.

“We think customers interested in ultimate performance will go to NVMe over PCIe, and those looking for scalability may opt for NVMe over Ethernet,” Tang said. “They will co-exist and serve different purposes in the data center.”

Tang said he suspects NVMe over Ethernet support remains a few years away. Widespread adoption will require more expensive 100-Gigabit Ethernet. Also, NVMe over Ethernet standards are still evolving.

SSDs, HDDs expand capacities

WD added two SSDs, the Ultrastar SN200 and Ultrastar SS200 Series. The SN200 is an NVMe PCIe SSD and the SS200 is a SAS SSD. Both are available in 2.5-inch and half-height, half-length form factors and in capacities up to 7.68 TB.

The NVMe SSDs are built for cloud and hyperscale storage and big data analytics. The SAS SSDs are aimed at hyper-converged and other server-based storage that use a dual-port design and SAS interface. The Ultrastar line comes from WD’s 2012 HGST acquisition.

WD also said it would ship new higher-capacity helium and Shingled Magnetic Recording (SMR) HDDs. The Ultrastar He12 is a 12 TB 3.5-inch SAS/SATA drive. It will be the highest-capacity helium drive on the market; WD and rival Seagate currently ship 10 TB helium drives. WD will also add a 14 TB SMR He12 drive, which surpasses its current 10 TB SMR capacity drives.

The Ultrastar SN200 and SS200 SSDs are expected to be generally available in the first quarter of 2017, with the He12 helium drive expected in the first half of 2017 and the 14 TB SMR drive around the middle of next year.


December 7, 2016  4:57 PM

SwiftStack public Cloud Sync starts with Amazon, Google

Garry Kranz

Object storage provider SwiftStack Inc. has added hybrid cloud synchronization to its OpenStack-based software controller.

SwiftStack Cloud Sync allows data or subsets of data to exist simultaneously behind a firewall and in multiple public clouds. Customers with private SwiftStack cloud storage can create data copies in Amazon Simple Storage Service (S3) and Google Cloud Platform and replicate between the two.

Cloud Sync replicates native objects between physical nodes and the public cloud. SwiftStack does not charge extra to support multiple copies of objects in different locations. The hybrid cloud topology places data in a single namespace.
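
Because Cloud Sync replicates native objects, anything it lands in a public bucket is readable with standard S3 tooling. A minimal sketch using boto3, assuming a hypothetical bucket name; the sync policies themselves are configured in the SwiftStack controller, not through this API.

    # Read objects Cloud Sync has replicated into S3, using the ordinary
    # AWS SDK. Bucket and prefix names are hypothetical examples.
    import boto3

    s3 = boto3.client("s3")  # credentials via the usual AWS config chain

    resp = s3.list_objects_v2(Bucket="swiftstack-synced-bucket",
                              Prefix="projects/")
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"], obj["LastModified"])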

Mario Blandini, SwiftStack’s vice president of marketing, said Cloud Sync policy management extends the storage of a physical enterprise data center.

“Cloud Sync moves your data close to where your application is running. We synchronize your data to the public cloud.  It’s not a one-time copy; it’s a continuous sync based on policies that are applied to S3 or Google (storage),” Blandini said.

Distributed users can collaborate securely by accessing the same data bucket via Cloud Sync. Other use cases include active archiving, cloud bursting, offsite disaster recovery and cold archiving with Amazon Glacier and Google Cloud Storage Nearline.

Blandini said SwiftStack customers requested additional cloud support beyond Amazon Web Services.

“Customers buy servers from multiple vendors,” he said. “They want the same consumption experience with the public cloud. They want to have data in Amazon and Google and be able to balance between them.”

Existing SwiftStack customers can obtain Cloud Sync at no charge. SwiftStack’s capacity-based licensing starts at $375 per terabyte annually for a minimum of 50 TB.
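
At those list prices, the entry point works out to under $20,000 a year, as a quick back-of-the-envelope check shows.

    # SwiftStack licensing floor: 50 TB minimum at $375 per TB per year.
    min_tb, price_per_tb_year = 50, 375
    print(f"${min_tb * price_per_tb_year:,} per year")  # -> $18,750 per year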


December 6, 2016  3:33 PM

Datrium snares $55 million, nearly year after DVX release

Carol Sliwa

Datrium secured $55 million in Series C financing as it approaches the one-year anniversary of the general release of its flagship DVX storage system for VMware virtual machines.

The new financing – led by New Enterprise Associates – raises the funding total to more than $110 million since the Sunnyvale, California-based startup launched in late 2012. Datrium’s early backers included founders and executives from VMware and Data Domain, both now part of the Dell EMC empire.

Datrium CEO and founder Brian Biles said the company plans to use the additional funding to expand into Europe, from its initial sales base in the U.S. and Japan. He said Datrium will also add product features, and grow the support, engineering, sales and marketing teams. Datrium currently employs about 140, Biles said.

“I model on the early history of Data Domain, and we’re beating that regularly in units and revenue. So I’m feeling pretty good about that,” Biles said, commenting on another startup he co-founded.

Datrium claims more than 50 customers since the DVX product became generally available in January.  That includes users in banking, cloud hosting, health care, manufacturing, media and entertainment, technology and public sectors.

The Datrium storage system consists of software that runs on customer-supplied servers, plus NetShelf appliances equipped with 7,200 rpm SAS hard disk drives (HDDs) for persistent storage. The NetShelf appliance is currently disk-only, but Biles predicted Datrium will offer a flash version within three years, once flash prices fall further.

The DVX software orchestrates and manages data placement between the NetShelf appliance and host server, and uses customer-supplied, server-based flash cache to accelerate reads. The software also provides storage functionality such as inline deduplication and compression, clones, and RAID for the persistent backend storage.

The local host server can use up to 16 TB of flash for caching, so after deduplication and compression, the effective local flash cache capacity can reach 32 TB to 100 TB, Datrium has said.
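
The quoted range implies data reduction ratios of roughly 2x to a little over 6x, as a quick calculation shows.

    # How 16 TB of raw host flash becomes 32-100 TB of effective cache:
    # effective capacity = raw capacity * data reduction ratio.
    raw_tb = 16
    for ratio in (2.0, 4.0, 6.25):
        print(f"{ratio}x reduction -> {raw_tb * ratio:.0f} TB effective")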

So far, the main use case for Datrium storage has been database workloads. Biles said 63% of customers listed databases (primarily Microsoft SQL Server) as the dominant use case for the DVX system. He said the product has also seen strong uptake in mixed-use virtual machine (VM) and VDI environments and attracted a number of customers in data warehousing scenarios.

“They’ve told us that not only is it fast, but as they virtualize their warehouses, they’ve found that performance on Datrium is faster than the warehouse on physical servers,” Biles said.

Another emerging trend Datrium noted is the use of NVMe-based PCI Express solid-state drives (SSDs) for server-based flash cache. Datrium vice president of marketing Craig Nunes estimated that NVMe adoption is approaching 10%.

One customer, Northrim Bank in Alaska, uses 2 TB NVMe-based SSDs in each of the 16 VMware servers at its primary data center in Anchorage and its eight VMware servers at its secondary data center in Fairbanks.

Benjamin Craig, Northrim’s executive vice president and chief information officer, said the company’s Iometer testing showed a near doubling of IOPS and throughput with the 2 TB NVMe SSDs over enterprise-class 1 TB SATA SSDs.

Craig said that Northrim was able to procure the Intel NVMe SSDs from its server vendor, Dell, at about $1,000 per TB – within 10% of the price per TB of write-intensive, enterprise-class SATA SSDs.


November 30, 2016  8:18 PM

HPE launches flash, SMB storage updates

Carol Sliwa

Hewlett Packard Enterprise (HPE) rolled out a new 3PAR flash pricing initiative and StoreEasy file storage enhancements as part of a late-year rush of storage product releases.

HPE last week disclosed that its $779 million in storage revenue was down 5% year over year for the fiscal fourth quarter, which ended Oct. 31. For the full fiscal 2016, HPE storage revenue of $3.065 billion decreased 4% from last year’s $3.18 billion.

This week, HPE unveiled a 3PAR Flash Now initiative that it claims will allow customers to buy on-premises all-flash 3PAR StoreServ storage at 3 cents per usable GB per month after data reduction. Program incentives include deferred payments until a customer’s all-flash array is up and running, and automated data migration tools at no cost.
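
For a sense of scale, here is what that rate implies; the 100 TB figure is an illustrative assumption, and actual cost depends on configuration and achieved data reduction.

    # 3 cents per usable GB per month, applied to a 100 TB usable system.
    usable_gb = 100 * 1000          # decimal TB -> GB
    monthly = usable_gb * 0.03
    print(f"${monthly:,.0f}/month, ${monthly * 12:,.0f}/year")
    # -> $3,000/month, $36,000/year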

HPE also updated its StoreFabric Fibre Channel (FC) networking portfolio to the latest 32 Gbps technology. The new 32-Gbps FC director, fixed-port switches and extension blades are based on technology from Brocade. HPE Smart SAN technology enables automated orchestration from 3PAR StoreServ flash storage arrays and can significantly reduce provisioning time, according to HPE.

HPE storage updates for SMBs

In December, HPE will make available upgraded StoreEasy network-attached file storage products for SMBs. New configurations of HPE StoreEasy support Microsoft Windows Storage Server 2016 Standard Edition, the latest Intel Xeon E5 v4 processors and higher-speed DDR4-2400 memory.

“Our focus is to make on-premise and hybrid cloud file storage even easier for small and mid-sized businesses, and with more cost predictability than public cloud solutions,” said Stephen Bacon, director of product management for HPE network-attached and software-defined storage.

Bacon said the jump from Windows Storage Server 2012 R2 to Windows Storage Server 2016 brings enhancements such as finer-grained deduplication and compression for larger files and file systems, hardened security against man-in-the-middle attacks, and improved performance for data-in-flight encryption.

StoreEasy customers have the option to use Microsoft Azure as an off-site backup target thanks to native Microsoft integration. Microsoft Data Protection Manager customers can also use StoreEasy as an on-premises backup target for Microsoft applications, virtual machines and client PCs, according to HPE.

The 2U StoreEasy NAS appliance will give customers the option of using new 10 TB hard disk drive (HDD) bundles to boost density by 25%. The HPE StoreEasy 1650 Expanded model will be able to scale to 280 TB raw capacity compared to the previous limit of 224 TB with 8 TB drives. Customers also will retain previously available drive options, including small-form-factor enterprise SAS drives, large-form-factor near-line drives and solid-state drives (SSDs).
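
The 25% figure falls straight out of the drive swap: the same number of large-form-factor bays holds 10 TB drives instead of 8 TB drives.

    # Same enclosure, bigger drives: 224 TB / 8 TB implies 28 bays,
    # and 28 bays * 10 TB = 280 TB -- a 25% raw-capacity gain.
    bays = 224 // 8
    print(bays, "bays ->", bays * 10, "TB raw")  # 28 bays -> 280 TB raw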

Two key HPE storage management tools for the StoreEasy line are also getting updates. Bacon said HPE dramatically simplified its StoreEasy Pool Manager provisioning tool to enable customers with limited IT expertise to maximize capacity or performance, or strike a balance between the two, based on updated best practices.

“The person who’s provisioning it doesn’t need to know about RAID levels and different drive types and so on. For them, the experience is: Do they want to optimize their storage layout for capacity, for performance, or a balance of both?” Bacon said.

HPE is also updating its StoreEasy Dashboard monitoring tool to trigger low-capacity notifications for both customers and partners. Bacon said the partners receive information on the recommended drive bundle based on the customer’s configuration.

“With this [StoreEasy] makeover, we really thought through the capacity lifecycle for a system and, at each point, have made enhancements to make life easier for our customers and for our partners who are assisting those customers,” Bacon said.

HPE promotes its entry-level StoreEasy NAS at under $15,000, and Bacon said pricing has not materially changed with the update. Existing HPE StoreEasy customers running Windows Storage Server 2012 or 2012 R2 can purchase an upgrade kit for $700.


November 30, 2016  12:42 PM

Nutanix adds oodles of customers and red ink

Dave Raffo

In its first quarter as a public company, hyper-converged pioneer Nutanix recorded substantial revenue gains while continuing to suffer heavy losses. CEO Dheeraj Pandey said the increase in sales shows progress in Nutanix’s goal to establish itself as a trusted enterprise cloud vendor. The losses are the price it pays to earn that trust.

Nutanix reported revenue of $166.8 million for the quarter, up 125% over last year and ahead of financial analysts’ expectations. Its $129.65 million in product revenue increased 84% from last year. Part of the uptick is due to maturing OEM deals with Dell EMC and Lenovo, which bundle Nutanix software on their servers. Nutanix forecast $175 million to $180 million for this quarter, which compares to $102.7 million for the same quarter last year.

Despite all those sales, Nutanix lost $162.2 million last quarter, compared to a $38.5 million loss a year earlier. Sales and marketing, research and development, and overall operating expenses all shot up as Nutanix chases enterprise customers while competing with large IT vendors.

Nutanix raised $250 million in its September initial public offering and finished the quarter with $347 million in cash, yet the company appears at least three years away from turning a profit.

“As a young public company, we’ll strive to balance our short-term goals with the long-term bets we must make for sustainable differentiation,” Pandey said on the earnings call Tuesday night.

Nutanix’s growth included nearly 400 new employees in the quarter, including 112 from the PernixData and Calm.io acquisitions. The company finished the quarter with more than 2,350 employees.

Pandey said the company made progress landing enterprise deals and its Acropolis Hypervisor (AHV) is gaining traction with customers. Overall, the vendor added more than 700 customers in the quarter to bring its total to more than 4,400. Pandey said 256 customers spent more than $1 million on Nutanix in the quarter, a 137% increase from last year.

Pandey spent much of the call outlining his company’s history and future strategy. That strategy consists of providing a cloud infrastructure as a complement or alternative to public clouds such as Amazon Web Services (AWS).

“Today we will cover a lot of ground on the way we think and dream about the future of enterprise computing,” Pandey said at the start of the call.

Here is his take on important issues:

Coopetition with Dell EMC

Pandey said the Dell-EMC merger – which also brought VMware to Dell – did not change the relationship between Nutanix and Dell. He said Nutanix is also working closely with EMC sales teams, despite EMC’s competitive VxRail hyper-converged system that uses VMware vSAN software. Pandey said some of Nutanix’s large deals last quarter came through Dell EMC.

“From the sidelines, it might appear that because of the VMware ownership, Dell EMC and Nutanix are in a zero-sum game,” he said. “As hustlers in the front line, we see a very different picture in which customer interest dictates alliances.

“What keeps Dell and Nutanix in an honest partnership with mutual respect is the fact that we can compete and cooperate on a deal-by-deal basis.”

On complementing/replacing AWS

Nutanix is counting on enterprises using its appliances as an alternative to the public cloud, but Pandey said there is room for Nutanix and AWS. “Contrary to some perceptions, we believe adoption of Amazon Web Services is ultimately a tailwind for us,” he said. “Owning and renting will find a balance, and the operating system that straddles the two will emerge as the new virtualization layer for enterprise infrastructure. I will repeat that again. The operating system that straddles the two will emerge as the new virtualization layer infrastructure.”

Convergence and hyper-convergence

Pandey said converged infrastructure came about when “large infrastructure incumbents came together to define bolt-on convergence heavily driven by joint marketing and professional services” and “today the market is ashamed of talking about converged infrastructure because it was a hack that flattered to deceive.” He said Nutanix’s pioneering work in hyper-convergence truly brought parts of the IT stack together.

Yet he said Nutanix is more than hyper-convergence, which he called “a mere pit stop in the journey of an enterprise cloud system. While we remain a force to reckon with in the hyper-converged infrastructure space, we are increasingly winning in the enterprise due to our full stack solution that includes native virtualization, management, cloud orchestration and, going forward, network security.” At another point in the call, he said, “The laggard observer focuses on our core data management capabilities to box us into a hyper-converged category.”

Large customer wins

Pandey said Toyota North America expanded its Nutanix product acquisition as part of a data center consolidation last quarter, and has spent a total of $6 million on Nutanix technology. Scotiabank added Nutanix for a Splunk initiative and Singapore investment firm GIC spent more than $1 million in the quarter to help build an enterprise cloud. ICICI Bank Limited in India expanded its Nutanix footprint and added AHV. The U.S. Army Human Resources Command spent more than $1 million on Nutanix’s enterprise cloud stack and AHV for 54 sites, and the U.S. Navy installed Nutanix software on Cisco UCS hardware to run Splunk and Microsoft SQL Server.

AHV traction

Pandey said 17% of Nutanix customers have adopted the Acropolis Hypervisor over the past year, and nearly half of its remote office customers deployed AHV last quarter. He said nearly one-third of U.S. federal government Nutanix nodes shipped in the quarter included AHV.


November 23, 2016  4:06 PM

HyperGrid swears off hardware, favors HyperCloud SaaS

Garry Kranz

HyperGrid put the finishing touches on its shift from storage hardware to SaaS-based delivery with the introduction of HyperCloud. The company formerly known as Gridstore said HyperCloud helps enterprises build scalable IT services with pay-as-you-go SaaS subscription pricing.

“We believe cloud is a model, not a destination. We want to help enterprise CIOs transform their applications to containers and microservices architecture,” HyperGrid chief product officer Manoj Nair said.

HyperGrid’s new hybrid cloud integrates container technology picked up from DCHQ. Gridstore and DCHQ merged in June, renaming the combined company HyperGrid.

HyperCloud bundles HyperGrid 3U hyper-converged appliances, HyperForm container orchestration and the software-defined HyperWeave fabric that integrates servers, all-flash storage and programmable network switches.

Customers may deploy HyperCloud as an integrated appliance in their data centers or consume applications, infrastructure and platforms via 18 hybrid cloud partners.  Metered billing is the same regardless of deployment method.  Containers, hyper-converged infrastructure, SQL Server instances, virtual desktops and unified cloud management are purchased as discretely consumed virtual units.

HyperForm provides lift-and-shift cloud migration. Internally developed containers can be hosted locally or burst to the cloud with policy-based governance and security.

The HyperCloud launch includes a new HyperGrid block storage driver, permitting Docker containers to run directly on all-flash storage. Scale-out flash storage gets a boost with support for nonvolatile memory express (NVMe) drives. A single HyperGrid node can support 10 TB of NVMe flash across a 100 Gigabit RDMA over Converged Ethernet (RoCE) fabric.
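
From the Docker side, consuming such a block storage driver would look roughly like the sketch below, using the Docker SDK for Python. The driver name and its options are hypothetical assumptions; HyperGrid has not published them here.

    # Hypothetical sketch: create a volume on a vendor block-storage
    # driver and attach it to a container. The driver name "hypergrid"
    # and its size option are assumptions for illustration.
    import docker

    client = docker.from_env()
    vol = client.volumes.create(name="db-data", driver="hypergrid",
                                driver_opts={"size": "100GiB"})
    client.containers.run(
        "postgres:9.6", detach=True,
        volumes={vol.name: {"bind": "/var/lib/postgresql/data", "mode": "rw"}})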

HyperCloud has qualified Dell PowerEdge C Series and Hewlett Packard Enterprise DL Series ProLiant rack servers and Micron NVMe drives. Minimum subscription is a one-year term.

HyperGrid’s cloud-based services model marks a departure from the company’s origins as Gridstore, which started out selling Microsoft-only arrays before adding all-flash hyper-converged appliances in 2014.

“We are (no longer) focused on selling a disk hardware appliance,” Nair said. “That business model is in the past.”


November 17, 2016  5:24 PM

NetApp’s hyper-convergence answer? All-flash and CI arrays

Dave Raffo
FlexPod, Hyper-convergence, NetApp, SolidFire

NetApp’s answer to hyper-convergence is to address the market through FlexPod and SolidFire.

FlexPod is the NetApp-Cisco converged infrastructure program, and SolidFire is the all-flash array platform NetApp acquired early this year. Those were the technologies NetApp CEO George Kurian spoke about Wednesday when asked about hyper-convergence on NetApp’s earnings call.

Kurian sounded as if the major focus of hyper-convergence is to provide infrastructure for departments and remote offices.

“We have two approaches to compete with hyper-converged solutions,” he said. “One is a set of innovations that we brought to the FlexPod family called FlexPod Automation, and the second is with SolidFire, which provides a zero-touch storage provisioning solution. The release of SolidFire that we introduced in the summer of this year, called Fluorine, allows us to compete very well with hyper-converged solutions and VMware environments, and we have been seeing wins.”

FlexPod with Infrastructure Automation provides a way for smaller companies to expedite the ordering and installation of a bundle consisting of Cisco UCS Mini, NetApp FAS arrays, hypervisors and other management software. But the server and storage are distinct products, while hyper-convergence puts all of that into one chassis.

Fluorine is the latest version of the SolidFire Element operating system. It included support for VMware VVOLs and greater virtual machine integration, but SolidFire is an array without a server built in.

While FlexPod with Infrastructure Automation and SolidFire can serve as storage for virtual desktop infrastructure and remote offices, many hyper-converged products are moving beyond that to serve as organizations’ primary storage.

Hyper-convergence appears to be the biggest hole in NetApp’s product portfolio, now that it is selling a good deal of all-flash arrays. Kurian said NetApp was on track to hit $1 billion in annual all-flash revenue, counting All-Flash FAS, E-Series and SolidFire arrays. Still, NetApp’s product revenue of $741 million declined 13% from last year and its overall revenue of $1.34 billion slipped 7.3%. Both results were below Wall Street expectations. The vendor’s $109 million profit was down from last year but better than expected, mainly because of cost cuts that included layoffs.

NetApp forecast revenue for this quarter of between $1.325 billion and $1.475 billion, which means revenue will likely increase over last year’s $1.386 billion.

“We are on track to return the company to long-term growth,” Kurian said.


November 16, 2016  12:23 PM

Pure Storage: We’re NVMe-ready

Dave Raffo
Pure Storage

Pure Storage isn’t riding the NVMe bandwagon yet, but it has reserved a seat.

Pure promises its customers they can upgrade all new FlashArray//M systems to NVMe by the end of 2017 through the vendor’s Evergreen Storage program.

NVMe is a memory-class protocol for communications between CPU and flash. NVMe SSDs are expected to replace SAS SSDs. Bulk shipments are expected in 2017, although arrays such as the EMC DSSD D5 and systems from startups E8 Storage and Apeiron Data Systems already use NVMe.

Pure today said it is offering an NVMe-Ready Guarantee. Pure VP of products Matt Kixmoeller said the guarantee means if Pure cannot upgrade an array to NVMe in 2017, the vendor will replace that customer’s system with a new NVMe array.

“We think we’re well set up compared to the retrofit legacy vendors,” Kixmoeller said, referring to storage vendors that have been around since before flash entered enterprise storage. “We believe NVMe is the next big thing and will be a strain on those legacy architectures. It’s a dramatic change.”

Although Pure isn’t first to ship NVMe SSDs, Kixmoeller said the vendor has prepared for the new technology since the start.

All FlashArray//M arrays have shipped with dual-ported hot pluggable NVMe NV-RAM devices since 2015. Every flash module slot is wired for SAS and PCIe/NVMe connectivity. Pure maintains its controllers can be upgraded non-disruptively from SAS to NVMe, and its Purity Operating Environment is optimized for NVMe with a massively parallel and multi-threaded design.

Kixmoeller said Pure will support M series arrays with NVMe and flash, or allow customers to switch completely to NVMe. He predicted the full transition to NVMe will take a year or two.

“When we built the FlashArray//M, we wanted to build a product that was upgradeable,” he said. “We foresaw the change in flash would be much faster than disk. If we built an array that needed to be upgraded every three to five years, it would seem like a dinosaur.”


November 15, 2016  7:17 AM

Qumulo scales out hardware options through HPE partnership

Dave Raffo
Storage

Qumulo has scored an OEM deal with Hewlett Packard Enterprise to sell the startup’s data-aware scale-out NAS software on Apollo hardware.

Qumulo Core on HPE Apollo will be available on 2U Apollo 4200 appliances starting at 180 TB raw per node. Qumulo Core is also sold on 1U and 4U Qumulo QC-Series commodity hardware appliances.

Jeff Cobb, Qumulo VP of product management, said the vendor intends to extend its hardware compatibility list to more HPE models and other server vendors.

“This is definitely the first in a series,” Cobb said. “We’ll follow this with different sizes and price-performance points with HPE, and we’ll add other vendors to our hardware compatibility list.”

DataGravity, another data-aware storage vendor, has stopped selling its own appliances and now sells only software. DataGravity is pursuing partnerships with hardware vendors but has not disclosed any major OEM deals.

Cobb said the HPE Apollo products give Qumulo customers a denser hardware option, and could appeal to organizations that have standardized on HPE hardware.

“We’re a software company, and the value that we have comes from software,” Cobb said. “That software has to run on infrastructure, and our philosophy has always been the customers should choose the infrastructure.”

He said Qumulo will also extend its file system to public clouds.

Qumulo’s QC-Series includes 1U QC24 (24 TB) and QC40 (40 TB) nodes and 4U QC104 (104 TB), QC208 (208 TB) and QC260 (260 TB) nodes. Like the QC-Series, the Qumulo Core on Apollo appliances require four-node minimum clusters.
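
Combined with the 180 TB per-node Apollo configuration, that four-node floor sets a sizable entry point.

    # Smallest Qumulo Core on Apollo cluster: four nodes at 180 TB raw each.
    nodes, per_node_tb = 4, 180
    print(nodes * per_node_tb, "TB raw minimum")  # -> 720 TB raw minimum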

Qumulo also launched Core 2.5, adding snapshots to its file and object storage. Version 2.5 also includes throughput analytics, intelligent caching of metadata on solid-state drives and improvements to the erasure coding added in version 2.0 earlier this year.

