Storage Soup

October 21, 2016  3:58 PM

FalconStor updates FreeStor, adopts new pricing model

Carol Sliwa

FalconStor Software took aim at hybrid cloud deployments with a new pricing model and product upgrade for its FreeStor storage virtualization and block-based data services.

The Melville, NY-based software vendor now charges customers only for the primary copy of data – not the total storage capacity under management – with its subscription-based pricing model. The FreeStor software provides common tools, single-pane management and block-based services such as data migration, protection, recovery, and analytics for use with heterogeneous storage.

FalconStor CEO Gary Quinn estimated that 70% of FreeStor’s customers are managed service providers (MSPs). He said providers offer services such as backup or disaster recovery (DR) and want the ability to store an additional copy of their customers’ data in public clouds such as Amazon Web Services (AWS) or Microsoft Azure.

Quinn said FalconStor’s enterprise customers have also been asking for similar options to move to AWS or Azure for virtual backup, DR and test and development use cases.

“It doesn’t really cost me anything to make a copy of the data or replicate the data to another location and manage it through the FreeStor management server. So our view is that customers should pay once,” Quinn said.

He said the list price for the FreeStor software, inclusive of data services, is three cents per GB per month to use on the primary data copy. The customer supplies the hardware.
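
A rough, hypothetical illustration of what pay-once-on-the-primary-copy means in practice (the 100 TB workload and copy counts below are assumptions, not FalconStor figures): the bill tracks only the primary copy, no matter how many replicas land in AWS, Azure or a DR site.

```python
# Hypothetical illustration of per-primary-copy pricing; the workload size
# and copy counts are assumptions, not FalconStor figures.
LIST_PRICE_PER_GB_MONTH = 0.03      # three cents per GB per month (list price)

primary_copy_gb = 100 * 1024        # a hypothetical 100 TB primary data set
secondary_copies = 2                # e.g., one DR copy plus one cloud copy
                                    # (deliberately absent from the formula below)

monthly_charge = primary_copy_gb * LIST_PRICE_PER_GB_MONTH
print(f"${monthly_charge:,.2f} per month")   # ~$3,072, regardless of the extra copies
```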

Eric Burgener, a research director at International Data Corp., said he has seen pay-as-you-go models from other vendors but nothing like FalconStor’s aggressive pricing.

FalconStor changed the pricing model in anticipation of a new version of its FreeStor software, which extends support to public clouds. FalconStor added support for Amazon, Microsoft, Alibaba, Huawei and Oracle to go with its prior support of OpenStack-based deployments.

Tim Sheets, vice president of marketing at FalconStor, said that in an Amazon environment, the FreeStor Virtual Appliance (FSS VA) would run on Amazon Elastic Compute Cloud (EC2). The FSS VA could either use Amazon’s Elastic Block Store (EBS) or present block services through the AWS Storage Gateway to load data into Amazon’s object-based Simple Storage Service (S3), he said.
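
For the EBS half of that setup, a minimal sketch using the AWS SDK for Python (boto3) rather than FalconStor’s own tooling is below; the region, volume size, instance ID and device name are placeholder assumptions.

```python
import boto3

# Illustrative only: provision an EBS volume and attach it to the EC2 instance
# hosting a virtual storage appliance. Region, size, instance ID and device
# name are placeholders, not FalconStor-published values.
ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,              # GiB of block capacity for the appliance
    VolumeType="gp2",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",   # the appliance's instance (placeholder)
    Device="/dev/sdf",
)
```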

“You don’t have to go learn a new set of tools from Amazon if you haven’t done it before. We’ve already got the configuration set up to really simplify it for those customers,” Sheets said. “And you also get the analytics, all the insights, through a single pane of glass with the FreeStor management server that you wouldn’t get if you had to use the Amazon or an Azure gateway.”

Customers could also use FreeStor to manage data across multiple supported public clouds or to move data from one public cloud to another, so long as the FSS VA runs in each cloud.

“I’m sure that Amazon’s not going to provide tools to leave Amazon and go to Azure,” Quinn said. “That’s what we’re doing here, the same way as if you wanted to move from EMC to HP on disk or EMC to Pure on flash. It’s just being done in the cloud.”

The new FreeStor software also beefs up external security with support for the Lightweight Directory Access Protocol (LDAP) and Microsoft Active Directory for authentication, authorization, and auditing.

Other newly supported features include enhanced analytics that enable core-to-edge visibility down to the application level, service-level agreement (SLA) management, improved support for NVMe to boost performance and lower latency, and Linux 7 compliance.

The FreeStor updates arrive as FalconStor battles financial woes. FalconStor reported $8.1 million in revenue for the second quarter, down from $9.6 million in Q2 of 2015, with only $9.4 million in cash on hand. But Quinn said at the time that FalconStor was making solid progress selling FreeStor subscriptions to MSPs, enterprises and OEMs.

October 20, 2016  6:24 AM

Dell EMC’s making XtremIO multi-protocol

Dave Raffo

EMC’s XtremIO all-flash SAN is getting a file-system injection thanks to Dell Fluid File System (FluidFS).

Dell EMC previewed the NAS capabilities for XtremIO at Dell EMC World, saying they would be generally available by late 2017. FluidFS is a scale-out NAS technology that Dell acquired from Exanet in 2010 and used to add file capabilities to its Compellent and EqualLogic SAN arrays. But even before Dell acquired EMC for more than $60 billion, the development teams from XtremIO and FluidFS – both based in Israel – were collaborating on their integration.

Chris Ratcliffe, Dell EMC senior vice president of core technologies, jokingly referred to the joint development as a “black ops” operation. The integration will add NFS, SMB, Hadoop Distributed File System (HDFS) and NDMP to XtremIO’s current Fibre Channel and iSCSI block storage support.

As when Dell added FluidFS to Compellent and EqualLogic, XtremIO will require a separate piece of hardware to deliver file services. XtremIO CTO Itzik Reich called the appliance an extension to XtremIO rather than a full gateway, and said the XtremIO approach will not impact performance. He also said file storage will be managed through the same interface as block storage with “the same look and feel.”

Reich said the original design goal for XtremIO included adding data services in later iterations. “What’s in the market today is just the beginning,” he said of the product that EMC claims has more than 3,000 customers and $3 billion in revenue in three years on the market. He also said there will be a lot more added to the next-generation XtremIO, including more drives, higher capacity SSDs and software-defined storage capabilities.

“We were looking for ways to complement our scale-out architecture,” he said. “We wanted it to be more than just Fibre Channel. When we heard talk of a partnership, I gave Michael (Dell) a call and said this is a good project for us to add file services.”

Dell EMC this week announced plans to deliver an all-flash version of its Isilon scale-out NAS platform in 2017. Isilon is aimed at traditional scale-out NAS use cases such as media/entertainment, life sciences and Hadoop analytics. Ratcliffe said XtremIO’s NAS would be more for traditional SAN customers. “This is scale-out NAS for transactional environments that require sub-millisecond response times,” he said.

Reich estimated it would have taken at least five years to build file services from scratch into XtremIO. His team looked at file systems from EMC’s Unity unified arrays and Isilon scale-out NAS but determined FluidFS fit better with XtremIO’s architecture.

“Unity doesn’t scale out,” he said. “Isilon scales out like nobody’s business, but it doesn’t provide the latency we need.”

EMC’s Unity, Isilon and VMAX All-Flash arrays already support 15 TB SSDs, but they won’t be available on XtremIO until the next generation. Reich said his team wants to make sure using the higher capacity drives will not impact performance. “People don’t realize, the larger the drive capacity gets, the worse the performance gets,” he said. “We are not willing to sacrifice our predictable performance.”

October 19, 2016  10:56 AM

Cleversafe-based IBM Cloud Object Storage service debuts

Carol Sliwa

IBM marked the one-year anniversary of its Cleversafe acquisition with the launch of a “pay-as-you-go” cloud object storage service enabling customers to use the same technology on site and off premises.

IBM foreshadowed its plans to facilitate hybrid cloud deployments on Oct. 5, 2015, when it acquired Cleversafe. But until this month, IBM made the Cleversafe object storage software available only for on-premises use or in a dedicated environment in the IBM Cloud.

Russ Kennedy, vice president of product strategy and customer success at IBM, said IBM has done considerable work to extend its public cloud’s previously limited multi-tenancy capabilities to support millions of concurrent tenants and to integrate the core Cleversafe technology.

Kennedy said customers have the flexibility to store application data in the cloud and move it back on premises, or vice versa, if they choose. He said IBM is looking to provide more automation capabilities in the future, “where decisions are made based on utilization or access or certain parameters that may drive the workloads in one direction or another.”

IBM Cloud Object Storage services are now available in the U.S. and Europe in three configurations:

–Standard – Cleversafe-based high-performance offering for active workloads; supports object storage application programming interfaces (APIs) such as Amazon S3 and OpenStack Swift (see the access sketch after this list).

–Vault – lower-cost offering that targets archive, backup and other workloads where data is infrequently accessed.

–Dedicated – single-tenant IBM Object Storage running on dedicated servers in IBM Cloud data centers; available as an IBM managed service or a self-managed option.
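
Because the Standard tier speaks the Amazon S3 API, existing S3 tooling can simply be pointed at an IBM Cloud Object Storage endpoint. A minimal sketch with the AWS SDK for Python is below; the endpoint URL, credentials and bucket name are placeholders, not IBM-published values.

```python
import boto3

# Illustrative S3-compatible access to an object store such as IBM Cloud
# Object Storage. Endpoint, credentials and bucket name are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://cos.example-region.example.com",
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

s3.put_object(
    Bucket="archive-bucket",
    Key="backups/db-2016-10-19.dump",
    Body=b"...backup payload...",
)

obj = s3.get_object(Bucket="archive-bucket", Key="backups/db-2016-10-19.dump")
print(obj["Body"].read()[:20])
```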

Kennedy said SecureSlice technology from Cleversafe eliminates the need for customers to manage encryption keys. SecureSlice automatically encrypts each data segment before it is erasure coded and distributed. IBM Cloud Accesser technology can reassemble the data at the customer’s primary data center, and SecureSlice decrypts it.

IBM Cloud Object Storage has regional and cross-regional options. The cross-regional service sends sliced data to at least three geographic regions. The regional service stores data in multiple data centers in a specific region.

Kennedy said IBM operates close to 50 data centers worldwide, including 12 to 15 in North America. IBM Cloud Object Storage is due to become available in the Asia-Pacific region by year’s end, with other locations to follow in 2017, according to Kennedy.

IBM Cloud Object Storage is priced on a per-GB, per-month basis, with additional fees for transactions. IBM’s on-premises object storage software can be licensed based on capacity or through a subscription model.

Scott Sinclair, a senior analyst at Enterprise Strategy Group (ESG) Inc., said a 2016 ESG poll of current enterprise Amazon Web Services (AWS) customers identified Microsoft Azure and IBM as the most viable competitors to AWS.

Sinclair said using the same object storage software on premises and off premises could provide advantages. He said storage vendors often differ in how they implement protocols, so users might have peace of mind with the same technology in both places. He said they also know what to expect for service and support, working with a partner that understands both their on-premises and off-premises needs.

“The more vendors that you have to manage in your IT organization creates work,” Sinclair said. “And that work requires people.”

Kennedy said the exponential growth of information is driving users to recognize the cost, scalability and management benefits that object storage can provide over traditional storage, especially when they need petabytes or exabytes of capacity.

“There are still headwinds for object storage,” he said.  “Not all the applications in the world have the ability to write to object storage like they do to traditional file-based or block-based storage. But that’s changing. And it’s changing quite rapidly with the popularity of moving to the cloud.”

October 18, 2016  12:32 PM

Magnetic storage device maker Everspin spins its way to Nasdaq

Garry Kranz

Magneto-resistive RAM chipmaker Everspin Technologies is trying to make sure its in-memory magnetic storage technology will spin into the future. The Chandler, Ariz.-based vendor this month raised $40 million in an initial public offering needed to “continue as a going concern” beyond 2016, according to its securities filing.

Everspin shares started trading on the Nasdaq Global Market Oct. 7 under ticker symbol MRAM. Investors purchased 5 million shares at $8 per share. Underwriters retain an option to scoop up an over-allotment of 750,000 shares at the offering price, which would increase proceeds by roughly $6 million to $46 million.

In a concurrent transaction, Everspin said it expects to get an additional $5 million via a private placement of 625,000 shares with China-based NOR memory maker GigaDevice Semiconductor (HK) Limited.

Shares in Everspin peaked at $9.99 during the first day of trading before pulling back Monday to $6.69 on volume of 161,170 shares, a drop of 33%.

The early-stage vendor has lost money every year since its 2008 spinout from Freescale Semiconductor Inc., now a subsidiary of Netherlands-based NXP Semiconductors. A net loss occurs when a company’s operating expenses, interest and taxes exceed its revenue.

Everspin has accumulated an $89.7 million deficit and has $2.6 million in cash and cash equivalents on hand. Through June, Everspin’s net loss was $10 million on revenue of $12.8 million. Net losses in fiscal year (FY) 2015 were $19 million on total revenue of $26.5 million, which followed a $10 million net loss on revenue of $24.9 million in FY 2014.

Magneto-resistive RAM technology stores data as a magnetic state. Traditional semiconductor memory uses an electrical charge.  Everspin MRAM is available as a discrete device or an embedded system on a chip.  The products are designed to read and write data at speeds on par with DRAM and SRAM.

Everspin’s magnetic storage combines the persistence of nonvolatile memory with the speed and endurance of random access memory.  The MRAM devices can be used as byte-addressable memory channel storage. Everspin stacks chips in a vertical plane to boost cell density and enable MRAM to function as persistent memory.

Everspin manufactures its magnetic storage on industry-standard CMOS wafers at its Chandler fab plant, using a magnetic tunnel junction device as the memory element. Manufacturing of higher density MRAM chips is outsourced to fab partner GlobalFoundries, which holds a $5 million ownership stake in Everspin.

Newcomers like Everspin and RAM card makers could represent the next wave of storage technology, following the dizzying pace of adoption of NAND flash. Everspin’s lower density MRAM products range from 128 kilobits to 16 megabits (Mb). Common uses include automotive, industrial and transportation applications. Higher density chips range from 64 Mb to 256 Mb and provide magnetic storage to the enterprise storage market, including server, SSD and appliance vendors. Everspin counts Broadcom, Dell, IBM and Lenovo as storage customers.

October 14, 2016  2:31 PM

HubStor gives Microsoft Azure encryption a two-pronged focus

Garry Kranz

Cloud archiving startup HubStor is fortifying its data protection, adding two methods to apply Azure encryption of at-rest data in Microsoft public cloud storage.

Customers of the Ottawa-based software company may opt to use Azure Storage Service Encryption (SSE) or HubStor’s virtual cloud gateway to encrypt cloud-hosted data at rest. The choice depends on how much cloud-based searching an organization expects to do and the level of security it requires.

The Microsoft SSE setting automatically encrypts data as it is written to persistent storage in Azure.

Encryption with HubStor’s virtual cloud gateway preserves a subset of encrypted data locally, then synchronizes file shares to Azure. HubStor lets customers retain local control by applying 256-bit AES encryption ciphers, although this approach limits indexing and search capabilities.
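
As a rough illustration of the gateway-side idea (a 256-bit AES key held locally, so only ciphertext reaches Azure), the sketch below uses generic Python libraries rather than HubStor’s gateway; the connection string, file, container and blob names are placeholders.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from azure.storage.blob import BlobServiceClient

# Illustrative client-side 256-bit AES encryption before upload to Azure Blob
# Storage; not HubStor's implementation. Names and secrets are placeholders.
key = AESGCM.generate_key(bit_length=256)        # key stays on premises
aesgcm = AESGCM(key)

plaintext = open("quarterly-report.docx", "rb").read()
nonce = os.urandom(12)
ciphertext = nonce + aesgcm.encrypt(nonce, plaintext, None)

service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONN"])
blob = service.get_blob_client(container="archive", blob="quarterly-report.docx.enc")
blob.upload_blob(ciphertext, overwrite=True)     # only ciphertext leaves the site
```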

HubStor’s indexing engine performs transparent decryption of data to render content. For that reason, the vendor suggests customers use gateway encryption locally, and it recommends Azure SSE for enterprises that regularly run search queries on cloud data sets.

“Before these enhancements, any content encrypted before (moving to) the cloud was work for the customer and the data was hard to manage. We now make it easier to encrypt before the cloud, and the encrypted data integrates with HubStor’s data-aware storage platform to make it easy to isolate it in searches and policies,” HubStor CEO Geoff Bourgeois said.

HubStor launched its eponymous software suite in July. Thus far, Microsoft Azure is the only public cloud it supports. The HubStor enterprise archive suite is an overlay atop the Azure cloud, designed particularly to tier cold or seldom-used data that must be retained for legal or regulatory reasons. HubStor is installed behind a corporate firewall but presents a cloud archive tenant for file storage in Azure.

Encryption is not the company’s only news. HubStor plans a feature for cloud storage chargeback, aimed at law firms and project-oriented enterprises that store unstructured data for long periods. Chargeback equips HubStor customers to track, visualize, and report on storage costs associated with particular projects, clients, or departments.

Bourgeois said HubStor analytics are getting a boost in November from planned taxonomy and user-tagging features. HubStor also is developing a file system driver that virtualizes inactive array-based data and tiers it to Azure.

October 13, 2016  10:58 AM

Avere Systems rolls out its own core NAS filer for hybrid cloud

Sonia Lelii

Avere Systems recently announced its latest hardware, the Cloud-Core NAS (C2N), a hybrid system integrated with object storage that can scale up to five petabytes.

The system comprises FXT 5000 nodes for NAS and CX200 nodes for object storage based on OpenStack Swift software. A full system includes a minimum configuration of three 1U CX200 storage nodes for a total of 120 TB of usable capacity when using triple replication for data protection.

The other minimum configuration is six CX200 storage nodes for 480 TB of usable capacity when using erasure coding for data protection. The erasure coding offers N+4 availability, so four servers or four drives can be lost and the system will keep running. It also offers a geo-dispersal capability for disaster recovery across three sites. The CX200 nodes are loaded with 10 TB disk drives, and capacity can be expanded in 80 TB increments.
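
The published numbers line up with the usual capacity math for the two schemes. Assuming roughly 120 TB of raw disk per 1U CX200 node (an assumption; Avere quotes only usable figures), triple replication keeps one third of raw capacity usable, while an 8-data/4-parity erasure code, one layout consistent with N+4, keeps two thirds.

```python
# Back-of-the-envelope capacity math for triple replication vs. erasure coding.
# The 120 TB-per-node raw figure and the 8+4 layout are assumptions chosen to
# match Avere's published usable numbers, not vendor specifications.
RAW_TB_PER_NODE = 120

def usable_replication(nodes, copies=3):
    return nodes * RAW_TB_PER_NODE / copies

def usable_erasure(nodes, data=8, parity=4):
    # An 8+4 stripe tolerates four lost fragments, consistent with "N+4".
    return nodes * RAW_TB_PER_NODE * data / (data + parity)

print(usable_replication(3))   # 120.0 TB usable from three nodes
print(usable_erasure(6))       # 480.0 TB usable from six nodes
```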

“It’s a scalable system that can go from three nodes all the way to 72 1U servers that gets over 5 PB of capacity,” said Jeff Tabor, senior director of product management and marketing at Avere Systems. “It provides NAS simplicity but also provides the efficiency of the cloud and it’s all integrated.

“The key part of the operating system is the data protection. One is erasure coding and the other is triple replication. Triple replication can be inefficient so the erasure coding gives both resiliency and efficiency,” Tabor said.

The FXT compute performance tier for NAS, which supports NFS and SMB, is an all-flash configuration that scales to 480 TB using solid state drives. The system supports snapshots, data migration, mirroring, compression and encryption.

Tabor said Avere Systems is targeting customers dealing with large amounts of file data. The system integrates private and public object storage with an organization’s existing NAS infrastructure, allowing customers to create a hybrid cloud and manage an entire heterogeneous infrastructure as a single, logical pool of storage. The C2N is integrated with Avere Systems’ global namespace.

“Historically, you would store that on NAS but NAS has some challenges,” he said. “The trend is to move away from NAS and move to the cloud. But it’s difficult moving that data to the cloud. What C2N provides is a simple way to get into the cloud. This is a complete edge-to-core configuration supported by Avere. C2N has a built in operating system, so it’s our cloud.”

October 11, 2016  12:30 AM

SNIA panel: Docker data center deployments inch to primary storage

Garry Kranz

Enterprise storage containers aren’t about to supplant virtual machines, but the trend line for Docker data center adoption is going up.  Hurdles of persistent storage and enterprise data protection are being removed, allowing organizations to move from “monolithic applications” to containerized microservices, according to a recent industry webinar sponsored by the Storage Networking Industry Association (SNIA).

The Oct. 6 event was the first of two events planned as part of SNIA’s Cloud Storage Initiative. SNIA-CSI chairman Alex McDonald, part of NetApp’s Office of the CTO, moderated the session with panelists Keith Hudgins of Docker and Chad Thibodeau of Veritas Technologies.

Typical Docker data center use cases have mostly centered on application development and testing, but the panel said container storage is undergoing big changes.

“Micro-service architecture is designed to enable applications to be deployed extremely fast and make them much more portable to run on a variety of platforms. Containers really are optimized for speed of deployment, portability and efficiency,” Thibodeau, a principal product manager at backup vendor Veritas, told an audience of about 140 attendees.

He said companies often get started by launching containers inside virtual machines, “but ideally, containers are designed to (give you) the most advantage by running on bare metal.”

Containers are similar to virtual machines, yet also distinctly different. Whereas virtualization abstracts the underlying hardware, Docker software virtualizes the operating system, eliminating the need to supply each virtual instance with a hypervisor and guest operating system. Multiple workloads share compute, operating system and storage resources, yet run in isolation on the same physical hardware.

According to Docker, data center downloads of its Linux-based software have topped five billion since its launch in 2013. It claims more than 650,000 registered users. Microsoft threw its support behind Docker containers as part of Windows Server 2016.

Sensing Docker’s growing importance, most major storage vendors now have tools to use their arrays as a persistent storage back end for Docker. Data center demand is ticking upward, albeit gradually. Financial services firms, for example, spawn persistent storage containers to authenticate end users.
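
Persistent container storage generally hinges on volumes that outlive any single container. A minimal sketch using the Docker SDK for Python is below; the image, container and volume names are placeholders, and an array vendor’s plugin would be supplied through the volume driver options.

```python
import docker

# Illustrative persistent storage for a container: data written to the named
# volume survives container removal. Image, names and mount path are
# placeholders; a storage array's plugin would be passed via driver=/driver_opts=.
client = docker.from_env()

client.volumes.create(name="appdata")

container = client.containers.run(
    "postgres:9.6",
    detach=True,
    name="billing-db",
    environment={"POSTGRES_PASSWORD": "example"},
    volumes={"appdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
)
print(container.name, container.status)
```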

Hudgins listed payroll-processing giant ADP and government IT contractor Booz Allen Hamilton among major firms using Docker in some fashion. Hudgins, the director of tech alliances at Docker, said ADP approached Docker to build nimble infrastructure for application microservices, using private and public cloud storage.

“ADP wanted a fast, easy way to change their payroll processing as needed. They deployed Docker Data Center internally to run all their data processing in a micro-services-based way… using Docker Data Center on both an internal OpenStack private cloud and public components running in Amazon for people to check their pay stubs. (ADP’s) entire system is now running on Docker Data Center,” Hudgins said.

Docker is a common service platform that Booz Allen uses to host customized applications for its government clients at the federal General Services Administration. Hudgins said Booz Allen wanted to migrate from “monolithic applications toward a smaller, componentized structure,” running a commercial version of Docker hosted in Amazon Web Services.

“They greatly reduced their time to market for (customer) applications… and also reduced the surface attack area and improved security,” Hudgins said.

SNIA said a Dec. 7 webinar will highlight best practices on Docker data management.

October 5, 2016  12:18 PM

Investors pony up $51M to lift Druva software’s cloud backup

Garry Kranz

Druva has scored $51 million in new private financing to diversify its cloud backup platform and accelerate global marketing and sales.

The vendor said part of the proceeds will be used to introduce new features in Druva software, including machine learning in 2017 for analyzing multiple data sets in the public cloud.

Prying capital from investors is challenging in the current climate, making Druva’s $51 million a considerable haul. The new money brings its total private capital raised to $118 million since Druva launched in 2008.

CEO Jaspreet Singh attributed the new investment to Druva’s continuing focus on the cloud to eliminate separate hardware and software for different use cases.

“The timing to raise money isn’t great right now, but we have a strong story to tell. We have a strong tier of public cloud behind us for collaboration, disaster recovery and business intelligence. Part of DR is backup and recovery and part of it is information management. We do both,” Singh said.

“People are looking at cloud storage as a means to retain data longer. Druva software is a born-in-the-cloud, cloud-native technology that doesn’t require you to buy any dedicated hardware or software, which is pretty attractive if you are a growing enterprise.”

Singh said machine learning will be added to Druva software in January to allow customers to extract greater value from idle cloud backups.

Druva sells two branded cloud backup products. Druva’s software for backing up enterprise endpoints is called inSync, which converges backup and data governance across physical and public cloud storage.

Druva Phoenix is a software agent that backs up and restores data sets in the cloud for distributed physical and virtual servers. Phoenix applies global deduplication at the source level and points archived server backups at a cloud target.
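
Source-side global deduplication generally means fingerprinting data before it leaves the server and shipping only chunks the backup target has never seen. The sketch below is a generic illustration of that idea, not Druva’s implementation; the fixed chunk size and in-memory index are simplifying assumptions.

```python
import hashlib

# Generic source-side deduplication sketch; not Druva's implementation.
CHUNK_SIZE = 4 * 1024 * 1024       # fixed 4 MB chunks (a simplifying assumption)
global_index = set()               # stands in for the cloud-side fingerprint index
chunk_store = {}                   # stands in for the cloud object store

def backup(stream: bytes) -> list:
    """Upload only chunks whose fingerprints are not already in the index."""
    manifest = []
    for i in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[i:i + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        manifest.append(fingerprint)
        if fingerprint not in global_index:    # only new data crosses the wire
            chunk_store[fingerprint] = chunk
            global_index.add(fingerprint)
    return manifest                            # enough to rebuild the backup

def restore(manifest: list) -> bytes:
    return b"".join(chunk_store[fp] for fp in manifest)
```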

Druva in May added disaster recovery as a service (DRaaS) to Phoenix to continuously back up VMware image data to Amazon Web Services.

Druva’s software-based analytics works off a golden backup copy in the cloud. Users can search the single-instance storage and run multiple workflows off the same data.

Existing Druva investor Sequoia India headed a consortium that included new investors Singapore Economic Development Board, Blue Cloud Ventures and Hercules Capital. Other existing investors to participate included Nexus Venture Partners, NTT Finance and Tenaya Capital.

September 30, 2016  6:20 PM

Dell EMC to ‘stay the course’ with flash storage portfolio

Carol Sliwa

Dell EMC plans to “stay the course” with its flash storage portfolio despite overlapping products at the midrange and low end, an executive at the newly combined company confirmed.

Daniel Cobb, a fellow and vice president of media strategy at Dell EMC, said the company would continue to invest in all of its flash products. That includes support for emerging technologies such as nonvolatile memory express (NVMe), NVMe over Fabrics and 3D TLC NAND flash.

“You may not always see the newest technologies first in the lowest end platforms,” Cobb said. “That’s usually not the way it happens. But as things continue to go mainstream and suppliers get their volumes up and their costs down and under control, we’ll see the appropriate technologies end up across the whole portfolio.”

Cobb referred to Dell EMC’s DSSD rack-scale appliance as “the flagship in terms of performance and throughput” for real-time workloads. EMC’s original all-flash array platform, XtremIO, and all-flash VMAX target general-purpose enterprise workloads.

The greatest potential for all-flash overlap is in the midrange. The Dell EMC flash portfolio includes EMC’s new Unity-F and older VNX-F arrays. Dell holdovers include the SC Series, formerly known as Compellent, and PS Series, formerly EqualLogic.

“Our plans there are stay the course, keep those customers happy, keep them running on the media that they’re comfortable running with,” Cobb said. “Both platforms have already made the move to flash.”

Cobb said he expects VNX customers to “be delighted” with the new Unity product and ultimately move to it. But he said they can stay with VNX as long as they want, much as Compellent and EqualLogic customers will be able to do.

“As [former EMC CEO] Joe Tucci liked to say, ‘I’d rather have multiple products in a portfolio and risk managing the overlaps than leave some gaps.’ We’re pretty comfortable doing that now. We’ve been doing it for a while,” Cobb said.

He said EMC is able to continue to invest in so many flash product lines because it is accustomed to sharing investments such as flash management, deduplication and compression across multiple product lines.

Yet another all-flash product is on EMC’s roadmap. Project Nitro, an all-flash version of its Isilon scale-out NAS array, is due to be equipped with more cost-effective 3D TLC NAND flash to target file and object workloads. Cobb provided no updated timetable for Project Nitro.

EMC already held a commanding 40% market share for the second quarter of 2016 in the all-flash array (AFA) market, according to a report released by International Data Corp. (IDC) this month. NetApp (16%), Hewlett Packard Enterprise (13.8%), Pure Storage (11.5%) and IBM (8.7%) trailed by considerable margins.

Dell’s SC and PS Series arrays do not qualify for IDC’s AFA stats, because they’re only all-flash configurations of hybrid flash arrays, according to Eric Burgener, a storage research director at IDC. EMC products factoring into IDC’s second quarter statistics were XtremIO, VMAX All Flash, Unity-F and DSSD D5, Burgener said.

September 30, 2016  4:01 PM

Maxta makes MxSP HCI software version perpetually free

Garry Kranz

Software-only hyper-converged startup Maxta today formally unveiled a free perpetual license option for its MxSP software, following a registration drive launched during VMworld 2016.

MxSP is hyper-converged infrastructure (HCI) software designed to run on commonly used server hardware.  Maxta said customers can download a free production license of its HCI software to implement a maximum three-node cluster with up to 24 TB of raw storage.

The “freemium” model gives enterprises a perpetual MxSP license that can be upgraded to a paid support contract with more storage. Customers choosing the no-cost download can get advice and self-help resources via an online community forum sponsored by Maxta.

Most hyper-converged vendors package their HCI software on branded appliances that consolidate computing resources, networking, storage and virtualization tools within a single piece of hardware. Maxta, on the other hand, licenses MxSP as a virtual storage appliance that pools storage on x86 servers. Maxta server resellers also prepackage MxSP on commodity storage servers as part of its MaxDeploy reference architecture.

MxSP is licensed on a per-server basis, whether servers are dual-socket or quad-socket. Maxta does not charge customers by processor count, server class or storage capacity. As is typical of hyper-converged vendors, Maxta deployments start at a minimum of three nodes. Maxta requires each server node to have at least one solid-state drive with 100 GB of available capacity, plus two 300 GB hard disk drives.
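
A quick way to see those minimums in one place is a pre-flight check like the sketch below; the node inventory format is an illustrative assumption, not a Maxta tool.

```python
# Hypothetical pre-flight check against the stated MxSP minimums; the inventory
# format is an illustrative assumption, not a Maxta tool.
MIN_NODES, MIN_SSD_FREE_GB, MIN_HDDS, MIN_HDD_GB = 3, 100, 2, 300

def node_ok(node):
    ssd_ok = any(d["type"] == "ssd" and d["free_gb"] >= MIN_SSD_FREE_GB
                 for d in node["drives"])
    big_hdds = [d for d in node["drives"]
                if d["type"] == "hdd" and d["size_gb"] >= MIN_HDD_GB]
    return ssd_ok and len(big_hdds) >= MIN_HDDS

def cluster_ok(nodes):
    return len(nodes) >= MIN_NODES and all(node_ok(n) for n in nodes)

nodes = [{"drives": [{"type": "ssd", "free_gb": 200},
                     {"type": "hdd", "size_gb": 300},
                     {"type": "hdd", "size_gb": 300}]}] * 3
print(cluster_ok(nodes))   # True for this hypothetical three-node inventory
```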

Maxta vice president of marketing Mitch Seigle said making the HCI software available as a free offering removes some barriers that discourage enterprises from hyper-converging resources.

“We are enabling them to stand up an HCI cluster using hardware they typically already have available in house, at no cost. They can evaluate and test it in their environment on their schedule, without the constraint of trial ‘time bombs’ or limited functionality,” Seigle said.

As an example, Seigle said an enterprise could stand up an MxSP cluster with the free version and subsequently take it directly into production by upgrading to a paid support contract and unlimited storage. Each node added to the existing cluster requires a paid license for MxSP HCI software, which includes 12 months of Maxta support.

Giving customers a free perpetual HCI software license is an attempt by Maxta to boost brand recognition, particularly as a way to emphasize how its HCI software differs from appliance-based products. Maxta did not disclose how many paying customers it has, but Seigle said “hundreds of users” have registered to download the free version since its launch Aug. 29.

