Storage Soup


November 26, 2017  5:11 PM

Like its CEO, HPE storage in transition phase

Dave Raffo
"Meg Whitman"

Meg Whitman says it’s time for “a new generation” to take over Hewlett Packard Enterprise. She was talking about the CEO change when she made that comment during the company’s earnings call last week, but she could have been talking about the HPE storage portfolio as well.

Independent of the CEO switch from Whitman to Antonio Neri that will take place in February, the HPE storage technology focus is shifting from 3PAR arrays to Nimble Storage. The 3PAR arrays still generate most of HPE’s storage revenue but Nimble is growing much faster in revenue and influence inside the company.

HPE acquired Nimble for $1.2 billion last March. Whitman said the Nimble deal “completed our storage offering from entry-level to the high-end and accelerated our transition to all flash.”

The transition is ongoing. HPE storage revenue of $871 million last quarter grew only 5% over last year, a disappointing number for the vendor considering the 2016 results did not include Nimble. But Whitman said Nimble revenue increased 80%, while 3PAR sales were “soft.”

She blamed 3PAR problems on “a tough competitive environment in the mid-range and some go-to-market challenges in America.”

HPE hasn’t given up on 3PAR, and is working on changes to give the platform new life and bolster its sales team. But those changes rely on Nimble.

They include porting Nimble’s InfoSight predictive storage analytics across all 3PAR arrays. “This is going to be a game-changer for our storage business,” Whitman said. “Leveraging advanced machine learning, HPE InfoSight is the next step in our vision for an autonomous data center.”

HPE is also combining the 3PAR and Nimble sales teams under Keegan Riley, who led Nimble sales before the acquisition. Riley, who worked in Hewlett Packard storage from 2008-2012, is the VP and GM of HPE’s North America Storage Business Unit. Whitman said the HPE storage unit is also hiring more field specialists to support sales.

All-flash revenue increased 16% from last year, which pales compared to HPE’s 30% year-over-year all-flash growth in the previous quarter. Both the Nimble and 3PAR storage lines include all-flash and hybrid arrays.

HPE gave no results for its other 2017 storage-related acquisition, SimpliVity. Whitman several times mentioned SimpliVity among HPE’s significant acquisitions but did not break out any hyper-converged results.

When asked if the HPE storage platform needed to grow by picking up new products, Whitman said HPE would be “very disciplined” about acquisitions.

“If we found something that we thought was important in the storage business … and it was priced right, we might think about doing it,” she said. “And I promise you that Antonio and [CFO] Tim [Stonesifer] will continue that disciplined approach to acquisitions. And I will be on the board to make sure they do.”

November 16, 2017  12:54 PM

NetApp revenue rides flash gravy train

Dave Raffo
NetApp

NetApp is showing that a legacy storage array vendor can still increase revenue impressively in these days of scant storage growth.

NetApp on Wednesday night reported its fourth straight quarter of revenue growth, capping a year of recovery that followed a slump during the vendor’s transition period.

NetApp revenue of $1.42 billion last quarter jumped 6% year-over-year. NetApp product revenue of $807 million increased 14% — impressive growth in today’s storage market. Wall Street analysts expected NetApp revenue of $1.38 billion, roughly the midpoint of the vendor’s own forecast from three months ago.

We don’t know yet how much the overall storage market grew in the quarter, but IDC put storage market growth at a mere 2.9% in the previous quarter.

NetApp’s $175 million in profit also beat expectations and increased from $109 million last year.

NetApp predicted the growth will continue this quarter with a forecast of between $1.425 billion and $1.575 billion compared to $1.404 billion in revenue in the same quarter last year.

“We are undoubtedly out-executing our competition on all fronts,” NetApp CEO George Kurian said on the company’s earnings call. “Our second quarter results are a strong indicator that the transformation of NetApp remains on track.”

Kurian replaced Tom Georgens as CEO in 2015 during a NetApp slump caused by a poor flash strategy and slow customer upgrades from its flagship OnTap 7-Mode operating system to Clustered OnTap. NetApp was late to the all-flash array game, and its OnTap upgrade process required downtime to complete.

NetApp has put those problems in the past.

Kurian said NetApp’s all-flash revenue last quarter grew close to 60% over last year. Most of that came from its All-Flash FAS lineup, with its E-Series performance platform and cloud-friendly SolidFire all-flash arrays contributing. Kurian said NetApp is on track for around $1.7 billion in all-flash revenue for its fiscal year, which has six months left. NetApp is second in the all-flash array market behind Dell EMC.

Kurian said NetApp averages two all-flash displacements per day, taking out competitors such as Dell EMC, Hewlett Packard Enterprise and IBM. He said there is still a long way to go with flash growth, as only about 10% of NetApp customers are using all-flash.

“We are still in the early innings of flash adoption in our customer base,” Kurian said.

After a long slog, the bulk of NetApp’s customer base has moved to Clustered OnTap.

“The transition from 7-mode to Clustered OnTap is behind us,” Kurian said.

“As I noted last quarter, we have already transitioned our business away from the declining segments to the data-driven high-growth segments of all-flash arrays, converged and hyper-converged infrastructure and hybrid cloud.”

Actually, the NetApp revenue stream from hyper-convergence has barely started to trickle in. NetApp HCI began shipping in October and hardly contributed revenue last quarter. The vendor came late to hyper-convergence just as it arrived late to all-flash, but Kurian predicted the HCI product will attract new customers who will either ditch direct attached storage or switch from competitors’ hyper-converged products. NetApp HCI is based on SolidFire all-flash technology.

Kurian identified NetApp’s new NFS service native to the Microsoft Azure cloud as another reason for optimism. The Azure service is in private preview now, and will likely become generally available in 2018 to provide another NetApp revenue stream.

“The way I look at it, we are riding several long-term secular trends: data growth and the criticality of data in a digital business; major technological transitions like solid-state storage, converged infrastructure and the cloud,” Kurian said.


November 15, 2017  9:27 AM

Amazon not to blame for S3 cloud storage lapses

Sonia Lelii
Cloud Security, Cloud storage

The Amazon Simple Storage Service (S3) has been giving big businesses – and their customers – big trouble.

It was reported earlier this summer that high-profile companies left data in their S3 buckets exposed because the access control lists (ACLs) were configured to allow access from any user on the internet. The companies caught up in this misconfiguration problem included telco giant Verizon, U.S. government contractor Booz Allen Hamilton, World Wrestling Entertainment and Dow Jones.

And the cloud storage security problem has not gone away.

It was reported in October that corporate consulting firm Accenture left at least four S3 cloud buckets in a similar unsecured condition, according to a blog post from security firm UpGuard. Accenture works with 94 of the Fortune Global 100 and more than three-quarters of the Fortune Global 500.

But experts say Amazon is not to blame for the cloud storage misconfiguration issue. Human error is to blame: administrators who create the S3 buckets fail to return them to a restricted-access configuration, essentially leaving the barn door open for unwanted entry.

“AWS is aware of the security issue, but are not likely to mitigate it since it is caused by user misconfiguration,” according to Detectify, a company that simulates automated hacker attacks.

AWS states on its blog that “by default, all Amazon S3 resources – buckets, objects and related sub-resources…are private. Only the resource owner, an AWS account that created it, can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy.”

Amazon claims it has enhanced S3 storage security. In August, the company added “managed rules to secure S3 buckets.” AWS Config, which offers a timeline of configuration changes, gained two new rules. The s3-bucket-public-write-prohibited rule automatically identifies buckets that allow global write access: if a bucket policy or bucket ACL allows public write access, the bucket is considered noncompliant. The second rule, s3-bucket-public-read-prohibited, does the same for buckets that allow global read access.

“This will flag content that is publicly available, including web sites and documentation,” according to a blog post written by Jeff Barr, chief evangelist for AWS. “This rule also checks all buckets in the account.”
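For administrators who want a quick audit outside of AWS Config, the spirit of those rules can be approximated with a few lines of boto3. The sketch below is illustrative only: it checks bucket ACLs for grants to the global AllUsers group, does not evaluate bucket policies (which the managed rules also cover), and assumes AWS credentials are already configured.

# Minimal sketch: flag S3 buckets whose ACLs grant access to any internet user.
# Assumes boto3 is installed and AWS credentials are configured; checks ACLs only,
# not bucket policies.
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_grants = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") == ALL_USERS
    ]
    if public_grants:
        print(f"Noncompliant: {name} grants {public_grants} to all users")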

George Crump, president of IT analyst firm Storage Switzerland, said the buckets are secure when created. Trouble occurs only when IT does not follow through on locking down the buckets.

“It’s not (Amazon’s) fault,” Crump said. “They just provide the infrastructure. They provide the material for you to create a solution. It’s not their fault. It’s the job of IT to lock it down. It would be different if Amazon had not put the tools in place, but that clearly is not the case.”

Many of these unsecured S3 buckets are created for application development, when a team pulls compute and storage resources from AWS for the duration of a project, and they are left open after the project ends.

“Typically, these buckets are secured when they are created so that only authenticated users can access them,” Crump wrote in a blog post. “But sometimes, especially in the initial development of an application, these buckets are left unsecured to make it easier for multiple users to test them.

“The problem is when the application moves into production, no one remembers to secure the bucket, leaving it open for anyone to gain access,” he said.
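That follow-through can be as simple as resetting a bucket’s ACL once development ends. A minimal, hypothetical cleanup step with boto3 is shown below; the bucket name is a placeholder, and production environments would typically enforce access through bucket policies and IAM rather than canned ACLs.

# Hypothetical cleanup: return a development bucket to the private canned ACL
# when the application moves to production. The bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_acl(Bucket="example-dev-bucket", ACL="private")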


November 14, 2017  4:45 PM

Dell EMC: Big data lakes will like our Elastic Data Platform

Garry Kranz

Dell EMC big data storage has a sharper focus.

The vendor recently launched Elastic Data Platform (EDP), a series of reference architectures geared for Hadoop and related big data environments. EDP is available as direct-attached storage on PowerEdge servers, clustered Isilon scale-out NAS, or a managed service via the Dell EMC Virtustream Storage Cloud.

Financial services and health care are among the vertical industries in beta, said Matt Maccaux, who heads the global big data practice of Dell EMC. Big data deployments have matured to the point that DevOps organizations have captured the “low-hanging fruit” of reduced data warehousing and operational gains.

“We are positioning EDP primarily at enterprises that have some sort of data lake in place. They have taken all the low-hanging fruit and started to hit a wall in terms of returning value to the business. They are storing massive amounts of data and want to take advantage of the three Vs: volume, variety and velocity of data,” Maccaux said.

A big data analytics team typically has to request resources and wait for provisioning to be approved.  Dell EMC’s big data system allows teams to provision resources on the fly, Maccaux said.

The Dell EMC big data hardware bundles software from BlueData Software and BlueTalon. BlueData manages the orchestration of Hadoop compute containers. The BlueTalon centralized policy engine enforces attribute-based security on individual nodes within the cluster.

PowerEdge servers with decoupled local disk storage are intended for deployments of several hundred terabytes. For larger scale-out deployments, Dell EMC recommends Isilon NAS with the Hadoop Distributed File System as the underlying storage for BlueData compute instances. EDP with Isilon includes the ability to create read-only snapshots for all users, erasure coding and automated tiering for hot, warm and cold data.

EDP in Virtustream uses underlying Dell EMC block storage. Virtustream is best known for hosting legacy applications that were not written for the cloud.
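From the application side, the decoupled Isilon option simply means the Hadoop compute layer points at a remote HDFS endpoint instead of local disks. The sketch below is a generic illustration of that pattern, not an EDP- or BlueData-specific configuration; the host name and path are placeholders, and it assumes a Hadoop client environment with pyarrow installed.

# Generic illustration: compute nodes reading from a shared HDFS-backed
# storage tier. Host, port and path are placeholders, not documented EDP values.
import pyarrow.fs as pafs

hdfs = pafs.HadoopFileSystem(host="hdfs-storage.example.internal", port=8020)

with hdfs.open_input_stream("/data-lake/events/part-00000.parquet") as stream:
    head = stream.read(1024)  # read the first 1 KB as a connectivity check
    print(len(head))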


November 14, 2017  10:01 AM

Commvault GO: ‘Sully,’ Swan emphasize good data’s value

Dave Raffo
Data Management

We hear a lot of talk these days about machine learning and artificial intelligence. Those are hot and valuable technologies, but two speakers at Commvault GO last week highlighted the importance of human learning and genuine intelligence in using data.

Chesley “Sully” Sullenberger and polar explorer Robert Swan gave Commvault GO keynotes explaining how the proper use of good data can prove invaluable – and even save lives – without requiring analytics or any computers at all.

Sullenberger was well known to the Commvault GO audience for landing a US Airways airplane safely on the Hudson River in New York with 150 passengers aboard. He was widely hailed as a hero after that Jan. 15, 2009 event, and played by Tom Hanks in the movie Sully. Swan, a U.K. native, isn’t a U.S. household name but he has travelled to both the North and South Poles, and remains an active adventurer at age 60.

Their talks fit in the show’s theme of data management, even if they used their gut rather than fancy data analytics to interpret information.

“I used data not in any specific way, but used data to frame my decision,” Sullenberger said of his famous flight. “So what couldn’t be a computational decision was more of an intuitive one. But it really wasn’t, it was totally a cerebral exercise.”

Yet it was a decision based on accurate information. It came from what Sullenberger knew about his airplane and from what more than three decades in the Air Force and as a commercial pilot taught him about flight and navigation.

Sullenberger considered his options within seconds when the plane lost its engines after striking a flock of geese. He took control of the plane from copilot Jeff Skiles, who had less experience with that type of aircraft.

Sullenberger then determined the plane with damaged engines could not make it to the nearest airports, LaGuardia in New York City or Teterboro in northern New Jersey. After deciding to land in the Hudson River, he knew the best spot would be between two water ferries. That way, their crews could rescue the passengers quickly enough in freezing water. Sullenberger calculated the best angle and speed to land the plane to keep the impact from destroying it.

He said his decisions were “based on having flown jets for years, and having managed the height and speed and total energy of jets very precisely for thousands of flights.”

Sullenberger said there were no flight simulations for water landing at the airline.

“The only training we ever got at water landing was a theoretical classroom discussion,” he told the Commvault GO attendees. “But because I knew not just what and the how, but the why, even in that situation I could set clear priorities. I learned that bad outcomes are almost never the result of a single fault, a single failure or a single error. Instead, they are the end result of a causal chain of events. I made sure when I saw these causal links in a chain begin to line up, I would intervene to break them.”

Pole walking, without electronic gadgets

Explorer Swan’s claim to fame is he led teams that ventured on foot to the South Pole and North Pole.

In other words, “I’m the first person in history stupid enough to walk to both poles,” he said during Commvault GO.

Actually, stupid didn’t figure in either mission.

He relied on data culled from those who went before him, and from scientific agencies such as NASA before his dangerous treks. His team walked 900 miles – including the final 70 days without radio communication – before reaching the South Pole on Jan. 11, 1986. He nearly drowned because of unseasonable melting of Arctic ice before arriving at the North Pole on May 14, 1989.

But, like “Sully,” Swan had to work and survive without the benefit of real-time computer data analytics. Swan had no electronics to call on, and not even compasses work at the South Pole.

“Whatever limited data we had, we used to stay alive,” he said. Swan said his critical data came from using “the sun, a sextant and a watch. We knew if we make mistakes, we’re going to die.”

Swan and his son Barney are due to set off Wednesday on a 600-mile expedition to the South Pole using only renewable energy. The trip is expected to take eight weeks. They will carry solar panels to power NASA-designed ice melters that will give them water to drink and cook with.

“One day, NASA will use these ice melters on Mars,” Swan said.

Swan’s mission now is to help clean up Earth, which scientific data tells us is in danger.

“My target is to clean up 326 million tons of carbon before end of 2025,” he said. “Our survival on earth and the data people like NASA provide to protect us, we should take that seriously. Climate change is happening. How much we’re causing it, we don’t know yet. But as a survivor, I’m going to try and do something about it. Just in case.”

Sullenberger agreed that today more than ever, we need to heed science and facts.

“We have an obligation to be scientifically literate,” Sullenberger said. “In other words, you can’t use data if you don’t understand it. We have an obligation to be good citizens, which means that if we must make important decisions, we need to be capable of independent critical thought. And when we make important decisions, we must make them based on facts, not based on fears or falsehoods. And certainly not on big lies, even if they are told loudly and often.”


November 13, 2017  7:52 AM

Nutanix backup choices expand

Dave Raffo
Nutanix, Veritas, Veritas NetBackup

Options for Nutanix backup are growing for customers using the hyper-converged vendor’s AHV hypervisor.

At Nutanix’s European .NEXT user conference last week, backup software market leader Veritas pledged AHV support in NetBackup 8.1 and Comtrade Software updated its HYCU software built specifically for Nutanix backup.

Nutanix lists 10 data protection vendors that support AHV. Veritas, Veeam Software, Commvault and Comtrade have the greatest integration with AHV, according to Nutanix. Cohesity, Rubrik, Arcserve, Unitrends, Cloudian and Sureline Systems also support AHV.

Most Nutanix customers still use VMware hypervisors, but close to one-quarter of the customers have adopted the vendor’s KVM-based AHV hypervisor.

Veritas claims NetBackup support will enable faster backup and recovery of Nutanix virtual workloads through its protection technologies. Veritas supports AHV with its NetBackup Cloud Catalyst appliances that enable deduplication of data in the public cloud. NetBackup will also enhance the Nutanix backup process with its parallel streaming technology that backs up data across multiple hyper-converged nodes simultaneously and its CloudPoint cloud-based snapshots.

Comtrade Software said HYCU 2.0 for Nutanix backup will add support for VMware’s ESX hypervisor, Microsoft Exchange and DR across remote sites. Comtrade also said it plans support for Nutanix Acropolis File Services (AFS) in the first quarter of 2018. It will also become one of the first partners in the Nutanix Calm marketplace, allowing customers to download and install HYCU by clicking on a blueprint from the marketplace.

Comtrade launched HYCU in June to protect AHV virtual machines, and claims more than 20 paying customers.

HYCU 2.0 is planned for later in 2017. It can broaden the HYCU customer base through support of ESX, which is used by about 65% of Nutanix customers.

“ESX support is often asked for by customers,” said Subbiah Sundaram, Comtrade’s vice president of products. “Customers who want to migrate from ESX to AHV want staging areas. We’re adding that and making it easier for customers to experiment.”

Sundaram said HYCU will use Nutanix snapshots instead of VMware’s VADP to protect ESX.  He said that avoids VM stun, which is when I/O latency makes the VM unresponsive.

HYCU adds support of Exchange to its previous support of Microsoft SQL Server and Active Directory. It will enable mailbox-level recovery and allow customers to clone entire instances for test and development. Comtrade is also adding the ability to restore databases from one SQL Server to another.

For DR, HYCU will enable Nutanix customers to set up standard Nutanix Protection Domains for VMs at remote sites, set up DR replicas at a DR site and restore directly to the remote site from the replicas.

Comtrade will begin trials for AFS backup with the expectation of adding it to HYCU 3.0 in early 2018. Sundaram said HYCU will also add parallel stream backups, spreading the backup load across up to eight Nutanix nodes.


November 10, 2017  10:27 AM

Quantum CEO Gacek’s a goner after poor quarter

Dave Raffo
Quantum

Quantum CEO Jon Gacek is out following poor sales results for the data protection and scale-out storage vendor last quarter.

The Quantum board named director Adalio Sanchez as interim CEO. Chairman Raghu Rau said he will head a search for a permanent CEO, with the help of an executive headhunter firm.

Quantum Thursday reported revenue of $107.1 million for last quarter, down from $135 million last year and more than $15 million below Wall Street expectations. Quantum lost $7.9 million in the quarter compared to a $4.1 million profit last year.

The results prompted the Quantum CEO change. Gacek joined Quantum through the 2006 acquisition of rival tape vendor ADIC. He had been ADIC’s CFO and assumed that role at Quantum. He was promoted to COO in 2010 and became the Quantum CEO the following year.

Chairman Rau described the quarter as a disappointment “that fell short of all our expectations” and a “very eventful” quarter for Quantum.

The Quantum CEO change was hardly shocking considering moves the company made over the past eight months. After years of up-and-down quarters, Quantum agreed with demands from investor VIEX Capital Advisors to change the board last March, and Rau joined then. IBM veteran Sanchez and Marc Rothman joined the board in May, pushing Gacek off the board. Rau became chairman in August. Quantum added VIEX’s Eric Singer to the board Thursday.

After Rau became chairman, he, Sanchez, Rothman and Alex Pinchev formed a committee to conduct a strategic review of Quantum.

Sanchez said his work on the review gave him a head start as interim Quantum CEO.

“I am hitting the ground running,” said Sanchez, who spent 32 years at IBM and a year at Lenovo.

Rau said Quantum is “intensely focused on taking aggressive actions” to reduce cost and he predicted increased revenue and profitability over the next six months.

Sanchez said Quantum will cut around $35 million in costs over the next year. Quantum also secured $20 million in funding from TCW Direct Lending and PNC Bank to go with $170 million in funding from those lenders a year ago.

Sanchez said the board reviewed Quantum’s strategy, go-to-market model and cost structure. He described StorNext scale-out storage as Quantum’s growth engine and data protection as its profit engine. But while Quantum is looking for LTO-8 to give the tape products a boost, CFO Fuad Ahmad said the vendor must “reorient” its strategy for its DXi disk backup platform. He said Quantum will maintain its partnership with Veeam Software to integrate backup software on DXi and tape products, but will scale back development on the DXi data deduplication appliances.

“We are a small player in that market with less than three percent market share,” Ahmad said. “While it’s a fairly profitable business for us, it is not core to what we want to do long term.”

Sanchez said Quantum will build a software-defined storage business around StorNext and its Rook.io open source project to build cloud-native file, block and object storage.

“We will reposition our company over time as a modern software-defined provider as new products roll out,” Sanchez said.

Sanchez described his priorities over the next 90 days as “re-ignite the sales engine,” reduce costs and “execution, execution, execution.”

Product revenue last quarter slipped to $63.6 million from $88.6 million last year. Scale-out storage revenue of $33.8 million was down from $46.6 million. Disk backup fell from $18.7 million to $11.7 million, and tape automation slipped from $59.7 million a year ago to $52.2 million.

Quantum executives blamed the poor results partly on industry conditions and the failure to close large deals before the end of the quarter. They expect a bit of improvement this quarter but will still fall below last year’s results. For this quarter, Quantum forecast revenue of $120 million to $125 million compared to $133 million last year. Its six-month guidance is for revenue of $250 million to $260 million.


November 9, 2017  5:26 PM

SMB ransomware report: Attacks frequent, backups key piece

Paul Crocetti
Ransomware

Ransomware attacks on SMBs have increased, according to a recent survey, but backup and disaster recovery platforms can calm data protection fears.

An estimated 5% of SMBs worldwide fell victim to a ransomware attack from the second quarter of 2016 to the second quarter of 2017, according to the “State of the Channel Ransomware Report” released by backup and recovery vendor Datto. About 1,700 managed service providers (MSPs) serving more than 100,000 SMBs provided data for the ransomware report.

Ninety-seven percent of the MSPs report ransomware is becoming more frequent and 99% predict the frequency of attacks will continue to increase over the next two years.

Anxiety is rising. Among MSPs, 90% say they are “highly concerned” about ransomware, up from 88% in 2016, while 38% of SMBs say the same, up from 34% in 2016.

“There’s more of an awareness of ransomware and it being an epidemic,” Datto CTO Robert Gibbons said. However, SMBs remain far less aware of the threat than the MSPs that serve them, he said.

While CryptoLocker remains one of the top ransomware strains, the Bad Rabbit virus caused problems globally in the last month.

SMBs need to understand that the downtime is often the worst element of an attack on a business. Seventy-five percent of MSPs report their clients experienced “business-threatening downtime” after an attack.

On a more positive note, reporting is increasing. SMB victims reported about one in three ransomware attacks to authorities, up from one in four incidents reported in 2016.

And fewer SMBs are paying the ransom, according to the report. In 2017, 35% of MSPs report SMBs paid the ransom, down from 41% in 2016. Of those that paid the ransom, 15% never recovered their data, according to the ransomware report.

“The word is getting out that if you pay the ransom, sometimes you get your data, sometimes you don’t,” Gibbons said.

A ‘multilayered portfolio’ of protection includes backup

Ransomware is getting smarter. About 30% of MSPs report a virus remained on an SMB’s system after the initial attack and hit again later. And one in three MSPs report ransomware encrypted an SMB’s backup.

So what are SMBs to do?

First of all, backup systems vary in complexity and strength. Copying files to a USB drive is one method, but not a great one. Having a comprehensive backup and recovery platform, following a “3-2-1” system of three copies of data, on two different media, with one copy off-line, is much more secure.

Backup and disaster recovery is the most effective protection, according to MSPs in the ransomware report, followed by employee cybersecurity training, anti-virus software, email/spam filters, patching applications and ad/pop-up blockers.

If backup and recovery is in place, 96% of MSPs report SMBs fully recover from ransomware, according to the report. And 95% of MSPs said they feel more prepared to respond to an SMB infection.

But ransomware protection goes beyond having just one safety element in place. For example, 94% of MSPs report ransomware successfully bypassed anti-virus software.

“As no single solution is guaranteed to prevent ransomware attacks, a multilayered portfolio is highly recommended,” the report said.

MSPs blamed a lack of cybersecurity training as the leading reason for a successful ransomware attack, followed by phishing emails and malicious websites/ads.

“Employees today are largely unprepared to defend themselves against these attacks,” the ransomware report said.

Gibbons said one type of education has a company send out a fake phishing email; anyone who clicks the link gets diverted to ransomware training. Just one employee who clicks on a bad link — in a company of hundreds — can cause a business possibly irreparable harm from a ransomware attack.

“There are more tools available to up your minimum game,” Gibbons said.

SMBs need to stay on top of the issue, because attacks are constantly evolving. For example, in 2017, 26% of MSPs reported ransomware infections in cloud applications. Gibbons said he thinks cracking Salesforce is at the top of attackers’ radar in their continuing quest to wreak havoc among SMBs.


November 8, 2017  6:26 AM

Nutanix Acropolis services expand for cloud, developer needs

Dave Raffo
Hyper-convergence, Storage

Like many storage and data center vendors, hyper-converged vendor Nutanix is taking the next steps to give its platform multi-cloud capabilities.

Nutanix today laid out its plans to add services for developers to its Enterprise Cloud OS software. These include a Nutanix Acropolis Object Storage Service and Acropolis Cloud Compute. The hyper-converged pioneer will also add a Nutanix App Marketplace to its Calm cloud application and orchestration service.

“The Nutanix roadmap is evolving, looking at public cloud services as a deployment model for applications,” said Greg Smith, Nutanix vice president of product and technical marketing. “We want our customers’ data center to operate like a public cloud. This is a continuation of the Nutanix journey to build an enterprise cloud that provides much of the same capabilities that customers expect from public cloud services, but in their own data centers.”

The new Nutanix Acropolis features will not be available until 2018. Smith said the marketplace will start in 2017 with 20 validated pre-defined app blueprints, and add “a significant number” soon after.

Nutanix will provide an Amazon Web Services S3-compatible API to help application development teams use Nutanix storage for on-demand object storage as they would use the public cloud. Smith said the Nutanix Acropolis Object Storage Service can store billions of objects in a single namespace.

“People want to write to S3 through a standard API,” Smith said. “We’ve embraced that interface. Now the Nutanix Cloud Storage OS can store and manage all those large unstructured data files with a single namespace.”
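Because the service is pitched as S3-compatible, the expectation is that applications already written against the AWS SDKs could be pointed at the on-premises object store instead of the public cloud. The sketch below illustrates that idea with boto3; the endpoint URL, credentials and bucket name are placeholders, not documented Nutanix values.

# Illustrative only: using the standard AWS SDK against an S3-compatible
# on-premises endpoint. Endpoint, credentials and bucket are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal",
    aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
    aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
)

s3.put_object(Bucket="dev-artifacts", Key="builds/app-1.0.tar.gz", Body=b"example payload")
print(s3.list_objects_v2(Bucket="dev-artifacts")["KeyCount"])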

Nutanix Acropolis Cloud Compute (AC2) consists of compute-only nodes that can run in a Nutanix cluster. AC2 nodes are for CPU-intensive applications such as in-memory analytics, large-scale web services, and Citrix XenApp. Most hyper-converged nodes include storage and compute. Nutanix does already offer capacity-only storage nodes but has not had compute-only nodes.

Smith said Nutanix will have several AC2 configuration options, and customers will still require a minimum of three storage nodes in a cluster. AC2 is built on Nutanix’s AHV hypervisor and will initially be available only on Nutanix-branded appliances. Smith said Nutanix hopes its OEM hardware partners Dell EMC and Lenovo will eventually make compute-only nodes available.

“This is to provide additional compute resources to support apps and services that require a lot of CPU but not storage with it,” Smith said. “The new compute resources will benefit application developers as well as infrastructure managers.”

The Nutanix App Marketplace will include applications defined via standards-based blueprints that developers can quickly consume in self-service fashion. These published validated blueprints will include developer tools such as Kubernetes, Hadoop, MySQL, Jenkins and Puppet. Nutanix customers can also publish apps on the marketplace to share them internally.


November 7, 2017  9:10 AM

Pivot3 Acuity jukes HCI sales, aims at cloud

Dave Raffo

Hyper-converged vendor Pivot3 said its Acuity appliance is significantly expanding its enterprise footprint, with one-third of its revenue coming from deals of $500,000 or more last quarter.

The private company today said its average sales price increased 25% and overall revenue increased 50% from the previous quarter. Pivot3 reported a record in million-dollar orders in the quarter. Now it seeks to expand deeper into the enterprise by tailoring its HCI software for cloud implementations and broadening its distribution strategy with partners Lenovo and Arrow Electronics.

The Pivot3 Acuity platform launched in April, supporting NVMe solid-state drives for performance and incorporating quality-of-service technology the vendor gained in its 2016 acquisition of NexGen Storage.

Along with boosting performance on Pivot3 Acuity with NVMe, the vendor is concentrating on solving the problems of moving data in and out of the cloud. Pivot3 Acuity’s quality of service is designed to run multiple applications, which will help cloud customers. Pivot3 said deals supporting multiple use cases on its appliances more than doubled last quarter.

But data movement is another issue.

“It’s a long process, but there’s a massive economic gain if we get it right,” Pivot3 CEO Ron Nash said. “The cloud’s not monolithic. There are many clouds with many different characteristics.”

He said mastering the cloud requires a policy management engine, an orchestration engine and analytics engine. Pivot3 has the policy management and has started with orchestration to move data in and out of the cloud. The analytics will determine if the policy decisions are working.

“It’s easy to say, ‘There is the goal line, that’s where we want to get to. Then let’s lay out the steps,’” Nash said. “If clouds let you spin up and spin down quickly and take peak of peaks type demand, that’s a valuable service and something you are willing to pay a lot for.”

Nash said it will take a few years to get all the pieces down, but he considers Pivot3 ahead of the other HCI vendors. In the meantime, Pivot3 is expanding its distribution process.

Pivot3’s branded appliances run on Dell servers, but it also has an OEM deal with Lenovo and a channel partnership with Cisco. Pivot3 chief marketing officer Bruce Milne said Pivot3 considers Lenovo its key partner, and last month signed a distribution deal with Arrow Electronics to sell Pivot3 Acuity software on Lenovo. Milne said around 15% of Pivot3 revenue comes from software-only deals.

Pivot3 has a go-to-market strategy similar to Nutanix’s. Nutanix sells its own branded appliances complemented by OEM deals with Dell EMC and Lenovo and meet-in-the-channel deals with Cisco and Hewlett Packard Enterprise resellers. These types of partnerships make hyper-converged full of coopetition. Dell EMC, Cisco and HPE all sell hyper-converged appliances with their own software, too. Dell EMC uses VMware vSAN software on PowerEdge servers, Cisco acquired Springpath for its UCS-based HyperFlex appliances, and HPE bought hyper-converged startup SimpliVity for its software.

“We see Cisco making a lot of noise, but no accounts except for the Cisco base,” Milne said. “HPE is starting to make noise, trying to differentiate its hardware by embedding SimpliVity software in ProLiant servers. Dell EMC is coming on strong, which I’m sure concerns Nutanix. I can’t count on a competitor as a supplier on my platforms.”

