Storage Soup


December 7, 2017  8:15 PM

Dell EMC all-flash surge collides with softening storage demand

Garry Kranz

Growth in Dell EMC all-flash storage is one of the bright spots in what remains a tough slog for legacy array vendors.

Dell Technologies on Thursday reported consolidated revenue of $19.6 billion for the last quarter. That's up 2% on a quarterly basis and 21% year over year. Gross margin was $6.4 billion, or 32.2% of revenue. Operating losses widened to $530 million, largely a result of debt related to the Dell-EMC acquisition in September 2016.

Dell EMC storage is part of the Dell Infrastructure Solutions Group (ISG), which also encompasses servers and networking.  ISG generated $7.5 billion in revenue last quarter. Servers and networking sales jumped 32% year over year to $3.9 billion.

Storage was a different story. In a continuing industry trend, Dell acknowledged that demand for traditional networked storage continues to drop. Storage revenue of $3.7 billion remained flat. Increased demand for Dell EMC all-flash storage and hyper-converged infrastructure was offset by a softening market for legacy systems, Dell Technologies CFO Tom Sweet said.

Sweet said Dell EMC all-flash and Isilon scale-out NAS sales increased by double digits. HCI saw triple-digit growth, spearheaded by VxRail adoption. He declined to provide specific revenue breakdowns for those product categories.

Dell EMC achieved “better pricing and better mix in storage, even (though) volume wasn’t quite where we wanted it,” Sweet said.

This was the first Dell EMC earnings call to include a full quarter of results for EMC and VMware products. In February, VMware moved to Dell Technologies' fiscal calendar after previously reporting results on a calendar-quarter basis. VMware, which supplies virtualized storage software among other products, contributed $1.9 billion in revenue on operating income of $638 million.

Dell closed the quarter with $18 billion of cash and equivalents on the books, including the proceeds of VMware’s recent debt issuance. Dell debt maturities of about $3 billion start becoming due in April.

Dell has paid down $9.7 billion of the gross debt it used to acquire EMC. That includes $1.7 billion in debt satisfaction during the third quarter.

Sweet said flexible consumption models are expected to account for an increasing percentage of Dell revenue. Consumption-based services realize recurring revenue incrementally across the length of a multiyear customer contract.

“These tend to have better profitability, but it does change the timing and pattern of when (revenues) are recognized,” Sweet said.
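The timing shift Sweet describes can be shown with a small, purely illustrative comparison: an up-front sale books all of its revenue at once, while a consumption-based contract of the same total value is recognized ratably over its term. The figures and function names below are hypothetical, not Dell's.

```python
# Illustrative only: compare up-front recognition of a one-time sale with
# ratable recognition of a consumption-based contract of the same value.

def upfront_recognition(total_value):
    """All revenue lands in the quarter the sale is booked."""
    return [total_value]

def ratable_recognition(total_value, quarters):
    """Revenue is recognized evenly across the contract term."""
    return [total_value / quarters] * quarters

contract = 1_200_000  # hypothetical three-year, $1.2 million contract

print(upfront_recognition(contract))      # [1200000]
print(ratable_recognition(contract, 12))  # twelve quarterly entries of 100000.0
```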

Jeff Clarke, Dell's vice chairman of products and operations, said Dell EMC midrange storage is receiving increased attention as a way to shore up sagging storage growth. The focus involves reshaping sales incentives and expanding product features of the Dell EMC all-flash and hybrid Unity, SC Series and PS Series arrays.

“We increased our go-to-market capacity by adding storage specialists and are ensuring our sales compensation plan spurs the appropriate behavior to drive long-term strength in our results,” Clarke said.

Dell EMC all-flash SC Series array models launched in November. Due out soon are software enhancements for midrange Dell EMC Unity arrays, including the addition of inline data deduplication, synchronous file replication and in-place storage controller upgrades.

Dell also has launched an Internet of Things division to coordinate development of products and services across its business lines.

December 7, 2017  8:12 AM

Hedvig CEO: Who needs backups?

Dave Raffo
Hedvig

Santa Clara, Calif. — By now, most people realize this is the age of convergence in IT, especially as it applies to storage. We have converged infrastructure mixing storage, compute and networking; hyper-converged infrastructure integrating compute, storage and virtualization in one box; and converged secondary storage putting backup, DR, archiving, test/dev, copy and cloud data on one platform.

Now startup Hedvig is pushing a new kind of convergence – primary and secondary data together in one distributed platform.

Hedvig designed its software-defined storage as scale-out, multi-cloud primary storage. But the startup finds early customers sometimes use it as a backup data deduplication target running on x86 servers. Hedvig CEO Avinash Lakshman said Hedvig software can drive primary storage that requires no separate backup.

“One capability we can bring to the table naturally is, if Hedvig is chosen as a primary storage platform, then you don’t need to take backups at all,” Lakshman said during a press briefing at Hedvig’s headquarters this week. “You can take scheduled snapshots in your primary environment, and go back to any snapshot from your primary environment. Think of it as converged, where you have primary and secondary storage built in. We also provide the capability of moving snapshots to the public cloud as they age.”
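A rough sketch of the policy Lakshman describes, written as generic Python rather than Hedvig's actual API: scheduled snapshots of the primary environment double as restore points, and snapshots are tiered to a public cloud as they age. The age threshold and function names are assumptions for illustration.

```python
from datetime import datetime, timedelta

# Generic illustration, not Hedvig's API: snapshots of primary storage serve
# as restore points, and older snapshots move to a public cloud tier.

CLOUD_TIER_AGE = timedelta(days=7)  # hypothetical aging threshold

snapshots = []  # each entry: {"taken": datetime, "location": str}

def take_snapshot(now):
    snapshots.append({"taken": now, "location": "primary"})

def tier_aged_snapshots(now):
    for snap in snapshots:
        if snap["location"] == "primary" and now - snap["taken"] > CLOUD_TIER_AGE:
            snap["location"] = "public-cloud"  # e.g., an object storage bucket

def restore_point(target_time):
    # "Go back to any snapshot": newest snapshot at or before the target time.
    candidates = [s for s in snapshots if s["taken"] <= target_time]
    return max(candidates, key=lambda s: s["taken"]) if candidates else None

# Take daily snapshots for ten days, then age out the oldest ones.
for day in range(10):
    take_snapshot(datetime(2017, 12, 1) + timedelta(days=day))
tier_aged_snapshots(datetime(2017, 12, 10))
print(restore_point(datetime(2017, 12, 5)))
```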

Old-school backup admins will tell you this violates a cardinal rule of data protection. “It used to be, ‘Thou shalt not put backup data on the same box as primary,’” said Eric Carter, Hedvig senior director of marketing. “But distributed systems are no longer the same box.”

Hedvig also positions itself as a good fit for dev/ops because it includes self-service APIs to program and integrate applications.

Hedvig claims its software can run any workload on any infrastructure and over any cloud.

“We have been multi-cloud even before that term was coined,” Lakshman said.

Hedvig software forms a universal data plane supporting block, file and object storage. It installs on x86 nodes and cloud instances and forms a scale-out storage cluster over multiple sites and private and public clouds. Its storage proxy presents virtual disks at the application layer, routes I/O to the storage cluster, enables local flash-optimized services, and includes APIs for plug-ins and direct application integration.

Lakshman, who created Cassandra and helped build Amazon's Dynamo as a developer there, founded Hedvig in 2012. Hedvig 1.0 software started shipping in 2015, and Lakshman said the company still has fewer than 50 customers. However, it has added a few large customers since joining the Hewlett Packard Enterprise Complete program last June, a few months after HPE participated in a $21.5 million funding round.

Lakshman said the HPE reseller deal “has been a shot in the arm for us. They walk us to the table for deals we never could be part of, with Fortune 100 companies. We have at least half a dozen of those customers now. All those companies are pivoting toward hybrid and multi-cloud.”


December 4, 2017  8:18 AM

Nutanix software-defines itself

Dave Raffo
Storage

Nutanix is hard-selling the value of its software.

While the hyper-converged vendor stopped short of renaming itself Nutanix Software, CEO Dheeraj Pandey used its earnings call last week to emphasize that Nutanix software drives its products. And it’s not just what the software does for customers; Pandey focused on how Nutanix is building its accounting and sales practices around being a software company.

Pandey went back to Nutanix’s roots, explaining why it started selling its software on integrated appliances and how it has slowly moved off that stance.

Nutanix will still sell its appliances, but will recognize revenue only from software and continue its push to sell that software on any x86 vendor’s hardware. That model is working, judging from last quarter’s results. Nutanix revenue of $276 million last quarter increased 46% over last year and beat expectations. The vendor also cut its losses to $65.1 million from $140 million a year ago.

But the Nutanix software transformation dominated the discussion from Pandey and CFO Duston Williams. While it is mostly an accounting move designed to make Nutanix look more attractive to investors, it also accelerates the company’s recent strategy of partnering closely with all major x86 server vendors.

Pandey said when Nutanix came to market in late 2011, the IT world was not ready for a software-only delivery model. That meant Nutanix software needed to ship on a pre-built appliance. It chose Supermicro as its hardware partner.

“Software-defined anything was too abstract for our customers to put their arms around,” Pandey said. “Our only route to market was to take full control of our own destiny. The Nutanix appliance was born.”

Nutanix eventually found OEM partners, beginning with Dell in 2014 and extending to Lenovo and IBM. It also forged partnerships with resellers to install Nutanix software on servers from Cisco and Hewlett-Packard Enterprise so customers can run Nutanix software on any major x86 platform.

“We now have a meaningful competitive advantage in being the most portable operating system built for the enterprise cloud,” Pandey said.

Nutanix will change the way it recognizes revenue, emphasizing software licenses instead of the hardware to raise margins that investors watch closely.

Pandey said 10% of its revenue last quarter came through OEM deals, and 30% of its HCI nodes run on OEM hardware.

CFO Williams added: “Today, we are a software company, more specifically an enterprise cloud operating systems company that up until now has delivered a majority of its software via its own branded appliance and recognize the associated hardware revenue.”

Williams said Nutanix is in a years-long transition, and “will emerge as exactly what it is, an enterprise cloud operating systems company.”

The goal is to do that in a way that there will be “absolutely zero change from what the customer sees,” Williams said. “So that process from a customer standpoint is left intact and exactly the same as it has been in the past.”

The Nutanix software-centric approach resembles the VMware business model. VMware vSAN is Nutanix’s primary hyper-converged software competition, as well as a frequent partner. Nutanix also sells an AHV hypervisor that competes with VMware’s flagship ESX product. VMware’s success has always depended on its relationship with all major server hardware vendors. That is still the case, even now that it is owned by one of those server vendors, Dell.

On VMware’s earnings call Thursday, VMware CEO Pat Gelsinger reported vSAN license bookings grew over 150% year-over-year last quarter.

VMware’s parent is also one of Nutanix’s biggest hardware partners. Dell EMC sells its XC hyper-converged appliance, based on PowerEdge servers, through an OEM deal that predates the Dell-EMC merger. Dell EMC also sells VxRail HCI appliances running vSAN on PowerEdge servers.


November 30, 2017  6:26 PM

Nexenta raises $20 million, shoots for cloud

Carol Sliwa

Bolstered by lead investor SoftBank Corp., Nexenta secured $20 million in financing this week to fund its new cloud portfolio.

The Santa Clara, California-based storage software vendor next month plans to unveil its NexentaCloud for Amazon Web Services (AWS), according to CEO Tarkan Maner. NexentaStor CloudNAS and NexentaFusion CloudManagement will be the first two Nexenta options available through the AWS Marketplace.

Customers using NexentaStor on premises would have the option to back up data to a NexentaStor instance running on Amazon’s Elastic Compute Cloud (EC2), with connections to Amazon’s Simple Storage Service (S3), Maner said. They would be able to use NexentaFusion for management and analytics across both environments, he noted.

“This is more like a DR and backup service for Nexenta customers to move their backups to a cloud environment for certain data types,” Maner said. “This is not necessarily a full-blown cloud product running by itself on the cloud. It’s a hybrid cloud technology.”

The company also plans a NexentaCloud option through AWS Marketplace that will not require an on-premises Nexenta deployment.

Nexenta, SoftBank reach strategic agreement

Maner said SoftBank Cloud is due to roll out NexentaCloud for Japanese customers in 2018, and support for additional clouds will follow.

Tokyo-based SoftBank also struck a strategic distribution and go-to-market agreement with Nexenta. SoftBank plans to distribute Nexenta software in Japan, and its affiliated companies gain preferential purchasing rights to Nexenta software running on hardware from Dell, Lenovo, Supermicro and other vendors. SoftBank Cloud also plans to use Nexenta software in partnership with new OEMs such as Huawei and Foxconn, according to Nexenta.

“That’s a big story for us because Japan is the second-largest storage market,” Maner said. “SoftBank has a very big customer base as a telco and mobile service provider, and they have 1,300 affiliated companies from Sprint to Yahoo Japan to Arm Holdings.”

SoftBank will have access to Nexenta’s full product portfolio, which includes NexentaStor scale-up block and file storage, NexentaEdge scale-out object storage, and NexentaFusion reporting, monitoring, storage analytics and orchestration through a single pane of glass.

Nexenta has raised about $100 million since 2005. Other investors in the new round included Javelin Venture Partners, SV Booth Investments, SAB Capital, Lake Trail Capital, TRB Equity, and Nexenta CEO Maner.

Maner said he expects Nexenta to reach profitability next year. He said 2017 was a good year for the company, with 80% year-over-year growth in revenue. He said the revenue is split roughly 50-50 between new customers and renewals from older customers.

Nexenta has about 3,000 customers with a collective storage capacity of about two exabytes in production, and this year went over $100 million in cumulative bookings since its 2005 founding, Maner said.

“We believe hardware-centric storage companies are going to struggle because everything’s going server-based and software-only,” he said. “We see a huge opportunity in that space, and this investment round obviously will help us achieve that goal.”


November 30, 2017  10:40 AM

Barracuda Networks going private

Sonia Lelii
Storage

Barracuda Networks became the latest public technology vendor to go private when equity firm Thoma Bravo agreed to pay $1.6 billion this week to acquire the security and data protection vendor.

The Barracuda Networks acquisition is expected to close in February, before the end of Barracuda Networks’ fiscal year. The company said it does not expect any executive changes, and all current Barracuda employees will remain as part of the private company.

Barracuda Networks offers backup and email security for mid-market companies. Its Barracuda Essentials provides cloud-based security, archiving and backup for Microsoft Office 365 and Exchange. It also provides the Barracuda Message Archiver for compliance and e-discovery and a Cloud Archiving service.

The Barracuda backup and recovery appliances handle physical-to-physical systems, physical-to-virtual systems and virtual-to-virtual configurations. The cloud-to-cloud backup service protects Microsoft Office 365, SharePoint Online and OneDrive. Data is deduplicated and compressed to reduce backup windows before it is stored in the Barracuda Cloud.
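As a generic illustration of that pipeline, not Barracuda's implementation, a backup stream can be split into chunks, fingerprinted so duplicate chunks are written only once, and compressed before it lands on the target. The chunk size and hash choice below are arbitrary assumptions.

```python
import hashlib
import zlib

# Generic dedupe-then-compress sketch (not Barracuda's code): fixed-size
# chunks are fingerprinted, duplicates are skipped, and only unique chunks
# are compressed and written to the backup target.

CHUNK_SIZE = 4 * 1024 * 1024   # 4 MiB, an arbitrary choice
store = {}                     # fingerprint -> compressed chunk

def backup(data: bytes) -> int:
    bytes_written = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in store:          # deduplication step
            store[fingerprint] = zlib.compress(chunk)
            bytes_written += len(store[fingerprint])
    return bytes_written                      # bytes actually sent to the target

# Backing up the same data twice writes nothing new the second time.
payload = b"example data" * 1_000_000
print(backup(payload), backup(payload))
```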

Barracuda Networks declined a request to comment on the acquisition.

Dave Russell, vice president and distinguished analyst at Gartner, said he is skeptical of private equity buyouts, but Thoma Bravo has a good track record with its technology acquisitions.

“My angle on this is whenever a private equity firm (buys a vendor), is it really good news?” Russell said. “That is not usually what happens. They tend to milk (the company).  If I look for a silver lining in this, it’s if Barracuda can get out of the distraction of quarterly reports, they can replicate what SonicWall and Blue Coat got from Thoma Bravo, which is investment.”

“Thoma Bravo has a history of investing and increasing R&D over time.”

This past summer, Barracuda Networks took its first step in delivering disaster recovery in the public cloud by allowing customers to replicate to an Amazon Simple Storage Service (S3) cloud. Customers can replicate data from an on-premises physical or virtual appliance to an Amazon S3 bucket. Barracuda previously offered appliances with built-in software replication to either the Barracuda Cloud Storage or to another Barracuda Backup appliance in an off-site location or to external disk or tape.

Support for the Amazon S3 cloud gives customers the choice of storing data off-site either in the proprietary Barracuda cloud or in the Amazon Web Services (AWS) public cloud as a data protection target.

Barracuda Networks’ Essentials for Email Security has had promising growth in the past few quarters. The product has backup and email archiving embedded in it. Rod Mathews, senior vice president and general manager of the data protection business at Barracuda, called the S3 replication support Barracuda’s first step with a major public cloud offering. The AWS offering began in North America, with plans to expand to Europe later this year.

“Down the road we will be able to do multi-cloud for Microsoft Azure,” he said in August.

In October, Barracuda claimed to support more than 85 PB of storage in its cloud, and said it helps customers with approximately 3 million backup jobs and more than 12,000 recoveries per month.

Barracuda acquired Intronis, a cloud backup provider that sells through managed service providers, for $65 million in October 2015.

Barracuda last month reported revenue of $94.3 million for its last full quarter, up from $87.9 million in the same quarter a year ago. Most of its revenue, $76 million, came from subscriptions. Appliance revenue slipped from $21 million a year ago to $18.3 million, and its profit of $1.6 million dropped from $2.4 million the previous year.

Russell said Barracuda Networks’ previous success was attributed to being early to market with appliances, owning its supply chain and “almost concierge-like service.”

“Three or four years ago, that was unique,” he said. “Now you have many kinds of appliances and vendors. The market has gotten more crowded. Barracuda has been having on and off financial challenges for a year now. They needed a shot in the arm.”


November 29, 2017  2:21 PM

Data recovery plans face tall task as storage levels grow

Paul Crocetti
Disaster Recovery

Too much data is weighing heavily on data recovery plans, according to a recent survey.

The problem has scaled beyond what many organizations can handle, said Douglas Brockett, president of backup and recovery software vendor StorageCraft, which commissioned the survey.

“People are choking on the volume of data” that’s expected to be backed up, Brockett said. “I think we’re seeing a breaking point.”

The survey of more than 500 IT decision-makers (management-level employees and above, according to StorageCraft) found that 43% are struggling with data growth and believe it is going to get worse. Fifty-one percent are not confident that their organizations can perform instant data recovery after a failure.

Difficulties with data recovery plans hit organizations of all sizes, from small businesses to large enterprises, Brockett said.

Specifically:

  • 58% of companies with revenues under $1 million are not confident of instant recovery
  • 50% of companies with revenues between $1 million and $500 million are not confident of instant recovery
  • 51% of companies with revenues of more than $500 million are not confident of instant recovery

In addition, 51% of larger organizations said they would benefit from more frequent data backup but their infrastructure doesn’t allow it, according to the survey. And among businesses with less than 500 employees, 65% are not confident they can get their systems back up in minutes.

Certain business areas feel the data recovery burn more than others. Top examples include healthcare, with its data retention and mission-critical data needs, and financial services, with its time-sensitive data, Brockett said.

Specifically, 56% of healthcare and 54% of finance IT decision makers in the survey said they would benefit from more frequent backups, but the scale of data growth and backup technology infrastructure doesn’t allow it.

“You see a lot of anxiety in these types of industries,” Brockett said.


While the cloud is a popular place for off-site copies, data recovery plans suffer because the process to retrieve data from a provider like Amazon Web Services can be expensive and slow, Brockett said.

Organizations should be asking, “Can I spin up a virtual machine in the cloud?” In other words, if you have cloud backup, you should have cloud recovery, in the form of disaster recovery as a service.

“Make sure your off-site backup is bootable,” Brockett said.
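In AWS terms, a bootable off-site backup means a recovery machine can be launched directly from an image the backup process produced. A minimal boto3 sketch, assuming the backup already exists as an AMI; the image ID and instance type are placeholders, not values from the article.

```python
import boto3

# Sketch of cloud recovery (not just cloud backup): boot an EC2 instance
# from a machine image created by the backup process. Credentials come from
# the default AWS configuration; the AMI ID below is a placeholder.

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI built from a backup
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
)

print("Recovery instance launched:", response["Instances"][0]["InstanceId"])
```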

To improve data recovery plans in the face of exponential data growth, Brockett suggests organizations use integrated, scalable data management and protection platforms. And organizations should be smart with data, using tiered storage and data reduction techniques.

“Data analysis is a critical part of getting a strong data protection infrastructure,” Brockett said.


November 29, 2017  11:42 AM

Pure Storage revenue boost puts it on ‘march to profit’

Dave Raffo
Pure Storage

Pure Storage hit a home run in its first quarter under CEO Charlie Giancarlo.

Pure Storage revenue of $278 million last quarter exceeded the high point of its guidance and increased 41% from last year. The all-flash pioneer reported positive cash flow for the first time, and it remains on track to break the $1 billion revenue mark for the full year. Its executives also predict non-GAAP profitability for the current quarter, which would be a first for Pure.

“Pure has exceeded my initial expectations and I could not be more excited and thrilled about all the opportunity in front of us,” Giancarlo said on Pure’s earnings call Tuesday night.

The Pure Storage revenue results don’t represent a great turnaround in Giancarlo’s first quarter since replacing Scott Dietzen, who remains Pure’s chairman. When Dietzen handed the CEO job off to Cisco veteran Giancarlo last August, he predicted Pure Storage revenue of $1 billion for the full year and profits just around the corner. However, the Pure Storage revenue growth outpaced even Dietzen’s optimistic view.

The Pure Storage revenue forecast of $327 million to $335 million this quarter also beat Wall Street expectations. The low end would bring its full-year revenue to $1.012 billion. Pure claimed more than 300 new customers last quarter, bringing its total to more than 4,000.

Pure Storage revenue also continues to outpace storage industry growth. IDC puts external storage growth at 4.1% for the third quarter, with total storage, including capacity sold in servers, growing 14%. Over the past month, NetApp (up 6%), Hewlett Packard Enterprise (up 5%) and IBM (up 4%) reported year-over-year storage growth, but none in the ballpark of Pure’s.

Pure cut its non-GAAP loss to $1.9 million, down from $20 million a year ago and $24 million in the previous quarter, although it lost $42 million (compared to $79 million last year) under GAAP rules. Pure CFO Tim Riitters predicted a small non-GAAP profit for this quarter, which is usually the biggest revenue quarter for storage companies. That profit may be short-lived when seasonality reduces revenue in the first quarter of next year, but Pure will have a profitable 2018 calendar year if it continues its steady revenue growth.

Pure finished last quarter with $551 million in cash and investments, up $28 million from the previous quarter.

“Our march is to profitability,” Riitters said.

Pure’s product revenue of $232 million last quarter increased 39% year over year. Most of that came from its flagship FlashArray block storage platform. Pure Storage revenue growth from its newer FlashBlade unstructured data system slowed slightly from when it doubled in the previous quarter, but Giancarlo said he remains optimistic about the system. He said early customer reaction to FlashBlade has been strong.

“If anything, we are even more optimistic and more pleased with FlashBlade than when we first introduced it,” Giancarlo said.

Giancarlo laid out three growth areas for Pure to chase: cloud customers, next-generation applications such as machine learning, artificial intelligence and analytics, and large enterprises.

“Each of these growth areas is a large opportunity on its own. Together, they represent a huge market opportunity for Pure,” Giancarlo said.

Pure executives said FlashBlade brings it into more competitive deals with NetApp. FlashBlade targets next-generation AI, machine learning and real-time analytics as well as legacy file and object storage.

“This is actually taking the fight to NetApp where we haven’t focused on their file-based world,” Pure president David Hatfield said.


November 26, 2017  5:11 PM

Like its CEO, HPE storage in transition phase

Dave Raffo
Meg Whitman

Meg Whitman says it’s time for “a new generation” to take over Hewlett Packard Enterprise. She was talking about the CEO change when she made that comment during the company’s earnings call last week, but she could have been talking about the HPE storage portfolio as well.

Independent of the CEO switch from Whitman to Antonio Neri that will take place in February, the HPE storage technology focus is shifting from 3PAR arrays to Nimble Storage. The 3PAR arrays still generate most of HPE’s storage revenue but Nimble is growing much faster in revenue and influence inside the company.

HPE acquired Nimble for $1.2 billion last March. Whitman said the Nimble deal “completed our storage offering from entry-level to the high-end and accelerated our transition to all flash.”

The transition is ongoing. HPE storage revenue of $871 million last quarter grew only 5% over last year, a disappointing number for the vendor considering the 2016 results did not include Nimble. But Whitman said Nimble revenue increased 80%, while 3PAR sales were “soft.”

She blamed 3PAR problems on “a tough competitive environment in the mid-range and some go-to-market challenges in America.”

HPE hasn’t given up on 3PAR, and is working on changes to give the platform new life and bolster its sales team. But those changes rely on Nimble.

They include porting Nimble’s InfoSight predictive storage analytics across all 3PAR arrays. “This is going to be a game-changer for our storage business,” Whitman said. “Leveraging advanced machine learning, HPE InfoSight is the next step in our vision for an autonomous data center.”

HPE is also combining the 3PAR and Nimble sales teams under Keegan Riley, who led Nimble sales before the acquisition. Riley, who worked in Hewlett Packard storage from 2008-2012, is the VP and GM of HPE’s North America Storage Business Unit. Whitman said the HPE storage unit is also hiring more field specialists to support sales.

All-flash revenue increased 16% from last year, which pales compared with HPE’s 30% year-over-year all-flash growth in the previous quarter. Both the Nimble and 3PAR storage lines include all-flash and hybrid arrays.

HPE gave no results for its other 2017 storage-related acquisition, SimpliVity. Whitman several times mentioned SimpliVity among HPE’s significant acquisitions but did not break out any hyper-converged results.

When asked if the HPE storage platform needed to grow by picking up new products, Whitman said HPE would be “very disciplined” about acquisitions.

“If we found something that we thought was important in the storage business … and it was priced right, we might think about doing it,” she said. “And I promise you that Antonio and [CFO] Tim [Stonesifer] will continue that disciplined approach to acquisitions. And I will be on the board to make sure they do.”


November 16, 2017  12:54 PM

NetApp revenue rides flash gravy train

Dave Raffo
NetApp

NetApp is showing a legacy storage array vendor can still increase revenue impressively during these days of scant storage growth.

NetApp Wednesday night reported its fourth straight quarter of revenue growth, a year that followed a slump during the vendor’s transition period.

NetApp revenue of $1.42 billion last quarter jumped 6% year-over-year. NetApp product revenue of $807 million increased 14% — impressive growth in today’s storage market. Wall Street analysts expected NetApp revenue of $1.38 billion, roughly the midpoint of the vendor’s own forecast from three months ago.

We don’t know yet how much the overall storage market grew in the quarter, but IDC put storage market growth at a mere 2.9% in the previous quarter.

NetApp’s $175 million in profit also beat expectations and increased from $109 million last year.

NetApp predicted the growth will continue this quarter with a forecast of between $1.425 billion and $1.575 billion compared to $1.404 billion in revenue in the same quarter last year.

“We are undoubtedly out-executing our competition on all fronts,” NetApp CEO George Kurian said on the company’s earnings call. “Our second quarter results are a strong indicator that the transformation of NetApp remains on track.”

Kurian replaced Tom Georgens as CEO in 2015 during a NetApp slump caused by a poor flash strategy and slow customer upgrades from its flagship OnTap 7-Mode operating system to Clustered OnTap. NetApp was late to the all-flash array game, and its OnTap upgrade process required downtime to complete.

NetApp has put those problems in the past.

Kurian said NetApp’s all-flash revenue last quarter grew close to 60% over last year. Most of that came from its All-Flash FAS lineup, with its E-Series performance platform and cloud-friendly SolidFire all-flash arrays contributing. Kurian said NetApp is on track for around $1.7 billion in all-flash revenue for its fiscal year, which has six months left. NetApp is second in the all-flash array market behind Dell EMC.

Kurian said NetApp averages two all-flash displacements per day, taking out competitors such as Dell EMC, Hewlett Packard Enterprise and IBM. He said there is still a long way to go with flash growth, as only about 10% of NetApp customers are using all-flash.

“We are still in the early innings of flash adoption in our customer base,” Kurian said.

After a long slog, the bulk of NetApp’s customer base has moved to Clustered OnTap.

“The transition from 7-mode to Clustered OnTap is behind us,” Kurian said.

“As I noted last quarter, we have already transitioned our business away from the declining segments to the data-driven high-growth segments of all-flash arrays, converged and hyper-converged infrastructure and hybrid cloud.”

Actually, the NetApp revenue stream from hyper-convergence has barely started to trickle in. NetApp HCI began shipping in October and hardly contributed revenue last quarter. The vendor came late to hyper-convergence just as it arrived late to all-flash, but Kurian predicted the HCI product will attract new customers who will either ditch direct attached storage or switch from competitors’ hyper-converged products. NetApp HCI is based on SolidFire all-flash technology.

Kurian identified NetApp’s new NFS service native to the Microsoft Azure cloud as another reason for optimism. The Azure service is in private preview now, and will likely become generally available in 2018 to provide another NetApp revenue stream.

“The way I look at it, we are riding several long-term secular trends: data growth and the criticality of data in a digital business; major technological transitions like solid-state storage, converged infrastructure and the cloud,” Kurian said.


November 15, 2017  9:27 AM

Amazon not to blame for S3 cloud storage lapses

Sonia Lelii
Cloud Security, Cloud storage

The Amazon Simple Storage Service (S3) has been giving big businesses, and their customers, big trouble.

It was reported earlier this summer that high-profile companies left data in their S3 buckets exposed because the access control lists (ACLs) were configured to allow access from any user on the internet. The companies caught up in this misconfiguration problem included telco giant Verizon, U.S. government contractor Booz Allen Hamilton, World Wrestling Entertainment and Dow Jones.

And the cloud storage security problem has not gone away.

It was reported in October that corporate consulting firm Accenture left at least four S3 cloud buckets in a similarly unsecured condition, according to a blog post from security firm UpGuard. Accenture works with 94 of the Fortune Global 100 and more than three-quarters of the Fortune Global 500.

But experts say Amazon is not to blame for the cloud storage misconfiguration issue. Human error is to blame: administrators who create the S3 buckets fail to return them to a restricted-access configuration, essentially leaving the barn door open for unwanted entry.

“AWS is aware of the security issue, but are not likely to mitigate it since it is caused by user misconfiguration,” according to Detectify, a company that simulates automated hacker attacks.

AWS states on its blog that “by default, all Amazon S3 resources – buckets, objects and related sub-resources…are private. Only the resource owner, an AWS account that created it, can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy.”

Amazon claims it has enhanced S3 storage security. In August, the company updated its “managed rules to secure S3 buckets.” AWS Config offers a timeline of configuration changes along with two new rules. The s3-bucket-public-write-prohibited rule automatically identifies buckets that allow global write access: if an S3 bucket policy or bucket ACL allows public write access, the bucket is considered noncompliant. The second rule, s3-bucket-public-read-prohibited, does the same for global read access.

“This will flag content that is publicly available, including websites and documentation,” according to a blog post written by Jeff Barr, chief evangelist for AWS. “This rule also checks all buckets in the account.”
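A minimal sketch of a similar check an administrator could run with boto3: list the account's buckets and flag any whose ACL grants access to the global AllUsers or AuthenticatedUsers groups. This simplified audit is an illustration, not AWS Config itself, and it does not examine bucket policies.

```python
import boto3

# Flag S3 buckets whose ACLs grant access to everyone on the internet.
# Uses the default AWS credentials; checks ACL grants only, not bucket policies.

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_permissions = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") in PUBLIC_GRANTEES
    ]
    if public_permissions:
        print(f"{name}: PUBLIC ({', '.join(public_permissions)})")
    else:
        print(f"{name}: private")
```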

George Crump, president of IT analyst firm Storage Switzerland, said the buckets are secure when created. Trouble occurs only when IT does not follow through on locking down the buckets.

“It’s not (Amazon’s) fault,” Crump said. “They just provide the infrastructure. They provide the material for you to create a solution. It’s not their fault. It’s the job of IT to lock it down. It would be different if Amazon had not put the tools in place, but that clearly is not the case.”

Many of these unsecured S3 buckets are created for application development, then left open after a team pulls its compute and storage resources from AWS for the duration of the project.

“Typically, these buckets are secured when they are created so that only authenticated users can access them,” Crump wrote in a blog post. “But sometimes, especially in the initial development of an application, these buckets are left unsecured to make it easier for multiple users to test them.

“The problem is when the application moves into production, no one remembers to secure the bucket, leaving it open for anyone to gain access,” he said.