Storage Soup


May 24, 2018  10:50 AM

ZertoCON 2018: IT resilience takes center stage

Paul Crocetti
Storage

BOSTON – At the first ZertoCON in 2016, analyst John Morency said that “IT resilience” is becoming the new “disaster recovery.” The concept at the time stressed continuous availability and proactively avoiding all recovery situations, versus just being able to recover from huge disasters.

Two years later, at ZertoCON 2018, IT resilience was the dominant theme. But that concept itself is evolving.

“The definition has changed,” Morency, a research vice president at Gartner, said in his keynote address Wednesday. “The scope has changed.”

Gartner defines resilience as “the ability of an organization to protect, absorb, recover and adapt in a complex and rapidly changing environment to enable it to deliver its objectives and to rebound and prosper.”  That’s different from the classic recovery model with a focus on recovery time and recovery point objectives, Morency said.

“Backup can only take us so far,” Morency said.

Zerto’s new Elastic Journal, for example, which is scheduled for release with Zerto 7 in early 2019, provides continuous recovery points across data, files or virtual machines, going from seconds to years back. Zerto pitched it as a new way to do backup. The feature is a part of Zerto’s newly branded IT Resilience Platform.

“We don’t ever go to our backup solution for recovery,” said senior system engineer Jayme Williams of materials manufacturer TenCate, a Zerto customer since 2012.

Instead, TenCate uses Zerto for its journal file-level recovery.

Data protection criteria are changing, Morency said. In the past, functionality was the most important factor in choosing products. Now it’s cost, ease of use and the capability to support multiple data protection use cases.

Needs and planning tips for IT resiliency

According to Morency, governance requirements driving organizations’ need for IT resilience include:

  • close-to-continuous IT and business operations
  • workload mobility
  • sustainable data integrity, consistency, availability and accessibility
  • cyberthreat mitigation
  • IT service configuration, deployment and change agility
  • detection of and response to potentially disruptive events in order to sustain business and IT operations

Ransomware is changing the game for cyberthreats. Speakers at ZertoCON 2018 noted that a ransomware attack is a “when” not “if” scenario. And traditional backup — with recovery point objectives measured in hours — may not cut it, as organizations will want to recover from just before the attack hit.
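
The gap between hourly restore points and journal-style protection can be shown with a quick, generic sketch; it is only an illustration of the recovery-point math, not Zerto's implementation, and the attack timestamp, five-second journal interval and hourly snapshot schedule below are made-up values.

    from datetime import datetime, timedelta

    def best_restore_point(checkpoints, attack_time):
        """Return the newest checkpoint strictly before the attack, or None."""
        earlier = [c for c in checkpoints if c < attack_time]
        return max(earlier) if earlier else None

    attack = datetime(2018, 5, 24, 10, 42, 17)  # hypothetical moment ransomware hit

    # Traditional backup: one restore point per hour.
    hourly = [datetime(2018, 5, 24, h, 0, 0) for h in range(0, 11)]
    # Journal-style protection: a checkpoint every five seconds.
    journal = [attack - timedelta(seconds=5 * i) for i in range(1, 2000)]

    for label, points in [("hourly backup", hourly), ("continuous journal", journal)]:
        rp = best_restore_point(points, attack)
        print(f"{label}: restore to {rp}, data lost = {attack - rp}")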

Gartner estimates that by 2020, 30% of organizations targeted by major cyberattacks will spend more than two months cleansing backup, resulting in delayed recoveries.

IT resilience management is also driving product convergence across backup software, runbook automation, software-defined managers and cloud management platforms.

“It’s not about backup. It’s not about runbook automation,” Morency said. “It’s all of the above.”

For many organizations, the IT resilience scope is hybrid. According to Gartner research, nearly 80% of organizations say their data center capacity profile in five years will include some combination of on-premises and cloud.

Morency provided an action plan for organizations looking to begin an IT resilience journey:

  • Monday morning: Benchmark your organization’s resilience and identify people, process and technology gaps that are specific to the support of mission-critical business processes and applications.
  • 30 to 90 days: Develop and execute a plan for improving relevant resilience gaps for mission-critical processes and applications.
  • 90 to 180 days: Prioritize gap closure for critical and important processes and applications; mitigate the resilience risks posed by key vendors and service providers.

May 24, 2018  10:04 AM

NetApp CEO: Dell EMC ‘years behind’ us on flash, cloud

Dave Raffo
Storage

NetApp CEO George Kurian says things have never been better for the storage vendor since he joined the company in 2011.

Kurian maintains NetApp’s all-flash platform is a hit, that it’s the first major vendor with an end-to-end NVMe array, that it has solid connections with all major cloud vendors and that it now has a viable hyper-converged product. And he said NetApp is still making two SAN array displacements a day, while its main rival Dell EMC is struggling with its midrange storage and cloud strategies.

“What a difference a year makes,” Kurian said during NetApp’s Wednesday evening earnings call, reporting a better-than-expected 11% year-over-year revenue increase to $1.64 billion last quarter. “We improved the consistency of our results, expanded our market opportunities, and successfully accelerated our momentum. We are undoubtedly in the best position since beginning the transformation of NetApp.”

NetApp’s forecast failed to capture Kurian’s optimistic words, though. Investors were disappointed by NetApp’s guidance range of $1.365 billion to $1.465 billion for this quarter, a significant drop from last quarter. The stock price fell 3.9% to $66.79 Wednesday after the earnings report and then dropped further to $64.00 at today’s opening.

When asked about the tepid guidance, Kurian said: “We are very bullish on the strength of our product portfolio. Our philosophy is to build a plan that we can meet or beat and provide you more updates as we see more visibility through the course of the year.”

When Kurian became NetApp CEO three years ago, the vendor was a laggard in the emerging all-flash market. He said all-flash revenue grew 43% year-over-year last quarter and is at a $2.4 billion run rate for the year, putting it at or near the top of the overall storage market for that segment. He said most new arrays that ship now are all-flash configurations.

NetApp this month launched its All Flash FAS (AFF) A800, part of a wave of NVMe arrays from major vendors. Dell EMC, Hewlett Packard Enterprise, Pure Storage and IBM also have new NVMe arrays out or coming, as do several startups specifically targeting that market.

Speaking of competitors, Kurian said Dell EMC still has a lot of work to do following its 2016 merger. He said the market leader needs to do more than rationalize its overlapping midrange arrays to stem share losses.

“I think what Dell has to do is not only rationalize their portfolio, but then to develop a coherent cloud strategy. That takes years of work. They’re years behind on everything from flash to cloud,” he said.

NetApp’s cloud strategy revolves around its Data Fabric and Cloud Volumes, which make its file services available in public clouds. NetApp Cloud Volumes is generally available for Amazon Web Services and in private previews for Microsoft Azure and Google Cloud Platform.

“The opportunity created by this part of our business is incredibly exciting,” Kurian said.

Kurian said NetApp’s FlexGroup feature that allows customers to cluster FlexVols has helped the vendor win deals from Dell EMC’s Isilon scale-out NAS product. “We have taken back several footprints from Isilon, and frankly, they’re trying to chase us now,” he said.

NetApp HCI is still in its early days, less than a year after launching. Kurian said the vendor is not chasing traditional hyper-converged use cases but concentrating on enterprises running mixed workloads. NetApp uses its SolidFire all-flash platform as the storage for HCI.

“We are not targeting the entire hyper converged market, but a very specific large segment of it where we think we’ve got a winning architecture,” he said.

Should we believe Kurian’s words or NetApp’s guidance? Either he is not as optimistic as he sounds, or he is setting things up for NetApp to beat expectations again. Check back in three months to find out.

 


May 23, 2018  10:44 AM

HPE storage still on its Nimble high

Dave Raffo
Storage

Hewlett Packard Enterprise extended its impressive storage turnaround last quarter.

For the second straight quarter, HPE storage revenue increased 24% year-over-year, jumping to $912 million for the period. Now, that includes revenue from Nimble Storage that HPE didn’t own the year before, so that 24% is inflated. But HPE’s organic growth without Nimble revenue increased 14% from last year. That’s better than the 11% organic growth from the previous quarter and well above the overall industry growth rate.

When the $1.2 billion Nimble deal closed in April 2017, HPE storage was at rock bottom. It declined 12% year-over-year in the first quarter of 2017 and 13% year-over-year in the second quarter. All-flash sales growth for HPE’s flagship 3PAR platform trailed its competitors’, and the company pointed to “execution” problems in the wake of the Hewlett-Packard breakup.

A year later, HPE storage is soaring. Its all-flash growth of 20% remains below rivals such as NetApp and Pure Storage, but the InfoSight storage analytics that HPE gained in the Nimble deal helps the vendor stay ahead of the rush to use artificial intelligence in IT management. HPE has extended InfoSight to 3PAR as well.

HPE CEO Antonio Neri said his company has gained market share in storage in 10 of the last 12 quarters.

“We actually executed way better than last year,” Neri said of the HPE storage team. “Last year, we had some execution challenges, particularly North America. We think we have addressed those issues. And when they think about the opportunity, the market, obviously, all-flash continue to be a significant opportunity. This quarter, we grew 20%. And we are really excited about our portfolio. With our AI technologies built into both in Nimble and 3PAR,  that is something that’s resonating with customers.”

Neri pointed out HPE storage is unlikely to increase 24% year-over-year next quarter because the third quarter will include Nimble revenue from 2017. But if it can sustain its double-digit organic growth, HPE will almost certainly continue to take market share.

HPE stood second behind networked storage leader Dell EMC in the latest IDC numbers, from the fourth quarter of 2017. According to IDC, overall networked storage revenue grew 1.4% year-over-year in the fourth quarter of 2017 and 4.1% in the quarter before that. All-flash array revenue grew 38.1% in the third quarter and 15.1% in the fourth.

HPE also reported it more than doubled revenue from its SimpliVity hyper-converged platform, although that product was in its early days under HPE’s banner a year ago. HPE acquired SimpliVity for $650 million in January 2017.

Neri said last week’s Plexxi software-defined networking acquisition will help both its hyper-converged and Synergy composable infrastructure products.

 


May 22, 2018  6:30 AM

Pure Storage: flash is a means, not an end, to shared data

Garry Kranz
Pure Storage

SAN FRANCISCO – Pure Storage will push the theme of a “data-centric architecture” at its annual Accelerate user conference that begins Wednesday.

Data-centric architecture is Pure’s description for its new flash strategy. The strategy revolves around the concept that the storage array is becoming a commodity item, an afterthought for IT. Enterprise data centers instead want fast flash to deliver data as a service to any type of application.

Among the expected product highlights is a major upgrade to the flagship Pure Storage FlashArray block and file system, featuring a handful of highly dense models that extend nonvolatile memory express (NVMe) rack-scale flash across the product line.

This will be the first Pure Accelerate under new CEO Charles Giancarlo, who replaced Scott Dietzen last April. Dietzen remains Pure’s chairman.

Shared accelerated storage: Jargon or meaningful distinction?

Pure Storage has had an eventful year so far. Pure became a public company in 2015, and this year it capped off a pair of key milestones: $1 billion in annual sales and non-GAAP profitability. Pure Storage launched in 2009 as one of the first vendors to sell only all-flash arrays.

Pure now wants to shift the focus away from hardware specs to software-defined storage features that exploit advances in flash technology. With the launch of Pure AIRI this year, the vendor moved into artificial intelligence.

Pure wants to lump its all-flash arrays under a recently developed hardware category known as shared accelerated storage. The term was coined by IT analyst firm Gartner to describe hardware equipped with NVMe over Fabrics capabilities.

“Our vision for data-centric architecture is that IT organizations need to think less about managing storage and more about being storage service providers to the rest of the organization,” said Matt Kixmoeller, Pure’s vice president of strategy. “That’s a bit of a different mindset than just buying and running storage arrays.”

At Pure Accelerate in 2017, the vendor previewed a FlashStack converged infrastructure product based on Cisco servers and networking. The first iteration of that product will be made generally available this week.


May 21, 2018  9:40 AM

Pavilion Data gets funded for NVMe-oF push

Dave Raffo
Storage

As all-flash array pioneer Pure Storage celebrates its $1 billion in annual revenues, two of its former executives are planning the next big thing in flash.

Pavilion Data, whose CEO Gurpreet Singh and VP of global sales Dan Heydenfeldt came from Pure, today picked up $12 million in funding to market its NVMe over Fabrics (NVMe-oF) storage system. Singh said Pavilion is going after customers running applications built on a “new modern stack dominated by open source, massively parallel, scale-out, clustered databases and file systems.” In other words, it’s targeting storage for databases such as MongoDB, Spark, MySQL and Cassandra instead of Oracle and Microsoft SQL Server.

“Somebody gets to build the next billion dollar company riding on this modern data stack,” Singh said. “We believe we have the best architecture for these modern applications. We call it disaggregated shared storage, or rack-scale flash.”

Singh said current all-flash arrays — including Pure’s — are fine for traditional applications but not modern apps. “The old-school dual controller architecture, server-centric design exposes a lot of challenges when running these applications,” he said. “For example, performance density is just not there.”

He said a storage system to run these new apps must have the performance, latency and bandwidth characteristics of direct-attached storage yet be easy to scale and use. “Today there are compromises,” Singh said. “You can go shared storage, but you lose performance. Or you stick in four to six NVMe cards per server, 40 servers per rack, tens of racks, and you lose the serviceability and data management. It requires a complete rethink of how you develop and architect a storage system. You can’t retrofit that. The basic math doesn’t add up.”

Pavilion’s answer is the Pavilion Memory Array, which started shipping in early 2018.

Pavilion Data CTO VR Satish said a 4U Pavilion Memory Array can drive 120 GBps of throughput with around 100 microseconds of latency and 20 million IOPS. The system uses x86 hardware and up to 72 standard 2.5-inch NVMe flash drives for a maximum capacity of 1 PB.

The back of the box resembles a network switch with a minimum of two line cards, each card containing four 100 Gigabit Ethernet ports and two controllers. Customers can expand to 10 line cards, 40 100 GbE ports and 20 controllers.

The array does not scale beyond one system, but Satish said 1 PB of storage with 72 drives “is more than enough.”
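
A quick back-of-the-envelope check on those figures (the author’s arithmetic, not vendor-verified): 1 PB spread over 72 drives implies roughly 14 TB per NVMe drive, and a fully populated chassis exposes far more raw Ethernet bandwidth than the quoted 120 GBps of array throughput.

    # Rough arithmetic on the Pavilion Memory Array specs quoted above.
    drives = 72
    max_capacity_tb = 1000          # 1 PB, in decimal TB
    line_cards = 10                 # fully populated chassis
    ports_per_card = 4
    port_speed_gbps = 100           # 100 Gigabit Ethernet per port

    per_drive_tb = max_capacity_tb / drives                      # ~13.9 TB per drive
    total_ports = line_cards * ports_per_card                    # 40 ports
    network_gbytes_per_sec = total_ports * port_speed_gbps / 8   # ~500 GB/s raw Ethernet

    print(f"~{per_drive_tb:.1f} TB per NVMe drive")
    print(f"{total_ports} ports, ~{network_gbytes_per_sec:.0f} GB/s of raw network bandwidth")
    # The claimed 120 GBps of throughput sits comfortably under that network ceiling.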

The system supports RDMA over Converged Ethernet (RoCE) and NVMe over TCP but no Fibre Channel.

“Hyperscalers don’t want to be caught dead running Fibre Channel in their data centers,” Singh said.

Satish said Pavilion developed a clustered file system that supports RAID 6 data protection, non-disruptive upgrades, multi-pathing, thin provisioning, snapshots and clones. The array does not require any software to run on host servers.

Founded in 2014, Pavilion Data has just under 60 employees split between the U.S. and India. But the company has not had all smooth sailing so far. Founder and original CEO Kiran Malwankar left Pavilion early this year. Pavilion also had some layoffs around that time, although it is hiring now.

Singh said another founder, current VP of engineering Sundar Kanthadai, is in charge of development, and there has been no change in direction since Malwankar left.

He said Malwankar “left to pursue other opportunities” although Malwankar’s LinkedIn entry describes him as a “free bird.”

“It’s the natural course and evolution of a company,” Singh said. “People leave and new people come in.”

The new funding brings Pavilion Data’s total to $33 million. New investors Korea Investment Partners and DAG Ventures participated along with previous investors Kleiner Perkins Caufield & Byers, Artiman White Space Investments, and SK Telecom.


May 16, 2018  3:30 PM

Quest hopes NetVault Backup 12 can vault into enterprise

Dave Raffo
Storage

While Veeam Software uses its VeeamON user conference this week to further its push into the enterprise, Quest Software is making its own attempt to go the same route with its NetVault Backup.

Quest upgraded the NetVault platform that it acquired from BakBone in 2010. That was before Dell bought Quest, and then spun it out in 2016 after the Dell-EMC merger.

With NetVault Backup 12.0, Quest has made the application more scalable, particularly for virtual machines. NetVault Backup 12.0 can run VMware plug-ins on any available proxy, so users can back up VMs with a unified view and scale to thousands of VMs. A new heuristic algorithm can load balance backup jobs across clients acting as backup proxies.
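
Quest hasn’t published how that heuristic works, but the general idea of spreading VM backup jobs across the least-loaded proxies can be sketched in a few lines. The proxy names, job sizes and greedy largest-job-first rule below are illustrative assumptions, not NetVault’s actual logic.

    import heapq

    def assign_jobs(proxies, job_sizes_gb):
        """Greedy sketch: hand each job (largest first) to the least-loaded proxy."""
        heap = [(0, name) for name in proxies]   # (assigned GB, proxy name)
        heapq.heapify(heap)
        plan = {name: [] for name in proxies}
        for size in sorted(job_sizes_gb, reverse=True):
            load, name = heapq.heappop(heap)
            plan[name].append(size)
            heapq.heappush(heap, (load + size, name))
        return plan

    # Hypothetical proxies and per-VM backup sizes in GB.
    plan = assign_jobs(["proxy-a", "proxy-b", "proxy-c"], [400, 120, 250, 80, 300, 60, 500])
    for proxy, jobs in plan.items():
        print(proxy, jobs, f"total={sum(jobs)} GB")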

NetVault Backup 12.0 also supports application-aware storage array snapshots for the first time, although the only array supported now is Dell EMC SC Series (Compellent). Other enhancements include a single sign-on so users can log onto NetVault Backup by using Active Directory credentials, a push install feature to streamline product updates and installations, a new granular catalog search function and a new widget-based dashboard.

“We’ve decided to up the game with NetVault a little bit,” said Adrian Moir, Quest’s lead technology evangelist. “We wanted to go further into the enterprise space. We wanted to add more scale around protection of VMware, and give a single view of a larger environment by placing proxies under a single point of management. This is also our first entry into array-based snapshot. We’ve built a framework and will expand it, adding other arrays as quickly as we can.”

Also like Veeam, Quest is avoiding integrated systems in this age of converged data protection. Moir said Quest wants NetVault Backup to  work with as many backup target options as possible rather than package it on an appliance.

“We’re quite happy to run on anyone’s hardware,” he said. “If people want to use a specific hardware platform, that’s fine with us. We’d rather offer the flexibility rather than give them something that doesn’t match the rest of their infrastructure. An appliance might be right-fitted, but sometimes it doesn’t match everything else. We’d rather be flexible and let them match the rest of their environment.”

 


May 15, 2018  6:34 PM

VeeamON 2018: Beyond availability and data protection

Paul Crocetti
Storage

CHICAGO – Veeam unveiled a copy data management feature at its VeeamON 2018 user conference Tuesday as part of its newly branded Hyper-Availability Platform.

Veeam DataLabs, part of the vendor’s flagship Availability Suite, allow organizations to create copies of production environments for uses beyond standard data protection. Those scenarios include test and development, DevOps and DevSecOps.

DataLabs is an element that has evolved over the past year, said Peter McKay, Veeam president and co-CEO.

“It’s a continuation of extending our platform for the enterprise,” McKay said at VeeamON. “Data is becoming more and more important. It’s the lifeblood of companies today.”

An organization can boot virtual machines from a backup or replica into a secure, isolated virtual environment. Veeam DataLabs expand on the functionality of Veeam’s Virtual Labs, enabling production-like instances of virtual environments. The feature replaces Veeam Virtual Labs.

Specifically, they help to ensure teams are developing and testing with the most recent data copies. Developers can spin up instances of the production environment as they design new features. The DataLabs provide sandbox environments to test new patches and updates. The copies also enable testing of security vulnerabilities without disrupting production systems.

In addition, an organization can examine and classify data, which helps comply with regulations such as the General Data Protection Regulation.

More than ‘always on’

Veeam DataLabs is one piece of how the vendor is expanding its data management functionality.

Veeam laid the groundwork at VeeamON 2018 for its strategy and vision of what it’s calling the “Hyper-Availability Platform for Intelligent Data Management,” a shift from its recent “Availability for the Always-On Enterprise” platform.

Hyper-availability is more than availability and being always on; it’s enabling artificial intelligence and being able to move data fluidly across a multi-cloud environment, said Danny Allan, vice president of product strategy.

Veeam executives noted the difficulties that organizations are having with their data management. Not only is the volume of data increasing exponentially, it’s also often on multiple platforms, in multiple locations.

Combining AI and automation will drive the most innovative disruptions of the next decade, Allan said. Veeam is building intelligence into its platform, but didn’t provide specifics Tuesday at VeeamON.

“Leveraging intelligence for better data outcomes is where we need to go as an industry,” Allan said.

Veeam recently reached 300,000 customers and claims to be growing by 4,000 per month.


May 15, 2018  7:11 AM

Nutanix .NEXT 2018: ‘Sherlock’ targets IoT data

Dave Raffo
Nutanix, Storage

Nutanix .NEXT 2017 focused on the hyper-converged infrastructure pioneer’s move to the public cloud with Xi Cloud Services. Nutanix .NEXT 2018 last week was a showcase for Nutanix Beam, the vendor’s multi-cloud cost management service. Those launches went in directions most expected the vendor to eventually take hyper-convergence.

Nutanix .NEXT 2019 will likely have an edge computing launch, although it’s not clear that will be the main focus. But the HCI vendor last week unveiled Project Sherlock at Nutanix .NEXT 2018. A Nutanix executive described Sherlock as an enterprise cloud software stack to harness hundreds of zettabytes of data that will be captured by Internet of Things (IoT) devices.

Satyam Vaghani, Nutanix’s VP of IoT and AI, demonstrated Sherlock during one of the .NEXT keynotes. He showed how a retail store could use the technology to identify customers and automate checkout using technologies such as facial recognition. The software could tie into the store’s sensors, cameras, kiosks and rack servers at the edge and connect to the cloud. The demo included a SaaS console to support “planet scale” IoT infrastructure.

“Enterprise IoT is very complicated because of the variety of devices involved,” Vaghani said. “It’s planet scale, not data center scale. Platforms for IoT operations are very different than platforms we are used to running in the data center. Wouldn’t it be great if there was a way for Nutanix to provide a managed service that can be used to manage all of this planet scale infrastructure, all from a central place?”

Vaghani said the technology could allow users to walk out of a store without stopping at a checkout counter to complete a purchase, board an airplane without a boarding pass or wear an ID badge at work. “These are the headliner use cases of IoT, the biggest use case in the edge cloud,” he said.

Nutanix chief product and development officer Sunil Potti said he expects “more concrete announcements” for Sherlock at .NEXT 2019. “Now it’s in what we call the Series A financing mode,” he said.

Considering Xi still isn’t ready for wide-scale consumption a year after its launch, there is a good chance Sherlock won’t make it into next year’s user conference. Like Xi, Sherlock requires a significant amount of engineering work.

“Just like we couldn’t take our current OS and put it in Xi, we had to re-do some core components” for Sherlock, Potti said. “When we extend our stack to the edge, we can’t take our current storage stack. The amount of real time processing needed, the latency requirements, the footprint requirements … everything is different.”

Government IT wrestles with cloud costs, management

Nutanix Beam, a SaaS offering for managing cloud costs, resonated with users at Nutanix .NEXT 2018 who need to get a handle on public cloud costs. Cloud sprawl was a topic during a breakout session with admins from government agencies, a vertical that often has mandates to use public clouds.

Beam provides a dashboard showing all of an organization’s cloud services and monthly pricing. Admins can manage their cloud resources from the dashboard, and Beam recommends cost saving changes.

David Gokey, chief engineer for Mission Systems at NORAD-U.S. Northern Command (USNORTHCOM), said he saw Beam for the first time at the show and it fit the bill for what he’s been looking for.

“I need a cloud economist,” he said of the task of keeping track of the costs of all his agency’s cloud services. “I need somebody 100 percent dedicated to look at all the products out there and know how we can control that. It has to be automated.”

Derek Williams, director of data center operations for the state of Louisiana, said his group operates as a service provider for state agencies. That requires a great deal of cost control.

“Parking four petabytes of data in the cloud is not cheap,” he said. “The way we handle it is to establish a service catalog per bid. We own the back end on the IT side, then we look at the business case and determine where that infrastructure should live. The business shouldn’t really care where that server is.  They have a business case, not an IT case. We figure out what’s cost effective on the back end. People used to say ‘I’ve got this much money has to be allocated for this project.’ Now it’s turned around for grants. You’re no longer getting money to buy hardware, this is a service. You have to change the way you think now. You say I have this much money for someone to provide a hosting service. We put a rate on that based on the internal IT cost.”

Using multiple clouds will complicate things more. Gokey and Williams said they find the public clouds do different things well, making it necessary to use more than one.

Gokey said Microsoft Azure is way ahead of market leader AWS for high performance computing, and Google has the best AI capabilities. “By 2030, AI and machine learning will be the largest piece of our workload,” he said. “We’re going to need a multi-cloud strategy. We want to give everybody options. People should have choices between where they want to push those workloads.”

Williams said the Louisiana IT team found AWS ran its workloads well and “we saw no reason to go and learn a different stack for every cloud.” But that is changing with advances such as Kubernetes that make it easier to move workloads.

“If we can get it to where we can move workloads without having to care about where it goes, and security rules follow and we can get it all locked down, that will be the turning point for us where we don’t care what cloud it’s in,” Williams said. “But we haven’t hit that state yet.”

Dell EMC upgrades XC Nutanix-based HCI

Dell chose Nutanix .NEXT 2018 rather than Dell Technologies World the previous week to roll out its new Dell EMC XC appliance based on Nutanix software. Like the Dell EMC VxRail HCI platform that uses VMware vSAN software, XC runs on Dell PowerEdge servers.

The Dell EMC XC940-24 is the first quad-socket XC appliance and can hold 6 TB of memory per appliance. The XC940-24 is available with all flash or as a hybrid with flash and hard disk drives, and supports 10 Gigabit Ethernet or 25 GbE networking. Dell positions it as the XC version for running high-performance applications such as in-memory and memory-intensive databases.

The XC940-24 also has a new interface that integrates with the Nutanix Prism software stack. “The interface looks like Prism but Dell designed it,” said John Shirley, director of product management for the Dell EMC XC family.

Dell sees XC as the HCI choice for customers running multiple hypervisors, while VxRail is for VMware-only customers. But the hypervisors of choice for XC customers rarely include Nutanix AHV, Shirley said. He said Microsoft Hyper-V is the second most common choice behind VMware’s ESX, although Nutanix claims more than 35% of its customers are using AHV.

Dell EMC is not on the list of backup vendors that support AHV. Its Avamar software and Data Domain backup targets are not integrated with AHV. “If we see big traction, we’ll take a look at it,” Shirley said. “It’s not on our roadmap now.”

XC sales contribute to Dell’s No. 1 spot in HCI market share. Dell ranks first in IDC’s HCI hardware rankings and Dell-owned VMware is tops in HCI software, with Nutanix standing second on both lists. Nutanix CEO Dheeraj Pandey said his company is second due to “funky accounting,” but Dell is chasing world domination in HCI.

“We haven’t been shy about saying we want to be number one in hyper-converged,” Shirley said.

Will HCI SDN go with the Flow?

Potti said Nutanix Flow software-defined networking will converge one more IT role, bringing HCI administration more in line with the cloud.

“You can consolidate the networking admin with the storage admin and the server admin,” he said. “In AWS, there is no networking admin. There is no storage admin. There is no server admin. There’s only the cloud admin or cloud operations or cloud DevOps. So that hyper-convergence of roles has to happen. That’s the core reason we’re doing the network piece.”

Potti said AWS was more of the model for Flow than VMware NSX, which gives SDN to VMware’s vSAN HCI software. “You go to Amazon, provision virtual machines, then suddenly there’s a new policy for security,” he said. “You can put 10 VMs in a security group and the system makes sure they’re segmented.”


May 11, 2018  7:51 AM

OwnBackup GDPR features seek to aid compliance

Paul Crocetti
Storage

With two weeks to go before the compliance deadline for the General Data Protection Regulation, cloud-to-cloud backup vendor OwnBackup is helping its customers prepare for the comprehensive set of rules.

GDPR updates data protection, privacy and access laws across the European Union, and goes into effect on May 25, 2018. It affects not only companies in the EU, but any company that processes data on European Union residents.

The OwnBackup GDPR feature set allows customers to find a data subject’s information within backups.

Built on OwnBackup’s backup and recovery service, the features help customers respond to data subject rights requests, as they apply to personal data within backups and archives, according to the vendor.

The OwnBackup GDPR functionality may be a trend-setter in backup for software as a service (SaaS) applications, such as Salesforce, Slack and ServiceNow, all platforms that the vendor protects. SaaS data is created in the cloud and often needs enhanced protection beyond the basics offered by the applications.

“We’re happy to put a stake in the ground,” said Lee Aber, OwnBackup’s chief information security officer. “We’re trying to move the SaaS backup market forward.”

Key elements of GDPR include:

  • The right to be forgotten: Data subjects can request personally identifiable data to be erased from a company’s storage.
  • The right to rectification: Data subjects can expect inaccurate personal information to be corrected.
  • The right of portability: Data subjects can access personal data that a company has about them and transfer it.
  • The right of access: Data subjects can review data that an organization has stored about them.

The OwnBackup GDPR features include:

  • Erasure requests, submitted through the OwnBackup application, which support data subjects’ right to be forgotten;
  • Rectification requests, submitted through the OwnBackup application, which support data subjects’ right to have their personal data updated;
  • Audit logs and notifications, sent to the data controllers’ administrators confirming that an erasure or rectification request has been processed;
  • Exporting or transferring a subject’s personal data to support the right of portability;
  • The capability to search for data subject information across backups and archives, including within attachments; and
  • The ability to set custom data backup expiration dates.

With the OwnBackup GDPR feature set, customers can submit a rectification request through the application. (Screenshot courtesy of OwnBackup)

Aber said he has been impressed with how the SaaS community has stepped up to get the word out about GDPR. Salesforce and others have provided guidance and education.

OwnBackup CEO Sam Gutmann said customers have focused on GDPR in the last four to five months.

“Once the regulations are live, I think you’ll see a lot more focus in the U.S.,” Gutmann said.

Gutmann said the OwnBackup GDPR feature set will be live next week. Customers will see a specific tab for GDPR in the administrative console, with its own subset of tools. There is no upcharge for the features, as they’re part of OwnBackup’s core offerings.

OwnBackup plans to add more features, including the ability to apply a group of requests in bulk.

Aber said he thinks interpretation of GDPR will evolve, as will OwnBackup’s approach. Once enforcement begins, Aber suspects the authorities will go after the most egregious rule-breakers.

In general, GDPR provides common sense guidelines around data protection, transparency and privacy that should help organizations.

“It’s not just a compliance obligation,” Aber said. “It actually makes sense.”


May 4, 2018  3:04 PM

Dell EMC storage IPO, VMware merger plans still unclear

Garry Kranz

Dell Technologies World 2018 has come and gone with hardly a mention of the corporate reorganization Michael Dell and his team are considering as the next step after Dell’s $60 billion-plus 2016 acquisition of storage giant EMC.

Dell has been weighing a return to public ownership as part of a strategy to pay down a mountain of debt related to the merger with EMC. A potential reverse merger with its VMware subsidiary is one option Dell is considering, along with a possible spinoff of the Dell EMC storage division as a public company.

But Dell chairman and CEO Michael Dell stuck to technology during his Monday keynote address kicking off the conference. He also sidestepped questions about the issue during a media briefing following his keynote, suggesting any interested parties should read the SEC filing the company made listing its options.

“Have you read our Form 13D (securities filing)?” Dell responded to a question from one reporter. “If you haven’t read it, I suggest you read it in full. It contains everything we’ve said on the matter. We filed it because we publicly said we were thinking about some things. If I say anything else about it, we’ll have to make another filing.”

The coals of Dell’s post-merger considerations are being stoked by roughly $46 billion of debt related to the EMC transaction. Getting a handle on the debt service was an immediate concern when Dell and EMC storage fused into a single company. Analysts and industry observers have predicted from the outset that Dell would have to lop off business segments that no longer fit its long-range goals, including services and software.

In the Jan. 31 filing with the U.S. Securities and Exchange Commission, actually made by publicly traded VMware, Dell Technologies referred to several options under review.  One option involves Dell EMC storage pursuing an initial public offering (IPO) of stock. That would mark a sharp departure from the origins of the EMC deal.

Taking Dell EMC storage public likely would net much larger proceeds than the $555 million from Dell subsidiary Pivotal Software’s recent IPO. At the time it acquired EMC, Dell touted the fact that the deal would shield the storage business from the scrutiny of Wall Street.

Since the merger, Dell EMC storage revenue has sagged, with Dell Technologies claiming the lion’s share of revenue from sales of traditional servers and networking gear. Would investors get behind an EMC IPO, considering the industrywide decline in externally networked storage sales? That’s an open question, and one undoubtedly getting batted around in Dell’s executive deliberations.

A reverse merger with VMware would shift the Dell EMC debt to the virtualization vendor’s books. That would entail Dell, which owns 81% of VMware, being acquired by its much smaller subsidiary. VMware could absorb the debt by selling additional shares to amortize the cost, but shareholders might balk if they perceive such a decision would dilute the value of VMware holdings.

“It’s not a guarantee that activist investors won’t jump in. But there is so much cash being generated by VMware (it could make sense) as a way of shuffling that paper debt around,” said Greg Schulz, senior advisory analyst at consulting firm Server and Storage IO, based in Stillwater, Minn.

Then again, it’s entirely possible that no changes lie ahead for VMware or Dell EMC storage. Listed among the various options in Dell’s securities filing is “maintaining the status quo.” Most industry observers, in fact, say they would be surprised if Dell made any substantive change to its corporate structure.

For now, though, Dell’s flight path toward a decision appears to be stuck in an indefinite holding pattern, searching for a runway and soft landing.

