Storage Soup


February 17, 2017  10:13 AM

NetApp hyper-converged entry: Better late than early?

Dave Raffo

NetApp is entering the hyper-convergence arena with the latecomer’s cry: We may be last into the market, but we’ll be the best.

Emboldened by the recent strong revenue growth of its late-to-market All Flash FAS (AFF) arrays, CEO George Kurian Wednesday night outlined plans to play rapid catch-up in hyper-convergence.

Kurian said a NetApp hyper-converged product will launch in the May to July timeframe. He gave few details, but said the system will be built on SolidFire all-flash storage and NetApp Data Fabric technology that links on-premises and cloud storage.

NetApp has been noticeably absent from the hyper-converged infrastructure (HCI) market after its 2014 plans for an EVO:RAIL system in partnership with VMware never got off the ground.

Three months ago, Kurian indicated there was no NetApp hyper-converged strategy. He said NetApp addressed the advantages of hyper-convergence through its FlexPod converged infrastructure partnership with Cisco. FlexPods bundle storage, compute and virtualization as separate pieces rather than in the same chassis. Kurian has also pointed to NetApp’s cloud-centric SolidFire arrays as an answer for customers who want hyper-convergence.

How will NetApp handle hyper-convergence?

During NetApp’s quarterly earnings call Wednesday night, Kurian talked less about what a NetApp hyper-converged infrastructure might look like than what is missing from current HCI appliances.

“We will do what has not yet been done by the immature first generation of hyper-converged solutions — bring hyper-converged infrastructure to the enterprise by allowing customers to run multiple workloads without compromising performance, scale or efficiency,” he said.

What will NetApp do differently and better? Kurian said the vendor will have the first fully cloud-integrated hyper-converged system that moves data across tiers on-premises and in the public and private clouds. That is something executives at HCI market-leader Nutanix say they are working on now.

Kurian characterized current hyper-converged products as “first-generation,” lacking enterprise data management and consistent performance. He said that relegates them to departmental use and the low-end of the market, a statement that almost all the current hyper-converged vendors would dispute.

Along with Nutanix, server vendors make up most of the HCI market. Dell EMC, Cisco, Hewlett Packard Enterprise (with its recent SimpliVity acquisition) and Lenovo all sell HCI appliances.

Kurian said he doesn’t mind playing catch up.

“There have been lots of companies that have gone after the early-adopter segment with a subset of the features that enterprise customers really want and have failed in the long run,” he said. “And so, first to market doesn’t necessarily mean the big winner, right?”

Kurian points to NetApp’s all-flash arrays to back up his theory. NetApp lagged behind other large vendors as well as startups in offering mainstream all-flash arrays. It never brought its home-grown FlashRay product to general availability, and only found success on its second attempt to build FAS into an all-flash option.

Kurian said NetApp’s all-flash arrays grew 160% year-over-year to approximately $350 million last quarter. That includes AFF, the EF Series for high-performance computing and SolidFire. But that still leaves NetApp well behind market leader EMC, which claims its all-flash XtremIO array generated more than $1 billion in bookings in 2016.

NetApp won’t be first with all-flash hyper-converged either. Most vendors in the market have all-flash options, and Dell EMC claims 60% of its VxRail customers deploy all-flash appliances. Cisco added an all-flash version of its HyperFlex HCI appliance this week.

In a blog posted soon after the earnings call, John Rollason, NetApp’s director of product marketing for next generation data center, echoed Kurian’s comments about current HCI systems. Rollason criticized hyper-converged systems for fixed compute-to-storage ratios, a lack of performance guarantees for mixed workloads, immature data services and eight-node cluster limits that result in silos. While not all of those criticisms apply to every hyper-converged system, Rollason’s and Kurian’s comments hint at what NetApp will try to do: deliver hyper-converged systems that scale higher with predictable performance, aimed at the enterprise.

Who will NetApp partner with?

We don’t yet know what NetApp will do for virtualization and compute. You can expect a NetApp hyper-converged system to incorporate VMware. NetApp has had a good working relationship with VMware, despite VMware being owned by NetApp storage rival EMC and now Dell.

NetApp will also need a server partner. NetApp FlexPod partner Cisco is a possibility. Cisco has its own HyperFlex HCI appliance and added an all-flash version this week, but it allows several HCI software vendors, including VMware with vSAN, to run their software on its UCS servers. NetApp could also take the OEM route that EMC took before being bought by Dell. EMC’s first hyper-converged systems used Quanta servers before switching to Dell PowerEdge in September.

NetApp promises more details soon. Whatever it plans, it will have to be good to make up for being late.

February 16, 2017  9:01 AM

Dell EMC VxRail hitches ride on Enterprise Hybrid Cloud

Dave Raffo

Dell EMC’s VxRail turned one today, and the vendor marked the anniversary by adding the hyper-converged platform to its Enterprise Hybrid Cloud package.

Dell EMC claims more than 1,000 customers for VxRail through the end of 2016, with more than 8,000 nodes, 100,000 CPU cores and 65 PB of storage capacity shipped. VxRail is EMC’s first successful hyper-converged appliance, following a short, failed attempt with the Vspex Blue product launched in 2015.

Like Vspex Blue, VxRail is based on Dell-owned VMware’s vSAN hyper-converged software. It also runs on Dell PowerEdge servers, although VxRail originally incorporated Quanta servers until the Dell-EMC acquisition closed last September. VxRail launched just after VMware upgraded vSAN to version 6.2, which added data reduction and other capabilities that improved its performance with flash storage. Dell EMC VxRail senior vice president Gil Shneorson said 60% of VxRail sales have been on all-flash appliances.

“We’re definitely seeing the combination of hyper-converged and all-flash taking off in a meaningful way,” he said.

Now VxRail is an option for Dell EMC Enterprise Hybrid Cloud (EHC) customers. EHC is a set of applications and services running on Dell EMC hardware that provide automation, orchestration and self-service features. The software includes VMware vRealize cloud management, ViPR Controller and PowerPath/VE storage management, and EMC Storage Analytics.

Other EHC storage options include EMC VMAX, XtremIO, Unity, ScaleIO and Isilon arrays sold as VxBlock reference architectures with Dell PowerEdge servers. EHC is also available with VxRack Flex hyper-converged systems that use Dell EMC ScaleIO software instead of VxRail appliances. Data protection options include Avamar, RecoverPoint and Vplex software and Data Domain backup hardware.

Along with the Dell EMC VxRail option, the vendor is adding subscription support and encryption as a service to EHC. Dell EMC does not break out EHC financials, but Dell EMC senior vice president of hybrid cloud platforms Peter Cutts said its revenue was in the “hundreds of millions of dollars” last year.

Adding a Dell EMC VxRail option lets EHC customers start with as few as 200 virtual machines.

“This gives customers the ability to start smaller, configure EHC as an appliance and go forward in that direction,” Cutts said.

For now, organizations that want to use VxRail with EHC need to buy a new appliance. Cutts said the vendor is working on allowing customers to convert existing VxRail appliances to EHC, but that is not yet an option.

Using VxRail as part of EHC makes sense as vendors begin to position hyper-converged systems as enterprise cloud building blocks. Hyper-converged market leader Nutanix now positions its appliances that way, emphasizing its software stack’s ability to move data from any application, hypervisor or cloud to any other application, hypervisor or cloud. Nutanix is VxRail’s chief competitor.

“We’ve seen requests for more data center-type features and functionality,” Shneorson said. “VxRail is being put into data centers in much larger clusters than we originally anticipated. We’re seeing a shift from an initial focus on remote offices and test/dev to mission critical data center use.”

But unlike Nutanix, Dell EMC still sells traditional storage, so Shneorson admits hyper-convergence is not a universal answer because not every organization wants to scale its storage and compute in lockstep.

“It’s a matter of economics,” he said. “The advantage of hyper-converged is you can start small and grow in small increments. But some customers’ environments are already large and predictable in growth. By using shared storage you can get any ratio of CPU to disk. With hyper-converged, there is always a set ratio of CPU to disk. If you want massive amounts of storage with a small amount of CPUs for example, you would be better served by a traditional architecture.”


February 8, 2017  11:16 AM

Microsoft Office 365 backup included in Acronis Backup 12 update

Paul Crocetti

Acronis has added cloud-to-cloud backup for Microsoft Office 365 in the latest version of Acronis Backup.

Acronis Backup 12 protects Microsoft Office 365 and 15 other platforms, all under one management console.

The vendor’s Microsoft Office 365 backup features include:

  • Automatically back up Microsoft Office 365 emails, contacts, calendars and tasks;
  • Store backup data locally or in the cloud for long-term access or archiving;
  • Preview, browse and search backup content;
  • Recover individual and shared mailboxes to the original or an alternative location; and
  • Recover and deliver individual items by email without restoring the entire backup.

Acronis’ existing portfolio can back up local systems running other Office 365 applications such as Word, Excel and PowerPoint.


Acronis Backup 12 features Microsoft Office 365 backup. (Image courtesy of Acronis)

Microsoft Office 365 backup subscription licenses are available online and from local distributors. The monthly price per mailbox ranges between $1.67 and $3.33, depending on geography, volume and the subscription term, according to Acronis.

Most leading backup software vendors now support Office 365 backups as customers begin to realize that putting data in the cloud does not mean it’s protected. Microsoft Office 365 backup features have been available in the Acronis Backup Cloud product for service providers since July 2016. Office 365 is the first cloud application protected by Acronis backup.

With ransomware an increasingly prevalent threat, it’s critical to keep a backup copy in another location, said Frank Jablonski, Acronis vice president of product marketing. People in the industry anticipate ransomware becoming smart enough to hack into backups.

“We’ve hardened our agent that makes it more difficult, if not impossible, to get into backup data,” Jablonski said.

In the next product update, Acronis Active Protection against ransomware will protect user devices and data by blocking the attacks and instantly restoring affected data, according to the vendor.

In addition, backup for OneDrive and SharePoint will come later in the year.

The Acronis Backup 12 console allows administrators and service providers to manage data from physical systems, virtual machines (VMs) and cloud workloads in the Acronis cloud, on premises, and in Microsoft Azure and the Amazon Elastic Compute Cloud (EC2). The software can back up Azure VMs and EC2 instances.

The Acronis Backup 12 update also introduces support for VMware vSphere 6.5, VMware Changed Block Tracking, Acronis Instant Recovery, Acronis vmFlashBack and Replication with WAN optimization.


February 7, 2017  4:50 PM

Nakivo Backup & Replication for Hyper-V gets ready for prime time

Sonia Lelii

Nakivo Inc.’s latest backup and replication product offers support for VMware vSphere 6.5, Microsoft Hyper-V 2016 and Windows Server 2016 backup, along with a way to delete obsolete or orphaned backup data in bulk with a single click.

Sergei Serdyuk, Nakivo’s director of product management, said Nakivo Backup & Replication v7 with added support for Hyper-V is currently in beta and will be generally available later in the month.

The product performs image-based backups and uses virtual machine (VM) snapshots to create backups that can be stored in a local data repository or in the Amazon Web Services (AWS) cloud.

Nakivo Backup & Replication v7 beta offers native backup for Hyper-V, including Hyper-V Server 2012 and 2012 R2. The agentless software is application-aware via Microsoft Volume Shadow Copy Service technology and supports Microsoft Exchange, Active Directory, SQL Server and SharePoint. It recovers operating system files and application objects.

“We have a bulk feature and we use traffic redirection technology to speed up the data transfers,” Serdyuk said. “You can get an average of two times improvement if you use this feature in terms of the backup speeds. We will have a single product to back up Hyper-V to AWS environments.”

The Nakivo Backup & Replication v7 beta software also supports VMware vSphere 6.5. The software creates VM snapshots of the data to be backed up, identifies changed data using VMware Changed Block Tracking, sends that data to the repository and then removes the temporary snapshots.
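Nakivo has not published its implementation, but the flow described above follows a common changed-block-tracking pattern: snapshot, copy only the changed blocks, then delete the snapshot. The Python sketch below is a toy illustration of that pattern; the FakeHypervisor class and its methods are invented stand-ins, not Nakivo or VMware APIs.

# Toy sketch of a changed-block-tracking (CBT) incremental backup pass.
# Everything here is an invented stand-in; it is not Nakivo or VMware code.

from dataclasses import dataclass, field

@dataclass
class FakeHypervisor:
    """Invented stand-in for the hypervisor calls a backup product would make."""
    disk: dict = field(default_factory=dict)      # block offset -> block contents
    changed: set = field(default_factory=set)     # offsets dirtied since the last backup

    def create_snapshot(self):
        return dict(self.disk)                    # point-in-time copy of the disk

    def query_changed_blocks(self):
        return sorted(self.changed)               # what change tracking would report

    def remove_snapshot(self, snapshot):
        self.changed.clear()                      # next incremental starts from here

def incremental_backup(hv, repository):
    snapshot = hv.create_snapshot()               # 1. take a temporary VM snapshot
    try:
        for offset in hv.query_changed_blocks():  # 2. only blocks flagged as changed
            repository[offset] = snapshot[offset] # 3. ship changed data to the repository
    finally:
        hv.remove_snapshot(snapshot)              # 4. delete the temporary snapshot

hv = FakeHypervisor(disk={0: b"base", 4096: b"new data"}, changed={4096})
repo = {0: b"base"}                               # the initial full backup already holds block 0
incremental_backup(hv, repo)
print(repo)                                       # {0: b'base', 4096: b'new data'}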

Active Directory integration has been added, allowing multiple users to log into the product. IT administrators can map Active Directory groups to Nakivo user roles so users can log in with their domain credentials.

“This new feature was a request by customers, mostly by enterprise customers who use Active Directory,” Serdyuk said. “It improves the company’s security, management and scalability.”

An activities tab offers a centralized view of software activity, including jobs that are running, files being stored and object recovery sessions. Protection for containers is also supported.

“In the VM infrastructure, there are a number of containers,” Serdyuk said. “Customers can protect  a particular container (such as a host or a cluster) and all current and future VMs in this container will be added to the job. In addition, customers can exclude VMs they don’t want to protect (test machines, for example).”

Nakivo Backup & Replication has also added the ability to skip swap files, the temporary files on disk that hold memory pages moved out of RAM.

“The new version automatically skips those files,” Serdyuk said. “They can become quite large.”

Version 7 of the Nakivo software does forever incremental backups, which means that after the initial full backup only the subsequently changed data is sent to the repository for more efficient backups. Backup & Replication v7 beta supports Microsoft Exchange, Active Directory, Microsoft SQL and Oracle databases running inside VMs.


February 7, 2017  11:09 AM

Pivot3 vSTAC sales soaring, CEO says

Dave Raffo

Hyper-converged vendor Pivot3 said it more than tripled its revenue in the fourth quarter of 2016 from the previous year, and grew total revenue 84% in 2016 from 2015.

Pivot3’s growth came in part from two new products launched last year. It added the Pivot3 Edge Office for SMBs and the Pivot3 vSTAC SLX, which incorporates PCI Express flash technology acquired from NexGen Storage in January 2016. Pivot3 is also incrementally integrating quality-of-service technology acquired from NexGen into the Pivot3 vSTAC OS, and CEO Ron Nash said new products adding both QoS and flash will ship in 2017.

Quality of service may hold the key to the vendor’s future success. Pivot3’s Dynamic QoS includes a policy engine that prioritizes workloads and manages data placement and protection. Nash said coming Pivot3 vSTAC products will extend QoS to the cloud and legacy storage. The vendor will add another NexGen storage flash system in 2017.

“Instead of just having our policy engine work on hyper-converged infrastructure, we’re extending it out to the cloud and backward to legacy systems,” Nash said.

He said the engine will be able to look at characteristics, such as service-level agreements and required response times, and find the best storage tier for application data. For instance, if you’re looking for cheap storage and don’t need fast response times, the data can go to a cold storage public cloud service.
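Pivot3 has not disclosed how its policy engine makes these decisions, but the idea Nash describes can be illustrated with a hedged Python sketch like the one below, which picks the cheapest tier that still meets an application's required response time. The tier names, latencies and costs are invented for the example and do not reflect Pivot3's actual engine.

# Invented illustration of SLA-driven data placement.

TIERS = [
    # (name, typical latency in milliseconds, relative cost per GB)
    ("nvme_flash", 0.2, 10.0),
    ("hybrid_hci", 2.0, 4.0),
    ("legacy_san", 8.0, 2.0),
    ("cloud_cold", 500.0, 0.5),
]

def place(required_latency_ms: float) -> str:
    """Return the cheapest tier that still meets the latency requirement."""
    candidates = [t for t in TIERS if t[1] <= required_latency_ms]
    if not candidates:
        return TIERS[0][0]        # nothing meets the SLA, so fall back to the fastest tier
    return min(candidates, key=lambda t: t[2])[0]

print(place(1.0))      # latency-sensitive application -> nvme_flash
print(place(1000.0))   # cheap archive data with no response-time requirement -> cloud_cold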

Nash said the Pivot3 vSTAC SLX system, integrating NexGen technology, launched in 2016 and appealed to enterprises because of its flash performance. “It kind of surprised us where it was sold,” he said. “We thought it might [sell] in the midrange, but we found high-end people are using it, too. If you have an application that needs low latency, it gives you the low latency an NVMe-PCI-type storage device gives.”

‘Little guy’ moving up to compete with ‘the big boys’

Nash said Pivot3’s long-term goal is to follow Nutanix’s 2016 initial public offering with an IPO of its own to become a public company. He said Pivot3 takes a different approach than Nutanix, though. He is trying to avoid the heavy losses Nutanix has suffered, even as it racks up impressive revenue growth.

“We’re much more disciplined about financial performance,” he said. “We want to grow fast, but if the difference between 80% and 150% growth is [that] I have a massive loss at 150%, I’ll stick to 80%. We’re still losing money, but not hemorrhaging.”

He said Pivot3’s $55 million funding round in March 2015 should get the company to profitability, although it may raise another round ahead of an IPO.

For now, Pivot3 is the little guy in the land of hyper-convergence giants. Its competitors are newly public Nutanix, Dell EMC (including VMware), Hewlett Packard Enterprise (with its new SimpliVity acquisition) and Cisco.

“A year ago, I was competing with startup companies; now I’m competing with big companies. I count Nutanix as a big company now,” he said. “The big boys are moving in. But I think we can compete against them.”


February 7, 2017  9:52 AM

Stratoscale buys Tesora, adds AWS database services

Garry Kranz

Stratoscale acquired database-as-a-service provider Tesora Inc. in a move aimed at strengthening its AWS database services.

The hyper-converged software startup added Tesora’s open database-as-a-service platform Monday. The same day, Stratoscale launched a homegrown relational database service in its Amazon-compatible cloud storage stack.

The Tesora technology will be phased in with future rollouts of Stratoscale Symphony hyper-converged software. Symphony supports block and object storage capabilities by turning x86 servers into hyper-converged compute clusters.

Symphony builds an Amazon-like private cloud behind a firewall to help enterprises reduce VMware licensing costs. Customers can connect their legacy storage to a Symphony cluster and have Stratoscale orchestrate the compute, networking, storage and virtualization resources.

Tesora Database as a Service (DBaaS) is an enterprise-hardened version of OpenStack Trove, the native database service for OpenStack-based clouds. The Tesora acquisition hastens the delivery of relational AWS database services, a feature already on Stratoscale’s roadmap.

“This is a big expansion for us. It allows us to engage with customers who have been waiting for this type of capacity,” Stratoscale CEO Ariel Maislos said. “Going into production with database as a service is very complex, so this will save us about a year of development time.”

Tesora DBaaS enables self-service management and provisioning for Cassandra, Couchbase, DataStax Enterprise, DB2 Express, MariaDB, MongoDB, MySQL, Percona, Redis and Oracle. Stratoscale said it will use the Tesora platform to augment its AWS database services, which include its AWS Relational Database Service and AWS NoSQL database offerings.

Maislos said enterprises want Stratoscale’s help with large-scale deployments that mirror AWS database services such as Amazon RDS.

“People want the ability to run their applications either in Amazon or inside their data center,” he said. “If you want to do a hybrid cloud, we give you an on-premises environment that is compatible with the private cloud. That’s the Holy Grail that customers love.”

Since its launch in 2015, Stratoscale has expanded its Amazon support to include Simple Storage Service, DynamoDB, Elastic Block Store, ElastiCache in-memory cache, Redshift and Virtual Private Cloud services. Symphony 3.4 is currently shipping with support for Kubernetes as a service and a one-click Application Catalog that deploys more than 140 prepackaged applications.

Stratoscale did not disclose terms of the deal. Tesora’s Cambridge, Mass., office will be added to Stratoscale locations in Israel, New York City and Sunnyvale, Calif. Maislos said approximately 20 Tesora employees are now part of Stratoscale.


February 6, 2017  10:17 AM

‘Alexa, provision my Tintri storage’

Carol Sliwa

Want to manage your Tintri storage the same way you turn on lights, set an alarm, or choose music with an Amazon Echo or Dot device?

Tintri Inc. launched a proof of concept that lets customers ask Amazon’s Alexa voice service to initiate tasks such as provisioning virtual machines (VMs), taking snapshots and applying quality of service.

Tintri storage engineers used Amazon’s software development kit to map Tintri’s application programming interfaces (APIs) to the Alexa service, enabling Echo and Dot devices to recognize and execute storage commands.

Chuck Dubuque, vice president of product marketing at Tintri, said Tintri will use feedback on the proof of concept to gauge the potential to turn the “cool demo” into a product.

A video demonstration shows a Tintri employee instructing Amazon Alexa to ask the system to provision a VM. Alexa prompts the user with questions such as “What type of VM would you like to create?” and “How many VMs would you like to create?”

Dubuque admitted that using Amazon Echo beyond home use cases might be “a little further out” in the future. But the proof of concept gives Tintri experience using Amazon’s voice recognition and natural language capabilities and making its self-service APIs more responsive to human commands, he said.

“It’s relatively easy to write an admin interface for the storage administrator or the VM administrator who already thinks about things at the low level around VMs and vdisks and other things,” Dubuque said. “But for people who aren’t experts on the infrastructure and just want to say, ‘Hey Alexa, create a test environment,’ what does that mean? Underlying all of the assumptions, a test environment means this set of 100 virtual machines is created from this template, put into this network with these characteristics. That’s more complicated.”
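Tintri has not released the code behind the demo, but a custom Alexa skill of this kind is typically an AWS Lambda function that receives the Alexa Skills Kit JSON request and returns a speech response. The sketch below is a minimal, hedged illustration of that shape; the "ProvisionVmIntent" intent, its slots and the idea of posting to a storage REST endpoint are assumptions, not Tintri's actual implementation.

# Hedged sketch of an Alexa custom-skill handler (AWS Lambda). Only the
# request/response envelope follows the documented Alexa Skills Kit JSON
# format; the intent and slot names are hypothetical.

def speak(text, end_session=True):
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event.get("request", {})
    if request.get("type") != "IntentRequest":
        return speak("Ask me to provision a VM to get started.", end_session=False)

    intent = request["intent"]
    if intent["name"] == "ProvisionVmIntent":                      # hypothetical intent name
        slots = intent.get("slots", {})
        vm_type = slots.get("VmType", {}).get("value", "default")  # "What type of VM...?"
        count = slots.get("Count", {}).get("value", "1")           # "How many VMs...?"
        payload = {"template": vm_type, "count": int(count)}
        # A real skill would authenticate and POST `payload` to the storage
        # vendor's REST API here; this sketch stops at building the request.
        return speak(f"Provisioning {count} {vm_type} virtual machines.")
    return speak("Sorry, I did not understand that request.")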

Chat option lets developers manage Tintri storage

At VMworld last August, Tintri demonstrated a text-based chat option to enable developers to collaborate with each other and manage Tintri storage. Dubuque said a customer in Japan used Tintri’s REST APIs to put together a simple robot to respond to system commands from within the Slack chat environment.

Developers in the virtual chat room could call out to a Tintribot — which appears as another “person” in the chat window — to tell the system to execute a command, such as firing up VMs to test new software.

“The Tintribot will acknowledge the command, maybe ask a few questions, and then once all of the VMs are up and running, reply back into the same chat window: ‘Hey, the 100 VMs are now ready. You can run your test,'” Dubuque said.

“It’s a way to enable self-service. In this case, it’s aligned to the developers who don’t really care about the details. They want to be able to do things on their own when they need to without having to hand it off to a third party” to launch VMs, Dubuque said.

Because the Slack-based ChatOps interface requires a username and password for login, the system can control what any given user is permitted to view and can create a time-stamped chat audit trail in case administrators need to troubleshoot a problem.

“You get to see all the humans who were involved in the decision, as well as what the environment was telling you – what’s successful and what wasn’t,” Dubuque said.

Tintri is still gathering customer feedback and has not determined a general availability date for the Slack-based ChatOps that performs operations from within a chat.

“It’s definitely something that has sparked a lot of interest,” Dubuque said.

Dubuque said the Tintri storage architecture is conducive to plug-in integration with systems such as Slack and Amazon Alexa. He said the company’s key differentiator is a web services model “where the fundamental unit that we manage is around the virtualized or containerized application.

“Our file system, our I/O scheduler, all of our storage operations are at that same level that virtualization and cloud management systems use to control compute and networking,” Dubuque said. “You can think of us as finishing the trinity of network, compute and storage being all aligned to the same abstraction level, which is a virtual machine, or a container, not around physical constructs.”

Dubuque said Tintri exposes REST APIs and interfaces with PowerShell and Python through a software development kit. He said other storage vendors use REST APIs that focus on storage constructs such as LUNs and volumes and don’t directly map to an individual application. That causes complexity when trying to automate the storage component of an application.
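The hostnames, endpoint paths and field names below are invented for illustration, but the sketch shows the contrast Dubuque is drawing: a VM-granular REST API lets automation address the object an application team cares about directly, while a LUN-centric API forces the automation layer to maintain its own mapping from VMs to storage constructs.

# Illustrative only: hostnames, endpoint paths and JSON fields are invented
# to contrast VM-granular and LUN-centric REST automation styles.

import requests

session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"   # placeholder credential

def snapshot_vm(vm_name):
    """VM-granular style: one call, addressed by the object the admin cares about."""
    return session.post(f"https://array.example/api/vms/{vm_name}/snapshots",
                        json={"retention_minutes": 240})

def snapshot_vm_via_lun(vm_name, vm_to_lun):
    """LUN-centric style: the automation layer must maintain its own
    VM-to-LUN mapping before it can protect a single application."""
    lun_id = vm_to_lun[vm_name]        # extra bookkeeping the array cannot do for you
    return session.post(f"https://array.example/api/luns/{lun_id}/snapshots")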


January 31, 2017  3:56 PM

Report: Better data theft protection needed for employee exits

Paul Crocetti

The processes for keeping data safe when employees leave a company are fundamental data protection best practices: backup, archive and encryption. Yet barely half of the organizations that took part in a recent survey have a plan that ensures data can be recovered if an employee changes or deletes it on the way out the door.

Osterman Research conducted a survey of 187 IT and human resources professionals in October 2016 and released the findings this month. The results show organizations are generally not prepared for data theft protection issues with departing employees, said Osterman Research president Michael Osterman. The report found that fewer than three in five organizations have a backup and recovery platform that ensures data can be recovered if an employee maliciously changes or deletes data before giving notice to leave.

“They know what to do, they’re just not doing it very much,” Osterman said.

Osterman suggested organizations should develop a plan for this issue and nail down who’s in charge of ensuring sensitive data is protected.

The report found that 69% of the business organizations surveyed had suffered significant data or knowledge loss from employees who had left.

Those employees may not have taken data mischievously. According to the report, there are three reasons employees leave with corporate data: They do it inadvertently; they don’t feel that it’s wrong; or they do it with malicious intent.

Mobilizing mobile protection

The BYOD movement has complicated matters. For example, an employee can create content on a personal mobile device and store it in a personal Dropbox account or another cloud-based system. That content never hits the corporate server.

“Get control over that kind of content,” Osterman said. One way to do that is to replace personal devices with ones managed by IT.

Virtual desktops can also help with data theft protection. Because they store no data locally, virtual desktops make it more difficult for employees to misappropriate data, the report said.

The report stressed it is important that “every mobile device can be remotely wiped” so former employees don’t have access to the content.

“Enterprise-approved apps and any associated offline content can be remotely wiped, even if the device is personally owned,” the report said.

Backup, archive, encrypt

A proliferation of cloud applications also makes it harder to recover employee data.

“While IT has the ability to properly back up all of the systems to which it has access, a significant proportion of corporate content, when stored in personally managed repositories, is not under IT’s control,” the report said. “Office 365, as well as most cloud application providers, do not provide backup and recovery services in a holistic manner, and so organizations can have a false sense [of] security about the data that is managed by their end users.”

To maintain complete visibility of sensitive corporate data across all endpoints, cloud applications and other storage repositories, the report suggests deploying a content archiving system.

“Email archiving is the logical and best first place to start the process of content archiving, but other data types — such as files, social media content, text messages, web pages and other content — should also be considered for archiving as well,” the report said.

The data theft protection report advocates encrypting data in transit, at rest and in use, regardless of its location. In addition to manual encryption, Osterman Research recommends encryption that automatically scans content based on policy and then encrypts it appropriately.

“Encryption alone can prevent much of the data loss that occurs when employees leave a company,” the report said.

Report ‘hit a nerve’

In a fairly decent economy, approximately one in four employees will leave a company in a year, Osterman said.

An Osterman Research client originally suggested the organization undertake the data theft protection report.

“I think it hit a nerve with a lot of companies,” Osterman said.

The sponsors of the report were Archive360, Druva, Intralinks, OpenText, Sonian, Spanning, SyncHR and VMware.

The fundamental goals of the report were to make people more aware of the issue and what can happen if they are not careful with data, and to raise awareness about backing up data and archiving, Osterman said.


January 27, 2017  10:05 AM

Quantum video storage customers range from police to pot growers

Dave Raffo

Quantum’s scale-out storage business is growing like a weed, with the help of a large weed grower.

While Quantum’s DXi disk backup libraries posted the biggest revenue increase of all its product lines last quarter, the StorNext scale-out storage business excites CEO Jon Gacek the most.

You have to love a market where deals include tape plus disk, and range from law enforcement to legal marijuana merchants. The Quantum video surveillance storage business last quarter included all of that.

Gacek said Quantum closed the most video surveillance deals ever last quarter. Running through a list of large wins, he included police departments in Canada and India, as well as smaller law enforcement agencies and “a company focused on [the] emerging cannabis growth market, where surveillance of the facility is critical.”

Each large Quantum video surveillance deal included StorNext software, disk plus tape, “reinforcing the power of our tiered storage value and expertise,” Gacek said on Quantum’s earnings call Wednesday.

Flash, dense drives push disk backup deals

Quantum’s disk-based backup revenue grew 17% year over year to $22.9 million. That success came after the release of the enterprise DXi6900-S deduplication library, which uses flash to speed up data ingest. The 6900-S also includes Seagate 8 TB self-encrypting hard disk drives. Gacek said DXi libraries won seven-figure deals at an Asian taxation department and a European insurance company, and other large deals at a U.S. telecom and a European supermarket chain.

“It’s a combination of flash that handles metadata and 8 terabyte drives that give it density. Nothing else looks like it,” Gacek said of the DXi6900-S.

Scale-out (StorNext) revenue increased 12% to $39.8 million, including Quantum video surveillance deals. Scale-out storage also includes media and entertainment, and technical workflows such as unstructured data archiving. Quantum claimed more than 100 new scale-out customers and a near-70% win-rate in the quarter in scale-out tiered storage.

Total data protection revenue, including tape, increased 3% to $83.1 million despite a small drop in tape automation.

Overall, Quantum’s revenue of $133.4 million for the quarter increased $5.4 million over last year, and its $5 million profit follows a slight loss a year ago.

Gacek forecasted revenue of $120 million to $125 million this quarter, which is Quantum’s fiscal fourth quarter. “We are teed up for a good one next quarter, but I am not using superlatives like great and fantastic yet, which I think we have potential for,” he said.

Quantum video surveillance, archiving deals include tape

Part of Gacek’s reason for optimism is new uses for tape in cloud archiving.

“We believe there is a shift in tape usage to the archive scale-out, cloud-like architectures,” Gacek said. “And I think you are going to see tape media usage go up quite dramatically as an archive use case.”

More legalized marijuana might help as well.


January 26, 2017  12:44 PM

Commvault products rollout promised throughout 2017

Sonia Lelii

Following a quarter of solid revenue growth to end 2016, Commvault Systems Inc. plans a string of product enhancements throughout 2017. The additions are designed to improve Commvault’s performance in the cloud, and with software-defined storage and business analytics.

Commvault Wednesday reported $167.8 million in revenue last quarter, a 7% increase from last year. Software revenue of $77.3 million increased 8% year over year, while service revenues of $88.5 million increased 5%. Commvault broke even for the quarter following two straight quarters of losses.

During the earnings call Wednesday, CEO Bob Hammer laid out plans for a Commvault products rollout that will culminate in the Commvault GO 2017 user conference in November.

Hammer said the company plans to add capabilities for business analytics, search and business process automation as part of its strategy to become a full-scale data management player for on-premises and cloud deployments.

“Next month, we will further enhance our offerings with new solutions with industry-leading Web-based UIs and enhanced automation to make it easy for customers to extend data services across the enterprise Commvault solutions,” Hammer said of the Commvault products roadmap. “[We will deliver] some of the key enhancements tied to the journey to the cloud and converged data management.”

The enhancements include new data and application migration capabilities for Oracle applications and the Oracle cloud, big data, fast data and SAP HANA. Commvault already supports Hadoop, Greenplum and IBM’s General Parallel File System.

Products for the AWS cloud

Commvault will also add tools for migrating and cloning data resources to the cloud. These include automated orchestration of compute and storage services for disaster recovery, quality assurance, and development and testing, as well as optimized cloud protection and recovery offerings inside and across clouds to secure data against ransomware risks.

Earlier this week, Commvault added optimized cloud reference architectures for Amazon Web Services (AWS) that will make it easier for customers to implement comprehensive data protection and management in the AWS cloud.

Commvault customers will have the ability to direct data storage to specific AWS services — such as Amazon Simple Storage Service (Amazon S3), Amazon S3 Standard-Infrequent Access and Amazon Glacier for cold storage.
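Commvault manages this placement through its own policies, but as a rough illustration of the underlying AWS mechanics, the hedged boto3 sketch below writes a backup object directly to S3 Standard-Infrequent Access and adds a lifecycle rule that transitions aged objects to Glacier. The bucket name, key prefix and transition age are placeholders, not Commvault settings.

# Rough illustration of the AWS side of such tiering, not Commvault code.

import boto3

s3 = boto3.client("s3")
BUCKET = "example-backup-bucket"   # placeholder bucket name

# Land a backup object directly in the Standard-Infrequent Access storage class.
s3.put_object(Bucket=BUCKET, Key="backups/db-2017-02-01.dump",
              Body=b"...backup payload...", StorageClass="STANDARD_IA")

# Age colder copies out to Amazon Glacier automatically with a lifecycle rule.
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "cold-backups-to-glacier",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)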

Hammer said the amount of data stored using the Commvault software within public environments increased by 250% during 2016.

“When you look at our internal numbers, in both cases, we’ve had strong pull from both AWS and Microsoft Azure,” Hammer said. “The pull from AWS has been stronger, so there’s a higher percentage of customers’ data in AWS, but I will also say that we are gaining a lot of momentum and traction with Microsoft and Azure.”

Hammer said Commvault continues to make progress on its software-defined data service offerings that are in early release.

“More and more of our customers are replacing or planning to replace their current IT infrastructure, with low-cost, flexible, scalable infrastructures, similar to those found in the public cloud,” he said.

“Our teams have been hard at work to embed those cloud-like capabilities directly into the Commvault data platform, so we can ensure the delivery of a new class of active, copy management and direct data usage services across an infrastructure built with low-cost, scale-out hardware,” Hammer said.

Other upgrades to Commvault products due in mid-2017 include new and enhanced enterprise search, file sync-and-share collaboration, cloud-based email and endpoint protection.

Growth dependent on new products

Commvault has been working to dig itself out of a sales slump that began in 2014. Hammer said the company still faces some critical challenges, and continued growth depends on its ability to win more large deals. A lot of its success will turn on releases of new Commvault products.

“Our ability to achieve our growth objectives is dependent on a steady flow of $500,000 and $1-million-plus deals,” he said. “These deals have quarterly revenue and earnings risk due to their complexity and timing. Even with large funnels, large deal closure rates may remain lumpy. In order to achieve our earnings objectives, we need to prudently control expenses in the near-term without jeopardizing our ability to achieve our software growth objectives for our critical technology innovation objectives.”

Commvault added 600 new customers during the quarter, bringing its total customer base to 240,000. Revenue from enterprise deals, defined as sales of more than $100,000 in software, represented 57% of total software revenue, and the number of enterprise deals increased 22% year over year.

