Storage Soup


March 1, 2017  10:22 AM

IBM happy to stay out of hyper-converged infrastructure market

Dave Raffo
IBM Storage, VersaStack

NetApp’s recent claim that it will launch a hyper-converged infrastructure (HCI) platform in the coming months leaves IBM as the major storage vendor most conspicuously absent from HCI.

And IBM is likely to stay on the hyper-converged infrastructure market sidelines for the foreseeable future.

In a recent interview with TechTarget editors, IBM storage general manager Ed Walsh said IBM doesn’t need hyper-convergence because it has a converged infrastructure (CI) platform that accomplishes the same things. IBM VersaStack combines IBM storage with Cisco switching and UCS servers, a bundle similar to those other storage vendors sell with Cisco servers and networking.

The difference is that CI bundles such as VersaStack consist of traditional products sold as one package, while HCI puts storage, compute and virtualization in one box.

“They’re solving the same customer problem,” Walsh said of products in the converged and hyper-converged infrastructure market. “They both drive down Opex, give you a better user experience and free up your people to do other things. You can be a purist and say what we do is converged, not hyper-converged, but it’s about the job you’re trying to do. The two worlds are blending together.

“Both give you flexibility in how you deploy time, storage and CPUs,” he continued. “If you have a VMware stack and that’s called hyper-converged to you, we do that. If you want to make it easy to increase CPUs separate from storage, we do that. If you’re saying that’s converged and not hyper-converged, OK, but that’s 80% of the market.”

Of course, the hyper-converged infrastructure market is growing fast and certainly has the attention of server vendors Dell EMC, Hewlett Packard Enterprise, Cisco and Lenovo. They are all moving fast to compete with and/or partner with Nutanix, which created the HCI market.

IBM and NetApp don’t sell the x86 servers used with most hyper-converged systems, which may have delayed their entry into hyper-convergence. Walsh’s take on HCI is similar to what NetApp CEO George Kurian said a few months ago, when he claimed NetApp’s FlexPod CI partnership with Cisco addressed the same needs as HCI. But last month Kurian said NetApp would enter the hyper-converged infrastructure market with a product based on its SolidFire all-flash platform and Data Fabric technology for moving on-premises data to the cloud.

NetApp has yet to say where it will get servers from, and we still can’t be sure whether it will deliver true hyper-convergence or just repackage existing flash and cloud technology.

IBM’s Walsh does not sound like he is reconsidering. Not only does he see CI doing all that HCI does, but he points out CI has scaling advantages over HCI because the CPU and storage are independent products in converged infrastructure.

“We see after 12 months or so our clients want more flexibility in how they deploy storage to servers,” he said. “People are looking to refresh storage and CPU at different intervals.”

February 24, 2017  10:09 AM

HPE storage sales slipped sharply in late 2016

Dave Raffo
HPE

Storage is among the casualties of Hewlett Packard Enterprise’s struggles following the break-up of Hewlett-Packard.

The vendor reported HPE storage revenue declined 12% year-over-year to $730 million in the fourth quarter of 2016. Storage sales fell across the board, with only 3PAR all-flash systems showing an increase — but even that increase was less than expected.

“We’re not happy with the storage performance this quarter,” HPE CEO Meg Whitman said. “I’m quite happy with the all-flash situation, but there are other things that we’re going to buck up.”

Whitman said “other parts of the business were weaker than they probably should have been.” She blamed these HPE storage weaknesses on execution, softness in the overall storage market and a shortage in NAND flash supply.

HPE’s all-flash sales were less than they could have been. The vendor reported 3PAR all-flash array revenue increased 28% year-over-year, but all-flash revenue has risen in triple digits at HPE and other vendors in recent quarters. NetApp reported a 185% spike in all-flash sales last quarter, and HPE’s fourth-quarter all-flash revenue rose close to 100% year-over-year.

Whitman said HPE’s flash sales would have been “considerably higher” if not for the NAND shortage. But other large storage vendors say the NAND shortage has not had a great negative effect on sales.

HPE storage not the vendor’s only problem area

Storage was far from the only sore spot for HPE as it struggles to find its footing after the HP split. Server sales fell 12%, networking dropped 33%, enterprise services declined 11% and software slipped 8%. HPE’s overall revenue of $11.4 billion dropped 10% from last year and missed Wall Street analysts’ consensus forecast by $700 million.

Whitman tried to paint a rosy view of the future of HPE storage. She said she expected the NAND shortage to lift soon, which will help flash sales. She also pointed to last week’s rollout of streamlined licensing and the addition of compression to 3PAR to fill what had been “actually a competitive hole in our product.”

HPE closed its $650 million acquisition of hyper-converged startup SimpliVity last week. Whitman put the hyper-converged market at approximately $2.4 billion and growing approximately 25% annually. She said SimpliVity will deliver to HPE “a whole new group of storage sellers where we can have broader market coverage” and pledged to become a hyper-converged leader.

“We see this as a significant opportunity,” she said.

HPE will have to take advantage of any opportunity it finds because it has a lot more challenges than opportunities these days.


February 20, 2017  10:32 AM

Cloudian, Panzura expand cloud data archiving options

Sonia Lelii

Cloudian and Panzura have come up with new cloud data archiving products to help organizations move on-premises data to the cloud.

Cloudian’s new HyperStore 4000 is a 7U scale-out object storage enclosure that stores up to 700 TB and includes two separate compute nodes per chassis. It can be configured as a three-way cluster for data availability and the system has built-in, hybrid cloud tiering. Like Cloudian’s 1U HyperStore 1500 appliance, the 4000 can store data on premises or in the Amazon Web Services (AWS), Microsoft Azure and Google public clouds. It also can tier data to the Cloudian public cloud.
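
Because Cloudian’s HyperStore exposes an S3-compatible API, applications written for AWS can point at the appliance instead of the public cloud. Here is a minimal boto3 sketch; the endpoint URL, credentials and bucket name are placeholders for illustration, not Cloudian defaults:

```python
# Minimal sketch: writing an object to an S3-compatible HyperStore
# endpoint with boto3. Endpoint, credentials and bucket are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://hyperstore.example.com",  # on-premises appliance
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="surveillance-archive")
s3.put_object(
    Bucket="surveillance-archive",
    Key="cam01/2017-02-20.mp4",
    Body=open("2017-02-20.mp4", "rb"),
)
```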

Jon Toor, Cloudian’s chief marketing officer, said the appliance is aimed largely at the entertainment, video surveillance and genome sequencing industries, or as a replacement for tape archives.

Panzura launched a new cloud data archiving appliance as part of its Freedom Archive platform, and is packaging that with its Freedom NAS and Freedom Collaboration file sync products. The vendor said its Panzura 5500 Series Flash Cache can support up to 1,200 active users. The Freedom Archive virtual appliance that launched in late 2016 runs on VMware vSphere and supports up to 500 users.

Panzura’s products all integrate on-premises storage with the public cloud for cloud data archiving. Freedom NAS stores active data in local cache while moving colder data to the cloud. Freedom Collaboration stores data in a central cloud repository and makes all files readable and writable from every location.

“It takes the archiving piece and adds additional functionality as the company grows into the cloud,” said Barry Phillips, Panzura’s chief marketing officer.

Cloud data archiving ‘requires trust’

Scott Sinclair, senior analyst at Enterprise Strategy Group, said Panzura’s encryption makes it a good fit for protecting sensitive information that organizations are reluctant to move off-premises for cloud data archiving.

“The cloud offers a number of benefits, but some businesses are reluctant to leverage public cloud resources for sensitive information,” Sinclair said. “With FIPS 140-2 certification and AES-256 bit encryption to secure [data] at rest and in-flight, Panzura is working to alleviate any potential security concerns.

“There are other hybrid cloud solutions that offer encryption. The success in storing sensitive data in the cloud requires more than the right technology. It also requires trust,” Sinclair said. “Some businesses have already benefited by moving digital archives to the cloud, while others remain reluctant. Panzura has the right technology to find success in this space. The question is whether they can convince those businesses still questioning the cloud to make the move forward.”
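
To illustrate the kind of at-rest protection Sinclair describes, here is a generic AES-256-GCM sketch using Python’s third-party cryptography package. It is only a sketch of the concept, not Panzura’s FIPS 140-2 validated implementation:

```python
# Illustrative only: AES-256-GCM encryption of a file before it is
# archived to the cloud. Key handling is simplified; use a KMS in practice.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # AES-256 key
nonce = os.urandom(12)                     # must be unique per encryption

with open("archive.tar", "rb") as f:
    plaintext = f.read()

ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

with open("archive.tar.enc", "wb") as f:
    f.write(nonce + ciphertext)            # nonce travels with the ciphertext
```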


February 17, 2017  10:13 AM

NetApp hyper-converged entry: Better late than early?

Dave Raffo
NetApp

NetApp is entering the hyper-convergence arena with the latecomer’s cry: We may be last into the market, but we’ll be the best.

Emboldened by the recent strong revenue growth of its late-to-market All Flash FAS (AFF) arrays, CEO George Kurian Wednesday night outlined plans to play rapid catch-up in hyper-convergence.

Kurian said a NetApp hyper-converged product will launch in the May to July timeframe. He gave few details, but said the system will be built on SolidFire all-flash storage and NetApp Data Fabric technology that links on-premises and cloud storage.

NetApp has been noticeably absent from the hyper-converged infrastructure (HCI) market after its 2014 plans for an EVO:RAIL system in partnership with VMware never got off the ground.

Three months ago, Kurian indicated there was no NetApp hyper-converged strategy. He said NetApp addressed the advantages of hyper-convergence through its FlexPod converged infrastructure partnership with Cisco. FlexPods bundle storage, compute and virtualization as separate pieces rather than in the same chassis. Kurian has also pointed to NetApp’s cloud-centric SolidFire arrays as an answer for customers who want hyper-convergence.

How will NetApp handle hyper-convergence?

During NetApp’s quarterly earnings call Wednesday night, Kurian talked less about what a NetApp hyper-converged infrastructure might look like than what is missing from current HCI appliances.

“We will do what has not yet been done by the immature first generation of hyper-converged solutions — bring hyper-converged infrastructure to the enterprise by allowing customers to run multiple workloads without compromising performance, scale or efficiency,” he said.

What will NetApp do differently and better? Kurian said the vendor will have the first fully cloud-integrated hyper-converged system that moves data across tiers on-premises and in the public and private clouds. That is something executives at HCI market-leader Nutanix say they are working on now.

Kurian characterized current hyper-converged products as “first-generation,” lacking enterprise data management and consistent performance. He said that relegates them to departmental use and the low-end of the market, a statement that almost all the current hyper-converged vendors would dispute.

Along with Nutanix, server vendors make up most of the HCI market. Dell EMC, Cisco, Hewlett Packard Enterprise (with its recent SimpliVity acquisition) and Lenovo all sell HCI appliances.

Kurian said he doesn’t mind playing catch-up.

“There have been lots of companies that have gone after the early-adopter segment with a subset of the features that enterprise customers really want and have failed in the long run,” he said. “And so, first to market doesn’t necessarily mean the big winner, right?”

Kurian points to NetApp’s all-flash arrays to back up his theory. NetApp lagged behind other large vendors as well as startups in offering mainstream all-flash arrays. It never got its home-grown FlashRay product out the door as a GA product, and only found success on its second attempt to build FAS into an all-flash option.

Kurian said NetApp’s all-flash arrays grew 160% year-over-year to approximately $350 million last quarter. That includes AFF, the EF Series for high-performance computing and SolidFire. But that still leaves NetApp well behind market leader EMC, which claims its all-flash XtremIO array generated more than $1 billion in bookings in 2016.

NetApp won’t be first with all-flash hyper-converged either. Most vendors in the market have all-flash options, and Dell EMC claims 60% of its VxRail customers deploy all-flash appliances. Cisco added an all-flash version of its HyperFlex HCI appliance this week.

In a blog posted soon after the earnings call, John Rollason, NetApp’s director of product marketing for next generation data center, echoed Kurian’s comments about current HCI systems. Rollason criticized hyper-converged systems for having fixed ratios of compute-to-storage resources, lacking performance guarantees for mixed workloads, lacking mature data services and being limited to eight-node clusters that result in silos. While not all of those criticisms apply to every hyper-converged system, Rollason’s and Kurian’s comments hint at what NetApp will try to do: deliver hyper-converged systems that scale higher, with predictable performance, aimed at the enterprise.

Who will NetApp partner with?

We don’t yet know what NetApp will do for virtualization and compute. You can expect a NetApp hyper-converged system to incorporate VMware; NetApp has had a good working relationship with VMware, despite VMware being owned by NetApp storage rival EMC and now Dell.

NetApp will also need a server partner. FlexPod partner Cisco is a possibility. Cisco has its own HyperFlex HCI appliance, but it allows several HCI software stacks, including VMware’s vSAN, to run on its UCS servers. NetApp could also go the OEM route EMC took before getting bought by Dell. EMC’s first hyper-converged systems used servers from Quanta before switching to Dell PowerEdge in September.

NetApp promises more details soon. Whatever it plans, it will have to be good to make up for being late.


February 16, 2017  9:01 AM

Dell EMC VxRail hitches ride on Enterprise Hybrid Cloud

Dave Raffo
Dell EMC, Hybrid cloud

Dell EMC’s VxRail turned one today, and the vendor marked the anniversary by adding the hyper-converged platform to its Enterprise Hybrid Cloud package.

Dell EMC claims over 1,000 customers for VxRail through the end of 2016, with more than 8,000 nodes, 100,000 CPU cores and 65 PB of storage capacity shipped in the system. VxRail is EMC’s first successful hyper-converged appliance, following a short, failed attempt with a Vspex Blue product launched in 2015.

Like Vspex Blue, VxRail is based on Dell-owned VMware’s vSAN hyper-converged software. It also runs on Dell PowerEdge servers, although VxRail originally incorporated Quanta servers until the Dell-EMC acquisition closed last September. VxRail launched just after VMware upgraded vSAN to version 6.2, which added data reduction and other capabilities that improved its performance with flash storage. Dell EMC VxRail senior vice president Gil Shneorson said 60% of VxRail sales have been on all-flash appliances.

“We’re definitely seeing the combination of hyper-converged and all-flash taking off in a meaningful way,” he said.

Now VxRail is an option for Dell EMC Enterprise Hybrid Cloud (EHC) customers. EHC is a set of applications and services running on Dell EMC hardware that provide automation, orchestration and self-service features. The software includes VMware vRealize cloud management, ViPR Controller and PowerPath/VE storage management, and EMC Storage Analytics.

Other EHC storage options include EMC VMAX, XtremIO, Unity, ScaleIO and Isilon arrays sold as VxBlock reference architectures with Dell PowerEdge servers. EHC is also available with VxRack Flex hyper-converged systems that use Dell EMC ScaleIO software instead of VxRail appliances. Data protection options include Avamar, RecoverPoint and Vplex software and Data Domain backup hardware.

Along with the Dell EMC VxRail option, the vendor is adding subscription support and encryption as a service to EHC. Dell EMC does not break out EHC financials, but Dell EMC senior vice president of hybrid cloud platforms Peter Cutts said its revenue was in the “hundreds of millions of dollars” last year.

Adding the Dell EMC VxRail option lets EHC customers start with as few as 200 virtual machines.

“This gives customers the ability to start smaller, configure EHC as an appliance and go forward in that direction,” Cutts said.

For now, organizations that want to use VxRail with EHC need to buy a new appliance. Cutts said the vendor is working on allowing customers to convert existing VxRail appliances to EHC, but that is not yet an option.

Using VxRail as part of EHC makes sense as vendors begin to position hyper-converged systems as enterprise cloud building blocks. Hyper-converged market leader Nutanix now positions its appliances that way, emphasizing its software stack’s ability to move data from any application, hypervisor or cloud to any other application, hypervisor or cloud. Nutanix is VxRail’s chief competitor.

“We’ve seen requests for more data center-type features and functionality,” Shneorson said. “VxRail is being put into data centers in much larger clusters than we originally anticipated. We’re seeing a shift from an initial focus on remote offices and test/dev to mission critical data center use.”

But unlike Nutanix, Dell EMC still also sells traditional storage. So Shneorson admits hyper-converged is not a universal answer because not every organization wants to scale their storage and compute in lockstep.

“It’s a matter of economics,” he said. “The advantage of hyper-converged is you can start small and grow in small increments. But some customers’ environments are already large and predictable in growth. By using shared storage you can get any ratio of CPU to disk. With hyper-converged, there is always a set ratio of CPU to disk. If you want massive amounts of storage with a small amount of CPUs for example, you would be better served by a traditional architecture.”
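
Shneorson’s trade-off is easy to see with back-of-the-envelope numbers. In the sketch below, the node specs are invented, but they show how a capacity-heavy workload forces the purchase of unneeded CPU under a fixed ratio:

```python
# Back-of-the-envelope illustration of the fixed CPU-to-disk ratio in
# hyper-converged nodes. All figures are hypothetical.
import math

node_cores, node_tb = 20, 10          # one hyper-converged node
needed_cores, needed_tb = 40, 500     # a storage-heavy workload

nodes = max(math.ceil(needed_cores / node_cores),
            math.ceil(needed_tb / node_tb))

print(f"nodes required: {nodes}")                 # 50, driven by capacity
print(f"cores purchased: {nodes * node_cores}")   # 1,000 cores for a 40-core need
```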


February 8, 2017  11:16 AM

Microsoft Office 365 backup included in Acronis Backup 12 update

Paul Crocetti

Acronis has added cloud-to-cloud backup for Microsoft Office 365 in the latest version of Acronis Backup.

Acronis Backup 12 protects Microsoft Office 365 and 15 other platforms, all under one management console.

The vendor’s Microsoft Office 365 backup features include:

  • Automatically back up Microsoft Office 365 emails, contacts, calendars and tasks;
  • Store backup data locally or in the cloud for long-term access or archiving;
  • Preview, browse and search backup content;
  • Recover individual and shared mailboxes to the original or an alternative location; and
  • Recover and deliver individual items by email without restoring the entire backup.

Acronis’ existing portfolio can back up local systems running other Office 365 applications such as Word, Excel and PowerPoint.

Acronis Backup 12 features Microsoft Office 365 backup. (Image courtesy of Acronis)

Microsoft Office 365 backup subscription licenses are available online and from local distributors. The monthly price per mailbox ranges between $1.67 and $3.33, depending on geography, volume and the subscription term, according to Acronis.

Most leading backup software vendors now support Office 365 backups as customers begin to realize that putting data in the cloud does not mean it’s protected. Microsoft Office 365 backup features have been available in the Acronis Backup Cloud product for service providers since July 2016. Office 365 is the first cloud application protected by Acronis backup.

With ransomware an increasingly prevalent threat, it’s critical to keep a backup copy in another location, said Frank Jablonski, Acronis vice president of product marketing. People in the industry anticipate ransomware becoming smart enough to hack into backups.

“We’ve hardened our agent [to make] it more difficult, if not impossible, to get into backup data,” Jablonski said.

In the next product update, Acronis Active Protection against ransomware will protect user devices and data by blocking the attacks and instantly restoring affected data, according to the vendor.

In addition, backup for OneDrive and SharePoint will come later in the year.

The Acronis Backup 12 console allows administrators and service providers to manage data from physical systems, virtual machines (VMs) and cloud workloads in the Acronis cloud, on premises, and in Microsoft Azure and the Amazon Elastic Compute Cloud (EC2). The software can back up Azure VMs and EC2 instances.

The Acronis Backup 12 update also introduces support for VMware vSphere 6.5, VMware Changed Block Tracking, Acronis Instant Recovery, Acronis vmFlashBack and Replication with WAN optimization.


February 7, 2017  4:50 PM

Nakivo Backup & Replication for Hyper-V gets ready for prime time

Sonia Lelii

Nakivo Inc.’s latest backup and replication product offers support for Microsoft Hyper-V 2016, VMware vSphere 6.5 and Windows Server 2016 backup, along with a way to delete obsolete or orphaned backup data in bulk with a single click.

Sergei Serdyuk, Nakivo’s director of product management, said Nakivo Backup & Replication v7, with added support for Hyper-V, is currently in beta and will become generally available later this month.

The product performs image-based backups and uses virtual machine (VM) snapshots to create backups that can be stored in a local data repository or in the Amazon Web Services (AWS) cloud.

Nakivo Backup & Replication v7 beta offers native backup for Hyper-V, including Hyper-V Server 2012 and 2012 R2. The agentless software is application-aware via Microsoft Volume Shadow Copy Service technology and supports Microsoft Exchange, Active Directory, SQL and Microsoft SharePoint. It recovers operating system files and application objects.

“We have a bulk feature and we use traffic redirection technology to speed up the data transfers,” Serdyuk said. “You can get an average of two times improvement if you use this feature in terms of the backup speeds. We will have a single product to back up Hyper-V to AWS environments.”

The Nakivo Backup & Replication v7 beta software also supports VMware vSphere 6.5, which creates VM snapshots of the backed up data and identifies the changed data by using VMware change block tracking technology. It sends data to the repository and removes the temporary snapshots.

Active Directory integration has been added, allowing multiple users to log into the product. IT administrators can map Active Directory groups to Nakivo software user roles so users can log in with their domain credentials.

“This new feature was a request by customers, mostly by enterprise customers who use Active Directory,” Serdyuk said. “It improves the company’s security, management and scalability.”

An activities tab offers a centralized view of software activity, including jobs that are running, files being stored and object recovery sessions. Protection for containers is also supported.

“In the VM infrastructure, there are a number of containers,” Serdyuk said. “Customers can protect a particular container (such as a host or a cluster) and all current and future VMs in this container will be added to the job. In addition, customers can exclude VMs they don’t want to protect (test machines, for example).”

Nakivo Backup & Replication has also added the ability to skip swap files, a temporary location on a hard drive that stores data not used by random access memory.

“The new version automatically skips those files,” Serdyuk said. “They can become quite large.”

Version 7 of the Nakivo software does forever incremental backups, which means that after the initial full backup only the subsequently changed data is sent to the repository for more efficient backups. Backup & Replication v7 beta supports Microsoft Exchange, Active Directory, Microsoft SQL and Oracle databases running inside VMs.


February 7, 2017  11:09 AM

Pivot3 vSTAC sales soaring, CEO says

Dave Raffo

Hyper-converged vendor Pivot3 said it more than tripled its revenue in the fourth quarter of 2016 from the previous year, and grew total revenue 84% in 2016 from 2015.

Pivot3’s growth came in part from two new products launched last year. It added the Pivot3 Edge Office for SMBs and the Pivot3 vSTAC SLX incorporating PCI Express flash technology acquired from NexGen Storage in January 2016. Pivot3 is also incrementally integrating quality of service technology acquired from NexGen into the Pivot3 vSTAC OS, and CEO Ron Nash said new products combining QoS and flash will ship in 2017.

Quality of service may hold the key to the vendor’s future success. Pivot3’s Dynamic QoS includes a policy engine that prioritizes workloads and manages data placement and protection. Nash said coming Pivot3 vSTAC products will extend QoS to the cloud and legacy storage. The vendor will add another NexGen storage flash system in 2017.

“Instead of just having our policy engine work on hyper-converged infrastructure, we’re extending it out to the cloud and backward to legacy systems,” Nash said.

He said the engine will be able to look at characteristics, such as service-level agreements and required response times, and find the best storage tier for application data. For instance, if you’re looking for cheap storage and don’t need fast response times, the data can go to a cold storage public cloud service.
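
As a rough illustration of that idea, the sketch below picks the cheapest tier that still meets a required response time. The tiers, latencies and costs are invented for the example, not Pivot3’s actual policy engine:

```python
# Hedged sketch of SLA-driven data placement, in the spirit of the policy
# engine Nash describes. All tiers and figures are hypothetical.

TIERS = [
    # (name, typical latency in ms, relative cost per GB)
    ("nvme-flash",    0.2, 10.0),
    ("hybrid-array",  5.0,  3.0),
    ("cloud-cold",  200.0,  0.1),
]

def place(required_latency_ms):
    """Return the cheapest tier that still meets the latency SLA."""
    eligible = [t for t in TIERS if t[1] <= required_latency_ms]
    return min(eligible, key=lambda t: t[2])[0] if eligible else TIERS[0][0]

print(place(1.0))     # nvme-flash: low-latency application
print(place(500.0))   # cloud-cold: cheap storage, relaxed response time
```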

Nash said the Pivot3 vSTAC SLX system, integrating NexGen technology, launched in 2016 and appealed to enterprises because of the flash performance. “It kind of surprised us where it was sold,” he said. “We thought it might [sell] in the midrange, but we found high-end people are using it, too. If you have an application that needs low latency, it gives you the low latency an NVMe-PCI-type storage device gives.”

‘Little guy’ moving up to compete with ‘the big boys’

Nash said Pivot3’s long-term goal is to follow Nutanix’s 2016 initial public offering with an IPO of its own. He said Pivot3 takes a different approach than Nutanix, though. He is trying to avoid the heavy losses Nutanix has suffered, even as it racks up impressive revenue growth.

“We’re much more disciplined about financial performance,” he said. “We want to grow fast, but if the difference between 80% and 150% growth is [that] I have a massive loss at 150%, I’ll stick to 80%. We’re still losing money, but not hemorrhaging.”

He said Pivot3’s $55 million funding round in March 2016 should get the company to profitability, although it may raise another round ahead of an IPO.

For now, Pivot3 is the little guy in the land of hyper-convergence giants. Its competitors are newly public Nutanix, Dell EMC (including VMware), Hewlett Packard Enterprise (with its new SimpliVity acquisition) and Cisco.

“A year ago, I was competing with startup companies; now I’m competing with big companies. I count Nutanix as a big company now,” he said. “The big boys are moving in. But I think we can compete against them.”


February 7, 2017  9:52 AM

Stratoscale buys Tesora, adds AWS database services

Garry Kranz

Stratoscale acquired database-as-a-service provider Tesora Inc. in a move aimed at strengthening its AWS database services.

The hyper-converged software startup added Tesora’s open database-as-a-service platform Monday. The same day, Stratoscale launched a homegrown relational database service in its Amazon-compatible cloud storage stack.

The Tesora technology will be phased in with future rollouts of Stratoscale Symphony hyper-converged software. Symphony supports block and object storage capabilities by turning x86 servers into hyper-converged compute clusters.

Symphony builds an Amazon-like private cloud behind a firewall to help enterprises reduce VMware licensing costs. Customers can connect their legacy storage to a Symphony cluster and have Stratoscale orchestrate the compute, networking, storage and virtualization resources.

Tesora Database as a Service (DBaaS) is an enterprise-hardened version of OpenStack Trove, the native database service for OpenStack-based clouds. The Tesora acquisition hastens the delivery of relational AWS database services, a feature already on Stratoscale’s roadmap.

“This is a big expansion for us. It allows us to engage with customers who have been waiting for this type of capacity,” Stratoscale CEO Ariel Maislos said. “Going into production with database as a service is very complex, so this will save us about a year of development time.”

Tesora DBaaS enables self-service management and provisioning for Cassandra, Couchbase, DataStax Enterprise, DB2 Express, MariaDB, MongoDB, MySQL, Percona, Redis and Oracle. Stratoscale said it will use the Tesora platform to augment its AWS database services, which include its AWS Relational Database Service and AWS NoSQL database offerings.

Maislos said enterprises want Stratoscale’s help with large-scale deployments that mirror AWS database services such as Amazon RDS.

“People want the ability to run their applications either in Amazon or inside their data center,” he said. “If you want to do a hybrid cloud, we give you an on-premises environment that is compatible with the private cloud. That’s the Holy Grail that customers love.”

Since its launch in 2015, Stratoscale has expanded its Amazon support to include Simple Storage Service, DynamoDB, Elastic Block Store, ElastiCache in-memory cache, Redshift and Virtual Private Cloud services. Symphony 3.4 is currently shipping to customers with support for Kubernetes as a service and a one-click Application Catalog that deploys more than 140 prepackaged applications.

Stratoscale did not disclose terms of the deal. Tesora’s Cambridge, Mass., office will be added to Stratoscale locations in Israel, New York City and Sunnyvale, Calif. Maislos said approximately 20 Tesora employees are now part of Stratoscale.


February 6, 2017  10:17 AM

‘Alexa, provision my Tintri storage’

Carol Sliwa

Want to manage your Tintri storage the same way you turn on lights, set an alarm, or choose music with an Amazon Echo or Dot device?

Tintri Inc. launched a proof of concept that lets customers ask Amazon’s Alexa voice service to initiate tasks such as provisioning virtual machines (VMs), taking snapshots and applying quality of service.

Tintri storage engineers used Amazon’s software development kit to map Tintri’s application programming interfaces (APIs) to the Alexa service, enabling Echo and Dot devices to recognize and execute storage commands.

Chuck Dubuque, vice president of product marketing at Tintri, said Tintri will use feedback on the proof of concept to gauge the potential to turn the “cool demo” into a product.

A video demonstration shows a Tintri employee instructing Amazon Alexa to ask the system to provision a VM. Alexa prompts the user with questions such as “What type of VM would you like to create?” and “How many VMs would you like to create?”

Dubuque admitted that using Amazon Echo beyond home use cases might be “a little further out” in the future. But the proof of concept gives Tintri experience using Amazon’s voice recognition and natural language capabilities and making its self-service APIs more responsive to human commands, he said.

“It’s relatively easy to write an admin interface for the storage administrator or the VM administrator who already thinks about things at the low level around VMs and vdisks and other things,” Dubuque said. “But for people who aren’t experts on the infrastructure and just want to say, ‘Hey Alexa, create a test environment,’ what does that mean? Underlying all of the assumptions, a test environment means this set of 100 virtual machines is created from this template, put into this network with these characteristics. That’s more complicated.”
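
For readers curious what such a mapping looks like, the sketch below shows the general shape of an Alexa custom-skill handler that turns the slots of a provisioning intent into a REST call. The intent, slot names and endpoint are hypothetical; this is not Tintri’s actual integration:

```python
# Rough shape of an Alexa custom-skill Lambda handler that maps a voice
# intent to a storage REST call. Slot names and the endpoint are invented.
import json
import urllib.request

def lambda_handler(event, context):
    intent = event["request"]["intent"]
    count = int(intent["slots"]["Count"]["value"])    # "How many VMs?"
    vm_type = intent["slots"]["VmType"]["value"]      # "What type of VM?"

    req = urllib.request.Request(
        "https://vmstore.example.com/api/vms",        # placeholder endpoint
        data=json.dumps({"template": vm_type, "count": count}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

    # Standard Alexa response envelope: speak a confirmation back.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"Provisioning {count} {vm_type} virtual machines.",
            }
        },
    }
```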

Chat option lets developers manage Tintri storage

At VMworld last August, Tintri demonstrated a text-based chat option to enable developers to collaborate with each other and manage Tintri storage. Dubuque said a customer in Japan used Tintri’s REST APIs to put together a simple robot to respond to system commands from within the Slack chat environment.

Developers in the virtual chat room could call out to a Tintribot — which appears as another “person” in the chat window — to tell the system to execute a command, such as firing up VMs to test new software.

“The Tintribot will acknowledge the command, maybe ask a few questions, and then once all of the VMs are up and running, reply back into the same chat window: ‘Hey, the 100 VMs are now ready. You can run your test,'” Dubuque said.

“It’s a way to enable self-service. In this case, it’s aligned to the developers who don’t really care about the details. They want to be able to do things on their own when they need to without having to hand it off to a third party” to launch VMs, he said.

Because the Slack-based ChatOps interface requires a username and password for login, the system can control what any given user is permitted to view, and it can create a time-stamped chat audit trail in case administrators need to troubleshoot a problem.

“You get to see all the humans who were involved in the decision, as well as what the environment was telling you – what’s successful and what wasn’t,” Dubuque said.
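
The general shape of such a bot is simple: parse a command out of a chat message, call the storage API, and post the reply back into the same channel so the audit trail lands in the conversation. In the sketch below, the command syntax, storage endpoint and tokens are all placeholders, not Tintri’s actual Tintribot:

```python
# Hedged sketch of the "Tintribot" pattern: parse a chat command, call a
# storage REST API, reply in-channel via Slack's chat.postMessage Web API.
import requests

SLACK_TOKEN = "xoxb-placeholder"                  # bot token placeholder
STORAGE_API = "https://vmstore.example.com/api"   # placeholder endpoint

def handle_message(channel, user, text):
    # e.g. "clone 100 vms from template test-env"
    if text.startswith("clone"):
        parts = text.split()
        count, template = int(parts[1]), parts[-1]
        requests.post(f"{STORAGE_API}/clones",
                      json={"template": template, "count": count})
        reply = f"<@{user}> cloning {count} VMs from {template}..."
    else:
        reply = f"<@{user}> sorry, I only understand 'clone' for now."

    # Posting into the same channel keeps a time-stamped record of who
    # asked for what, as the article notes.
    requests.post("https://slack.com/api/chat.postMessage",
                  headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
                  json={"channel": channel, "text": reply})
```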

Tintri is still gathering customer feedback and has not determined a general availability date for the Slack-based ChatOps that performs operations from within a chat.

“It’s definitely something that has sparked a lot of interest,” Dubuque said.

Dubuque said the Tintri storage architecture is conducive to plug-in integration with systems such as Slack and Amazon Alexa. He said the company’s key differentiator is a web services model “where the fundamental unit that we manage is around the virtualized or containerized application.

“Our file system, our I/O scheduler, all of our storage operations are at that same level that virtualization and cloud management systems use to control compute and networking,” Dubuque said. “You can think of us as finishing the trinity of network, compute and storage being all aligned to the same abstraction level, which is a virtual machine, or a container, not around physical constructs.”

Dubuque said Tintri exposes REST APIs and interfaces with PowerShell and Python through a software development kit. He said other storage vendors use REST APIs that focus on storage constructs such as LUNs and volumes and don’t directly map to an individual application. That causes complexity when trying to automate the storage component of an application.

