SwiftStack’s leadership team appreciated being selected a visionary in Gartner’s recent Magic Quadrant for distributed file systems and object storage. The vendor even put out a press release celebrating Gartner’s inclusion of SwiftStack object storage in the report.
But SwiftStack’s chief marketer said the object storage neighborhood is not where the vendor wants to live anymore.
“We’re a software company, not an object company,” SwiftStack VP of marketing Mario Blandini said. “We just happen to have our own object storage system on premises.”
So Blandini admits SwiftStack’s cloud storage software based on the OpenStack Swift object project is indeed object. He just thinks that’s not chic. Or at least SwiftStack customers and potential customers don’t think SwiftStack object storage has the right ring to it.
“People say to us, ‘How can an object storage company be doing something cool? Don’t object storage companies all suck?’” Blandini said. “No one loves object storage because it doesn’t do enough to be transformative. There’s nothing wrong with NAS, why replace it?”
Blandini also wants to distance SwiftStack object storage from OpenStack, even though SwiftStack is the main contributor to the OpenStack Swift project. “SwiftStack is too often known as an OpenStack company,” he said. “In reality, we are a data management platform across multiple clouds.”
The multiple cloud part, more commonly known as multi-cloud storage, is what’s cool now. Ask just about any storage vendor out there, because we’re hearing that phrase a lot more these days.
“We like the concept of multi-cloud,” Blandini said. “Do you want to be locked into one cloud provider, or be able to put your data closer to the user?”
SwiftStack object storage includes a file system gateway on top of its native object storage software. Earlier this year SwiftStack beefed up its Cloud Sync feature for moving data in and out of Amazon Web Services and the Google Cloud Platform as part of the multi-cloud plan.
This week SwiftStack added policy-based auto tiering, the ability to use erasure coding across regions, client capability to access objects in private clouds as if they were on premises, and more granular policies for determining in which nodes and regions data should reside.
Blandini said these features are merely setting up a significant product release coming around AWS re:Invent this month. He’s keeping quiet on details for now, but you can expect it to center around managing data on multiple clouds (just a wild guess).
“We architected our product to be cloud from the beginning,” he said. “I’m not saying this will kill NAS and there will be no more Fibre Channel. There always will be those things. But there’s always room for new types of storage.”
Or new labels for storage, anyway. So say goodbye to SwiftStack object storage and hello to SwiftStack multi-cloud storage.
Cloud NAS vendor Elastifile has struck an OEM deal with Dell EMC, one of the startup’s first investors.
Dell EMC will integrate Elastifile Cloud File System and CloudConnect cloud transfer and object storage tiering software on PowerEdge servers. Dell EMC will sell the appliances as part of the Dell EMC OEM Solutions program, with Elastifile providing software support. The appliances include the Elastifile license and three years of support. The vendors will formally disclose the deal Thursday.
Elastifile came out of stealth in April with its scale-out file system designed for flash hardware that spans on-premises and cloud storage. Until now, it has sold its software standalone. Andy Fenselau, Elastifile vice president of marketing, said the OEM deal makes sense because many of the startup’s early customers are using PowerEdge servers.
“You can put it on any standard server,” Fenselau said of Elastifile’s software. “But we were finding as we were ramping our business that many customers and many of our partners are joint customers and joint partners with Dell. They really wanted the Easy Button. They wanted a pre-integrated, pre-bundled solution for their on-prem deployments that they could buy from their standard OEM, in this case Dell.”
The Dell EMC Elastifile appliances scale from four nodes to 100, with performance and capacity models available. The startup claims the performance model supports from 800,000 to 26 million IOPS, with bandwidth ranging from 3.6 GB per second to 120 GBps. Capacity models range from 100 TB to 3.5 PB.
Fenselau said the street price of a performance-optimized node starts at 13 cents per IOPS, and a capacity-optimized node costs $2 per raw GB.
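Taking those quoted unit prices at face value, rough entry prices can be extrapolated. The totals below are our back-of-the-envelope arithmetic, assuming the street price scales linearly from the quoted rates, which neither vendor states:

```python
# Back-of-the-envelope entry pricing from the quoted street prices:
# $0.13 per IOPS (performance-optimized) and $2 per raw GB (capacity-optimized).
# Assumes linear scaling from the per-unit rates, which is our assumption only.
PRICE_PER_IOPS = 0.13
PRICE_PER_RAW_GB = 2.00

entry_iops = 800_000        # low end of the claimed performance range
entry_raw_gb = 100 * 1000   # 100 TB entry capacity model, in decimal GB

performance_entry = entry_iops * PRICE_PER_IOPS     # about $104,000
capacity_entry = entry_raw_gb * PRICE_PER_RAW_GB    # about $200,000
print(f"Performance entry point: ${performance_entry:,.0f}")
print(f"Capacity entry point:    ${capacity_entry:,.0f}")
```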
The PowerEdge appliances are flash-only.
Elastifile’s Cloud File System handles active data and performance-oriented workloads. CloudConnect provides access to Amazon S3-compliant cloud services, moving inactive data for archiving or analytics into an object tier.
EMC invested in Elastifile’s first funding round in 2014, before the Dell-EMC merger. The companies’ ties go back even further. Elastifile founder and CTO Shahar Frank also founded all-flash array startup XtremIO and was chief architect at scale-out NAS vendor Exanet. EMC acquired XtremIO in 2012 and Dell bought Exanet in 2010.
Storage hardware vendors Cisco, Lenovo and Western Digital also have strategic investments in Elastifile. Those relationships could lead to more appliance partnerships.
“As customers standardize on other servers, we will work to give them what they want,” Fenselau said when asked about the possibility of working with other strategic partners.
Arcserve is on the hunt to buy more companies.
The data protection and recovery company already has made two acquisitions since becoming independent from CA Technologies in 2014. It bought cloud provider Zetta last summer, giving it a larger cloud footprint and a disaster recovery as a service (DRaaS) offering. That followed the Arcserve acquisition of email archiving specialist FastArchiver in April, for long-term data retention on premises or in the public cloud.
Both Zetta and FastArchiver target the midmarket. New CEO Tom Signorello is planning the next Arcserve acquisition, with possible targets ranging from analytics to information management to data management or even security.
“We will be looking at targets again in the coming quarters,” Signorello said. “We are actively working with our owners on what the next logical bolt-on is. The opportunities are broad, so I don’t want to be specific.”
Signorello took over as CEO in early October, after Arcserve’s first CEO Mike Crest left to head IT services firm Optanix.
Arcserve has returned as an independent vendor at a time when the data protection market is evolving into another area: the overall data management space. Companies such as Veritas Technologies and Commvault Systems are building overarching data management platforms that handle data indexing, search, analytics, copy data management, governance, security and data mobility.
Arcserve acquisition integrations expand cloud footprint
Signorello said that the integration with Zetta is “well underway.” The latest Arcserve acquisition gave it cloud data centers on the West Coast and in New Jersey, and Arcserve plans to open one in the United Kingdom. The Arcserve acquisition also broadens its relationship with cloud providers Amazon Web Services (AWS) and Microsoft Azure.
Arcserve already had a relationship with AWS before the Zetta buy. The Arcserve UDP appliance allows customers to use the AWS cloud as a remote disaster recovery site. They can replicate recovery points to a Windows-based Recovery Point Server (RPS) in the AWS cloud, launch an Amazon Elastic Compute Cloud (EC2) instance and copy full recovery points to Amazon Simple Storage Service (Amazon S3).
Zetta gave Arcserve a direct-to-cloud offering, but the company is also investing more in a hybrid cloud approach.
“There are going to be enhancements in the hybrid area in the next couple of quarters,” Signorello said. “Our clients need it. They need the flexibility. All the managed services providers (MSPs) and VARs are moving in that direction.”
Before joining Arcserve, Signorello was CEO of OnX Enterprise Solutions and held vice president positions at Diebold and Unisys.
With the latest upgrades to its object storage platform, IBM recognizes that not everyone who uses object storage does so in the public cloud. They don’t all start with hundreds of terabytes of capacity either.
IBM Cloud Object Storage System now offers “compliance-enabled vaults,” so customers can create on-premises data vaults that protect unstructured data from deletion or modification, to satisfy regulations such as SEC Rule 17a-4 and Financial Industry Regulatory Authority (FINRA) rules.
IBM also is offering a new capacity level for customers that want to get started with cloud deployments. The IBM Cloud Object Storage System now starts at 72 TB, with a concentrated dispersal mode capability that allows smaller-footprint systems to scale into larger ones. Despite the name, IBM Cloud Object Storage System – based on technology acquired from Cleversafe – is available for on-premises use or in a dedicated environment in the IBM Cloud.
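The compliance-vault behavior IBM describes, blocking deletes and overwrites until a retention period expires, amounts to WORM (write once, read many) semantics. A minimal sketch of the concept in Python; this is illustrative only, not IBM's actual API, whose details the article does not cover:

```python
import time

class ComplianceVault:
    """Toy WORM vault: objects cannot be overwritten or deleted until
    their retention period expires. Illustrates the concept only; it is
    not modeled on IBM's actual interface."""

    def __init__(self, retention_seconds):
        self.retention = retention_seconds
        self._objects = {}  # name -> (data, stored_at)

    def _locked(self, name):
        _, stored_at = self._objects[name]
        return time.monotonic() - stored_at < self.retention

    def put(self, name, data):
        # Overwriting an object under retention is rejected, like a delete.
        if name in self._objects and self._locked(name):
            raise PermissionError(f"{name}: object is under retention")
        self._objects[name] = (data, time.monotonic())

    def get(self, name):
        return self._objects[name][0]

    def delete(self, name):
        if self._locked(name):
            raise PermissionError(f"{name}: retention period has not expired")
        del self._objects[name]

vault = ComplianceVault(retention_seconds=3600)
vault.put("trade-log-2017-10", b"regulated records")
assert vault.get("trade-log-2017-10") == b"regulated records"
try:
    vault.delete("trade-log-2017-10")   # still under retention: rejected
except PermissionError:
    pass
```

Real Rule 17a-4 compliance involves more than immutability (duplicate copies, audit access and so on); the sketch captures only the retention-lock aspect.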
IBM hopes the compliance vaults will open up application opportunities for on-premises object storage, while the lower-capacity option is about getting more customers started with an on-site cloud, with the expectation they will move to the public cloud.
“This is a new space that they can compete in,” said Scott Sinclair, a storage analyst with Enterprise Strategy Group. “We have run studies that show on-premises object storage is less expensive than cloud-based object storage. The bottom line is, public cloud is not always cheaper.
“Some workloads are more expensive in the cloud,” Sinclair said.
There is evidence customers are learning that not all workloads are meant for the public cloud, with unexpected costs and security emerging as two red flags.
The ESG 2017 Storage Trends: Challenges and Spending report found 39% of storage decision makers using off-premises cloud resources had moved at least one workload back to on premises.
“There is evidence that organizations need to apply a pragmatic approach with regards to the location of applications and data, whether on- or off-premises,” Sinclair said. “Moving a workload to the public cloud is a big decision and should be treated as such.
“Additionally, what workloads are coming back? Why? Is it cost? Is it security? Is it availability? And how are these movements changing the cloud strategy within the organizations?”
He said ESG is researching those issues now.
Robert McCammon, IBM’s leader for IBM Cloud Object Storage, said the vendor is tackling security concerns with its compliance vault feature.
“This is a new software feature of our existing object storage product that is available with our software release in December,” McCammon said. “This is a new type of vault that prevents operations that are unacceptable on one of these compliance environments.”
McCammon said when it comes to compliance rules, “customers don’t have a lot of choices for storage and they have even fewer choices for object storage.”
Highlighted by the largest customer deal in its history and a big bump in enterprise bookings, Veeam reported 34% year-over-year total bookings growth in the last quarter.
There has been a push in the last year-and-a-half to go after the enterprise more aggressively, said Peter McKay, Veeam co-CEO and president. The data protection vendor reported 84% year-over-year growth in new enterprise bookings for the third quarter.
A $4.1 million deal with a European company is Veeam’s biggest ever enterprise booking, McKay said, although he would not identify the customer. The Veeam revenue report showed more $500,000 deals closed in 2017 than in the past four years combined.
“That partnership ecosystem has become a really critical part” of Veeam growth, he said.
The cloud continues to be a bigger piece of the Veeam revenue picture. It took six years for Veeam to net $50 million in bookings for its cloud business, but it has hit $54 million in three quarters this year, according to McKay. Veeam also reported a 72% year-over-year increase in cloud bookings for the third quarter.
Veeam is averaging 4,000 new customers each month. The vendor claims 267,500 customers and 16,700 service provider partners using its software.
Looking for more in Veeam revenue, platform
Veeam is shooting for $1 billion in annual bookings by 2018 and $1.5 billion by 2020. It hit $607 million in 2016, 10 years after the company launched. At its current pace, Veeam is projected to hit about $800 million in revenue by the end of the year.
“We have to have a good Q4 to get there,” McKay said of the 2018 goal.
Veeam has invested in smaller companies such as cloud data protection vendor N2WS, and has an OEM agreement through which N2WS technology will be part of Veeam Availability for Amazon Web Services (AWS).
McKay said to expect a technology acquisition soon.
“We’re looking. We’re active,” McKay said, adding that he doesn’t feel acquisitions are needed for Veeam revenue to hit $1.5 billion.
McKay pointed to data management, visibility and protection as areas for growth. He said there are areas where Veeam could improve its processes, for example in better figuring out how go-to-market strategies differ by country.
“We’re incredibly paranoid of taking our eyes off the ball,” McKay said.
Veeam’s competition includes Commvault and Veritas. McKay said he sees the amount of funding and competition from startups — such as Cohesity and Rubrik — as a good sign.
“It makes us better,” McKay said.
Version 10 of Veeam’s Availability Suite is due soon. That upgrade will feature continuous data protection and object storage support. Veeam also plans to add new elements to the platform that it hasn’t publicly disclosed yet, McKay said.
Veeam has recently expanded into physical backup as well as multi-cloud support for Microsoft Azure and Azure Stack, AWS, IBM Cloud and software-as-a-service applications such as Microsoft Office 365.
Western Digital CEO Steve Milligan said he remains confident his company will win its fight to prevent its NAND manufacturing joint venture partner Toshiba from selling its memory chip unit without Western Digital’s consent.
Toshiba has agreed to sell its chip unit to a group led by Bain Capital despite Western Digital’s opposition to the deal. Western Digital is trying to block the proposed $18 billion Toshiba NAND sale, claiming it violates terms of its joint venture with Toshiba.
“It continues to be our position that the transaction is not permitted without our consent. That leads to where we are today,” Milligan said during Western Digital’s earnings call Thursday night.
Western Digital gained its stake in the joint venture when it acquired flash manufacturer SanDisk, which already had the joint venture agreement with Toshiba. But when Toshiba decided to put its memory business up for sale earlier this year, it looked at bidders outside of Western Digital. Earlier this month, Toshiba said it would sell to the Bain consortium. That group includes Western Digital competitors Seagate, Kingston Technology and SK Hynix, along with Toshiba customers Dell and Apple. Toshiba will retain a stake in the NAND unit if the deal goes through.
Milligan predicted Western Digital will “ensure the longevity and continued success of the joint venture,” either through arbitration or negotiation with Toshiba. And he advised: don’t believe everything you read about the dispute over the Toshiba NAND sale, unless it comes from Western Digital.
“We are confident in our fact-based legal positions, and our right to injunctive relief,” he said.
Western Digital claims the joint venture prevents Toshiba from working with other companies to manufacture NAND, or to transfer interests in the joint venture without Western Digital’s consent.
Milligan said it may take until 2019 to gain a final ruling on the Toshiba NAND sale from the International Court of Arbitration. However, he hopes SanDisk will win temporary injunctive relief by early 2018. That relief would prevent Toshiba’s planned NAND sale to the Bain group. Western Digital has made three requests for relief with the arbitration court in 2017. Each arbitration will be decided by a three-person tribunal. Western Digital has also sought arbitration to try to prevent Toshiba from moving ahead with its Fab 6 production plant without SanDisk involvement. Toshiba opened that new production center in August on its own.
Milligan said SanDisk’s consent rights “are clear and explicit” and will hold up legally, although he would rather not have to go that route.
“Just to be clear, we do not undertake litigation lightly,” he said. “We are not litigious. And it should only be a last resort, especially in the context of this joint venture relationship.” He said Western Digital is open to any reasonable terms proposed by Toshiba, but “we will not agree to terms such as SanDisk unilaterally waiving or negating its consent.”
In an interview with Bloomberg this week, Bain managing director David Gross-Loh accused Western Digital of misrepresenting its rights in its legal challenge to the Toshiba NAND sale. He also urged Western Digital to reach an agreement with Toshiba to allow the deal to proceed.
Milligan said there has been “a great deal of misinformation provided into the marketplace through various channels” about the situation. “Western Digital will continue to communicate consistently and transparently,” he said.
When asked if he had alternative plans if Western Digital is unsuccessful in arbitration, Milligan said current supply agreements will give his company NAND through 2029.
Quest Software, independent again after its spinoff from Dell EMC nearly a year ago, is building out its data protection portfolio with a new cloud-based management console.
The Quest Data Protection Portal is an extension of Rapid Recovery 5.4, the latest version of Quest’s snapshot-based application and data recovery software. The Quest Data Protection Portal runs on the Microsoft Azure public cloud.
“We are planning to add other technologies into the cloud-based management tool,” said Adrian Moir, Quest’s lead technology evangelist. “We are not restricting the technology just to Rapid Recovery. Here we have a cloud platform so that managed service providers can manage multiple customers from multiple locations.”
The Quest Data Protection Portal monitors all Rapid Recovery core servers that may be in disparate locations while checking the status of backups, replications and virtual standbys for disaster recovery. The portal also allows IT administrators to perform backups, replications and recoveries from anywhere.
Quest is working on re-establishing itself as an independent company after Dell EMC sold off its software group to Francisco Partners and Elliott Management in November 2016. Dell Software acquired Quest in 2012 for $2.36 billion, well before its $60-billion-plus merger with EMC.
This past summer, Quest added Vroom, a tool for virtual infrastructure monitoring and recovery management. Quest Vroom protects VMware and Hyper-V while providing disaster recovery-as-a-service in Microsoft Azure. Vroom uses Rapid Recovery and it can monitor a virtual machine (VM) as it is running and show what could happen to it in case of an outage.
Jason Buffington, a principal analyst at Enterprise Strategy Group, said the Quest Data Protection Portal is part of the vendor’s “broader system strategy.”
“The IT operations do patching services and monitoring services, and are the first to be called when things go down,” he said. “This makes them highly motivated to be part of the protection and recovery process. This is a good way to reconsider Quest because they are unique in their ability to integrate system management and data protection within a single pane of glass.”
Following six straight quarters of making or beating expectations, Commvault stumbled last quarter when it missed its targets for revenue and income.
The backup and data management company today reported revenue of $168.1 million last quarter, up 5% from last year and 1% from the previous quarter. Software revenue of $72 million increased only 2% year over year, and dropped 4% from the previous quarter. Commvault lost $4.7 million from operations in the quarter, compared to a loss of $100,000 the previous year. Wall Street expected $170 million in revenue.
Commvault CEO Bob Hammer called the quarter a disappointment, blaming it on a failure to close large deals. He said many of those deals will eventually close, and he also pointed to the newly launched Commvault HyperScale Appliance and a Cisco reseller deal as reasons for optimism. Cisco and Commvault today said Cisco is adding a ScaleProtect product that incorporates Commvault HyperScale software on Cisco UCS servers.
“I will just start by saying that I am disappointed with our financial results,” Hammer said on Commvault’s earnings call. “Simply put, there were a number of six- and seven-figure software deals, which we were unable to close that were being forecasted right up until the last several days of the quarter. We believe many of these deals will close [this quarter] and the majority by the end of [March]. In fact, some have already closed.”
Still, there is no guarantee those deals will close, especially the ones that Hammer described as complicated transactions. Hammer put the blame squarely on his sales team for not closing those deals last quarter. He said there were no indications Commvault lost those deals to competitors or because of its products.
“I don’t know how else I can say it. They screwed up. Period. End of story,” he said, attributing the failures to “poor sales execution on large deals primarily in the Americas.”
He said the sales situation was “in stark contrast” to progress Commvault has made on the product front. He said the appliance launches “have the highest potential to change the game of anything we’ve done in recent memory.”
The vendor launched Commvault HyperScale Appliances based on Fujitsu servers last week. Commvault will also make HyperScale software available for resellers to bundle it on hardware from Cisco, Dell EMC, Hewlett Packard Enterprise, Huawei, Lenovo and Super Micro.
ScaleProtect with Cisco UCS represents the first reseller deal for Commvault HyperScale. Cisco’s sales teams and channel network will sell ScaleProtect.
Commvault HyperScale is a big piece of the vendor’s cloud data management strategy for the Commvault Data Platform, Hammer said. That is the same strategy of Commvault competitors, such as Veritas and relative newcomers Cohesity and Rubrik.
“The secondary storage platform is really a cloud play, because you’re giving the customer that same agility and high-utilization rate economics of a public cloud and easy migration,” Hammer said. “We are completely seamless in our ability to move to Azure, AWS and Google.”
Hammer said he is counting on Commvault HyperScale plus a “robust pipeline” from last quarter’s unclosed deals to boost sales in the coming months.
“Our ability to achieve our growth objectives in the near-term is dependent on a steady flow of — and good close rates — of $500,000 to $1 million-plus enterprise deals,” he said. “These deals have quarterly revenue and earnings risk due to their complexity and timing.
“We’re bringing to market many new products, new services, new powerful and simplified user interfaces. These all come with new pricing models. We’re also moving into new market segments, with new strategic partners and more aggressive channel programs. This is requiring us to execute a complex series of initiatives, which have implied execution risk.”
LAS VEGAS – In a week full of raw emotion, NetApp CEO George Kurian shared a personal story on the power of data to change lives.
Kurian was speaking to attendees at the NetApp Insight conference here on the need for integrated data mobility in an increasingly multi-cloud world. But after touting the expanding capabilities of NetApp’s Data Fabric, Kurian closed his 15-minute keynote by recounting his personal pain five years ago on learning that his son, who was 8 at the time, had developed a rare form of cancer.
The child was diagnosed with a tumor that lay between his right eye and his brain. Doctors were mystified to find an instance of this particular form of cancer in such a young child, Kurian said.
Oncologists in the U.S. used the power of data to generate sophisticated medical images and analyze the data in collaboration with cancer doctors in Canada and the United Kingdom. Over the course of numerous conference calls, surgeons painstakingly identified the source of the tumor and devised a treatment strategy.
A surgical operating theater was set up, including a series of life-sized television screens for projecting the medical images during surgery.
“For 14 ½ hours, the little boy’s life was in the hands of the best surgeons in the world. And now, I get to go home and see the power of data every evening in a happy, healthy, fully healed 13-year-old boy who loves me. He tells me every day, ‘Dad, I’m so happy when you come home,’” Kurian said, his voice wavering with emotion.
Kurian’s emotional coda followed the harrowing mass shooting here that claimed the lives of 58 people and injured hundreds more. The tragedy led to a lockdown at Las Vegas resorts and delayed the start of NetApp Insight, leading to an abbreviated schedule and sparse attendance.
LAS VEGAS – NetApp welcomed more than 4,000 attendees to the Mandalay Bay Resort and Casino this week to formally introduce its hyper-converged infrastructure based on SolidFire all-flash storage. That product news seems academic in the wake of the shooting tragedy that has engulfed this city and put a damper on the NetApp Insight user conference.
But in a bid to restore normalcy, NetApp today said the product unveiling will take place as scheduled at NetApp Insight. NetApp HCI will be the highlight of a series of product rollouts that include previews of integrated NetApp-based NFS data services in the Microsoft Azure software stack and version 9.3 of NetApp Ontap.
CEO George Kurian said NetApp considered cancelling Insight following a mass shooting that occurred at an outdoor concert on Sunday. Police say 58 people were killed and nearly 530 others were injured. The concert site is across the street from Mandalay Bay.
Kurian said the decision to move forward on NetApp Insight came after consulting with law enforcement agencies, event and hotel security, and NetApp’s own on-site staff.
‘We felt this was best,’ Kurian says of NetApp Insight decision
“Given the range of considerations, we felt this was the best thing to do,” Kurian said in a keynote during Tuesday’s general session. “On Monday, we had 3,200 of our customers in attendance. Today, we now have 4,100 who still decided to come. We felt we owed it to them to honor their decision to come to Insight.
“Pulling together as a community is one way to show the world that we intend to move forward. We want to move from grief to grace and light the first candle of hope in this valley of darkness.”
NetApp provided most of the details around its new HCI product in June, promising general availability in the fourth quarter of 2017. NetApp is late to the hyper-converged market and some have questioned whether the product is truly hyper-converged. Still, the vendor hopes to catch up in the market as it did after getting its all-flash FAS (AFF) arrays out the door later than its main competitors.