LAS VEGAS – Michael Dell today said his company will be called Dell Technologies after he completes his $67 billion acquisition of EMC. The EMC name will live on for the enterprise business, which will be called Dell EMC.
Dell revealed the name during his keynote address at EMC World 2016. He said Dell, EMC and its affiliated businesses, including VMware, Pivotal, Virtustream and RSA, along with Dell’s recently spun-off SecureWorks, will all be part of a strategically aligned family of businesses. “As family names go, I’m kind of attached to Dell,” was his reason for calling the company Dell Technologies.
Dell used the keynote to try to convince attendees that EMC will be bigger and better after the acquisition. He said the deal will close “under the original terms and under the original timeframe.” The deal is expected to close by October. The two major hurdles are regulatory approval in China and ratification from EMC’s shareholders.
Dell compared his company’s direction to Hewlett-Packard, which last year split the company into two. “Companies like HP are shrinking their way to success. Wait, you can’t shrink your way to success. That’s not a reality,” he said. “They’re separating their edge from their core.”
Cinder still has the highest adoption rate among OpenStack storage projects, but interest in the nascent Manila file-share service is picking up, according to a 2016 user survey released this month by the OpenStack Foundation.
The survey showed that 57% of 290 deployers who answered the adoption question use OpenStack Cinder block storage in production, and another 26% are testing it. OpenStack Swift object storage was in production use at 32% of the deployments, with another 21% using it in test mode.
The emerging Manila shared file system had production deployments among only 3% of respondents and test usage at 8%. But Manila is generating lots of interest. Only the Magnum containers service (44%) and Designate DNS service (41%) generated more mentions than Manila (38%) among the 290 respondents who rated the OpenStack projects they’re most interested in.
“We’re starting to see a handful of big deployments move past the ‘kicking the tires’ phase and deploy Manila in production. The number of new users trying Manila is also more than I can count,” said Ben Swartzlander, the OpenStack Manila project team lead and a senior software engineer at NetApp.
The OpenStack Foundation’s volunteer survey generated responses from 1,603 community members representing 1,111 unique organizations and 405 user deployments. The IT industry dominated the survey pool, at 68%, followed by telecommunications (14%), academic/research (9%), financial (2%) and film/media (2%). The number of users responding to specific questions varied.
The top priority for most respondents is saving money over alternative infrastructure choices, and the majority of deployments are in on-premise private clouds.
More than 75% of 256 respondents use between five and nine OpenStack projects. Looking at OpenStack storage, 20% of 312 respondents said they use the open source software in production for storage/backup/archiving purposes. Another 5% use it for development/quality assurance, and 3% are involved in proofs of concept.
The top Cinder driver in production use among 260 respondents was Ceph RBD, followed by the default Linux logical volume manager (LVM) (16%), NetApp (8%), NFS (5%), GlusterFS (5%), SolidFire (4%) and VMware VMDK (3%).
And there are some sizable Cinder block storage deployments. About 9% of 148 respondents have more than 1 PB. The Cinder breakdown for the rest was:
19% – 100 TB to 999 TB
38% – 10 TB to 99 TB
24% – 9 TB or less
Among the OpenStack Swift respondents, the breakdown was as follows:
4% – 1 PB to 99 PB
20% – 100 TB to 999 TB
25% – 10 TB to 99 TB
51% – 9 TB or less
Asked what kinds of data they plan to store on object storage in the next 12 months, the respondents said:
68% – backup/archiving
60% – Docker/container/VM images
58% – application data
32% – big data
3% – other
The bad news for FalconStor is it lost $3.2 million last quarter. If there is good news, it’s that its new FreeStor software is picking up OEM and managed service provider (MSP) customers and selling way ahead of last year. But the really bad news is, the software vendor is down to $11.4 million in cash and needs to turn its fortunes around in a hurry to survive.
On Wednesday, FalconStor reported $7.4 million in revenue last quarter, compared with $8.7 million a year ago and $9.4 million in the last quarter of 2015. Executives blame the decrease on a sharp drop in sales of legacy products that were on the market before FreeStor.
CEO Gary Quinn said FalconStor sold three times as much FreeStor through MSPs last quarter than it did in all of 2015, and enterprise subscription licensing in the quarter was nearly half of the 2015 total. FalconStor lists Volkswagen in Poland, Sunrise Communications in China and Petrofac in the U.K. among its FreeStor enterprise customers, although most of the revenue has come from international MSPs such as Hitachi Systems in Japan, Blue Chip in the U.K. and LG CNS in South Korea. FreeStor OEMs include array vendors X-IO and Kaminario, backup appliance vendor Synerway and subsystems vendor Rorke Data, but that business relies on the partners’ success.
“There was a decline in our legacy business. Many storage companies are experiencing that now,” FalconStor CFO Lou Petrucelly said. “That’s not an excuse, but it’s the reality.”
Petrucelly admitted the financials must improve, but said “We feel confident that our new product can work, and that gives us the gas to move forward.”
FalconStor launched FreeStor storage virtualization and data protection software in February 2015. The platform combines data migration, continuous availability, protection and recovery, and inline data deduplication. FalconStor added predictive analytics to FreeStor this month for capacity planning, service-level management and storage health monitoring.
Quinn said FalconStor’s internal forecast called for about $9 million to $10 million in revenue for the quarter, and it came up short because of around $1.5 million to $2 million in late orders. “We still have that business, it wasn’t lost,” Quinn said. “We would have been close to break even for the quarter. If we can bounce back [this quarter] and show positive cash results, that will go a long way towards saying we just hit a bump in the road or stepped in a hole.”
Quinn said the primary use cases for FreeStor so far have been backup as a service and disaster recovery as a service. That gives FalconStor hope that it can pick up steam if businesses move to the cloud in droves.
Quinn said reaction to the technology has been good, but FalconStor is still battling reputation problems from several years back, especially in the United States.
“Our U.S. presence has been diminished, and a lot of that is due to the history of the company,” he said. “Something went on between FalconStor and the marketplace, and we just have had a difficult task getting traction.”
Dell and Scality recently added a highly dense, purpose-built cloud storage system that is pre-integrated with the RING object storage software to their reseller product lineup.
The SD7000-S cloud storage system scales to 688 TB of raw storage in a 4U form factor, or 6.9 PB of capacity in a single rack. The jointly engineered SD7000-S has two server nodes, with two Xeon E5-2650 v3 processors per node and a dual-port 10 Gigabit Ethernet network interface card. There are 90 hot-plug, 8 TB, 3.5-inch disk drives in the 4U enclosure.
The Scality RING can be deployed with three SD7000-S storage servers. The software provides multi-petabyte storage for unstructured data with a single distributed namespace across one or multiple sites. It provides file and object access, with optional OpenStack APIs.
The RING software uses a decentralized, distributed architecture, providing concurrent access to data stored on x86-based hardware. The RING’s core features include replication and erasure coding for data protection, auto-tiering and geographic redundancy within a cluster.
Scality also has a reseller agreement with HP that became official in October 2014, with the Scality software running on HP ProLiant servers. In August 2015, Scality scored its deal with Dell when it was added to the company’s Blue Thunder program that combines software-defined storage with Dell servers.
Panzura’s Global File System (GFS) is certified to run in Microsoft Azure, giving customers a second “in-cloud NAS” option with a major cloud provider.
The GFS, running on Panzura’s Cloud Controller, has been available in Amazon since November 2013. But Barry Phillips, chief marketing officer at Panzura, said the company didn’t see Microsoft Azure object storage taking off from a storage perspective until last year.
“It went from not having many on Azure storage to having a large number on Azure storage,” he said.
Phillips said, under a typical scenario, customers move all of their unstructured file data into a public or private “cloud bucket.” Panzura caches the hot data on premises in a controller that runs in a physical appliance or a virtual machine. Panzura sells all-flash and hybrid cloud controllers for on-premise use. Colder data that customers rarely use is stored in the cloud. Panzura supplies the global file system that interfaces with object storage such as Microsoft Azure, Amazon S3, Google, EMC’s Atmos and IBM’s Cleversafe.
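The hot/cold split Phillips describes is essentially a read-through cache sitting in front of an object store: serve hot files locally, fetch cold ones from the cloud bucket on demand. A minimal sketch of that pattern (hypothetical names; this is not Panzura's code, and `cloud_get` stands in for any object-store client):

```python
from collections import OrderedDict

class ReadThroughCache:
    """Keep hot files locally; fall back to the cloud bucket for cold ones.

    Illustrative only -- `cloud_get` is a placeholder for an object-store
    client call (e.g. an S3 or Azure Blob GET).
    """
    def __init__(self, cloud_get, max_items=2):
        self.cloud_get = cloud_get
        self.max_items = max_items
        self.hot = OrderedDict()  # path -> bytes, kept in LRU order

    def read(self, path):
        if path in self.hot:
            self.hot.move_to_end(path)  # hot hit: refresh LRU position
            return self.hot[path]
        data = self.cloud_get(path)     # cold miss: fetch from object storage
        self.hot[path] = data
        if len(self.hot) > self.max_items:
            self.hot.popitem(last=False)  # evict the least recently used file
        return data
```

The "extra second" Benjamin mentions later for cloud-resident files corresponds to the cold-miss path here.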
By enabling the Panzura Cloud Controller to run in Azure and Amazon, Panzura is giving customers the opportunity to use the same global file system on premises and in the public cloud. Customers also have the option to run their applications in Azure and Amazon and use Panzura as in-cloud NAS, with no on-premise file storage.
The Panzura Cloud Controller is available on the Microsoft Azure Marketplace.
“We fundamentally believe going forward that as companies move to the cloud, being able to put all of their file system in the cloud itself whenever possible is something that they’ll be looking to do,” Phillips said. “Of course, if the distance to their office is too far from any cloud, then they can certainly run one of our on-premise cloud controllers.”
He said remote access of a file system over a long distance would be slow because of bandwidth and latency. But a customer could mix and match, with some branch offices running on-premise controllers while others have no on-site infrastructure and use only the in-cloud NAS, he said.
Phillips said customers also are able to mesh together file systems in Azure and in Amazon using the Panzura software. “It’s not an either/or with us,” he said.
Microsoft Azure operates data centers in 22 regions around the world. Locations include California, Texas, Illinois, Iowa and Virginia in the U.S.
“As more and more companies want to move their infrastructure into the cloud but still have on-premise performance, then having those data centers in the middle of the U.S. is helpful,” Phillips said.
The Panzura Cloud Controller provides global file locking, which enables users to work over a wide area network (WAN) with applications built for local use, along with global snapshots, deduplication and compression, and security for data at rest and in transit between controllers and the cloud.
“Our customers essentially have Panzura controllers and a cloud bucket. That is all,” Phillips said. He said workflow, operational expenses, and maintenance of backup and archive go away, because cloud providers such as Amazon store multiple copies of the data and can withstand two data centers going down.
The man who led the design of Pure Storage’s FlashArray says he expects the company’s second all-flash platform to eventually sell even more.
Pure founder and chief architect John Hayes says FlashBlade, the scale-out NAS and object system set to launch later this year, has a larger potential market because unstructured data is growing much faster than the structured data FlashArray is built for.
“Ultimately, it’s a larger use case,” Hayes said of FlashBlade. “We looked at all infrastructure data, everything from files to archives. That’s a broad target in the data center and today you have all of these different products optimized for different points. Our theory was that we could actually hit all these optimization points. It’s also the area that’s growing fastest. Databases and virtual machines aren’t high data growth, that’s like 10 percent a year growth. All the unstructured data is growing around 40 percent a year. And the variety of applications for unstructured data is increasing. We’ll sell both platforms into a lot of organizations. We’ll sell FlashArray to the IT team and FlashBlade to the engineering team.”
Hayes said FlashBlade, which uses object storage with a file system, is built to accommodate thousands of servers, a load traditional storage arrays cannot handle even if they are filled with solid-state drives.
“We believe in using the network because networks are getting much better,” he said. “It’s also about taking away the limits. Why do people want to use [Amazon] S3, for example? A big part of it is because it’s unlimited. You’re not creating a problem in the future where you won’t be able to store enough data. That’s why we wanted to make a box that’s really an unlimited data store that’s attached to as many computers as you need to attach to it.”
Not everyone agrees with Pure’s vehicle for expansion. In a blog post on his company’s website, Coho Data CTO Andy Warfield said FlashBlade’s architecture has problems. Warfield wrote that Coho Data considered a similar product in 2013 before scrapping plans. He criticized FlashBlade mainly because it uses proprietary flash hardware and is not flexible enough to be a true scale-out system.
Hayes seemed more confused than upset by Warfield’s criticism. “I read it. I don’t really understand his point of view,” Hayes said. “I don’t know what to say. They’re building stuff, we’re building stuff. I don’t have much to say about it.”
Hayes also doesn’t have much to say about whether Pure will expand into other types of products, except that any new offerings may address another market. “I think between the two products we have, we’ll be able to cover almost all storage in the data center,” he said. “If we launch any new products, it’s probably in a different category.”
They won’t be software-only, despite Hayes’ background with software companies before Pure. He said software is the key to success for any all-flash system but a software-only product makes little sense.
“It’s an enormous amount of work to establish hardware compatibility,” he said. “It’s going to take us more engineering to ship a software-only product. I don’t understand what the customer benefit is going to be if they have to integrate the software and hardware themselves. It probably won’t save them any money.”
Gridstore apparently grew so fast under CEO George Symons that its board decided to change CEOs to keep up with the rapid growth.
Gridstore founder and CTO Kelly Murphy has moved back into the CEO role on an interim basis until the hyper-converged vendor finds a replacement for Symons.
“Gridstore closed out a record year in 2015 in both revenue and customer acquisition and launched 2016 with a new round of investment,” Gridstore VP of corporate communications Douglas Gruehl said in an e-mail statement. “In order to manage the company’s hyper growth, the board has decided that a new CEO with experience in managing a fast-growing company is needed. A search for a new CEO is underway.
“With the new investment Gridstore is expanding rapidly, adding to sales, support, and R&D worldwide; it is truly an exciting time for us.”
When Gridstore closed its $19 million funding round in January, Symons said he was looking forward to growing the business and doubling its headcount, particularly in sales and marketing. But that funding round brought other changes that may have spelled the end for Symons. Gridstore replaced chairman Geoff Barrell with Nariman Teymourian, who ran Hewlett Packard Enterprise’s Converged Systems division. Kevin Dillon of Atlantic Bridge Capital, which led the funding round, also joined the board.
Gridstore got a head start on rebuilding its executive team at the same time, bringing in Dell veterans James Thomason (chief strategy officer) and Kevin Rains (chief financial officer), and a VP of sales, Phil Lavery, who also came from Atlantic Bridge.
Symons became Gridstore CEO in 2013, after stints as a CEO at Yosemite Technologies and Evostor, CTO at EMC, COO at Xiotech and chief strategy officer at Nexsan. He transformed Gridstore from a company that sold storage appliances for Microsoft Hyper-V to an all-flash hyper-converged vendor, still focused on Microsoft. In January, Symons said Gridstore revenue grew 343% year-over-year in 2015. “It surprised me how quickly it happened,” he said at the time.
So quickly that his board felt he couldn’t keep up.
EMC’s storage sales declined more than expected last quarter as the vendor waits to become part of Dell.
EMC executives offered several reasons for the drop in sales, but not the most obvious one: that customers are reluctant to buy until they see what happens if and when the $67 billion Dell deal closes.
EMC CEOs Joe Tucci and David Goulden – who heads the storage business – say the decline was due to product cycle transitions and an overall caution in IT spending that caused a backlog of deals. Those reasons are often cited by storage vendors for poor results, and may be valid in this case. But it’s unrealistic to think that none of the reluctance to buy is related to the pending Dell deal.
On today’s earnings call, Tucci emphasized that the Dell deal is on track to close “under the original terms and under the original timeframe.” The original timeframe called for it to close between mid-2016 and October. And Tucci said EMC’s plans for 2016 call for revenue growth, indicating he expects the sales declines to be reversed in coming months.
Tucci called the Dell deal “a great strategic option” and said “the combination of EMC and Dell creates a powerhouse in the IT industry. Integration planning has accelerated. [Dell and EMC] have developed detailed integration plans to assure we hit the ground running when the merger closes.”
He said regulatory approval has been granted throughout the world except for China. EMC stockholders still have to approve the deal. And of course, the $57 billion in funding must also be secured but Dell and EMC execs have said that is no problem.
Tucci would not comment on what role he would play in the new company, which will be headed by Michael Dell. He didn’t exactly sound like he is resigned to ride off into the sunset for his long-anticipated retirement, though.
“I’m going to punt a little bit, and then I’m going to tell you the absolute truth,” Tucci said when asked about his role after the Dell deal. “To me, this is all about making sure it’s a good deal for our customers, our shareholders and our people, and they’re all priority number one to me. And it’s not about me. I have a lot of energy left, I’m going to continue to work doing different things. Potentially I could help advise Michael, but I just don’t want to go there yet, and Michael and I have not gone there yet.”
EMC Information Infrastructure (the storage group) reported $3.8 billion in revenue for the first quarter, down 6% year-over-year. Storage product revenue of $1.96 billion dropped 10%, partly because of $75 million worth of unfilled late orders. EMC II CEO Goulden said its XtremIO all-flash storage sold well, and the VMAX All-Flash array is one of the new products he expects to pick up steam.
Goulden said he expects VMAX All-Flash arrays to make up at least half of new VMAX sales by the end of the year. And there will be another all-flash array coming at EMC World in two weeks. That will be a midrange all-flash array that will either be part of the VNX family or replace it.
“VMAX All-Flash is a game-changer,” Goulden said. “We will have a major new mid-tier announcement at EMC World that will be the start of a new cycle where the traditional VNX plays.”
Microsoft Azure is throwing its weight behind startup Talon Storage, making Talon’s CloudFAST file sharing and acceleration software available from the Azure Marketplace.
CloudFAST for Azure StorSimple is a joint offering between Talon and Microsoft Azure’s StorSimple cloud storage. CloudFAST Core software runs on-premise on StorSimple appliances, which cache the most active data and send other files to Azure. CloudFAST Edge runs in remote offices, caching files and sending them to the Core system in the data center. CloudFAST features such as a global namespace and file locking allow collaboration among workers in different locations without data being lost to overwrites.
Representatives from Talon and Microsoft said they have integrated CloudFAST for StorSimple, and customers can buy CloudFAST directly from the Azure Marketplace instead of buying CloudFAST from Talon and setting up an Azure account.
CloudFAST for StorSimple costs three cents per GB per month. That would come to $9,200 per year for 25 TB of data center file storage, $36,800 for 100 TB and $148,000 for 400 TB.
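Those annual figures work out if the pricing is applied per binary terabyte (1 TB = 1,024 GB) over 12 monthly charges. A quick back-of-the-envelope check, assuming that interpretation:

```python
def annual_cost(tb, rate_per_gb_month=0.03):
    """Annual cost in dollars for `tb` terabytes at a per-GB monthly rate.

    Assumes binary terabytes (1 TB = 1,024 GB) and 12 monthly charges,
    which matches the rounded figures quoted in the article.
    """
    return tb * 1024 * rate_per_gb_month * 12

for tb in (25, 100, 400):
    # 25 TB -> $9,216; 100 TB -> $36,864; 400 TB -> $147,456
    print(f"{tb} TB: ${annual_cost(tb):,.0f}/year")
```

The exact results ($9,216, $36,864 and $147,456) round to the $9,200, $36,800 and $148,000 figures above.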
While other cloud NAS vendors allow customers to choose between public clouds, Talon senior vice president Charles Foley said Azure is the only cloud partner for CloudFAST. Foley said Azure is the best fit because CloudFAST is used primarily for Windows and is an enterprise product.
“We’re putting our wood behind the Microsoft arrow,” Foley said. “Microsoft is the number one enterprise vendor in the world by virtually any spending survey you look at. Azure is an enterprise cloud platform. Our target customer is not a small business. Our target customer is looking for collaboration and distance-based performance. If you’re separated by oceans and continents, you probably need us.”
Azure partners with many software vendors, but Microsoft Azure director of product marketing Badri Venkatachari said Talon’s file locking adds value for StorSimple customers.
“A lot of our customers feel the need for a data center consolidation and collaboration story,” Venkatachari said. “Talon offers a storage layer with file locking. The storage platform sits in the data center and extends to branch offices. Think of Talon as a software layer and StorSimple as the storage platform.”
Web printing company Cimpress has been using StorSimple and Talon together since late 2015, soon after Talon launched CloudFAST for Microsoft Azure File Service. Mike Benjamin, manager of enterprise applications at Cimpress, said the combination helps him manage file storage for more than 7,000 employees across 50 worldwide locations. He called CloudFAST “tailor-made for my team” because it runs on Windows, and said its file locking was a feature he sought for years. Benjamin said he previously tried DAS, NAS, SAN, WAN acceleration and other cloud appliances but could not find the required level of performance and user experience.
“Talon allows us to distribute our files so they can be consumed and collaborated on in a global fashion,” he said. “The way our teams are collaborating, they were stepping on each other, and my [IT] team had to ease the burden. We were looking for robust file locking.”
Benjamin also cited Talon’s visual indicator that shows which files are in cache and which are in the cloud. “The user knows if it’s in the cloud it will take an extra second to be brought down,” he said.
He said while CloudFAST isn’t as fast as an on-premise file server, it’s fast enough for the files his users deal with – mostly Office files. “You’re not going to get the same performance you get with a local file server but you get the global collaboration, so there’s a trade-off,” Benjamin said.
Cohesity, which bills itself as convergence for secondary data, is adding public cloud support to its data protection platform.
Cohesity’s converged data protection strategy combines data storage for backup, archiving, test/dev and other non-production workloads into one scale-out platform. Today it added the ability to use Amazon, Google and Microsoft public clouds to free up on-premise capacity.
Cohesity’s cloud features are CloudArchive, CloudTier and CloudReplicate.
CloudTier moves seldom-accessed blocks into the same public clouds, but not into their cold data services. It moves data that must be accessed occasionally and isn’t yet ready for long-term archiving, tiering the data after a given capacity threshold is met to ensure an on-premise cluster never runs out of space.
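The threshold mechanic described above, where nothing is tiered until local capacity crosses a watermark and then the coldest blocks go first, can be sketched as follows. This is an illustration of the general idea, not Cohesity's implementation; all names and numbers here are hypothetical:

```python
def blocks_to_tier(blocks, capacity_gb, threshold=0.80):
    """Pick the least recently accessed blocks to move to the cloud once
    used capacity exceeds `threshold` of the local cluster's capacity.

    `blocks` is a list of (block_id, size_gb, last_access_ts) tuples.
    Hypothetical sketch; Cohesity's actual policy engine is not public.
    """
    used = sum(size for _, size, _ in blocks)
    target = threshold * capacity_gb
    if used <= target:
        return []  # under the watermark: nothing gets tiered yet
    # Over the watermark: evict the coldest blocks first until local
    # usage drops back below the threshold.
    tiered, freed = [], 0.0
    for block_id, size, _ in sorted(blocks, key=lambda b: b[2]):
        if used - freed <= target:
            break
        tiered.append(block_id)
        freed += size
    return tiered
```

The key design point is that the cloud only absorbs overflow: a cluster running comfortably under its watermark sends nothing, which keeps hot data local and egress costs down.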
“With CloudTier, the cloud is acting like a remote disk,” Cohesity CEO Mohit Aron said. “With CloudArchive we’re moving a full image and essentially providing an alternative to tape.”
CloudReplicate copies local storage instances to public clouds or remote private clouds. Customers can spin up new instances in the cloud to recover data to on-site appliances.
Customers set the cloud target through the Cohesity Policy Manager. For instance, a customer can set all backups associated with a policy to move to CloudArchive once a week and retain snapshots for 120 days.
CloudArchive and CloudTier are available now. CloudReplicate is expected later this year. They are included in Cohesity’s base product at no cost, but customers must subscribe with the public cloud vendor they choose.
Cohesity VP of product management Patrick Rogers said the cloud integration fits with Cohesity’s strategy of converging all non-primary storage onto its platform.
“Customers say the model of having distinct backup software, backup targets and archives has to change,” he said. “We also believe they will continue to have significant on-premise infrastructure. They will use the cloud for the economic advantages and scale that it provides them, but maintain on-premise infrastructure for regulatory and competitive reasons.”
Enterprise Strategy Group senior analyst Scott Sinclair said using the cloud for selected data sets will give Cohesity customers flexibility in the way they use public clouds.
“Secondary storage can be considerable, running to hundreds of terabytes or petabytes,” Sinclair said. “If you move all of that off to Amazon and find out it’s more expensive than you thought, getting that back [on-premise] is difficult. Cohesity lets you move some of those copies to the cloud as a tier or move essentially snapshots to the cloud in an archival fashion. Organizations don’t always understand their workloads. They might say ‘No one ever accesses this, let’s move it to the cloud.’ Then they realize it’s being accessed by quite a few people in the company. Cohesity lets you move data to the cloud and if it doesn’t make sense, you can move it back.”