Microsoft today acquired cloud disaster recovery vendor InMage, which it will use as part of its Windows Azure cloud services.
In a blog post announcing the acquisition, Microsoft VP Takeshi Numoto wrote that InMage’s Scout software already lets customers migrate to Azure, but Microsoft will integrate the software more deeply with its cloud. Microsoft will sell Scout through Azure Site Recovery, which supports replication and recovery of an entire site directly to Azure.
“This acquisition will accelerate our strategy to provide hybrid cloud business continuity solutions for any customer IT environment, be it Windows or Linux, physical or virtualized on Hyper-V, VMware or others,” Numoto wrote. “This will make Azure the ideal destination for disaster recovery for virtually every enterprise server in the world. As VMware customers explore their options to permanently migrate their applications to the cloud, this will also provide a great on-ramp.”
He added that Microsoft will work with managed service providers who sell InMage’s ScoutCloud DR as a service.
It appears that Microsoft’s strategy for InMage technology is similar to the one it has followed with the cloud storage gateways it gained by acquiring StorSimple in 2012. Although StorSimple supported other large public clouds before the acquisition, Microsoft has integrated the gateways more tightly with Azure and now sells them only to customers who have Azure subscriptions.
The InMage acquisition certainly fits into the “cloud-first” strategy Microsoft CEO Satya Nadella laid out for employees Thursday.
InMage had raised $36 million in venture funding. Microsoft did not disclose the acquisition price.
When CA Technologies launched its arcserve Unified Data Protection (UDP) platform in May, it was seen as a new direction for the backup and recovery software. It turns out that direction is away from CA. This week the arcserve team revealed plans to spin out of CA, which signed an agreement with Marlin Equity Partners to divest the assets of the data protection business.
The move is similar to Syncsort’s data protection team spinning out of the parent company last year, eventually re-branding itself as Catalogic Software.
Mike Crest, GM of CA’s data management business unit, will become CEO of the new arcserve company. It will have headquarters in Minneapolis (CA is in New York), and all 500 or so of the arcserve product team are expected to join the new company.
The reason for the spinoff? CA is focused on large enterprise customers (it will retain its mainframe backup software) while arcserve software is a best fit for SMBs and small enterprises.
Crest wrote in a letter to arcserve customers:
“For CA Technologies, the divestiture of arcserve is part of a portfolio rationalization plan to sharpen the company’s focus on core capabilities such as Management Cloud, DevOps and Security across mainframe, distributed, cloud and mobile environments. As part of that plan, CA has a strong commitment to thoughtfully placing divested assets, such as arcserve, in environments that enable mutual benefit for customers, partners, employees and shareholders.”
As arcserve VP of product marketing Christophe Bertrand put it, “the markets we serve are not the traditional markets CA serves today. CA will continue to sharpen its focus and manage its portfolio accordingly. It made perfect sense to look at this as the next step for the arcserve business.”
Bertrand and arcserve VP of product delivery Steve Fairbanks said the new company will build its technology around the new UDP platform. UDP combines previous arcserve data protection products — Backup, D2D and High Availability and Replication — under a common interface along with new features. When UDP was released, Fairbanks called it a re-invention of arcserve.
Bertrand and Fairbanks said they are convinced Marlin will provide all the backing arcserve needs to succeed.
“Marlin has indicated they want to invest in arcserve as a platform,” Fairbanks said. “As we grow organically and we grow revenue, we will re-invest back in the business and grow the size of the company over time.”
Avere Systems today closed a $20 million series D funding round that founder and CEO Ron Bianchini said he expects will take the NAS acceleration vendor to break-even and an initial public offering (IPO).
If he’s correct, the route to the IPO will run through the cloud.
Although Bianchini sold his previous storage company, Spinnaker Networks, to NetApp for $300 million in 2003, he envisions a different outcome for Avere.
When asked if he expected this would be Avere’s last funding round, Bianchini said, “that is our plan. I believe there is an IPO in our future. Our next big focus is break-even, and that’s what we hope this takes us to.”
He said the new funding will be used to beef up sales and marketing for Avere’s FXT Series of edge filers. Avere has just over 100 employees now, and Bianchini expects to add 30 by the end of the year.
He declined to say how many customers Avere has, but said it is in “triple digits,” and most of them are large enterprises. Since releasing version 4.0 of its operating system in late 2013, Avere has focused on separating capacity from performance by pushing data to the public cloud.
“You can put our product and performance anywhere and put the repository anywhere as well, including the public cloud,” Bianchini said. “We play at the intersection of enterprise and cloud. We’re about driving enterprise performance and features, and doing that with data in the cloud.”
Avere began in 2009 selling NAS acceleration appliances designed to improve performance while lowering the cost. It later added global namespace and then WAN acceleration that could move data from the data center to the edge with little latency. Version 4.0 of Avere’s FXT software was the piece that completed the cloud picture. Customers can now use Avere’s FlashCloud software as a file system for object storage and move data to the cloud without requiring a gateway. Avere supports Amazon’s public cloud, and Bianchini expects to add OpenStack and Ceph support soon.
Along the way, Avere went from a company that could help storage giants like EMC and NetApp drive better performance to one that could reduce the need to use their NAS boxes. “With the cloud, people can replace their storage infrastructure with the cloud for the capacity piece,” he said. “I imagine we’ll have fewer friends in the enterprise space, because you can take them out now.”
The big vendors have their own plans for the cloud, which EMC made clear this week with its TwinStrata acquisition.
The funding round, led by Western Digital Capital, brings Avere’s total funding to $72 million. Previous investors Lightspeed Venture Partners, Menlo Ventures, Norwest Venture Partners and Tenaya Capital participated in the round.
EMC World 2014 in May was light on product launches, as the vendor held back expected updates to its storage array platforms. Those launches came today when EMC refreshed three of its storage platforms during an event in London, bringing out new-generation VMAX and Isilon systems and expanding its XtremIO all-flash array family.
EMC also threw in a surprise acquisition, picking up cloud storage controller vendor TwinStrata for an undisclosed price. EMC will use the technology to embed cloud access capabilities into its VMAX high-end enterprise SAN platform.
TwinStrata’s CloudArray supports public clouds such as Amazon AWS and Google Storage Cloud. TwinStrata was one of a handful of startups that came out a few years back with gateways that cache hot data on-site and move older data to the cloud. Microsoft acquired TwinStrata competitor StorSimple in 2012 to use as a gateway to its Windows Azure cloud.
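The caching idea these gateways share can be sketched in a few lines. This is a generic least-recently-used (LRU) illustration under assumed names, not any vendor's actual implementation: recently accessed blocks stay on the local appliance, and the coldest block is pushed to the cloud tier when the local cache fills.

```python
from collections import OrderedDict

class GatewayCache:
    """Toy model of a cloud storage gateway: hot data on-site, cold data in the cloud."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.local = OrderedDict()   # block id -> data, ordered by recency of use
        self.cloud = {}              # blocks evicted to the cloud tier

    def access(self, block, data=None):
        if block in self.local:
            self.local.move_to_end(block)        # touch: mark the block as hot
        else:
            if block in self.cloud:              # cache miss: recall from the cloud
                data = self.cloud.pop(block)
            self.local[block] = data
            if len(self.local) > self.capacity:  # evict the least recently used block
                cold, cold_data = self.local.popitem(last=False)
                self.cloud[cold] = cold_data
        return self.local[block]

cache = GatewayCache(capacity=2)
cache.access("a", "A")
cache.access("b", "B")
cache.access("c", "C")   # "a" is now the coldest block and moves to the cloud tier
```

Accessing "a" again would recall it from the cloud tier transparently, which is the behavior that made a separate gateway appliance unnecessary for users once such software was embedded in the array.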
Jeremy Burton, EMC president of products and marketing, described TwinStrata as “very focused on block storage” and said its technology “allows us to go from VMAX to any private cloud or the public cloud, and treat the public cloud as another tier of storage.”
Barry Ader, vice president of product management for VMAX, said EMC will continue to sell TwinStrata’s CloudArray but is most interested in using its software inside VMAX. CloudArray is available as a physical or virtual appliance. Ader said EMC will evaluate whether TwinStrata technology is a good fit for other EMC platforms; meanwhile, TwinStrata CEO Nicos Vekiarides and the rest of the company’s employees will become part of the VMAX team.
Other EMC product launches today include the VMAX 100K, 200K and 400K arrays with integrated storage services such as data protection; the Isilon S210 and X410 scale-out NAS arrays with an upgraded OneFS file system; and an entry-level XtremIO X-Brick, along with the addition of inline compression to the all-flash platform.
Despite the obvious evolution of the storage industry, the problems of storing and managing information are the same ones that have existed since the days of data processing in the 1960s. The problems have actually grown, with nuances that make them harder to solve.
There are solutions and technologies for dealing with the magnitude of data, the performance the business demands, and cost economies in a competitive world. The problem is keeping up with those storage developments. There are inflection points in the storage industry where a technology makes a dramatic difference in capabilities. These inflection points spawn new products, refinements of the technology, and promises of the next new thing that will become another inflection point. For those in IT organizations responsible for storing and managing information, keeping up with these developments is a continuous task.
These developments lead to new products and solutions that will improve operations. They provide greater performance and efficiency, and simplified management at a lower cost.
The new technologies are often the most effective way to address increasing capacity demands and the problems those demands create. Discussions outside the IT organization — at conferences, in vendor presentations, and in published articles — move quickly to the new products and solutions. Executives in the company will hear about a new technology and ask, “Why aren’t we doing this?” Technical professionals define their careers by applying technology to solve problems. If they don’t fully understand the latest technology, they may feel their careers are compromised and limited.
All this means that continually learning about developments in the storage industry is critical. With the pace of change over the last 20 or 30 years, it does not take long to fall behind the knowledge curve.
Continued learning is the responsibility of the individuals in IT. This is really the admission that “you need to look out for yourself.” A company with wise management will recognize the increased value that IT staff will bring by learning about the new technologies and products and how to best utilize them. It may be up to the IT director or CIO to take the initiative to facilitate the education required to remain current with the industry. Regardless, individuals must also take responsibility.
Some industries see developments at a rapid pace, while others seemingly plod along with few changes over the course of a technical career. Storage is in the former category. I was reminded of this on a long flight to a vacation with a group of engineers from other fields whom I have known since college. On the flight, I caught up on technology and product developments by reading the many documents I had brought with me. Between briefings I receive from vendors and my own research, I seem to be in constant learning mode. The others asked why I was reading so many different articles and documents, and they were surprised when I explained the amount of time required just to keep up with happenings in the storage industry. Each of them said they did not need to spend much time on new developments in their own discipline.
Not keeping up with the latest storage technology and products means IT staff fall behind quickly and fail to deliver the greatest value possible. It is both a career issue and a company competitiveness issue.
Conferences and education classes (especially ones delivered by independent organizations) are effective means for getting information quickly and in concise formats. There are education opportunities out there – take advantage of them.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Crossbar Inc. specializes in non-volatile 3D Resistive RAM (RRAM), which promises greater endurance and higher density than NAND flash. The Santa Clara, California-based startup claims it has reached an important milestone toward commercializing terabyte-scale memory arrays on a chip the size of a postage stamp.
Crossbar announced today its demonstration of pre-production 1 MB arrays that use its patented 1TnR – which stands for 1 Transistor driving n Resistive memory cells – for read/write operations. Crossbar said a single transistor was able to drive over 2,000 memory cells at low power to produce exceptionally dense solid-state storage.
Yet, although Crossbar’s RRAM carries the potential to achieve higher density and greater endurance at lower power than NAND flash, it’s an open question if or when the technology could reach mass production and turn up in shipping products.
Jim Handy, chief analyst at Objective Analysis in Los Gatos, California, said RRAM is a good candidate to replace NAND flash once NAND costs stop scaling, likely after 3D NAND runs its course. He said every NAND flash maker is looking at RRAM as the heir apparent, and Crossbar has a shot at the market.
“In NAND, it’s all about cost,” Handy noted. “Even though RRAM has significantly better specs – like endurance, over-write, speed and power – in the end, the lowest-cost solution always dominates the market.”
Handy said the Crossbar technology is interesting since it doesn’t require a separate select diode, as nearly every other technology does. “Select diodes have been pretty tricky so far. What we need to find out is whether it ramps into production smoothly, and it’s too early to know that yet,” he said.
Alan Niebel, founder and CEO of WebFeet Research in Monterey, California, said Crossbar’s emerging RRAM “could fall on its face if the technology hits a hurdle or no major manufacturer assists Crossbar in further developing and manufacturing it.”
But, Niebel gave Crossbar’s RRAM an 80/20 chance of seeing the light of day in shipping products, as long as the company partners with an integrated device manufacturer (IDM) or original equipment manufacturer (OEM). He speculated that Crossbar’s RRAM could reach enterprise products around 2018.
Crossbar said today that it is finalizing agreements with several leading global semiconductor companies and plans to announce its first licensing agreements shortly. Commercial shipments in enterprise products are expected in 2017, according to Sylvain Dubois, vice president of marketing and business development at Crossbar.
In the meantime, potential use cases for Crossbar’s RRAM are an interesting topic of discussion. Robin Harris, chief analyst at StorageMojo in Sedona, Arizona, said that if Crossbar gets a terabyte on a single die the size of a thumbnail, one possible destination could be a non-volatile RAM module for an in-memory database.
“What if you could have a server that had 64 TB in non-volatile main memory? You can put a very large in-memory database in there,” said Harris. “Then the implications are: ‘Gee whiz, all of a sudden I don’t need my EMC VMAX. There’s a whole bunch of storage that I probably wouldn’t need, and of course, eventually it would be a lot cheaper than buying a VMAX or some other high-performance array.’ ”
But, Harris said it’s difficult to pinpoint the best place to leverage the Crossbar RRAM technology until we know more about the performance characteristics and price. He said it’s quite possible the initial application will be low-power mobile devices such as tablets and smart phones.
Harris expressed optimism about Crossbar’s chances of success in creating an economically viable product. He said the company’s co-founder, Wei Lu, an associate professor at the University of Michigan, is an “extremely smart guy and cutting-edge researcher,” and Crossbar has top-tier venture capital backing from firms such as Kleiner Perkins Caufield & Byers. Harris added that Crossbar’s RRAM appears to be conducive to cheap manufacture at existing semiconductor fabs.
“The real test comes when the fab starts sampling chips,” said Harris. “That’s when you discover if you really have a market.”
Dell and Nutanix took a lot of people by surprise this week when they disclosed an OEM deal that will result in Dell selling Nutanix hyper-converged software on PowerEdge servers. If this plays out like other emerging technologies have, you can expect more established storage vendors to partner with – or acquire – hyper-converged technology that bundles storage, networking, and virtual servers.
There was an erroneous report a few weeks ago that Hewlett-Packard was set to acquire SimpliVity, a rival of Nutanix. That was likely the result of rumblings in the industry about large vendors looking to add this type of technology. It’s no secret that EMC is working on its own hyper-converged system – Project Mystic – and most of the major server hardware vendors will have pre-packaged hyper-converged bundles running VMware’s Virtual SAN (VSAN) software.
“This speaks volumes about the validation of our technology and our product,” Nutanix VP of marketing Howard Ting said of the Dell deal.
When a hot storage technology starts gaining momentum, the major vendors always look to jump on the bandwagon. They sometimes develop their own, but it is easier to buy the technology from a startup that has spent years and hundreds of millions of dollars developing it. Big vendors looking to take a shortcut into hyper-convergence have a few options, but not many. Software-only hyper-convergence startup Maxta would make a good target. SimpliVity CEO Doron Kempel says his company is not for sale, but it can be an OEM partner, even though its stack requires a PCIe card to carry out inline deduplication without impacting performance.
“We have the technological capability to do that,” he said of an OEM deal. “We can give a large server vendor functionality of our stack, which includes the software and card.”
Kempel insists that he has never discussed a sale, OEM deal or any other partnership with HP, however.
“I’ve never spoken with HP and never had any M&A discussions,” he said. “We have no commercial relationships with them. We’re not being acquired by anyone, not even for $2 billion.”
Taneja Group consulting analyst Arun Taneja said the Dell-Nutanix deal “puts hyper-convergence in hyper-drive” and predicts more deals. “Hyper-convergence is a totally tomorrow type of product,” he said. “It’s taking all these technologies that at one time were best of breed but now you can integrate them.”
Sanbolic CEO Momchil “Memo” Michailov predicted the Dell-Nutanix deal will spark a feeding frenzy of partnerships involving other types of server-attached storage software as well.
“This is classic Dell: take a high-value product, smash it into the channel and roll out a commodity offering,” Michailov wrote in an e-mail statement. “As an entry-level platform, this might be the right move for Nutanix and lead to a successful IPO down the line. From a broader industry perspective, the move is the first in what I expect to be a line of strategic plays to move billions of dollars in market value from storage back to the server layer. HP, IBM/Lenovo and others won’t be far behind, but the real enterprise market opportunity relies on software-defined storage players that can offer tier-one, scale-out capabilities.”
HP has renamed an extended version of its managed storage and backup services as Helion.
Formerly called Storage Management Services and Backup Restore Services, Helion is now positioned as a managed cloud offering that uses HP 3PAR StoreServ and HP StoreOnce Backup systems as the underlying hardware. The offering can be customized for private, virtual private and public clouds. It leverages block, file and object storage along with backup services.
“We collapsed a set of servers and provisioned gigabytes of storage into a single offering,” said Mike Meeker, HP’s global offering manager for HP enterprise services. “The focus is extending the existing offering. We have a core set of services and structures. With that structure comes the ability of multiple features.”
Meeker said the offering can be customized. For instance, multiple tiers in a SAN or block array can be configured for high-performance applications that require low latency. The service also has tiers of file services that can be used for numerous use cases based on the type of application and database across private, virtual private and public clouds.
Its customized configurations leverage block, file and object storage as well as backup services powered by HP 3PAR StoreServ and HP StoreOnce Backup systems. The different tiers and features are front-ended with a new portal.
The block-based service targets applications and databases that need high performance via Fibre Channel. The file storage service provides NAS file-level access over IP. The object storage service lets backup applications add, delete and modify files via REST or HTTP protocols. Server backup creates copies of data for services in the data center and remote offices.
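As a rough illustration of how object services of this kind expose add, modify and delete operations over HTTP, here is a minimal sketch; the endpoint, bucket layout and verb mapping are generic REST conventions assumed for illustration, not HP's actual API.

```python
# Hypothetical sketch of how file operations map onto HTTP verbs in a
# REST-style object store. The endpoint and bucket names are made up.

def object_request(op, bucket, key, endpoint="https://storage.example.com"):
    """Return the (HTTP method, URL) pair for a basic object operation."""
    url = f"{endpoint}/{bucket}/{key}"
    verbs = {
        "add": "PUT",      # create a new object
        "modify": "PUT",   # overwrite an existing object
        "read": "GET",     # retrieve an object
        "delete": "DELETE" # remove an object
    }
    return verbs[op], url

print(object_request("add", "backups", "db/monday.bak"))
# → ('PUT', 'https://storage.example.com/backups/db/monday.bak')
```

Because every operation is an HTTP request against a URL, backup software can talk to such a service from anywhere without a block or file protocol in between, which is what makes object storage a natural backup target.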
“They now have the flexibility to pick and choose different tiers, at different price points and different application needs,” Meeker said.
Hitachi Data Systems (HDS) has enhanced its cloud storage platform for people who want to use the public cloud as a tier.
The vendor made changes to three cloud products – the Hitachi Content Platform (HCP), HCP Anywhere and Hitachi Data Ingestor (HDI). HCP is object-based software that serves as HDS’s main cloud platform and can run on any of its hardware products. HCP Anywhere is the file sync and share software introduced a year ago that lets mobile devices access HCP. HDI is what HDS calls its “cloud on-ramp” that lets companies connect to the cloud from remote offices. All three can run as virtual appliances.
With HCP 7, launched this week, HDS added cloud adaptive tiering, which lets customers move data on and off the Amazon, Google and Microsoft public clouds. The policy-based tiering controls what data is kept on-premises and what goes to a public cloud. New synchronization capabilities let customers sync data across active sites. The new tiering and sync features let companies store data locally and in the cloud and access it through mobile devices.
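The policy-based tiering HDS describes can be pictured with a toy sketch. The function name, the 90-day threshold and the cloud target below are assumptions for illustration, not HDS's actual policy engine: data untouched for longer than the policy threshold is tiered out to the chosen public cloud, while recently used data stays on-premises.

```python
from datetime import datetime, timedelta

# Toy illustration of an age-based tiering policy (threshold and target
# are assumed, not HDS's real defaults): cold data goes to the cloud.

def tier_for(last_access, now, threshold=timedelta(days=90), cloud="amazon"):
    """Decide where an object should live based on how long it has been idle."""
    return cloud if now - last_access > threshold else "on-premises"

now = datetime(2014, 7, 1)
print(tier_for(datetime(2014, 6, 20), now))  # accessed 11 days ago → stays local
print(tier_for(datetime(2014, 1, 1), now))   # idle ~6 months → tiered to cloud
```

A real policy engine would weigh more than access age (file type, size, compliance rules), but the core decision, local versus cloud per object, has this shape.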
“We’ve become a cloud broker,” HDS CTO Peter Sjoberg said. “Anything brought into HCP can be sent out to any cloud. The information flows through HCP, but the metadata is retained in the data center. The information can go to your cloud of choice.”
Because HDI caches locally used data, there is less need for WAN optimization at remote sites. HDS also says the new features alleviate the need to do distributed backups across sites. “You don’t have to pop a tape in,” said Tanya Loughlin, HDS director of file, object and cloud product marketing.
Making it easier for end users to share information could solve a BYOD headache caused by people using their own devices in their own way, or as Sjoberg put it, “going rogue, and going around IT.”
A storage administrator once told me that if he moved data to the cloud, he was no longer responsible for it. He made a washing-his-hands gesture during the conversation to illustrate that it was “not my problem.” His reasoning was the guarantees offered by the cloud service provider storing the information. I did not get any details about those guarantees, but I would question how sound they were and what recourse the company would have if anything were lost.
An event this past week illustrates the problem and should set off alarm bells for storage administrators and IT management. U.K.-based code hosting company Code Spaces lost every client’s data and ceased operation. The loss was caused by a malicious hacker who deleted the data and was so proficient that Code Spaces’ protection mechanisms could neither protect nor recover it. Russ Fellows wrote a blog with more detail on the Evaluator Group web site, and more can be found at SearchAWS.com.
This was a major failure for Code Spaces and a major loss of the valuable development source code that many companies stored there. And there were guarantees about how the data was protected. A conversation with an IT guy who decided to put data on Code Spaces would be completely different now. Would a company executive accept that the IT guy was no longer responsible once the data was moved there? There is no realistic defense for the IT guy here.
This means the responsibility for protecting information and ensuring its availability remains with IT, specifically the storage team, irrespective of where the data is physically located. Protecting data from disaster, alteration or destruction is one of the earliest and most basic jobs in IT. Making data available, to the point of business continuity, is a responsibility. The notion that IT would be absolved of those responsibilities by moving to a cloud provider is wishful thinking.
Moving data to the cloud may have economic benefits, but it still requires the operational effort and expense of ensuring the data is protected and available. That protection and availability must be proven with periodic exercises of recovery and availability switchover. Without them, the liability remains, with no evidence that responsible actions have been taken.
In advising our IT clients, this is a great example to use. From this point on, when someone says they are moving data to the cloud and no longer have responsibility, I’ll just ask if they have it in writing that they will not be held liable. We certainly have an example now.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).