EMC World 2014 in May was light on product launches, as the vendor held off on the updates expected to its storage array platforms. Those launches came today when EMC refreshed three of its storage platforms during an event in London, bringing out new-generation VMAX and Isilon systems and expanding its XtremIO all-flash array family.
EMC also threw in a surprise acquisition, picking up cloud storage controller vendor TwinStrata for an undisclosed price. EMC will use the technology to embed cloud access capabilities into its VMAX high-end enterprise SAN platform.
TwinStrata’s CloudArray supports public clouds such as Amazon AWS and Google Cloud Storage. A few years back, it was one of a handful of startups that came out with gateways that cache hot data on-site and move older data to the cloud. Microsoft acquired TwinStrata competitor StorSimple in 2012 to use as a gateway to its Windows Azure cloud.
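For readers unfamiliar with the gateway model, the sketch below shows the basic idea in simplified form: hot blocks stay in a local cache, cold blocks are evicted once capacity runs out, and every write lands in the cloud copy for durability. It is a generic illustration, not TwinStrata’s implementation; the dict-backed “cloud” stands in for a real object store.

```python
import time

class CloudGatewayCache:
    """Toy cloud gateway: hot blocks stay in a local cache, cold blocks
    live only in the cloud object store. Illustrative sketch only."""

    def __init__(self, cloud_store, capacity_blocks):
        self.cloud = cloud_store        # any dict-like object store
        self.capacity = capacity_blocks
        self.local = {}                 # block_id -> (data, last_access_time)

    def write(self, block_id, data):
        self.cloud[block_id] = data     # write-through: cloud copy is durable
        self.local[block_id] = (data, time.time())
        self._evict()

    def read(self, block_id):
        if block_id not in self.local:                  # cache miss
            self.local[block_id] = (self.cloud[block_id], time.time())
        data, _ = self.local[block_id]
        self.local[block_id] = (data, time.time())      # refresh recency
        self._evict()
        return data

    def _evict(self):
        # Over capacity: drop the least-recently-used blocks from the
        # local cache; the cloud copy remains, so nothing is lost.
        while len(self.local) > self.capacity:
            lru = min(self.local, key=lambda k: self.local[k][1])
            del self.local[lru]

gw = CloudGatewayCache(cloud_store={}, capacity_blocks=2)
gw.write("blk-1", b"hot")
gw.write("blk-2", b"warm")
gw.write("blk-3", b"new")            # evicts blk-1 locally; still in the cloud
assert gw.read("blk-1") == b"hot"    # miss is served from the cloud copy
```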
Jeremy Burton, EMC president of products and marketing, described TwinStrata as “very focused on block storage” and said its technology “allows us to go from VMAX to any private cloud or the public cloud, and treat the public cloud as another tier of storage.”
Barry Ader, vice president of product management for VMAX, said EMC will continue to sell TwinStrata’s CloudArray but is most interested in using its software inside VMAX. CloudArray is available as a physical or virtual appliance. Ader said EMC will evaluate whether TwinStrata technology is a good fit for other EMC platforms, but TwinStrata CEO Nicos Vekiarides and the rest of the company’s employees will become part of the VMAX team.
Other EMC product launches today include the VMAX 100K, 200K and 400K arrays with integrated storage services such as data protection; the Isilon S210 and X410 scale-out NAS arrays with an upgraded OneFS file system; and an entry-level XtremIO X-Brick, along with the addition of inline compression to the all-flash platform.
Despite the obvious evolution of the storage industry, the problems of storing and managing information are the same ones that have existed since the days of data processing in the 1960s. The problems have actually grown, with nuances that make them harder to solve.
There are solutions and technologies that provide ways to deal with the magnitude of data, the performance demanded by business, and cost economies in a competitive world. The problem is keeping up with those storage developments. There are inflection points in the storage industry where technology makes a dramatic difference in capabilities. These inflection points spawn new products, refinements of the technology, and promises of the next new thing that will become another inflection point. For those in IT organizations responsible for storing and managing information, keeping up with these developments is a continuous task.
These developments lead to new products and solutions that will improve operations. They provide greater performance and efficiency, and simplified management at a lower cost.
The new technologies are often the most effective way to address increasing capacity demands and the problems those demands create. Discussions from outside the IT organization – at conferences, in vendor presentations, and in published articles – move quickly to the new products and solutions. Executives in the company will hear about a new technology and ask, “Why aren’t we doing this?” Technical professionals define their careers by applying technology to solve problems. If they don’t fully understand the latest technology, they may feel that their careers are compromised and limited.
All this means that continually learning about developments in the storage industry is critical. With the pace of change experienced over the last 20 or 30 years, it does not take long to fall behind the knowledge curve.
Continued learning is the responsibility of the individuals in IT. This is really the admission that “you need to look out for yourself.” A company with wise management will recognize the increased value that IT staff will bring by learning about the new technologies and products and how to best utilize them. It may be up to the IT director or CIO to take the initiative to facilitate the education required to remain current with the industry. Regardless, individuals must also take responsibility.
Some industries have developments that occur at a rapid pace, while others seemingly plod along with few changes during the career of technical people. Storage is in the former category. I was reminded of this on a long vacation flight with a group of engineers in other fields whom I had known since college. On the flight, I was catching up on technology and product developments by reading many documents I had brought with me. Between briefings I receive from vendors and what I research on my own, I seem to be in constant learning mode. The others asked why I was reading so many different articles and documents. They were surprised when I explained the amount of time required just to keep up with happenings in the storage industry. Each of them said they did not need to spend much time focused on new developments in their discipline.
Not keeping up with the latest storage technology and products means IT staff fall behind quickly and fail to deliver the greatest value possible. It is a career issue and a company competitiveness issue.
Conferences and education classes (especially ones delivered by independent organizations) are effective means for getting information quickly and in concise formats. There are education opportunities out there – take advantage of them.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Crossbar Inc. specializes in non-volatile 3D Resistive RAM (RRAM), which promises greater endurance and higher density than NAND flash. The Santa Clara, California-based startup claims it has reached an important milestone toward commercializing terabyte-scale memory arrays on a chip the size of a postage stamp.
Crossbar announced today its demonstration of pre-production 1 MB arrays that use its patented 1TnR – which stands for 1 Transistor driving n Resistive memory cells – for read/write operations. Crossbar said a single transistor was able to drive over 2,000 memory cells at low power to produce exceptionally dense solid-state storage.
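As a rough back-of-the-envelope illustration (assuming one bit per cell, which the announcement does not specify), the 1TnR ratio suggests why the select circuitry stays small relative to a terabyte-scale array:

```latex
\underbrace{1\,\text{TB} \times 8\ \tfrac{\text{bits}}{\text{byte}}}_{8\times10^{12}\ \text{cells}}
\;\div\; 2{,}000\ \tfrac{\text{cells}}{\text{transistor}}
\;\approx\; 4\times10^{9}\ \text{select transistors}
```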
Yet, although Crossbar’s RRAM carries the potential to achieve higher density and greater endurance at lower power than NAND flash, it’s an open question if or when the technology could reach mass production and turn up in shipping products.
Jim Handy, chief analyst at Objective Analysis in Los Gatos, California, said RRAM is a good candidate to replace NAND flash once NAND costs stop scaling, likely after 3D NAND runs its course. He said every NAND flash maker is looking at RRAM as the heir apparent, and Crossbar has a shot at the market.
“In NAND, it’s all about cost,” Handy noted. “Even though RRAM has significantly better specs – like endurance, over-write, speed and power – in the end, the lowest-cost solution always dominates the market.”
Handy said the Crossbar technology is interesting since it doesn’t require a separate select diode, as nearly every other technology does. “Select diodes have been pretty tricky so far. What we need to find out is whether it ramps into production smoothly, and it’s too early to know that yet,” he said.
Alan Niebel, founder and CEO of WebFeet Research in Monterey, California, said Crossbar’s emerging RRAM “could fall on its face if the technology hits a hurdle or no major manufacturer assists Crossbar in further developing and manufacturing it.”
But, Niebel gave Crossbar’s RRAM an 80/20 chance of seeing the light of day in shipping products, as long as the company partners with an integrated device manufacturer (IDM) or original equipment manufacturer (OEM). He speculated that Crossbar’s RRAM could reach enterprise products around 2018.
Crossbar said today that it is finalizing agreements with several leading global semiconductor companies and plans to announce its first licensing agreements shortly. Commercial shipments in enterprise products are expected in 2017, according to Sylvain Dubois, vice president of marketing and business development at Crossbar.
In the meantime, potential use cases for Crossbar’s RRAM are an interesting topic of discussion. Robin Harris, chief analyst at StorageMojo in Sedona, Arizona, said that if Crossbar gets a terabyte on a single die the size of a thumbnail, one possible destination could be a non-volatile RAM module for an in-memory database.
“What if you could have a server that had 64 TB in non-volatile main memory? You can put a very large in-memory database in there,” said Harris. “Then the implications are: ‘Gee whiz, all of a sudden I don’t need my EMC VMAX. There’s a whole bunch of storage that I probably wouldn’t need, and of course, eventually it would be a lot cheaper than buying a VMAX or some other high-performance array.’ ”
But, Harris said it’s difficult to pinpoint the best place to leverage the Crossbar RRAM technology until we know more about the performance characteristics and price. He said it’s quite possible the initial application will be low-power mobile devices such as tablets and smart phones.
Harris expressed optimism about Crossbar’s chances of success in creating an economically viable product. He said the company’s co-founder, Wei Lu, an associate professor at the University of Michigan, is an “extremely smart guy and cutting-edge researcher,” and Crossbar has top-tier venture capital backing from firms such as Kleiner Perkins Caufield & Byers. Harris added that Crossbar’s RRAM appears to be conducive to cheap manufacture at existing semiconductor fabs.
“The real test comes when the fab starts sampling chips,” said Harris. “That’s when you discover if you really have a market.”
Dell and Nutanix took a lot of people by surprise this week when they disclosed an OEM deal that will result in Dell selling Nutanix hyper-converged software on PowerEdge servers. If this plays out like other emerging technologies have, you can expect more established storage vendors to partner with – or acquire – hyper-converged technology that bundles storage, networking, and virtual servers.
There was an erroneous report a few weeks ago that Hewlett-Packard was set to acquire SimpliVity, a rival of Nutanix. That was likely the result of rumblings in the industry about large vendors looking to add this type of technology. It’s no secret that EMC is working on its own hyper-converged system – Project Mystic – and most of the major server hardware vendors will have pre-packaged hyper-converged bundles running VMware’s Virtual SAN (VSAN) software.
“This speaks volumes about the validation of our technology and our product,” Nutanix VP of marketing Howard Ting said of the Dell deal.
When a hot storage technology starts gaining momentum, the major vendors always look to jump on the bandwagon. They sometimes develop their own, but it is easier to buy it from a startup that has spent years and hundreds of millions of dollars developing it. Big vendors looking to take a shortcut into hyper-convergence have a few options, but not many. Software-only hyper-convergence startup Maxta would make a good target. SimpliVity CEO Doron Kempel says his company is not for sale but can be an OEM partner, even though its technology requires a PCIe card to carry out inline deduplication without impacting performance.
“We have the technological capability to do that,” he said of an OEM deal. “We can give a large server vendor functionality of our stack, which includes the software and card.”
Kempel insists that he has never discussed a sale, OEM deal or any other partnership with HP, however.
“I’ve never spoken with HP and never had any M&A discussions,” he said. “We have no commercial relationships with them. We’re not being acquired by anyone, not even for $2 billion.”
Taneja Group consulting analyst Arun Taneja said the Dell-Nutanix deal “puts hyper-convergence in hyper-drive” and predicts more deals. “Hyper-convergence is a totally tomorrow type of product,” he said. “It’s taking all these technologies that at one time were best of breed but now you can integrate them.”
Sanbolic CEO Momchil “Memo” Michailov predicted the Dell-Nutanix deal will spark a feeding frenzy of partnerships involving other types of server-attached storage software as well.
“This is classic Dell: take a high-value product, smash it into the channel and roll out a commodity offering,” Michailov wrote in an e-mail statement. “As an entry-level platform, this might be the right move for Nutanix and lead to a successful IPO down the line. From a broader industry perspective, the move is the first in what I expect to be a line of strategic plays to move billions of dollars in market value from storage back to the server layer. HP, IBM/Lenovo and others won’t be far behind, but the real enterprise market opportunity relies on software-defined storage players that can offer tier-one, scale-out capabilities.”
HP has renamed an extended version of its managed storage and backup services as Helion.
Formerly called Storage Management Services and Backup Restore services, Helion now is positioned as a managed cloud offering that uses HP 3PAR StoreServ and HP StoreOnce Backup systems as the underlying hardware. The offering can be customized for private, virtual private and public cloud. It leverages block, file and object storage along with backup services.
“We collapsed a set of servers and provisioned gigabytes of storage into a single offering,” said Mike Meeker, HP’s global offering manager for HP enterprise services. “The focus is extending the existing offering. We have a core set of services and structures. With that structure comes the ability of multiple features.”
Meeker said the offering can be customized. For instance, multiple tiers in a SAN or block array can be configured for high-performance applications that require low latency. The service also has tiers of file services that can be used for numerous use cases based on the type of application and database, across private, virtual private and public clouds.
Its customized configurations leverage block, file and object storage, as well as backup services powered by HP 3PAR StoreServ and HP StoreOnce Backup systems. The different tiers and features are front-ended with a new portal.
The block-based service targets applications and databases that need high performance via Fibre Channel. The file storage provides NAS, with file-level access over IP. The object storage gives backup applications the ability to add, delete and modify objects via REST/HTTP protocols. Server backup creates copies of data for services in the data center and remote offices.
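For context, object stores of this kind are typically driven over plain HTTP verbs. The snippet below is a generic illustration of add/fetch/delete operations; the endpoint and auth header are hypothetical placeholders, not HP’s published API.

```python
import requests

BASE = "https://objectstore.example.com/v1/backups"   # hypothetical endpoint
AUTH = {"X-Auth-Token": "REPLACE_WITH_TOKEN"}         # hypothetical auth header

# Add (or overwrite) an object via HTTP PUT
with open("db-dump.tar.gz", "rb") as f:
    requests.put(f"{BASE}/db-dump.tar.gz", data=f, headers=AUTH).raise_for_status()

# Fetch it back via HTTP GET
resp = requests.get(f"{BASE}/db-dump.tar.gz", headers=AUTH)
resp.raise_for_status()

# Delete it via HTTP DELETE
requests.delete(f"{BASE}/db-dump.tar.gz", headers=AUTH).raise_for_status()
```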
“They now have the flexibility to pick and choose different tiers, at different price points and different application needs,” Meeker said.
Hitachi Data Systems (HDS) has enhanced its cloud storage platform for people who want to use the public cloud as a tier.
The vendor made changes to three cloud products – the Hitachi Content Platform (HCP), HCP Anywhere and Hitachi Data Ingestor (HDI). HCP is object-based software that serves as HDS’s main cloud platform and can run on any of its hardware products. HCP Anywhere is the file sync-and-share software introduced a year ago that lets mobile devices access HCP. HDI is what HDS calls its “cloud on-ramp,” which lets companies connect to the cloud from remote offices. All three can run as virtual appliances.
With HCP 7, launched this week, HDS added cloud adaptive tiering, which lets customers move data on and off the Amazon, Google and Microsoft public clouds. The policy-based tiering controls what data is kept on-premises and what data goes to a public cloud. New synchronization capabilities let customers sync data across active sites. The new tiering and sync features let companies store data locally and in the cloud and access it through mobile devices.
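Policy-based tiering of this sort generally comes down to rules evaluated against object metadata. The sketch below illustrates the concept in miniature; it is not HDS’s policy engine, and the 90-day threshold and “sensitive” flag are invented for the example.

```python
from datetime import datetime, timedelta
from enum import Enum

class Tier(Enum):
    LOCAL = "on-premises"
    CLOUD = "public cloud"

def place(obj_metadata, now=None, cold_after=timedelta(days=90)):
    """Decide placement from object metadata: keep anything flagged
    sensitive or recently accessed on-premises; tier the rest out."""
    now = now or datetime.utcnow()
    if obj_metadata.get("sensitive"):
        return Tier.LOCAL
    if now - obj_metadata["last_access"] < cold_after:
        return Tier.LOCAL
    return Tier.CLOUD

# Example: an object untouched for six months, not flagged sensitive
meta = {"last_access": datetime.utcnow() - timedelta(days=180)}
print(place(meta))   # Tier.CLOUD
```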
“We’ve become a cloud broker,” HDS CTO Peter Sjoberg said. “Anything brought into HCP can be sent out to any cloud. The information flows through HCP, but the metadata is retained in the data center. The information can go to your cloud of choice.”
Because HDI caches locally used data, there is less need for WAN optimization at remote sites. HDS also says the new features alleviate the need to do distributed backups across sites. “You don’t have to pop a tape in,” said Tanya Loughlin, director of file, object and cloud product marketing at HDS.
Making it easier for end users to share information could solve a BYOD headache caused by people using their own devices in their own way, or as Sjoberg put it, “going rogue, and going around IT.”
I have been told by a storage administrator that if he moved data to the cloud, he was no longer responsible for it. He made a washing-my-hands gesture during that conversation to illustrate that it was “not my problem.” That, he said, was because of guarantees offered by the cloud service provider storing the information. I did not get any details about those guarantees, but I would question how sound they were and what recourse the company would have if anything did get lost.
An event this past week illustrates the problem and should set off alarm bells for storage administrators and IT management. U.K.-based code hosting company Code Spaces lost every client’s data and ceased operation. The loss was caused by a malicious hacker who deleted the data and was so proficient that Code Spaces’ protection mechanisms could neither protect nor recover it. Russ Fellows wrote a blog with more detail on the Evaluator Group website. More can be found at SearchAWS.com.
This was a major failure for Code Spaces and a major loss of valuable development source code stored there by many companies. And there were guarantees about how the data was protected. A conversation with an IT guy who decided to put data on Code Spaces would be completely different now. Would a company executive believe that the IT guy was no longer responsible once the data was moved there? There is no realistic defense for the IT guy here.
This means the responsibility for protecting information and ensuring its availability for use remains with IT, specifically the storage team, irrespective of where it is physically located. Protecting data from disaster or some type of alteration or destruction is one of the earliest and most basic jobs in IT. Making the data available to the point of business continuity is a responsibility. The consideration that IT would be absolved of those responsibilities by moving to a cloud provider is wishful thinking.
Moving data to the cloud may have economic benefits, but it still requires the operational effort and expense of ensuring the data is protected and available. The protection and availability must be proven with periodic exercising of recovery and availability switchover. Without that, the liability remains, with no evidence that responsible actions have been taken.
In advising our IT clients, this is a great example to use. From this point on, when someone says they are moving data to the cloud and no longer have responsibility, I’ll just ask if they have it in writing that they will not be held liable. We certainly have an example now.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
More than three years after coming out with its copy data storage software, Actifio is expanding the use cases for its data protection technology. Actifio launched its Actifio Sky platform in May as a remote-office companion to its Copy Data Storage (CDS) core data center product, and this week built on those platforms with a Resiliency Director service for disaster recovery.
The goal for Resiliency Director is to provide one-click failover of heavily virtualized environments, using the cloud or a colocation facility as a DR site.
Resiliency Director consists of virtual appliance data collectors at the primary site that discover VMware virtual machines and vApps, plus CDS systems at the primary and DR sites. The CDS software performs deduplication and asynchronous replication of VMs, keeping them in a ready state for DR. When there is a failure (or a DR test), Resiliency Director software at the DR site orchestrates storage, compute, network and data, and enables full application recovery.
Actifio claims it can enable application recovery faster than anybody else because its granular understanding of VMs and vApps reduces the size of the data sets, and it only has to rehydrate changed deduplicated blocks when making restores. Also, a read cache in CDS reduces latency and improves IOPS in the recovery stack.
Customers can organize VMs and vApps in application groups and prioritize the order in which they are recovered.
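Conceptually, that ordered recovery is a small orchestration loop. The sketch below illustrates priority-grouped failover in simplified form; it is not Actifio’s implementation, and recover_vm is a hypothetical stand-in for whatever brings a replicated VM online at the DR site.

```python
from collections import defaultdict

def recover_vm(vm):
    # Hypothetical stand-in: power on the replica, reattach storage/network.
    print(f"recovering {vm}")

def run_failover(app_groups):
    """app_groups: list of (priority, group_name, [vm_names]).
    Lower priority number recovers first; all VMs in a group
    recover before the next group starts."""
    by_priority = defaultdict(list)
    for priority, name, vms in app_groups:
        by_priority[priority].append((name, vms))
    for priority in sorted(by_priority):
        for name, vms in by_priority[priority]:
            print(f"-- group '{name}' (priority {priority})")
            for vm in vms:
                recover_vm(vm)

run_failover([
    (2, "web tier", ["web-01", "web-02"]),
    (1, "database", ["db-01"]),          # recovered first
    (3, "reporting", ["bi-01"]),
])
```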
“Our customers said they wanted DR to operate at a much more granular VM level,” Actifio VP of product management Brian Reagan said. “And they want to orchestrate recoveries without relying on array-based replication that adds cost.”
Reagan said Actifio plans to add support for more hypervisors – Microsoft Hyper-V is likely next – and for physical machines to Resiliency Director.
Sungard Availability Services is already using Actifio Resiliency Director for its Recover2Cloud service. This expands the vendors’ partnership, which already includes Sungard’s Managed Vaulting Backup for Actifio service. Sungard Availability VP of product management Souvik Choudhury said Recover2Cloud will be available as a standalone service and integrated with other Sungard services.
Veeam Software is offering its service provider partners a way to give more customers an integrated and secure way to move backups to an offsite repository.
The company recently introduced Veeam Cloud Connect as part of its Veeam Availability Suite v8. The service requires a single server and takes less than 10 minutes to implement, while providing all the infrastructure management needed for an offsite repository service.
Doug Hazelman, Veeam’s vice president of product strategy, said Veeam Cloud Connect gives its partners a Secure Sockets Layer (SSL) connection for secure data transfers over the Internet.
“Normally the service provider has to set up a VPN and a separate repository for customers,” Hazelman said. “(With this) they can make one repository available and create multiple tenants in that repository, which is at the cloud provider’s location.”
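The shared-repository model Hazelman describes can be pictured as a single storage pool carved into per-tenant namespaces with quotas. The sketch below is an invented illustration of that isolation pattern, not Veeam’s code.

```python
class SharedRepository:
    """One repository, many tenants: each tenant gets an isolated
    namespace and a capacity quota. Illustrative sketch only."""

    def __init__(self):
        self.tenants = {}   # tenant_id -> {"quota", "used", "objects"}

    def add_tenant(self, tenant_id, quota_bytes):
        self.tenants[tenant_id] = {"quota": quota_bytes, "used": 0, "objects": {}}

    def store(self, tenant_id, name, data):
        t = self.tenants[tenant_id]          # KeyError = unknown tenant
        if t["used"] + len(data) > t["quota"]:
            raise IOError(f"tenant {tenant_id} over quota")
        t["objects"][name] = data            # namespaced per tenant: no
        t["used"] += len(data)               # tenant sees another's objects

repo = SharedRepository()
repo.add_tenant("customer-a", quota_bytes=10 * 2**30)   # 10 GiB
repo.store("customer-a", "backup-2014-07-01.vbk", b"\x00" * 1024)
```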
Hazelman said Veeam is not disclosing pricing, but the company’s marketing material says pricing will be tailored to service providers’ specific needs: instead of an up-front perpetual license, Veeam will offer service providers monthly per-virtual-machine licensing.
The service providers will most likely charge monthly based on capacity rather than the number of virtual machines. Veeam customers can search for the nearest service providers through an integrated Web portal in Veeam Availability Suite v8.
Carbonite, a cloud backup pioneer going back to the days when cloud backup was something done only by consumers, is moving deeper into the SMB market. Carbonite is also changing its delivery method from software that customers download from its website to appliances sold by channel partners.
Carbonite launched its first appliance this week. The Carbonite Appliance HT10 includes Carbonite software and 1 TB of local storage with the ability to move 500 GB into the Amazon cloud.
Dave Maffei, Carbonite’s VP of Global Channels, said the appliance is designed for SMBs with up to 500 employees. Customers will pay a monthly fee for the appliance and Amazon capacity. He said channel partners will set the pricing, but Carbonite CEO David Friend said the appliance would cost $99 per month during the vendor’s latest earnings call in April.
Maffei said the software on the appliance is different from what Carbonite sells to consumers. The appliance software likely includes technology that Carbonite acquired when it bought SMB cloud vendor Zmanda in 2012. The HT10 connects to servers via Ethernet, takes local bare-metal images of the servers and replicates them to the cloud.
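That local-first, replicate-to-cloud workflow is the core pattern of most hybrid backup appliances. The sketch below illustrates it using the HT10’s published capacity figures; the code itself is invented, not Carbonite’s.

```python
LOCAL_CAPACITY = 1 * 2**40     # 1 TB local store (per the HT10 spec)
CLOUD_CAPACITY = 500 * 2**30   # 500 GB cloud allotment

local_store, cloud_store = {}, {}

def used(store):
    return sum(len(v) for v in store.values())

def backup_server(name, image_bytes):
    """Keep a local bare-metal image for fast restores, then
    replicate it to the cloud for disaster protection."""
    while local_store and used(local_store) + len(image_bytes) > LOCAL_CAPACITY:
        local_store.pop(next(iter(local_store)))     # expire oldest image
    local_store[name] = image_bytes
    if used(cloud_store) + len(image_bytes) <= CLOUD_CAPACITY:
        cloud_store[name] = image_bytes              # async in practice

backup_server("fileserver-01", b"\x00" * 4096)   # stand-in for a real image
```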
The software for businesses must be more reliable than that for consumers. “A couple of days without data is potentially the end of the road for a business,” Maffei said. “It’s different than losing pictures of the kids.”
Carbonite sells its Zmanda software as Carbonite Server for SMBs, but the majority of its revenue still comes from consumers. In the first quarter of this year, consumers accounted for $23.3 million in revenue, compared to $9.2 million from SMBs. The appliance should close that gap.
Carbonite’s appliance and new SMB focus also shows how cloud backup is gaining acceptance. Carbonite, Mozy and the other early cloud backup vendors focused on consumers at the start because businesses wouldn’t think of trusting data protection to the cloud. Now Mozy is part of EMC, and even the largest corporations are backing up to the cloud. That means you can expect larger appliances from Carbonite in the not-so-distant future.