EMC Data Protection and Availability Division executives dropped hints about upcoming snapshot and high availability features and showed off a proof of concept of a new management user interface yesterday during their super session at EMC World.
Guy Churchward, the division’s president, told attendees to expect a big announcement “shortly” for a new capability called Centaur. A slide referred to it as “snap shot primary to protection storage.” An EMC representative later confirmed Centaur is a “future deliverable.”
“Wouldn’t it be nice if we could actually spit directly from something like a VMAX into a Data Domain? And therefore you actually end run the software backup stack,” Churchward said. “Performance-wise, it’s great. Optimization-wise, it’s great. We’re always looking at disrupting this industry and actually driving a level of innovation.”
Churchward also disclosed plans for protection storage high availability (HA). He said that would take EMC Data Domain’s Data Invulnerability Architecture “just a tiny bit further.” Churchward didn’t supply a date. As with Centaur, the EMC representative would say only that HA is a “future deliverable.”
After displaying a slide illustrating some of the management user interface (UI) improvements for backup and recovery, Churchward issued the following caveat to attendees: “This is a concept of what you will be seeing in the next 18, 24 months of a UI of the future.”
The UI’s initial screen was divided into three segments: system optimization, system health and a data chat box for seeking outside help.
The health portion of the screen listed the total number of systems under management and information such as the number under self-care or EMC-assisted care and the number for which operational fixes were available.
Under system optimization, the UI displayed the number of systems optimized and unoptimized in categories such as capacity forecast, garbage collection and replication lag. The dashboard could indicate the number of systems running out of capacity within 90 days and let the user drill down for more detailed, actionable information, according to Stephen Manley, CTO of EMC’s Data Protection and Availability Division.
Manley outlined an example of a system that wasn’t seeing a good deduplication ratio because its workload included videos and encrypted data that doesn’t deduplicate well. The UI supplied information on options to resolve the issue, such as moving the videos from EMC’s Data Domain to VMware’s vCloud Hybrid Service (vCHS) and the encrypted data to an encrypted container.
“Now the cool thing with this move is it’s going to wait until a good time to move, when the performance and the network bandwidth are available,” Manley said.
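The dedupe problem Manley described has a simple mechanical cause: deduplication works by fingerprinting chunks of data and storing repeats only once, and already-compressed media or encrypted data looks statistically random, so fingerprints almost never repeat. The sketch below is a minimal illustration using naive fixed-size chunking with SHA-256 fingerprints (real systems such as Data Domain use more sophisticated variable-size chunking), not a description of any EMC implementation:

```python
import hashlib
import os

def dedupe_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Logical size divided by the size of the unique chunks,
    under naive fixed-size chunking with SHA-256 fingerprints."""
    unique = {hashlib.sha256(data[i:i + chunk_size]).digest()
              for i in range(0, len(data), chunk_size)}
    return len(data) / (len(unique) * chunk_size)

repetitive = b"backup-block" * 100_000   # highly redundant, dedupes well
encrypted_like = os.urandom(1_200_000)   # random-looking, barely dedupes

print(f"repetitive data:     {dedupe_ratio(repetitive):.1f}:1")
print(f"encrypted-like data: {dedupe_ratio(encrypted_like):.2f}:1")
```

Video files behave like the random case because they are already compressed, which is why the UI concept suggested moving them off deduplicating storage.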
In addition to explaining the new UI concept, Manley laid out the company’s vision for providing data protection that can span on-premises, virtualized and newer hybrid and “born in the cloud” consumption models.
“The future of data protection is in metadata,” Manley asserted. “It’s that information about your infrastructure, about your applications, the information about your information, who owns it, the tags, the keywords that are associated with it. That’s what’s going to move us forward.”
Manley broke down the discussion into three areas: hybrid cloud mobility (“the right data in the right place”), analytics-driven automation and cloud-centric data management.
On hybrid cloud mobility: Manley said a company might want to shift data for disaster recovery or analytics, but it needs to understand where it can move the data and what tools will facilitate the migration. “If I move it, is my protection still going to be there? That’s that infrastructure analytics I need and the metadata that goes with it,” he said.
He said application metadata can provide information to ensure the systems function well after the move. “Data mobility is really the lifeblood of the hybrid cloud, and metadata is how you’re going to make it work,” Manley said.
On analytics-driven automation: Manley said he has spoken with customers who have “gathered all the metadata into this lake” and ask him, “Now what?” Those doing analytics are often buried in reports and dashboards.
He said he often fields questions such as: “Am I getting the most I can out of my Data Domain? Am I getting the right dedupe rate? Am I getting the right performance? Should I be upgrading? Should I add more storage? Should I look at a different type of protection storage?”
“Right now, the answer to that is based on some experience and a lot of black magic,” he said. “But, we can do better.”
EMC already captures information about usage on installed systems to help with customer support. Manley said EMC could feed the telemetry into a Greenplum database, analyze it and apply algorithms to make sure the data is in the real-world sweet spot, “not just the white paper sweet spot.”
“What we really need is a social network of machines that connect to each other so we see the interrelationships and then connect to humans so we can take action on it,” Manley said. The answer lies in metadata, he said.
On cloud-centric data management: Manley discussed the need for metadata about a cloud provider’s infrastructure, such as logs to verify service delivery. He said customers may want to move data within the provider’s infrastructure or to another cloud provider, or they may need to retrieve it. Whether searching on- or off-premises, they need a single source of metadata to locate what they need, he said.
“That means you need to do the metadata across the products,” said Churchward. “We’re going to play with things like RecoverPoint and Vplex and whatever, mush it up and it’s all going to be magic and it’ll happen.”
After Manley said “yes” with great enthusiasm, Churchward said, “Yeah, no wonder you’re a CTO.”
Las Vegas — Like many online file sharing companies, EMC Syncplicity is trying to balance the user experience with security functions that IT departments demand.
At EMC World this week, Syncplicity added two enhancements to its online file sharing product — a Storage Vault Authentication function to set security controls on storage repositories and a Syncplicity Connector for SharePoint so data from that application can be accessed and edited via mobile devices.
EMC first integrated a policy-based Storage Vault capability last year that let IT administrators select where data is stored according to user, group, folder, file or content type. The storage can be on-premises or in a private or public cloud. The new Storage Vault Authentication capability gives the IT department the ability to set a second layer of security controls for sync and share data.
“Security has been at odds with usability,” said Jeetu Patel, vice president and general manager of EMC’s Syncplicity business unit. “Your design points should not be at odds. It’s the way you implement that capability. When you add security, there is a way to enhance productivity. That may sound counter-intuitive.”
Patel said the second layer authentication function allows IT to set policy-based controls on the Storage Vault repositories holding certain sensitive data. Previously, user authorization controls to access sync and share data were on the Syncplicity application only.
“This was driven by enterprise customers,” Patel said. “It’s for companies that say, ‘I’m still nervous about the cloud.’ We give them a second layer of authentication. So not only does Syncplicity do authorization but the repository has to allow authorization. You might not need this for all content.”
Patel said the Syncplicity Connector for SharePoint treats SharePoint as a content repository and helps bridge the gap between the SharePoint application and EMC’s sync and share application. Online file sharing applications often are used to replace SharePoint as a collaboration tool, but companies may not move all files out of SharePoint.
“A lot of repositories don’t die,” Patel said. “You may have found a more progressive way to do things but you still have to access data from those repositories. You shouldn’t have to take on a massive migration problem.”
The Syncplicity sync and share application is expected to play a role in EMC’s federation business model, in which product development, marketing and sales are balanced among the Pivotal, VMware and EMC Information Infrastructure businesses. EMC has identified mobile devices, social platforms, big data, cloud and security as the main growth areas.
“We will work with these assets when it makes sense,” said Patel. “For instance, you can expect to see integration with (VMware’s) AirWatch mobile device manager. There is a lot of collaboration we are doing with different units.”
LAS VEGAS — EMC World 2014 opened this morning with the launch of a cloud storage appliance and the next version of ViPR. The products represent two areas that EMC will focus on during the four-day conference – the cloud and software-defined storage.
EMC first unveiled the concept of ViPR at last year’s EMC World, and began shipping the software last fall with support for object storage and EMC VNX, VMAX, and Isilon arrays as well as NetApp arrays. The 1.1 upgrade in January added support for Hadoop and EMC Storage Resource Management suite but no additional arrays.
ViPR 2.0 supports HDS and commodity storage natively. Support for Dell, Hewlett-Packard and IBM arrays requires an OpenStack Cinder block storage plug-in. The new version of ViPR also includes geo-distribution and multi-tenancy support for clouds.
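For the third-party arrays reached through Cinder, the plug-in is configured the way any OpenStack Cinder backend is: a named backend stanza pointing at a vendor driver. The stanza below is purely illustrative — the backend name, credentials and exact driver module path are assumptions (driver paths vary by OpenStack release), not details from EMC’s documentation:

```ini
# cinder.conf -- illustrative backend stanza (names/paths are assumptions)
[DEFAULT]
enabled_backends = hp3par_backend

[hp3par_backend]
volume_backend_name = hp3par_backend
# Driver module path varies by OpenStack release; check your distribution.
volume_driver = cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
san_ip = 10.0.0.10
san_login = cinder_user
san_password = example-password
```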
The EMC Elastic Cloud Storage (ECS) Appliance – known as Project Nile during its beta — is designed for public and hybrid clouds, and will scale to 2.9 PB in one rack. ECS is built on the ViPR platform. EMC did not offer many specifics in its initial press release, but more information will be available during the conference.
In a blog posted this morning, Manuvir Das, EMC’s VP of engineering for its advanced software division, listed features of the ECS appliance:
- Universal protocol support in a single platform with support for block, object, and HDFS [Hadoop distributed file system]
- Single management view across multiple types of infrastructures
- Multi-site, active-active architecture with a single global namespace enabling the management of a geographically distributed environment as a single logical resource using metadata-driven policies to distribute and protect content, and
- Multi-tenancy support, detailed metering, and an intuitive self-service portal, as well as billing integration.
Virtual desktop infrastructure (VDI) is a big use case for storage that includes flash, as well as storage built for virtualization. Nutanix fits both categories with its hyper-converged Virtual Computing Platform that includes flash and hard disk drives, so it’s no surprise that VDI is a key market for the startup.
This week Nutanix unveiled a per-desktop program to make it easier for customers to size their systems for VDI. Customers tell Nutanix how many virtual desktops they want to deploy, and Nutanix recommends the right system and guarantees performance based on user profiles. Nutanix has four VDI customer profiles – kiosk, task, knowledge user and power user.
Greg Smith, Nutanix senior director of product marketing, said pricing for the VDI-specific systems starts at $200 per desktop for storage and computing. The systems start at 150 virtual desktops.
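Taking the two quoted figures at face value, the 150-desktop minimum puts the entry price at roughly $30,000. This is a hypothetical back-of-the-envelope sketch based only on the numbers in this article, not a Nutanix price list, and it ignores the per-profile tiers:

```python
def vdi_entry_cost(desktops: int, per_desktop: int = 200,
                   minimum: int = 150) -> int:
    """Rough cost using the quoted $200/desktop price and the
    150-desktop minimum; ignores user-profile pricing tiers."""
    return max(desktops, minimum) * per_desktop

print(vdi_entry_cost(100))  # below the minimum, so the 150-desktop floor applies
print(vdi_entry_cost(400))
```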
If performance is unsatisfactory, Nutanix will provide the right hardware and software to make it work, Smith said.
“Normally the onus is on the end user to build it themselves,” he said. “We’re making things simpler. Customers tell us how many VDI users they have, what types of users they are, and we provide the infrastructure.”
Smith said Nutanix customers often start with VDI and then add other applications to their appliances. Per desktop customers can go back and add servers and clusters for other apps if they want to expand.
An audience member at a talk I was giving made an interesting comment about a major technology area I had worked in early in my career: it was a niche technology now. I had not thought much about that until then, but it made sense. And it brought on other thoughts about how natural it is for a technology to reach a zenith and then be eclipsed by other technologies. Maybe it will disappear altogether, or maybe it will continue to have value for a protracted period without being the primary technology in use.
So, how does a technology go from mainstream to niche? I put that question to people in the industry to get their opinions. First, the discussion gravitated toward eliminating technologies that had never achieved a measure of success – generally considered to mean “in widespread usage,” if not dominant. The favorites or pet technologies that some were enamored with because of potential or “coolness” were also eliminated. Technologies that have not yet achieved mainstream usage are usually referred to as “emerging” or “developing.”
So a mainstream technology is one in widespread or dominant use. When a technology is no longer the primary one in use but still has value and has not disappeared, it becomes a niche. A niche is usually characterized by declining, or at least flat, revenue.
When a technology becomes niche, the perceptions around it change and that could impact the future of that technology. This could lead to economic impacts for companies and to career impacts for individuals. Of course, technologies that have been successful in high value solutions have a long tail – they continue to generate revenue and continue careers for long periods. That is especially true in the storage industry where change is much slower than most realize.
A technology that has moved from mainstream to niche is interesting to track as the industry continues to evolve. It is a signpost of sorts indicating inflection points in the industry. It’s not a bad thing – it’s just the natural order. Those that have started working in technologies that have come and gone such as vacuum tubes can probably tell the story of the gradual decline after solid state electronics replaced them.
The question left hanging here is which of today’s niche technologies were once mainstream. Niche does not mean no longer valuable, and these technologies will probably continue to be used for a long time. I’ve worked on several that are no longer dominant. They still have great value and have been the foundation for other technologies and systems. Given their continued widespread use, maybe they are not really niche. Things just change. But it does make for interesting discussions.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Hewlett-Packard is gaining momentum and market share with its 3PAR StoreServ arrays, and HP will try to continue that momentum next week at EMC World in Las Vegas.
HP reps will descend on EMC World to try to lure EMC customers to the 3PAR array with the HP Online Import software. The software was developed to migrate data from EMC Clariion and first-generation VNX midrange systems to 3PAR arrays. EMC requires a controller upgrade to go from those earlier systems to VNX2 arrays.
HP’s Online Import software is similar to the utility it developed to move customers from its EVA arrays to 3PAR.
“What if we could make it easier to go from VNX to 3PAR than from VNX to VNX2?” asked Craig Nunes, HP’s VP of storage marketing.
A free 180-day license for the online import utility is included with all new 3PAR arrays. Besides making for an easier migration, Nunes said the 3PAR arrays will convert thick volumes to thin volumes, saving a considerable amount of capacity.
“It’s no coincidence we’re introducing this today, in advance of EMC World,” Nunes said. “It’s a great opportunity to give that CX and VNX base insight into other options.”
The online import is an extension of HP’s “No is a three-letter word” campaign started earlier this year to try to sway EMC customers to switch.
“We’re calling it 72 hours of ‘yes,’ in Las Vegas,” Nunes said. “It’s a social media effort aimed at bringing concierge style services to EMC World attendees – theatre tickets, trips to Grand Canyon and other services they can get by tweeting #72hoursofyes.”
Nunes said HP’s “base of operations” will be at the Canal Shoppes above the Venetian Hotel where EMC World will take place. “We’ll also have a team on the ground in blue HP Storage Yes Team t-shirts connecting with EMC attendees,” he said.
HP’s come-on to EMC customers is of a type that may not be out of place in Vegas. HP’s press release claims attendees can request a complimentary ride from the airport, coffee “or anything else to make their trip more enjoyable.”
That doesn’t leave much out in Vegas. It also could end up costing HP more in services than it makes from a 3PAR array.
“Actually, we say anything you can get from your hotel concierge,” Nunes clarified, although that probably doesn’t leave much out either.
Red Hat doubled down on open source storage software today when it acquired startup Inktank for $175 million. Inktank’s Ceph software gives Red Hat object and block storage to go with the GlusterFS-based file storage that Red Hat acquired in 2011.
Ceph is an open-source scalable distributed file system created by Sage Weil, who founded Inktank in 2012 and is its CTO. Inktank began selling Inktank Ceph Enterprise as a subscription-based storage-only product in November 2013. In February, Inktank upgraded Ceph Enterprise to 1.1 and received formal certification for the Red Hat Enterprise Linux OpenStack Platform.
Inktank had $14.4 million in venture funding.
According to the FAQ:
“By aligning two leading open source communities, Red Hat can offer its customers a very competitive alternative to traditional proprietary storage systems from companies like EMC. Given the size of the storage opportunity, increasing the Red Hat investment in this area made a lot of sense, especially considering Inktank’s strong position with OpenStack.”
The FAQ said Red Hat will continue to sell and support Ceph’s products under the Red Hat brand, and will develop a roadmap to deliver compatible products for file, block and object storage. Red Hat said it will continue to support the Inktank development community.
In a blog on the Ceph community web site, Weil wrote that “Red Hat is one of only a handful of companies that I trust to steward the Ceph project. When we started Inktank … our goal was to build the business by making Ceph successful as a broad-based, collaborative open source project with a vibrant user, developer, and commercial community. Red Hat shares this vision.”
Weil wrote that the deal will require Inktank to change one part of its product strategy. Because Red Hat favors a pure open source model, Inktank will open source its Calamari monitoring and diagnostics tool, which is currently proprietary.
Red Hat expects the acquisition to close in May.
On a webcast to discuss the deal, Weil said he would join Red Hat to run the Ceph initiative. Weil and Red Hat CTO Brian Stevens said it was too soon to say whether Ceph will remain a standalone product or be bundled with other Red Hat software, or when we might see an open source Calamari.
CommVault missed its revenue expectations last quarter, a result CEO Bob Hammer found especially frustrating because he sees great opportunity for the backup vendor to flourish. He also sees silver linings in CommVault’s impending Simpana product upgrade and the cloud.
CommVault’s revenue of $157 million last quarter increased 13 percent over last year but fell about $3 million short of Wall Street’s forecast. Hammer blamed the shortfall on a failure to close big deals, particularly in North America. And he blamed that partly on the vendor moving sales resources to the cloud and other parts of the world, and on the distraction of winding down its Dell partnership.
CommVault executives say their issues are short-term, and maintain the company is on track to become a billion-dollar revenue player (its revenue for the just-completed fiscal year was $586 million). They said enterprise deals ($100,000 and up) did not fall through because customers went to competitors, and some have closed this quarter.
Hammer said he sees great potential for CommVault. The data protection space is wide open with the cloud changing market dynamics, Symantec plodding along without a CEO or clear direction and smaller vendors such as Actifio gaining momentum.
“Despite the weak quarter in the Americas, my confidence on the business in general is the highest it’s ever been,” Hammer said Friday on CommVault’s earnings call. “So I’m really confident that there is extremely high probability that if we get the execution pieces in place, we’re going to hit numbers.”
Later on the call, he added of the poor quarter, “You can tell from my voice, obviously, that it’s an execution issue and fundamentally pisses me off. So instead of fooling around with it, we said we’re going to hit this with a damn sledgehammer. So we put the engine in place to solve that problem. That’s pure execution.”
Hammer pointed to a massive shift to the cloud as part of his reason for optimism. He said approximately 200 service providers use Simpana for data protection, and CommVault will continue to invest heavily in cloud technologies.
Another reason for optimism is Simpana 10 R2, a major upgrade to CommVault’s backup and data management application due in July. “This release will include enhancements to core data management protection, particularly in the areas of virtualization, archiving and snapshot and replication management,” Hammer said.
He added the upgrade will include “new technology to securely and automatically move data to the cloud, in the cloud, and cloud-to-cloud, a standalone mobile solution with added capabilities for document sharing and data loss prevention … new solutions for operations management and intelligence and operations analytics … the ability to economically recover, use, replace and browse data in live native format and virtualized environments providing the capability to immediately restore, copy, back data into a usable state.”
Hammer revealed CommVault is preparing integrated appliances for archiving and cloud gateways that will involve partners. He said these appliances will be “engineered by CommVault and built on commercially available servers and storage.”
NaviSite, a subsidiary of Time Warner Cable, offers managed cloud services from its four colocation facilities in the United States and England. NaviSite has supported Actifio Copy Data Storage (CDS) systems for some time, with customers placing one Actifio device on-premises and one in a NaviSite colocation facility for data protection. It also uses EMC Atmos for cloud storage.
NaviSite now hopes to expand its customer base by tapping into EMC Data Domain users.
“There are a considerable number of customers in the Data Domain installed base,” said Chris Patterson, NaviSite’s vice president of product management. “The Vault does like-for-like replication from one array to another. We thought it was best to support the more popular array out there.”
Patterson said NaviSite has a large Data Domain system at its data center. It sells customers smaller Data Domain devices to place on premises, or customers can use Data Domain appliances they already own. As with the Actifio system, the on-premises Data Domain device acts as a target for applications, and the data is then replicated to the system in NaviSite’s public cloud.
“If a customer has a lot of data, more than 10 or 20 terabytes, we ship a small Data Domain to them so they can sync up (the data) on their site and they can send it back to us,” Patterson said.
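Patterson’s 10-to-20 TB threshold for shipping a seed appliance is easy to sanity-check: even on a fully utilized WAN link, the initial sync of that much data takes weeks. A rough illustration, assuming an idealized link with no protocol overhead (the 100 Mbit/s figure below is just an example, not from the article):

```python
def wan_transfer_days(terabytes: float, link_mbps: float) -> float:
    """Days to push a dataset over a WAN link at 100% utilization."""
    bits = terabytes * 1e12 * 8          # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6)   # Mbit/s -> bit/s
    return seconds / 86_400              # seconds per day

# 20 TB over a saturated 100 Mbit/s link takes well over two weeks:
print(f"{wan_transfer_days(20, 100):.1f} days")
```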
Hitachi Data Systems wasn’t the only vendor to launch a new enterprise SAN array this week. Hewlett-Packard also brought out its XP7.
That’s not a coincidence, because the XP7 and the HDS Virtual Storage Platform (VSP) G1000 use the same hardware architecture. HP has been licensing the technology from Hitachi for 15 years, and brings out its enterprise arrays at the same time as HDS.
The dual rollout raises two questions for HP: What is the difference between the two arrays? And how does the XP7 fit alongside HP’s own 3PAR StoreServ family in HP’s enterprise strategy?
HP people can be touchy about the first question. They refer to the HDS deal as a technology partnership rather than a straight OEM deal. As HP storage and social media expert Calvin Zito wrote on his blog this week:
“One of my big pet peeves is when people say that we rebadge an HDS array. That couldn’t be further from the truth and I dare say that HP has made far more contributions to the XP platform over the years because of the technology agreement with Hitachi Ltd.”
HP brings its software and firmware to the array, adding features such as Performance Advisor, HP advanced clustering features and integration with HP servers. However, these features are overshadowed by Hitachi’s storage virtualization capabilities that allow the arrays to support systems from any major storage vendor and the new Hitachi flash modules that also work with the XP7.
Most of HP’s storage focus is on the 3PAR platform, which spans from the midrange into the enterprise. That is HP’s best-selling storage system and fits into large implementations, so why sell the XP7 to compete against itself? The simplest reason is mainframe connectivity. The XP7/VSP G1000 IP goes back to the days when large storage systems were almost always connected to mainframes. 3PAR, which came along in the early 2000s, was designed for web and cloud hosting companies.
“We recognize that customers and the industry is going through a transformation from reliable robust legacy applications to new styles of IT-as-a-service, cloud-based virtual environment,” said Kyle Fitze, HP’s director for XP storage. “Customers in some cases can’t move that fast because there are challenges around business needs and the ability to introduce new technology in a seamless way.
“XP is for conservative mission critical customers with expectations of high performance and low latency. StoreServ is for customers re-architecting their data centers, changing applications and moving to a services-oriented model.”