Storage Soup


May 14, 2014  9:32 AM

Storage lifespans: don’t confuse technology with data

Randy Kerns

Some clarification is needed about what lifespan means regarding storage, because product messaging often refers to the lifespan of the storage technology and the lifespan of the data in the same context, and that creates confusion.

The lifespan of a storage system is shaped by many things: wear-out mechanisms in the devices, technology obsolescence in the face of new developments, an inability to keep up with changing demands for performance and capacity, and physical issues such as space and power.

The wear-out mechanisms are tied to support costs, which typically increase dramatically after the warranty period, which runs three to five years for enterprise storage systems. These issues all lead to a cycle of planned replacement of storage systems, often triggered by the depreciation schedule for the asset.

For the information or data stored on a storage system, the lifespan depends on the characteristics of and policies applied to that data. Information subject to regulatory compliance usually has a defined lifespan, or period of time it must be retained. Other data may be covered by business governance rules for retention. Most data is not so clearly defined, and disposition is left to the owners of the data (business owners, in many discussions). Typically, data is retained for a long time – perhaps decades or even forever.

The confusion arises over how to update the storage technology independently of the content stored on it. Doing so requires changing technology without disrupting access to the data, without migrations that add administrative effort and operational expense, and without creating risk of disruption or data loss. These concerns are addressed by the many implementations of scale-out technology delivered in NAS or object storage systems.

Clustering, grids, rings and other interconnect and data-distribution technologies are key to scale-out. Nodes can be added to a configuration (cluster, grid, ring, etc.) and data is automatically and transparently redistributed. Nodes can also be retired: data is evacuated and redistributed automatically, and once a node is empty it can be removed – all transparently.
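
To make the redistribution idea concrete, here is a minimal sketch of a consistent-hash ring, one common way scale-out systems decide where data lives. It is a generic illustration in Python, not any particular vendor's implementation: adding a node remaps only a fraction of the keys, and retiring a node sends its keys to the remaining nodes.

```python
import bisect
import hashlib

def _hash(key):
    # Map any string to a position on the ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Toy consistent-hash ring: adding or retiring a node only remaps
    the keys that fall on that node's arcs of the ring."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes
        self._ring = []          # sorted list of (position, node)
        for node in nodes:
            self.add_node(node)

    def add_node(self, node):
        for i in range(self.vnodes):
            bisect.insort(self._ring, (_hash("%s#%d" % (node, i)), node))

    def retire_node(self, node):
        # "Evacuate" the node: its keys now resolve to the next node on the ring.
        self._ring = [(pos, n) for (pos, n) in self._ring if n != node]

    def locate(self, key):
        idx = bisect.bisect(self._ring, (_hash(key), ""))
        return self._ring[idx % len(self._ring)][1]

ring = ConsistentHashRing(["node1", "node2", "node3"])
keys = ["object-%d" % i for i in range(1000)]
before = {k: ring.locate(k) for k in keys}
ring.add_node("node4")                      # scale out
moved = sum(1 for k in keys if ring.locate(k) != before[k])
print("keys remapped after adding a node: %d of %d" % (moved, len(keys)))
```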

These scale-out characteristics allow storage technology to progress: new technology replaces old. This usually happens within the constraints of a particular vendor's software or hardware implementation. The important development is that the data is independent of the storage technology change.

For data, the format and the application are the big issues. Data may need to be converted to another form when the application that accesses it changes and the old format is no longer supported. Being able to access data from an application is more important than merely storing it, and the ability to understand the data is independent of the storage. Updating technology and carrying data along with those storage improvements is possible and is being addressed with new scale-out systems. Dealing with formats that must persist over time is a separate issue that can be handled independently of the storage technology.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

May 13, 2014  4:35 PM

Microsoft previews Azure Files, Azure Site Recovery at TechEd

Carol Sliwa

Microsoft previewed the Azure Files service and new Azure-based disaster recovery capabilities yesterday at TechEd North America in Houston, in line with its major conference theme of connecting on-premise systems and the public cloud.

Azure Files is designed to address the problem of moving on-premise applications and data that use file-based storage to object-based storage in the cloud. Local applications often run on virtual machines (VMs) and use traditional file protocols, such as Server Message Block (SMB), to access shared storage. But cloud-based object storage is generally accessible via REST APIs.

Until now, enterprises had to rewrite the applications to use REST APIs or use a gateway product to shift their application data to Microsoft’s Azure cloud storage. Azure Files gives them the option to access an Azure File share using SMB 2.1 or REST APIs, allowing Azure to act as a cloud NAS.

“Think of now having shared and common storage in Azure with an SMB protocol head to it that all your VMs in Azure — all the applications that you’re writing — can now use in a shared manner,” Brad Anderson, vice president of Microsoft’s cloud and enterprise division, said during the opening keynote.

The Azure Files service is available as a public preview. Microsoft declined to provide the expected date for the general release.

Microsoft also has yet to set a timetable for SMB 3.0 support. When Azure Files are accessed via the currently supported SMB 2.1, the file shares are available only to VMs within the same region as the storage account. REST APIs, however, are available for concurrent file access from anywhere, according to a Microsoft Azure storage team blog post.
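
For a sense of what REST access looks like, the sketch below calls the Azure Files "List Directories and Files" operation against a share root. It is illustrative only: the storage account, share name and SAS token are placeholders, and using a shared access signature is an assumption made to keep the example short (real code could instead sign each request with the account key).

```python
import requests
import xml.etree.ElementTree as ET

# Placeholders -- substitute a real storage account, share and SAS token.
ACCOUNT = "mystorageaccount"
SHARE = "myshare"
SAS_TOKEN = "sv=...&sig=..."   # assumed: a valid shared access signature

# "List Directories and Files" operation against the share root.
url = ("https://{acct}.file.core.windows.net/{share}"
       "?restype=directory&comp=list&{sas}").format(
           acct=ACCOUNT, share=SHARE, sas=SAS_TOKEN)

resp = requests.get(url)
resp.raise_for_status()

# The response is XML; print the name of each file or directory entry.
root = ET.fromstring(resp.content)
for entry in root.iter():
    if entry.tag in ("File", "Directory"):
        name = entry.find("Name")
        print(entry.tag, name.text if name is not None else "?")
```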

According to the blog post, the scalability targets for the Azure Files preview are up to 5 TB per file share, a file size of up to 1 TB, up to 1,000 IOPS (at 8 KB block size) per file share and throughput up to 60 MB/s per file share for large I/O.

Pricing for Microsoft Azure Files is 4 cents per GB for locally redundant storage during the preview period. The price includes a 50% preview discount. Geographically redundant storage is not available during the preview period, according to Microsoft’s Azure site.

Microsoft also unveiled new capabilities for Azure Site Recovery (formerly Azure Hyper-V Recovery Manager) at TechEd. The new capabilities will enable customers to replicate VMs from their own data centers directly to Azure and coordinate the recovery of workloads in the cloud. A preview is due next month, according to Anderson.

“This is the No. 1 request that we have heard for Hyper-V Replication Manager today,” Anderson said. He said customers will have a “complete disaster recovery solution with the ability to seriously fail over in an unplanned or planned manner to Azure.”

Anderson said disaster recovery (DR) is typically reserved for only the most mission-critical applications because it’s too expensive and too difficult. But he claimed the simplicity of Azure Site Recovery makes the service suitable for all workloads.

Microsoft priced Hyper-V Recovery Manager by the number of VMs protected, based on the average daily number over a monthly period. Pricing was $16 per protected VM.


May 12, 2014  8:06 AM

Startup Maxta gets $25 million in funding, partners with Intel

Sonia Lelii

Maxta, a software-based hyper-convergence startup, last week picked up $25 million in series B funding and a significant strategic investor.

Intel Capital participated in the round, which brings Maxta’s total funding to $35 million. Maxta’s distributed software runs on virtual servers to pool flash and disk storage capacity at the server level, allowing customers to build a SAN from commodity hardware.

Maxta founder and CEO Yoram Novick said the startup is involved in a long-term strategic partnership with Intel to develop a software-based, virtual storage technology that works with Intel chips, motherboards and servers.

“We believe the main challenge for the virtual data center is storage,” Novick said. “The compute side has improved a lot while traditional storage has not changed much. Intel is working on features to provide better storage in a converged infrastructure. We saw the same thing coming, so we decided to work together. We will work with them to develop a better platform. We will add more features to leverage their architecture.”

Novick said the Maxta Storage Platform (MxSP) architecture is hypervisor-agnostic, although it primarily works in VMware environments. Maxta also said it will support Microsoft Hyper-V and KVM installations if a customer requests it.

“If a customer wants it, they will need to talk to us,” Novick said. “We do it with customers working with other hypervisors. We have limited availability for other solutions.”

The MxSP software does checksums to ensure data integrity, along with local replication to eliminate a single point of failure. It accelerates writes with write-back caching on solid state drives (SSDs) and a log-based data layout. It accelerates reads by caching metadata and hot data on SSDs. It also has the ability to co-locate virtual machines and the associated data.
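
As a generic illustration of the write-back-caching-plus-checksum pattern (this is not Maxta's code, just a toy version of the technique), writes land in a fast tier with a checksum, reads verify the checksum, and a destage step later flushes dirty blocks to the backing store:

```python
import hashlib

class WriteBackCache:
    """Toy write-back cache: writes are acknowledged from a fast tier
    and checksummed; dirty blocks are destaged to the backing store later."""

    def __init__(self):
        self.ssd = {}       # block id -> (data, checksum), dirty blocks
        self.backing = {}   # block id -> (data, checksum)

    @staticmethod
    def _checksum(data):
        return hashlib.sha256(data).hexdigest()

    def write(self, block_id, data):
        # Acknowledge the write once it is in the fast tier.
        self.ssd[block_id] = (data, self._checksum(data))

    def read(self, block_id):
        data, csum = self.ssd.get(block_id) or self.backing[block_id]
        if self._checksum(data) != csum:
            raise IOError("checksum mismatch on block %r" % block_id)
        return data

    def destage(self):
        # Flush dirty blocks to the backing store in the background.
        self.backing.update(self.ssd)
        self.ssd.clear()

cache = WriteBackCache()
cache.write("vm1-block-0", b"hello")
print(cache.read("vm1-block-0"))   # served from the fast tier
cache.destage()
print(cache.read("vm1-block-0"))   # served from the backing store
```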

Novick said use cases include primary storage, disaster recovery, virtual desktop infrastructure (VDI), cloud, and test and development. He said the new funding will be used to expand sales and marketing. The Sunnyvale, California-based company has 40 employees, mostly engineers.

The B round was led by Tenaya Capital, with existing investor Andreessen Horowitz also participating.


May 7, 2014  9:37 PM

EMC Data Protection execs disclose upcoming features, management UI

Carol Sliwa

EMC Data Protection and Availability Division executives dropped hints about upcoming snapshot and high availability features and showed off a proof of concept of a new management user interface yesterday during their super session at EMC World.

Guy Churchward, the division’s president, told attendees to expect a big announcement “shortly” for a new capability called Centaur. A slide referred to it as “snap shot primary to protection storage.” An EMC representative later confirmed Centaur is a “future deliverable.”

“Wouldn’t it be nice if we could actually spit directly from something like a VMAX into a Data Domain? And therefore you actually end run the software backup stack,” Churchward said. “Performance-wise, it’s great. Optimization-wise, it’s great. We’re always looking at disrupting this industry and actually driving a level of innovation.”

Churchward also disclosed plans for protection storage high availability (HA). He said that would take EMC Data Domain’s Data Invulnerability Architecture “just a tiny bit further.” Churchward didn’t supply a date. As with Centaur, the EMC representative would say only that HA is a “future deliverable.”

After displaying a slide illustrating some of the management user interface (UI) improvements for backup and recovery, Churchward issued the following caveat to attendees: “This is a concept of what you will be seeing in the next 18, 24 months of a UI of the future.”

The UI’s initial screen was divided into three segments: system optimization, system health and a data chat box for seeking outside help.

The health portion of the screen listed the total number of systems under management and information such as the number under self-care or EMC-assisted care and the number for which operational fixes were available.

Under system optimization, the UI displayed the number of systems optimized and unoptimized in categories such as capacity forecast, garbage collection and replication lag. The dashboard could indicate the number of systems running out of capacity within 90 days and let the user drill down for more detailed, actionable information, according to Stephen Manley, CTO of EMC’s Data Protection and Availability Division.
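
For a rough idea of what a "running out of capacity within 90 days" calculation involves, here is a purely illustrative sketch, not EMC's algorithm: fit a line to daily used-capacity samples and project when the system crosses its total capacity. A dashboard would then flag any system whose projected headroom falls below 90 days.

```python
# Illustrative capacity forecast: fit a line to daily used-capacity samples
# and project when the system crosses its total capacity.
def days_until_full(daily_used_tb, total_tb):
    n = len(daily_used_tb)
    xs = range(n)
    x_mean = sum(xs) / float(n)
    y_mean = sum(daily_used_tb) / float(n)
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, daily_used_tb))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope <= 0:
        return None  # capacity use is not growing; no forecastable fill date
    intercept = y_mean - slope * x_mean
    day_full = (total_tb - intercept) / slope
    return max(0, int(day_full - (n - 1)))   # days left after the last sample

samples = [40.0, 41.2, 42.1, 43.5, 44.4, 45.8, 46.9]   # TB used, one per day
print(days_until_full(samples, 60.0))                   # days of headroom left
```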

Manley outlined an example of a system that wasn’t seeing a good deduplication ratio because its workload included videos and encrypted data that doesn’t deduplicate well. The UI supplied information on options to resolve the issue, such as moving the videos from EMC’s Data Domain to VMware’s vCloud Hybrid Service (vCHS) and the encrypted data to an encrypted container.

“Now the cool thing with this move is it’s going to wait until a good time to move, when the performance and the network bandwidth are available,” Manley said.

In addition to explaining the new UI concept, Manley laid out the company’s vision for providing data protection that can span on-premise, virtualized and newer hybrid and “born in the cloud” consumption models.

“The future of data protection is in metadata,” Manley asserted. “It’s that information about your infrastructure, about your applications, the information about your information, who owns it, the tags, the keywords that are associated with it. That’s what’s going to move us forward.”

Manley broke down the discussion into three areas: hybrid cloud mobility (“the right data in the right place”), analytics-driven automation and cloud-centric data management.

On hybrid cloud mobility: Manley said a company might want to shift data for disaster recovery or analytics, but it needs to understand where it can move the data and what tools will facilitate the migration. “If I move it, is my protection still going to be there? That’s that infrastructure analytics I need and the metadata that goes with it,” he said.

He said application metadata can provide information to ensure the systems function well after the move. “Data mobility is really the lifeblood of the hybrid cloud, and metadata is how you’re going to make it work,” Manley said.

On analytics-driven automation: Manley said he has spoken with customers who have “gathered all the metadata into this lake” and ask him, “Now what?” Those doing analytics are often buried in reports and dashboards.

He said he often fields questions such as: “Am I getting the most I can out of my Data Domain? Am I getting the right dedupe rate? Am I getting the right performance? Should I be upgrading? Should I add more storage? Should I look at a different type of protection storage?”

“Right now, the answer to that is based on some experience and a lot of black magic,” he said. “But, we can do better.”

EMC already captures information about usage on installed systems to help with customer support. Manley said EMC could feed the telemetry into a Greenplum database, analyze it and apply algorithms to make sure the data is in the real-world sweet spot, “not just the white paper sweet spot.”

“What we really need is a social network of machines that connect to each other so we see the interrelationships and then connect to humans so we can take action on it,” Manley said. The answer lies in metadata, he said.

On cloud-centric data management: Manley discussed the need for metadata about a cloud provider’s infrastructure, such as logs to verify service delivery. He said customers may want to move data either within the provider’s infrastructure or to another cloud provider, or they may need to retrieve data. Whether searching on- or off-premise, they need a single source of metadata to locate what they need, he said.

“That means you need to do the metadata across the products,” said Churchward. “We’re going to play with things like RecoverPoint and Vplex and whatever, mush it up and it’s all going to be magic and it’ll happen.”

After Manley said “yes” with great enthusiasm, Churchward said, “Yeah, no wonder you’re a CTO.”


May 7, 2014  7:53 AM

EMC adds security to Syncplicity file sharing

Sonia Lelii

LAS VEGAS — Like many online file sharing companies, EMC Syncplicity is trying to balance the user experience with security functions that IT departments demand.

At EMC World this week, Syncplicity added two enhancements to its online file sharing product — a Storage Vault Authentication function to set security controls on storage repositories, and a Syncplicity Connector for SharePoint so data from that application can be accessed and edited via mobile devices.

EMC first integrated a policy-based Storage Vault capability last year that lets IT administrators select where data is stored according to user, group, folder, file or content type. The storage can be on-premises or in a private or public cloud. The new Storage Vault Authentication capability gives the IT department the ability to set a second layer of security controls for sync-and-share data.

“Security has been at odds with usability,” said Jeetu Patel, vice president and general manager of EMC’s Syncplicity business unit. “Your design points should not be at odds. It’s the way you implement that capability. When you add security, there is a way to enhance productivity. That may sound counter-intuitive.”

Patel said the second authentication layer allows IT to set policy-based controls on the Storage Vault repositories holding certain sensitive data. Previously, user authorization controls for access to sync-and-share data existed only in the Syncplicity application.

“This was driven by enterprise customers,” Patel said. “It’s for companies that say, ‘I’m still nervous about the cloud.’ We give them a second layer of authentication. So not only does Syncplicity do authorization but the repository has to allow authorization. You might not need this for all content.”
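
Conceptually, the second layer means a request has to clear two independent checks: the Syncplicity application's authorization and the policy attached to the Storage Vault holding the file. The sketch below is a generic illustration of layered authorization with made-up policies and names; it is not Syncplicity's implementation.

```python
# Generic layered-authorization sketch: a request must be allowed by the
# sync-and-share application *and* by the policy on the storage vault.
APP_ACL = {("alice", "q3-forecast.xlsx"): True}
VAULT_POLICY = {
    "finance-vault": {"allowed_groups": {"finance"}, "require_mfa": True},
}
USER_GROUPS = {"alice": {"finance"}}

def can_access(user, filename, vault, mfa_passed):
    app_ok = APP_ACL.get((user, filename), False)
    policy = VAULT_POLICY[vault]
    vault_ok = (USER_GROUPS.get(user, set()) & policy["allowed_groups"]
                and (mfa_passed or not policy["require_mfa"]))
    return bool(app_ok and vault_ok)

print(can_access("alice", "q3-forecast.xlsx", "finance-vault", mfa_passed=True))   # True
print(can_access("alice", "q3-forecast.xlsx", "finance-vault", mfa_passed=False))  # False
```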

Patel said the Syncplicity Connector for SharePoint works as a repository for content and helps bridge the gap between the SharePoint application and EMC’s sync-and-share application. Online file sharing applications often are used to replace SharePoint as a collaboration tool, but companies may not move all of their files out of SharePoint.

“A lot of repositories don’t die,” Patel said. “You may have found a more progressive way to do things but you still have to access data from those repositories. You shouldn’t have to take on a massive migration problem.”

The Syncplicity file sync-and-share application is expected to play a role in EMC’s federation business model, in which product development, marketing and sales are balanced among the Pivotal, VMware and EMC Information Infrastructure businesses. EMC has identified mobile devices, social platforms, big data, cloud and security as its main growth areas.

“We will work with these assets when it makes sense,” said Patel. “For instance, you can expect to see integration with (VMware’s) AirWatch mobile device manager. There is a lot of collaboration we are doing with different units.”


May 5, 2014  9:25 AM

EMC World 2014 opens with cloud appliance, ViPR upgrade

Dave Raffo

LAS VEGAS — EMC World 2014 opened this morning with the launch of a cloud storage appliance and the next version of ViPR. The products represent two areas that EMC will focus on during the four-day conference – the cloud and software-defined storage.

EMC first unveiled the concept of ViPR at last year’s EMC World and began shipping the software last fall with support for object storage and EMC VNX, VMAX and Isilon arrays, as well as NetApp arrays. The 1.1 upgrade in January added support for Hadoop and the EMC Storage Resource Management suite, but no additional arrays.

ViPR 2.0 supports HDS and commodity storage natively. Support for Dell, Hewlett-Packard and IBM arrays requires an OpenStack Cinder block storage plug-in. The new version of ViPR also includes geo-distribution and multi-tenancy support for clouds.

The EMC Elastic Cloud Storage (ECS) Appliance – known as Project Nile during its beta — is designed for public and hybrid clouds, and will scale to 2.9 PB in one rack. ECS is built on the ViPR platform. EMC did not offer many specifics in its initial press release, but more information will be available during the conference.

In a blog post published this morning, Manuvir Das, EMC’s VP of engineering for its advanced software division, listed the features of the ECS appliance:

  • Universal protocol support in a single platform, including block, object, and HDFS [Hadoop distributed file system]
  • Single management view across multiple types of infrastructures
  • Multi-site, active-active architecture with a single global namespace enabling the management of a geographically distributed environment as a single logical resource using metadata-driven policies to distribute and protect content, and
  • Multi-tenancy support, detailed metering, and an intuitive self-service portal, as well as billing integration.

 


May 2, 2014  12:47 PM

Nutanix tackles VDI on per desktop basis

Dave Raffo

Virtual desktop infrastructure (VDI) is a big use case for storage that includes flash, as well as storage built for virtualization. Nutanix fits both categories with its hyper-converged Virtual Computing Platform that includes flash and hard disk drives, so it’s no surprise that VDI is a key market for the startup.

This week Nutanix unveiled a per-desktop program to make it easier for customers to size their systems for VDI. Customers tell Nutanix how many virtual desktops they want to deploy; Nutanix recommends the right system and guarantees performance based on user profiles. Nutanix has four VDI customer profiles: kiosk, task, knowledge user and power user.
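
The arithmetic behind a per-desktop sizing exercise is straightforward. The sketch below is hypothetical: the per-profile resource figures and node capacities are made-up assumptions rather than Nutanix's published numbers, and it ignores refinements such as CPU overcommit.

```python
# Hypothetical sizing helper; all figures below are illustrative assumptions.
PROFILES = {
    #            vCPU, RAM (GB), IOPS per desktop
    "kiosk":     (1, 1.0,  5),
    "task":      (1, 1.5, 10),
    "knowledge": (2, 2.0, 20),
    "power":     (2, 4.0, 40),
}
NODE = {"vcpu": 40, "ram_gb": 256, "iops": 25000}   # assumed per-node capacity

def nodes_needed(desktops):
    """desktops: dict mapping profile name -> number of desktops."""
    need = {"vcpu": 0, "ram_gb": 0.0, "iops": 0}
    for profile, count in desktops.items():
        vcpu, ram, iops = PROFILES[profile]
        need["vcpu"] += vcpu * count
        need["ram_gb"] += ram * count
        need["iops"] += iops * count
    # Size to the most constrained resource, rounding up to whole nodes.
    return int(max(-(-need[k] // NODE[k]) for k in need))

print(nodes_needed({"task": 300, "knowledge": 150, "power": 50}))
```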

Greg Smith, Nutanix senior director of product marketing, said pricing for the VDI-specific systems starts at $200 per desktop for storage and computing. The systems start at 150 virtual desktops. If performance is unsatisfactory, Nutanix will provide the right hardware and software to make it work, Smith said.

“Normally the onus is on the end user to build it themselves,” he said. “We’re making things simpler. Customers tell us how many VDI users they have, what types of users they are, and we provide the infrastructure.”

Smith said Nutanix customers often start with VDI and then add other applications to their appliances. Per desktop customers can go back and add servers and clusters for other apps if they want to expand.

Smith said Nutanix’s Prism software optimizes VDI performance, so no other software is necessary beyond the VDI broker, VMware Horizon View or Citrix XenDesktop.


May 2, 2014  7:27 AM

When a technology goes from mainstream to niche

Randy Kerns

An interesting comment came from a person in the audience at a talk I was giving. It was about a major technology area I had worked in early in my career, and the comment was that it is now a niche technology. I had not thought much about that until then, but it made sense. It also brought on other thoughts about how natural it is for a technology to reach a zenith and then be eclipsed by other technologies. Maybe it disappears altogether, or maybe it continues to have value for a protracted period without being the primary technology in use.

So, how does a technology go from mainstream to niche? I put that question to people in the industry to get their opinions. First, the discussion gravitated toward eliminating technologies that had never gained a measure of success, with success generally considered to mean “in widespread usage” if not dominant. The favorite or pet technologies that some were enamored with because of their potential or “coolness” were also eliminated. Technologies that have not yet achieved mainstream usage are usually referred to as “emerging” or “developing.”

So a mainstream technology is one that is in widespread or dominant usage. When a technology is no longer the primary one in use but still has value and has not disappeared, it becomes a niche. A niche is usually characterized by declining, or at least flat, revenue.

When a technology becomes niche, the perceptions around it change, and that can affect its future. This can lead to economic impacts for companies and career impacts for individuals. Of course, technologies that have been successful in high-value solutions have a long tail – they continue to generate revenue and sustain careers for long periods. That is especially true in the storage industry, where change is much slower than most realize.

A technology that has moved from mainstream to niche is interesting to track as the industry continues to evolve. It is a signpost of sorts, indicating inflection points in the industry. It’s not a bad thing – it’s just the natural order. Those who started out working in technologies that have since come and gone, such as vacuum tubes, can probably tell the story of the gradual decline after solid-state electronics replaced them.

The question left hanging here is which of today’s niche technologies were once mainstream. Niche does not mean they are no longer highly valuable, and they will probably continue to be used for a long time. I’ve worked on several that are not dominant anymore. They still have great value and have been the foundation for other technologies and systems. Because of their widespread use, they may not even be niche. Things just change. But it does make for interesting discussions.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


May 1, 2014  10:10 AM

HP wants to escort EMC World attendees to 3PAR

Dave Raffo

Hewlett-Packard is gaining momentum and market share with its 3PAR StoreServ arrays, and HP will try to continue that momentum next week at EMC World in Las Vegas.

HP reps will descend on EMC World to try to lure EMC customers to the 3PAR array with HP Online Import software. The software was developed to migrate data from EMC Clariion and first-generation VNX midrange systems to 3PAR arrays. EMC requires a controller upgrade to move from those earlier systems to VNX2 arrays.

HP’s Online Import software is similar to the utility it developed to move customers from its EVA arrays to 3PAR.

“What if we could make it easier to go from VNX to 3PAR than from VNX to VNX2?” asked Craig Nunes, HP’s VP of storage marketing.

A free 180-day license for the Online Import utility is included with all new 3PAR arrays. Besides making for an easier migration, Nunes said, the 3PAR arrays will convert thick volumes to thin volumes, saving a considerable amount of capacity.
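
The idea behind thick-to-thin conversion is simply to stop storing blocks that contain nothing but zeros. A minimal sketch of that idea follows; it is not HP's implementation, and the block size is arbitrary.

```python
BLOCK_SIZE = 16 * 1024  # arbitrary block size for illustration

def thin_convert(thick_volume):
    """Return a sparse {offset: block} map, dropping all-zero blocks."""
    thin = {}
    for offset in range(0, len(thick_volume), BLOCK_SIZE):
        block = thick_volume[offset:offset + BLOCK_SIZE]
        if block.strip(b"\x00"):          # keep only blocks with real data
            thin[offset] = block
    return thin

# A mostly empty "thick" volume: 1 MB of zeros with a little data in it.
volume = bytearray(1024 * 1024)
volume[0:11] = b"hello world"
thin = thin_convert(bytes(volume))
print("allocated blocks:", len(thin), "of", len(volume) // BLOCK_SIZE)
```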

“It’s no coincidence we’re introducing this today, in advance of EMC World,” Nunes said. “It’s a great opportunity to give that CX and VNX base insight into other options.”

The online import is an extension of HP’s “No is a three-letter word” campaign started earlier this year to try to sway EMC customers to switch.

“We’re calling it 72 hours of ‘yes,’ in Las Vegas,” Nunes said. “It’s a social media effort aimed at bringing concierge style services to EMC World attendees – theatre tickets, trips to Grand Canyon and other services they can get by tweeting #72hoursofyes.”

Nunes said HP’s “base of operations” will be at the Canal Shoppes above the Venetian Hotel where EMC World will take place. “We’ll also have a team on the ground in blue HP Storage Yes Team t-shirts connecting with EMC attendees,” he said.

HP’s come-on to EMC customers is of a type that may not be out of place in Vegas. HP’s press release claims attendees can request a complimentary ride from the airport, coffee “or anything else to make their trip more enjoyable.”

That doesn’t leave much out in Vegas. It also could end up costing HP more in services than it makes from a 3PAR array.

“Actually, we say anything you can get from your hotel concierge,” Nunes clarified, although that probably doesn’t leave much out either.


April 30, 2014  9:17 AM

Red Hat opens check book for Inktank’s open-source Ceph

Dave Raffo

Red Hat doubled down on open source storage software today when it acquired startup Inktank for $175 million. Inktank’s Ceph software gives Red Hat object and block storage to go with the GlusterFS-based file storage that Red Hat acquired in 2011.

Ceph is an open-source, scalable distributed file system created by Sage Weil, who founded Inktank in 2012 and serves as its CTO. Inktank began selling Inktank Ceph Enterprise as a subscription-based, storage-only product in November 2013. In February, Inktank upgraded Ceph Enterprise to version 1.1 and received formal certification for the Red Hat Enterprise Linux OpenStack Platform.
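
For readers who haven't used Ceph, object access typically goes through the librados bindings. Here is a minimal sketch, assuming a reachable cluster configured in /etc/ceph/ceph.conf, an existing pool named "data" and the python-rados package installed.

```python
import rados  # python-rados bindings, packaged with Ceph

# Assumes /etc/ceph/ceph.conf points at a reachable cluster
# and that a pool named "data" already exists.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("data")
    try:
        ioctx.write_full("hello-object", b"stored via librados")
        print(ioctx.read("hello-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```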

Inktank had $14.4 million in venture funding.

In an FAQ on its website, Red Hat positioned Ceph as complementary to its Gluster-based Red Hat Storage Server.

According to the FAQ:

“By aligning two leading open source communities, Red Hat can offer its customers a very competitive alternative to traditional proprietary storage systems from companies like EMC. Given the size of the storage opportunity, increasing the Red Hat investment in this area made a lot of sense, especially considering Inktank’s strong position with OpenStack.”

The FAQ said Red Hat will continue to sell and support Ceph’s products under the Red Hat brand, and will develop a roadmap to deliver compatible products for file, block and object storage. Red Hat said it will continue to support the Inktank development community.

In a blog on the Ceph community web site, Weil wrote that “Red Hat is one of only a handful of companies that I trust to steward the Ceph project. When we started Inktank … our goal was to build the business by making Ceph successful as a broad-based, collaborative open source project with a vibrant user, developer, and commercial community. Red Hat shares this vision.”

Weil wrote that the deal will require Inktank to change one part of its product strategy. Because Red Hat favors a pure open source model, Inktank will open source its Calamari monitoring and diagnostics tool, which is currently proprietary.

Red Hat expects the acquisition to close in May.

On a webcast to discuss the deal, Weil said he would join Red Hat to run the Ceph initiative. Weil and Red Hat CTO Brian Stevens said it was too soon to say whether Ceph will remain a standalone product or be bundled with other Red Hat software, or when an open source Calamari might appear.

 

