Storage Soup


May 16, 2014  10:22 AM

Sphere 3D buys Overland Storage for $81 million

Dave Raffo

Overland Storage is going away. At least, the company name will disappear after its merger with Sphere 3D is completed. Overland’s products will live on, whether or not they have the Overland brand.

Overland and Sphere 3D revealed their merger plans Thursday. You need a scorecard to keep track of the two companies’ recent mergers. Sphere 3D acquired virtual desktop infrastructure (VDI) software startup V3 Systems in March and is now merging with Overland, which merged with tape and removable disk cartridge vendor Tandberg Data in January. Overland CEO Eric Kelly, who is also chairman of Sphere 3D, said the Tandberg merger is proceeding as planned. The companies have completed the first of three phases, with phase three scheduled to wrap up by the end of the year.

Sphere 3D will pay $81 million for Overland stock, and the combined companies will be called Sphere 3D. Kelly and Sphere 3D CEO Peter Tassiopoulos discussed the deal on a conference call with analysts Thursday but did not address what the management structure would look like. However, it would make sense for Kelly to remain chairman and Tassiopoulos to stay on as CEO. The execs did not give a projected date for closing the deal, which requires shareholder approval.

Kelly became Sphere 3D chairman last September when the two vendors formed a partnership around developing a storage platform for application virtualization.

Sphere 3D’s Glassware platform allows companies to put applications from PCs, servers and mobile devices in the cloud. The companies have an integrated product running Glassware technology on Overland SnapServer DX2 NAS appliances.

Kelly said the first phase of the Tandberg acquisition – including integration of supply chains and internal operations – was completed in March and the second phase is due to finish by the end of June. Overland CFO Kurt Kalbfleisch said he expects the Tandberg merger to reduce the companies’ operating expenses by at least $45 million by the end of 2014.

Overland’s long history of losing money continued last quarter when it lost $6.6 million, despite a sharp increase in revenue following the Tandberg deal. Revenue of $22.3 million was double the revenue from the same quarter last year and up from $10.6 million in the last quarter of 2013.

Kelly said the Sphere 3D merger means “as a combined company, we now have greater financial and operational scale, and a clear path for growth and profitability.” He said the business strategy will include selling software, cloud services and appliances. He did not discuss plans for any specific products in Overland’s tape and disk backup, SAN or NAS families.

Of the combined Glassware-SnapServer DX2 product, Kelly added, “as you start looking at what’s happening in the industry in terms of virtualization, in terms of cloud, and how that integrates with the back-end storage, you see that by putting the two technologies together, we have been able to deliver a product line that we believe is the first to the market.”

Kelly said Sphere 3D’s technology will also work with Tandberg’s products, which include tape libraries and drives, RDX removable disk, disk backup and low-end NAS.

May 14, 2014  4:21 PM

Atlantis partners with VMware for VDI, VSAN

Dave Raffo

VMware Virtual SAN (VSAN) can be a disruptive force among the rapidly growing roster of software-defined storage startups. But rather than fight VMware, Atlantis Computing wants to play a complementary role to VSAN.

Atlantis today said its Ilio software platform supports VSAN and VMware Horizon 6 VDI software, and that channel partners will bundle Ilio with the VMware software. That’s no surprise. During the VMware Partner Exchange in March, Atlantis said it would partner with VMware to bundle its new USX software with VSAN. Atlantis VP of marketing Gregg Holzrichter said that meet-in-the-channel relationship will go into effect within the next six weeks.

Atlantis had focused on VDI with its Ilio software until it launched USX for virtual servers in February. With USX, the startup can now reduce the amount of storage needed for virtual desktops and virtual servers. Holzrichter said the VMware-Atlantis partnership will revolve around VDI, which VMware has identified as one of the major use cases for VSAN. The Ilio USX software can provide data management features still lacking in the first version of VSAN. These include deduplication and compression, key technologies for VDI.
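
For readers unfamiliar with how inline deduplication and compression cut VDI storage, here is a minimal sketch of the general technique, not Atlantis’ implementation; the 4 KB block size, SHA-256 fingerprinting and zlib compression are assumptions chosen for illustration.

```python
# Minimal sketch of inline block-level deduplication plus compression,
# the general technique described above (illustrative only; block size,
# hashing and zlib compression are assumptions, not Atlantis internals).
import hashlib
import zlib

BLOCK_SIZE = 4096  # assumed block size for illustration

class DedupeStore:
    def __init__(self):
        self.blocks = {}   # fingerprint -> compressed unique block
        self.volumes = {}  # volume name -> list of fingerprints

    def write(self, volume, data):
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:          # store only unique blocks
                self.blocks[fp] = zlib.compress(block)
            refs.append(fp)
        self.volumes[volume] = refs

    def read(self, volume):
        return b"".join(zlib.decompress(self.blocks[fp])
                        for fp in self.volumes[volume])

# Two nearly identical desktop images share most blocks, so the
# unique-block store stays small -- the core win for VDI.
store = DedupeStore()
gold_image = b"OS" * 10000
store.write("desktop-01", gold_image)
store.write("desktop-02", gold_image)
print(len(store.blocks), "unique blocks stored for 2 desktops")
```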

“We’ve been working with VMware to show how the Atlantis Ilio platform extends the capabilities of VSAN in a positive way,” Holzrichter said. “It’s an interesting combination where we allow you to drive down the cost per desktop significantly compared to traditional storage.”

It will be interesting to see where the partnership goes. If there is strong customer interest in using Ilio with VSAN and Horizon, VMware might OEM the software or acquire Atlantis as it did with Atlantis rival Virsto in 2013.

Then again, this could be a temporary arrangement until VMware develops its own data management features, or imports them from its parent EMC or Virsto. VMware no longer sells Virsto software but is expected to add Virsto technology to VSAN.

Holzrichter, who previously worked for VMware and Virsto, said there is room for both Virsto and Ilio technology with VSAN. “If VMware does implement the data services of Virsto, that will not overlap with the Atlantis data services,” he said. “Virsto has best in class snapshots and cloning technology, where Atlantis has best in class inline dedupe, compression, I/O processing and a unique way of using server RAM.”

Atlantis this week also said it has been awarded a patent for its content-aware I/O processing.


May 14, 2014  9:32 AM

Storage lifespans: don’t confuse technology with data

Randy Kerns

Clarification is needed about what lifespan means for storage, because product messaging often refers to both the technology and the data in the same context and creates confusion.

Lifespans of storage systems refer to many things: wear-out mechanisms for devices, technology obsolescence in the face of new developments, inadequacies of dealing with changing demands for performance and capacity, and physical issues such as space and power.

The wear-out mechanisms are tied to support costs, which typically increase dramatically after the warranty period, usually three to five years for enterprise storage systems. These issues all lead to a cycle of planned replacement of storage systems, often triggered by the depreciation schedule for the asset.

For the information or data stored on a storage system, the lifespan depends on the characteristics and policies of that data. Information subject to regulatory compliance usually has a defined lifespan or retention period. Other data may be subject to business governance about retention. Most data is not so clearly defined, and its disposition is left to the data’s owners (business owners, in many discussions). Typically, data is retained for a long time – perhaps decades or even forever.

The confusion arises over how to update the storage technology independently of the content stored on it. This requires changing technology without disrupting access to the data, without requiring migration that entails additional administrative effort and operational expense, and without creating risk of impact or data loss. These concerns are addressed with the many implementations of scale-out technology delivered with NAS or object storage systems.

Clustering, grids, rings and other interconnect and data distribution technologies are key to scale-out. Nodes can be added to a configuration (cluster, grid, ring, etc.) and data is automatically and transparently redistributed. Nodes can be retired – data is evacuated and redistributed automatically, and once a node is empty it can be removed – all with transparent operation.
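
One common way scale-out systems deliver that transparent redistribution is consistent hashing. The sketch below is a generic illustration of the idea, not any particular vendor’s implementation; the virtual-node count and MD5 hashing are arbitrary choices.

```python
# Generic sketch of consistent-hash placement: adding a node moves only a
# fraction of the objects, and retiring a node only evacuates that node's
# share. Illustrative only; not a specific vendor's implementation.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes
        self.ring = []           # sorted list of (hash, node)
        for n in nodes:
            self.add_node(n)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            bisect.insort(self.ring, (self._hash(f"{node}:{i}"), node))

    def remove_node(self, node):
        self.ring = [(h, n) for h, n in self.ring if n != node]

    def locate(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self.ring, (h, chr(0x10FFFF)))
        return self.ring[idx % len(self.ring)][1]

ring = HashRing(["node1", "node2", "node3"])
before = {f"obj{i}": ring.locate(f"obj{i}") for i in range(1000)}
ring.add_node("node4")
moved = sum(1 for k, v in before.items() if ring.locate(k) != v)
print(f"{moved} of 1000 objects moved after adding node4; the rest stay put")
```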

These scale-out characteristics allow storage technology to progress: new technology replaces old. This usually happens within the constraints of a particular vendor’s software or hardware implementation. The important development is that the data is independent of the storage technology change.

For data, the format and the application are the big issues. Data may need to be converted to another form whenever the application that accesses it changes and the old format is no longer supported. Being able to access data from an application is more important than merely storing information. The ability to understand the data is independent of the storage. Updating technology and moving data along with storage technology improvements is possible and is being addressed with the new scale-out systems. Dealing with formats that persist over time is a separate issue that can be independent of the storage technology.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


May 13, 2014  4:35 PM

Microsoft previews Azure Files, Azure Site Recovery at TechEd

Carol Sliwa

Microsoft previewed the Azure Files service and new Azure-based disaster recovery capabilities yesterday at TechEd North America in Houston, in line with its major conference theme of connecting on-premise systems and the public cloud.

The Azure Files service is designed to address the problem of moving an on-premise application, or data that uses file-based storage, to object-based storage in the cloud. Local applications often run on virtual machines (VMs) and use traditional file protocols, such as Server Message Block (SMB), to access shared storage. But cloud-based object storage is generally accessible via REST APIs.

Until now, enterprises had to rewrite the applications to use REST APIs or use a gateway product to shift their application data to Microsoft’s Azure cloud storage. Azure Files gives them the option to access an Azure File share using SMB 2.1 or REST APIs, allowing Azure to act as a cloud NAS.
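
For a rough idea of what "Azure as a cloud NAS" looks like in practice, the sketch below mounts a file share over SMB from a Linux VM assumed to be in the same region. The account, share and key values are placeholders, and the exact mount options may vary by distribution.

```python
# Rough sketch of mounting an Azure File share from a Linux VM over SMB 2.1.
# Account name, share name and key are placeholders; options are assumptions.
import subprocess

ACCOUNT = "mystorageaccount"      # placeholder storage account
SHARE = "myshare"                 # placeholder file share
KEY = "<storage-account-key>"     # placeholder secret
MOUNT_POINT = "/mnt/azurefiles"

def mount_azure_share():
    subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
    subprocess.run(
        [
            "mount", "-t", "cifs",
            f"//{ACCOUNT}.file.core.windows.net/{SHARE}",
            MOUNT_POINT,
            "-o", f"vers=2.1,username={ACCOUNT},password={KEY}",
        ],
        check=True,
    )

if __name__ == "__main__":
    mount_azure_share()  # after this, ordinary file I/O works on the share
```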

“Think of now having shared and common storage in Azure with an SMB protocol head to it that all your VMs in Azure — all the applications that you’re writing — can now use in a shared manner,” Brad Anderson, vice president of Microsoft’s cloud and enterprise division, said during the opening keynote.

The Azure Files service is available as a public preview. Microsoft declined to provide the expected date for the general release.

Microsoft also has yet to set a timetable for SMB 3.0 support. When Azure Files shares are accessed via the currently supported SMB 2.1, they are available only to VMs within the same region as the storage account. The REST APIs, however, allow concurrent file access from anywhere, according to a Microsoft Azure storage team blog post.

According to the blog post, the scalability targets for the Azure Files preview are up to 5 TB per file share, a file size of up to 1 TB, up to 1,000 IOPS (at 8 KB block size) per file share and throughput up to 60 MB/s per file share for large I/O.
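
To put those targets in context, a quick back-of-the-envelope check like the one below shows whether a given workload fits inside a single share. The workload figures are invented for illustration.

```python
# Check a hypothetical workload against the published Azure Files preview
# targets (5 TB/share, 1 TB/file, 1,000 IOPS at 8 KB, 60 MB/s large I/O).
# The workload profile below is made up for illustration.
PREVIEW_LIMITS = {
    "capacity_tb": 5,
    "max_file_tb": 1,
    "iops_8k": 1000,
    "throughput_mbps": 60,
}

workload = {  # hypothetical application profile
    "capacity_tb": 3.2,
    "max_file_tb": 0.4,
    "iops_8k": 800,
    "throughput_mbps": 45,
}

for metric, limit in PREVIEW_LIMITS.items():
    fits = workload[metric] <= limit
    print(f"{metric}: {'fits in one share' if fits else 'needs multiple shares'}")
```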

Pricing for Microsoft Azure Files is 4 cents per GB for locally redundant storage during the preview period. The price includes a 50% preview discount. Geographically redundant storage is not available during the preview period, according to Microsoft’s Azure site.

Microsoft also unveiled new capabilities for Azure Site Recovery (formerly Azure Hyper-V Recovery Manager) at TechEd. The new capabilities will enable customers to replicate VMs from their own data centers directly to Azure and coordinate the recovery of workloads in the cloud. A preview is due next month, according to Anderson.

“This is the No. 1 request that we have heard for Hyper-V Replication Manager today,” Anderson said. He said customers will have a “complete disaster recovery solution with the ability to seamlessly fail over in an unplanned or planned manner to Azure.”

Anderson said disaster recovery (DR) is typically reserved for only the most mission-critical applications because it’s too expensive and too difficult. But he claimed the simplicity of Azure Site Recovery makes the service suitable for all workloads.

Microsoft priced Hyper-V Recovery Manager by the number of VMs protected, based on the average daily number over a monthly period. Pricing was $16 per VM protected.
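
A quick illustration of how that pricing model works out, with made-up daily VM counts:

```python
# Illustration of per-VM pricing: $16 per protected VM, charged on the
# average daily count over the month. Daily counts are invented examples.
PRICE_PER_VM = 16.00

daily_protected_vms = [40] * 10 + [55] * 20   # e.g. scaled up mid-month
average_vms = sum(daily_protected_vms) / len(daily_protected_vms)
monthly_cost = average_vms * PRICE_PER_VM

print(f"average protected VMs: {average_vms:.1f}")
print(f"estimated monthly charge: ${monthly_cost:,.2f}")
```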


May 12, 2014  8:06 AM

Startup Maxta gets $25 million in funding, partners with Intel

Sonia Lelii

Maxta, a software-based hyper-convergence startup, last week picked up $25 million in series B funding and a significant strategic investor.

Intel Capital participated in the round, which brings Maxta’s total funding to $35 million. Maxta’s distributed software runs on virtual servers to pool flash and disk storage capacity at the server level, allowing customers to build a SAN with commodity hardware.

Maxta founder and CEO Yoram Novick said the startup is involved in a long-term strategic partnership with Intel to develop a software-based, virtual storage technology that works with Intel chips, motherboards and servers.

“We believe the main challenge for the virtual data center is storage,” Novick said. “The compute side has improved a lot while traditional storage has not changed much. Intel is working on features to provide better storage in a converged infrastructure. We saw the same thing coming, so we decided to work together. We will work with them to develop a better platform. We will add more features to leverage their architecture.”

Novick said the Maxta storage platform architecture (MxSP) is hypervisor agnostic, although it primarily works in VMware environments. Maxta also announced it is supporting Microsoft Hyper-V and KVM installations if a customer requests it.

“If a customer wants it, they will need to talk to us,” Novick said. “We do it with customers working with other hypervisors. We have limited availability for other solutions.”

The MxSP software does checksums to ensure data integrity, along with local replication to eliminate a single point of failure. It accelerates writes with write-back caching on solid state drives (SSDs) and a log-based data layout. It accelerates reads by caching metadata and hot data on SSDs. It also has the ability to co-locate virtual machines and the associated data.
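
As a rough illustration of the write path such a design implies (checksum each block, acknowledge once it lands in the SSD-backed write-back log, destage later, verify on read), here is a generic sketch; it is not MxSP code, and the structures are simplified assumptions.

```python
# Generic sketch of a checksummed, write-back-cached write path like the one
# described above. Illustrative only; not Maxta's MxSP implementation.
import zlib

class ServerStore:
    def __init__(self):
        self.ssd_log = []        # simulated SSD write-back log
        self.disk = {}           # simulated capacity tier
        self.checksums = {}      # block id -> checksum

    def write(self, block_id, data):
        self.checksums[block_id] = zlib.crc32(data)
        self.ssd_log.append((block_id, data))   # ack once logged on SSD
        return "ack"

    def flush(self):
        # Destage the log to the capacity tier in the background.
        for block_id, data in self.ssd_log:
            self.disk[block_id] = data
        self.ssd_log.clear()

    def read(self, block_id):
        data = self.disk[block_id]
        if zlib.crc32(data) != self.checksums[block_id]:
            raise IOError(f"checksum mismatch on block {block_id}")
        return data

store = ServerStore()
store.write("vm1-blk0", b"hello vdi")
store.flush()
print(store.read("vm1-blk0"))
```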

Novick said use cases include primary storage, disaster recovery, virtual desktop infrastructure (VDI), cloud, and test and development. He said the new funding will be used to expand sales and marketing. The Sunnyvale, California-based company has 40 employees, mostly engineers.

The B round was led by Tenaya Capital; existing investor Andreessen Horowitz also participated.


May 7, 2014  9:37 PM

EMC Data Protection execs disclose upcoming features, management UI

Carol Sliwa

EMC Data Protection and Availability Division executives dropped hints about upcoming snapshot and high availability features and showed off a proof of concept of a new management user interface yesterday during their super session at EMC World.

Guy Churchward, the division’s president, told attendees to expect a big announcement “shortly” for a new capability called Centaur. A slide referred to it as “snap shot primary to protection storage.” An EMC representative later confirmed Centaur is a “future deliverable.”

“Wouldn’t it be nice if we could actually spit directly from something like a VMAX into a Data Domain? And therefore you actually end run the software backup stack,” Churchward said. “Performance-wise, it’s great. Optimization-wise, it’s great. We’re always looking at disrupting this industry and actually driving a level of innovation.”

Churchward also disclosed plans for protection storage high availability (HA). He said that would take EMC Data Domain’s Data Invulnerability Architecture “just a tiny bit further.” Churchward didn’t supply a date. As with Centaur, the EMC representative would say only that HA is a “future deliverable.”

After displaying a slide illustrating some of the management user interface (UI) improvements for backup and recovery, Churchward issued the following caveat to attendees: “This is a concept of what you will be seeing in the next 18, 24 months of a UI of the future.”

The UI’s initial screen was divided into three segments: system optimization, system health and a data chat box for seeking outside help.

The health portion of the screen listed the total number of systems under management and information such as the number under self-care or EMC-assisted care and the number for which operational fixes were available.

Under system optimization, the UI displayed the number of systems optimized and unoptimized in categories such as capacity forecast, garbage collection and replication lag. The dashboard could indicate the number of systems running out of capacity within 90 days and let the user drill down for more detailed, actionable information, according to Stephen Manley, CTO of EMC’s Data Protection and Availability Division.
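
A dashboard item such as "running out of capacity within 90 days" typically comes from simple trend extrapolation over usage telemetry. The sketch below illustrates that idea with invented numbers; it is not EMC’s algorithm.

```python
# Sketch of a capacity forecast: fit recent daily usage samples to a line and
# project when the system hits its limit. Telemetry values are invented.
def days_until_full(daily_used_tb, capacity_tb):
    n = len(daily_used_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_used_tb) / n
    # least-squares slope (TB/day) of the usage trend
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_used_tb))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # usage flat or shrinking
    return (capacity_tb - daily_used_tb[-1]) / slope

usage = [70 + 0.3 * d for d in range(30)]   # 30 days, ~0.3 TB/day growth
remaining = days_until_full(usage, capacity_tb=100)
print(f"projected to fill in ~{remaining:.0f} days"
      if remaining and remaining <= 90 else "outside the 90-day window")
```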

Manley outlined an example of a system that wasn’t seeing a good deduplication ratio because its workload included videos and encrypted data that doesn’t deduplicate well. The UI supplied information on options to resolve the issue, such as moving the videos from EMC’s Data Domain to VMware’s vCloud Hybrid Service (vCHS) and the encrypted data to an encrypted container.

“Now the cool thing with this move is it’s going to wait until a good time to move, when the performance and the network bandwidth are available,” Manley said.

In addition to explaining the new UI concept, Manley laid out the company’s vision for providing data protection that can span on-premise, virtualized and newer hybrid and “born in the cloud” consumption models.

“The future of data protection is in metadata,” Manley asserted. “It’s that information about your infrastructure, about your applications, the information about your information, who owns it, the tags, the keywords that are associated with it. That’s what’s going to move us forward.”

Manley broke down the discussion into three areas: hybrid cloud mobility (“the right data in the right place”), analytics-driven automation and cloud-centric data management.

On hybrid cloud mobility: Manley said a company might want to shift data for disaster recovery or analytics, but it needs to understand where it can move the data and what tools will facilitate the migration. “If I move it, is my protection still going to be there? That’s that infrastructure analytics I need and the metadata that goes with it,” he said.

He said application metadata can provide information to ensure the systems function well after the move. “Data mobility is really the lifeblood of the hybrid cloud, and metadata is how you’re going to make it work,” Manley said.

On analytics-driven automation: Manley said he has spoken with customers who have “gathered all the metadata into this lake” and ask him, “Now what?” Those doing analytics are often buried in reports and dashboards.

He said he often fields questions such as: “Am I getting the most I can out of my Data Domain? Am I getting the right dedupe rate? Am I getting the right performance? Should I be upgrading? Should I add more storage? Should I look at a different type of protection storage?”

“Right now, the answer to that is based on some experience and a lot of black magic,” he said. “But, we can do better.”

EMC already captures information about usage on installed systems to help with customer support. Manley said EMC could feed the telemetry into a Greenplum database, analyze it and apply algorithms to make sure the data is in the real-world sweet spot, “not just the white paper sweet spot.”

“What we really need is a social network of machines that connect to each other so we see the interrelationships and then connect to humans so we can take action on it,” Manley said. The answer lies in metadata, he said.

On cloud-centric data management: Manley discussed the need for metadata about a cloud provider’s infrastructure, such as logs to verify service delivery. He said customers may want to move data either within the provider’s infrastructure or to another cloud provider, or they may need to retrieve data. Searching on- and off-premise, they need the single source of metadata to locate what they need, he said.

“That means you need to do the metadata across the products,” said Churchward. “We’re going to play with things like RecoverPoint and Vplex and whatever, mush it up and it’s all going to be magic and it’ll happen.”

After Manley said “yes” with great enthusiasm, Churchward said, “Yeah, no wonder you’re a CTO.”


May 7, 2014  7:53 AM

EMC adds security to Syncplicity file sharing

Sonia Lelii

Las Vegas — Like many online file sharing companies, EMC Syncplicity is trying to balance the user experience with security functions that IT departments demand.

At EMC World this week, Syncplicity added two enhancements to its online file sharing product — a Storage Vault Authentication function to set security controls on storage repositories and a Syncplicity Connector for SharePoint so data from that application can be accessed and edited via mobile devices.

EMC first integrated a policy-based Storage Vault capability last year that let IT administrators select where data is stored according to user, group, folder, file or content type. The storage can be on-premises or in a private or public cloud. The new Storage Vault Authentication capability gives the IT department the ability to set a second layer of security controls for sync-and-share data.

“Security has been at odds with usability,” said Jeetu Patel, vice president and general manager of EMC’s Syncplicity business unit. “Your design points should not be at odds. It’s the way you implement that capability. When you add security, there is a way to enhance productivity. That may sound counter-intuitive.”

Patel said the second layer authentication function allows IT to set policy-based controls on the Storage Vault repositories holding certain sensitive data. Previously, user authorization controls to access sync and share data were on the Syncplicity application only.
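
Conceptually, the second layer means a request must pass both the Syncplicity application’s check and a policy attached to the storage vault holding the data. The sketch below illustrates the idea; the policy names and rules are hypothetical, not Syncplicity’s actual controls.

```python
# Conceptual sketch of two-layer access control: the application authorizes
# the user, and the storage vault's own policy must also allow the request.
# Policy names and rules below are hypothetical.
app_authorized_users = {"alice", "bob"}

vault_policies = {
    "finance-vault": {"allowed_groups": {"finance"}, "require_corp_network": True},
    "general-vault": {"allowed_groups": {"finance", "engineering"},
                      "require_corp_network": False},
}

def can_access(user, groups, vault, on_corp_network):
    if user not in app_authorized_users:            # layer 1: application auth
        return False
    policy = vault_policies[vault]                  # layer 2: vault policy
    if not (groups & policy["allowed_groups"]):
        return False
    if policy["require_corp_network"] and not on_corp_network:
        return False
    return True

print(can_access("alice", {"finance"}, "finance-vault", on_corp_network=True))   # True
print(can_access("alice", {"finance"}, "finance-vault", on_corp_network=False))  # False
```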

“This was driven by enterprise customers,” Patel said. “It’s for companies that say, ‘I’m still nervous about the cloud.’ We give them a second layer of authentication. So not only does Syncplicity do authorization but the repository has to allow authorization. You might not need this for all content.”

Patel said the Syncplicity Connector for SharePoint works as a repository for content and helps bridge the gap between the SharePoint application and EMC’s sync-and-share application. Online file sharing applications often are used to replace SharePoint as a collaboration tool, but companies may not move all files out of SharePoint.

“A lot of repositories don’t die,” Patel said. “You may have found a more progressive way to do things but you still have to access data from those repositories. You shouldn’t have to take on a massive migration problem.”

The Syncplicity file sync-and-share application is expected to play a role in EMC’s federation business model, in which product development, marketing and sales are balanced among the Pivotal, VMware and EMC Information Infrastructure businesses. EMC has identified mobile devices, social platforms, big data, cloud and security as the main growth areas.

“We will work with these assets when it makes sense,” said Patel. “For instance, you can expect to see integration with (VMware’s) AirWatch mobile device manager. There is a lot of collaboration we are doing with different units.”


May 5, 2014  9:25 AM

EMC World 2014 opens with cloud appliance, ViPR upgrade

Dave Raffo

LAS VEGAS — EMC World 2014 opened this morning with the launch of a cloud storage appliance and the next version of ViPR. The products represent two areas that EMC will focus on during the four-day conference – the cloud and software-defined storage.

EMC first unveiled the concept of ViPR at last year’s EMC World, and began shipping the software last fall with support for object storage and EMC VNX, VMAX, and Isilon arrays as well as NetApp arrays. The 1.1 upgrade in January added support for Hadoop and EMC Storage Resource Management suite but no additional arrays.

ViPR 2.0 supports HDS and commodity storage natively. Support for Dell, Hewlett-Packard and IBM arrays requires an OpenStack Cinder block storage plug-in. The new version of ViPR also includes geo-distribution and multi-tenancy support for clouds.

The EMC Elastic Cloud Storage (ECS) Appliance – known as Project Nile during its beta — is designed for public and hybrid clouds, and will scale to 2.9 PB in one rack. ECS is built on the ViPR platform. EMC did not offer many specifics in its initial press release, but more information will be available during the conference.

In a blog posted this morning, Manuvir Das, EMC’s VP of engineering for its advanced software division, listed features of the ECS appliance:

  • Universal protocol support in a single platform with support for block, object, and HDFS [Hadoop distributed file system]
  • Single management view across multiple types of infrastructures
  • Multi-site, active-active architecture with a single global namespace enabling the management of a geographically distributed environment as a single logical resource using metadata-driven policies to distribute and protect content, and
  • Multi-tenancy support, detailed metering, and an intuitive self-service portal, as well as billing integration.

 


May 2, 2014  12:47 PM

Nutanix tackles VDI on per desktop basis

Dave Raffo

Virtual desktop infrastructure (VDI) is a big use case for storage that includes flash, as well as storage built for virtualization. Nutanix fits both categories with its hyper-converged Virtual Computing Platform that includes flash and hard disk drives, so it’s no surprise that VDI is a key market for the startup.

This week Nutanix unveiled a per-desktop program to make it easier for customers to size their systems for VDI. Customers tell Nutanix how many virtual desktops they want to deploy, and Nutanix recommends the right system and guarantees performance based on user profiles. Nutanix has four VDI customer profiles – kiosk, task, knowledge user and power user.

Greg Smith, Nutanix senior director of product marketing, said pricing for the VDI-specific systems starts at $200 per desktop for storage and computing. The systems start at 150 virtual desktops. If performance is unsatisfactory, Nutanix will provide the right hardware and software to make it work, Smith said.
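
Using only the figures quoted above ($200 per desktop, 150-desktop minimum), a quick estimate looks like the sketch below; the requested desktop counts are examples, not Nutanix sizing output.

```python
# Back-of-the-envelope estimate from the quoted figures: $200 per desktop
# for storage and compute, with a 150-desktop minimum configuration.
PRICE_PER_DESKTOP = 200
MINIMUM_DESKTOPS = 150

def estimate_cost(requested_desktops):
    desktops = max(requested_desktops, MINIMUM_DESKTOPS)
    return desktops, desktops * PRICE_PER_DESKTOP

for requested in (100, 150, 500):
    desktops, cost = estimate_cost(requested)
    print(f"{requested} requested -> priced at {desktops} desktops: ${cost:,}")
```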

“Normally the onus is on the end user to build it themselves,” he said. “We’re making things simpler. Customers tell us how many VDI users they have, what types of users they are, and we provide the infrastructure.”

Smith said Nutanix customers often start with VDI and then add other applications to their appliances. Per desktop customers can go back and add servers and clusters for other apps if they want to expand.

Smith said Nutanix’s Prism software optimizes VDI performance, so no other software is necessary beyond VMware Horizon View or Citrix XenDesktop.


May 2, 2014  7:27 AM

When a technology goes from mainstream to niche

Randy Kerns

There was an interesting comment made by a person in the audience where I was giving a talk. It was about a major technology area I had worked in early in my career, and the comment was that it is now a niche technology. I had not thought about that much until then, but it made sense. And it brought on other thoughts about how natural it is for a technology to reach a zenith and then be eclipsed by other technologies. Maybe it will disappear altogether, or it will continue to have value for a protracted period without being the primary technology in use.

So, how does a technology go from mainstream to niche? I asked that question of people in the industry to get their opinions. First, the discussion gravitated toward eliminating technologies that had never gained a measure of success, generally considered to be “in widespread usage” if not the dominant technology in use. The favorites or pet technologies that some were enamored with because of their potential or “coolness” were also eliminated. Those that have not yet achieved mainstream usage are usually referred to as “emerging” or “developing.”

So a mainstream technology is one in widespread or dominant usage. When a technology is no longer the primary one in use but still has value and has not disappeared, it becomes a niche. A niche technology is usually characterized by declining, or at least flat, revenue.

When a technology becomes niche, the perceptions around it change, and that can affect its future. This could lead to economic impacts for companies and career impacts for individuals. Of course, technologies that have been successful in high-value solutions have a long tail – they continue to generate revenue and sustain careers for long periods. That is especially true in the storage industry, where change is much slower than most realize.

A technology that has moved from mainstream to niche is interesting to track as the industry continues to evolve. It is a signpost of sorts indicating inflection points in the industry. It’s not a bad thing – it’s just the natural order. Those who started working in technologies that have come and gone, such as vacuum tubes, can probably tell the story of the gradual decline after solid-state electronics replaced them.

The question left hanging here is what niche technologies today were once mainstream. It does not mean they are no longer highly valuable.  And, they will probably continue to be used for a long time.  I’ve worked on several that are not dominant anymore.  They still have great value and have been the foundation for other technologies and systems. Because of their widespread use, they may not be niche.  Things just change.  But it does make for interesting discussions.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

