Storage Soup


August 4, 2016  6:02 PM

EMC container plugin supports any block storage

Garry Kranz

EMC has contributed an open source Apache Mesos container volume driver that supports any network-attached block storage system equipped with a Docker plugin, including storage from EMC competitors.

The EMC container plugin integration for Docker is a joint project of the Apache Software Foundation and EMC {code}, part of EMC's Emerging Technologies Division. It builds on previous EMC container initiatives. The Docker Volume Driver Isolator module exposes native Docker functionality through a command line interface. It is part of the Apache Mesos distribution released in July.

“We’re making it possible for the community to do multi-tiered persistent storage within Docker, which up to now has been a struggle,” said Josh Bernstein, a vice president at EMC {code}.

Mesos orchestrates deployment of containers on premises or in cloud storage. The Apache Mesos cluster manager presents abstracted data center compute, memory and storage in an aggregated resource pool. Mesos acts as a distributed systems kernel, isolating resources as applications are shared across a distributed framework.

Mesos lets users create a persistent volume from reserved disk to run a specific task. The volume persists on a node independently of the task’s sandbox and is returned to the orchestration framework when the task is complete. If necessary, new or related tasks launch a container that consumes resources from the previous task. Docker recommends Apache Mesos as an orchestration layer to implement large clusters of storage containers.

EMC’s container module communicates directly with Docker volume plugins, allowing developers to request a persistent volume from any block storage running under Mesos. Mesos then passes the request to the EMC module, which searches available storage to identify the volume and delivers it to the destined container host.
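
The interaction is easy to picture from the developer’s side. Below is a minimal sketch, in Python with the Docker SDK, of requesting a persistent volume through a Docker volume plugin and mounting it in a container. The driver name (“rexray,” EMC {code}’s open source volume driver) and the size option are illustrative assumptions, not details from the announcement:

    # pip install docker -- the Docker SDK for Python
    import docker

    client = docker.from_env()

    # Ask the volume plugin for a persistent volume. The driver name and
    # options are assumptions for illustration; each plugin defines its own.
    volume = client.volumes.create(
        name="pgdata",
        driver="rexray",
        driver_opts={"size": "16"},  # e.g., capacity in GB
    )

    # Any container that mounts the named volume gets the same backing
    # storage, even if the orchestrator reschedules it to another host.
    client.containers.run(
        "postgres:9.5",
        detach=True,
        volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    )

Under Mesos, the new isolator performs the equivalent volume lookup and mount on whichever agent node launches the task.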

“Before this feature, while users could use persistent volumes for running stateful services, there were some limitations. First, the users were not able to easily use non-local storage volumes. Second, data migrations for local persistent volumes had to be manually handled by operators. The newly added Docker volume isolator addresses these limitations,” according to an Apache Software Foundation blog post published July 27.

Enterprise adoption of Docker is picking up, although several hurdles remain before containers are as ubiquitous as virtual machines. The Apache Mesos integration foreshadows EMC’s open source libStorage project. LibStorage is an extensible storage abstraction and provisioning framework presented as a common package for heterogeneous storage platforms and container runtimes.

August 4, 2016  9:25 AM

Ctera builds new data migration, ILM and security capabilities into its platform

Sonia Lelii

Ctera Networks recently unveiled enhancements to its Enterprise File Services Platform that include the ability to migrate data from an on-premises cloud to the public cloud without disrupting service, and support for information lifecycle management (ILM) tools from Amazon S3 and NetApp StorageGRID object storage.

The platform also has been upgraded to support Security Assertion Markup Language (SAML) 2.0, centralizing identity management and providing single sign-on (SSO) for access to files and backups.

The new data migration tools target customers who have not yet deployed a cloud strategy, want to start on-premises and need the flexibility to eventually move to the public cloud.

The Ctera Enterprise File Services Platform integrates enterprise file sync and share, endpoint and data protection, along with branch and remote office storage. The new capability allows users to migrate workloads across storage nodes from any on-premises location to the public cloud.

“You can start with Ctera in (your) data center and then you can move to a public cloud. It moves very quickly. It provides flexibility,” said Jeff Denworth, Ctera’s senior vice president of marketing. “We built this tool because no one wants to be locked in. Now, you have a marketplace of options.”

The new ILM capability gives users a way to use the Ctera platform to tier high-performance workloads onto NetApp StorageGRID and infrequently accessed data to Amazon S3. It leverages ILM tools from Amazon S3 and NetApp StorageGRID to intelligently place files in cloud storage tiers according to their application profile.

Long-term archive and backup data can be directed to low-cost storage tiers, such as Amazon Web Services’ (AWS) S3 Standard-Infrequent Access (Standard-IA) tier, while interactive data, such as enterprise file sync and share workloads, can be stored on storage tiers that offer more cost-efficient ingress and egress capabilities.
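
Ctera’s policy engine handles the tagging, but the S3-side ILM facility it builds on is the bucket lifecycle configuration. As a rough sketch of that facility in Python with AWS’s boto3 (the bucket name and prefix convention are illustrative assumptions, and this is not Ctera’s actual tooling):

    import boto3

    s3 = boto3.client("s3")

    # Transition objects under the archive/ prefix to the cheaper
    # Standard-IA tier once they are 30 days old (the S3 minimum for IA).
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-archive-bucket",  # assumed name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-to-standard-ia",
                    "Filter": {"Prefix": "archive/"},
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"}
                    ],
                }
            ]
        },
    )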

“We tag data as it goes through our system as either interactive or archival and then it’s diverted to the general purpose tier like S3,” Denworth said.

On the security front, Ctera now supports identity federation over SAML 2.0, so users can rely on centralized corporate identity management and gain SSO capabilities for file and backup access. In conjunction with support for the new standard, Ctera is now compatible with leading SSO offerings, including Microsoft Active Directory Federation Services 2.0, Okta, OneLogin and Ping Identity.

“(The platform) has been integrated with modern identity tools so users sign in with SSO,” Denworth said.

The Ctera Enterprise File Services Platform enables enterprise IT to protect data and manage files across endpoints, offices, and the cloud – all within the organization’s on-premises or virtual on-premises cloud storage.

The platform is powered by Ctera’s cloud service delivery middleware that users leverage to create, deliver, and manage cloud storage-based services such as enterprise file sync and share, in-cloud data protection, endpoint and remote server backup, and office storage modernization.


August 2, 2016  8:13 PM

Seagate, WD see exabyte growth with high-capacity enterprise HDDs

Carol Sliwa
Seagate, Western Digital

Unit shipments of hard disk drives (HDDs) may be on the decline, but the exabytes that Seagate Technology and Western Digital are shipping with their high-capacity enterprise HDDs are spiking.

Seagate noted during its earnings call today that HDD storage capacity hit a record 61.7 exabytes (EB) during the fiscal fourth quarter, on the heels of 60.6 EB in Q2 and 55.6 EB in Q3. Average per-drive capacity soared to a record 1.7 TB in Seagate’s fiscal Q4, which ended on July 1.

Steve Luczo, Seagate’s chairman and CEO, said demand was stronger than expected from cloud service providers (CSPs) in the fourth quarter. He noted that, on a year-over-year basis, average per-drive capacity grew 29%. In fiscal 2016, Seagate shipped 233 exabytes, including 70 exabytes for its “business-critical” product line – a 28% increase over the prior year.

Western Digital last week claimed to achieve overall exabyte growth of 12% on a year-over-year basis, largely driven by shipments of capacity enterprise HDDs to enterprise customers, according to Michael Cordano, president and chief operating officer. He said the growth of WD’s capacity-focused enterprise product line was 47% thanks to the ongoing success of high-capacity helium-based HDDs.

WD last week reported revenue of $13.0 billion for its last fiscal year, down 11% from last year’s $14.6 billion, and net income of $257 million for fiscal 2016. WD’s fourth-quarter revenue was $3.5 billion, and the company reported a $351 million loss for the quarter.

Seagate Technology met or exceeded analysts’ expectations with $2.7 billion in revenue for its fiscal fourth quarter, largely driven by sales to cloud service providers. Seagate’s total revenue for fiscal 2016 was $11.2 billion, down 18.8% from last year’s $13.8 billion. Net income for the year was $248 million.

Both Seagate and Western Digital have been trying to diversify beyond their HDD businesses. WD last year acquired flash vendor SanDisk for $19 billion and object storage vendor Amplidata. Other past acquisitions include HDD competitor HGST, SSD maker sTec, all-flash array startup Skyera, PCI-flash vendor Virident Systems and flash-cache specialist VeloBit.

Seagate’s string of acquisitions includes Dot Hill Systems for $600 million last year, Avago’s LSI flash business in 2014 for $450 million and high-performance computing storage specialist Xyratex in 2013 for $374 million. Seagate sold off its EVault data protection business late last year to Carbonite for a mere $14 million in cash.

Luczo said Seagate completed the integration of Dot Hill and plans to launch converged storage products, including hybrid and all-flash arrays, later this year. He also noted that 12 TB helium near-line enterprise test units would be available this quarter for customer evaluation. Luczo said Seagate would refresh most of its high-volume capacity points over the next several quarters.

But Luczo cautioned that the growth rate for storage in the near term would likely fluctuate from quarter to quarter. He said the influence of the cloud service providers could be tricky to predict.

Near-line enterprise HDDs were hotter last quarter than Seagate anticipated they would be. Luczo said Seagate’s 8 TB enterprise HDD was the leading revenue SKU, as overall enterprise HDD revenue increased to 45% of total HDD sales. PC client shipments accounted for 25% of total HDD revenue.

Seagate said that although unit shipments of its HDDs have dropped 15% over the past five fiscal years, exabyte shipments have increased 112% and average capacity per drive has soared 133%. Luczo attributed the trends to the shift from client-server to mobile cloud architectures. He said most of the exabyte-scale growth relates to high-definition streaming content “where massive data ingest and sequential write operations” are critical.

Western Digital CEO Steve Milligan last week cited the transition to 3D NAND flash as a key near-term priority. He also noted that the company completed the alignment of the product and technology roadmaps for legacy WD, HGST and SanDisk products and opened a new wafer manufacturing facility in Japan with Toshiba.

WD expects 3D NAND wafer capacity to approach 40% of total NAND capacity by the end of 2017, according to Cordano.

Milligan said WD has been scaling down HDD capacity on a brick-and-mortar and head-count basis to react to the decline in the HDD market. He said WD had taken out 20% of its facilities and 25% of its head count during the last two years. Milligan said WD plans further reductions of up to one-third.


July 29, 2016  2:05 PM

OwnBackup CEO: Stay safe in the cloud

Paul Crocetti
Cloud Backup

Your cloud data may not be as secure as you think.

No matter where your data lives, you should put the same level of thought and care into its protection, according to Sam Gutmann, CEO of cloud-to-cloud backup and restore vendor OwnBackup. He pointed to the recent Salesforce outage that resulted in lost data.

“It really raised awareness,” Gutmann said. “There’s a myth that if it’s in the cloud, it’s safe.”

OwnBackup offers products to back up Salesforce data, ServiceNow data and social media accounts. Gutmann said OwnBackup allows users to compare two snapshots to see what has changed or been deleted, and then restore the database back to the way they want it. The company’s vision is to become a single pane of glass for backup and protection of software-as-a-service and platform-as-a-service data stored in the cloud.
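
Snapshot comparison of that sort reduces to a set difference over record IDs. Here is a minimal sketch in Python, assuming snapshots are dicts keyed by record ID; OwnBackup’s actual data model is not described in the post:

    def diff_snapshots(old, new):
        """Return records deleted, added and changed between two snapshots."""
        deleted = {rid: rec for rid, rec in old.items() if rid not in new}
        added = {rid: rec for rid, rec in new.items() if rid not in old}
        changed = {rid: (old[rid], new[rid])
                   for rid in old.keys() & new.keys()
                   if old[rid] != new[rid]}
        return deleted, added, changed

    monday = {"001": {"stage": "Prospect"}, "002": {"stage": "Closed Won"}}
    tuesday = {"001": {"stage": "Negotiation"}}  # record 002 was deleted

    deleted, added, changed = diff_snapshots(monday, tuesday)
    print(deleted)  # {'002': {'stage': 'Closed Won'}} -- restore candidate
    print(changed)  # {'001': ({'stage': 'Prospect'}, {'stage': 'Negotiation'})}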

OwnBackup plans to add support for another application this year and will likely add a couple more next year, but it has not determined which ones yet. Microsoft Office 365 and Google Apps are common applications supported by vendors that protect data in the cloud.

Gutmann said there is no hurry to expand its product because “Salesforce is huge.” Customers are responsible for their data in Salesforce, Gutmann said. Salesforce recommends customers use one of its “partner backup” products — which include OwnBackup — to ensure the safety of their data.

OwnBackup had customers affected by the outage who were able to restore data.

Many companies are moving business-critical data to the cloud. But the same requirements and vulnerabilities that exist on premises are also present in the cloud. Take the example of an employee on his way out the door who deletes important files.

“That threat is there no matter where the data is,” Gutmann said.

To that end, Gutmann offered a few more general tips for cloud-to-cloud backup and restore:

  • “Backup’s nice, but it all comes down to recovery,” so use a product that is strong in both disciplines
  • “Make sure the vendor understands the intricacies of your data,” and recognizes how complicated your setup is
  • Test your backup — verify you have a product that works

OwnBackup, which has sales and marketing in the United States and research and development in Israel, claims about 300 customers, ranging from small businesses to large manufacturing companies and universities. Gutmann, who helped found Intronis (now part of Barracuda) in 2003 and has been in the backup field for 16 years, said OwnBackup has about 20 employees, but that number should be closer to 30 by the end of the year.


July 29, 2016  1:59 PM

FalconStor’s dating, seeking technology match

Dave Raffo
FalconStor

There is a trend in storage for smaller vendors to band together and grow through mergers, for far different reasons than those driving Dell and EMC together. These mergers of small companies are driven by a sharp decline in venture funding, in acquisitions by larger firms and in opportunities to go public, leaving a merger as the quickest route to growth.

In 2016, we’ve seen Pivot3-NexGen, Virtual Instruments-LoadDynamiX and Gridstore-DCHQ (now called HyperGrid) mergers that combined technologies and workforces. FalconStor could be headed for one of these mergers as well.

Although FalconStor is a public company, it faces many challenges of private companies. It has little revenue and few funding options to fuel growth. If FalconStor wants to add features to its FreeStor product line, it could do so cheaper and faster by merging with a vendor that already has that technology. FalconStor already picked up real-time predictive analytics technology through a licensing and co-development deal with Cumulus Logic in 2015.

FalconStor CEO Gary Quinn hinted during his quarterly earnings call this week that a merger, acquisition or at least another technology partnership could be in the works.

“During the first half of 2016, FalconStor has been approached by a number of privately backed and publicly traded companies who are looking to find ways to partner or transform themselves into a new entity,” Quinn said. “As many of you know, a lot of the VC-backed privately held companies in the storage software category during the last several months have been unable to obtain additional capital to support their vision. They also do not have any commercially viable operations to sustain themselves. Some of those technologies would be excellent additions to the FreeStor product offering.”

He said FalconStor is reviewing opportunities for “partnerships, technology licensing or possible new combinations.”

When asked about these comments later in the call, he pointed out object storage may be a valuable addition to block-storage FreeStor. He said there are object storage vendors “who are experiencing difficulties at the moment because they don’t really have a commercially viable market yet due to the fact that that market hasn’t matured yet enough.”

FalconStor’s $8.1 million in revenue last quarter was down from $9.6 million a year ago, and it lost $3.5 million as it continues the transition to its FreeStor product line of data management and protection software.

Quinn said FalconStor is making good progress selling FreeStor subscriptions to enterprises, managed service providers and OEMs. But FalconStor has only $9.4 million in cash, so it has to find a way to reverse its losses quickly.

“We believe we have the right product in FreeStor,” Quinn said. He said a FreeStor upgrade in October will focus on service provider requirements, public cloud connectivity and enterprises that want to build self-service capabilities. That release will extend FreeStor’s real-time analytics from the data center to edge devices.

“We are very cognizant of the road ahead both from the opportunity for FreeStor and the cost to achieve it,” he said. “We are diligently looking for ways to increase our quarterly billings as well as preserving our precious cash.

“We believe that we’re getting closer and closer to break-even and we continually adjust the business as necessary.”


July 29, 2016  9:55 AM

Quantum results indicate ‘stable’ backup market

Dave Raffo
Data protection, Quantum

It’s no secret that storage sales have been hard to come by over the last year or two. Most of the large vendors report product revenue decreases or small increases quarter after quarter. But the data protection market appears to be on the upswing, or at least it’s no longer on the downswing.

Backup software vendors Veeam Software and Commvault reported positive earnings growth for last quarter, and backup hardware vendor Quantum this week said its revenue increased and exceeded its previous forecast.

Quantum’s revenue of $116.3 million last quarter increased $5.4 million, or five percent, over the same quarter last year. The fastest growth came from Quantum’s StorNext scale-out file system, which grew 11% to $30.8 million. But that’s still less than 30% of Quantum’s overall revenue. Its data protection revenue increased six percent to $76.9 million. That includes DXi disk backup revenue of $21.5 million (up 24%). Tape automation revenue dropped four percent to $42.6 million, but, hey, that’s tape. Tape OEM revenue did increase six percent, and tape devices and media revenue actually grew 17% to $12.8 million.

“We believe it’s indicative of a more stable traditional storage market, including tape backup, than the industry has seen over the last couple of years,” Quantum CEO Jon Gacek said of the results on the company earnings call. “One quarter does not a trend make, but as we look at what’s going around it, the data protection side has been solid.”

Quantum CFO Fuad Ahmad added: “As mentioned in previous calls, we’ve been impacted by overall market weakness in general purpose storage including data protection over the past year. However, we believe the market has begun to stabilize and our results reflect that.”

We don’t know yet if the overall market is turning, or vendors such as Veeam, Commvault and Quantum are taking business from larger rivals. There is no visibility from the largest data protection vendors. EMC did not break out data protection revenue in its bare-bones earnings report as it prepares to merge with Dell. IBM does not give specific data protection figures, and Veritas is now a private company and does not report earnings.

Quantum received a boost from several large StorNext deals last quarter, mainly in video surveillance and media and entertainment. Gacek spoke of an $800,000 follow-on purchase by a large consumer electronics company and a $200,000 deal with a virtual reality company. He said there was another $200,000 in Asia and a $150,000 installation at a government medical center. And a previously announced public cloud project deal is expected to bring Quantum at least $20 million this year.

Quantum forecasts a revenue range of $118 million to $122 million this quarter. Gacek said he expects $500 million in revenue this fiscal year (last quarter was Quantum’s first fiscal quarter) and he expects scale-out to become 35% to 40% of total revenue.

The vendor lost $3.8 million, but that was down from a $10.8 million loss a year ago. And it generated $5.2 million in cash from operations, compared with using $13.6 million in cash a year ago.


July 28, 2016  6:20 PM

Spectra Logic bolsters BlackPearl storage archive for Amazon cloud

Garry Kranz
Storage

Backup tape specialist Spectra Logic has upgraded the operating software for its BlackPearl Deep Storage Gateway appliance, allowing petabyte-scale enterprises to build a storage archive using multiple Amazon Web Services (AWS) public cloud tiers.

The Boulder, Colo.-based vendor already supports the AWS Simple Storage Service (S3) by virtue of its S3-compatible Deep Storage Interface (DS3). The 3.x software version adds Amazon Glacier cold storage, S3 Infrequent Access and Amazon Elastic Compute Cloud (EC2) block storage as destination targets within the BlackPearl tape gateway.

“We built the infrastructure to support Amazon S3. This gives us a hybrid cloud storage archive to go along with the BlackPearl private cloud. We let a customer write data out to any of Amazon’s three storage tiers,” Spectra Logic CTO Matt Starr said.

“Our hybrid cloud allows you to keep a local copy, either on disk or tape or both, and then only in a dire emergency would you have to pull it back from the cloud.”

BlackPearl is tape-based object storage that uses the Linear Tape File System (LTFS) on the back end. The hybrid storage archive appliance caches incoming writes on disk and sends them to different replication targets as page sizes approach 100 gigabytes.

BlackPearl’s DS3 interface is modeled after Amazon S3. It uses REST-based command sets to index each tape cartridge with its own file system. Customers can replicate between BlackPearl storage at different sites.
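
Because DS3 is modeled after S3, the request shapes are familiar. As a loose illustration in Python with boto3, pointed at a hypothetical appliance endpoint (Spectra publishes its own DS3 SDK clients, and DS3 adds bulk-job semantics for tape that plain S3 calls don’t capture; the endpoint, credentials and bucket below are assumptions):

    import boto3

    ds3 = boto3.client(
        "s3",
        endpoint_url="https://blackpearl.example.com",
        aws_access_key_id="SPECTRA_ACCESS_ID",
        aws_secret_access_key="SPECTRA_SECRET_KEY",
    )

    # An S3-style PUT lands in the BlackPearl disk cache; the appliance then
    # migrates the object to tape, disk or cloud tiers per bucket policy.
    with open("cam01-2016-07-28.mov", "rb") as f:
        ds3.put_object(Bucket="archive",
                       Key="footage/cam01/2016-07-28.mov",
                       Body=f)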

Expanded Amazon S3 integration lets customers replicate data from Spectra Logic devices to AWS S3 storage. Archive data can be automatically restored from Amazon Glacier to local tape or disk. The upgrade supports multiple backup and disaster recovery copies in the cloud and across Spectra Logic’s LTFS tape libraries, Online Archive active archive appliance and ArcticBlue object-based nearline disk storage.

Archive management and retrieval are orchestrated via the Advanced Bucket Management policy manager. Other than Amazon fees, Starr said, the software enhancements are available at no cost to customers with valid maintenance support contracts.


July 27, 2016  10:49 AM

Commvault rallies ‘round the cloud

Dave Raffo
Cloud storage, Commvault, Data Management

Commvault increased its year-over-year revenues for the third straight quarter, with a big assist from the cloud.

Like all storage vendors, Commvault is looking for a way to work with public cloud providers to prevent getting steamrolled by them. In Commvault’s case, the strategy is to protect and manage data in public and hybrid storage clouds the same way it does on traditional on-prem storage. Commvault has emphasized the cloud in recent product releases, and that appears to be paying off.

Commvault on Tuesday reported revenue of $152.4 million for last quarter, a 10% increase over last year. Its software revenue of $63.9 million increased 13%. Revenue from enterprise deals, those with $100,000 or more in software revenue in the quarter, came to 52% of total software revenue, a 19% year-over-year increase. Commvault claims it added approximately 450 new customers in the quarter.

Commvault lost $2.5 million in the quarter following an aggressive hiring period and a licensing model change, but is heading in the right direction with three quarters of growth following three disappointing quarters in 2015. CFO Brian Carolan said he expects revenue to be higher this quarter than last.

CEO Bob Hammer said customers are using Commvault software to manage data stored in public clouds, to migrate data into the cloud and move it across private and public clouds.

Hammer said Commvault also increased its on-premises business, but the cloud appears to be where the future growth lies. He said the amount of data stored in public clouds using Commvault software has increased more than 60% over the past six months.

“The cloud is a catalyst for growth,” Hammer said. “The move to the cloud has become a major factor contributing to our increased business momentum.”

Commvault has worked closely with Amazon, Microsoft and other public cloud providers to make its software compatible. Hammer said cloud providers are also using Commvault software to deliver services for disaster recovery and application development storage.

“We see meaningful contributions to license revenue growth from partners such as Microsoft and AWS as well as large global systems integrators,” he said.

Hammer said large enterprises are using Commvault to set up and manage hybrid clouds, and the vendor continues to tailor its software to the cloud. In the next few months, it plans to launch “cloud-first” applications to improve data protection and management in the cloud. These improvements include user self-service, expanded content search and analytics, and embedded software to enable software-as-a-service and managed services.

Like Veeam Software, Commvault is growing revenue far ahead of the data protection industry. Their largest competitors are in transition – Veritas in the early days of a spinoff from Symantec, and EMC about to merge with Dell.

“We are out-innovating those competitors and are better organized in the field,” Hammer said about Veritas and EMC.

Hammer said Commvault is out-growing Veeam in the markets where they compete.

“Veeam has become, for us, less of a competitive issue,” he said. “Our growth rates in the mid-market products that compete against Veeam are high, probably higher than Veeam’s growth rates. So my guess is we’re picking up share in that segment of the market.”

While Commvault beat Wall Street’s consensus revenue expectation by more than $3 million, Hammer said it did not meet its own expectations.

“We could have executed better,” he said. “As good as the numbers were, there was opportunity to do better than that. So from an external standpoint these are really good numbers, but we have very aggressive internal plans.”


July 22, 2016  6:54 PM

Broadcom samples HBAs that support FC-NVMe to OEMs

Carol Sliwa
flash storage

The pieces are starting to fall into place for even higher performing flash storage with lower latency through the use of Nonvolatile Memory Express (NVMe) over Fibre Channel (FC).

Broadcom (part of Avago Technologies) this week made available to OEMs Emulex Gen 6 FC host bus adapters (HBAs) that support NVMe over FC. Broadcom claims the updated Gen 6 FC HBAs could help to lower latency by more than 50% and boost overall performance by more than 25% with SSDs that use NVMe, rather than Small Computer System Interface (SCSI), to transfer data and commands between host and target storage devices.

SCSI was designed years ago for slower storage media, such as hard disk drives (HDDs) and tape. The newer NVMe specification streamlines the I/O stack to facilitate higher performance, lower latency and lower power consumption with faster solid-state drives (SSDs). NVMe over Fabrics, including FC-NVMe, enables the NVMe command set to work across the network with external storage.

“As we’ve learned in talking to customers, the network’s becoming more and more of a bottleneck just because storage has gone from spinning media to these really low-latency architectures that are really fast,” said Brandon Hoff, director of product management at Broadcom. “So our focus with this solution is to hammer down latency and be the fastest network out there for moving NVMe traffic across the fabric.”

Last month, an industry consortium published version 1.0 of the NVMe over Fabrics specification. An NVMe Fabrics working group – which includes Broadcom – also published Linux target and driver code for inclusion in the Linux kernel. Hoff said the Linux distributions that enterprises typically use, such as Red Hat and SUSE, and other operating systems, such as Windows and VMware, should support FC-NVMe over time.

Server and storage operating systems, FC drivers, and HBAs will ultimately need to support NVMe over FC, according to Hoff. He said Broadcom updated and optimized its FC drivers and HBA firmware to support FC-NVMe and made available a reference architecture for vendors and early adopters. He said Broadcom Emulex has been demonstrating its Gen 6 HBAs, which support NVMe and SCSI, to server and storage vendors for several months.

“It was a very light lift for us to add NVMe as a protocol. Fibre Channel actually has multiple protocols that can run over it. FCP is the one that uses SCSI. And now we’re adding NVMe as a new protocol that runs over Fibre Channel,” Hoff said.

Hoff predicted the first phase of products to support NVMe over FC will be “just a bunch of flash” (JBOF) devices. He said the hardware is available, and the software needs to catch up. Hoff expects server OEMs to support FC-NVMe as they transition to Intel’s “Purley” enterprise platform in “2017ish.”

“NVMe all-flash arrays will be a little in the future,” Hoff said. “Some are [currently] moving to NVMe drives on the back end, but there’s SCSI on the front end. So they do protocol conversion. They bring a SCSI command off Fibre Channel on the front side, then they have to convert it to NVMe so it talks to NVMe drives.”

Once all-flash arrays support NVMe on the front end, there will be no need for the translation, and latency will drop even further. In the meantime, Fortune 1000 FC users will know that “the hardware just works” as they decide to move to NVMe-based storage, Hoff said.

“If you want to deploy NVMe in your data center, all you have to do is plug an NVMe array into the Fibre Channel network. You don’t have to update the Fibre Channel, the drivers and the host,” Hoff said.


July 22, 2016  6:26 AM

RDMA over Fabrics: a big step for SSD access

Randy Kerns
Storage

Shared storage access for servers has been the most basic requirement for storage networks. Performance demands for multiple systems accessing data continually increase due to improvements in compute and a desire to get more work done from infrastructure investments.

These demands have been met with technology developments to deliver storage networking and performance for access to data. The next big step function is with RDMA over Fabrics. RDMA is Remote Direct Memory Access and the fabric is the storage network.

RDMA over Fabrics is about increasing performance for access to shared data and taking advantage of solid-state memory technology. It is a logical evolution of current shared storage architectures, continuing the push to accelerate operations and increase the value of investments in applications, servers and storage.

RDMA over Fabrics sends data from one memory address space to another over an interface using a protocol. RDMA is a zero-copy transfer: data can be sent to or received from a storage system directly from or to the application’s memory space, without the overhead of moving it between intermediate buffers as some protocol stacks require.
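
Actual RDMA is driven through verbs libraries in C, but the zero-copy idea can be illustrated locally. In the Python sketch below, a memoryview plays the role of a second “address space” that reads and writes the same buffer without copying; this is only an analogy for the semantics, not RDMA code:

    data = bytearray(b"x" * (64 * 1024 * 1024))  # a 64 MB application buffer

    copied = bytes(data[:1024])     # slicing to bytes copies the payload
    view = memoryview(data)[:1024]  # a memoryview shares the same memory

    view[0:5] = b"hello"            # writes go straight to the original buffer
    print(bytes(data[0:5]))         # b'hello' -- no intermediate copy was made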

RDMA allows data transfers with much less overhead and a faster response time from lower latency. NVMe (Non-Volatile Memory Express) is the protocol used for RDMA over Fabrics. Think of the protocol as the language for communication, independent of the physical interface. Both ends of the communication — server and storage — must speak the same language for the transfer.

Solid-state technology, including flash storage, is memory, accessed as memory segments. NVMe provides that access. When SCSI is used, a translation must occur to access the memory-based storage, which adds latency. NVMe also allows parallel conversations to use the physical interface more effectively.

There are competing options for the fabric interface. High-performance Fibre Channel storage networks at Gen 6 (32 gigabits per second) can support RDMA with HBAs. Gen 6 switches and adapters are backward compatible with current transfer environments.

Other options for RDMA over Fabrics include RoCE (RDMA over Converged Ethernet), iWARP (Internet Wide Area RDMA Protocol), InfiniBand and PCIe. RoCE is a similar concept to FCoE. iWARP uses Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP) for transmission. InfiniBand is an RDMA-based protocol used in high-performance computing and inter-system communication. PCIe is a limited-distance interface.

Each method has its own options, and a set of vendors promoting them.

New technology that promises to deliver improvements always attracts great interest and becomes the subject of discussion and investigation. However, the final judgment of a technology’s value doesn’t occur until it is effectively deployed. Disruptive changes tend to cause delays and may prevent deployment despite the potential value. Technology that can be introduced seamlessly, compatible with current operations, will be put to use more quickly. To understand the value of RDMA over Fabrics and how to take advantage of it, it is important to recognize how it can be introduced into operational environments.

A useful characteristic for RDMA is the ability to use memory access for shared storage over a storage network as an internal memory extension. This would be especially useful for databases that could not fit within internal processor memory. It would provide much higher performance than traversing a protocol stack to deliver I/O to a storage device.

The adoption rate will be determined by the immediacy of the need, the ability to deploy with the least risk or disruption, and the economic justification for making the transition. IT architects and directors should investigate RDMA over Fabrics with solid-state storage as part of their storage strategy.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

