Backup tape specialist Spectra Logic has upgraded the operating software for its BlackPearl Deep Storage Gateway appliance, allowing petabyte-scale enterprises to build a storage archive using multiple Amazon Web Services (AWS) public cloud tiers.
The Boulder, Colo.-based vendor already supports the AWS Simple Storage Service (S3) by virtue of its S3-compatible Deep Storage Interface (DS3). The 3.x software version adds Amazon Glacier cold storage, S3 Infrequent Access and Amazon Elastic Block Store (EBS) as destination targets within the BlackPearl tape gateway.
“We built the infrastructure to support Amazon S3. This gives us a hybrid cloud storage archive to go along with the BlackPearl private cloud. We let a customer write data out to any of Amazon’s three storage tiers,” Spectra Logic CTO Matt Starr said.
“Our hybrid cloud allows you to keep a local copy, either on disk or tape or both, and then only in a dire emergency would you have to pull it back from the cloud.”
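Because the DS3 interface is S3-compatible, tier selection on the AWS side comes down to the S3 `StorageClass` applied to each object. A minimal sketch of that mapping follows; the policy names are hypothetical, the storage-class strings are the ones S3 accepts, and in practice a move to Glacier may also be driven by a lifecycle rule rather than set at upload time:

```python
# Map a hypothetical archive policy to the AWS S3 storage class it targets.
TIER_FOR_POLICY = {
    "active": "STANDARD",       # frequently read data
    "cool": "STANDARD_IA",      # infrequent access, lower storage cost
    "archive": "GLACIER",       # cold storage, hours-long restores
}

def storage_class_for(policy: str) -> str:
    """Return the S3 StorageClass string for a given archive policy."""
    try:
        return TIER_FOR_POLICY[policy]
    except KeyError:
        raise ValueError(f"unknown archive policy: {policy!r}")

# An actual upload would then pass the class to an S3-compatible client,
# e.g. with boto3:
#   s3.put_object(Bucket=..., Key=..., Body=data,
#                 StorageClass=storage_class_for("archive"))
```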
BlackPearl is tape-based object storage that uses the Linear Tape File System (LTFS) on the back end. The hybrid storage archive appliance caches incoming writes on disk and sends them to different replication targets as page sizes approach 100 GB.
BlackPearl’s DS3 interface is modeled after Amazon S3. It uses REST-based command sets to index each tape cartridge with its own file system. Customers can replicate between BlackPearl storage at different sites.
Expanded Amazon S3 integration lets customers replicate data from Spectra Logic devices to AWS S3 storage. Archive data can be automatically restored from Amazon Glacier to local tape or disk. The upgrade supports multiple backup and disaster recovery copies in the cloud and across Spectra Logic’s LTFS tape libraries, Online Archive active archive appliance and ArcticBlue object-based nearline disk storage.
Archive management and retrieval are orchestrated via the Advanced Bucket Management policy manager. Other than Amazon fees, Starr said the Spectra Logic software enhancements are available at no cost to customers with valid maintenance support contracts.
Commvault increased its year-over-year revenues for the third straight quarter, with a big assist from the cloud.
Like all storage vendors, Commvault is looking for a way to work with public cloud providers to prevent getting steamrolled by them. In Commvault’s case, the strategy is to protect and manage data in public and hybrid storage clouds the same way it does on traditional on-prem storage. Commvault has emphasized the cloud in recent product releases, and that appears to be paying off.
Commvault on Tuesday reported revenue of $152.4 million for last quarter, a 10% increase over the same period last year. Its software revenue of $63.9 million increased 13%. Revenue from enterprise deals (those generating $100,000 or more in software revenue in the quarter) came to 52% of total software revenue, a 19% year-over-year increase. Commvault said it added approximately 450 new customers in the quarter.
Commvault lost $2.5 million in the quarter after an aggressive hiring period and a licensing model change, but it is heading in the right direction with three quarters of growth following three disappointing quarters in 2015. CFO Brian Carolan said he expects revenue to be higher this quarter than last.
CEO N. Robert Hammer said Commvault also increased its on-premises business, but the cloud appears to be where the future growth lies. He said the amount of data stored in public clouds using Commvault software has increased more than 60% over the past six months.
“The cloud is a catalyst for growth,” Hammer said. “The move to the cloud has become a major factor contributing to our increased business momentum.”
Commvault has worked closely with Amazon, Microsoft and other public cloud providers to make its software compatible. Hammer said cloud providers are also using Commvault software to deliver services for disaster recovery and application development storage.
“We see meaningful contributions to license revenue growth from partners such as Microsoft and AWS as well as large global systems integrators,” he said.
Hammer said large enterprises are using Commvault to set up and manage hybrid clouds, and the vendor continues to tailor its software to the cloud. In the next few months, it plans to launch “cloud-first” applications to improve data protection and management in the cloud. These improvements include user self-service, expanded content search and analytics, and embedded software that enables software as a service and managed services.
Like Veeam Software, Commvault is growing revenue far ahead of the data protection industry. Their largest competitors are in transition – Veritas in the early days of a spinoff from Symantec, and EMC about to merge with Dell.
“We are out-innovating those competitors and are better organized in the field,” Hammer said about Veritas and EMC.
Hammer said Commvault is out-growing Veeam in the markets where they compete.
“Veeam has become, for us, less of a competitive issue,” he said. “Our growth rates in the mid-market products that compete against Veeam are high, probably higher than Veeam’s growth rates. So my guess is we’re picking up share in that segment of the market.”
While Commvault beat Wall Street’s consensus revenue expectation by more than $3 million, Hammer said it did not meet its own expectations.
“We could have executed better,” he said. “As good as the numbers were, there was opportunity to do better than that. So from an external standpoint these are really good numbers, but we have very aggressive internal plans.”
The pieces are starting to fall into place for even higher performing flash storage with lower latency through the use of Nonvolatile Memory Express (NVMe) over Fibre Channel (FC).
Broadcom (part of Avago Technologies) this week made available to OEMs Emulex Gen 6 FC host bus adapters (HBAs) that support NVMe over FC. Broadcom claims the updated Gen 6 FC HBAs could help to lower latency by more than 50% and boost overall performance by more than 25% with SSDs that use NVMe, rather than Small Computer System Interface (SCSI), to transfer data and commands between host and target storage devices.
SCSI was designed years ago for slower storage media, such as hard disk drives (HDDs) and tape. The newer NVMe specification streamlines the I/O stack to deliver higher performance, lower latency and lower power consumption with faster solid-state drives (SSDs). NVMe over Fabrics, including FC-NVMe, enables the NVMe command set to work across the network with external storage.
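One concrete reason the NVMe stack is faster is queuing. The figures below are the commonly cited spec ceilings (a single 32-command queue for legacy AHCI/SATA versus up to roughly 64K queues of 64K commands each for NVMe); treat them as illustrative orders of magnitude, not what any individual drive exposes:

```python
# Commonly cited queuing limits, for illustration only:
# AHCI (SATA) exposes one queue of 32 commands, while the NVMe spec
# allows up to ~64K I/O queues, each up to ~64K commands deep.
AHCI_QUEUES, AHCI_DEPTH = 1, 32
NVME_QUEUES, NVME_DEPTH = 64 * 1024, 64 * 1024

ahci_outstanding = AHCI_QUEUES * AHCI_DEPTH   # 32 commands in flight
nvme_outstanding = NVME_QUEUES * NVME_DEPTH   # ~4.3 billion in flight

print(f"AHCI: {ahci_outstanding} outstanding commands")
print(f"NVMe: {nvme_outstanding} outstanding commands")
```

That parallelism is what lets NVMe keep many CPU cores and many flash channels busy at once instead of serializing commands through one queue.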
“As we’ve learned in talking to customers, the network’s becoming more and more of a bottleneck just because storage has gone from spinning media to these really low-latency architectures that are really fast,” said Brandon Hoff, director of product management at Broadcom. “So our focus with this solution is to hammer down latency and be the fastest network out there for moving NVMe traffic across the fabric.”
Last month, an industry consortium published version 1.0 of the NVMe over Fabrics specification. An NVMe Fabrics working group – which includes Broadcom – also published Linux target and driver code for inclusion in the Linux kernel. Hoff said the Linux distributions that enterprises typically use, such as Red Hat and SUSE, and other operating systems, such as Windows and VMware, should support FC-NVMe over time.
Server and storage operating systems, FC drivers, and HBAs will ultimately need to support NVMe over FC, according to Hoff. He said Broadcom updated and optimized its FC drivers and HBA firmware to support FC-NVMe and made available a reference architecture for vendors and early adopters. He said Broadcom Emulex has been demonstrating its Gen 6 HBAs, which support NVMe and SCSI, to server and storage vendors for several months.
“It was a very light lift for us to add NVMe as a protocol. Fibre Channel actually has multiple protocols that can run over it. FCP is the one that uses SCSI. And now we’re adding NVMe as a new protocol that runs over Fibre Channel,” Hoff said.
Hoff predicted the first phase of products to support NVMe over FC will be “just a bunch of flash” (JBOF) devices. He said the hardware is available, and the software needs to catch up. Hoff expects server OEMs to support FC-NVMe as they transition to Intel’s “Purley” enterprise platform in “2017ish.”
“NVMe all-flash arrays will be a little in the future,” Hoff said. “Some are [currently] moving to NVMe drives on the backend, but there’s SCSI on the front end. So they do protocol conversion. They bring a SCSI command off Fibre Channel on the front side, then they have to convert it to NVMe so it talks to NVMe drives.”
Once all-flash arrays support NVMe on the front end, there will be no need for the translation, and latency will drop even further. In the meantime, Fortune 1000 FC users will know that “the hardware just works” as they decide to move to NVMe-based storage, Hoff said.
“If you want to deploy NVMe in your data center, all you have to do is plug an NVMe array into the Fibre Channel network. You don’t have to update the Fibre Channel, the drivers and the host,” Hoff said.
Shared storage access for servers has been the most basic requirement for storage networks. Performance demands for multiple systems accessing data continually increase due to improvements in compute and a desire to get more work done from infrastructure investments.
These demands have been met with technology developments that deliver storage networking and performance for access to data. The next big step function is RDMA over Fabrics. RDMA stands for Remote Direct Memory Access; the fabric is the storage network.
RDMA over Fabrics is about increasing performance for access to shared data and taking advantage of solid-state memory technology. RDMA over Fabrics can be a logical evolution of the current shared storage architectures and continue on the path to accelerate operations to increase value from the investments in applications, servers, and storage.
RDMA over Fabrics sends data from one memory address space to another over an interface using a protocol. RDMA is a zero-copy transfer where data can be sent to or received from a storage system from/to the application memory space without the overhead of moving it between other locations as required by some protocol stacks.
RDMA allows data transfers with much less overhead and faster response times from lower latency. NVMe (Non-Volatile Memory Express) is the command protocol carried by RDMA over Fabrics. Think of the protocol as the language for communication, independent of the physical interface. Both ends of the communication, server and storage, must speak the same language for the transfer.
Solid-state technology, including flash storage, is memory, accessed as memory segments. NVMe provides that access. When SCSI is used, a translation must occur to access the memory-based storage, which adds latency. NVMe also allows parallel conversations to occur, using the physical interface more effectively.
There are competing options for the fabric interface. High-performance Fibre Channel storage networks at Gen 6 (32 gigabits per second) can support RDMA with HBAs. Gen 6 switches and adapters are backward compatible with current transfer environments.
Other options for RDMA over Fabrics include RoCE (RDMA over Converged Ethernet), iWARP (Internet Wide Area RDMA Protocol), InfiniBand and PCIe. RoCE is similar in concept to FCoE. iWARP uses Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP) for transmission. InfiniBand is an RDMA-based protocol used in high-performance computing and inter-system communication. PCIe is a limited-distance interface.
Each method has its own options, and a set of vendors promoting them.
New technology that promises to deliver improvements always attracts great interest and becomes the subject of discussion and investigation. However, the final judgment of a technology’s value doesn’t occur until it is effectively deployed. Disruptive changes tend to cause delays and may prevent deployment despite the potential value. Technology that can be introduced seamlessly, with compatibility with current operations, will be put to use more quickly. To understand the value of RDMA over Fabrics and how to take advantage of this new technology, it is important to recognize how it can be introduced into operational environments.
A useful characteristic of RDMA is the ability to use memory access for shared storage over a storage network as an internal memory extension. This would be especially useful for databases that cannot fit within internal processor memory, and it would provide much higher performance than traversing a protocol stack to deliver I/O to a storage device.
The adoption rate will be determined by the immediacy of the need, the ability to deploy with the least risk or disruption, and the economic justification for making the transition. IT architects and directors should investigate RDMA over Fabrics with solid-state storage as part of their storage strategy.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Veeam Software continued its impressive growth last quarter, increasing bookings revenue 38% over last year in a market that is growing at barely 1%.
Veeam Software remains a private company, but provides quarterly updates on its earnings. In the second quarter, Veeam said its growth outpaced its first quarter of 2016 when it grew 24% and its 22% growth (to $474 million) for 2015. The backup and recovery vendor counts 70% of the Fortune 500 as customers, and passed 200,000 paid customers last quarter. The Hard Rock Hotel & Casino in Las Vegas was Veeam’s 200,000th customer.
Veeam has been selling backup software for 10 years, beginning as a virtual machine data protection specialist. Over the past few years, Veeam has also concentrated on rapid recovery and disaster recovery.
“We’ve created a new market called the availability market,” said Doug Hazelman, Veeam’s VP of product strategy. “We talk about availability and 15-minute recovery of any application and all data.”
“If they start to look at other options and Veeam pops up, it’s a great opportunity to us to say ‘Here’s a better way to do things,’” he said. “They know it will take a massive re-architecture with their existing backup, so they’re ready to look at a new approach.”
Veeam claims it now protects 11.8 million VMs and has 41,000 channel partners, many of them cloud service providers. The vendor said Microsoft Hyper-V license bookings grew 49% year-over-year last quarter, although Hazelman said a clear majority of customers still use VMware hypervisors.
The data protection and recovery software market grew 0.9% in 2015, according to IDC. Veeam is looking to take advantage of transition periods that large rivals are going through. Veritas split off from Symantec earlier this year and EMC is going through a merger with Dell.
Veeam is also going through a leadership transition. Founder Ratmir Timashev stepped down as CEO last month and was replaced by Veeam veteran William Largent. Veeam also brought in former VMware executive Peter McKay as president and COO.
Seagate resurrected the BarraCuda brand name with its latest lineup of 10 TB hard disk drives.
The high-capacity drives, unofficially called the Guardian series, include the BarraCuda computer hard disk drives, the IronWolf for network-attached storage (NAS) configurations and the SkyHawk drives for surveillance.
Jennifer Bradfield, senior director at Seagate Technology, said the vendor decided to bring back the BarraCuda brand name (previously spelled Barracuda), retired in 2013, because customers and partners continued to refer to the drive by its original name.
Seagate’s high-end consumer drives were branded as Barracuda for 10 years, until 2013 when the company unofficially retired the name, choosing instead to refer to its hard disk drives as Desktop HDD.
The newest hard disk drives include the standard BarraCuda, which comes in 500GB, 1TB, 2TB and 3TB capacities, and the BarraCuda Pro for workstations in 6TB, 8TB and 10TB capacities. The drives run at 7,200 rpm spindle speed with transfer rates as high as 220 MB per second, and carry a five-year limited warranty and power-saving features.
FireCuda drives also are part of this group and they combine flash with the latest hard disk drives. They are available in 1TB or 2TB for 2.5-inch or 3.5-inch form factors.
“We decided to bring back the legacy of BarraCuda,” Bradfield said. “We are trying to make it easier for customers and partners to find our products.”
The IronWolf includes two hard disk drives that target NAS configurations. The standard IronWolf comes in 1TB through 10TB capacities, while the IronWolf Pro is available in capacities ranging from 2TB through 10TB. The Pro drive is designed for higher performance and better reliability, and carries a warranty. It’s optimized with AgileArray firmware, which supports a user workload rate of 180 TB per year.
“We are playing on the idea that wolves are pack animals and NAS operate in packs,” Bradfield said. “The AgileArray firmware makes sure there are no performance drops from system vibrations.”
The SkyHawk hard disk drives, which have been shipping for 10 years, include the latest 10TB drive for surveillance systems and storage systems for network video recording. The SkyHawk also comes with rotational vibration sensors to minimize read/write errors, and a data recovery service option.
While flash is gaining ground in the storage market, Seagate continues to invest in hard disk drives.
IBM’s disk array business took another hit last quarter, with storage hardware decreasing 13% as the company looks to shift its emphasis to flash and software-defined storage.
“Storage value is shifting to software,” CFO Martin Schroeter said Monday on IBM’s earnings call. “We’re continuing to grow software-defined storage, which includes object storage and our newly introduced Spectrum suite offerings. Storage hardware revenue decreased 13 percent, which continues to reflect weakness in the traditional disk storage market.”
Schroeter said IBM would expand its flash products “across the entire portfolio,” pointing to its release of FlashSystem A9000, A9000R and all-flash DS8888 arrays in April.
IBM has a new storage chief. Ed Walsh this month took over as general manager of IBM Storage and Software Defined Infrastructure. He is expected to concentrate on FlashSystem all-flash arrays, Spectrum software and Cleversafe object storage for cloud and on-premise storage. IBM acquired Cleversafe for $1.3 billion in October 2015.
IBM storage revenue has declined in each of the last four years. It fell from $3.7 billion in 2011 to $2.4 billion in 2015.
In a near unanimous vote, EMC shareholders today voted to approve the storage giant’s $67 billion merger with Dell.
The shareholder vote was considered one of two remaining obstacles to the deal, with regulatory approval from China still remaining.
EMC said approximately 98% of voting EMC shareholders cast their votes in favor of the merger, representing approximately 74% of EMC’s outstanding common stock.
In a press release detailing the vote, EMC repeated a phrase often used by EMC and Dell executives: “The transaction is expected to close on the original terms and within the originally announced timeframe …” without giving a specific timeframe for the close.
“Today’s resoundingly favorable shareholder vote clearly supports our view that combining Dell and EMC will create a powerhouse in the technology industry,” EMC CEO Joe Tucci said in the prepared statement. “The board and I care very deeply about, and have worked diligently to represent, what we believe is the best outcome for all stakeholders. I want to thank our shareholders for their support, as well as our customers and partners. My special thanks to the talented people of EMC for their hard work, dedication and passion.”
Upon close, Dell EMC will become the enterprise division of Dell Technologies.
The largest acquisition in technology history will pay EMC shareholders $24.05 per share plus 0.111 shares of VMware tracking stock. EMC is the majority owner of VMware, which will also become part of Dell when the deal closes.
Hyper-convergence is a rock star at VMware these days, according to CEO Pat Gelsinger.
VMware‘s earnings report Monday was better than expected, and Gelsinger called out VSAN hyper-converged software and NSX software-defined networking as the “rock stars” and “rocket ship products” of the quarter.
“They were just fabulous,” Gelsinger said.
Later during the earnings call he said VSAN was the biggest star of the quarter.
“If I would call out one product in Q2 that just blew their numbers away, VSAN is it, and it was the rock star of Q2 above all others,” Gelsinger said.
VMware and its parent company EMC are in the process of being absorbed into Dell as part of a $67 billion transaction.
Hyper-convergence is a highly competitive market now, with all the large server vendors and most large storage vendors involved, along with private companies such as Nutanix and SimpliVity that pioneered the technology. Gelsinger said now that VSAN has been in the market for two years, the technology is ready for enterprise use.
“A storage product, you ship it, and about two years later the market says yes to it,” he said. “They test it. They try it. You can’t afford data loss. So there really is a gestation cycle in the market for these products.”
VMware’s $1.6 billion in revenue and $265 million in profit were up significantly from the same quarter last year and beat Wall Street analysts’ expectations.
Cisco and OpenStack-based object storage vendor SwiftStack have teamed on a highly dense, turnkey object storage appliance. It marks Cisco’s first foray into object storage software and gives SwiftStack a chance to expand its reach to high-end enterprise customers.
Customers can purchase SwiftStack object storage as part of the Cisco Metapod-managed cloud hosting service, or physically install a Cisco Unified Computing System (UCS) C3260 4U rack server bundled with SwiftStack 4.0 software.
The SwiftStack C3260 is a 4U modular rack with two node controllers and 60 drive trays per chassis. Each SwiftStack node can scale to 600 TB with 10 TB disks.
Cisco Metapod is a cloud storage product that combines technologies acquired from OpenStack cloud providers Metacloud and Piston Cloud Computing. San Francisco-based SwiftStack sells a commercial distribution of the open source OpenStack Swift object storage platform. Its object storage software lets customers mix and match erasure coding with traditional data replication schemes.
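The choice between replication and erasure coding largely comes down to raw-to-usable storage overhead versus rebuild cost. A quick sketch of the arithmetic; the scheme parameters below are illustrative, not SwiftStack defaults:

```python
def replication_overhead(copies: int) -> float:
    """Raw bytes stored per usable byte when keeping N full copies."""
    return float(copies)

def erasure_overhead(data_frags: int, parity_frags: int) -> float:
    """Raw bytes stored per usable byte for a k+m erasure-coded scheme:
    an object is split into k data fragments plus m parity fragments,
    and any k surviving fragments can rebuild it."""
    return (data_frags + parity_frags) / data_frags

# Three-way replication stores 3 raw bytes per usable byte and survives
# 2 lost copies; an 8+4 erasure scheme stores only 1.5 raw bytes per
# usable byte while surviving 4 lost fragments.
print(replication_overhead(3))
print(erasure_overhead(8, 4))
```

The erasure-coded layout is cheaper on capacity, but reads and rebuilds touch more devices, which is why mixing the two schemes by data type can make sense.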
The Cisco-SwiftStack partnership is similar to reseller agreements Scality has to integrate its RING object storage software on servers by Dell and Hewlett Packard Enterprise.
Cisco isn’t the biggest server vendor – that distinction belongs to HPE, followed by Dell – but its server gear is broadly used by hyperscale data centers and cloud services providers. SwiftStack is trying to spur adoption in those market segments by prepackaging its object storage software on Cisco UCS hardware.
“Users oftentimes find software-defined storage products complex and hard to understand,” said Mario Blandini, SwiftStack vice president of marketing. “The choice to use any server is one big benefit of software-defined storage, but there are customers that prefer to consume it as an all-in-one solution. The Cisco UCS is a great form factor for our object software to reside in.”
Cisco has added the SwiftStack object storage to its global product list and will market it through its Cisco Metapod (formerly known as Cisco OpenStack Private Cloud) division. SwiftStack also has developed reference architecture for object storage with server hardware from Dell, HPE, Lenovo and Super Micro, among other vendors.
“It expands our reach, given there’s a lot of Cisco switch users out there that are also building clouds,” Blandini said.