Tape gets less respect than Rodney Dangerfield among the IT crowd. But tape still has its staunch defenders who say the medium remains technologically sound and continues to improve.
A major challenge is being able to communicate the advantages of tape storage systems, Fred Moore, president of Horison Information Strategies, said at the Fujifilm Global IT Executive Summit in Boston last month.
“The game has really changed,” as tape storage systems have made progress in the last 10 years, Moore said. “That’s the good news.” The bad news, he said, is that people don’t know it yet.
There is a tendency now to favor the tactical quick fix over strategic planning and regard cloud as a storage game-changer, said Jon Toigo, managing principal of Toigo Partners International and founder and chairman of the Data Management Institute. That mindset leaves tape on the outside looking in.
But using tape is getting much simpler thanks to the Linear Tape File System, Toigo said at the summit.
Other advantages of tape storage systems include their affordability, security, energy efficiency, long life and reliability, said Calline Sanchez, vice president of enterprise storage at IBM, in her presentation at the summit.
As an archive medium, tape’s capacity improvements are outpacing all other kinds of storage, and it is ideal for storing less frequently accessed and modified data, Toigo said. LTO-7 offers 15 TB of compressed capacity and sustained data transfer rates of up to 750 megabytes per second for compressed data.
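A quick back-of-the-envelope calculation shows what those LTO-7 figures imply in practice. The sketch below assumes decimal units (1 TB = 1,000,000 MB) and a sustained stream at the full compressed rate, which real workloads rarely hold:

```python
# Time to fill one LTO-7 cartridge at the quoted compressed figures:
# 15 TB capacity streamed at 750 MB/s (decimal units assumed).
capacity_tb = 15
rate_mb_per_s = 750

capacity_mb = capacity_tb * 1_000_000
seconds = capacity_mb / rate_mb_per_s
hours = seconds / 3600
print(f"Time to fill one cartridge: about {hours:.1f} hours")
```

At roughly five and a half hours per cartridge, sustained streaming throughput rather than seek time is what matters for archive workloads.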
Is tape good for random access?
“No,” Toigo said. “It never was.”
But accessibility in tape storage systems is improving. Quantum, with its StorNext AEL6 appliance, combines the new Scalar i6 library with Quantum’s StorNext data management software. StorNext enables easier access to data stored on tape, with options for CIFS, NFS and RESTful interfaces.
“When we announced the [new Scalar tape platform], it was in the context of helping companies manage the rapid growth of unstructured data,” said Kieran Maloney, manager of archive and technical workflow solutions at Quantum.
The platform provides high-density storage for Quantum’s overall multi-tier portfolio that includes hybrid flash, object-based, cloud and tape storage.
With the cloud, too many administrators “think about how they can leverage cloud to replace tape” instead of how they can leverage cloud with tape, Maloney said. But as the amount of data and its time in storage rise, the cloud becomes more difficult to afford.
It makes sense to use tape for cloud seeding, Toigo said.
Compared to disk, tape storage systems provide more security against ransomware attacks and other hacks. Since tape isn’t online all the time, “it’s easier to keep away from ransomware,” Maloney said.
Disk is a breeding ground for hacks, Moore said.
So how does the tape community get its message out?
The message has to be engaging for millennials who don’t know about the technology, Toigo said.
“When they get it, we’ve got a whole new generation of tape users.”
Western Digital reported revenue of $4.7 billion in the first full quarter after its SanDisk acquisition closed, saying demand was strong for hard drives and flash-based products among cloud and mobile customers.
CEO Steve Milligan said WD made significant progress in the quarter on its top two priorities: integrating its SanDisk and HGST acquisitions and transitioning to second-generation 3D NAND flash. He said WD remains on track to begin a significant ramp up to 64-layer 3D NAND in the first half of 2017.
WD said a net quarterly loss of $366 million included charges related to its recent acquisitions and its re-pricing and repayment of outstanding debt. WD reported net income of $283 million in the same quarter last year. WD generated $3.4 billion in quarterly revenue a year ago, for the period ending on Sept. 30, while SanDisk reported revenue of $1.45 billion for its third quarter, ending on Sept. 27.
This year, WD originally projected revenue in the range of $4.4 billion to $4.5 billion for the quarter ending on Sept. 30. But on Sept. 7, the company ratcheted up its prediction to $4.45 billion to $4.55 billion and exceeded those numbers, with $1.4 billion of the $4.7 billion total coming from data center products.
Milligan attributed the overachievement to higher than expected demand and “a bit better” pricing.
Capacity shipped on nearline hard disk drives (HDDs) grew nearly 50% for the quarter on a year-to-year comparison basis. Michael Cordano, WD’s president and chief operating officer, said cloud customers drove the increase in nearline HDDs. Cordano also said that WD’s 10 TB helium drives are gaining adoption among OEM and cloud customers. WD claims it has shipped more than 10 million HelioSeal drives since their 2013 launch.
Milligan said the capacity growth rate for flash exceeds disk, largely related to hyper-scale deployments. Still, Cordano said, “Going forward, we don’t see a convergence to an all-flash world anytime soon, frankly, anytime in our planning horizon.” Milligan said the price differential between a solid-state drive (SSD) and an HDD can run between 5x and 10x, depending on performance requirements and other factors.
WD predicted that quarterly revenue would be flat this quarter compared to last quarter.
On Oct. 19, rival Seagate Technology reported revenue of $2.8 billion last quarter. That was up from the prior quarter’s $2.7 billion but down from $2.9 billion a year ago. Seagate noted record exabyte shipments of HDDs for the quarter ending on Sept. 30, 2016. The 66.7 exabytes shipped marked an 8% increase over the prior quarter.
With its StorNext revenue rising while data protection sales remained flat, Quantum achieved its second straight growth period last quarter.
The StorNext file system is at the heart of Quantum’s scale-out data management products. Scale-out storage made up more than one-third of Quantum’s revenue, according to its Wednesday night earnings report. However, Quantum will have to continue to land big scale-out deals to maintain growth. Its forecast for this quarter calls for a sequential drop in revenue.
Quantum’s $135 million in revenue in the quarter increased 15% from $117 million in the same quarter last year. A 56% increase in scale-out storage drove the revenue surge. The company reported $3.8 million in profit for its second straight profitable quarter.
Scale-out revenue hit $47 million, its highest quarter ever. CEO Jon Gacek said the increase came in sales to media and entertainment, video surveillance and technical industries such as life sciences and gas exploration. Quantum scored $1 million-plus deals with a media company and a consumer electronics company. Quantum added 85 scale-out customers in the quarter.
Quantum’s backup revenue was flat year-over-year at $78.5 million. Tape automation revenue dropped $3.2 million to $45.2 million, while disk backup increased $600,000 to $18.7 million, and devices and media increased to $14.6 million from $11.6 million. Gacek said Quantum added 60 new branded tape automation customers and around 40 disk backup customers in the quarter.
“We’re getting growth on the scale-out side,” Gacek said in an interview after Quantum’s earnings call. “We still have lots of opportunity for improvement there. We have a unique product for video surveillance and we’re winning deals.”
Gacek said Quantum’s DXi disk backup sales “have been more stable, and they’re more reflective of the overall storage market.” He said the backup appliances do well in competitive deals but they are not the company’s main focus.
“If we have extra money to spend on go-to-market, we’re spending it on scale-out,” he said. “We’re not spending a lot of money to drive growth on disk backup. The opportunity is better on the scale-out side.”
Despite the growth expectations around StorNext, Quantum forecast a drop in revenue this quarter. Its forecast of $125 million to $130 million represents an unusual decline in the final quarter of the year, which is usually the best revenue period for storage vendors.
Gacek admitted the fourth calendar quarter is usually Quantum’s strongest but the uptick in larger deals makes it tough to predict. He is taking the conservative approach to guidance.
“We’re setting guidance based on what we see and we’re excluding large deals that we can’t predict in good conscience,” he said. “We’ve been closing those deals and that’s been driving us higher. If big deals close this quarter, our revenue will be higher, but we’re trying to be prudent.”
Cloudian today closed a $41 million Series D funding round that it will use to help increase its sales footprint in Europe and Asia in the second half of 2017.
New Cloudian investors in the round included Lenovo, City National Bank, Epsilon Venture Partners and Delta Venture Labs. Previous investors Intel Capital, INCJ, Eight Roads, Fidelity International Limited, and Goldman Sachs also contributed to the round.
Cloudian’s largest funding round brings total investment in the object storage vendor to $79 million.
Customers now can purchase Cloudian’s flagship object storage-based HyperStore through the Amazon Web Services Marketplace. They can deploy the software in their data center and centralize usage while getting billed through a monthly AWS invoice. They can also purchase both on-premises storage and Amazon S3 storage on a metered-by-usage basis and manage it as a single pool.
Lenovo sells a DX8200C appliance with Cloudian HyperStore software integrated. HyperStore is also integrated with Google Cloud Storage Nearline, allowing customers to migrate files based on age, frequency of access and file type. Customers can search data locally for discovery purposes, and administrators can manage it as a single pool.
“We need to scale our team to work with them,” Michael Tso, Cloudian’s CEO and founder, said of the startup’s partners. “The marketplace that we are attached to is starting to take off. We are at the convergence of the demand for our product. We have been relatively quiet (but now) we are ready to step up on the marketing side.”
Cloudian claims it has experienced a 300% increase in bookings and 100% growth in its customer base over the past year.
The company raised $11.58 million in a Series A funding round in April 2012, followed by $5.1 million in October 2013 and another $24 million in June 2015. The amount of the initial investment in May 2007 was not disclosed.
NetApp is the first storage vendor to sell Toshiba’s new highly secure solid-state drives (SSDs).
NetApp hybrid FAS and E-Series arrays will include Toshiba PX04S 12 Gb per second SAS SSDs, built with Federal Information Processing Standard (FIPS) 140-2 Level 2 encryption.
The self-encrypting SSDs have sequential read and write speeds of up to 1,900 MBps and 1,100 MBps, respectively.
“Most SAS SSDs have non-encryption versions and encrypted versions. This is different because these SSDs went through a FIPS certification process, which is the highest government-level security,” said Cameron Brett, a director of SSD product marketing at Toshiba’s storage products business unit. “It’s the highest level of security for SAS SSDs because it’s certified by the government. It’s not an easy process. It’s very time-consuming.”
NetApp has shipped the SSDs in arrays since last month.
FIPS is a set of standards that describe document processing, encryption algorithms and other information technology processes for use within non-military federal government agencies and contractors who work with those agencies.
“The FIPS certification is very specific to the components and the firmware,” Brett said. “The drive itself cannot be breached.”
“This is an important step for NetApp to make this part of their campaign,” said Tom Coughlin, president of storage analyst firm Coughlin Associates. “They are making it a key part of their offerings and that is the first time I’ve seen that. And this is not software encryption. This is encryption that is built into the storage devices. When it’s built into the drives, the key is more secure.”
Coughlin said software-based encryption carries a large overhead tax for encrypting and decrypting the data.
“Software-based encryption puts overhead on the system whereas drives that are encrypted don’t have that overhead,” he said. “With software encryption, you have to rewrite the data with the encryption or change the keys. With hardware encryption, the data on the drives is always encrypted and the keys never leave the drives. All you need is good, strong passwords.”
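The key-rotation difference described above can be sketched in a few lines. This toy model (XOR with a hash-derived pad stands in for a real cipher; nothing here is production cryptography) shows why a self-encrypting drive can change its password by re-wrapping a small media key, while password-keyed software encryption would have to re-encrypt every block:

```python
import hashlib
import os

def pad(key: bytes, n: int) -> bytes:
    # Expand a key into an n-byte keystream (toy stand-in for a real cipher).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, pad(key, len(data))))

media_key = os.urandom(32)                  # never leaves the "drive"
stored_data = xor(b"payroll records " * 1000, media_key)

# Rotating the password re-protects only the 32-byte media key...
new_password = b"new-password"
wrapped_key = xor(media_key, new_password)  # cheap: 32 bytes touched

# ...while the bulk data on the platters stays encrypted and untouched.
recovered_key = xor(wrapped_key, new_password)
assert recovered_key == media_key
assert xor(stored_data, recovered_key).startswith(b"payroll")
```

With software encryption keyed directly by the password, rotating the password means re-reading and re-writing the entire data set; here, only the wrapped key changes.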
Commvault’s push to store customer data in the cloud is paying off.
CEO Bob Hammer said the amount of data stored in the cloud with Commvault software has more than doubled since the start of 2016. That helped the data protection and management software vendor to better revenue growth than expected last quarter.
Commvault Tuesday reported revenue of $159.3 million last quarter, a 13% increase over the previous year. Its software revenue of $70.5 million increased 22% year-over-year. Overall and software revenue both beat analysts’ expectations.
Commvault lost $800,000 in the quarter, down from a $4.6 million loss a year ago. Hammer said he expects continued growth of the cloud and other standalone products will accelerate revenue increases over the next few quarters, and that should lead to consistent profits.
Commvault addresses the cloud with standalone products such as its Edge Drive file sharing product, cloud replication and DR packages, and through partnerships with Microsoft and Amazon. Edge Drive is among the standalone applications that plug into the Commvault Data Platform, along with apps for virtual machine backup, endpoint protection, archiving and other data management use cases.
“This represents a significant driver of our software growth,” Hammer said of the cloud during Commvault’s earnings call. “I can tell you the cloud is a significant, material part of our revenue and of our revenue growth.”
Commvault said revenue from enterprise deals (more than $100,000) increased 44% from last year and made up 57% of its revenue in the quarter. The number of enterprise deals increased 45% from last year, averaging $268,000 per deal.
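Commvault did not report a raw deal count, but one can be roughly derived from the figures above (a sketch; rounding in the reported numbers makes this approximate):

```python
# Implied number of enterprise deals from the quoted figures:
# $159.3M quarterly revenue, 57% from enterprise deals, $268K average deal.
quarter_revenue_m = 159.3
enterprise_share = 0.57
avg_deal_k = 268

enterprise_revenue_m = quarter_revenue_m * enterprise_share
implied_deals = enterprise_revenue_m * 1000 / avg_deal_k
print(f"Implied enterprise deals: about {implied_deals:.0f}")
```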
“At the 100,000-foot level, the major driver of our business is large enterprises as they move to the cloud,” Hammer said. “We’re helping them manage data from on-premise to the cloud, manage data in the cloud and then help them manage data in these hybrid environments.”
Hewlett Packard Enterprise launched its high-end enterprise storage array upgrade today at the same time as Hitachi Data Systems. The HPE XP7 and HDS Virtual Storage Platform (VSP) use the same underlying hardware and software, supplied by HDS’ Japanese parent company Hitachi Ltd.
HPE has licensed the Hitachi technology for 15 years, for customers who want the highest availability mainframe storage. The XP7’s mainframe support and ability to virtualize any hardware array on the back end distinguish the platform from HPE’s flagship 3PAR StoreServ platform.
“The XP stands for advanced replication, mission critical RAS [resiliency, availability, serviceability], and 100 percent access to data and applications, regardless of any hardware or site failure,” said Vish Mulchand, senior director of product management for HPE storage.
You can read more about the XP7 speeds and feeds in this story on the HDS VSP upgrade. As with HDS, HPE calls out the platform’s all-flash options and data reduction technologies as key enhancements. While HPE adds its own availability software to the XP7 platform, the inline and post-process data reduction comes from the HDS flash module drives (7 TB and 14 TB), ASICs inside the FMDs and optimized software. “For this class of storage, customers want configuration flexibility,” Mulchand said. “We have customers using all-flash XP7s and hybrid XP7s.”
Unlike HDS, HPE provides pricing info for its new platform. Street pricing starts at $20,800 for the new controllers and $22,200 for FMDs. HPE claims an all-flash configuration can cost $1.20 per GB with 4:1 data reduction. Software-based compression and deduplication costs $11,600 per XP7 frame.
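The $1.20-per-GB claim is easy to sanity-check: effective cost is raw cost divided by the data reduction ratio. The raw-flash figure below is an assumption chosen to match HPE's claim, not a published price:

```python
raw_cost_per_gb = 4.80   # assumed raw flash cost per GB (illustrative)
reduction_ratio = 4.0    # 4:1 data reduction, as HPE quotes

effective_cost = raw_cost_per_gb / reduction_ratio
print(f"${effective_cost:.2f} per effective GB")
```

The same math cuts the other way: data that compresses or deduplicates poorly (already-compressed video, encrypted data) sees a lower effective ratio and a higher effective cost.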
FalconStor Software took aim at hybrid cloud deployments with a new pricing model and product upgrade for its FreeStor storage virtualization and block-based data services.
The Melville, NY-based software vendor now charges customers only for the primary copy of data – not the total storage capacity under management – with its subscription-based pricing model. The FreeStor software provides common tools, single-pane management and block-based services such as data migration, protection, recovery, and analytics for use with heterogeneous storage.
FalconStor CEO Gary Quinn estimated that 70% of FreeStor’s customers are managed service providers (MSPs). He said providers offer services such as backup or disaster recovery (DR) and want the ability to store an additional copy of their customers’ data in public clouds such as Amazon Web Services (AWS) or Microsoft Azure.
Quinn said FalconStor’s enterprise customers have also been asking for similar options to move to AWS or Azure for virtual backup, DR and test and development use cases.
“It doesn’t really cost me anything to make a copy of the data or replicate the data to another location and manage it through the FreeStor management server. So our view is that customers should pay once,” Quinn said.
He said the list price for the FreeStor software, inclusive of data services, is three cents per GB per month to use on the primary data copy. The customer supplies the hardware.
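The billing difference is simple arithmetic. In the sketch below, only the $0.03 per GB per month list price comes from FalconStor; the data-set size and copy count are illustrative assumptions:

```python
price_per_gb_month = 0.03   # FalconStor's quoted list price
primary_gb = 100_000        # assumed 100 TB primary data set
copies = 3                  # assumed: primary + DR replica + cloud copy

primary_only_bill = primary_gb * price_per_gb_month       # pay-once model
total_capacity_bill = primary_only_bill * copies          # capacity model
print(f"${primary_only_bill:,.0f} vs ${total_capacity_bill:,.0f} per month")
```

Under total-capacity pricing, every extra replica a service provider keeps raises the bill; under FreeStor's model, replicas are free to license.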
Eric Burgener, a research director at International Data Corp., said he has seen pay-as-you-go models from other vendors but nothing like FalconStor’s aggressive pricing.
FalconStor changed the pricing model in anticipation of a new version of its FreeStor software, which extends support to public clouds. FalconStor added support for Amazon, Microsoft, Alibaba, Huawei and Oracle to go with its prior support of OpenStack-based deployments.
Tim Sheets, vice president of marketing at FalconStor, said, in an Amazon environment, the FreeStor Virtual Appliance (FSS VA) would run on the AWS Elastic Compute Cloud (EC2). The FSS VA could either use Amazon’s Elastic Block Store (EBS) or present block services through AWS Storage Gateway (ASG) to load into Amazon’s object-based Simple Storage Service (S3) container, he said.
“You don’t have to go learn a new set of tools from Amazon if you haven’t done it before. We’ve already got the configuration set up to really simplify it for those customers,” he said. “And you also get the analytics, all the insights, through a single pane of glass with the FreeStor management server that you wouldn’t get if you had to use the Amazon or an Azure gateway,” Sheets said.
Customers could also use FreeStor to manage data across multiple supported public clouds or to move data from one public cloud to another, so long as the FSS VA runs in each cloud.
“I’m sure that Amazon’s not going to provide tools to leave Amazon and go to Azure,” Quinn said. “That’s what we’re doing here, the same way as if you wanted to move from EMC to HP on disk or EMC to Pure on flash. It’s just being done in the cloud.”
The new FreeStor software also beefs up external security with support for the Lightweight Directory Access Protocol (LDAP) and Microsoft Active Directory for authentication, authorization, and auditing.
Other newly supported features include enhanced analytics to enable core-to-edge visibility down to the applications and service-level agreement (SLA) management, improved support for NVMe to boost performance and lower latency, and Linux 7 compliance.
The FreeStor updates arrive as FalconStor battles financial woes. FalconStor reported $8.1 million in revenue for the second quarter, down from $9.6 million in Q2 of 2015, with only $9.4 million in cash on hand. But Quinn said at the time that FalconStor was making solid progress selling FreeStor subscriptions to MSPs, enterprises and OEMs.
EMC’s XtremIO all-flash SAN is getting a file-system injection thanks to Dell Fluid File System (FluidFS).
Dell EMC previewed the NAS capabilities for XtremIO at Dell EMC World, saying they would be generally available by late 2017. FluidFS is a scale-out NAS technology that Dell acquired from Exanet in 2010 and used to add file capabilities to its Compellent and EqualLogic SAN arrays. But even before Dell acquired EMC for more than $60 billion, the development teams from XtremIO and FluidFS – both based in Israel – were collaborating on their integration.
Chris Ratcliffe, Dell EMC senior vice president of core technologies, jokingly referred to the joint development as a “black ops” operation. The integration will add NFS, SMB, Hadoop Distributed File System (HDFS) and NDMP to XtremIO’s current Fibre Channel and iSCSI block storage support.
As when Dell added FluidFS to Compellent and EqualLogic, XtremIO will require a separate piece of hardware to deliver file services. XtremIO CTO Itzik Reich called the appliance an extension to XtremIO rather than a full gateway, and said the XtremIO approach will not impact performance. He also said file storage will be managed through the same interface as block storage with “the same look and feel.”
Reich said the original design goal for XtremIO included adding data services in later iterations. “What’s in the market today is just the beginning,” he said of the product that EMC claims has more than 3,000 customers and $3 billion in revenue in three years on the market. He also said there will be a lot more added to the next-generation XtremIO, including more drives, higher capacity SSDs and software-defined storage capabilities.
“We were looking for ways to complement our scale-out architecture,” he said. “We wanted it to be more than just Fibre Channel. When we heard talk of a partnership, I gave Michael (Dell) a call and said this is a good project for us to add file services.”
Dell EMC this week announced plans to deliver an all-flash version of its Isilon scale-out NAS platform in 2017. Isilon is aimed at traditional scale-out NAS use cases such as media/entertainment, life sciences and Hadoop analytics. Ratcliffe said XtremIO’s NAS would be more for traditional SAN customers. “This is scale-out NAS for transactional environments that require sub-millisecond response times,” he said.
Reich estimated it would have taken at least five years to build file services from scratch into XtremIO. His team looked at file systems from EMC’s Unity unified and Isilon scale-out NAS but determined FluidFS fit better with XtremIO’s architecture.
“Unity doesn’t scale out,” he said. “Isilon scales out like nobody’s business, but it doesn’t provide the latency we need.”
EMC’s Unity, Isilon and VMAX All-Flash arrays already support 15 TB SSDs, but they won’t be available on XtremIO until the next generation. Reich said his team wants to make sure using the higher capacity drives will not impact performance. “People don’t realize, the larger the drive capacity gets, the worse the performance gets,” he said. “We are not willing to sacrifice our predictable performance.”
IBM marked the one-year anniversary of its Cleversafe acquisition with the launch of a “pay-as-you-go” cloud object storage service enabling customers to use the same technology on site and off premises.
IBM foreshadowed its plans to facilitate hybrid cloud deployments on Oct. 5, 2015, when it acquired Cleversafe. But until this month, IBM made available the Cleversafe object storage software for use only on-premises or in a dedicated environment in the IBM Cloud.
Russ Kennedy, vice president of product strategy and customer success at IBM, said IBM has done considerable work to extend its public cloud’s previously limited multi-tenancy capabilities to support millions of concurrent tenants and to integrate the core Cleversafe technology.
Kennedy said customers have the flexibility to store application data in the cloud and move it back on premises, or vice versa, if they choose. He said IBM is looking to provide more automation capabilities in the future, “where decisions are made based on utilization or access or certain parameters that may drive the workloads in one direction or another.”
IBM Cloud Object Storage services are now available in the U.S. and Europe in three configurations:
–Standard – Cleversafe-based high-performance offering for active workloads; supports object storage application programming interfaces (APIs) such as Amazon S3 and OpenStack Swift.
–Vault – lower-cost offering that targets archive, backup and other workloads where data is infrequently accessed.
–Dedicated – single-tenant IBM Object Storage running on dedicated servers in IBM Cloud data centers; available as an IBM managed service or a self-managed option.
Kennedy said SecureSlice technology from Cleversafe eliminates the need for customers to manage encryption keys. SecureSlice automatically encrypts each data segment before it is erasure coded and distributed. IBM Cloud Accesser technology can reassemble the data at the customer’s primary data center, and SecureSlice decrypts it.
IBM Cloud Object Storage has regional and cross-regional options. The cross-regional service sends sliced data to at least three geographic regions. The regional service stores data in multiple data centers in a specific region.
Kennedy said IBM operates close to 50 data centers worldwide, including 12 to 15 in North America. IBM Cloud Object Storage is due to become available in the Asia-Pacific region by year’s end, with other locations to follow in 2017, according to Kennedy.
IBM Cloud Object Storage pricing is on a per-GB, per-month basis, with additional fees for transactions. IBM’s on-premises object storage software can be licensed based on capacity or through a subscription model.
Scott Sinclair, a senior analyst at Enterprise Strategy Group (ESG) Inc., said a 2016 ESG poll of current enterprise Amazon Web Services (AWS) customers identified Microsoft Azure and IBM as the most viable competitors to AWS.
Sinclair said using the same object storage software on premises and off premises could provide advantages. He said storage vendors often differ in how they implement protocols, so users might have peace of mind with the same technology in both places. He said they also know what to expect for service and support, working with a partner that understands both their on-premises and off-premises needs.
“The more vendors that you have to manage in your IT organization creates work,” Sinclair said. “And that work requires people.”
Kennedy said the exponential growth of information is driving users to recognize the cost, scalability and management benefits that object storage can provide over traditional storage, especially when they need petabytes or exabytes of capacity.
“There are still headwinds for object storage,” he said. “Not all the applications in the world have the ability to write to object storage like they do to traditional file-based or block-based storage. But that’s changing. And it’s changing quite rapidly with the popularity of moving to the cloud.”