Storage Soup


August 11, 2016  6:46 AM

SCSI trade group claims new SAS has pluses over NVMe/PCIe

Carol Sliwa
SCSI

NVMe and PCIe solid-state drives (SSDs) may be a hot topic at this week’s Flash Memory Summit, but the SCSI Trade Association is trying to remind everyone that new serial-attached SCSI (SAS) technology is on the way.

Rick Kutcipal, president of the SCSI Trade Association and product planner at Broadcom, said he expects the upcoming “24 Gigabits per second” (Gbps) SAS device-connect technology – which actually has a maximum bandwidth of 19.2 Gbps – to see its first use with SSDs.

“The biggest advantages will be in solid-state memory,” Kutcipal said.

He said the SCSI Trade Association hopes to hold its first plugfest for so-called “24 Gbps” SAS in mid-2017. He expects host bus adapters (HBAs), RAID cards, and expanders to support the new SAS technology in 2018, with server OEM products to follow in 2019.

Kutcipal claimed the 19.2 Gbps bandwidth would have a 21.5% per-lane performance advantage over non-volatile memory express (NVMe) running on top of PCI Express (PCIe) 4.0. The maximum bandwidth for single-lane PCIe 4.0 is 15.8 Gbps, he said.
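The per-lane figure follows directly from the two bandwidth numbers Kutcipal cites; a quick sanity check in Python:

    # Checking the per-lane claim with the figures Kutcipal cites.
    sas_gbps = 19.2    # usable bandwidth of one "24 Gbps" SAS lane
    pcie4_gbps = 15.8  # maximum bandwidth of one PCIe 4.0 lane

    advantage = (sas_gbps / pcie4_gbps - 1) * 100
    print(f"SAS per-lane advantage: {advantage:.1f}%")  # 21.5%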

SAS typically uses one lane to the drive, and enterprise NVMe SSDs typically use four-lane PCIe, Kutcipal acknowledged. Four-lane PCIe would obviously be faster than single-lane SAS.

But Kutcipal said, “The lanes are not free. [They’re] actually very expensive, so the comparison has to be per lane. SAS can go x2 or x4 [lanes] to the drive.”

SAS uses the small computer system interface (SCSI) command set to transfer data between a host and a target storage device. SCSI was developed 30 years ago when hard disk drives (HDDs) and tape were the primary enterprise storage media. Manufacturers have continued to use serial-attached SCSI as a drive-connect with faster SSDs.

The SCSI Trade Association’s efforts to promote a new SCSI Express (SCSIe) interface to run SCSI commands over PCIe have largely fallen flat in comparison to the momentum behind NVMe with PCIe-based SSDs.

The NVM Express industry consortium developed NVMe as a lower-latency alternative to SCSI. NVMe streamlines the register interface and command set for use with faster PCIe-based SSDs and post-flash technologies, such as Intel-Micron’s 3D XPoint.

“SAS is inherently scalable, and NVMe is not,” Kutcipal said. “NVMe will scale to tens of devices, and it’s pretty arduous scaling, while SAS can go to thousands of devices. And there are arrays out there today that are thousands of devices.”

Kutcipal said NVMe cannot solve PCIe’s scaling challenges.

“The limitation in the scalability of NVMe as a device connect is really inherent in PCIe, not in NVMe,” he said. “That’s a big fundamental limitation of NVMe. It relies on PCI Express as its transport in the device connect world.”

SAS can serve as a device/drive connect as well as a storage networking technology. But Kutcipal said the dominant role for SAS is connecting a host bus adapter (HBA) or RAID card to an SSD or hard disk drive (HDD). SAS has distance limitations for storage networking, limiting its use to SANs inside the data center, he said.

The upcoming SAS specification has two parts: the SAS-4 physical layer and the SAS Protocol Layer (SPL)-4. The SPL-4 specification is expected to be complete and ready for use later this year, according to Kutcipal. He said SAS-4 would lag SPL-4 by a quarter.

In addition to the speed bump, new features on the way with next-generation SAS include Forward Error Correction, to ensure data integrity, and continuous adaptation, to enable the SAS transmitter to operate optimally, even if the temperature or operating voltage changes, Kutcipal said.

August 9, 2016  6:54 AM

HCI vendor Pivot3 reports more customers using more apps

Dave Raffo
Hyper-convergence, Pivot3

Pivot3 more than doubled its revenue in the first half of 2016 over 2015, which its CEO attributes to customers buying its hyper-converged appliances as a platform rather than for single applications.

Pivot3 CEO Ron Nash said Pivot3’s revenue increased 103% over the past six months as it added more than 400 customers. That includes customers Pivot3 added through technology it acquired when it merged with flash storage vendor NexGen Storage in January, but Nash said revenue from NexGen made up less than 10% of Pivot3’s revenue in the quarter. The bulk of the growth came from customers expanding their hyper-converged workloads.

Nash said until the last six months or so, almost every Pivot3 system was used for a single application. But customers are now adding other apps to their hyper-converged appliances, and new customers are buying hyper-converged for more than one app from the start.

“Once customers start using it, they say ‘This platform stays up, it’s easy to operate and has a small footprint,’ and then they start loading more applications on it,” Nash said. “That’s the big change we’re seeing. Enough people have tried hyper-converged for a single app, and are now starting to buy it as a platform.”

He said 28% of Pivot3’s new sales in the first half of 2016 were for multiple applications from the start. The average spend of customers with multiple use cases is more than 500% higher than customers with a single data center application use case. He pointed to a customer in the public transit industry with 6 PB of data on 250 nodes.

The most common applications Pivot3 customers run are virtual desktops, backup, video surveillance and databases. Nash said the integration of NexGen’s quality of service with Pivot3’s hyper-converged appliances should prove particularly useful for multiple applications.

Despite the spike in sales, Nash said Pivot3 still rarely competes head-to-head with other hyper-converged products. He said three-quarters of Pivot3’s deals are against traditional server and storage products. The two best known hyper-converged products – Nutanix’s NX appliances and VMware Virtual SAN (VSAN) software — don’t show up in many competitive deals but do have an impact on Pivot3 by creating market awareness.

“Nutanix is out there spending tons of money educating the market on hyper-converged infrastructure, which is fantastic for us,” Nash said. “I hope they keep advertising.”

As for VMware, Nash said he suspects it has a lot more VSAN customers than actual sales. “VMware doesn’t quote revenue, they quote customer numbers,” he said. “That’s what you say when you’re giving it away.”

Pivot3 also added Bill Stover as chief financial officer. Stover spent 18 years at Micron Technology, serving as vice president of finance and CFO of the public company. Nash said Stover’s background with a public company will help Pivot3 — still a private firm — grow into a more mature company.


August 5, 2016  9:14 AM

Pokémon Go’s lessons for storage pros

Dave Raffo
Pokemon GO

The Pokémon Go craze – mainly its augmented reality capability and server crashes – contains lessons for storage administrators.

Pokémon Go demonstrates how next-generation applications can drive cloud adoption as well as the pitfalls of handling that rapid adoption, according to Varun Chhabra, director of product marketing for EMC’s Advanced Software Division.

“A lot of the applications we use today already use geo-location,” Chhabra said. “What is interesting about Pokémon Go is the scale of usage when combined with geo-location tracking and data. That makes it especially challenging. Tens of millions of people are playing it, and the numbers are still going up.”

Chhabra said while Pokémon Go developer Niantic has not disclosed its back end or storage infrastructure for the game that is attracting millions of users, it has clearly mastered the use of location-based applications. At the same time, it has been plagued by server crashes – delaying the launch of the game in Japan – and security issues that suggest it is growing too fast for its own infrastructure to keep up.

“When we talk about cloud-native apps, the assumption is, everything will work out OK if you have the infrastructure,” he said. “But you still need to manage data, manage the scale of users and figure out where the bottlenecks are.”

There is speculation that Niantic is using NoSQL or PostgreSQL as its back-end database and Google Apps for its Platform-as-a-Service (PaaS) layer. But it has suffered server crashes that cannot be traced to any public cloud problems.

“It seems like they’re using the public cloud today, but even then they’ve had a fair share of outages even when there have been no outages in the public cloud,” Chhabra said. “So you can still have challenges with the public cloud. It’s how you write the application, and how you’re handling access for an avalanche of data.”

Chhabra said commercial enterprise application developers can copy Pokémon Go’s success. For instance, retail stores can create apps to show shoppers in a store where a specific item is located. Or real estate agencies can develop an app with pop-ups showing which houses are for sale, where they are located, and their specs. These applications would tap into data that already exists.

“It should be easy to do, now that people are more comfortable holding up their screens without being embarrassed,” Chhabra said. “It’s more about creating an immersive user experience.”

He pointed to existing storage technologies, such as object storage and data lakes that use analytics, as tools that can be used in creating these immersive applications. But the development process is different from the one IT organizations are used to.

“You can’t throw the same approach at building an application for a geo-location mobile app as you do for traditional apps,” Chhabra said. “A lot of customers we talk to are talking about building apps from the ground up and learning how to use microservices.

“What is your storage platform doing for you natively to relieve the burden on developers? We’ve seen way too many examples of applications that don’t scale, and they crash the servers. Most businesses don’t expect to scale apps this fast, but they still have to test. Pokémon gets a pass, but most businesses don’t.”


August 4, 2016  6:02 PM

EMC container plugin supports any block storage

Garry Kranz

EMC has contributed an open source Apache Mesos container volume driver that supports any network-attached block storage system equipped with a Docker plugin, including storage from EMC competitors.

The EMC container plugin integration for Docker is a joint project of the Apache Software Foundation and EMC {code}, part of EMC’s Emerging Technologies Division. It builds on previous EMC container initiatives. The Docker Volume Driver Isolator module exposes native Docker functionality through a command line interface. It is part of the Apache Mesos distribution released in July.

“We’re making it possible for the community to do multi-tiered persistent storage within Docker, which up to now has been a struggle,” said Josh Bernstein, a vice president at EMC {code}.

Mesos orchestrates deployment of containers on premises or in the cloud. The Apache Mesos cluster manager presents abstracted data center compute, memory and storage in an aggregated resource pool. Mesos relies on kernel-level features to isolate resources as applications are shared across a distributed framework.

Mesos lets users create a persistent volume to run a specific task from reserved disk. The volume persists on a node independently of the task’s sandbox and is returned to the orchestration framework when the task is complete. If necessary, new or related tasks launch a container that consumes resources from the previous task. Docker recommends Apache Mesos as an orchestration layer to implement large clusters of storage containers.

EMC’s container module communicates directly with Docker volume plugins, allowing developers to request a persistent volume from any block storage running under Mesos. Mesos then passes the volume request to the EMC module, which searches available storage to identify the volume and deliver it to the destination container host.
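As a rough sketch of what such a request looks like from the framework side, here is the container section of a Mesos task definition asking for an external volume through a Docker volume plugin, expressed as a Python dict. The field layout follows the docker/volume isolator documented around the Mesos 1.0 release; the REX-Ray driver name and the volume name are illustrative assumptions:

    import json

    # Sketch of the "container" section of a Mesos task definition that
    # requests an external volume through a Docker volume plugin. Field
    # names follow the Mesos docker/volume isolator of the 1.0 era; the
    # rexray driver and the volume name are assumptions.
    container = {
        "type": "MESOS",
        "volumes": [
            {
                "mode": "RW",
                "container_path": "/data",  # mount point inside the container
                "source": {
                    "type": "DOCKER_VOLUME",
                    "docker_volume": {
                        "driver": "rexray",       # any installed Docker volume plugin
                        "name": "postgres-data",  # volume to create or attach
                    },
                },
            }
        ],
    }

    print(json.dumps(container, indent=2))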

“Before this feature, while users could use persistent volumes for running stateful services, there were some limitations. First, the users were not able to easily use non-local storage volumes. Second, data migrations for local persistent volumes had to be manually handled by operators. The newly added Docker volume isolator addresses these limitations,” according to an Apache Software blog posted July 27.

Enterprise adoption of Docker is picking up, although several hurdles remain before containers are as ubiquitous as virtual machines. The Apache Mesos integration foreshadows EMC’s open source libStorage project, an extensible storage abstraction and provisioning framework presented as a common package across heterogeneous storage platforms and container runtimes.


August 4, 2016  9:25 AM

Ctera builds new data migration, ILM and security capabilities into its platform

Sonia Lelii

Ctera Networks recently unveiled enhancements to its Enterprise File Services Platform that include the ability to migrate data from an on-premises cloud to the public cloud without disrupting service, and support for information lifecycle management (ILM) tools from Amazon S3 and NetApp StorageGRID object storage.

The platform also has been upgraded to support Security Assertion Markup Language (SAML) 2.0 to centralize identity management for single sign-on (SSO) capabilities to access files and backups.

The new data migration tool targets customers who have not yet settled on a cloud strategy, want to start on-premises and need the flexibility to eventually move to the public cloud.

The Ctera Enterprise File Services Platform integrates enterprise file sync and share, endpoint and data protection, along with branch and remote office storage. The new capability allows users to migrate workloads across storage nodes from any on-premises location to the public cloud.

“You can start with Ctera in (your) data center and then you can move to a public cloud. It moves very quickly. It provides flexibility,” said Jeff Denworth, Ctera’s senior vice president of marketing. “We built this tool because no one wants to be locked in. Now, you have a marketplace of options.”

The new ILM capability gives users a way to use the Ctera platform to tier high-performance workloads onto NetApp StorageGRID and infrequently accessed data to Amazon S3. It leverages ILM tools from Amazon S3 and NetApp StorageGRID to intelligently place files in cloud storage tiers according to their application profile.

Long-term archive and backup data can be directed to low-cost storage tiers, such as Amazon Web Services’ (AWS) S3 Standard-Infrequent Access tier (Standard-IA), while interactive data, such as enterprise file sync-and-share workloads, can be stored on storage tiers that offer more cost-efficient ingress and egress capabilities.
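For context, this is what an ILM rule looks like on the Amazon side. A minimal boto3 sketch of an S3 lifecycle policy that tiers objects to Standard-IA and then Glacier; the bucket name, prefix and day thresholds are illustrative assumptions, not Ctera’s actual policy:

    import boto3

    # Minimal sketch of an S3 lifecycle (ILM) rule of the kind Ctera can
    # lean on: archival objects move to Standard-IA after 30 days and to
    # Glacier after a year. Bucket, prefix and thresholds are assumptions.
    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-archive-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-archival-data",
                    "Filter": {"Prefix": "archive/"},
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"},
                        {"Days": 365, "StorageClass": "GLACIER"},
                    ],
                }
            ]
        },
    )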

“We tag data as it goes through our system as either interactive or archival, and then it’s diverted to a general-purpose tier like S3,” Denworth said.

On the security side, Ctera now supports identity federation over SAML 2.0, so users can rely on centralized corporate identity management and gain SSO capabilities for file and backup access. In conjunction with support for this standard, Ctera is now compatible with leading SSO offerings, including Microsoft Active Directory Federation Services 2.0, Okta, OneLogin and Ping Identity.

“(The platform) has been integrated with modern identity tools so users sign in with SSO,” Denworth said.

The Ctera Enterprise File Services Platform enables enterprise IT to protect data and manage files across endpoints, offices, and the cloud – all within the organization’s on-premises or virtual on-premises cloud storage.

The platform is powered by Ctera’s cloud service delivery middleware that users leverage to create, deliver, and manage cloud storage-based services such as enterprise file sync and share, in-cloud data protection, endpoint and remote server backup, and office storage modernization.


August 2, 2016  8:13 PM

Seagate, WD see exabyte growth with high-capacity enterprise HDDs

Carol Sliwa
Seagate, Western Digital

Unit shipments of hard disk drives (HDDs) may be on the decline, but the exabytes that Seagate Technology and Western Digital are shipping with their high-capacity enterprise HDDs are spiking.

Seagate noted during its earnings call today that HDD storage capacity hit a record 61.7 exabytes (EB) during the fiscal fourth quarter, on the heels of 60.6 EB in Q2 and 55.6 EB in Q3. Average per-drive capacity soared to a record 1.7 TB in Seagate’s fiscal Q4, which ended on July 1.

Steve Luczo, Seagate’s chairman and CEO, said demand was stronger than expected from cloud service providers (CSPs) in the fourth quarter. He noted that, on a year-over-year basis, average per-drive capacity grew 29%. In fiscal 2016, Seagate shipped 233 exabytes, including 70 exabytes for its “business-critical” product line – a 28% increase over the prior year.

Western Digital last week claimed to achieve overall exabyte growth of 12% on a year-over-year basis, largely driven by shipments of capacity enterprise HDDs to enterprise customers, according to Michael Cordano, president and chief operating officer. He said the growth of WD’s capacity-focused enterprise product line was 47% thanks to the ongoing success of high-capacity helium-based HDDs.

WD last week reported revenue of $13.0 billion for its last fiscal year, down 11% from the prior year’s $14.6 billion, and net income of $257 million for fiscal 2016. WD’s fourth-quarter revenue was $3.5 billion, and the company reported a $351 million loss.

Seagate Technology met or exceeded analysts’ expectations with $2.7 billion in revenue for its fiscal fourth quarter, largely driven by sales to cloud service providers. Seagate’s total revenue for fiscal 2016 was $11.2 billion, down 18.8% from the prior year’s $13.8 billion. Net income for the year was $248 million.

Both Seagate and Western Digital have been trying to diversify beyond their HDD businesses. WD last year acquired flash vendor SanDisk for $19 billion and object storage vendor Amplidata. Other past acquisitions include HDD competitor HGST, SSD maker sTec, all-flash array startup Skyera, PCI-flash vendor Virident Systems and flash-cache specialist VeloBit.

Seagate’s string of acquisitions includes Dot Hill Systems for $600 million last year, Avago’s LSI flash business in 2014 for $450 million and high-performance computing storage specialist Xyratex in 2013 for $374 million. Seagate sold off its EVault data protection business late last year to Carbonite for a mere $14 million in cash.

Luczo said Seagate completed the integration of Dot Hill and plans to launch converged storage products, including hybrid and all-flash arrays, later this year. He also noted that 12 TB helium near-line enterprise test units would be available this quarter for customer evaluation. Luczo said Seagate would refresh most of its high-volume capacity points over the next several quarters.

But Luczo cautioned that the growth rate for storage in the near term would likely fluctuate from quarter to quarter. He said the influence of the cloud service providers could be tricky to predict.

Near-line enterprise HDDs were hotter last quarter than Seagate anticipated they would be. Luczo said Seagate’s 8 TB enterprise HDD was the leading revenue SKU, as overall enterprise HDD revenue increased to 45% of total HDD sales. PC client shipments accounted for 25% of total HDD revenue.

Seagate said that although unit shipments of its HDDs have dropped 15% over the past five fiscal years, exabyte shipments have increased 112% and average capacity per drive has soared 133%. Luczo attributed the trends to the shift from client-server to mobile cloud architectures. He said most of the exabyte-scale growth relates to high-definition streaming content “where massive data ingest and sequential write operations” are critical.

Western Digital CEO Steve Milligan last week cited a key near-term priority as the transition to 3D NAND flash. He also noted that the company completed the alignment of the product and technology roadmaps for legacy WD, HGST and SanDisk products and opened a new wafer manufacturing facility in Japan with Toshiba.

WD expects 3D NAND wafer capacity to approach 40% of total NAND capacity by the end of 2017, according to Cordano.

Milligan said WD has been scaling down HDD manufacturing capacity, in terms of both facilities and head count, to react to the decline in the HDD market. He said WD had taken out 20% of its facilities and 25% of its head count during the last two years. Milligan said WD plans further reductions of up to one-third.


July 29, 2016  2:05 PM

OwnBackup CEO: Stay safe in the cloud

Paul Crocetti
Cloud Backup

Your cloud data may not be as secure as you think.

No matter where your data lives, you should put the same level of thought and care into its protection, according to Sam Gutmann, CEO of cloud-to-cloud backup and restore vendor OwnBackup. He pointed to the recent Salesforce outage that resulted in lost data.

“It really raised awareness,” Gutmann said. “There’s a myth that if it’s in the cloud, it’s safe.”

OwnBackup offers products to back up Salesforce data, ServiceNow data and social media accounts. Gutmann said OwnBackup allows users to compare two snapshots to see what has changed or been deleted, and then restore the database to the way they want it. The company’s vision is to become a single pane of glass for backup and protection of software as a service (SaaS) and platform as a service (PaaS) data stored in the cloud.
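As a toy illustration of the snapshot-compare idea (a generic sketch, not OwnBackup’s implementation), diffing two point-in-time copies of a record set surfaces what was added, changed or deleted:

    # Toy illustration of comparing two snapshots of a record set to see
    # what changed or was deleted. Generic sketch, not OwnBackup's code.
    def diff_snapshots(old, new):
        deleted = sorted(old.keys() - new.keys())
        added = sorted(new.keys() - old.keys())
        changed = sorted(k for k in old.keys() & new.keys() if old[k] != new[k])
        return deleted, added, changed

    monday = {"acct-1": {"owner": "Ann"}, "acct-2": {"owner": "Bo"}}
    tuesday = {"acct-1": {"owner": "Cy"}}  # acct-2 gone, acct-1 modified

    deleted, added, changed = diff_snapshots(monday, tuesday)
    print(deleted)  # ['acct-2']  -> candidates to restore
    print(changed)  # ['acct-1']  -> field-level changes worth reviewing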

OwnBackup plans to add support for another application this year and will likely add a couple more next year, but it has not determined which ones yet. Microsoft Office 365 and Google Apps are common applications supported by vendors that protect data in the cloud.

Gutmann said there is no hurry to expand the product line because “Salesforce is huge.” Customers are responsible for their own data in Salesforce, Gutmann said. Salesforce recommends customers use one of its partners’ “partner backup” products — which include OwnBackup — to ensure the safety of their data.

OwnBackup had customers affected by the outage who were able to restore data.

Many companies are moving business-critical data to the cloud. But on-premises platform requirements and vulnerabilities are also present in the cloud. Take the example of an employee on his way out the door who deletes important files.

“That threat is there no matter where the data is,” Gutmann said.

To that end, Gutmann offered a few more general tips for cloud-to-cloud backup and restore:

  • “Backup’s nice, but it all comes down to recovery,” so use a product that is strong in both disciplines
  • “Make sure the vendor understands the intricacies of your data,” and recognizes how complicated your setup is
  • Test your backup — verify you have a product that works

OwnBackup, which has sales and marketing in the United States and research and development in Israel, claims about 300 customers, ranging from small businesses to large manufacturing companies and universities. Gutmann, who helped found Intronis (now part of Barracuda) in 2003 and has been in the backup field for 16 years, said OwnBackup has about 20 employees, but that number should be closer to 30 by the end of the year.


July 29, 2016  1:59 PM

FalconStor’s dating, seeking technology match

Dave Raffo
FalconStor

There is a trend in storage for smaller vendors to band together and grow through mergers, for far different reasons than those bringing Dell and EMC together. These mergers of small companies are driven by a sharp decline in venture funding, in acquisitions by larger firms and in opportunities to go public. That leaves merging as their quickest route to growth.

In 2016, we’ve seen Pivot3-NexGen, Virtual Instruments-LoadDynamiX and Gridstore-DCHQ (now called HyperGrid) mergers that combined technologies and workforces. FalconStor could be headed for one of these mergers as well.

Although FalconStor is a public company, it faces many challenges of private companies. It has little revenue and few funding options to fuel growth. If FalconStor wants to add features to its FreeStor product line, it could do so cheaper and faster by merging with a vendor that already has that technology. FalconStor already picked up real-time predictive analytics technology through a licensing and co-development deal with Cumulus Logic in 2015.

FalconStor CEO Gary Quinn hinted that a merger, acquisition or at least another technology partnership could be in the works during his quarterly earnings call this week.

“During the first half of 2016, FalconStor has been approached by a number of privately backed and publicly traded companies who are looking to find ways to partner or transform themselves into a new entity,” Quinn said. “As many of you know, a lot of the VC-backed privately held companies in the storage software category during the last several months have been unable to obtain additional capital to support their vision. They also do not have any commercially viable operations to sustain themselves. Some of those technologies would be excellent additions to the FreeStor product offering.”

He said FalconStor is reviewing opportunities for “partnerships, technology licensing or possible new combinations.”

When asked about these comments later in the call, he pointed out object storage may be a valuable addition to block-storage FreeStor. He said there are object storage vendors “who are experiencing difficulties at the moment because they don’t really have a commercially viable market yet, due to the fact that that market hasn’t matured enough.”

FalconStor’s $8.1 million in revenue last quarter was down from $9.6 million a year ago, and it lost $3.5 million as it continues to transition to its FreeStor product line of data management and protection software.

Quinn said FalconStor is making good progress selling FreeStor subscriptions to enterprises, managed service providers and OEMs. But FalconStor has only $9.4 million in cash, so it has to find a way to reverse its losses quickly.

“We believe we have the right product in FreeStor,” Quinn said. He said a FreeStor upgrade in October will focus on service provider requirements, public cloud connectivity and enterprises who want to build self-service capabilities. That release will extend FreeStor’s real-time analytics from the data center to edge devices.

“We are very cognizant of the road ahead both from the opportunity for FreeStor and the cost to achieve it,” he said. “We are diligently looking for ways to increase our quarterly billings as well as preserving our precious cash.

“We believe that we’re getting closer and closer to break-even and we continually adjust the business as necessary.”


July 29, 2016  9:55 AM

Quantum results indicate ‘stable’ backup market

Dave Raffo
Data protection, Quantum

It’s no secret that storage sales have been hard to come by over the last year or two. Most of the large vendors have seen product revenue decrease, or increase only slightly, quarter after quarter. But the data protection market appears to be on the upswing, or at least it’s no longer on the downswing.

Backup software vendors Veeam Software and Commvault reported positive earnings growth for last quarter, and backup hardware vendor Quantum this week said its revenue increased and exceeded its previous forecast.

Quantum’s revenue of $116.3 million last quarter increased $5.4 million, or five percent, over the same quarter last year. The fastest growth came from Quantum’s StorNext scale-out file system, which grew 11% to $30.8 million. But that’s less than 30% of Quantum’s overall revenue. Its data protection revenue increased six percent to $76.9 million. That includes DXi disk backup revenue of $21.5 million (up 24%). Tape automation revenue dropped four percent to $42.6 million, but, hey, that’s tape. Tape OEM revenue did increase six percent, and tape devices and media revenue actually grew 17% to $12.8 million.
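The percentages hang together if you work backward from the reported dollars; a quick check in Python:

    # Working backward from the reported dollars to check the percentages.
    revenue_now = 116.3            # $M, last quarter
    yoy_increase = 5.4             # $M
    revenue_prior = revenue_now - yoy_increase

    print(f"Year-ago quarter: ${revenue_prior:.1f}M")     # $110.9M
    print(f"Growth: {yoy_increase / revenue_prior:.1%}")  # ~4.9%, i.e. "five percent"

    stornext = 30.8
    print(f"StorNext share: {stornext / revenue_now:.1%}")  # ~26.5%, under 30%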

“We believe it’s indicative of a more stable traditional storage market, including tape backup, than the industry has seen over the last couple of years,” Quantum CEO Jon Gacek said of the results on the company earnings call. “One quarter does not a trend make, but as we look at what’s going around it, the data protection side has been solid.”

Quantum CFO Fuad Ahmad added: “As mentioned in previous calls, we’ve been impacted by overall market weakness in general purpose storage including data protection over the past year. However, we believe the market has begun to stabilize and our results reflect that.”

We don’t know yet if the overall market is turning, or vendors such as Veeam, Commvault and Quantum are taking business from larger rivals. There is no visibility from the largest data protection vendors. EMC did not break out data protection revenue in its bare-bones earnings report as it prepares to merge with Dell. IBM does not give specific data protection figures, and Veritas is now a private company and does not report earnings.

Quantum received a boost from several large StorNext deals last quarter, mainly in video surveillance and media and entertainment. Gacek spoke of an $800,000 follow-on purchase by a large consumer electronics company and a $200,000 deal with a virtual reality company. He said there was another $200,000 in Asia and a $150,000 installation at a government medical center. And a previously announced public cloud project deal is expected to bring Quantum at least $20 million this year.

Quantum forecasts a revenue range of $118 million to $122 million this quarter. Gacek said he expects $500 million in revenue this fiscal year (last quarter was Quantum’s first fiscal quarter) and he expects scale-out to become 35% to 40% of total revenue.

The vendor lost $3.8 million, down from a $10.8 million loss a year earlier. And it generated $5.2 million in cash from operations, compared with using $13.6 million in cash a year ago.


July 28, 2016  6:20 PM

Spectra Logic bolsters BlackPearl storage archive for Amazon cloud

Garry Kranz
Storage

Backup tape specialist Spectra Logic has upgraded the operating software for its BlackPearl Deep Storage Gateway appliance, allowing petabyte-scale enterprises to build a storage archive using multiple Amazon Web Services (AWS) public cloud tiers.

The Boulder, Colo.-based vendor already supports the AWS Simple Storage Service (S3) by virtue of its S3-compatible Deep Storage Interface (DS3). The 3.x software version adds Amazon Glacier cold storage, S3 Standard-Infrequent Access and Amazon Elastic Compute Cloud (EC2) block storage as destination targets within the BlackPearl tape gateway.

“We built the infrastructure to support Amazon S3. This gives us a hybrid cloud storage archive to go along with the BlackPearl private cloud. We let a customer write data out to any of Amazon’s three storage tiers,” Spectra Logic CTO Matt Starr said.

“Our hybrid cloud allows you to keep a local copy, either on disk or tape or both, and then only in a dire emergency would you have to pull it back from the cloud.”

BlackPearl is tape-based object storage that uses the Linear Tape File System (LTFS) on the back end. The hybrid storage archive appliance caches incoming writes on disk and sends them to different replication targets as page sizes approach 100 gigabytes.

BlackPearl’s DS3 interface is modeled after Amazon S3. It uses REST-based command sets to index each tape cartridge with its own file system. Customers can replicate between BlackPearl storage at different sites.
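Because DS3 is modeled on Amazon S3, the request pattern is familiar. Here is a hedged sketch using a generic S3 client pointed at an appliance endpoint; the address, credentials and bucket name are assumptions, and in practice Spectra Logic provides dedicated DS3 SDKs, since the protocol adds bulk-job semantics that plain S3 clients lack:

    import boto3

    # Hedged sketch of the S3-style request pattern against a BlackPearl
    # appliance. Endpoint, credentials and bucket are assumptions; Spectra
    # Logic ships dedicated DS3 SDKs for production use.
    ds3 = boto3.client(
        "s3",
        endpoint_url="https://blackpearl.example.com",  # assumed appliance address
        aws_access_key_id="DS3_ACCESS_ID",
        aws_secret_access_key="DS3_SECRET_KEY",
    )

    ds3.put_object(
        Bucket="media-archive",
        Key="project/frame-0001.dpx",
        Body=b"...",  # lands in the disk cache, then migrates to tape
    )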

Expanded Amazon S3 integration lets customers replicate data from Spectra Logic devices to AWS S3 storage. Archive data can be automatically restored from Amazon Glacier to local tape or disk. The upgrade supports multiple backup and disaster recovery copies in the cloud and across Spectra Logic’s LTFS tape libraries, Online Archive active archive appliance and ArcticBlue object-based nearline disk storage.

Archive management and retrieval are orchestrated via the Advanced Bucket Management policy manager. Other than Amazon fees, Starr said the Spectra Logic software enhancements are available at no cost to customers with valid maintenance support contracts.

