IBM still hasn’t given up on tape.
The company today announced its new 6 TB IBM Linear Tape-Open Ultrium 7 (LTO-7) drive, which performs at 300 MB per second and has double the capacity of the previous generation. The drive will be integrated into the IBM TS4500 tape library, which launched last year, and is the basis of IBM’s new TS2270 tape drive for backup and archiving.
“Tape still is the cheapest solution out there,” said Eric Herzog, vice president of product marketing at IBM. “Eighty to 90 percent of data generally is not accessed after 90 days. You don’t want to put that stuff on primary storage if you only need to protect it for the first 90 days.”
The LTO-7 technology is also designed to support data encryption; the hardware encryption and decryption core and control core reside in the drive.
Herzog said LTO-7 allows the TS4500 to scale to 347.5 PB of storage in 18 frames while using 43 percent less floor space than a high-density disk system. The library can store up to 5.5 PB of data in a single ten-square-foot library, three times the capacity of the IBM TS3500 tape library.
The system grows by adding frame models, and the storage footprint can be reduced by using 10U of rack space on top of the library for Fibre Channel switches, tape data movers or IBM Spectrum Archive nodes. The TS4500 tape library is designed for mid-sized and large enterprises dealing with high data volumes and growth in their data centers.
The IBM TS2270 provides physical storage capacity of up to 15 TB, with data transfer performance of up to 300 MBps through 6 Gbps SAS interface connectivity. The TS2270’s SAS interface can connect to a wide spectrum of open-system servers, and the drive can be managed with tape management software such as IBM Spectrum Protect or third-party storage applications.
IBM will make LTO-7 tape drives available on Oct. 23, and availability varies by automation platform. Enhancements to the TS4500 tape library will be available on Nov. 20, 2015.
The EMC rumor machine is in full force again today. This time it’s Dell that is reportedly interested in buying all or some of EMC.
Of course, if the sources for these rumors were 100% reliable, then EMC would already be part of Hewlett-Packard or VMware or Cisco, or it would own all of VMware instead of 80%. Even the Dell rumors are all over the place, with some saying Dell wants all of EMC and others putting specific pieces of EMC in Dell’s crosshairs.
What all these rumors tell you is that EMC is exploring many options in the wake of pressure from activist investors led by Elliott Management. Elliott wants EMC to break up the federation of companies that include EMC Information Infrastructure (the storage group), VMware, RSA, Pivotal and smaller pieces. EMC executives have argued that the federation model works best, and they clearly want to keep VMware most of all.
Elliott’s agreement to let EMC raise its stock price on its own expired in September without the desired result (although the Dell rumors have raised EMC’s share price). Now the activist investors are looking for EMC to make a significant move.
It’s unlikely that Dell can buy all of EMC when a few years ago it couldn’t afford to acquire 3PAR after HP got into the bidding. EMC’s valuation of $50 billion is twice as much as Dell’s, and Dell still has $11.7 billion in debt from when it went private in 2013. HP might also jump in and try to outbid Dell and derail its hopes of buying EMC.
There are pieces of EMC that Dell could use, though. Any server vendor would want VMware. Much of the EMC Federation strategy revolves around VMware’s virtualization and cloud technologies, however, and a sale of VMware would be a major loss for EMC.
More interesting on the storage front, one report said Dell might buy EMC’s VNX storage systems business. VNX would fill the gap in Dell’s storage portfolio that it originally wanted 3PAR to plug. Dell acquired Compellent after losing out on 3PAR, but Compellent’s arrays don’t reach as high into the enterprise as VNX systems.
The success of EMC’s XtremIO all-flash array might prompt it to part with VNX, which is part of EMC’s legacy storage portfolio that has experienced little or no growth in the past year. VNX arrays include old Clariion technology, and Dell used to sell Clariion under a partnership with EMC that ended in 2011. But why would Dell want a VNX business that has sluggish growth? There is also the chance that Dell would buy out all of EMC’s storage, although even that could be too pricey.
I expect we’ll see news on an EMC merger or spinout by Oct. 21 when EMC reports its quarterly earnings. I don’t think even EMC or its partner in any potential deal even knows yet what that news will be. But it doesn’t sound like the storage giant has ruled out much so far.
Veritas completed its operational separation from Symantec last Friday, and this week made a minor upgrade to its flagship NetBackup software with a focus on Amazon Web Services (AWS) and NetApp environments.
Operational separation means the backup company and its parent security vendor are operating as separate organizations, ahead of the $8 billion sale of Veritas to the Carlyle Group. That sale is expected to close in January.
NetBackup 7.7.1 includes a connector to Amazon Simple Storage Service Standard-Infrequent Access (S3 Standard-IA) that Amazon launched last month. The new Amazon service is seen as a tier that will fit between production and archive data.
Veritas also extended NetBackup support to the AWS Commercial Cloud Services (C2S) region, a secure cloud service that is part of the AWS GovCloud.
NetBackup 7.7.1 also supports cluster-aware Network Data Management Protocol (NDMP) backups for NetApp clustered Data ONTAP, along with orchestration of snapshot and replication operations using NetApp’s SnapVault and SnapMirror. The backup software now includes an Accelerator for NDMP that supports NetApp filers. NetBackup Accelerator is designed to complete full backups in the time it takes to do incremental backups.
The release follows NetBackup 7.7, which launched in July with an emphasis on cloud support.
Simon Jelley, VP of backup management for Veritas, said the point-upgrade is part of the vendor’s new strategy of making quarterly releases to support new applications on the market. He also said the connectors – added in version 7.7 for Amazon S3, Google, Verizon, Cloudian and Hitachi Data Systems clouds — have been popular with customers looking to replace disk-to-disk-to-tape backups by using the cloud instead.
“They’re using the cloud as a long-term archive tier as an alternative to tape,” Jelley said. “It’s more efficient for recovery because they don’t have to recycle tapes. And cloud archiving is becoming more affordable with the [Amazon] Infrequent Access tier, Amazon Glacier and Google Nearline.”
With its focus on the cloud, you would expect Veritas to add cloud-to-cloud backup for applications such as Microsoft Office 365, Salesforce.com and Google Apps. EMC Spanning, Datto Backupify and Asigra’s Cloud Backup are among those doing cloud-to-cloud backup, but Veritas has not gone there yet.
“We have not seen large enterprises move there, but it’s something we’re looking at,” Jelley said of cloud-to-cloud backup.
FalconStor signed a licensing and co-development agreement with Cumulus Logic that gives FalconStor exclusive use of Cumulus Logic analytics code.
Cumulus Logic is still in stealth, but has been developing an analytics engine that allows centralized reporting across heterogeneous storage systems. It will collect data from storage and applications, present historic and real-time reports and help maintain management policies for storage and servers. It will present data in web-based dashboards that can also be accessed through mobile devices.
The analytics will be built into FreeStor as part of the base product for no extra charge, FalconStor CEO Gary Quinn said.
“I think customers will buy FreeStor just for the analytics,” Quinn said. “They will want the pure ability to learn more about the environment and make decisions.”
FalconStor expects the analytics to be available around March or April of 2016.
FreeStor’s management server already provides configuration information and does monitoring and reporting. The Cumulus intelligence will add predictive analytics and allow it all to run in an Amazon cloud instance. Quinn said the new information will help customers plan for capacity, meet SLAs and predict the health of their storage systems. He expects service provider customers to use it in the cloud while large enterprises may prefer to keep the repository on-premise.
“Analytics have been around a long time, but a favorite reason to buy Nimble and Pure is the nice information you receive about your array,” Quinn said. “FreeStor is dedicated to a heterogeneous environment, and we think that [analytics] capability needs to go horizontal across the industry.”
The Cumulus code will pull all information from FreeStor Storage Servers, which use REST APIs to gather data from storage arrays and servers. “We collect a tremendous amount of information,” Quinn said. “We are now applying a smart rules engine to analyze all that data we’re receiving. We will present that in a simple Web browser or mobile application. You can take actions based on whether you’re achieving your SLAs, running out of capacity or having performance issues.”
Unlike Nimble and Pure, FalconStor will not collect customer information in its own cloud. “Nimble and Pure customers can log in and gain insight into how other customers are doing,” Quinn said. “We could do that, but that’s not our first iteration.”
Quinn said the analytics will make FreeStor more valuable to OEM partners. FalconStor has announced OEM deals with X-IO and Kaminario, and can bring its analytics to other vendors’ arrays. Quinn said the analytics are especially helpful for companies that want to make sure their flash storage is optimized for the best performance. FreeStor was developed in collaboration with flash array vendor Violin Memory.
“FreeStor Storage Server sits in the data path,” he said. “We see IOPS, latency and bottlenecks in the data path, and can even capture log information if you’re encountering hardware difficulties.”
A recent discussion with a client got me thinking about precipitating events that cause IT professionals to “put their house in order” regarding the information they store. In this case, there was a new all-flash storage system acquired for primary storage. The transition prompted the client to look at the information stored on the system to be replaced, discarding what was no longer useful and moving inactive data to another system.
This is similar to what many of us go through in our personal lives. Certain events cause us to examine what objects we have accumulated and make a conscious decision to discard some. Moving to a new home is the most obvious example. While packing all your belongings, junk, hoarded items, etc., you decide about what you really do not need and how to get rid of it. The first thoughts may be a garage sale or some friend that you know could really use that stuff. Other things go right into the dumpster. The second phase of reduction comes after you get your boxes to your new place. After a certain period of time, belongings that are still packed up can probably be safely discarded.
There is a parallel with our IT lives. Bringing in all-flash storage for primary adds a faster system that can provide greater economic value for the company, and it should be more carefully managed than the previous system. However, there are other “precipitating events” in managing information that should cause us to clean house and address our “data hoarding.”
For instance, the purchase of a new primary storage system can also lead to moving data for load balancing. Deploying a new content repository can spark an initiative to store data based on value or activity, establish retention rules and accommodate growth. And organizational change – acquisitions or consolidations – can bring new company dynamics and changes in the services delivery model, such as a transition to IT as a Service.
These events happen with more regularity than most would think. To manage information strategically, you should add the task of organizing information to these events. As with discarding junk from your house, it’s hard to do these tasks as regularly planned activities because they get indefinitely postponed or dropped for lack of time or resources.
So these events in IT do mirror our personal lives. We need to recognize this, plan for it, and take advantage of these events to make improvements. It may not be the optimal way to clean out unneeded data, but it is a method that is naturally practiced.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Amazon Web Services (AWS) rolled out a new type of storage for infrequently accessed data within the S3 tier that costs 1.25 cents per GB per month to store, plus 1 cent per GB to retrieve.
The cloud has become a repository for unstructured data storage that is rarely accessed. Amazon already has its Glacier service for this type of storage. However, now it has introduced a new pricing tier for its high-throughput Amazon S3 standard.
“The new S3 Standard – Infrequent Access (Standard – IA) storage class offers the same high durability, low latency, and high throughput of S3 Standard. You now have the choice of three S3 storage classes (Standard, Standard – IA, and Glacier) that are designed to offer 99.999999999 percent … of durability. Standard – IA has an availability SLA of 99 percent,” according to the Amazon blog post.
Earlier this month, Amazon also reduced the price of data stored in Amazon Glacier from $0.01 per GB per month to $0.007 per GB per month.
“This price is for the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions; take a look at the Glacier Pricing page for full information on pricing in other regions,” Amazon stated in its blog.
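Taken together, the prices quoted above make the tradeoff easy to sketch. The snippet below compares a month of Standard-IA against Glacier for a hypothetical 10 TB archive retrieved in full once that month; the archive size and the single retrieval are assumptions, and Glacier's own retrieval fees are ignored for simplicity:

```python
# Rough monthly cost comparison using the prices quoted above (2015 US East).
# The 10 TB archive size and single full retrieval are hypothetical assumptions.
archive_gb = 10 * 1024          # 10 TB expressed in GB

ia_storage_per_gb = 0.0125      # S3 Standard-IA storage, $/GB-month
ia_retrieval_per_gb = 0.01      # S3 Standard-IA retrieval fee, $/GB
glacier_storage_per_gb = 0.007  # Glacier storage after the price cut, $/GB-month

ia_cost = archive_gb * (ia_storage_per_gb + ia_retrieval_per_gb)
glacier_cost = archive_gb * glacier_storage_per_gb  # ignores Glacier retrieval fees

print(f"Standard-IA: ${ia_cost:,.2f}")   # storage plus one full retrieval
print(f"Glacier:     ${glacier_cost:,.2f}")
```

Glacier stays cheaper for data that is almost never read back, while Standard-IA avoids Glacier's multi-hour restore delay.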
The new tier still allows customers to define data lifecycle policies to move data between Amazon S3 classes, such as storing new data in the Standard class, moving it to Standard-IA a set time after upload, and then moving it to Amazon Glacier once the data is 60 days old.
“The new Standard-IA class is simply one of several attributes associated with each S3 object,” according to the AWS blog. “Because the objects stay in the same S3 bucket and are accessed from the same URLs when they transition to the Standard-IA, you can start using Standard-IA immediately through lifecycle policies without changing your application code. This means that you can add a policy and reduce S3 costs immediately, without having to make any changes to your application or affecting its performance.”
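As a sketch of how such a lifecycle policy looks in practice, the tiering described above can be expressed as an S3 lifecycle rule. The bucket name, prefix and the 30-day Standard-IA transition here are hypothetical examples (the article only specifies the 60-day Glacier step); the dict matches the shape boto3's `put_bucket_lifecycle_configuration` expects:

```python
# Sketch of an S3 lifecycle rule implementing the tiering described above:
# new objects land in Standard, transition to Standard-IA (30 days is an
# assumed example), then to Glacier at 60 days. Bucket/prefix are hypothetical.
lifecycle_rules = {
    "Rules": [
        {
            "ID": "tier-cold-data",
            "Status": "Enabled",
            "Prefix": "backups/",   # apply only to this (hypothetical) prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 60, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# With boto3 this would be applied as follows (requires AWS credentials):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=lifecycle_rules)
print(lifecycle_rules["Rules"][0]["Transitions"])
```

Because the objects keep their bucket and URLs, as the AWS post notes, the rule takes effect without any application changes.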
IDC Tuesday corrected the purpose-built backup appliance (PBBA) market tracker numbers it issued last week, giving market leader EMC more than $55 million in additional revenue for the second quarter.
The initial report showed steep declines for the market overall and EMC specifically. EMC apparently made a persuasive case that IDC under-reported its true backup appliance revenue, which consists mostly of Data Domain disk libraries. The new numbers show a less bleak picture for appliance sales, although they still declined slightly in the quarter.
The revised numbers give EMC $469.9 million compared to $414 million in the original report. The new total represents a 5.8 percent year-over-year drop for EMC and a 60.1 percent market share. The original numbers represented a 16.9 percent year-over-year drop and 57.1 percent share for EMC.
The revised numbers put total worldwide revenue at $781.1 million for last quarter, a one percent drop from last year instead of the eight percent decline from last week’s report. IDC includes revenue from appliances that require separate backup software along with integrated appliances that bundle software with storage.
Even a modest fall indicates a reversal of recent trends. The PBBA market grew 6.9 percent year-over-year in the first quarter of 2015 and increased 4 percent for the full year in 2014 over 2013.
No. 2 Symantec’s revenue fell 3.7 percent to $104.5 million last quarter, according to IDC. Barracuda Networks made the biggest revenue jump, growing 67.6 percent to $26.8 million while remaining in fifth place with 3.4 percent share. That followed a 64.9 percent year-over-year jump in the first quarter for Barracuda after an aggressive rollout of backup appliances that support replication between appliances or to the Barracuda Cloud.
No. 3 IBM grew 0.8 percent to $54 million and No. 4 Hewlett-Packard increased 8.8 percent to $36.7 million. All other vendors combined to grow 13.4 percent to $89.6 million and 11.5 percent market share.
In the press release detailing the revenue report, IDC attributed the revenue drop to “market evolution.”
“Focus continues to shift away from hardware-centric, on-premise PBBA systems to hybrid/gateway systems,” said Liz Conner, IDC research manager for storage systems, in the press release. “The results are greater emphasis on backup and deduplication software, the ability to tier or push data to the cloud, and the increasing commoditization of hardware, all of which require market participants to adjust product portfolios accordingly.”
SanDisk is putting its investments in private storage companies to good use. Two of the companies it has invested in – Nexenta and Tegile Systems – have signed on as OEM partners for SanDisk’s InfiniFlash all-flash storage platform.
Nexenta is a software vendor that is porting its ZFS-based NexentaStor application onto the InfiniFlash platform, which consists of proprietary NAND cards.
Tegile is expanding its all-flash platform with its IntelliFlash HD product, combining its software and controller with the SanDisk InfiniFlash array. Tegile launched its home-built all-flash arrays in June 2014, and also sells hybrid flash systems combining hard disk drives and solid-state drives.
Tegile VP of marketing Rob Commins said because the IntelliFlash system scales far higher than Tegile’s other all-flash arrays, there won’t be much overlap among customers. Tegile’s other all-flash arrays range from 12 TB to 48 TB of capacity, while the IntelliFlash system starts at 127 TB and scales to more than 10 PB of usable capacity in a 42U rack.
Commins said the average price of Tegile’s all-flash platform is around $100,000 while the IntelliFlash system will average around $250,000 to $300,000.
“We said that’s a nice logical extension of capacity optimized media,” Commins said of the IntelliFlash platform. “We can pull out our disk drives and use IntelliFlash HD as cheap and deep capacity.
“Our premise is there will always be performance optimized media and capacity optimized media. We’ll eventually go to PCIe and NVDIMM to keep going cheaper and deeper on the capacity layer.”
Tegile’s software stack will enable its IntelliFlash system to support block and file storage. Tegile supports Fibre Channel, iSCSI, NFS and SMB protocols.
Tegile expects IntelliFlash to cost around $1.50 per GB of raw capacity, and as little as 50 cents per usable GB after dedupe and compression when it is released in early 2016.
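Those two price points imply roughly a 3:1 data-reduction ratio from dedupe and compression. A quick sketch of the arithmetic (the ratio itself is inferred from the quoted figures, not stated by Tegile):

```python
# Effective cost per usable GB after data reduction. The ~3:1 ratio is
# inferred from Tegile's quoted $1.50 raw vs. ~$0.50 usable figures.
def effective_cost(raw_per_gb: float, reduction_ratio: float) -> float:
    """Cost per usable GB once dedupe/compression multiplies capacity."""
    return raw_per_gb / reduction_ratio

print(effective_cost(1.50, 3.0))  # -> 0.5, i.e. 50 cents per usable GB
```

Real-world reduction ratios vary by workload, so the usable-GB price would move with them.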
Commins said the IntelliFlash system should be a good fit for big data analytics and oil/gas exploration companies. “It’s a real nice screamer, but at super high capacity,” he said.
Hard disk drives (HDDs) have grown to 8 TB and 10 TB capacities, and flash storage may be all the rage, but tape keeps rolling along.
Hewlett-Packard (HP), IBM and Quantum – the Linear Tape-Open (LTO) Program Technology Provider Companies (TPCs) – announced this week that the seventh generation specifications of the LTO Ultrium format are available for licensing by storage mechanism and media manufacturers.
The new LTO-7 specification lists the maximum compressed capacity at 15 TB per tape cartridge, more than double the 6.25 TB compressed capacity of the prior LTO-6 generation. The specification assumes a compression ratio of 2.5 to 1.
The compressed data transfer rate soars from 400 megabytes per second (MBps) with LTO-6 to 750 MBps with the new LTO-7 technology. That means users potentially could transfer more than 2.7 TB per hour per drive with LTO-7, up from 1.4 TB per hour per drive with LTO-6.
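Both headline figures fall straight out of the native numbers and the spec's assumed 2.5:1 compression ratio. A quick arithmetic check:

```python
# Sanity-check the LTO-7 figures quoted above.
native_capacity_tb = 6.0    # native cartridge capacity in TB
compression_ratio = 2.5     # ratio assumed by the LTO-7 specification
compressed_capacity = native_capacity_tb * compression_ratio
print(compressed_capacity)  # 15.0 TB per cartridge, as the spec lists

compressed_rate_mbps = 750  # compressed transfer rate, MB per second
# MB/s -> decimal TB per hour: 3600 seconds per hour, 1,000,000 MB per TB
tb_per_hour = compressed_rate_mbps * 3600 / 1_000_000
print(tb_per_hour)          # 2.7 TB per hour per drive
```

The same arithmetic gives 1.44 TB per hour for LTO-6's 400 MBps, matching the "1.4 TB per hour" figure above.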
Paving the way for the higher capacity and data transfer rates were technology enhancements such as stronger magnetic properties and a doubling of the read/write heads in advanced servo format to allow the drive to write more data to the same amount of tape within the cartridge.
The new LTO-7 generation carries forward features of prior generations, including partitioning to enhance file control and space management with the Linear Tape File System (LTFS), hardware-based encryption, and write-once, read-many (WORM) functionality.
An LTO-7 Ultrium drive can read data from LTO-7, LTO-6 and LTO-5 cartridges and write data to an LTO-7 or LTO-6 cartridge.
Vendors that have already announced product support for LTO-7 include Quantum and Spectra Logic. Quantum expects LTO-7 technology to be available in its Scalar i6000 and Scalar i500 libraries in December, with other platforms to follow, and the company currently offers an LTO-7 pre-purchase program for interested customers.
The LTO-7 specification’s 15 TB compressed capacity and 750 MBps data transfer rate are slightly lower than the figures the LTO Program projected last year with the release of its extended roadmap. The September 2014 roadmap indicated the LTO-7 generation would provide a compressed capacity of 16 TB per tape cartridge and a compressed data transfer rate of 788 MBps.
The newly updated LTO Ultrium roadmap lists the following maximum compressed capacities and data transfer rates for future generations:
LTO-8: Up to 32 TB and 1,180 MBps
LTO-9: Up to 62.5 TB and 1,770 MBps
LTO-10: Up to 120 TB and 2,750 MBps
The LTO Program notes that the roadmap “is subject to change without notice and represents goals and objectives only.”
The LTO Program plans to provide further insight into the LTO roadmap and technology at the Storage Decisions conference on November 3-4 in New York, at the SC15 supercomputing conference running November 15-20 in Austin, Texas, and at the Government Video Expo on December 1-3 in Washington, D.C.
Market research firm Dell’Oro Group’s mid-year snapshot showed that total storage systems revenue is on track to grow 1% in 2015, driven largely by sales to hyperscale service providers of direct-attached storage (DAS) devices for servers.
The Redwood City, California-based company said total storage systems revenue approached $10 billion in the second quarter – a 1% increase compared to the same time frame in 2014. Revenue for internal storage rose 3%, while sales in the larger external storage segment stayed flat in the quarter, as high-end systems continued to experience a year-to-year decline, according to the recently released Dell’Oro report.
EMC maintained the top spot for overall storage revenue through the first half of the year, and Hewlett-Packard (HP) was No. 2. IBM dropped from third place at the end of 2014 to fifth place in the aftermath of the sale of its x86 server line. Dell and NetApp were third and fourth respectively.
Rapidly growing Huawei snuck ahead of Hitachi into fifth place in total storage systems revenue for the second quarter, but Dell’Oro said Huawei often has a strong second quarter after a seasonally weak first quarter.
Dell’Oro’s numbers varied a bit from those released by IDC earlier this month. IDC put total disk storage sales at $8.8 billion for the second quarter for a 2.1 percent increase over the second quarter of 2014. IDC said external storage sales declined 3.9 percent. In vendor market share, IDC had IBM in fourth place ahead of NetApp. IDC agreed with Dell’Oro that hyperscale storage is growing rapidly, putting it at a 26 percent increase over the second quarter of 2014.
Flash continued to factor into a higher percentage of total capacity for both internal and external storage systems. Dell’Oro estimated that flash drives represented 8% to 10% of the total capacity of hybrid arrays, and nearly 75% of midrange and high-end external storage systems included some flash. Dell’Oro expects the percentage to approach 100 within a few years.
Shipments of Fibre Channel (FC) and Ethernet ports for networked external storage systems remained even at about 50% each, and Dell’Oro expects the breakdown to stay the same for at least the next year.
For FC, the big trend was 16 Gbps taking share from 8 Gbps, as 69% of the switch ports and more than 20% of the adapter ports shipped at the higher data transfer rate in the second quarter. But Dell’Oro said total SAN revenue, including FC switches and adapters, dropped 5% from the first to second quarters to $550 million (the lowest level since Q2 of 2009), and the 1.9 million in port shipments represented a 7% decrease.
Dell’Oro attributed the SAN revenue decline to the resurgence of DAS as well as new storage alternatives, such as scale-out architectures, software-defined storage, hyperconverged infrastructure and cloud storage. Ethernet-based storage has also grown, although it still trails block-based storage in revenue, Dell’Oro said.
With Ethernet storage networking, 40 Gbps made inroads on 10 Gbps, but Dell’Oro expects the 40 Gbps Ethernet pattern to be short-lived as options such as 25 Gbps, 50 Gbps and 100 Gbps emerge in future years.