Storage Soup

October 8, 2015  10:08 AM

Dell is EMC’s latest potential dance partner

Dave Raffo
Dell, EMC

The EMC rumor machine is in full force again today. This time it’s Dell that is reportedly interested in buying all or some of EMC.

Of course, if the sources for these rumors were 100% reliable, then EMC would already be part of Hewlett-Packard or VMware or Cisco, or it would own all of VMware instead of 80%. Even the Dell rumors are all over the place, with some saying Dell wants all of EMC and others putting specific pieces of EMC in Dell’s crosshairs.

What all these rumors tell you is that EMC is exploring many options in the wake of pressure from activist investors led by Elliott Management. Elliott wants EMC to break up the federation of companies that include EMC Information Infrastructure (the storage group), VMware, RSA, Pivotal and smaller pieces. EMC executives have argued that the federation model works best, and they clearly want to keep VMware most of all.

Elliott’s agreement to let EMC raise its stock price on its own expired in September without the desired result (although the Dell rumors have raised EMC’s share price). Now the activist investors are looking for EMC to make a significant move.

It’s unlikely that Dell can buy all of EMC when a few years ago it couldn’t afford to acquire 3PAR after HP got into the bidding. EMC’s valuation of $50 billion is twice as much as Dell’s, and Dell still has $11.7 billion in debt from when it went private in 2013.

There are pieces of EMC that Dell could use, though. Any server vendor would want VMware. Much of the EMC Federation strategy revolves around VMware’s virtualization and cloud technologies, however, and a sale of VMware would be a major loss.

More interesting on the storage front, one report said Dell might buy EMC’s VNX storage systems business. VNX would fill the gap in Dell’s storage portfolio that it originally wanted 3PAR to plug. Dell acquired Compellent after losing out on 3PAR, but Compellent’s arrays don’t reach as high into the enterprise as VNX systems.

The success of EMC’s XtremIO all-flash array might prompt it to part with VNX, which is part of EMC’s legacy storage portfolio that has experienced little or no growth in the past year. VNX arrays include old Clariion technology, and Dell used to sell Clariion under a partnership with EMC that ended in 2011. But why would Dell want a VNX business that has sluggish growth? There is also the chance that Dell would buy out all of EMC’s storage, although even that could be too pricey.

I expect we’ll see news on an EMC merger or spinout by Oct. 21, when EMC reports its quarterly earnings. I don’t think EMC or its partner in any potential deal even knows yet what that news will be. But it doesn’t sound like the storage giant has ruled out much so far.

October 7, 2015  9:35 AM

Veritas moves further toward freedom, cloud backup

Dave Raffo
NetBackup, Veritas

Veritas completed operational separation from Symantec last Friday, and this week made a minor upgrade to its flagship NetBackup software with a focus on Amazon Web Services (AWS) and NetApp environments.

Operational separation means the backup company and its parent security vendor are operating as separate organizations, ahead of the $8 billion sale of Veritas to the Carlyle Group. That sale is expected to close in January.

NetBackup 7.7.1 includes a connector to Amazon Simple Storage Service Standard-Infrequent Access (S3 Standard-IA) that Amazon launched last month. The new Amazon service is seen as a tier that will fit between production and archive data.

Veritas also extended NetBackup support to the AWS Commercial Cloud Services (C2S) region, a secure cloud service that is part of the AWS GovCloud.

NetBackup 7.7.1 also supports cluster-aware Network Data Management Protocol (NDMP) backups for NetApp clustered Data ONTAP, and orchestration of snapshot and replication operations using NetApp’s SnapVault and SnapMirror. The backup software now includes an Accelerator for NDMP that supports NetApp filers. NetBackup Accelerator is designed to achieve full backups in the time it takes to do incremental backups.

The release follows NetBackup 7.7, which launched in July with an emphasis on cloud support.

Simon Jelley, VP of backup management for Veritas, said the point-upgrade is part of the vendor’s new strategy of making quarterly releases to support new applications on the market. He also said the connectors – added in version 7.7 for Amazon S3, Google, Verizon, Cloudian and Hitachi Data Systems clouds — have been popular with customers looking to replace disk-to-disk-to-tape backups by using the cloud instead.

“They’re using the cloud as a long-term archive tier as an alternative to tape,” Jelley said. “It’s more efficient for recovery because they don’t have to recycle tapes. And cloud archiving is becoming more affordable with the [Amazon] Infrequent Access tier, Amazon Glacier and Google Nearline.”

With its focus on the cloud, you would expect Veritas to add cloud-to-cloud backup for applications such as Microsoft Office 365 and Google Apps. EMC Spanning, Datto Backupify and Asigra’s Cloud Backup are among those doing cloud-to-cloud backup, but Veritas has not gone there yet.

“We have not seen large enterprises move there, but it’s something we’re looking at,” Jelley said of cloud-to-cloud backup.

September 30, 2015  10:13 AM

FalconStor acquires cloud-based analytics for FreeStor

Dave Raffo

FalconStor is preparing to add predictive analytics monitoring to its FreeStor storage virtualization software, which the vendor compares to Nimble Storage’s InfoSight and Pure Storage’s Pure1 cloud-based analytics.

FalconStor signed a licensing and co-development agreement with Cumulus Logic that gives FalconStor exclusive use of Cumulus Logic analytics code.

Cumulus Logic is still in stealth, but has been developing an analytics engine that allows centralized reporting across heterogeneous storage systems. It will collect data from storage and applications, present historic and real-time reports and help maintain management policies for storage and servers. It will present data in web-based dashboards that can also be accessed through mobile devices.

The analytics will be built into FreeStor as part of the base product for no extra charge, FalconStor CEO Gary Quinn said.

“I think customers will buy FreeStor just for the analytics,” Quinn said. “They will want the pure ability to learn more about the environment and make decisions.”

FalconStor expects the analytics to be available around March or April of 2016.

FreeStor’s management server already provides configuration information and does monitoring and reporting. The Cumulus intelligence will add predictive analytics and allow it all to run in an Amazon cloud instance. Quinn said the new information will help customers plan for capacity, meet SLAs and predict the health of their storage systems. He expects service provider customers to use it in the cloud while large enterprises may prefer to keep the repository on-premise.

“Analytics have been around a long time, but a favorite reason to buy Nimble and Pure is the nice information you receive about your array,” Quinn said. “FreeStor is dedicated to a heterogeneous environment, and we think that [analytics] capability needs to go horizontal across the industry.”

The Cumulus code will pull information from FreeStor Storage Servers, which use REST APIs to gather data from storage arrays and servers. “We collect a tremendous amount of information,” Quinn said. “We are now applying a smart rules engine to analyze all that data we’re receiving. We will present that in a simple Web browser or mobile application. You can take actions based on whether you’re achieving your SLAs, running out of capacity or having performance issues.”

Unlike Nimble and Pure, FalconStor will not collect customer information in its own cloud. “Nimble and Pure customers can log in and gain insight into how other customers are doing,” Quinn said. “We could do that, but that’s not our first iteration.”

Quinn said the analytics will make FreeStor more valuable to OEM partners. FalconStor has announced OEM deals with X-IO and Kaminario, and can bring its analytics to other vendors’ arrays. Quinn said the analytics are especially helpful with companies who want to make sure their flash storage is optimized for the best performance. FreeStor was developed in collaboration with flash array vendor Violin Memory.

“FreeStor Storage Server sits in the data path,” he said. “We see IOPS, latency and bottlenecks in the data path, and can even capture log information if you’re encountering hardware difficulties.”

September 29, 2015  12:34 PM

Flash storage and cleaning house

Randy Kerns
Data migration

A recent discussion with a client got me thinking about precipitating events that cause IT professionals to “put their house in order” regarding the information they store. In this case, there was a new all-flash storage system acquired for primary storage. The transition prompted the client to look at the information stored on the system to be replaced, discarding what was no longer useful and moving inactive data to another system.

This is similar to what many of us go through in our personal lives. Certain events cause us to examine what objects we have accumulated and make a conscious decision to discard some. Moving to a new home is the most obvious example. While packing all your belongings, junk, hoarded items, etc., you decide what you really do not need and how to get rid of it. The first thoughts may be a garage sale or a friend who you know could really use that stuff. Other things go right into the dumpster. The second phase of reduction comes after you get your boxes to your new place. After a certain period of time, belongings that are still packed up can probably be safely discarded.

There is a parallel with our IT lives. Bringing in an all-flash array for primary storage adds a faster system that can provide greater economic value for the company, and it should be more carefully managed than the previous system. However, there are other “precipitating events” in managing information that should cause us to clean house, or address our “data hoarding.”

For instance, the purchase of a new primary storage system can also lead to a movement of data for load balancing. Deploying a new content repository can spark an initiative to store data based on value or activity, establish retention rules and accommodate growth. And organizational change can lead to new company dynamics – acquisitions or consolidations – and changes in the service delivery model, such as a transition to IT as a Service.

These events happen with more regularity than most would think. To manage information strategically, you should add the task of organizing information to these events. As with discarding junk from your house, it is hard to do these tasks as regularly planned activities because they get indefinitely postponed or dropped due to lack of time or resources.

So these events in IT do mirror our personal lives. We need to recognize this, plan for it, and take advantage of these events to make improvements. It may not be the optimal way to clean out unneeded data, but it is a method that is naturally practiced.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

September 25, 2015  3:06 PM

AWS introduces a new infrequently accessed S3 storage tier

Sonia Lelii
Amazon, Cloud storage

Amazon Web Services (AWS) rolled out a new type of storage within S3 for infrequently accessed data that costs 1.25 cents per GB per month to store, plus 1 cent per GB to retrieve.

The cloud has become a repository for unstructured data that is rarely accessed. Amazon already has its Glacier service for this type of storage. However, it has now introduced a new pricing tier within its high-throughput Amazon S3 Standard class.

“The new S3 Standard – Infrequent Access (Standard – IA) storage class offers the same high durability, low latency, and high throughput of S3 Standard. You now have the choice of three S3 storage classes (Standard, Standard – IA, and Glacier) that are designed to offer 99.999999999 percent … of durability.‎  Standard – IA has an availability SLA of 99 percent,” according to the Amazon blog post.

Earlier this month, Amazon also reduced the price for data stored in Amazon Glacier from $0.01 per GB per month to $0.007 per GB per month.

“This price is for the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions; take a look at the Glacier Pricing page for full information on pricing in other regions,” Amazon stated in its blog.

The new tier still lets customers define data lifecycle policies that move data between Amazon S3 storage classes – for example, storing new data in the S3 Standard class, moving it to Standard-IA a set number of days after it is uploaded, and then moving it to the Amazon Glacier service once the data is 60 days old.

“The new Standard-IA class is simply one of several attributes associated with each S3 object,” according to the AWS blog. “Because the objects stay in the same S3 bucket and are accessed from the same URLs when they transition to the Standard-IA, you can start using Standard-IA immediately through lifecycle policies without changing your application code. This means that you can add a policy and reduce S3 costs immediately, without having to make any changes to your application or affecting its performance.”
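For readers who want to see what that tiering flow looks like in practice, here is a minimal sketch of such a lifecycle policy using the boto3 SDK. The bucket name and the 30/60-day thresholds are illustrative assumptions (AWS requires objects to be at least 30 days old before a lifecycle transition to Standard-IA), not details from the announcement.

```python
# Minimal sketch of the tiering flow described above (illustrative only).
# Objects land in S3 Standard, move to Standard-IA after 30 days,
# then to Glacier after 60 days.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-aging-objects",
                "Filter": {"Prefix": ""},   # apply the rule to every object
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 60, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```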

September 23, 2015  9:42 AM

IDC backs up on appliance revenue drop

Dave Raffo
Barracuda, EMC, IDC, Symantec

IDC on Tuesday corrected the purpose-built backup appliance (PBBA) market tracker numbers it issued last week, giving market leader EMC more than $55 million in additional revenue for the second quarter.

The initial report showed steep declines for the market overall and EMC specifically. EMC apparently made a persuasive case that IDC under-reported its true backup appliance revenue, which consists mostly of Data Domain disk libraries. The new numbers show a less bleak picture for appliance sales, although they still declined slightly in the quarter.

The revised numbers give EMC $469.9 million compared to $414 million in the original report. The new total represents a 5.8 percent year-over-year drop for EMC and a 60.1 percent market share. The original numbers represented a 16.9 percent year-over-year drop and 57.1 percent share for EMC.

The revised numbers put total worldwide revenue at $781.1 million for last quarter, a one percent drop from last year instead of the eight percent decline from last week’s report. IDC includes revenue from appliances that require separate backup software along with integrated appliances that bundle software with storage.

Even a modest fall indicates a reversal of recent trends. The PBBA market grew 6.9 percent year-over-year in the first quarter of 2015 and increased 4 percent for the full year in 2014 over 2013.

No. 2 Symantec’s revenue fell 3.7 percent to $104.5 million last quarter, according to IDC. Barracuda Networks made the biggest revenue jump, growing 67.6 percent to $26.8 million and remaining in fifth place with 3.4 percent share. That followed a 64.9 percent year-over-year jump in the first quarter for Barracuda, which has aggressively rolled out backup appliances that support replication between appliances or to the Barracuda Cloud.

No. 3 IBM grew 0.8 percent to $54 million and No. 4 Hewlett-Packard increased 8.8 percent to $36.7 million. All other vendors combined to grow 13.4 percent to $89.6 million and 11.5 percent market share.

In the press release detailing the revenue report, IDC attributed the revenue drop to “market evolution.”

“Focus continues to shift away from hardware-centric, on-premise PBBA systems to hybrid/gateway systems,” said Liz Conner, IDC research manager for storage systems, in the press release. “The results are greater emphasis on backup and deduplication software, the ability to tier or push data to the cloud, and the increasing commoditization of hardware, all of which require market participants to adjust product portfolios accordingly.”

September 22, 2015  3:31 PM

Tegile expands all-flash portfolio through SanDisk partnership

Dave Raffo
Nexenta, SanDisk

SanDisk is putting its investments in private storage companies to good use. Two of the companies it has invested in – Nexenta and Tegile Systems – have signed on as OEM partners for SanDisk’s InfiniFlash all-flash storage platform.

Nexenta is a software vendor that is porting its ZFS-based NexentaStor application onto the InfiniFlash platform, which consists of proprietary NAND cards.

Tegile is expanding its all-flash platform with its IntelliFlash HD product, combining its software and controller with the SanDisk InfiniFlash array. Tegile launched its home-built all-flash arrays in June 2014, and also sells hybrid flash systems combining hard disk drives and solid-state drives.

Tegile VP of marketing Rob Commins said because the IntelliFlash system scales far higher than Tegile’s other all-flash arrays, there won’t be much overlap among customers. Tegile’s all-flash minimum capacities range from 12 TB to 48 TB in an array, while the IntelliFlash system starts at 127 TB and scales to more than 10 PB of usable capacity in a 42U rack.

Commins said the average price of Tegile’s all-flash platform is around $100,000 while the IntelliFlash system will average around $250,000 to $300,000.

“We said that’s a nice logical extension of capacity optimized media,” Commins said of the IntelliFlash platform. “We can pull out our disk drives and use IntelliFlash HD as cheap and deep capacity.

“Our premise is there will always be performance optimized media and capacity optimized media. We’ll eventually go to PCIe and NVDIMM to keep going cheaper and deeper on the capacity layer.”

Tegile’s software stack will enable its IntelliFlash system to support block and file storage. Tegile supports Fibre Channel, iSCSI, NFS and SMB protocols.

Tegile expects IntelliFlash to cost around $1.50 per GB of raw capacity, and as little as 50 cents per usable GB after dedupe and compression when it is released in early 2016.
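Taken at face value, those two prices imply an effective data-reduction ratio of roughly 3:1 from dedupe and compression. A quick back-of-the-envelope sketch (the ratio is inferred from the quoted prices, not a figure Tegile has published):

```python
# Rough cost model implied by the quoted prices (illustrative only;
# the data-reduction ratio is inferred, not a vendor-stated figure).
raw_price_per_gb = 1.50      # dollars per raw GB
usable_price_per_gb = 0.50   # dollars per usable GB after dedupe and compression

implied_reduction = raw_price_per_gb / usable_price_per_gb
print(f"Implied data reduction: {implied_reduction:.1f}:1")  # ~3.0:1
```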

Commins said the IntelliFlash system should be a good fit for big data analytics and oil/gas exploration companies. “It’s a real nice screamer, but at super high capacity,” he said.

September 19, 2015  3:42 PM

New LTO-7 tape specification is now available for licensing

Carol Sliwa

Hard disk drives (HDDs) are up to 8 TB and 10 TB, and flash storage may be all the rage, but tape keeps rolling along.

Hewlett-Packard (HP), IBM and Quantum – the Linear Tape-Open (LTO) Program Technology Provider Companies (TPCs) – announced this week that the seventh generation specifications of the LTO Ultrium format are available for licensing by storage mechanism and media manufacturers.

The new LTO-7 specification lists the maximum compressed capacity at 15 TB per tape cartridge, more than double the 6.25 TB compressed capacity of the prior LTO-6 generation. The specification assumes a compression ratio of 2.5 to 1.

The compressed data transfer rate soars from 400 megabytes per second (MBps) with LTO-6 to 750 MBps with the new LTO-7 technology. That means users potentially could transfer more than 2.7 TB per hour per drive with LTO-7, up from 1.4 TB per hour per drive with LTO-6.
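For those who want to check the math, the per-hour figures follow directly from the rated transfer rates. This is a best-case sketch that assumes the drive streams continuously at its rated compressed speed, not a benchmark:

```python
# Convert the rated compressed transfer rates to per-hour throughput
# (best case: assumes the drive streams continuously at its rated speed).
def tb_per_hour(mb_per_second: float) -> float:
    return mb_per_second * 3600 / 1_000_000  # MB/s -> decimal TB per hour

print(f"LTO-7: {tb_per_hour(750):.2f} TB/hour")  # ~2.70 TB/hour
print(f"LTO-6: {tb_per_hour(400):.2f} TB/hour")  # ~1.44 TB/hour
```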

Paving the way for the higher capacity and data transfer rates were technology enhancements such as stronger magnetic properties and a doubling of the read/write heads in advanced servo format to allow the drive to write more data to the same amount of tape within the cartridge.

The new LTO-7 generation carries forward features of prior generations, including partitioning to enhance file control and space management with the Linear Tape File System (LTFS), hardware-based encryption, and write-once, read-many (WORM) functionality.

An LTO-7 Ultrium drive can read data from LTO-7, LTO-6 and LTO-5 cartridges and write data to an LTO-7 or LTO-6 cartridge.

Vendors that have already announced product support for LTO-7 include Quantum and Spectra Logic. Quantum expects LTO-7 technology to be available in its Scalar i6000 and Scalar i500 libraries in December, with other platforms to follow, and the company currently offers an LTO-7 pre-purchase program for interested customers.

The LTO-7 specification’s 15 TB compressed capacity and 750 MBps data transfer rate are slightly lower than the figures the LTO Program projected last year with the release of its extended roadmap. The September 2014 roadmap indicated the LTO-7 generation would provide a compressed capacity of 16 TB per tape cartridge and a compressed data transfer rate of 788 MBps.

The newly updated LTO Ultrium roadmap lists the following maximum compressed capacities and data transfer rates for future generations:

LTO-8: Up to 32 TB and 1,180 MBps

LTO-9: Up to 62.5 TB and 1,770 MBps

LTO-10: Up to 120 TB and 2,750 MBps

The LTO Program notes that the roadmap “is subject to change without notice and represents goals and objectives only.”

The LTO Program plans to provide further insight into the LTO roadmap and technology at the Storage Decisions conference on November 3-4 in New York, at the SC15 supercomputing conference running November 15-20 in Austin, Texas, and at the Government Video Expo on December 1-3 in Washington, D.C.

September 17, 2015  7:26 AM

Dell’Oro: Hyperscale DAS use drives storage revenue growth

Carol Sliwa

Market research firm Dell’Oro Group’s mid-year snapshot showed that total storage systems revenue is on track to grow 1% in 2015, driven largely by sales to hyperscale service providers of direct-attached storage (DAS) devices for servers.

The Redwood City, California-based company said total storage systems revenue approached $10 billion in the second quarter – a 1% increase compared to the same time frame in 2014. Revenue for internal storage rose 3%, while sales in the larger external storage segment stayed flat in the quarter, as high-end systems continued to experience a year-to-year decline, according to the recently released Dell’Oro report.

EMC maintained the top spot for overall storage revenue through the first half of the year, and Hewlett-Packard (HP) was No. 2. IBM dropped from third place at the end of 2014 to fifth place in the aftermath of the sale of its x86 server line. Dell and NetApp were third and fourth respectively.

Rapidly growing Huawei snuck ahead of Hitachi into fifth place in total storage systems revenue for the second quarter, but Dell’Oro said Huawei often has a strong second quarter after a seasonally weak first quarter.

Dell’Oro’s numbers varied a bit from those released by IDC earlier this month. IDC put total disk storage sales at $8.8 billion for the second quarter for a 2.1 percent increase over the second quarter of 2014. IDC said external storage sales declined 3.9 percent. In vendor market share, IDC had IBM in fourth place ahead of NetApp. IDC agreed with Dell’Oro that hyperscale storage is growing rapidly, putting it at a 26 percent increase over the second quarter of 2014.

Flash continued to factor into a higher percentage of total capacity for both internal and external storage systems. Dell’Oro estimated that flash drives represented 8% to 10% of the total capacity of hybrid arrays, and nearly 75% of midrange and high-end external storage systems included some flash. Dell’Oro expects that percentage to approach 100% within a few years.

Shipments of Fibre Channel (FC) and Ethernet ports for networked external storage systems remained even at about 50% each, and Dell’Oro expects the breakdown to stay the same for at least the next year.

For FC, the big trend was 16 Gbps taking share from 8 Gbps, as 69% of the switch ports and more than 20% of the adapter ports shipped at the higher data transfer rate in the second quarter. But Dell’Oro said total SAN revenue, including FC switches and adapters, dropped 5% from the first to second quarters to $550 million (the lowest level since Q2 of 2009), and the 1.9 million in port shipments represented a 7% decrease.

Dell’Oro attributed the SAN revenue decline to the resurgence of DAS as well as new storage alternatives, such as scale-out architectures, software-defined storage, hyperconverged infrastructure and cloud storage. Ethernet-based storage has also grown, although it still trails block-based storage in revenue, Dell’Oro said.

With Ethernet storage networking, 40 Gbps made inroads on 10 Gbps, but Dell’Oro expects the 40 Gbps Ethernet pattern to be short-lived as options such as 25 Gbps, 50 Gbps and 100 Gbps emerge in future years.

September 14, 2015  3:04 PM

Survey finds companies’ disaster recovery testing is inadequate

Sonia Lelii

Despite all the talk about disaster recovery testing, most organizations still don’t do it enough. And recovery point objectives (RPOs) are still way too high to facilitate adequate DR, according to a survey conducted by cloud vendor CloudVelox.

CloudVelox, which offers automated disaster recovery in the cloud, interviewed 343 IT executives responsible for DR in their organizations from nine vertical markets. The surveyed organizations ranged from less than 100 employees to more than 1,000.

The survey found 58 percent of the respondents ran DR tests once a year or less. Another 33 percent tested their DR infrequently or never, while 26 percent tested it quarterly and 16 percent did it monthly.

These results should not be surprising because other recent surveys have had similar results, including one conducted by our parent company TechTarget.

So why aren’t people testing more often? Fifty-six percent of the CloudVelox respondents said their DR testing was infrequent because they didn’t have adequate internal resources. Another 34 percent found the process complex, while 19 percent did not find it to be a priority and 12 percent said it costs too much.

Respondents also said their traditional DR solutions don’t offer adequate RPOs. One-third said their RPO was more than 12 hours, with only 21 percent claiming it is two hours or less and 46 percent saying it is between two hours and 12 hours.

“The fact that RTO and RPO in this day and age is still in the two-to-12-hour range shows that disaster recovery is broken,” said Vasu Subbiah, CloudVelox’s vice president of products. “And IT does not have the resources. The average IT spend for disaster recovery is between five to seven percent. If they test less frequently, then mistakes are compounded when they try to recover in the future.”

CloudVelox, formerly called CloudVelocity, offers cloud-based disaster recovery, cloud data migration, and testing and development in the cloud. The July 2015 survey covered verticals that included oil and gas, basic materials, industrial, consumer goods and services, healthcare, telecommunications, utilities and finance.

The survey also found variations by vertical. For instance, the oil and gas industry had the highest average RPO, with 70 percent stating it took 12 hours or more, and the lowest test frequency, with 80 percent of those surveyed saying they test once a year or less. Thirty percent of all the industries included in the survey stated they had an RPO of 12 or more hours.

In healthcare, 69 percent tested once a year or less. Consumer services and healthcare were the most willing to embrace cloud-based DR if they could automate network and security controls in the cloud. Sixty-five percent of respondents in consumer services and 64 percent in healthcare would do cloud DR if they had the option of automation.

One in four respondents said they experienced failures or delays more than half of the time when testing their secondary data center. Fifty-three percent said network connectivity was the most common cause of failure when testing their disaster recovery environment. Another 37 percent cited incorrect configurations and 33 percent cited missing patches.

Network and security concerns often are singled out as barriers to cloud adoption. CloudVelox’s survey found that 55 percent of respondents would use cloud DR if they could automate their on-premises network and security controls in the cloud, while the other 45 percent would not consider the cloud even with that option.
