Storage Soup

October 9, 2015  3:20 PM

Microsoft Azure strengthens its backup and recovery services

Sonia Lelii

Microsoft Azure is strengthening its backup portfolio. The company now offers backups of Microsoft SQL Server, Hyper-V virtual machines, SharePoint Server, Microsoft Exchange and Windows clients through Microsoft Azure Backup Server.

Microsoft made the announcement in a blog post this week. Microsoft already offers its Azure Site Recovery and other disaster recovery options for virtual machines, but now the company is beefing up its backup and recovery offerings.

Customers can use the Microsoft Azure Backup agent or Microsoft Azure Recovery Services agent to back up files and folders only to Azure. To protect application workloads, users can download Microsoft Azure Backup Server and install it on a Windows Server.

Microsoft Azure Backup Server is available in all geographies where Azure is available, except the Microsoft Azure Government data centers and Microsoft Azure in China via 21Vianet.

“We are working to make it available in these geographies by the end of this calendar year,” Samir Mehta, Microsoft’s senior product manager, wrote in the company blog. “Users with tier 1 workloads like Microsoft SQL Server can benefit from Microsoft Azure Backup Server by choosing disk backups for better RPOs and RTOs.

“Users can continue to back up to Azure for long-term retention using a disk-to-disk-to-cloud backup strategy. Users can also leverage Microsoft Azure Backup Server to monitor backups of all applications in a single on-premises console.”

To use Azure as a backup target, users need the Microsoft Azure Backup agent or Microsoft Azure Recovery Services agent on servers or personal computers.

“That code will take care of the business of shuttling data into Azure, as it’s been imbued with the power to move the aforementioned workloads into Azure, an evolution from its previous file-only powers,” Mehta wrote.

October 9, 2015  10:31 AM

OpenStack Manila project leader previews Liberty, Mitaka releases

Carol Sliwa

The OpenStack Manila file share service is growing up.

During a Storage Networking Industry Association (SNIA)-sponsored webcast this week, Ben Swartzlander, a NetApp architect who is the project team lead for Manila, outlined the new features in the OpenStack Liberty release that is due for general availability next week. He also gave a preview of the upcoming Mitaka release, which he estimated would be ready in late April 2016.

“There’s an infinite list of ideas for ways to enhance Manila,” Swartzlander said, “so I don’t think we’re going to run out of new things to work on for a long time.”

The Liberty release of the OpenStack Manila file-based storage service introduces some experimental APIs and features that people can use with the understanding that those new capabilities could change in the future.

“It enables us to get features out into the hands of users and get feedback on them before we pour the concrete, so to speak,” Swartzlander said.

With Liberty, Swartzlander said the community focused on documentation to bring Manila up to par with the rest of the OpenStack projects, which include Swift object and Cinder block storage. Manila contributors also open sourced the generic server image, which Swartzlander admitted is “something we should have done earlier.”

Other new features in the Liberty release of OpenStack Manila include:

–Oversubscription, which Swartzlander said is “basically thin provisioning your storage and having Manila manage the degree to which you oversubscribe your backend.” He said users could oversubscribe by a factor of 2x or 10x or whatever they are comfortable with.

–Expanding/shrinking of shares. (A share is an instance of a shared file system.) Swartzlander said the new expand/shrink feature is important “because you don’t always know how much space you’re going to need from the beginning.” (A short python-manilaclient sketch of these share operations follows this list.)

–Micro-versions, which Swartzlander described as “basically a fine-grained version of the API, so that every time you make a change to the API, we increment the version number.” He said, “The servers and clients are implemented in such a way that they can negotiate down to a version that is in common between that server and that client so that in case things have changed in the API, they can find a common version and speak that version and maintain compatibility over a wider range of releases.” (A simplified sketch of this negotiation logic also follows this list.)

–Consistency groups, an experimental feature that allows users to snapshot multiple shares as a unit. Swartzlander cited a potential use case with storage for a database. “Maybe I want to have the table space on relatively large performing storage, and I want to have my database logs on really fast storage to maximize my database performance, and without costing too much,” he said. “But to back up my database, I need to be able to take a consistent snapshot of those two shares. ‘Consistency groups’ enables you to do that.”

–Mount automation of shares. The feature would enable users to intercept operations such as the creation of shares or granting access to a share and trigger a script to automate the mounting of the shares. He said there are different ways to do automation, but this feature covers many use cases.

“One of the major differences between block storage and shared file systems is exactly how the storage gets attached from where the bytes are to where the client is using the storage,” Swartzlander explained. “With block storage, you have a hypervisor in the middle, and the hypervisor has an API where you can tell it, ‘Go connect to this and get that storage and then provide it through to the guests.’ And the guests just see the new hard disk pop up, and the operating system then sees the new block device, and it can automate doing what needs to be done.

“With the shared file system, the mounting is actually direct from that guest VM [virtual machine] through to the backend,” he said. “The hypervisor isn’t really involved in that process, and so getting the clients to automatically mount the storage is a challenge that we’ve been aware of since the beginning of the project.”

–Share migration, an experimental feature that permits the movement of a share from one storage controller to another. Share migration will be administrator-controlled initially, according to Swartzlander.

“The use case for something like this would be evacuation of a storage controller for maintenance,” he said. “Perhaps you want to do load balancing. You have one storage controller that’s working really hard and another one that’s not working hard enough. You can move some stuff around.”

Swartzlander added that share migration would form the basis of future features such as re-typing a share (changing the type of an existing share), changing the availability of an existing share and changing the security domain.
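
The share operations described above map directly onto Manila’s API. Below is a minimal sketch using the python-manilaclient library; the endpoint, credentials and exact constructor arguments are placeholders and assumptions, and method signatures may differ between client releases.

```python
# Minimal sketch of driving Manila share operations from Python.
# Requires python-manilaclient and keystoneauth1; the credentials and
# endpoint below are placeholders for a real OpenStack cloud.
from keystoneauth1 import loading, session
from manilaclient import client as manila_client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://controller:5000/v3',     # hypothetical Keystone endpoint
    username='demo', password='secret',
    project_name='demo',
    user_domain_name='Default', project_domain_name='Default')
sess = session.Session(auth=auth)

# '2' selects the micro-versioned v2 share API.
manila = manila_client.Client('2', session=sess)

# Create a 1 GB NFS share, then use Liberty's expand/shrink support.
share = manila.shares.create('NFS', 1, name='demo-share')
manila.shares.extend(share, 2)   # grow the share to 2 GB
manila.shares.shrink(share, 1)   # shrink it back to 1 GB

# Grant read/write access to a client subnet before mounting it.
manila.shares.allow(share, 'ip', '10.0.0.0/24', 'rw')
```

Oversubscription itself is typically a back-end setting (Manila’s max_over_subscription_ratio option) rather than a per-share call.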
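
And here is a simplified illustration of the micro-version negotiation Swartzlander describes. This is generic version arithmetic to show the idea, not Manila’s actual implementation, and the version numbers in the example are made up:

```python
# Toy illustration of API micro-version negotiation: each side advertises the
# range of micro-versions it understands, and they settle on the newest version
# they have in common (or fail if the ranges do not overlap).
def negotiate(client_min, client_max, server_min, server_max):
    low = max(client_min, server_min)
    high = min(client_max, server_max)
    if low > high:
        raise RuntimeError("no API micro-version in common")
    return high

# A client that speaks 2.0 through 2.6 talking to a server that speaks
# 2.4 through 2.15 settles on 2.6, so both sides keep working.
print(negotiate((2, 0), (2, 6), (2, 4), (2, 15)))   # -> (2, 6)
```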

Areas of focus for the upcoming Mitaka release include “Migration 2.0,” additional first-party open source drivers supported by the community, improved support for rolling upgrades and high availability, and share replication.

Swartzlander said the latter feature would allow Manila to configure a share to be replicated to a different availability zone. In the event of a power failure, fire or flood in the data center, users would be able to switch to the replicated copy of the data to keep an application running, he said.

“The goal is to support a wide variety of implementations,” Swartzlander said. “We have, for example, a proposal to do active-active replication or active-passive replication. Synchronous or asynchronous are both supported depending on what the vendor wants to implement and what the administrator wants to enable.”

October 9, 2015  8:41 AM

Big deals make hyper-convergence a bigger deal

Dave Raffo
Cisco, Dell, Hyper-convergence, Nutanix, SimpliVity

Partnerships with large hardware vendors are paying off for hyper-converged pioneers Nutanix and SimpliVity.

Dell this week disclosed a $28 million deal with the Federal Bureau of Investigation (FBI) that included Nutanix-powered hyper-converged systems for virtual desktops. Dell said the FBI acquired more than 600 XC Series appliances that bundle Nutanix software on Dell hardware through an OEM deal. The FBI deal also included Dell AppAssure data protection software and Dell networking products.

SimpliVity said it closed its biggest deal yet last quarter when it landed a European service provider that implemented more than 200 SimpliVity OmniStack deployments with Cisco UCS hardware. That deal helped SimpliVity increase revenue by 50 percent over the second quarter and more than double its revenue from the same quarter last year, CEO Doron Kempel said.

Kempel said most of SimpliVity’s revenue came from outside the U.S. last quarter, mainly because of the large service provider deal.

Kempel said bigger transactions are coming in partly because of SimpliVity partnerships with Cisco and Lenovo but also because OmniStack 3.0 has broadened its use cases. He said earlier customers are also expanding their hyper-converged footprint, and the market itself is gaining acceptance with data center buyers.

“The hyper-converged market is starting to become more mature,” Kempel said. “Customers do their homework now and find out about all the hyper-converged players. There are really three vendors that the market views as leaders – SimpliVity, VMware and Nutanix. We don’t see VMware yet, it’s still mostly in the lower-end single-site cases. We see Nutanix in about 20 percent of our deals. The rest of the deals are against EMC, NetApp and the standard players.”

October 8, 2015  2:24 PM

IBM’s new Linear Tape-Open Ultrium 7 will hit the market later this month

Sonia Lelii

IBM still hasn’t given up on tape.

The company today announced its new 6 TB IBM Linear Tape-Open Ultrium 7 (LTO-7) drive, which performs at 300 MB per second and has double the capacity of the previous drive. The drive will be integrated into the IBM TS4500 tape library, which launched last year, and into IBM’s new TS2270 tape drive for backup and archiving.
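
For a sense of scale, simple arithmetic on those figures shows how long a full pass of one cartridge takes; this is a back-of-the-envelope calculation on the published specs, not an IBM benchmark:

```python
# Time to stream a full 6 TB LTO-7 cartridge at the quoted 300 MB/s native rate,
# ignoring compression and repositioning overhead.
capacity_bytes = 6 * 10**12        # 6 TB native capacity
rate_bytes_per_sec = 300 * 10**6   # 300 MB per second native transfer rate
hours = capacity_bytes / rate_bytes_per_sec / 3600
print(f"{hours:.1f} hours")        # roughly 5.6 hours
```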

“Tape still is the cheapest solution out there,” said Eric Herzog, vice president of product marketing at IBM. “Eighty to 90 percent of data generally is not accessed after 90 days. You don’t want to put that stuff on primary storage if you only need to protect it for the first 90 days.”

The LTO-7 technology is also designed to support data encryption. The hardware encryption and decryption core and control core reside in the drive.

Herzog said LTO-7 allows the TS4500 to scale to 347.5 PB of storage in 18 frames while using 43 percent less floor space compared with a high-density disk system. The library can store up to 5.5 PB of data in a single 10-square-foot library, which is three times the capacity of the IBM TS3500 tape library.

The system grows by adding frame models, and the storage footprint can be reduced by using 10U of rack space on top of the library for Fibre Channel switches, tape data movers or IBM Spectrum Archive nodes. The TS4500 tape library is designed for mid-sized and large enterprises dealing with high data volumes and growth in their data centers.

The IBM TS2270 provides physical storage capacity of up to 15 TB and data transfer performance of up to 300 MBps with 6 Gbps SAS interface connectivity. The TS2270’s 6 Gbps SAS interface can connect to a wide spectrum of open-system servers, and the drive can be managed with tape management software such as IBM Spectrum Protect or third-party storage software.

IBM will make LTO-7 tape drives available on Oct. 23, and availability varies by automation platform. Enhancements to the TS4500 tape library will be available on Nov. 20, 2015.

October 8, 2015  10:08 AM

Dell is EMC’s latest potential dance partner

Dave Raffo
Dell, EMC

The EMC rumor machine is in full force again today. This time it’s Dell that is reportedly interested in buying all or some of EMC.

Of course, if the sources for these rumors were 100% reliable, then EMC would already be part of Hewlett-Packard or VMware or Cisco, or it would own all of VMware instead of 80%. Even the Dell rumors are all over the place, with some saying Dell wants all of EMC and others putting specific pieces of EMC in Dell’s crosshairs.

What all these rumors tell you is that EMC is exploring many options in the wake of pressure from activist investors led by Elliott Management. Elliott wants EMC to break up the federation of companies that include EMC Information Infrastructure (the storage group), VMware, RSA, Pivotal and smaller pieces. EMC executives have argued that the federation model works best, and they clearly want to keep VMware most of all.

Elliott’s agreement to let EMC raise its stock price on its own expired in September without the desired result (although the Dell rumors have raised EMC’s share price). Now the activist investors are looking for EMC to make a significant move.

It’s unlikely that Dell can buy all of EMC when a few years ago it couldn’t afford to acquire 3PAR after HP got into the bidding. EMC’s valuation of $50 billion is twice as much as Dell’s, and Dell still has $11.7 billion in debt from when it went private in 2013. HP might also jump in and try to outbid Dell and derail its hopes of buying EMC.

There are pieces of EMC that Dell could use, though. Any server vendor would want VMware. Much of the EMC Federation strategy revolves around VMware’s virtualization and cloud technologies, however, and a sale of VMware would be a major loss for EMC.

More interesting on the storage front, one report said Dell might buy EMC’s VNX storage systems business. VNX would fill the gap in Dell’s storage portfolio that it originally wanted 3PAR to plug. Dell acquired Compellent after losing out on 3PAR, but Compellent’s arrays don’t reach as high into the enterprise as VNX systems.

The success of EMC’s XtremIO all-flash array might prompt it to part with VNX, which is part of EMC’s legacy storage portfolio that has experienced little or no growth in the past year. VNX arrays include old Clariion technology, and Dell used to sell Clariion under a partnership with EMC that ended in 2011. But why would Dell want a VNX business that has sluggish growth? There is also the chance that Dell would buy out all of EMC’s storage, although even that could be too pricey.

I expect we’ll see news on an EMC merger or spinout by Oct. 21 when EMC reports its quarterly earnings. I don’t think even EMC or its partner in any potential deal even knows yet what that news will be. But it doesn’t sound like the storage giant has ruled out much so far.

October 7, 2015  9:35 AM

Veritas moves further toward freedom, cloud backup

Dave Raffo
NetBackup, Veritas

Veritas completed operational separation from Symantec last Friday, and this week made a minor upgrade to its flagship NetBackup software with a focus on Amazon Web Services (AWS) and NetApp environments.

Operational separation means the backup company and its parent security vendor are operating as separate organizations, ahead of the $8 billion sale of Veritas to the Carlyle Group. That sale is expected to close in January.

NetBackup 7.7.1 includes a connector to Amazon Simple Storage Service Standard-Infrequent Access (S3 Standard-IA) that Amazon launched last month. The new Amazon service is seen as a tier that will fit between production and archive data.

Veritas also extended NetBackup support to the AWS Commercial Cloud Services (C2S) region, a secure cloud service that is part of the AWS GovCloud.

NetBackup 7.7.1 also supports cluster-aware Network Data Management Protocol (NDMP) backups for NetApp clustered Data ONTAP, and orchestration of snapshot and replication operations using NetApp’s SnapVault and SnapMirror. The backup software now includes an Accelerator for NDMP that supports NetApp filers. NetBackup Accelerator is designed to complete full backups in the time it takes to do incremental backups.

The release follows NetBackup 7.7, which launched in July with an emphasis on cloud support.

Simon Jelley, VP of backup management for Veritas, said the point-upgrade is part of the vendor’s new strategy of making quarterly releases to support new applications on the market. He also said the connectors – added in version 7.7 for Amazon S3, Google, Verizon, Cloudian and Hitachi Data Systems clouds — have been popular with customers looking to replace disk-to-disk-to-tape backups by using the cloud instead.

“They’re using the cloud as a long-term archive tier as an alternative to tape,” Jelley said. “It’s more efficient for recovery because they don’t have to recycle tapes. And cloud archiving is becoming more affordable with the [Amazon] Infrequent Access tier, Amazon Glacier and Google Nearline.”

With its focus on the cloud, you would expect Veritas to add cloud-to-cloud backup for applications such as Microsoft Office 365 and Google Apps. EMC Spanning, Datto Backupify and Asigra’s Cloud Backup are among those doing cloud-to-cloud backup, but Veritas has not gone there yet.

“We have not seen large enterprises move there, but it’s something we’re looking at,” Jelley said of cloud-to-cloud backup.

September 30, 2015  10:13 AM

FalconStor acquires cloud-based analytics for FreeStor

Dave Raffo

FalconStor is preparing to add predictive analytics monitoring to its FreeStor storage virtualization software, which the vendor compares to Nimble Storage InfoSight and Pure1 cloud-based analytics.

FalconStor signed a licensing and co-development agreement with Cumulus Logic that gives FalconStor exclusive use of Cumulus Logic analytics code.

Cumulus Logic is still in stealth, but has been developing an analytics engine that allows centralized reporting across heterogeneous storage systems. It will collect data from storage and applications, present historic and real-time reports and help maintain management policies for storage and servers. It will present data in web-based dashboards that can also be accessed through mobile devices.

The analytics will be built into FreeStor as part of the base product for no extra charge, FalconStor CEO Gary Quinn said.

“I think customers will buy FreeStor just for the analytics,” Quinn said. “They will want the pure ability to learn more about the environment and make decisions.”

FalconStor expects the analytics to be available around March or April of 2016.

FreeStor’s management server already provides configuration information and does monitoring and reporting. The Cumulus intelligence will add predictive analytics and allow it all to run in an Amazon cloud instance. Quinn said the new information will help customers plan for capacity, meet SLAs and predict the health of their storage systems. He expects service provider customers to use it in the cloud while large enterprises may prefer to keep the repository on-premise.

“Analytics have been around a long time, but a favorite reason to buy Nimble and Pure is the nice information you receive about your array,” Quinn said. “FreeStor is dedicated to a heterogeneous environment, and we think that [analytics] capability needs to go horizontal across the industry.”

The Cumulus code will pull all information from FreeStor Storage Servers, which use REST APIs to gather data from storage arrays and servers. “We collect a tremendous amount of information,” Quinn said. “We are now applying a smart rules engine to analyze all that data we’re receiving. We will present that in a simple Web browser or mobile application. You can take actions based on whether you’re achieving your SLAs, running out of capacity or having performance issues.”
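
Neither FalconStor nor Cumulus Logic has published details of the rules engine, but the kind of threshold evaluation Quinn describes can be sketched in a few lines. Everything below – the metric names, thresholds and alert strings – is hypothetical illustration, not FreeStor code:

```python
# Hypothetical sketch of a storage rules-engine pass: evaluate collected metrics
# against simple thresholds to flag SLA, capacity and performance problems.
from dataclasses import dataclass

@dataclass
class Metrics:
    used_pct: float    # capacity consumed, as a percentage
    latency_ms: float  # average I/O latency in milliseconds
    iops: int          # IOPS observed over the sampling window
    sla_iops: int      # IOPS promised to the application

def evaluate(array, m):
    alerts = []
    if m.used_pct > 85:
        alerts.append(f"{array}: {m.used_pct:.0f}% of capacity used, plan expansion")
    if m.latency_ms > 5:
        alerts.append(f"{array}: latency {m.latency_ms} ms exceeds target")
    if m.iops < m.sla_iops:
        alerts.append(f"{array}: delivering {m.iops} IOPS, below the {m.sla_iops} SLA")
    return alerts

print(evaluate("array-01", Metrics(used_pct=91, latency_ms=3.2, iops=18000, sla_iops=20000)))
```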

Unlike Nimble and Pure, FalconStor will not collect customer information in its own cloud. “Nimble and Pure customers can log in and gain insight into how other customers are doing,” Quinn said. “We could do that, but that’s not our first iteration.”

Quinn said the analytics will make FreeStor more valuable to OEM partners. FalconStor has announced OEM deals with X-IO and Kaminario, and can bring its analytics to other vendors’ arrays. Quinn said the analytics are especially helpful with companies who want to make sure their flash storage is optimized for the best performance. FreeStor was developed in collaboration with flash array vendor Violin Memory.

“FreeStor Storage Server sits in the data path,” he said. “We see IOPS, latency and bottlenecks in the data path, and can even capture log information if you’re encountering hardware difficulties.”

September 29, 2015  12:34 PM

Flash storage and cleaning house

Randy Kerns
Data migration

A recent discussion with a client got me thinking about precipitating events that cause IT professionals to “put their house in order” regarding the information they store. In this case, there was a new all-flash storage system acquired for primary storage. The transition prompted the client to look at the information stored on the system to be replaced, discarding what was no longer useful and moving inactive data to another system.

This is similar to what many of us go through in our personal lives. Certain events cause us to examine the objects we have accumulated and make a conscious decision to discard some. Moving to a new home is the most obvious example. While packing all your belongings, junk, hoarded items, etc., you decide what you really do not need and how to get rid of it. The first thoughts may be a garage sale or some friend you know could really use that stuff. Other things go right into the dumpster. The second phase of reduction comes after you get your boxes to your new place. After a certain period of time, belongings that are still packed up can probably be safely discarded.

There is a parallel with our IT lives. Bringing in all-flash storage for primary storage adds a faster system that can provide greater economic value for the company, and it should be more carefully managed than the previous system. However, there are other “precipitating events” in managing information that should cause us to clean house, or address our “data hoarding.”

For instance, the purchase of a new primary storage system can also lead to a movement of data for load balancing. Deploying a new content repository can spark an initiative to store data based on value or activity, establish retention rules and accommodate growth. And organizational change can lead to new company dynamics – acquisitions or consolidations – and changes in the services delivery model, such as a transition to IT as a Service.

These events happen with more regularity than most would think. To manage information strategically, you should add the task of organizing information to these events. Like when discarding junk from your house, it’s hard to do these tasks as regularly planned activities because they get indefinitely postponed or dropped due to lack of time or resources.

So these events in IT do mirror our personal lives. We need to recognize this, plan for it, and take advantage of these events to make improvements. It may not be the optimal way to clean out unneeded data, but it is a method that is naturally practiced.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

September 25, 2015  3:06 PM

AWS introduces a new infrequently accessed S3 storage tier

Sonia Lelii
Amazon, Cloud storage

Amazon Web Services (AWS) rolled out a new type of storage for infrequently accessed data within the S3 tier that costs 1.25 cents per GB per month to store, with a 1 cent per GB charge to access the data.

The cloud has become a repository for unstructured data storage that is rarely accessed. Amazon already has its Glacier service for this type of storage. However, now it has introduced a new pricing tier for its high-throughput Amazon S3 standard.

“The new S3 Standard – Infrequent Access (Standard – IA) storage class offers the same high durability, low latency, and high throughput of S3 Standard. You now have the choice of three S3 storage classes (Standard, Standard – IA, and Glacier) that are designed to offer 99.999999999 percent … of durability.‎  Standard – IA has an availability SLA of 99 percent,” according to the Amazon blog post.

Earlier this month, Amazon also reduced the price for data stored in Amazon Glacier from $0.01 per GB per month to $0.007 per GB per month.

“This price is for the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions; take a look at the Glacier Pricing page for full information on pricing in other regions,” Amazon stated in its blog.

The new tier still allows customers to define data lifecycle policies to move data between the different Amazon S3 classes, such as storing new data in the S3 Standard class, moving it to Standard-IA after it has been stored for a certain time, and then moving it to the Amazon Glacier service once the data is 60 days old.

“The new Standard-IA class is simply one of several attributes associated with each S3 object,” according to the AWS blog. “Because the objects stay in the same S3 bucket and are accessed from the same URLs when they transition to the Standard-IA, you can start using Standard-IA immediately through lifecycle policies without changing your application code. This means that you can add a policy and reduce S3 costs immediately, without having to make any changes to your application or affecting its performance.”
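
That tiering pattern can be expressed as a lifecycle rule through the AWS SDK. Here is a minimal sketch using boto3 with a hypothetical bucket name; the 30-day Standard-IA transition is an assumed figure, since the post only says data moves after a certain time:

```python
# Minimal sketch: a lifecycle rule that moves objects to Standard-IA after
# 30 days and to Glacier after 60 days. The bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down-older-data",
            "Filter": {"Prefix": ""},   # apply to every object in the bucket
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 60, "StorageClass": "GLACIER"},
            ],
        }]
    },
)
```

Because the objects stay in the same bucket and keep the same URLs, as the AWS blog notes, no application code changes are required.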

September 23, 2015  9:42 AM

IDC backs up on appliance revenue drop

Dave Raffo
Barracuda, EMC, IDC, Symantec

IDC Tuesday corrected the purpose-built backup appliance (PBBA) market tracker numbers it issued last week, giving market leader EMC more than $55 million in additional revenue for the second quarter.

The initial report showed steep declines for the market overall and EMC specifically. EMC apparently made a persuasive case that IDC under-reported its true backup appliance revenue, which consists mostly of Data Domain disk libraries. The new numbers show a less bleak picture for appliance sales, although they still declined slightly in the quarter.

The revised numbers give EMC $469.9 million compared to $414 million in the original report. The new total represents a 5.8 percent year-over-year drop for EMC and a 60.1 percent market share. The original numbers represented a 16.9 percent year-over-year drop and 57.1 percent share for EMC.

The revised numbers put total worldwide revenue at $781.1 million for last quarter, a one percent drop from last year instead of the eight percent decline from last week’s report. IDC includes revenue from appliances that require separate backup software along with integrated appliances that bundle software with storage.

Even a modest fall indicates a reversal of recent trends. The PBBA market grew 6.9 percent year-over-year in the first quarter of 2015 and increased 4 percent for the full year in 2014 over 2013.

No. 2 Symantec’s revenue fell 3.7 percent to $104.5 million last quarter, according to IDC. Barracuda Networks made the biggest revenue jump, growing 67.6 percent to $26.8 million and remaining in fifth place with 3.4 percent share. That followed a 64.9 percent year-over-year jump in the first quarter, after Barracuda’s aggressive rollout of backup appliances that support replication between appliances or to the Barracuda Cloud.

No. 3 IBM grew 0.8 percent to $54 million and No. 4 Hewlett-Packard increased 8.8 percent to $36.7 million. All other vendors combined grew 13.4 percent to $89.6 million, an 11.5 percent market share.

In the press release detailing the revenue report, IDC attributed the revenue drop to “market evolution.”

“Focus continues to shift away from hardware-centric, on-premise PBBA systems to hybrid/gateway systems,” said Liz Conner, IDC research manager for storage systems, in the press release. “The results are greater emphasis on backup and deduplication software, the ability to tier or push data to the cloud, and the increasing commoditization of hardware, all of which require market participants to adjust product portfolios accordingly.”
