Storage Soup


October 29, 2014  1:53 PM

Quorum aims for one-click disaster recovery

Sonia Lelii
Storage

Quorum is offering a one-click disaster recovery product that gives customers the ability to prioritize restores.

The company recently announced its OnQ Flex solution, which lets customers designate which servers need quicker restores based on recovery time objectives (RTOs). OnQ Flex, part of Quorum’s disaster recovery as a service (DRaaS) product, offers one-click recovery and one-click testing whether the servers are on-premises or residing in a co-location facility.

“Before we offered full-level protection all the time. Now we have introduced something that is more flexible based on what server needs priority,” said Kemal Balioglu, Quorum’s vice president of products.

Typical disaster recovery processes often require hours or days to get back up and running. Balioglu said OnQ Flex offers one-click recovery after any storage, server or complete site failure, restoring mission-critical data instantly while less-critical data becomes available later. Virtual clones of all the protected servers can run on local and remote appliances.

With OnQ Flex, the primary hardware and applications can be on-premises or in a customer-operated co-location facility. On-premises configurations that encounter a disaster fail over to the cloud, while a virtual machine is spun up for a configuration already running in the cloud.

“We always have a high availability node on standby,” Balioglu said, “for a single-click recovery to the cloud.”

OnQ Flex provides replication of compressed and encrypted data, integrated server monitoring, email and text alerts, and scheduled health reports.

For data protection and recovery, the product has full system imaging, sub-file-level incremental updates, global deduplication at the source, and bare-metal restores and file-level recovery from any snapshot. The deployment includes bandwidth throttling for internal and external storage.

Customers are charged through a subscription-payment model.

October 29, 2014  11:52 AM

CommVault slump continues

Dave Raffo
Backup software, Commvault, Storage

CommVault CEO Bob Hammer insists his company isn’t broken, although he has a plethora of fixes lined up.

The backup software vendor Tuesday reported rocky results for last quarter (its third straight disappointing quarter) and indicated this quarter won’t be much better.

CommVault’s $151.1 million in revenue last quarter was close to $7 million below Wall Street expectations. The revenue was up seven percent year-over-year and down one percent from the previous quarter, disappointing for a company that not long ago was growing in the 20 percent range year-over-year every quarter. CommVault executives said on the company’s earnings call that they don’t expect much revenue growth this quarter.

Hammer did maintain that CommVault will bounce back to post revenue growth in the 20 percent range by the end of 2016 and hit $1 billion in annual revenue within three years. He blames the poor recent sales mostly on the way the company has packaged and priced its Simpana software. His cures include management additions (new global sales VP and chief management officer) and structure, new product bundles and pricing models, and product upgrades that include Simpana 11 and an appliance partnership with NetApp.

“We knew this quarter would be challenging … and that it would take us several more quarters to get back to sustainable consistent high-growth trajectory,” Hammer said.

Hammer said he realized more than a year ago that changes would be needed and put them in place, but not fast enough. “We possess strong underlying business fundamentals, our target markets continue to have solid growth potential and we are well-positioned to take advantage of the increasing demand by both enterprise-level and mid-sized companies,” he said.

“We see our current challenge as difficult but resolvable.”

Much of his optimism is based on Simpana 11, which is scheduled to begin its beta program in early 2016. Hammer said Simpana 11 will be open so “anyone will be able to access and read data that we store under more sophisticated security functionality,” with APIs throughout the stack. Simpana 11 will also index and transport data differently.

Hammer said CommVault is also emphasizing fast access to data and native copy capabilities that let customers store data in the same format in which the original application created it. That allows recovery without having to use a backup copy.

Integrated backup appliances are catching on in the market, and CommVault has not followed Symantec’s lead of selling its own appliances with its software. However, CommVault is adding hardware partners, and Hammer said NetApp will begin selling its E Series storage appliances with Simpana software this quarter. Fujitsu is also adding a CommVault appliance in Europe this quarter.

“We didn’t want to be in the hardware business,” Hammer said. “So it took us more time to put programs together with our hardware and distribution partners. It took us a long time but now we have them.”

CommVault’s revenue from enterprise deals – defined as deals above $100,000 in software revenue – fell five percent from last year and 14 percent from the previous quarter. The average enterprise deal fell to $281,000 from $396,000 the prior quarter.

Still, CommVault executives said its problems are mainly in the mid-market. Because of CommVault’s pricing structure, smaller vendors targeting specific use cases have taken business away, Hammer said.

CommVault has already switched to per-user or per-VM licensing for new solution sets launched in August. Those bundles were for virtual machine and cloud backup, endpoint protection, email archiving, and snapshot management. It has also set up business units for data protection, cloud ops/orchestration, information compliance, mobile and vertical solutions to concentrate on those market segments. Each business unit will be responsible for technical roadmaps and executing against strategic and revenue goals.

Veeam Software is not among the vendors hurting CommVault, according to Hammer and CommVault COO Al Bunte. Veeam specializes in virtual machine backup and has rapidly grown into close to a $400 million annual revenue company. Hammer acknowledged that data protection for virtualization is the fastest growing part of the data management market, but said CommVault does well there. He said Veeam plays at the low-end of the mid-market, and he expects the new packaging and pricing to help CommVault there. Bunte added that CommVault faces more competition from its traditional larger rivals EMC, Symantec and IBM in the mid-market.


October 22, 2014  9:33 AM

EMC takes control of VCE, Tucci willing to delay retirement

Dave Raffo
Cisco, EMC, Storage, VCE

EMC, under pressure to spin off assets or merge with another large company, today spun in one of its assets – its VCE joint venture with Cisco.

EMC CEO and chairman Joe Tucci would not comment on any other possible M&A strategy during EMC’s earnings report call. Hewlett-Packard executives have claimed the two companies held merger talks before HP split its company in two. Tucci said EMC’s policy is not to comment on speculation or rumors.

He did say he agreed with investor Elliott Management that EMC’s stock is undervalued. Elliott is pushing for EMC to spin off assets. Tucci called the stock performance “painful” and “baffling” and said it does not reflect EMC’s growth in recent years. When asked if EMC would give any updates on possible mergers or sales, he said, “I believe we owe investors an update. We will do that early in the new year.”

Tucci’s contract expires next February, and that has been a catalyst for much of the merger and spinoff talk. But Tucci today said he is open to staying beyond that date in his current role or as chairman only. But not for long.

“You should view February of 2015 as a guidepost, not a definitive date,” he said. “I told the board, ‘If you have a [replacement] and want to move earlier, that’s fine.’ Or if you want me to stay a little longer – I’m not talking years, but months or quarters – that’s fine. Or if you want me to stay on in a chairman role, I would contemplate that favorably.”

Tucci certainly didn’t sound like he favored spinning out VMware or any EMC asset. He said methods of raising stockholder value through spinoffs and stock buybacks “aren’t strategies, they’re tactics. You need to build a strategy. We’ve invested in a strategy. We have some great assets, and these are going to pay off big time.”

EMC reports revenue for all of its companies, including independently run VMware and Pivotal, as well as EMC Information Infrastructure (EMC II). EMC II is the main storage group within the EMC federation.

EMC II reported revenue of $4.5 billion, which was up six percent from last year. Tucci and EMC II CEO David Goulden said emerging technologies such as XtremIO flash arrays and ViPR and ScaleIO software-defined storage fueled much of the growth, along with midrange VNX arrays and Data Domain backup appliances.

VCE will move under the EMC II umbrella, with Cisco reducing its stake in the joint venture from 35 percent to 10 percent. The move comes after weeks of rumors that Cisco would pull out or greatly reduce its role in the money-losing company that was created in 2009.

VCE sells Vblocks, which are pre-tested bundles of EMC storage, Cisco server and networking products, and VMware software. EMC claims VCE has an annual revenue run rate of more than $2 billion, which means it sold more than $500 million worth of products last quarter. EMC also claims more than 2,000 Vblocks have been sold since VCE began.

Although an EMC blog today hailed VCE as “The most successful joint venture in IT history,” it has been a money loser for the partners.

According to a report published by financial analyst Aaron Rakers of Stifel, EMC and Cisco suffered more than $1.6 billion in combined operating losses from the joint venture through July. With a 35 percent equity stake, Cisco’s share of the losses would be $644 million. Cisco decreased its VCE investment to $10 million in the quarter that ended last April compared to $91 million the previous quarter. VCE partners have invested a combined $1.988 billion in the joint venture with more than $700 million coming from Cisco, according to Rakers.

Cisco has maintained partnerships with EMC competitors over the life of VCE, including reference architectures with NetApp (FlexPod) and Nimble Storage (SmartStack).

Goulden said Vblocks will continue to exclusively use EMC, Cisco and VMware technology. VCE’s 2,000 employees will join EMC.


October 20, 2014  5:10 PM

Cleversafe grabs an HP reseller deal, too

Dave Raffo
Cleversafe, HP, Object storage, Scality, Storage

Hewlett-Packard isn’t tied to one object storage product, not even its own. Days after revealing a reseller deal with Scality for its Ring software, HP and Cleversafe today said HP would also resell Cleversafe’s dsNet. HP resells software from both private companies on its ProLiant servers.

HP has worked closely with both vendors, and had already set up a web page to sell their object storage software before it reached official reseller deals with either. HP also has its own object storage in its StoreAll product.

As with Scality, Cleversafe sees a deal with HP as an opportunity to expand its sales reach.

Peter Howard, Cleversafe’s vice president of channels and alliances, said the deal came about after HP and Cleversafe developed a base of common customers. Cleversafe sells its dsNet software on appliances, but optimized it to work for HP customers who wanted to buy the software separately to run on ProLiant servers.

Cleversafe continues to sell appliances, but Howard said the software holds the value.

“We’re committed to being a software company,” Howard said. “The hardware is there as a convenience for customers who want one throat to choke. All the value is in the software. HP said they wanted to be the one throat for their customers, and they wanted to sell our software.”

Howard said Cleversafe’s largest customer segment is service providers, and most of its growth is in financial services, life sciences and other verticals with a great deal of data growth. Active archive and web content are common use cases.

“We look pretty attractive after you get above a petabyte of data,” he said.


October 20, 2014  11:15 AM

Calculating capacity utilization can be challenging

Randy Kerns
Storage

Capacity utilization is one area where storage vendors have made a lot of improvements. Advanced features such as storage pooling, thin provisioning, and storage virtualization have introduced greater efficiencies for using storage capacity.

Still, trying to understand capacity utilization can be confusing. The utilization must be examined at a larger scale than a single storage system. Storage virtualization can span systems. Thin provisioning overcommits capacity across systems, which can drive up utilization rates. The larger the pool, the more flexibility a system has in allocating storage resources.

Data reduction (compression and/or deduplication) usually allows more data to be stored in a given amount of storage. Data reduction effectiveness varies based on the data type and the implementation by the vendor. Data reduction represents a potential increase in usable capacity. Guidelines or guarantees from the vendor can be used to gauge that potential, and actual measurements are usually available from the management interfaces on the storage systems when data reduction is in use.

In the discussion about storage capacity utilization, it is useful to understand basic definitions and update them to current terminology for the technology in use. The following are some of the more basic terms and explanations.

Used capacity – the space occupied by data that can be accessed from hosts.

Usable capacity – storage space within a storage system or across pooled systems that can be configured for volumes (LUNs) or filesystems. This is the raw capacity minus the storage system overhead. The overhead includes data protection such as RAID devices and allocated chunks in storage pools and segments for forward error correction using codes such as erasure codes. Filesystems also reserve space for operational processes, which is not included in the usable capacity calculation.

Allocated but unused capacity – storage space allocated to a volume or filesystem with no data stored. This space is not available to other applications or file systems, although it can be used later for data.

Effective capacity – the usable capacity multiplied by the expected effectiveness of data reduction.

Raw capacity – the aggregate of the capacity of the storage devices (hard disk drives, solid-state devices, flash modules).
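To make the relationships between these terms concrete, here is a minimal Python sketch that applies the definitions above. All of the figures – raw capacity, overhead fraction, data-reduction ratio – are illustrative assumptions, not vendor numbers.

```python
# Illustrative capacity math using the definitions above.
raw_tb = 100.0              # raw capacity: sum of all device capacities
overhead_fraction = 0.25    # RAID/erasure-code parity, pool chunks, reserves
dedup_ratio = 3.0           # assumed data-reduction effectiveness (3:1)

usable_tb = raw_tb * (1 - overhead_fraction)   # configurable as LUNs/filesystems
effective_tb = usable_tb * dedup_ratio         # usable x expected data reduction

allocated_tb = 60.0         # space allocated to volumes/filesystems
used_tb = 40.0              # space actually holding host-accessible data
allocated_unused_tb = allocated_tb - used_tb   # reserved but empty

print(f"Usable: {usable_tb:.0f} TB, effective: {effective_tb:.0f} TB")
print(f"Allocated but unused: {allocated_unused_tb:.0f} TB")
print(f"Capacity utilization: {used_tb / usable_tb:.0%}")
```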

Storage system data protection also has special considerations.

Snapshots – there are two primary types of implementations: Redirect-on-Write and Copy-on-Write. Redirect-on-Write is used with more recent storage pooling implementations such as all solid-state storage systems, where available space from the storage pool is used for the changed data. With thin provisioning, the recommendation is to not exceed 90% utilization, including snapshots and used capacity. Copy-on-Write implementations usually depend on pre-allocated capacity to contain a copy of the original data when a change is made. The pre-allocated space is included in the storage system overhead and reduces the usable capacity.

Replicated copies for disaster recovery / business continuance – these are volumes or filesystems, typically at remote sites, that represent a copy of the original active data. For capacity utilization calculation, the space is treated the same as any of the primary volumes – replication just means you need that much more capacity. The effect of low capacity utilization is multiplied with replication.
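A follow-on sketch, again with assumed numbers, checks a thin-provisioned pool against the 90% guideline mentioned above and shows how replication multiplies the cost of unused capacity:

```python
# Thin-provisioned pool: used data plus redirect-on-write snapshot space
# measured against the 90% utilization guideline. Figures are assumptions.
usable_tb = 75.0
used_tb = 55.0
snapshot_tb = 14.0

pool_utilization = (used_tb + snapshot_tb) / usable_tb
if pool_utilization > 0.90:
    print(f"Warning: pool at {pool_utilization:.0%}, above the 90% guideline")

# Replication: each remote copy needs as much capacity as the primary,
# so unused (but purchased) capacity is multiplied along with the data.
replica_sites = 2
total_needed_tb = usable_tb * (1 + replica_sites)
unused_total_tb = (usable_tb - used_tb) * (1 + replica_sites)
print(f"Capacity across all sites: {total_needed_tb:.0f} TB")
print(f"Unused capacity across all sites: {unused_total_tb:.0f} TB")
```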

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 17, 2014  7:56 PM

OpenStack Juno release adds storage policies for Swift clusters

Carol Sliwa
Storage

The most prominent storage feature made available yesterday with the 10th release of OpenStack cloud software — known as Juno — gives users the ability to control how and where they want to store, replicate and access data across object storage clusters.

The new “storage policies” capability applies to the OpenStack Object Storage project, better known by its code name, Swift. The latest OpenStack Swift release also includes updated support for the OpenStack Keystone identity service and data-handling improvements that reduce CPU load, but the feature drawing the most attention is storage policies.

“They’re the biggest thing that’s happened to Swift since it was open sourced as part of OpenStack four years ago,” said John Dickinson, the project technical lead for OpenStack Swift and director of technology at SwiftStack Inc., which sells a commercially supported version of the open source Swift software.

Dickinson said that, by using storage policies, a company with a Swift-based server cluster spanning the United States and Europe could choose to store some data in only one geographic region. Or, a user with flash- and disk-based storage could set up tiers based on storage policies and offer different service-level agreements or chargeback/billing options.

Storage policies also enable users to decide the number of data replicas they want across a Swift cluster. For instance, an enterprise might choose to replicate some data only in two locations and other data across four data centers in different geographies.
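As a rough illustration of how that surfaces to users: once an operator has defined policies in the cluster’s swift.conf and built an object ring for each, a client can pin a container to a policy when creating it, and objects in that container follow the policy. The sketch below uses the python-swiftclient library; the endpoint, credentials and policy name are placeholders.

```python
from swiftclient import client

# Placeholder endpoint and credentials for a Swift cluster (v1 auth).
conn = client.Connection(
    authurl="http://swift.example.com:8080/auth/v1.0",
    user="account:user",
    key="secret",
)

# Pin a new container to an operator-defined policy, e.g. one that
# keeps two replicas and stores them only on European-region nodes.
conn.put_container("eu-videos", headers={"X-Storage-Policy": "eu-two-replica"})

# Objects written into the container inherit its storage policy.
conn.put_object("eu-videos", "clip.mp4", contents=b"...video bytes...")

# The policy shows up in the container metadata.
print(conn.head_container("eu-videos").get("x-storage-policy"))
```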

“You can very specifically customize your Swift cluster for your use case – which, in my opinion, is really the whole purpose of cloud,” Dickinson said.

In addition to the immediate benefits, storage policies will also pave the way for an important feature in the 11th version of OpenStack, known by its project code name, Kilo. Dickinson said storage policies are the “critical foundation” allowing the community to build erasure code support in Swift. The community hopes to finish its work on erasure codes by year’s end, and at the latest, by the time of next spring’s Kilo release, according to Dickinson.

Another key storage capability targeted for OpenStack’s Kilo release is encryption of data at rest by Swift, but Dickinson said the feature is still in the design phase at the moment.

Of course, Swift isn’t the only storage option in OpenStack. The OpenStack Block Storage project, known as Cinder, will focus on core internals in the Kilo release, according to John Griffith, the project’s technical lead and a software engineer at SolidFire Inc.

“There’s a good deal of housekeeping that needs to be done, not only general architecture and stability improvements, but also we would like to focus on things like rolling upgrades and project interactions,” Griffith said via an email.

In the meantime, this week’s OpenStack Juno release added new features such as support for volume replication, volume pools, consistency groups and snapshots of consistency groups to OpenStack Cinder block storage.

File storage remains a work in progress for the OpenStack community. The OpenStack Foundation’s press release listed the Manila shared file system among several projects in the incubation phase, “expected to land in late 2015 and beyond.”


October 17, 2014  7:22 AM

Symantec halts Backup Exec appliance, vultures circle

Dave Raffo
Backup Exec, Storage, Symantec, Unitrends, Zetta.net

At least one Symantec backup product will no longer be in the lineup by the time the vendor splits apart its security and backup businesses a little more than a year from now.

While many in the storage world were discussing the new information management company that would come from the Symantec split, Symantec last week disclosed plans to stop selling Backup Exec on an integrated appliance.

As of Jan. 5, Symantec will discontinue the Backup Exec 3600. It will sell Backup Exec the old-fashioned way – it will provide the software and let other vendors provide the hardware.

While integrated appliances for Symantec’s enterprise NetBackup software have been successful – it recently expanded the NetBackup appliance line – that has not been the case with the SMB-focused Backup Exec.

In a blog on the Symantec website announcing the move, senior director of global product marketing Drew Meyer wrote:

”Providing our partners with Backup Exec software that they can bundle with hardware and services best meets the needs of our small and mid-sized business customers looking for a combined offering.”

Meyer cited Fujitsu, which sells an Eternus BE50 appliance with Backup Exec in Japan and Europe. He also wrote the recent release of Backup Exec 2014 shows that Symantec is committed to the software, which ran into problems when the 2012 version came out.

Symantec’s new information management company will offer maintenance renewals for the Backup Exec 3600 through January of 2018 and support will continue until January of 2020.

Competitors are more than happy to relieve Backup Exec customers of their appliances. Zetta.net and Unitrends this week came forward with programs to tempt Backup Exec customers to switch.

Zetta said Backup Exec customers can sign up for Zetta’s cloud backup and DR service free for six months, and it will give up to 20 percent discounts on annual contracts. This is similar to a migration program Zetta ran for BackupExec.cloud customers after Symantec shut down that service earlier this year.

Unitrends said Backup Exec 3600 customers can trade their appliances for one of its integrated appliances for only the cost of support. The Unitrends Recovery-713, Recovery-813 and Recovery-822 are the available models. Backup Exec customers must sign three-year or five-year support contracts for their free appliances.


October 16, 2014  10:34 AM

HP and Scality officially tie the knot

Dave Raffo
Cleversafe, HP, Storage

Object storage vendor Scality has scored a reseller deal with Hewlett-Packard, which the private company’s CEO said will greatly expand its global reach.

Scality and HP have worked together closely in the field, and a lot of Scality’s Ring software runs on HP ProLiant servers.

“We’ve been working with all the server vendors since the beginning,” Scality CEO Jerome Lecat said. “HP has been the most proactive in coming up with a server that fits our industry.”

HP sells Scality software on the ProLiant SL4540 and DL360p Gen 8 servers.

Lecat said Scality has more than 40 PB of customer data deployed on HP servers. Scality-HP customers include DailyMotion, TimeWarner Cable and European television station RTL2, he said.

Lecat said the deal is crucial for Scality because “we’re still a relatively small company, and we do not have thousands of sales people around the globe like HP does.”

The deal is not exclusive. HP sells its own StoreAll product with object storage, and it also works closely with Cleversafe. There is no formal reseller deal with Cleversafe, but it is featured alongside Scality on HP’s web page for object storage software on ProLiant servers.

Lecat said Cleversafe’s dsNet object storage is more suited for long-term archives while Scality Ring is for active applications such as email and video archiving.

“We don’t see ourselves as an object storage company,” Lecat said. “Object storage companies only focus on archiving. Our ambitions are larger than that. We have a lot of media companies running video on demand, consumer web mail and other applications. We’re not just deep and cheap archiving.”


October 10, 2014  4:26 PM

Druva moves from endpoint to server backup

Sonia Lelii
Cloud Backup, Storage

Druva is taking its enterprise endpoint backup software and moving it into backup for small businesses and remote and branch offices.

The company this week launched Druva Phoenix, a centralized management backup and archive product targeting companies that have tight budgets, limited local IT staff or none at all. The software is based on Druva’s inSync enterprise endpoint backup and nCube architecture. Phoenix is agent-based software with global deduplication done at the source.

Druva Phoenix is offered as an alternative to traditional server backup that requires secondary storage, tape and archiving.

“This is a pure play software as a service cloud product,” said Jaspreet Singh, Druva’s CEO and founder. “The core to solving backup to the cloud is building a scalable deduplication in the cloud. In the last five and a half years, we built endpoint backup for the cloud. In the last 18 months, we were looking for what we can solve next. The remote office looked interesting.

“We thought we could remove a few processes by introducing Phoenix,” he said. “We are extending from endpoint to remote offices. It’s a very natural extension for us.”

Phoenix has a software-based cache accelerator for backup and restores, which resides on the server in the remote or branch office. The rest of the data is moved into the Amazon cloud.

“Because there is not much metadata, it can scale fairly well,” Singh said.

Singh said without deduplication, the amount of data stored in the cloud becomes exorbitant. For instance, 1 TB of data can multiply to 719 TB after it is retained for seven years if daily incremental and full backups are kept.
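As a back-of-the-envelope sketch of why retention multiplies capacity (using an assumed schedule and change rate, not Druva’s actual model or the 719 TB figure above):

```python
# Rough model: 1 TB of primary data, weekly fulls plus daily incrementals,
# everything retained for seven years. All numbers are assumptions.
primary_tb = 1.0
years = 7
daily_change_rate = 0.02              # assume 2% of the data changes per day

full_copies_tb = 52 * years * primary_tb                  # one full per week
incrementals_tb = 365 * years * primary_tb * daily_change_rate
total_without_dedup_tb = full_copies_tb + incrementals_tb
print(f"Stored without dedup: {total_without_dedup_tb:.0f} TB")  # ~415 TB

# With global, source-level dedup, blocks repeated across fulls and
# incrementals are stored once, so cloud capacity tracks unique data.
```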

“Our data reduction price point is based on the source data,” Singh said.

Jason Buffington, senior analyst at Enterprise Strategy Group, said ROBO servers are the next “battleground” for cloud-based backup where it makes sense. For the remote office, he said the decision to back up to the cloud depends on whether IT wants to control ROBO backups or just manage the data repositories.

Druva’s endpoint software lends itself to small business and ROBO backup and archiving because the software was designed with administrative oversight capabilities, Buffington said. The software also comes with three-year, seven-year and infinite retention policy options.

“No one would keep endpoint data for an infinite amount of time,” Buffington said. “But it should be a requirement for server-based protection.”


October 10, 2014  2:27 PM

A look at access methods for open systems and mainframes

Randy Kerns
Storage

The term access method is frequently used to identify types of I/O in open systems. Many who use it probably don’t understand the historical context for what has been known as an access method for over 50 years. In open systems, the types of I/O are for block data, file data, and object data. Access methods represent how the types of data are stored on devices.

The term access method comes from the mainframe world and denotes a number of well-known (at least to those who have worked with mainframes) means to store or access information. Access methods are really software routines invoked by application programs through inline calls to system functions. You could call these application programming interfaces (APIs). The closest equivalent in open systems would be a device driver.

There are many types of access methods, and most deal with how data is organized, usually in the form of records, which are typically fixed-length blocks of data in a dataset.

Some of the familiar access methods for storage in the mainframe world include:

  • BSAM – Basic Sequential Access Method
  • QSAM – Queued Sequential Access Method
  • BDAM – Basic Direct Access Method
  • BPAM – Basic Partitioned Access Method
  • ISAM – Indexed Sequential Access Method
  • VSAM – Virtual Storage Access Method
  • OAM – Object Access Method

An example of doing I/O in an application with QSAM would be to set up buffers in memory for queued I/O (multiple records in a block) and then do a GET or PUT. Interestingly, the basic I/O for S3 object access is GET and PUT.
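For comparison, here is a minimal sketch of that object-style GET and PUT against Amazon S3 using the boto3 library; the bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")   # credentials and region come from the environment

# PUT: store an object under a key in a flat namespace.
s3.put_object(Bucket="example-bucket", Key="records/batch-001",
              Body=b"record data")

# GET: retrieve the object by the same key.
resp = s3.get_object(Bucket="example-bucket", Key="records/batch-001")
data = resp["Body"].read()
```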

Open systems access methods are termed:

  • Block – individual blocks of data are read or written from/to storage
  • File – a stream of bytes representing a file, along with associated file metadata, is written or read within a hierarchical tree structure.
  • Object – data segments and user- or system-defined metadata are stored in a flat namespace, with access through object ID resolution.
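For contrast with the object GET/PUT example above, here is a brief sketch of block and file access in Python; the device and file paths are placeholders, and reading a raw device requires appropriate privileges.

```python
import os

# Block access: read 4 KiB at a byte offset on a raw device (placeholder path).
fd = os.open("/dev/sdX", os.O_RDONLY)
block = os.pread(fd, 4096, 4096 * 100)   # 4 KiB starting at block 100
os.close(fd)

# File access: a byte stream addressed by name in a hierarchical tree.
with open("/data/reports/2014/q3.txt", "rb") as f:
    contents = f.read()
```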

The open systems access methods don’t map directly to those in the mainframe world, but you can understand them if you know the mainframe methods. The term access method in open systems isn’t wrong; it just means something slightly different. Translating between the two helps in understanding both.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

