Storage Soup


October 30, 2013  12:52 PM

CommVault CEO tells rivals, ‘Bring it on’

Dave Raffo

CommVault went against the grain and reported better-than-expected financial results last quarter. That makes the backup software vendor “public enemy number one” to its larger competitors, according to CEO Bob Hammer.

CommVault’s revenue of $141.9 million last quarter grew 20% from the previous year and 6% over the previous quarter. The revenue figure and the company’s $17.4 million net income beat Wall Street expectations. That comes after EMC, Symantec and IBM all missed expectations, including slow growth or declines in backup software.

Still, CommVault is not immune from problems plaguing the storage industry, such as slow federal government spending and companies’ cautious approach to closing big deals. Most of all, it faces pricing pressure from the big boys of data protection.

When asked if larger competitors Symantec, EMC and IBM are doing anything different competitively, Hammer said they were coming up with “tricky, crazy pricing initiatives” such as deep discounts and product bundling.

“Those guys are completely irrational in their pricing policies,” Hammer said on CommVault’s earnings call with analysts. “We’ve become public enemy number one. So any tricky, crazy pricing initiative they can possibly think of, they throw at customers and we’re pretty savvy in understanding what those are and can parry them pretty well. But that’s their primary weapon. We’re pretty well attuned to what each of these different vendors are doing there and respond accordingly. So my answer to them is, bring it on.”

CommVault has some tricks of its own to play in the form of new features for its Simpana 10 platform. Hammer said the company will bolster Simpana 10 “in the very near future” with products including enhanced archiving for Microsoft Exchange and SharePoint, self-service try-and-buy products for SMBs, features for virtual machine administrators and more partners for its IntelliSnap array-based snapshot technology.

All of that goes with the Reference Copy archive option CommVault added last week, which allows customers to index and classify data and move it to low-cost storage.

Unlike several storage companies, CommVault did not have to reduce its forecast for this quarter although Hammer admitted there are possible pitfalls ahead. Although CommVault reported its revenue from the U.S. federal government increased 43% from last year, Hammer said “We are particularly cautious about U.S. federal government spending due to uncertainty associated with the recent fiscal impasse.” He also said he expects “softness” in big deals of greater than $500,000. “Many in the industry have reported big deal cancellations and pushouts,” he said.

Enterprise deals – which CommVault defines as $100,000 and up – increased only 3% last quarter.

“We understand we’re in a weak environment and also lumpy,” Hammer said. “So when you start getting into possibly seven-figure deals which makes a difference in our performance, we’re just issuing a concern. The positive is that the opportunities are there and the negative is we’re in an environment where those deals get pushed out, and there could be some future problems.”

October 28, 2013  9:36 AM

Data reduction a key feature in solid-state storage

Randy Kerns

The premise of doing data reduction of stored information is that more data can be put in the available physical space. Storing more data in a fixed amount of space drives down the price of storing data and gives added benefits of reducing the footprint, power consumption, and cooling required.

Performance requirements for data reduction vary depending on the type of data. If the data needs to be accessed frequently or in a time critical manner, the process of data reduction and expansion on access must have no measurable impact on performance. The performance demand is relaxed as the data becomes less important or more infrequently accessed.

Performance impact is crucial when using data reduction with solid-state technology. Solid-state storage, implemented in NAND flash today, is used in performance demanding environments. Response time is the most critical element in accelerating performance.

Data reduction is accomplished through deduplication and compression. Deduplication is most effective where there is repetitive data, such as with successive backups. The effectiveness diminishes as the data becomes less repetitive. Compression uses an algorithmic process to shrink the representation of data as it is parsed. Compression effectiveness varies based on the type or compressibility of the data, but it is relatively consistent for a given type and has predictable averages.
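To make the distinction concrete, here is a toy sketch (not any vendor’s implementation) that deduplicates fixed-size chunks by content hash and then compresses the unique chunks. Highly repetitive data, such as successive backups, collapses in the dedupe step, while compression gains depend on the data type:

```python
import hashlib
import zlib

CHUNK_SIZE = 4096  # fixed-size chunking; real systems often use variable-size chunks

def reduce_data(data: bytes):
    """Toy reduction: dedupe fixed-size chunks by content hash, then compress the unique chunks."""
    store = {}    # content hash -> compressed unique chunk
    recipe = []   # ordered list of hashes needed to rebuild the original data
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:                 # duplicate chunks are stored only once
            store[digest] = zlib.compress(chunk)
        recipe.append(digest)
    return store, recipe

def restore_data(store, recipe) -> bytes:
    """Expand on access: look up each chunk by hash and decompress it."""
    return b"".join(zlib.decompress(store[h]) for h in recipe)

# Highly repetitive input, like successive backups, reduces dramatically.
original = b"unchanged backup block " * 10000
store, recipe = reduce_data(original)
stored = sum(len(c) for c in store.values())
assert restore_data(store, recipe) == original
print(f"original: {len(original)} bytes, stored after reduction: {stored} bytes")
```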

There are arguments for using either dedupe or compression, but many of the arguments are parochial. For primary data, compression in a storage system has proven effective for a long time, going back to the StorageTek Iceberg/IBM RVA virtual disk products from the 1990s.

There are several ways to reduce data on NAND flash. One method is predicated on the use of standard solid-state devices (SSDs) packaged to replace hard disk drives (HDDs), with attachment and data transfer using disk drive protocols. These standard devices have an internal flash controller and flash memory chips along with the protocol interfaces to mimic a disk drive. With these drives, data reduction is added external to the SSD, in what we would call the storage controller. The implementation in the storage controller is done using the internal processor or with custom hardware. In this case, data reduction uses controller resources and may have a noticeable performance impact.

There is less likely to be a performance impact if the reduction is done inline – while the data is being written. Other implementations store the data first and do the reduction later (called post-storage or post-processing data reduction). Post-storage reduction consumes resources that may or may not affect performance, and response time may be delayed while the data is expanded before access.
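A minimal sketch of that distinction, with zlib standing in for whatever reduction a real controller performs: the inline store pays the reduction cost on the write path, while the post-process store acknowledges the write immediately and reduces in the background.

```python
import queue
import threading
import zlib

class InlineStore:
    """Reduce while the data is being written: the write does not complete until it is compressed."""
    def __init__(self):
        self.blocks = {}

    def write(self, addr, data: bytes):
        self.blocks[addr] = zlib.compress(data)   # reduction cost is paid on the write path

    def read(self, addr) -> bytes:
        return zlib.decompress(self.blocks[addr])

class PostProcessStore:
    """Store the raw data first; a background worker reduces it later, off the write path."""
    def __init__(self):
        self.blocks = {}                          # addr -> (is_compressed, data)
        self.pending = queue.Queue()
        threading.Thread(target=self._reducer, daemon=True).start()

    def write(self, addr, data: bytes):
        self.blocks[addr] = (False, data)         # write is acknowledged immediately, unreduced
        self.pending.put(addr)

    def _reducer(self):
        while True:                               # consumes controller resources after the fact
            addr = self.pending.get()
            compressed, data = self.blocks[addr]
            if not compressed:
                self.blocks[addr] = (True, zlib.compress(data))

    def read(self, addr) -> bytes:
        compressed, data = self.blocks[addr]
        return zlib.decompress(data) if compressed else data
```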

Other designs using flash storage have custom flash controllers with flash memory. These are unique designs for the different storage system implementations. Often, shadow RAM is used in these designs to optimize page updating. A processor element is included to control the algorithms for flash usage. Data reduction in the flash controller is transparent to the storage controller that manages the access to the storage. The flash controller is expected to do the data reduction without impacting performance.

Over time, data reduction will become an important competitive feature for solid-state storage, and designs and capabilities will continue to advance. This does not mean that compressing data elsewhere will not be useful. There is value in compressing data on HDDs and for transferring data, especially to remote sites. The important thing to understand is that reducing data stored on solid-state technology is an evolutionary development with compelling value and will result in competitive vendor implementations.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 25, 2013  5:01 PM

Quantum drops revenue, gains ‘viability’

Dave Raffo

Although Quantum’s revenue declined over last year, CEO Jon Gacek said the backup vendor is in much better shape than it was 12 months ago.

Quantum this week reported revenue of $131.4 million, which was below its guidance and down 11% from the same quarter last year.

Gacek said the plus side is the company cut its loss from $8 million last year to $5 million this year, increased gross margin from 42% to 42.9%, reduced operating expenses by 11% and increased its cash from $33 million to $77 million.

“Last year there was so much anxiety about our balance sheet,” Gacek said. “Our cash has more than doubled and we paid off all our current debt. Last year I had to spend a lot of time defending our viability. Now I’m getting pressure on revenue growth, but last year it was ‘Hey, you lost money again.’”

Gacek blamed the poor revenue last quarter on low federal government spending ahead of this month’s shutdown, and the poor European economy. He said the government problems particularly hurt sales of DXi deduplication backup appliances, which declined 30% over last year. Tape automation revenue declined 15%.

Gacek said he is optimistic about the prospects for recently launched StorNext 5 and new Lattus object storage systems, and is hoping the government spending constraints will lift. “We believe deals that got hung up in the lead up to the federal government shutdown may materialize,” he said. “We know there are deals in the pipeline, it’s a matter of whether they’re going to pop.”


October 25, 2013  3:22 PM

NetApp, HGST play roles in Verizon Cloud

Dave Raffo

Since the beta program for Verizon Cloud Compute and Cloud Storage began earlier this month, Verizon Terremark has been pulling back the curtain on the storage used for those services.

Over the last two weeks, Verizon Terremark launched a partnership with storage vendor NetApp and revealed it is using flash storage from HGST as a building block for its cloud.

Verizon said it would use NetApp Data Ontap virtual storage appliances (VSAs) in the Verizon clouds that let NetApp array customers access Data Ontap data protection and file management features.

This is a different arrangement than NetApp has with Amazon, which allows customers to set up NetApp FAS and V-Series storage on the Amazon cloud and access them via Amazon Web Services (AWS).

“Here, it’s all software. There is no NetApp array involved,” Tom Shields, NetApp director of service provider marketing, said of the Verizon partnership.

Shields said the VSA used in the Verizon cloud is similar to the Data Ontap Edge virtual appliance NetApp sells for remote offices. “That’s the starting point,” he said of the Edge technology.

Verizon Terremark CTO John Considine said Verizon customers can set up the VSA filer through a template, selecting the capacity and services needed.

This week, Verizon announced it is using HGST’s s800 SAS solid-state drives (SSDs) as primary storage for the Cloud Compute service and as cache for Cloud Storage. HGST acquired the s800 SSDs in the recently closed sTec deal.

The HGST SSDs play a role in Verizon Cloud’s service options, as users can select service levels based on performance. “We allow the customer to adjust the performance level,” Considine said. “If it’s non-critical data and they just want to have the data out there and not do much with it, they can dial the performance level down and only pay for what they’re using.”

Verizon Cloud Storage uses the same SSDs for caching, with most of the data going on spinning disk.

“We’ll use SSDs to boost performance as we encode data and spread it across spinning disk,” Considine said of Cloud Storage.

Verizon Terremark was already discussing the deal with sTec before the HGST acquisition. Verizon does not use HGST hard disk drives, although Considine said “it is not out of the realm of consideration.”

You can expect to hear about more Verizon storage partners. Verizon has pledged to support cloud gateways and is also using object storage from a vendor it has yet to identify.


October 23, 2013  4:08 PM

Seagate puts Kinetic object storage plan in motion

Dave Raffo

If you need more proof that object storage is picking up steam, there is the Kinetic Open Storage Platform unveiled by Seagate this week.

If it flies, the architecture can provide a huge boost to object storage and cloud storage adoption.

There are two parts to the Kinetic architecture. One is high-capacity hard drives with two Gigabit Ethernet ports in place of the usual SAS/SATA disk interfaces. The other piece is an open-source API supported by OpenStack Object Storage.

The API is a software library that uses a key-value interface, where the key is the metadata and the value is the data itself. The API will communicate directly with the hard drive for object commands such as PUT and GET.

Seagate’s goal is to eliminate the storage server tier and enable applications to speak directly to the storage device.
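Seagate provides its own library and simulator; the snippet below is not that API, just a hypothetical sketch of the key-value model being described, in which an application PUTs and GETs objects against the drive with no file system in between.

```python
class KineticStyleDrive:
    """Hypothetical stand-in for a key-value drive: a flat namespace of key -> value,
    with the drive, not a file system, responsible for space management."""

    def __init__(self):
        self._objects = {}                     # flat address space; no directories or inodes

    def put(self, key: bytes, value: bytes) -> None:
        self._objects[key] = value             # the key carries the metadata, the value is the data

    def get(self, key: bytes) -> bytes:
        return self._objects[key]

    def delete(self, key: bytes) -> None:
        del self._objects[key]

# The application talks to the "drive" directly; there is no file system layer in between.
drive = KineticStyleDrive()
drive.put(b"bucket1/photo.jpg/v1", b"...object bytes...")
print(drive.get(b"bucket1/photo.jpg/v1"))
```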

“The file system is gone. The drive does the space management,” said Ali Fenn, senior director of Seagate’s advanced storage platform. “Applications are dealing with objects, and we should let them deal with objects right down to the drive.”

Seagate is trying to put together the wide industry support that will be necessary for this to catch on. Its release included supporting comments from Rackspace, Dell, Yahoo, Supermicro, SwiftStack, Xyratex, Newisys, EVault, Basho, Huawei, and Hive Solutions. That’s a start, but it will take a lot more support to make it work.

Seagate has made the software library and a simulator available to companies who want to write apps for Kinetic storage, and Fenn said 3.5-inch nearline drives supporting the architecture will be available in mid-2014.

If it takes hold, the Kinetic platform can wipe out a good piece of the existing storage stack. Then it will get interesting to see what else goes – traditional storage arrays maybe?

In a report for the SSG-Now newsletter, analyst James Bagley wrote: “The Seagate announcement does not doom the manufacturers of conventional storage arrays. However, it does mean that object storage users, particularly in cloud deployments, will have a much more efficient means of getting data into and out of storage devices.”


October 23, 2013  2:07 PM

Product of the Year competition highlights the top storage products again

Rich Castagna

We’re just a week or so away from the deadline for filing an entry in the Storage magazine/SearchStorage.com Products of the Year competition. If you’re a storage vendor and you rolled out something new and cool—or upgraded an existing product to make it cooler than ever—you should click here now and fill out the entry form. If you work in—or manage—a storage shop and were impressed with a new or improved product, tell your vendor to get on the stick and enter it.

This is the 12th year of the Storage Products of the Year project, and since its inception in 2002 thousands of product entries have been submitted and 167 of those products have won either gold, silver or bronze awards. Fourteen of those winners were honored last year.

Our Products of the Year winners have always been a gratifying mix of storage stalwarts and upstarts. That combination has shown that technical innovation can come from companies big and small, and from companies that have already earned a following and those that are seeking to gain some recognition. Being named a Storage Product of the Year winner has helped a number of companies increase their visibility—but more importantly, it has helped storage managers make better informed purchasing decisions.

We’re proud that we’ve been able to recognize some vendors and their products long before they became household names—such as CommVault and pre-EMC Data Domain. Some vendors showcase products that are so ground-breaking that they provoke “gotta have it” longings from not only users, but from competing vendors as well. As a result, 41 of our past winners have been acquired by larger vendors who just couldn’t resist their compelling technologies.

We’ve also seen how vendors can keep improving on good technology, as many of our past winners have returned to grab additional awards for new products or for enhancing existing wares. Some notable multi-year winners include:

  • CommVault – 6 awards
  • Data Domain (before and after EMC acquisition) – 6
  • NetApp – 6
  • Quantum – 5
  • Symantec (including Veritas) – 5
  • FalconStor – 4
  • QLogic – 4
  • Riverbed – 4

This year’s finalists will be announced on SearchStorage.com in January. Winners of the 2013 Storage Product of the Year awards will be announced in the February 2014 issue of Storage magazine and on SearchStorage.com.


October 22, 2013  9:59 AM

Government shutdown slammed EMC storage sales

Dave Raffo

The U.S. federal government shutdown and cautious IT spending caused EMC to miss its revenue goals last quarter and lower expectations for the year.

The storage market leader reported revenue of $5.5 billion last quarter – up 5% over last year but $250 million below expectations – and its new guidance for the year of $23.25 billion is below analysts’ expectations of $23.44 billion. EMC executives blamed the shortfall on low U.S. federal government spending and customers outside of government waiting until the end of the quarter to place orders.

On the company’s earnings call today, EMC CEO Joe Tucci said he had “mixed feelings and emotions” about last quarter. “I am disappointed that we missed expectations,” he said. “On the other hand, I feel extremely good about our strategic positioning, products and service offerings. … We feel good that we continue to grow faster than most of our IT peers.”

Tucci said EMC’s federal storage business revenue declined more than 40% over last year — a huge drop because it was the government’s fiscal year-ending quarter, meaning it usually spends more on IT than in any other quarter.  Tucci said while that revenue did not go to another storage vendor and the lost deals may not be dead, government budget uncertainties will prevent EMC from making up the shortfall this quarter.

Outside of the government, Tucci said “customer caution and scrutiny of purchases continued,” forcing a backlog of orders that came in on the final day of the quarter. He said EMC received almost $300 million in orders on the final day – three times what it was expecting – and about $100 million of those orders were pushed to this quarter.

Despite that $100 million already on the books, EMC lowered its expectations for this quarter. EMC president David Goulden said he is still expecting a budget flush in the last quarter of the year but not as strong as in most years.

Like Tucci, however, Goulden said EMC is increasing its market share over competitors. “We are doing well relative to the market in all the segments we play in,” he said.

EMC executives said sales of high-end VMAX arrays took the biggest hit from federal government spending slowdown.

Other tidbits from the call:

  • 70% of VMAX systems shipped had some flash storage in them.
  • The next-generation Atmos object storage system due to ship next year will be part of EMC’s Project Nile.
  • Around half of the VNX unified storage systems shipped last quarter were the new models launched in August.
  • The all-flash XtremIO array is scheduled to ship this quarter.
  • EMC will add Hadoop Distributed File System (HDFS) support to ViPR next year.


October 18, 2013  4:57 PM

Syncsort divorces itself

Dave Raffo

Syncsort is splitting up. The company revealed this week that its data protection business has spun off to form a company separate from the data integration business.

The data integration side will keep the Syncsort name. The data protection business will be known as Syncsort Data Protection for now, but will eventually pick a new name for its company. Flavio Santoni, who was Syncsort CEO, will run the data protection business and Lonne Jaffe becomes Syncsort CEO.

Santoni said while the data integration business is bigger than the backup business, he went with the backup team because his background is in storage. Santoni was general manager of LSI’s Engenio storage unit (now owned by NetApp) before becoming Syncsort CEO.

Santoni said the split was planned for more than a year and made sense because the product lines required separate sales, marketing and management teams. He said Syncsort had been run as two separate businesses since the start of this year, and the sales, marketing and engineering teams were independent for four years.

“We’ve been on this path for a while,” he said. “We realized in 2012 that we had a clear path on both businesses. It was time to create two pure-play companies, each with its own management team and investment group.”

The data protection business will be built around the ECX file and snapshot catalog application rolled out last week. The first ECX version works with NetApp and VMware products, but Santoni said the platform will expand.

“The technology is flexible and extensible, and we can extend it to other vendors,” he said. “Catalog is the first service module. Over time, we’ll add backup as another service module and then other service modules.”

One thing that won’t change is the employees’ commute. The companies will continue to share Syncsort’s Woodcliff Lake, N.J. office building, each taking one of its two floors.


October 18, 2013  1:43 PM

Nimble follows Violin’s lead, files for IPO

Dave Raffo

Nimble Storage is trying to become the next public storage company after filing its S-1 form for an initial public offering today.

Nimble is right on time with its IPO plans. When the startup raised $40.7 million in funding in September of 2012, CEO Suresh Vasudevan said the goal was to go public between the third quarter of 2013 and the second quarter of 2014.

Nimble, which sells hybrid flash arrays that handle primary storage and data protection, reported $53.8 million in revenue for its fiscal year that ended Jan. 31, 2013, and has already generated $50.6 million in the six months of this year that ended July 31. That is more than double the $19.1 million in revenue from the first six months of 2012.

However, Nimble is also losing money. It lost $27.9 million last year and $19.9 million in the first six months of this year. The vendor, which began selling its CS arrays in August 2010, has accumulated $77.8 million in losses. It has raised $98.7 million in funding, with the last round coming in September 2012.

Nimble wants to raise $150 million with its IPO, according to the filing. All-flash array vendor Violin Memory raised $162 million when it went public last month after reporting $73.8 million in revenue for the year ending Jan. 31 and $51.3 million in the six months ending July 31. Violin’s losses were $109.1 million for last year and $59.2 million over the six months ending July 31.

Violin’s IPO hasn’t been great for investors so far, though. The company sold its original shares at $9, but the price dropped about 21% to $7.11 on the first day and stood at $7.26 at mid-afternoon today.


October 17, 2013  3:09 PM

Will object replace file?

Randy Kerns

I keep hearing this question, and I keep responding “not anytime soon,” rather than with a flat “no.” I can understand why some may consider replacing file storage with object storage given developments that are occurring, but file storage still has plenty of life.

Object storage has several substantial benefits over file storage when it comes to solving a couple of problems:

• Scale – The continuing growth in unstructured data means large capacity and much greater numbers of “objects,” usually files that must be stored and managed. Traditional file systems usually cannot handle the billions of objects in their hierarchical file structure effectively. Object storage uses a flat address space with access through an object ID, which may be maintained external to the storage system or as an integral element in a distributed environment.

• Durability – Information must be available, potentially from different geographies, and needs a protection process that does not impact operational environments. Also, information has a long life that usually outlasts the devices it is stored on. Object storage systems, for the most part, have addressed these issues by geographically dispersing data so that only a specific number of object elements must be available to access the information (a toy sketch of this idea follows the list). The selectable protection also includes immutability of the objects with versioning, so an object that is incorrectly altered is still recoverable from a prior version. Organizations can manage technology transitions with object storage by introducing a new node/location for storage of an object element, replacing one that is to be retired, and automatically redistributing object elements.
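To illustrate the “only a specific number of object elements must be available” idea, here is a toy single-parity dispersal sketch: an object is split into data fragments plus one XOR parity fragment, one per site, so any single fragment can be lost and the object still rebuilt. Real object stores use stronger erasure codes that tolerate multiple failures, but the principle is the same.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def disperse(obj: bytes, data_fragments: int = 3):
    """Split an object into N equal data fragments plus one XOR parity fragment."""
    size = -(-len(obj) // data_fragments)                 # ceiling division
    frags = [obj[i*size:(i+1)*size].ljust(size, b"\0") for i in range(data_fragments)]
    parity = reduce(xor_bytes, frags)
    return frags + [parity], len(obj)                      # one fragment per site

def reassemble(frags, original_len: int) -> bytes:
    """Rebuild the object even if any single fragment (data or parity) is missing."""
    missing = [i for i, f in enumerate(frags) if f is None]
    assert len(missing) <= 1, "single parity tolerates only one lost fragment"
    if missing and missing[0] < len(frags) - 1:            # a data fragment was lost
        present = [f for f in frags[:-1] if f is not None] + [frags[-1]]
        frags[missing[0]] = reduce(xor_bytes, present)     # recover it from the others
    return b"".join(frags[:-1])[:original_len]

frags, n = disperse(b"archive object spread across geographies")
frags[1] = None                                            # simulate losing one site
assert reassemble(frags, n) == b"archive object spread across geographies"
```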

Here are ways that object storage is being used:

• Web-based storage is typically implemented with object storage using a simple GET/PUT access method. The most commonly used is some form of the Amazon S3 protocol, now simply referred to as S3 or S3-like (see the sketch after this list).

• On-premises object storage systems, often called private clouds, are being promoted by vendors to give companies the web-based capabilities of self-service and massive scalability when they have reasons not to use public cloud-based systems. These are usually object storage systems with similar access protocols, such as S3.

• Some applications are being modified to write to object storage directly and new applications are being developed with cloud-based access (public or private) as the fundamental design and use of the extended metadata. The most commonly modified applications are backup and archiving software. These applications represent some of the first uses of cloud-based storage.
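As a concrete example of that GET/PUT model, here is a minimal sketch against an S3-compatible endpoint using the boto3 SDK; the endpoint URL, bucket and key are placeholders, and credential configuration is assumed.

```python
import boto3  # assumed SDK; the original boto library offered equivalent calls

# Placeholder endpoint and bucket; any S3-compatible object store works the same way.
s3 = boto3.client("s3", endpoint_url="https://objects.example.com")

# PUT: store a value under a key in a flat bucket namespace.
s3.put_object(Bucket="example-bucket",
              Key="backups/2013-10-30/db.dump",
              Body=b"...backup bytes...")

# GET: retrieve the object by the same key.
obj = s3.get_object(Bucket="example-bucket", Key="backups/2013-10-30/db.dump")
data = obj["Body"].read()
print(len(data), "bytes retrieved")
```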

Gateway devices that bridge file access to object storage in clouds are available. These systems allow use of cloud storage from existing applications without modification while adding features such as metadata tagging and file caching.

So why do I say object will not replace file storage anytime soon? Because changes in applications occur slowly, if at all. There will be resistance to making a transition from files to objects. Not only is there a predominance of file-based applications, there is also widespread familiarity with using files and file structures. The current use of files has been effective for most environments and change will be embraced only when it is necessary.

Object storage is most likely to solve problems in backup, archive and large content repositories, and those problems are already being solved by current technology. But while object will not replace file usage today, it does deal with the problems of scale and durability, and opens new opportunities for storage in the future.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

