Storage Soup


November 8, 2013  4:49 PM

Fusion-io founder Flynn rounds up funding for Primary Data

Dave Raffo

David Flynn, who built Fusion-io into the early market leader in PCI flash storage and left abruptly in May, this week pulled in $50 million in funding for his new software startup Primary Data.

Flynn and Fusion-io co-founder Rick White haven’t given many specifics on their new company. The funding press release said they plan to bring a product to market in 2014 that will solve problems caused by a “new era of storage that spans from flash to cloud.”

I spoke with Flynn recently and he didn’t get much deeper into product specifics than that, but said “I want to be a vendor to help solve problems coming our way. We want to deal with the hard problems of managing distributed data, performance and capacity.”

He added that these management problems are driving trends such as putting data in the cloud and on employees’ own devices through bring-your-own-device (BYOD) policies. “I think we’re at a tipping point,” Flynn said. “Hardware is either being sent off to the cloud or to a person’s own devices. Running a business, you don’t put in your own server infrastructure now, you send it off to somebody in the cloud.”

He said this is changing the way people buy storage, and who they buy it from. Flynn said he expects a lot more organizations to take the lead from the way companies such as Amazon and Facebook are building their architectures.

“The cloud is forcing a distributed model for storage,” Flynn said. “It’s not centralized any more, it’s distributed in an infrastructure. It’s commodity off-the-shelf industry standard platform storage, not proprietary systems from EMC. You think Amazon uses NetApp or EMC? No way, that’s crazy. And anybody who wants to compete with Amazon is not going to use NetApp or EMC. It’s not cost competitive, and you can’t scale it big enough.

“Facebook is having servers built to their specs by Chinese ODMs, and not the major system vendors anymore.”

As for BYOD, Flynn said IT may still consider it a nuisance but will grow to love it. “Didn’t PCs come about in the same way?” he said. “Administrators held on to their mainframes and they hated PCs. I think the same is true for BYOD. It’s inevitable because of productivity improvements. People are doing most of their work on their own devices — tablets, smartphones, and laptops.”

Flynn said flash is in a similar position to the cloud – it’s here to stay but brings a new set of management issues.

“I think there are still significant challenges with how to get the manageability of traditional storage with the performance of a distributed flash architecture and the capacity of distributed cloud object storage,” Flynn said. “There are still significant challenges with how to deploy and use flash and we are still in the early innings with that. But that problem is not just about flash. It’s about distributed storage architectures. That’s a big problem. Doing something centralized makes it easy. Managing a mainframe was easy compared to managing PCs on everybody’s desktop.”

How will Primary Data try to solve these problems? That’s one of the secrets that will be revealed in 2014.

November 7, 2013  10:27 AM

Newly public Barracuda has big plans for integrated appliance/cloud backup

Dave Raffo

Twitter isn’t the only technology IPO this week. Security and backup vendor Barracuda Networks began trading on the New York Stock Exchange Wednesday with promising initial results.

While Barracuda is still more security than storage, CEO BJ Jenkins said backup makes up about one-third of Barracuda’s new business, and is increasing year-over-year at a faster rate than the 26% overall market growth.

He attributes that to Barracuda’s end-to-end data protection approach. While it still sells its backup software standalone, most deals are for integrated appliances along with cloud subscriptions for backup and disaster recovery. Barracuda maintains its own multi-petabyte cloud, and Jenkins said most of its backup appliance customers also use it.

“If you back up into the cloud and have an issue locally, you can spin up a virtual server in our cloud and run your business off a deduped backup copy,” Jenkins said. “This end-to-end offering has made a big difference. Customers used to buy Symantec and some kind of disk and tape, and rotate tapes and do replication for DR.”

Barracuda sells mostly to SMBs and midrange companies, competing primarily with Symantec Backup Exec.

Jenkins, who ran EMC’s backup division before becoming Barracuda CEO in November 2012, said one reason Barracuda went public is to gain more credibility with customers who want to know their security and data protection vendors are stable companies. Unlike flash vendor Violin Memory’s shares on their first day as a public company, Barracuda’s price rose in the hours after its IPO. Barracuda began trading at $18 – the low end of its projected range – but its shares closed at $21.55 Wednesday.

“I feel good about the first day of trading,” Jenkins said. “We were fortunate to get out before Twitter. They’ve taken a lot of oxygen out of the air.”


November 6, 2013  12:54 PM

Coho Data looks to expand fledgling storage platform with funding haul

Dave Raffo

Coho Data this week pulled in $25 million in funding to expand and market the scale-out storage platform it launched into beta last month.

CTO Andy Warfield said the new funding will be used to add features to the Coho DataStream series in 2014. The DataStream is a hybrid storage system that combines a software-controlled switch with PCI flash and hard drives, which Coho sees as a building block for companies that want Amazon-style storage.

The original product is file-based. Warfield said the vendor has seen little demand for Fibre Channel or iSCSI but there have been requests for Fibre Channel over Ethernet (FCoE), so FCoE support will likely follow next year. SMB protocol support will also probably be added to complement the NFS-only original version, and deduplication and replication are on the roadmap.

Warfield said Coho Data will also announce a product upgrade path next year. “You can expect a more continuous and dynamic approach to upgrades than has historically been the case for storage products.”

As Coho Data prepares to make its product generally available, Warfield said the target audience has shifted from the original plan of marketing to small-to-medium businesses (SMBs).

“We found SMBs had no need for the performance we get from our box,” he said. “But we found larger storage environments in the three-petabyte to 10-petabyte range often had performance pain with their existing enterprise storage.”

Coho DataStream micro arrays ship with 3.2 TB of raw flash (Intel SSD 910 PCI cards) and 36 TB of spinning disk. Warfield said the startup uses a pricing model similar to the Amazon AWS provisioned IOPS model. He said 40,000 IOPS with a three-year support contract costs around $2.50 per GB.
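As a back-of-the-envelope check – and assuming the quoted $2.50/GB applies to a node’s combined raw capacity, which Coho did not spell out – the per-node arithmetic works out to roughly $98,000:

```python
# Rough cost sketch for one DataStream node. Assumption: the quoted
# $2.50/GB covers combined raw capacity (flash plus disk); Coho did
# not break the figure down this way.
flash_gb = 3.2 * 1000   # 3.2 TB raw PCIe flash (Intel SSD 910)
disk_gb = 36 * 1000     # 36 TB spinning disk
price_per_gb = 2.50     # quoted, with a three-year support contract

total_gb = flash_gb + disk_gb
print(f"~{total_gb:,.0f} GB raw -> ~${total_gb * price_per_gb:,.0f} "
      f"per 40,000-IOPS node")
# ~39,200 GB raw -> ~$98,000 per 40,000-IOPS node
```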

The B funding round was led by new investor Ignition Partners with previous investor Andreessen Horowitz participating, and brings Coho Data’s total funding to $35 million.


November 5, 2013  11:48 AM

Load DynamiX will put your stress to the test

Dave Raffo

If you’ve never heard of Load DynamiX, that’s probably because until today the start-up was known as SwiftTest. And if you never heard of SwiftTest, that’s probably because until today it only sold its storage validation software directly to storage vendors.

Along with the name change, Load DynamiX today launched a series of infrastructure and application performance validation appliances for IT organizations. The appliances generate massive loads to stress enterprise storage systems, simulate production workloads and validate new devices before putting them into production.

The appliance models and list prices:

– 10G Base Series: two 10 Gigabit Ethernet (10 GigE) ports with iSCSI and NAS protocol emulation; $130,000.
– 10G Advanced Series: adds NFS 4, SMB 3, HTTP/S, CDMI and OpenStack Swift protocol emulation on top of the Base Series; $225,000.
– FC Series: two 8 Gbps Fibre Channel (FC) ports with FC and iSCSI emulation; $95,000.
– Unified Series: two 10 GigE and two FC ports with iSCSI, NFS, SMB 2, FC and SCSI emulation; $180,000.

All of the appliances include Workload Insight Manager, the software the vendor has made available to storage vendors since 2009.

Load DynamiX VP of marketing Len Rosenthal said EMC, Dell, NetApp and Hitachi Data Systems (HDS) use Workload Insight Manager to test their storage arrays.

Rosenthal said each 2U Load DynamiX appliance has the load-generation capability of 20 servers and emulates the I/O profile of applications. He said the appliances are an alternative to using Iometer with a bank of servers. Unlike Iometer, Load DynamiX simulates metadata.
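To make the comparison concrete, here is a minimal, hypothetical sketch of load generation at its crudest: a single host issuing random reads against a file and reporting IOPS and latency. It emulates no metadata or protocol behavior – that is exactly the gap the appliances are meant to fill at scale:

```python
import os
import random
import statistics
import time

def run_read_load(path, io_size=4096, duration_s=5.0):
    """Issue random reads against a file for duration_s seconds,
    then report achieved IOPS and median latency."""
    size = os.path.getsize(path)
    latencies = []
    deadline = time.perf_counter() + duration_s
    # buffering=0 removes Python-level buffering; real load tools also
    # bypass the OS page cache (e.g., with O_DIRECT).
    with open(path, "rb", buffering=0) as f:
        while time.perf_counter() < deadline:
            offset = random.randrange(0, max(size - io_size, 1))
            start = time.perf_counter()
            f.seek(offset)
            f.read(io_size)
            latencies.append(time.perf_counter() - start)
    print(f"{len(latencies) / duration_s:,.0f} IOPS, median latency "
          f"{statistics.median(latencies) * 1e6:.0f} microseconds")

# run_read_load("/mnt/array/testfile")  # hypothetical path to a large test file
```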

“We’re about understanding changing workloads,” Rosenthal said. “We get people to simulate workloads before going live.”

Rosenthal said GoDaddy.com used Load DynamiX to validate a hybrid solid-state drive (SSD) storage array and significantly reduced its cost before putting it into production, and that the Healthcare.gov site fiasco was caused at least in part by a lack of load testing before going live.

If you’ve never heard of the Healthcare.gov fiasco, that’s probably because you’ve been spending too much time trying to get your SAN or NAS up to speed.


November 1, 2013  4:23 PM

Overland and Tandberg lean on each other for survival

Dave Raffo

Struggling storage vendors Overland Storage and Tandberg Data today confirmed their plans to combine and try to turn two money-losing businesses into a winner. The companies said they have reached an agreement for Overland to acquire Tandberg in an all-stock transaction.

No purchase price was given, but Tandberg will become a wholly owned subsidiary of Overland. Overland CEO Eric Kelly and CFO Kurt Kalbfleisch will remain in their current roles and COO Randy Gast, Overland’s senior VP of worldwide operations and services, becomes COO of the new company. Cyrus Capital, which bought Tandberg out of bankruptcy in 2009, will get two of seven board positions.

On a conference call to discuss the deal, Kelly said the merged companies had more than $100 million in revenue last year – with around $60 million coming from Tandberg – and combining them provides “a clear path to profitability.”

Both companies have struggled on their own. Along with Tandberg’s bankruptcy, Overland has been losing money for years, and its fortunes took a steep downturn after it lost a tape OEM deal with Hewlett-Packard in 2005 that accounted for most of its revenue. Overland has been trying to rebound as a storage systems company since then, although it still sells tape drives and libraries to go with SAN and NAS systems and disk backup.

Tandberg also sells tape libraries and drives, RDX removable disk, disk backup and low-end NAS. Kelly pointed out that Tandberg’s tape and NAS products are sold into a lower end of the market than Overland’s, so there are few or no competing products.

“The product lines are complementary with minimal overlap,” he said.

Overland executives disclosed in May that they were discussing a merger with Tandberg. Kelly said he hopes the shareholder vote needed to close the deal will come by the end of the year.


November 1, 2013  9:39 AM

Backup appliance vendor Unitrends finds a buyer

Dave Raffo

Integrated backup appliance vendor Unitrends has new ownership, but its management remains the same and vows to move deeper into cloud-based data protection.

Insight Venture Partners completed a majority investment in Unitrends this week, giving it control of the data protection startup. Insight general partner Mike Triplett said Unitrends’ management team is one of the things he likes about the company. He said Insight will keep Unitrends CEO Mike Coney and his management team, while Triplett and Richard Wells of Insight join the Unitrends board.

“There are three things we like about Unitrends,” Triplett said. “We like that it’s in a large and growing market segment, we like the management team, and the product is head and shoulders above the competition.”

Triplett likes the market so much that he also sits on the board of virtual machine backup software specialist Veeam Software and Acronis – two other Insight investments. He said he’s not concerned about being involved with companies that compete with each other because there is plenty of backup to go around.

“The market is big enough that everyone can prosper and do well,” he said.

Coney became Unitrends CEO in 2009 after working for Acronis and Veritas (now part of Symantec). He said Unitrends has about 260 employees and he expects it to grow substantially with the Insight investment. Although it does not disclose revenue and income figures, Unitrends claims it has grown revenue in 19 straight quarters and its year-over-year bookings increased 72% last quarter.

Coney said the vendor will continue to build on its integrated appliance platform, but “the biggest roadmap area for us is the cloud and DR as a service.” He said those plans include selling to managed service providers, offering its appliance customers options for replicating to the cloud for DR, and connecting to public clouds such as Amazon and Microsoft Azure.

He said Unitrends will maintain its focus on the mid-market – companies with 50 to 1,000 customers.


October 30, 2013  12:52 PM

CommVault CEO tells rivals, ‘Bring it on’

Dave Raffo

CommVault went against the grain and reported better-than-expected financial results last quarter. That makes the backup software vendor “public enemy number one” to its larger competitors, according to CEO Bob Hammer.

CommVault’s revenue of $141.9 million last quarter grew 20% from the previous year and 6% over the previous quarter. The revenue figure and the company’s $17.4 million net income beat Wall Street expectations. That comes after EMC, Symantec and IBM all missed expectations, with slow growth or declines in backup software.

Still, CommVault is not immune from problems plaguing the storage industry, such as slow federal government spending and companies’ cautious approach to closing big deals. Most of all, it faces pricing pressure from the big boys of data protection.

When asked if larger competitors Symantec, EMC and IBM are doing anything different competitively, Hammer said they were coming up with “tricky, crazy pricing initiatives” such as deep discounts and product bundling.

“Those guys are completely irrational in their pricing policies,” Hammer said on CommVault’s earnings call with analysts. “We’ve become public enemy number one. So any tricky, crazy pricing initiative they can possibly think of, they throw at customers and we’re pretty savvy in understanding what those are and can parry them pretty well. But that’s their primary weapon. We’re pretty well attuned to what each of these different vendors are doing there and respond accordingly. So my answer to them is, bring it on.”

CommVault has some tricks of its own to play in the form of new features for its Simpana 10 platform. Hammer said CommVault will bolster Simpana 10 “in the very near future” with products including enhanced archiving for Microsoft Exchange and SharePoint, self-service try-and-buy products for SMBs, features for virtual machine administrators and more partners for its IntelliSnap array-based snapshotting.

All of that goes with the Reference Copy archive option CommVault added last week, which allows customers to index and classify data and move it to low-cost storage.

Unlike several storage companies, CommVault did not have to reduce its forecast for this quarter, though Hammer admitted there are possible pitfalls ahead. Although CommVault reported its revenue from the U.S. federal government increased 43% from last year, Hammer said, “We are particularly cautious about U.S. federal government spending due to uncertainty associated with the recent fiscal impasse.” He also said he expects “softness” in big deals of greater than $500,000. “Many in the industry have reported big deal cancellations and pushouts,” he said.

Enterprise deals – which CommVault defines as $100,000 and up – increased only 3% last quarter.

“We understand we’re in a weak environment and also lumpy,” Hammer said. “So when you start getting into possibly seven-figure deals which makes a difference in our performance, we’re just issuing a concern. The positive is that the opportunities are there and the negative is we’re in an environment where those deals get pushed out, and there could be some future problems.”


October 28, 2013  9:36 AM

Data reduction a key feature in solid-state storage

Randy Kerns

The premise of data reduction is that more data can be put in the available physical space. Storing more data in a fixed amount of space drives down the price of storing data and gives the added benefits of reducing the footprint, power consumption, and cooling required.

Performance requirements for data reduction vary depending on the type of data. If the data needs to be accessed frequently or in a time-critical manner, the process of data reduction and expansion on access must have no measurable impact on performance. The performance demand relaxes as the data becomes less important or less frequently accessed.

Performance impact is crucial when using data reduction with solid-state technology. Solid-state storage, implemented in NAND flash today, is used in performance-demanding environments. Response time is the most critical element in accelerating performance.

Data reduction is accomplished through deduplication and compression. Deduplication is most effective where there is repetitive data, such as with successive backups. The effectiveness diminishes as the data becomes less repetitive. Compression uses an algorithmic process to reduce the representation of data in strings as it is parsed. Compression effectiveness varies based on the type or compressibility of the data, but it is relatively consistent for a given type and has predictable averages.
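A minimal sketch makes the two techniques concrete. This toy example – not any vendor’s implementation – deduplicates fixed-size blocks by fingerprint and compresses each unique block once. Repetitive data such as successive backups collapses dramatically, while random data barely shrinks:

```python
import hashlib
import zlib

def reduce_blocks(data: bytes, block_size: int = 4096):
    """Toy block-level reduction: dedupe identical blocks by SHA-256
    fingerprint, then compress each unique block once."""
    store = {}    # fingerprint -> compressed unique block
    recipe = []   # ordered fingerprints to rebuild the original stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:           # first sighting: compress and keep
            store[fp] = zlib.compress(block)
        recipe.append(fp)             # repeats cost one recipe entry each
    return store, recipe

def restore(store, recipe):
    return b"".join(zlib.decompress(store[fp]) for fp in recipe)

# Repetitive data, like successive backups, collapses dramatically:
data = b"nightly backup payload " * 20000
store, recipe = reduce_blocks(data)
assert restore(store, recipe) == data
unique_bytes = sum(len(b) for b in store.values())
print(f"{len(data):,} bytes in -> {unique_bytes:,} bytes of "
      f"unique compressed blocks")
```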

There are arguments for using either dedupe or compression, but many of the arguments are parochial. For primary data, compression in a storage system has proven effective for a long time, going back to the StorageTek Iceberg/IBM RVA virtual disk products from the 1990s.

There are several ways to reduce data on NAND flash. One method is predicated on the use of standard solid-state devices (SSDs) packaged to replace hard disk drives (HDDs), with attachment and data transfer using disk drive protocols. These standard devices have an internal flash controller and flash memory chips, along with the protocol interfaces to mimic a disk drive. With these drives, data reduction is added external to the SSD, in what we would call the storage controller. The implementation in the storage controller is done using the internal processor or with custom hardware. In this case, data reduction uses controller resources and may have a noticeable performance impact.

There is less likely to be a performance impact if the reduction is done inline – while the data is being written. Other implementations may store data and then do the data reduction later (called post-storage or post-processing data reduction). Post-storage reduction consumes resources that may or may not affect performance, and response time may be delayed while the data is expanded before access.
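The tradeoff is easy to see in a toy sketch (hypothetical, not drawn from any shipping controller): an inline store pays the compression cost on every write, while a post-process store acknowledges writes at full speed and spends capacity and background cycles reducing them later:

```python
import queue
import threading
import zlib

class InlineStore:
    """Inline reduction: each write returns only after the block is
    compressed, so the reduction cost shows up as write latency."""
    def __init__(self):
        self.blocks = []

    def write(self, block: bytes):
        self.blocks.append(zlib.compress(block))

class PostProcessStore:
    """Post-storage reduction: writes land at full size and are
    acknowledged immediately; a background thread compresses them
    later, consuming temporary capacity and spare cycles."""
    def __init__(self):
        self.blocks = []
        self._pending = queue.Queue()
        threading.Thread(target=self._reduce_loop, daemon=True).start()

    def write(self, block: bytes):
        self.blocks.append(block)            # full-size copy lands first
        self._pending.put(len(self.blocks) - 1)

    def _reduce_loop(self):
        while True:
            idx = self._pending.get()        # compress when cycles allow
            self.blocks[idx] = zlib.compress(self.blocks[idx])
```

A real controller would also track which blocks have been reduced so reads can expand them correctly; the sketch only illustrates where the work lands on the write path.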

Other designs using flash storage have custom flash controllers with flash memory. These are unique designs for the different storage system implementations. Often, shadow RAM is used in these designs to optimize page updating. A processor element is included to control the algorithms for flash usage. Data reduction in the flash controller is transparent to the storage controller that manages the access to the storage. The flash controller is expected to do the data reduction without impacting performance.

Over time, data reduction will become an important competitive feature for solid-state storage, and designs and capabilities will continue to advance. This does not mean that compressing data elsewhere will not be useful. There is value in compressing data on HDDs and for transferring data, especially to remote sites. The important thing to understand is that reducing data stored in solid-state technology is an evolutionary development with compelling value and will result in competitive vendor implementations.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 25, 2013  5:01 PM

Quantum drops revenue, gains ‘viability’

Dave Raffo

Although Quantum’s revenue declined from last year, CEO Jon Gacek said the backup vendor is in much better shape than it was 12 months ago.

Quantum this week reported revenue of $131.4 million, which was below its guidance and down 11% from the same quarter last year.

Gacek said the plus side is that the company cut its loss from $8 million last year to $5 million this year, increased gross margin from 42% to 42.9%, reduced operating expenses by 11% and increased its cash from $33 million to $77 million.

“Last year there was so much anxiety about our balance sheet,” Gacek said. “Our cash has more than doubled and we paid off all our current debt. Last year I had to spend a lot of time defending our viability. Now I’m getting pressure on revenue growth, but last year it was ‘Hey, you lost money again.’”

Gacek blamed the poor revenue last quarter on low federal government spending ahead of this month’s shutdown, and on the poor European economy. He said the government problems particularly hurt sales of DXi deduplication backup appliances, which declined 30% from last year. Tape automation revenue declined 15%.

Gacek said he is optimistic about the prospects for recently launched StorNext 5 and new Lattus object storage systems, and is hoping the government spending constraints will lift. “We believe deals that got hung up in the lead up to the federal government shutdown may materialize,” he said. “We know there are deals in the pipeline, it’s a matter of whether they’re going to pop.”


October 25, 2013  3:22 PM

NetApp, HGST play roles in Verizon Cloud

Dave Raffo

Since the beta program for Verizon Cloud Compute and Cloud Storage began earlier this month, Verizon Terremark has been pulling back the curtain on the storage used for those services.

Over the last two weeks, Verizon Terremark launched a partnership with storage vendor NetApp and revealed it is using flash storage from HGST as a building block for its cloud.

Verizon said it would use NetApp Data Ontap virtual storage appliances (VSAs) in the Verizon clouds, letting customers access Data Ontap data protection and file management features.

This is a different arrangement than NetApp has with Amazon, which allows customers to set up NetApp FAS and V-Series storage on the Amazon cloud and access them via Amazon Web Services (AWS).

“Here, it’s all software. There is no NetApp array involved,” Tom Shields, NetApp director of service provider marketing, said of the Verizon partnership.

Shields said the VSA used in the Verizon cloud is similar to the Data Ontap Edge virtual appliance NetApp sells for remote offices. “That’s the starting point,” he said of the Edge technology.

Verizon Terremark CTO John Considine said Verizon customers can set up the VSA filer through a template, selecting the capacity and services needed.

This week, Verizon announced it is using HGST’s s800 SAS solid-state drives (SSDs) as primary storage for the Cloud Compute service and as cache for Cloud Storage. HGST acquired the s800 SSDs in the recently closed sTec deal.

The HGST SSDs play a role in Verizon Cloud’s service options, as users can select service levels based on performance. “We allow the customer to adjust the performance level,” Considine said. “If it’s non-critical data and they just want to have the data out there and not do much with it, they can dial the performance level down and only pay for what they’re using.”

Verizon Cloud Storage uses the same SSDs for caching, with most of the data going on spinning disk.

“We’ll use SSDs to boost performance as we encode data and spread it across spinning disk,” Considine said of Cloud Storage.

Verizon Terremark was already discussing the deal with sTec before the HGST acquisition. Verizon does not use HGST hard disk drives, although Considine said “it is not out of the realm of consideration.”

You can expect to hear about more Verizon storage partners. Verizon has pledged to support cloud gateways and is also using object storage from a vendor it has yet to identify.

