“We missed you at the [name deleted] Institute for Sports Medicine.” This was the title of an email I received, reminding me of when I blew out a knee skiing years ago, and the subsequent rehab process.
The knee injury was not a positive experience. It was one of those events that you use to gauge different points in your life. Among other things, to me it meant no more runs on black diamond trails. But the email didn’t only bring up personal memories. The introductory paragraph also reminded me of marketing messages I’ve seen recently in the storage industry:
“Do you have something that has been bothering you for a while? Sometimes early intervention of an issue can be addressed conservatively through a variety of treatment options. Don’t let a little problem right now become a big problem later!”
This had several negative implications. The first was that I’d hurt myself again and had not sought treatment. This plays on the probability that, having been injured once, I’m likely to be injured again. That’s a reflection either on the activities I’m involved in or on a tendency to get hurt. The second is that I have not addressed the resulting problem. The final sentence is a thinly veiled prediction that it will get worse.
This type of marketing is not much different from what we see in the storage industry – not to mention ultra-paranoid information security marketing. Many approaches to selling storage data protection try to get the potential customer to identify with a recent real-world disaster. If the customers have experienced a data loss or unavailability incident themselves, even better.
There are several approaches taken, depending on the focus and product being sold. The easy association is with data protection and the need to guard against high-profile disasters. Headlines involving IT usually involve bad news and protecting IT from the calamities experienced by others is a great sales approach.
There are also several negative sales approaches used for storage systems. One is the appeal to avoid the types of failures that result in data unavailability; purchasing a more reliable, mature system with the best support available is presented as the answer. There are plenty of examples to remind potential customers of failures with inadequate storage systems or with immature storage software, meaning products that lacked extensive field experience in critical environments.
Arguing from the negative works with those who have been injured previously. (I can feel my knee twitch a bit now.) But it may not work so well with those who have not. A better approach may be to explain the value a solution brings. In most cases, that value must be explained in economic terms.
Unavailability of information has an economic impact. Looking at the potential impact and what it means requires understanding the customers and their businesses. From there, you have to show the prevention alternatives provided by the product and the economics – cost vs. value. This requires more homework and evaluation but provides a better solution and a better understanding for the customer. It may not be as quick a sales opportunity as associating with a negative, potentially painful event, but it is probably best for the customer and leads to building trust and follow-on business.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Flynn and Fusion-io co-founder Rick White haven’t given many specifics on their new company. The funding press release said they plan to bring a product to market in 2014 that will solve problems caused by a “new era of storage that spans from flash to cloud.”
I spoke with Flynn recently and he didn’t get much deeper into product specifics than that, but said “I want to be a vendor to help solve problems coming our way. We want to deal with the hard problems of managing distributed data, performance and capacity.”
He added that these management problems are leading to trends such as putting data in the cloud and on bring-your-own-devices (BYOD). “I think we’re at a tipping point,” Flynn said. “Hardware is either being sent off to the cloud or to a person’s own devices. Running a business, you don’t put in your own server infrastructure now, you send it off to somebody in the cloud.”
He said this is changing the way people buy storage, and who they buy it from. Flynn said he expects a lot more organizations to take their lead from the way companies such as Amazon and Facebook are building their architectures.
“The cloud is forcing a distributed model for storage,” Flynn said. “It’s not centralized any more, it’s distributed in an infrastructure. It’s commodity off-the-shelf industry standard platform storage, not proprietary systems from EMC. You think Amazon uses NetApp or EMC? No way, that’s crazy. And anybody who wants to compete with Amazon is not going to use NetApp or EMC. It’s not cost competitive, and you can’t scale it big enough.
“Facebook is having servers built to their specs by Chinese ODMs, and not the major system vendors anymore.”
As for BYOD, Flynn said IT may still consider it a nuisance but will grow to love it. “Didn’t PCs come about in the same way?” he said. “Administrators held on to their mainframes and they hated PCs. I think the same is true for BYOD. It’s inevitable because of productivity improvements. People are doing most of their work on their own devices — tablets, smartphones, and laptops.”
Flynn said flash is in a similar position as the cloud – it’s here to stay but brings a new set of management issues.
“I think there are still significant challenges with how to get the manageability of traditional storage with the performance of a distributed flash architecture and the capacity of distributed cloud object storage,” Flynn said. “There are still significant challenges with how to deploy and use flash and we are still in the early innings with that. But that problem is not just about flash. It’s about distributed storage architectures. That’s a big problem. Doing something centralized makes it easy. Managing a mainframe was easy compared to managing PCs on everybody’s desktop.”
How will Primary Data try to solve these problems? That’s one of the secrets that will be revealed in 2014.
Twitter isn’t the only technology IPO this week. Security and backup vendor Barracuda Networks began trading on the New York Stock Exchange Wednesday with promising initial results.
While Barracuda is still more security than storage, CEO BJ Jenkins said backup makes up about one-third of Barracuda’s new business, and is increasing year-over-year at a faster rate than the 26% overall market growth.
He attributes that to Barracuda’s end-to-end data protection approach. While it still sells its backup software standalone, most deals are for integrated appliances along with cloud subscriptions for backup and disaster recovery. Barracuda maintains its own multi-petabyte cloud, and Jenkins said most of its backup appliance customers also use it.
“If you back up into the cloud and have an issue locally, you can spin up a virtual server in our cloud and run your business off a deduped backup copy,” Jenkins said. “This end-to-end offering has made a big difference. Customers used to buy Symantec and some kind of disk and tape, and rotate tapes and do replication for DR.”
Barracuda sells mostly to SMB and mid-range companies, competing primarily with Symantec Backup Exec.
Jenkins, who ran EMC’s backup division before becoming Barracuda CEO in Nov. 2012, said one reason Barracuda went public is to gain more credibility with customers who want to know their security and data protection vendors are stable companies. Unlike flash vendor Violin Memory on its first day as a public company, Barracuda saw its price rise in the hours after its IPO. Barracuda began trading at $18 – the low end of its projected range – but its shares closed at $21.55 Wednesday.
“I feel good about the first day of trading,” Jenkins said. “We were fortunate to get out before Twitter. They’ve taken a lot of oxygen out of the air.”
Coho Data this week pulled in $25 million in funding to expand and market the scale-out storage platform it launched into beta last month.
CTO Andy Warfield said the new funding will be used to add features to the Coho DataStream series in 2014. The DataStream is a hybrid storage system that combines a software-controlled switch with PCI flash and hard drives, which Coho sees as a building block for companies that want Amazon-style storage.
The original product is file-based. Warfield said the vendor has seen little demand for Fibre Channel or iSCSI but there have been requests for FC over Ethernet (FCoE), so FCoE support will likely follow next year. There will also probably be SMB protocol support coming to go with NFS-only in the original version, and deduplication and replication are on the roadmap list.
Warfield said Coho Data will also announce a product upgrade path next year. “You can expect a more continuous and dynamic approach to upgrades than has historically been the case for storage products.”
As Coho Data prepares to make its product generally available, Warfield said the target audience has shifted from the original plan of marketing to small-to-medium businesses (SMBs).
“We found SMBs had no need for the performance we get from our box,” he said. “But we found larger storage environments, in the three-petabyte to 10-petabyte range, often had performance pain with their existing enterprise storage.”
Coho DataStream micro arrays ship with 3.2 TB of raw flash (Intel SSD 910 PCI cards) and 36 TB of spinning disk. Warfield said the startup uses a pricing model similar to the Amazon AWS provisioned IOPS model. He said 40,000 IOPS with a three-year support contract costs around $2.50 per GB.
The B funding round was led by new investor Ignition Partners with previous investor Andreessen Horowitz participating, and brings Coho Data’s total funding to $35 million.
If you’ve never heard of Load DynamiX, that’s probably because until today the start-up was known as SwiftTest. And if you never heard of SwiftTest, that’s probably because until today it only sold its storage validation software directly to storage vendors.
Along with the name change, Load DynamiX today launched a series of infrastructure and application performance validation appliances for IT organizations. The appliances generate massive loads to stress enterprise storage systems, simulate production workloads and validate new devices before putting them into production.
The appliance models include the 10G Base Series with two 10 Gigabit (10 GigE) ports and support for iSCSI and NAS protocol emulation; the 10G Advanced Series with support for NFS 4, SMB 3, HTTP/S, CDMI and OpenStack Swift protocol emulation on top of the Base Series; FC Series with two 8 Gbps Fibre Channel (FC) ports and FC and iSCSI emulation; and the Unified Series with support for two 10 GigE and two FC ports and iSCSI, NFS, SMB 2, FC and SCSI emulation. List prices are $130,000 for the 10G Base, $225,000 for the 10G Advanced, $95,000 for the FC and $180,000 for the Unified appliances.
All of the appliances include Workload Insight Manager, the software the vendor has made available to storage vendors since 2009.
Load DynamiX VP of marketing Len Rosenthal said EMC, Dell, NetApp and Hitachi Data Systems (HDS) use Workload Insight Manager to test their storage arrays.
Rosenthal said each 2U Load DynamiX appliance has the load generation capability of 20 servers and emulates the I/O profile of applications. He said the appliances are an alternative to using Iometer with a bank of servers. Unlike Iometer, Load DynamiX simulates metadata.
“We’re about understanding changing workloads,” Rosenthal said. “We get people to simulate workloads before going live.”
Rosenthal said GoDaddy.com used Load DynamiX to validate a hybrid solid-state drive (SSD) storage array and significantly reduced its cost before putting it into production. He also said the Healthcare.gov site fiasco was caused at least in part by a lack of load testing before going live.
If you’ve never heard of the Healthcare.gov fiasco, that’s probably because you’ve been spending too much time trying to get your SAN or NAS up to speed.
Struggling storage vendors Overland Storage and Tandberg Data today confirmed their plans to combine and try to turn two money-losing businesses into a winner. The companies said they have reached an agreement for Overland to acquire Tandberg in an all-stock transaction.
No purchase price was given, but Tandberg will become a wholly owned subsidiary of Overland. Overland CEO Eric Kelly and CFO Kurt Kalbfleisch will remain in their current roles, and Randy Gast, Overland’s senior VP of worldwide operations and services, becomes COO of the new company. Cyrus Capital, which bought Tandberg out of bankruptcy in 2009, will get two of seven board positions.
On a conference call to discuss the deal, Kelly said the merged companies had more than $100 million in revenue last year – with around $60 million coming from Tandberg – and combining them provides “a clear path to profitability.”
Both companies have struggled on their own. Along with Tandberg’s bankruptcy, Overland has been losing money for years, and its fortunes took a steep downturn after it lost a tape OEM deal with Hewlett-Packard in 2005 that accounted for most of its revenue. Overland has been trying to rebound as a storage systems company since then, although it still sells tape drives and libraries to go with SAN and NAS systems and disk backup.
Tandberg also sells tape libraries and drives, RDX removable disk, disk backup and low-end NAS. Kelly pointed out that Tandberg’s tape and NAS products are sold into a lower end of the market than Overland’s, so there is little or no product overlap.
“The product lines are complementary with minimal overlap,” he said.
Overland executives disclosed in May that they were discussing a merger with Tandberg. Kelly said he hopes the shareholder vote needed to close the deal will come by the end of the year.
Integrated backup appliance vendor Unitrends has new ownership while management remains the same and vows to move deeper into cloud-based data protection.
Insight Venture Partners completed a majority investment in Unitrends this week, giving it control of the data protection vendor. Insight general partner Mike Triplett said Unitrends’ management team is one of the things he likes about the company. He said Insight will keep Unitrends CEO Mike Coney and his management team, while Triplett and Richard Wells of Insight join the Unitrends board.
“There are three things we like about Unitrends,” Triplett said. “We like that it’s in a large and growing market segment, we like the management team, and the product is head and shoulders above the competition.”
Triplett likes the market so much that he also sits on the board of virtual machine backup software specialist Veeam Software and Acronis – two other Insight investments. He said he’s not concerned about being involved with companies that compete with each other because there is plenty of backup to go around.
“The market is big enough that everyone can prosper and do well,” he said.
Coney became Unitrends CEO in 2009 after working for Acronis and Veritas (now part of Symantec). He said Unitrends has about 260 employees and he expects it to grow substantially with the Insight investment. Although it does not disclose revenue and income figures, Unitrends claims it has grown revenue in 19 straight quarters and its year-over-year bookings increased 72% last quarter.
Coney said the vendor will continue to build on its integrated appliance platform, but “the biggest roadmap area for us is the cloud and DR as a service.” He said those plans include selling to managed service providers, offering its appliance customers options for replicating to the cloud for DR, and connecting to public clouds such as Amazon and Microsoft Azure.
He said Unitrends will maintain its focus on the mid-market – companies with 50 to 1,000 customers.
CommVault went against the grain and reported better-than-expected financial results last quarter. That makes the backup software vendor “public enemy number one” to its larger competitors, according to CEO Bob Hammer.
CommVault’s revenue of $141.9 million last quarter grew 20% from the previous year and 6% over the previous quarter. The revenue figure and the company’s $17.4 million net income beat Wall St. expectations. That comes after EMC, Symantec and IBM all missed expectations, including slow growth or declines in backup software.
Still, CommVault is not immune from problems plaguing the storage industry, such as slow federal government spending and companies’ cautious approach to closing big deals. Most of all, it faces pricing pressure from the big boys of data protection.
When asked if larger competitors Symantec, EMC and IBM are doing anything different competitively, Hammer said they were coming up with “tricky, crazy pricing initiatives” such as deep discounts and product bundling.
“Those guys are completely irrational in their pricing policies,” Hammer said on CommVault’s earnings call with analysts. “We’ve become public enemy number one. So any tricky, crazy pricing initiative they can possibly think of, they throw at customers and we’re pretty savvy in understanding what those are and can parry them pretty well. But that’s their primary weapon. We’re pretty well attuned to what each of these different vendors are doing there and respond accordingly. So my answer to them is, bring it on.”
CommVault has some tricks of its own to play in the form of new features for its Simpana 10 platform. Hammer said CommVault will bolster Simpana 10 “in the very near future” with products including enhanced archiving for Microsoft Exchange and SharePoint, self-service try-and-buy products for SMBs, features for virtual machine administrators and more partners for its IntelliSnap array-based snapshots.
All of that goes with the Reference Copy archive option CommVault added last week, which allows customers to index and classify data and move it to low-cost storage.
Unlike several storage companies, CommVault did not have to reduce its forecast for this quarter although Hammer admitted there are possible pitfalls ahead. Although CommVault reported its revenue from the U.S. federal government increased 43% from last year, Hammer said “We are particularly cautious about U.S. federal government spending due to uncertainty associated with the recent fiscal impasse.” He also said he expects “softness” in big deals of greater than $500,000. “Many in the industry have reported big deal cancellations and pushouts,” he said.
Enterprise deals – which CommVault defines as $100,000 and up – only increased three percent last quarter.
“We understand we’re in a weak environment and also lumpy,” Hammer said. “So when you start getting into possibly seven-figure deals which makes a difference in our performance, we’re just issuing a concern. The positive is that the opportunities are there and the negative is we’re in an environment where those deals get pushed out, and there could be some future problems.”
The premise of doing data reduction of stored information is that more data can be put in the available physical space. Storing more data in a fixed amount of space drives down the price of storing data and gives added benefits of reducing the footprint, power consumption, and cooling required.
Performance requirements for data reduction vary depending on the type of data. If the data needs to be accessed frequently or in a time-critical manner, the process of data reduction and expansion on access must have no measurable impact on performance. The performance demand is relaxed as the data becomes less important or less frequently accessed.
Performance impact is crucial when using data reduction with solid-state technology. Solid-state storage, implemented in NAND flash today, is used in performance-demanding environments. Response time is the most critical element in accelerating performance.
Data reduction is accomplished through deduplication and compression. Deduplication is most effective where there is repetitive data, such as with successive backups. The effectiveness diminishes as the data becomes less repetitive. Compression uses an algorithmic process to reduce the representation of the data as it is parsed. The compression effectiveness varies based on the type or compressibility of the data, but is relatively consistent for a given type and has predictable averages.
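To make the two techniques concrete, here is a toy Python sketch of block-level deduplication with compression applied to whatever survives. The function name, the SHA-256 fingerprinting and the 4 KB block size are my own illustrative choices, not any vendor's implementation:

```python
import hashlib
import zlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks, store each unique block once
    (keyed by its SHA-256 fingerprint), and compress the stored copies."""
    store = {}    # fingerprint -> compressed unique block
    recipe = []   # ordered fingerprints needed to reassemble the data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:
            store[fp] = zlib.compress(block)  # compress what dedupe keeps
        recipe.append(fp)
    return store, recipe

# Highly repetitive data (like successive backups) dedupes well:
backup = b"A" * 4096 * 10
store, recipe = dedupe_blocks(backup)
print(len(recipe), "blocks referenced,", len(store), "stored uniquely")
# → 10 blocks referenced, 1 stored uniquely
```

Random data would show the opposite behavior: every block fingerprints differently, so nothing dedupes, and compression shrinks it only as far as its entropy allows.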
There are arguments for using either dedupe or compression, but many of the arguments are parochial. For primary data, compression in a storage system has proven effective for a long time, going back to the StorageTek Iceberg/IBM RVA virtual disk products from the 1990s.
There are several ways to reduce data on NAND flash. One method is predicated on the use of standard solid-state devices (SSDs) packaged to replace hard disk drives (HDDs), with attachment and data transfer using disk drive protocols. These standard devices have an internal flash controller and flash memory chips along with the protocol interfaces to mimic a disk drive. With these drives, data reduction is added externally to the SSD, in what we would call the storage controller. The implementation in the storage controller is done using the internal processor or with custom hardware. In this case, data reduction uses controller resources and may have a noticeable performance impact.
There is less likely to be a performance impact if the reduction is done inline – while the data is being written. Other implementations may store the data first and do the data reduction later (called post-storage, or sometimes post-processing, data reduction). Post-storage reduction consumes resources that may or may not affect performance, and response time may be delayed while the data is expanded before access.
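The inline-versus-post-process distinction can be sketched in a few lines of Python. The class names here are hypothetical (real controllers do this in firmware or custom hardware): the inline design pays the compression cost on the write path, while the post-process design lands data at full size and reduces it in a later background pass:

```python
import zlib

class InlineReducer:
    """Compress on the write path: the cost is paid before the write completes."""
    def __init__(self):
        self.store = {}

    def write(self, key: str, data: bytes):
        self.store[key] = zlib.compress(data)  # reduced before landing

    def read(self, key: str) -> bytes:
        return zlib.decompress(self.store[key])

class PostProcessReducer:
    """Land raw data first, then reduce it in a background pass."""
    def __init__(self):
        self.store = {}  # key -> (is_compressed, payload)

    def write(self, key: str, data: bytes):
        self.store[key] = (False, data)  # fast acknowledgment, full size

    def background_pass(self):
        # Later, compress anything still stored raw.
        for key, (done, payload) in self.store.items():
            if not done:
                self.store[key] = (True, zlib.compress(payload))

    def read(self, key: str) -> bytes:
        done, payload = self.store[key]
        return zlib.decompress(payload) if done else payload

inline, post = InlineReducer(), PostProcessReducer()
inline.write("a", b"x" * 8192)
post.write("a", b"x" * 8192)   # acknowledged before any reduction happens
post.background_pass()         # reduction consumes resources later
assert inline.read("a") == post.read("a") == b"x" * 8192
```

The trade-off the sketch illustrates: the inline path risks slowing every write, while the post-process path temporarily holds full-size data and spends controller resources later.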
Other designs using flash storage have custom flash controllers with flash memory. These are unique designs for the different storage system implementations. Often, shadow RAM is used in these designs to optimize page updating. A processor element is included to control the algorithms for flash usage. Data reduction in the flash controller is transparent to the storage controller that manages the access to the storage. The flash controller is expected to do the data reduction without impacting performance.
Over time, data reduction will become an important competitive feature for solid-state storage, and designs and capabilities will continue to advance. This does not mean that compressing data elsewhere will not be useful. There is value for compressing data on HDDs and for transferring data, especially to remote sites. The important thing to understand is that reducing data stored in solid state technology is an evolutionary development with compelling value and will result in vendor competitive implementations.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Although Quantum’s revenue declined from last year, CEO Jon Gacek said the backup vendor is in much better shape than it was 12 months ago.
Quantum this week reported revenue of $131.4 million, which was below its guidance and down 11% from the same quarter last year.
Gacek said the plus side is the company cut its loss from $8 million last year to $5 million this year, increased gross margin from 42% to 42.9%, reduced operating expenses by 11% and increased its cash from $33 million to $77 million.
“Last year there was so much anxiety about our balance sheet,” Gacek said. “Our cash has more than doubled and we paid off all our current debt. Last year I had to spend a lot of time defending our viability. Now I’m getting pressure on revenue growth, but last year it was ‘Hey, you lost money again.’”
Gacek blamed the poor revenue last quarter on low federal government spending ahead of this month’s shutdown, and the poor European economy. He said the government problems particularly hurt sales of DXi deduplication backup appliances, which declined 30% over last year. Tape automation revenue declined 15%.
Gacek said he is optimistic about the prospects for recently launched StorNext 5 and new Lattus object storage systems, and is hoping the government spending constraints will lift. “We believe deals that got hung up in the lead up to the federal government shutdown may materialize,” he said. “We know there are deals in the pipeline, it’s a matter of whether they’re going to pop.”