If you’ve never heard of Load DynamiX, that’s probably because until today the start-up was known as SwiftTest. And if you never heard of SwiftTest, that’s probably because until today it only sold its storage validation software directly to storage vendors.
Along with the name change, Load DynamiX today launched a series of infrastructure and application performance validation appliances for IT organizations. The appliances generate massive loads to stress enterprise storage systems, simulate production workloads and validate new devices before putting them into production.
The appliance models include the 10G Base Series with two 10 Gigabit Ethernet (10 GigE) ports and support for iSCSI and NAS protocol emulation; the 10G Advanced Series with support for NFS 4, SMB 3, HTTP/S, CDMI and OpenStack Swift protocol emulation on top of the Base Series; the FC Series with two 8 Gbps Fibre Channel (FC) ports and FC and iSCSI emulation; and the Unified Series with two 10 GigE and two FC ports and iSCSI, NFS, SMB 2, FC and SCSI emulation. List prices are $130,000 for the 10G Base, $225,000 for the 10G Advanced, $95,000 for the FC and $180,000 for the Unified appliances.
All of the appliances include Workload Insight Manager, the software the vendor has made available to storage vendors since 2009.
Load DynamiX VP of marketing Len Rosenthal said EMC, Dell, NetApp and Hitachi Data Systems (HDS) use Workload Insight Manager to test their storage arrays.
Rosenthal said each 2U Load DynamiX appliance has the load generation capabilities of 20 servers, and they emulate the I/O profile of applications. He said the appliances are an alternative to using Iometer with a bank of servers. Unlike Iometer, Load DynamiX simulates metadata.
“We’re about understanding changing workloads,” Rosenthal said. “We get people to simulate workloads before going live.”
Rosenthal said GoDaddy.com used Load DynamiX to validate a hybrid solid-state drive (SSD) storage array and significantly reduced its cost before putting it into production, and the Healthcare.gov site fiasco was caused at least in part by lack of load testing before going live.
If you’ve never heard of the Healthcare.gov fiasco, that’s probably because you’ve been spending too much time trying to get your SAN or NAS up to speed.
Struggling storage companies Overland Storage and Tandberg Data today confirmed their plans to combine and try to turn two money-losing businesses into a winner. The companies today said they have reached agreement for Overland to acquire Tandberg in an all-stock transaction.
No purchase price was given, but Tandberg will become a wholly owned subsidiary of Overland. Overland CEO Eric Kelly and CFO Kurt Kalbfleisch will remain in their current roles and COO Randy Gast, Overland’s senior VP of worldwide operations and services, becomes COO of the new company. Cyrus Capital, which bought Tandberg out of bankruptcy in 2009, will get two of seven board positions.
On a conference call to discuss the deal, Kelly said the merged companies had more than $100 million in revenue last year – with around $60 million coming from Tandberg – and combining them provides “a clear path to profitability.”
Both companies have struggled on their own. Along with Tandberg’s bankruptcy, Overland has been losing money for years and its fortunes took a steep downturn after it lost a tape OEM deal with Hewlett-Packard in 2005 that accounted for most of its revenue. Overland has been trying to rebound as a storage systems company since then, although it still sells tape drives and libraries to go with its SAN and NAS systems and disk backup.
Tandberg also sells tape libraries and drives, RDX removable disk, disk backup and low-end NAS. Kelly pointed out that Tandberg’s tape and NAS products are sold into a lower end of the market than Overland’s, and there are few if any competing products.
“The product lines are complementary with minimal overlap,” he said.
Overland executives disclosed in May that they were discussing a merger with Tandberg. Kelly said he hopes the shareholder vote needed to close the deal will come by the end of the year.
Integrated backup appliance vendor Unitrends has new ownership while management remains the same and vows to move deeper into cloud-based data protection.
Insight Venture Partners completed a majority investment in Unitrends this week, giving it control of the data protection startup. Insight general partner Mike Triplett said Unitrends’ management team was one of the things he likes about the company. He said Insight will keep Unitrends CEO Mike Coney and his management team while Triplett and Richard Wells of Insight join the Unitrends board.
“There are three things we like about Unitrends,” Triplett said. “We like that it’s in a large and growing market segment, we like the management team, and the product is head and shoulders above the competition.”
Triplett likes the market so much that he also sits on the board of virtual machine backup software specialist Veeam Software and Acronis – two other Insight investments. He said he’s not concerned about being involved with companies that compete with each other because there is plenty of backup to go around.
“The market is big enough that everyone can prosper and do well,” he said.
Coney became Unitrends CEO in 2009 after working for Acronis and Veritas (now part of Symantec). He said Unitrends has about 260 employees and he expects it to grow substantially with the Insight investment. Although it does not disclose revenue and income figures, Unitrends claims it has grown revenue in 19 straight quarters and its year-over-year bookings increased 72% last quarter.
Coney said the vendor will continue to build on its integrated appliance platform, but “the biggest roadmap area of us is the cloud and DR as a service.” He said those plans include selling to managed service providers, offering its appliance customers options for replicating to the cloud for DR, and connecting to public clouds such as Amazon and Microsoft Azure.
He said Unitrends will maintain its focus on the mid-market – companies with 50 to 1,000 employees.
CommVault went against the grain and reported better-than-expected financial results last quarter. That makes the backup software vendor “public enemy number one” to its larger competitors, according to CEO Bob Hammer.
CommVault’s revenue of $141.9 million last quarter grew 20% from the previous year and six percent over the previous quarter. The revenue figure and the company’s $17.4 million net income beat Wall St. expectations. That comes after EMC, Symantec and IBM all missed expectations, including slow growth or declines in backup software.
Still, CommVault is not immune from problems plaguing the storage industry, such as slow federal government spending and companies’ cautious approach to closing big deals. Most of all, it faces pricing pressure from the big boys of data protection.
When asked if larger competitors Symantec, EMC and IBM are doing anything different competitively, Hammer said they were coming up with “tricky, crazy pricing initiatives” such as deep discounts and product bundling.
“Those guys are completely irrational in their pricing policies,” Hammer said on CommVault’s earnings call with analysts. “We’ve become public enemy number one. So any tricky, crazy pricing initiative they can possibly think of, they throw at customers and we’re pretty savvy in understanding what those are and can parry them pretty well. But that’s their primary weapon. We’re pretty well attuned to what each of these different vendors are doing there and respond accordingly. So my answer to them is, bring it on.”
CommVault has some tricks of its own to play in the form of new features for its Simpana 10 platform. Hammer said CommVault will bolster Simpana 10 “in the very near future” with products including enhanced archiving for Microsoft Exchange and SharePoint, self-service try-and-buy products for SMBs, features for virtual machine administrators and more partners for its IntelliSnap array-based snapshotting.
All of that goes with the Reference Copy archive option CommVault added last week, which allows customers to index and classify data and move it to low-cost storage.
Unlike several storage companies, CommVault did not have to reduce its forecast for this quarter although Hammer admitted there are possible pitfalls ahead. Although CommVault reported its revenue from the U.S. federal government increased 43% from last year, Hammer said “We are particularly cautious about U.S. federal government spending due to uncertainty associated with the recent fiscal impasse.” He also said he expects “softness” in big deals of greater than $500,000. “Many in the industry have reported big deal cancellations and pushouts,” he said.
Enterprise deals – which CommVault defines as $100,000 and up – only increased three percent last quarter.
“We understand we’re in a weak environment and also lumpy,” Hammer said. “So when you start getting into possibly seven-figure deals which makes a difference in our performance, we’re just issuing a concern. The positive is that the opportunities are there and the negative is we’re in an environment where those deals get pushed out, and there could be some future problems.”
The premise of doing data reduction of stored information is that more data can be put in the available physical space. Storing more data in a fixed amount of space drives down the price of storing data and gives added benefits of reducing the footprint, power consumption, and cooling required.
Performance requirements for data reduction vary depending on the type of data. If the data needs to be accessed frequently or in a time critical manner, the process of data reduction and expansion on access must have no measurable impact on performance. The performance demand is relaxed as the data becomes less important or more infrequently accessed.
Performance impact is crucial when using data reduction with solid-state technology. Solid-state storage, implemented in NAND flash today, is used in performance demanding environments. Response time is the most critical element in accelerating performance.
Data reduction is accomplished through deduplication and compression. Deduplication is most effective where there is repetitive data, such as with successive backups. Its effectiveness diminishes as the data becomes less repetitive. Compression uses an algorithmic process to reduce the representation of the data as it is parsed. Compression effectiveness varies with the type, or compressibility, of the data, but is relatively consistent for a given type and has predictable averages.
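The contrast between the two techniques can be seen in a short sketch. This is a simplified fixed-block model for illustration only; production systems typically use variable-length chunking and tuned compression algorithms:

```python
import hashlib
import random
import zlib

def dedupe_ratio(data: bytes, block_size: int = 4096) -> float:
    """Fixed-block deduplication: hash each block and count unique blocks."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

def compression_ratio(data: bytes) -> float:
    """Ratio of original size to zlib (DEFLATE) compressed size."""
    return len(data) / len(zlib.compress(data))

# Successive "backups" that are nearly identical dedupe well even though
# the underlying data is random and therefore essentially incompressible.
random.seed(42)
backup1 = bytes(random.getrandbits(8) for _ in range(256 * 1024))
backup2 = backup1[:-4096] + b"\xff" * 4096  # same backup, one changed block
combined = backup1 + backup2

# Repetitive text, by contrast, compresses well.
text = b"backup data backup data " * 8192

print(f"random backups: dedupe {dedupe_ratio(combined):.2f}x, "
      f"compression {compression_ratio(combined):.2f}x")
print(f"repetitive text: dedupe {dedupe_ratio(text):.2f}x, "
      f"compression {compression_ratio(text):.2f}x")
```

Running this shows the two successive backups deduplicating close to 2x while barely compressing at all, which is why dedupe dominates in backup targets while compression is the safer bet for primary data of mixed types.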
There are arguments for using either dedupe or compression, but many of the arguments are parochial. For primary data, compression in a storage system has proven effective for a long time, going back to the StorageTek Iceberg/IBM RVA virtual disk products from the 1990s.
There are several ways to reduce data on NAND flash. One method uses standard solid-state drives (SSDs) packaged to replace hard disk drives (HDDs), with attachment and data transfer over disk drive protocols. These standard devices have an internal flash controller and flash memory chips, along with the protocol interfaces to mimic a disk drive. With these drives, data reduction is added external to the SSD, in what we would call the storage controller. The storage controller implements it using its internal processor or custom hardware. In this case, data reduction consumes controller resources and may have a noticeable performance impact.
There is less likely to be a performance impact if the reduction is done inline – while the data is being written. Other implementations store the data first and reduce it later (called post-storage, or sometimes post-processing, data reduction). Post-storage reduction consumes resources that may or may not affect performance, and response time may be delayed while the data is expanded before access.
Other designs using flash storage have custom flash controllers with flash memory. These are unique designs for the different storage system implementations. Often, shadow RAM is used in these designs to optimize page updating. A processor element is included to control the algorithms for flash usage. Data reduction in the flash controller is transparent to the storage controller that manages the access to the storage. The flash controller is expected to do the data reduction without impacting performance.
Over time, data reduction will become an important competitive feature for solid-state storage, and designs and capabilities will continue to advance. This does not mean that compressing data elsewhere will not be useful. There is value for compressing data on HDDs and for transferring data, especially to remote sites. The important thing to understand is that reducing data stored in solid state technology is an evolutionary development with compelling value and will result in vendor competitive implementations.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).
Although Quantum’s revenue declined over last year, CEO Jon Gacek said the backup vendor is in much better shape than it was 12 months ago.
Quantum this week reported revenue of $131.4 million, which was below its guidance and down 11% from the same quarter last year.
Gacek said the plus side is the company cut its loss from $8 million last year to $5 million this year, increased gross margin from 42% to 42.9%, reduced operating expenses by 11% and increased its cash from $33 million to $77 million.
“Last year there was so much anxiety about our balance sheet,” Gacek said. “Our cash has more than doubled and we paid off all our current debt. Last year I had to spend a lot of time defending our viability. Now I’m getting pressure on revenue growth, but last year it was ‘Hey, you lost money again.’”
Gacek blamed the poor revenue last quarter on low federal government spending ahead of this month’s shutdown, and the poor European economy. He said the government problems particularly hurt sales of DXi deduplication backup appliances, which declined 30% over last year. Tape automation revenue declined 15%.
Gacek said he is optimistic about the prospects for recently launched StorNext 5 and new Lattus object storage systems, and is hoping the government spending constraints will lift. “We believe deals that got hung up in the lead up to the federal government shutdown may materialize,” he said. “We know there are deals in the pipeline, it’s a matter of whether they’re going to pop.”
Since the beta program for Verizon’s Cloud Compute and Cloud Storage services began earlier this month, Verizon Terremark has been pulling back the curtain on the storage used for those services.
Over the last two weeks, Verizon Terremark launched a partnership with storage vendor NetApp and revealed it is using flash storage from HGST as a building block for its cloud.
Verizon said it would use NetApp Data Ontap virtual storage appliances (VSAs) in the Verizon clouds that let NetApp array customers access Data Ontap data protection and file management features.
This is a different arrangement than NetApp has with Amazon, which allows customers to set up NetApp FAS and V-Series storage on the Amazon cloud and access them via Amazon Web Services (AWS).
“Here, it’s all software. There is no NetApp array involved,” Tom Shields, NetApp director of service provider marketing, said of the Verizon partnership.
Shields said the VSA used in the Verizon cloud is similar to the Data Ontap Edge virtual appliance NetApp sells for remote offices. “That’s the starting point,” he said of the Edge technology.
Verizon Terremark CTO John Considine said Verizon customers can set up the VSA filer through a template, selecting the capacity and services needed.
This week, Verizon announced it is using HGST’s s800 SAS solid-state drives (SSDs) as primary storage for the Cloud Compute service and as cache for Cloud Storage. HGST acquired the s800 SSDs in the recently closed sTec deal.
The HGST SSDs play a role in Verizon Cloud’s service options, as users can select service levels based on performance. “We allow the customer to adjust the performance level,” Considine said. “If it’s non-critical data and they just want to have the data out there and not do much with it, they can dial the performance level down and only pay for what they’re using.”
Verizon Cloud Storage uses the same SSDs for caching, with most of the data going on spinning disk.
“We’ll use SSDs to boost performance as we encode data and spread it across spinning disk,” Considine said of Cloud Storage.
Verizon Terremark was already discussing the deal with sTec before the HGST acquisition. Verizon does not use HGST hard disk drives, although Considine said “it is not out of the realm of consideration.”
You can expect to hear about more Verizon storage partners. Verizon has pledged to support cloud gateways and is also using object storage from a vendor it has yet to identify.
If it flies, Seagate’s new Kinetic Open Storage architecture can provide a huge boost to object storage and cloud storage adoption.
There are two parts to the Kinetic architecture. One is high-capacity hard drives with two Gigabit Ethernet ports in place of the pins used for SAS/SATA disk interfaces. The other is an open-source API supported by OpenStack Object Storage.
The API is a software library that uses a key-value interface, where the key is the metadata and the value is the data itself. The API communicates directly with the hard drive for object commands such as PUT and GET.
Seagate’s goal is to eliminate the storage server tier and enable applications to speak directly to the storage device.
“The file system is gone. The drive does the space management,” said Ali Fenn, senior director of Seagate’s advanced storage platform. “Applications are dealing with objects, and we should let them deal with objects right down to the drive.”
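The key-value model described above can be sketched with a minimal in-memory stand-in for a Kinetic drive. The class and method names here are illustrative assumptions, not Seagate’s actual client library; the real drive speaks this protocol over its Ethernet ports:

```python
class KineticDriveSim:
    """Hypothetical simulation of a drive that stores objects by key,
    with no file system and no block/LBA addressing."""

    def __init__(self) -> None:
        self._store: dict[bytes, bytes] = {}  # key (metadata) -> value (data)

    def put(self, key: bytes, value: bytes) -> None:
        # PUT: the application writes an object straight to the drive.
        self._store[key] = value

    def get(self, key: bytes) -> bytes:
        # GET: retrieve the object by its key.
        return self._store[key]

    def delete(self, key: bytes) -> None:
        # DELETE: remove the object; the drive handles space management.
        self._store.pop(key, None)

# The application talks to the drive directly -- no storage server tier.
drive = KineticDriveSim()
drive.put(b"user/123/photo.jpg", b"...image bytes...")
data = drive.get(b"user/123/photo.jpg")
```

The point of the design is visible even in this toy: the unit of access is a named object, so the file system, volume manager and RAID layers that normally translate names into block addresses have nothing left to do.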
Seagate is trying to put together the wide industry support that will be necessary for this to catch on. Its release included supporting comments from Rackspace, Dell, Yahoo, Supermicro, SwiftStack, Xyratex, Newisys, EVault, Basho, Huawei and Hive Solutions. That’s a start, but it will take a lot more support to make it work.
Seagate has made the software library and a simulator available to companies who want to write apps for Kinetic storage, and Fenn said 3.5-inch nearline drives supporting the architecture will be available in mid-2014.
If it takes hold, the Kinetic platform can wipe out a good piece of the existing storage stack. Then it will get interesting to see what else goes – traditional storage arrays maybe?
In a report for the SSG-Now newsletter, analyst James Bagley wrote: “The Seagate announcement does not doom the manufacturers of conventional storage arrays. However, it does mean that object storage users, particularly in cloud deployments, will have a much more efficient means of getting data into and out of storage devices.”
We’re just a week or so away from the deadline for filing an entry in the Storage magazine/SearchStorage.com Products of the Year competition. If you’re a storage vendor and you rolled out something new and cool—or upgraded an existing product to make it cooler than ever—you should click here now and fill out the entry form. If you work in—or manage—a storage shop and were impressed with a new or improved product, tell your vendor to get on the stick and enter it.
This is the 12th year of the Storage Products of the Year project, and since its inception in 2002 thousands of product entries have been submitted and 167 of those products have won either gold, silver or bronze awards. Fourteen of those winners were honored last year.
Our Products of the Year winners have always been a gratifying mix of storage stalwarts and upstarts. That combination has shown that technical innovation can come from companies big and small, and from companies that have already earned a following and those that are seeking to gain some recognition. Being named a Storage Product of the Year winner has helped a number of companies increase their visibility—but more importantly, it has helped storage managers make better informed purchasing decisions.
We’re proud that we’ve been able to recognize some vendors and their products long before they became household names—such as CommVault and pre-EMC Data Domain. Some vendors showcase products that are so ground-breaking that they provoke “gotta have it” longings from not only users, but from competing vendors as well. As a result, 41 of our past winners have been acquired by larger vendors who just couldn’t resist their compelling technologies.
We’ve also seen how vendors can keep improving on good technology, as many of past winners returned to grab additional awards for new products or for enhancing existing wares. Some notable multi-year winners include:
- CommVault – 6 awards
- Data Domain (before and after EMC acquisition) – 6
- NetApp – 6
- Quantum – 5
- Symantec (including Veritas) – 5
- FalconStor – 4
- QLogic – 4
- Riverbed – 4
This year’s finalists will be announced on SearchStorage.com in January. Winners of the 2013 Storage Product of the Year awards will be announced in the February 2014 issue of Storage magazine and on SearchStorage.com.
The U.S. federal government shutdown and cautious IT spending caused EMC to miss its revenue goals last quarter and lower expectations for the year.
The storage market leader reported revenue of $5.5 billion last quarter — up 5% over last year but $250 million below expectations – and its new guidance for the year of $23.25 billion is below analysts’ expectation of $23.44 billion. EMC executives blamed the shortfall on low U.S. federal government spending and customers outside of government waiting until the end of the quarter to place orders.
On the company’s earnings call today, EMC CEO Joe Tucci said he had “mixed feelings and emotions” about last quarter. “I am disappointed that we missed expectations,” he said. “On the other hand, I feel extremely good about our strategic positioning, products and service offerings. … We feel good that we continue to grow faster than most of our IT peers.”
Tucci said EMC’s federal storage business revenue declined more than 40% over last year — a huge drop because it was the government’s fiscal year-ending quarter, meaning it usually spends more on IT than in any other quarter. Tucci said while that revenue did not go to another storage vendor and the lost deals may not be dead, government budget uncertainties will prevent EMC from making up the shortfall this quarter.
Outside of the government, Tucci said “customer caution and scrutiny of purchases continued,” forcing a backlog of orders that came in on the final day of the quarter. He said EMC received almost $300 million in orders on the final day – three times what it was expecting – and about $100 million of those orders were pushed to this quarter.
Despite that $100 million already on the books, EMC lowered its expectations for this quarter. EMC president David Goulden said he is still expecting a budget flush in the last quarter of the year but not as strong as in most years.
Like Tucci, however, Goulden said EMC is increasing its market share over competitors. “We are doing well relative to the market in all the segments we play in,” he said.
EMC executives said sales of high-end VMAX arrays took the biggest hit from federal government spending slowdown.
Other tidbits from the call:
- 70% of VMAX systems shipped had some flash storage in them.
- The next-generation Atmos object storage system, due to ship next year, will be part of EMC’s Project Nile.
- Around half of the VNX unified storage systems shipped last quarter were the new models launched in August.
- The all-flash XtremIO array is scheduled to ship this quarter.
- EMC will add Hadoop Distributed File System (HDFS) support to ViPR next year.