IT management software vendor SolarWinds today acquired storage management vendor Tek-Tools for $42 million in cash and stock. SolarWinds executives say that by adding Tek-Tools’ Profiler software, the company can combine storage and virtualization management with its Orion network and application management portfolio. Its goal is the elusive “end-to-end IT management solution.”
Profiler provides backup reporting and monitoring along with storage resource management, and in recent years Tek-Tools made a push into virtual server management.
SolarWinds chief product strategist Kenny Van Zant said the acquisition makes sense because “silos are breaking down” between storage and networking teams and they’re looking for a common management tool. He said this trend is driven by consolidation of storage and networking around Ethernet and iSCSI SANs.
“Everyone is now plugging storage into their network, so storage has become a network device,” he said.
SolarWinds CEO Mike Bennett said his company will sell Profiler separately and will also make it available as an integrated module of Orion by the end of the year.
Both companies have headquarters in Texas. SolarWinds is based in Austin and Tek-Tools in Dallas. Bennett said about 60 Tek-Tools employees will join SolarWinds, including CEO Ken Barth, senior management, and its engineering team in Chennai, India.
SolarWinds completed an IPO last May, making it one of the few companies to go public in 2009. SolarWinds reported $32.4 million in revenue for the third quarter of 2009 with $12 million in income. It will report fourth quarter and 2009 yearly revenues Feb. 8. The vendor claims more than 88,000 customers and says Tek-Tools has about 1,300 customers.
Bennett said SolarWinds is paying $32 million in cash for Tek-Tools with the rest in stock. He expects about $4 million to $5 million in revenue from Tek-Tools software this year and an operating loss of around $3 million to $3.5 million. Bennett said Tek-Tools has been operating at a loss.
This will come as no surprise to many of you, some of whom I may have interviewed for my cloud storage feature published this month: two new analyst research reports show that despite the hype around cloud storage, its actual uptake among enterprise users has been minimal.
According to TheInfoPro’s Wave 13 survey of 309 Fortune 1000 and midsize enterprise storage professionals between August and November of last year, cloud storage showed up at the bottom of the list of technologies storage pros cited as likely to change their storage architecture in the next year.
TIP managing director Robert Stevenson, speaking in an audio slideshow showing Wave 13 highlights on TIP’s website, said that “Storage clouds…which [are] incredibly hyped in the enterprise, marketing and press, [are] really fairly low on the list of what storage pros think will change their storage architecture.” Storage clouds garnered about 5% of the responses in this category, putting them alongside file virtualization, enterprise SAS, 16 Gbps FC, and 10 GbE storage at the bottom of TIP’s chart. Virtual tape libraries, SSD and Fibre Channel over Ethernet (FCoE) rank above “storage clouds” on the list.
Also this week, Forrester Research analyst Andrew Reichman published a report titled “Business Users are Not Ready for Cloud Storage.” According to the report, based on the results of Forrester’s Enterprise And SMB Hardware Survey, North America And Europe, conducted in the third quarter last year, users are well aware of the cloud storage concept, “reflecting the buzz in the market.”
However, whether those users had interest in deploying the concept anytime soon was an entirely different matter. Of the 1,272 decision-makers surveyed, 43% said they were “simply not interested” while another 43% said they were interested but had no immediate plans to move forward with cloud storage.
“Respondents in all geographies and of all company sizes appear to have little interest in moving their data to the cloud any time soon,” the report concludes. “There is long-term potential for storage-as-a-service, but Forrester sees issues with guaranteed service levels, security, chain of custody, shared tenancy, and long-term pricing as significant barriers that still need to be addressed before it takes off in any meaningful way.”
So what DO the analysts see as the hot topic in 2010? According to TIP, data deduplication is far and away the hottest technology for the year so far, with about 40% of respondents saying they expect it to impact their storage architecture. Block virtualization, thin provisioning and server virtualization follow, with between 20% and 30% of responses each.
TIP also sees IT spending on SAN and NAS storage making a recovery this year, with NAS leading the way at 33% projected growth. “Block storage will start to show a turnaround later in the year,” said Stevenson. “It takes longer for database teams to talk to server teams and for server teams to talk to storage professionals [about what they need].”
Still, as we’ve heard CEOs like EMC’s Joe Tucci say, things aren’t getting back to 2007 levels anytime soon. The Wave 13 survey showed a continued reduction in the number of dialogues within organizations that are focused on business expansion, a pattern Stevenson said has been consistent for the last 18 months or so. Instead, key initiatives remain focused more narrowly on infrastructure-related issues like disaster recovery, archiving and regulatory compliance.
Among the vendors garnering the “exciting” label from respondents in the Wave 13 survey were EMC, which Stevenson said got a big jump with its acquisition of Data Domain; IBM, whose “excitement index” is up almost 100% over 2008 levels among Fortune 1000 respondents; HDS with its dynamic provisioning; and NetApp and Compellent, which caught the most attention in the midrange. Other vendors mentioned include 3PAR and primary storage capacity optimization vendors GreenBytes, Ocarina and StorWize.
In a follow-up to my post Monday about the VMware and VSS integration issues pointed out by W. Curtis Preston, Symantec emailed over the following statement from Peter Elliman, senior manager of product marketing for the Information Management Group.
Yes, we agree, that VMware should update their [sic] VSS writer code and we believe this is the best place for this issue to be resolved. We’re probably not the only one who believes this which is why only two vendors have created a work around here. VMware tools is constantly updated [sic], so when updates are provided by VMware, it lowers administration effort which is not trivial when you have 100s of VMs. With third party VSS writer code in a VM you run the risk that an upgrade from VMware tools will cause a conflict with the VSS writer there and you have separate code that has to be updated. This is why we focus on integration with VMware, rather than work-around efforts. We believe that VMware will address this issue in the future. Finally, we recommend agents when protecting mission-critical applications within VMs because not only does it assure consistency and proper log management, it also offers many more recovery options tailored to that application, e.g. Oracle tablespace recovery, or SQL Server filegroup recovery.
So there you have it. Until VMware’s VSS integration changes, we will probably still see users deploying backup software agents on guest machines, as recommended by Symantec.
Among the updates Symantec announced today to its NetBackup 7.0 and Backup Exec 2010 backup applications are enhancements to the granular backup of applications running in virtual servers, in part through integration with VMware’s new vStorage APIs for data protection.
These APIs are among the more widely hailed updates in vSphere 4 for storage pros. They promise to eliminate the cumbersome VMware Consolidated Backup (VCB) from the infrastructure and allow existing enterprise backup software tools to make backups directly from virtual machines, the same way they’ve been doing for physical servers.
Backup expert W. Curtis Preston has been among those claiming the vStorage APIs for data protection will be a boon for improving virtual server backups, but he points out, in light of Symantec’s announcement, that Symantec’s approach of integrating NetBackup 7 with VMware’s VSS implementation leaves something to be desired.
According to Preston’s research, VMware’s VSS support will perform consistent backups of a data volume, which he calls “table stakes” in the snapshot backup market. But Preston says the VMware VSS integration can’t perform application-consistent backups with Windows 2008 hosts, and with either Windows 2003 or Windows 2008 hosts, it won’t notify the application when a backup has been made or reset where it starts tracking incrementally changed data. (Think of it as resetting a car’s trip odometer after a trip is finished.)
“What this means is that anyone wishing to get proper backups of applications in Windows must run an agent of some kind in their guests in order to make this happen,” Preston wrote in a Jan. 11 blog post. He goes on to warn, “This means that any backup tools that are using only VMware’s infrastructure are going to have the same limitations.”
Symantec declined comment on the limitations cited by Preston. VMware officials confirmed that for Windows 2008, vSphere supports backups at the operating system level (as opposed to the application or transactional level). They also confirmed that vSphere’s integration with VSS doesn’t make the application aware it’s been backed up (see the trip odometer analogy above), but said through a spokesperson, “Back in the old file-level days there used to be an archive bit that was changed, and hence the application was aware of the backup. [But] the question is, does it really matter for the image level backup?”
“This has nothing to do with the archive bit,” Preston responded in an interview with Storage Soup. “Applications need to know when to truncate their transaction logs.” If transaction logs aren’t truncated, in the case of a database application, “they’ll eventually fill up and crash the database.”
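Preston’s point about log truncation is easy to see in the abstract. The sketch below is an illustrative Python simulation, not real VSS or backup-agent code, and every name in it is hypothetical. It contrasts an application-aware backup, which tells the database it may discard log entries that are already protected, with an image-only backup that leaves the log untouched:

```python
# Illustrative sketch (not VSS API code): why a database's transaction
# log grows without bound if backups never tell the application to
# truncate it. All class and method names here are hypothetical.

class ToyDatabase:
    def __init__(self):
        self.log = []          # transaction log entries

    def write(self, record):
        self.log.append(record)

    def backup(self, truncate_log):
        """Take a 'backup'; an application-aware backup signals the
        database that logged entries are now protected elsewhere."""
        if truncate_log:
            self.log = []      # safe: those entries are in the backup

aware, unaware = ToyDatabase(), ToyDatabase()
for day in range(30):                       # a month of daily backups
    for _ in range(1000):                   # ~1,000 transactions/day
        aware.write("txn")
        unaware.write("txn")
    aware.backup(truncate_log=True)         # agent-based, app-aware
    unaware.backup(truncate_log=False)      # image-only, app unaware

print(len(aware.log))    # 0: log stays bounded after each backup
print(len(unaware.log))  # 30000: log keeps growing until it fills up
```

The `unaware` case is the failure mode Preston describes: nothing ever resets the log, so it eventually consumes its volume and the database crashes.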
VMware plans to continue innovating around VSS backups, the spokesperson added. Preston’s blog post also mentions VMware is working on more granular VSS support.
In the meantime, I’m wondering if anyone out there reading this in blogland has personally encountered these problems, or better yet, any workarounds they would like to share.
Coraid, which has stayed mostly under the radar with its ATA over Ethernet (AoE) storage systems, is looking to make some noise this year.
Coraid today named a new CEO and said it closed a $10 million Series A funding round. A Series A round means this is the vendor’s first VC funding, even though it has been around since 2000.
The CEO is Kevin Brown, who was CEO of desktop virtualization startup Kidaro until Microsoft acquired it last year. He also ran marketing for storage encryption appliance vendor Decru before NetApp bought it in 2005, and then spent time as VP of NetApp’s security business unit.
The new backers are Azure Capital Partners and Allegis Capital. Coraid has also added former Cisco honcho Charlie Giancarlo and Veritas founder Mark Leslie to its advisory board, and both have invested in the company.
That’s an impressive haul of money and talent for a company that has been so quiet that its new CEO says, “Despite having been in storage industry for some time, I hadn’t come across Coraid until recently.” Apparently, Brown doesn’t read SearchStorage enough or he would know at least this or this about his new employer.
What he does know about Coraid is that its EtherDrive AoE systems don’t run over TCP/IP, which he claims gives them a performance advantage over iSCSI. He says Coraid has more than 1,100 customers and that its systems cost about $500 per terabyte running Gigabit Ethernet and $900 per terabyte with 10-Gigabit Ethernet.
Brown said when he did take a good look, “I saw Coraid from a disruptive perspective, and was impressed what they did with organic growth.”
Coraid lists NASA, Dunkin’ Donuts, the U.S. Navy, the National Institutes of Health, and several large universities as customers. Not bad for a vendor that has had one salesperson until now. With the funding, Coraid is looking to build out its sales, support, and development teams.
Brown positions Coraid as a low-cost alternative to iSCSI, which has made inroads as the low-cost alternative to Fibre Channel. He says Coraid is cheaper than iSCSI because AoE doesn’t require TCP/IP offload engine (TOE) cards. The problem with that claim, though, is that most iSCSI systems work with software initiators that also remove the need for TOE cards.
iSCSI systems also have more mature data protection and management software, while Coraid relies mainly on virtual servers and third-party applications for those features. Brown says higher-end Coraid units are coming, and points out that EtherDrive already supports SAS and solid-state drives (SSDs). While Coraid isn’t looking to take over the enterprise, its new CEO says there are plenty of organizations seeking low-cost, quality storage.
“We’re not going to attack every environment,” he said. “We’re not going to do a rip and replace on an investment banking company using SRDF. But there are a lot of new opportunities out there for us.”
IBM wants everybody to know it is still advancing its tape storage systems, even as newer technologies such as 6 Gbps SAS and solid state drives (SSD) are poised to make their mark on enterprise storage.
IBM today said its Zurich IBM Research lab, in tandem with FujiFilm, has demonstrated technology that increases the density of tape to 44 times what is possible in today’s LTO-4 cartridges.
IBM scientists say they have recorded data at an areal density of 29.5 billion bits per square inch on an advanced prototype tape. That density would enable cartridges to hold up to 35 TB of non-compressed data.
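The 44x figure lines up with LTO-4’s native capacity. A quick back-of-the-envelope check, assuming the commonly cited 800 GB uncompressed capacity per LTO-4 cartridge (decimal units throughout):

```python
# Sanity check on IBM's "44 times" claim. Assumption: an LTO-4
# cartridge holds 800 GB (0.8 TB) of uncompressed data.

lto4_native_tb = 0.8   # 800 GB per LTO-4 cartridge, uncompressed
prototype_tb = 35.0    # IBM/FujiFilm demo: up to 35 TB per cartridge

ratio = prototype_tb / lto4_native_tb
print(round(ratio, 2))  # 43.75 -- roughly the claimed 44x
```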
IBM Fellow Evangelos Eleftheriou says the demonstration involves new dual-coated particulate magnetic tape from FujiFilm that uses an ultra-fine, perpendicularly oriented barium-ferrite magnetic medium. The new tape enables high-density data recording without expensive metal sputtering or evaporation coating methods. The demonstration also involves advanced servo control technologies for more accurate head positioning, allowing a 25-fold increase in the number of data tracks that fit onto half-inch-wide tape. Other advances include new signal processing algorithms for the data channel and new low-friction giant magnetoresistive (GMR) read/write head assemblies.
Eleftheriou said his team surpassed its target of reaching 20 billion bits per square inch on tape by 2009, hitting 29.5 billion. The previous record, set in 2006, was 6.7 billion. The team’s next goal is 100 billion bits per square inch.
“We’re now in position to set the bar higher,” Eleftheriou said. “This demonstrates that tape has a lot of life left.”
So when will we see the results of this higher density in shipping products?
“I don’t know,” Eleftheriou said. “I’m a scientist, not a product guy.”
Disk drive maker Seagate Technology last night reported earnings for its fiscal second quarter of 2010 (ended Jan. 10), and the results follow IBM’s storage sales upswing with more evidence of an enterprise storage spending rebound.
“Over the course of calendar year 2009 the technology industry has improved faster than the broader economy and the storage sector has outperformed almost every other sector in technology,” CEO Stephen Luczo said on the company’s earnings call. “As a result, the demand for storage continued to accelerate throughout the calendar year.”
Still, Seagate was not planning for an economic rebound during the quarter, and officials said the stronger-than-forecast demand came as something of a pleasant surprise. The company’s top-line revenue for the quarter was $3 billion, a 33% increase year-over-year and a 14% increase sequentially. Net income was $533 million, and the company generated $753 million in operating cash flow while repaying $246 million in debt, which Stifel Nicolaus Equity Research analyst Aaron Rakers wrote in a note to clients this morning was the biggest quarterly cash flow generation the company has achieved since his firm began tracking it in 2001.
Wall Street analysts, many of whom were also pleasantly surprised that Seagate’s earnings came in above their estimates, are now looking to see this boon for Seagate ripple out to its enterprise storage OEMs, especially EMC, which reports calendar fourth-quarter earnings Tuesday.
During its conference call to report fourth-quarter earnings last night, IBM reported its storage sales were up 1% year-over-year, in line with the cautious optimism about storage sales that has been growing over the last few months.
Specific product highlights in storage included Tivoli software and the XIV disk array. “Tivoli storage continued its robust growth as customers manage their rapidly growing storage data,” said CFO Mark Loughridge in prepared remarks on the earnings call. “Data Protection as well as Storage Management grew double-digits with broad-based geography and sector growth.”
Loughridge added later, “we added more than 130 new customers to our XIV platform in the fourth quarter and 400 since the acquisition.” IBM also claimed to have taken share in both the disk and tape markets, even though tape revenue declined 10% for the quarter.
Stifel Nicolaus Equity Research analyst Aaron Rakers pointed out that the 1% year-over-year gain doesn’t tell the whole story. “This implies a sequential increase of [about] 55%, which compares to an average sequential increase of [about] 36% in [the fourth calendar quarter] over the prior 7 years,” Rakers wrote. “IBM estimates that it had gained [one percentage point] of share in the storage market during [the fourth calendar quarter of 2009].” Rakers compared this with his estimate that competitor EMC Corp.’s Information Infrastructure business grew 19% sequentially last quarter. EMC reports earnings results next Tuesday.
EMC launched a new midrange Clariion model today, the high-density Clariion CX4.
The new storage frame can hold up to 390 disks in three rack units and supports 2 TB 7200 RPM and 5400 RPM SATA drives, or a combination of SATA drives and up to 60 solid-state drives (SSD).
The new Clariions are 2U taller and five inches deeper than the standard Clariions, and let customers build larger-capacity configurations in a smaller footprint. For instance, a standard Clariion CX4-960 with 1 TB SATA drives takes up six racks to store 945 usable TB. The high-density units can store the same amount in three racks.
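The space savings follow directly from the drive math. A rough sketch, ignoring RAID and hot-spare overhead and assuming usable capacity scales with raw drive count:

```python
# Back-of-the-envelope check on EMC's density comparison.
# Assumption: usable capacity tracks drive count; RAID and
# hot-spare overhead are ignored for simplicity.

usable_tb = 945
std_drive_tb = 1       # standard CX4-960 config: 1 TB SATA drives
dense_drive_tb = 2     # high-density config: 2 TB SATA drives

std_drives = usable_tb / std_drive_tb      # ~945 drives needed
dense_drives = usable_tb / dense_drive_tb  # ~473 drives needed

# Doubling per-drive capacity halves the drive count; the denser
# enclosure (up to 390 disks in 3U) shrinks the footprint further.
print(std_drives / dense_drives)  # 2.0
```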
The Clariion announcement itself wasn’t anything all that earth-shattering, but it led to an interesting discussion with Enterprise Strategy Group analyst Mark Peters that I think bears summarizing here.
Thoughts on “high end” vs. “midrange”
I spoke Monday with a Pillar Data Systems reseller about Pillar’s new Axiom 600 controller. He said Pillar had recently come up against Symmetrix V-Max in a deal. Pillar lost, but the reseller claimed that was due to EMC’s brand recognition rather than a technical disadvantage for Pillar in this environment.
At first, the comparison between EMC’s high-end disk array and Pillar’s Axiom, which has lived squarely in the midrange, might seem a jarring one. While a fully configured Axiom 600 can scale to more than a petabyte and a half, a single V-Max engine pair can address that much capacity and scale up from there. V-Max also offers more bandwidth (8 Gbps FC vs. Axiom’s 4 Gbps FC) and mainframe connectivity through FICON, which most midrange arrays, including Pillar’s, don’t offer.
But this kind of comparison isn’t nearly as much of a stretch as it might’ve been even four or five years ago, when “midrange” or “low-end” meant “stripped of certain features and functionality.” “It used to be when you wanted a feature that was really complex or clever, you would get a high-end system,” said ESG’s Peters.
These days, the features advertised for Axiom and V-Max overlap quite a bit, from thin provisioning to quality of service to tiered storage data migration and solid-state drive support. Since EMC came out with the V-Max Symmetrix model, a successor to the monolithic DMX series, the two have also had broadly similar architectures: a matrix made up of separately scalable performance nodes (“engines” for EMC, “slammers” for Pillar) and capacity nodes (“capacity” for EMC, “bricks” for Pillar).
It also used to be that a high-end disk array could be identified by the amount of cache available to boost performance, as well as by intelligent algorithms to manage data placement on disk. With the advent of Flash for enterprise consumption, vendors like Pillar can offer the kind of cache capacities that used to be available only in the highest-end arrays: 192 GB with the Series 2 Axiom 600 controller.
The overlap between midrange and high end is also increasing within EMC’s own product line — the Clariion, which it calls a midrange array, can now achieve capacities well into the Symmetrix range, especially with the high-density model. Both products also offer Flash support, QoS, thin provisioning, drive spin down, etc.
Of course this doesn’t mean that people will begin speaking of the two categories interchangeably, at least not tomorrow or in the near future. But Peters noted that users may emphasize new types of purchasing criteria now that the old lines of sheer capacity and horsepower are beginning to blur. This is where, as the Pillar reseller mentioned, brand recognition and vendor cachet will become more important than ever. “People are buying more than a product,” Peters said. “They’re buying interoperability, a sheer number of service engineers, money going into a research lab, and global support.”
The size of the environment will also continue to play a role, but with more emphasis on risk management depending on the size and profile of the business involved. “Different size companies still have different bases on which they make purchasing decisions, and one of those is risk.”
It’s also important to recognize that different companies will use the same technical term to describe features that might still be different under the covers. “All the vendors will now say they have thin provisioning and remote replication,” Peters said. “But you always have to be careful in this game to look out for semantic similarities that may not mean the features are exactly the same.”
So the story as told by the Pillar reseller will probably continue to be retold in various forms.
As an example, Peters turned to a car analogy. “Hyundai and Mercedes might both say they have four-wheel drive,” he said. “And it may be that Hyundai’s really is just as good as the Mercedes, but I don’t necessarily trust them. That’s similar to a storage buyer [today].”