NAS vendor Isilon recorded its first profitable quarter to end 2009, three years after becoming a public company. It didn’t make it by much, but it was a milestone nonetheless.
Isilon reported net income of $140,000 for the quarter, or income of $1.6 million on a non-GAAP basis with adjustments for items such as stock-based compensation and legal fees. Its revenue for last quarter was $37.5 million, up 23% from the previous quarter and 18% year over year.
The climb to profitability was a steep one for Isilon, which stumbled after going public in December 2006 with heavy losses and questionable sales tactics that resulted in CTO Sujal Patel taking over as CEO in October 2007. Patel brought in a new management team and reduced spending while expanding Isilon’s product platform last year.
Remaining in the black will require more progress on both fronts. The fourth quarter is traditionally the biggest of the year for storage sales, and Isilon can’t afford much of a dip in sales this quarter if it is to stay above break-even. It still lost $18.9 million for the full year in 2009.
Yet Patel says his goal is a profitable 2010, despite NetApp beginning to integrate its GX clustered technology into its core operating system, a surge in EMC Celerra NAS sales, and Hewlett-Packard’s increased integration of clustered NAS from its PolyServe and Ibrix acquisitions.
“We’re planning a profitable year here,” Patel said, adding that Isilon will add new storage platforms and operating systems releases this year. “We’re continuing to innovate and build on our scale-out NAS platform and build our channel program. Most of last year was about making tough decisions that would help us position ourselves well for the future.”
Patel says Isilon’s overall product strategy is to sell storage systems that go beyond its traditional customer base of media/entertainment, life sciences, and online service providers. “The high value opportunities are bigger deals where you’re part of the mission critical infrastructure in medium to large-size customers,” he said.
“We’re going to grow our business by getting in front of larger enterprise customers who want a highly technologically differentiated scale-out NAS solution, and want to buy $500,000 or a couple million dollars of storage every single year.”
CommVault added block-level data deduplication to its Simpana data protection and management suite at the start of 2009, and introduced cloud connectivity this week. Now CEO Bob Hammer says there will be even more dedupe and cloud when Simpana 9 launches later this year.
Hammer says dedupe has been a major source of revenue for CommVault with 900 customers licensing the feature in 2009, including around 300 in the fourth quarter as CommVault increased total revenue 18% year over year to $71 million. So what’s next for Simpana’s dedupe?
“We will take deduplication up a level so it can enable us to manage data in and out of clouds and remote locations in a much more comprehensive way than we do today,” Hammer said of CommVault’s plans for Simpana 9. “Our objectives are to improve scale, the algorithms and the way we do source site deduplication. It’s a pretty significant enhancement to the product line.”
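Conceptually, block-level deduplication of the kind Simpana offers splits data into blocks, fingerprints each block, and stores only blocks it hasn’t seen before; source-side dedupe simply runs that fingerprint check at the client before sending data over the wire. The following Python sketch is purely illustrative of the technique, not CommVault’s implementation (block size, hash choice, and the in-memory store are all assumptions):

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size; real products vary


def dedupe(data: bytes, store: dict) -> list:
    """Split data into fixed-size blocks, keep each unique block once,
    and return the list of block hashes (the 'recipe' to rebuild the data)."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:   # only previously unseen blocks consume space
            store[digest] = block
        recipe.append(digest)
    return recipe


def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from the recipe of block hashes."""
    return b"".join(store[d] for d in recipe)


store = {}
original = b"A" * 8192 + b"B" * 4096  # two identical blocks plus one unique
recipe = dedupe(original, store)

assert restore(recipe, store) == original
assert len(recipe) == 3   # three logical blocks...
assert len(store) == 2    # ...but only two stored
```

In a source-side scheme, the client would ask the server whether it already holds a given digest before transmitting the block, which is why dedupe pays off most on WAN links to remote offices and clouds.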
Hammer says CommVault is unlikely to seek more OEM partners beyond Dell for its deduplication, and will instead concentrate on picking up more channel distribution partners for Simpana. That’s a different strategy from other dedupe vendors such as Quantum and FalconStor, which are trying to capitalize on dedupe demand by offering their software to storage vendors to compete with EMC’s Data Domain and Avamar dedupe products.
Hammer says CommVault clearly has the attention of backup software market leaders EMC and Symantec, the latter of which recently beefed up deduplication across its NetBackup and Backup Exec platforms. Hammer says CommVault has run into aggressive pricing tactics from those rivals. In other words, EMC and Symantec have been discounting their dedupe software, according to CommVault.
“EMC’s strategy is what I would call the more rational and better managed strategy,” Hammer said. “They’re trying to stop us in that space, and they’re aggressive about it. Symantec pricing is a lot more disorganized, I’d call it panicky. The discounts are deeper and vary between regions.”
Whatever EMC’s strategy, it seemed to work. EMC last month reported 600 new Data Domain customers last quarter and more than 50% revenue growth for Data Domain and Avamar.
Hammer says the cloud and information management will also be areas of improvement in Simpana 9. He estimates about 40 percent to 50 percent of CommVault customers have expressed interest in using the cloud for backup and long-term archiving. He says Simpana 9 will have more cloud connectivity features that go beyond the Service Pack released for Simpana 8 this week that lets customers hook into services from Amazon, Microsoft, Nirvanix, EMC and Iron Mountain.
“We’ll have a number of significant cloud enhancements,” Hammer said. “We’ve been working this for years. People are going to want cloud configurations, whether private or public. Enterprise customers will build large internal cloud structures that different divisions will access. We have a platform to enable that as a seamless tier. Likewise, we can help a customer who wants to access public clouds.”
Oracle laid out its strategy for Sun’s product lines last week, following the approval of the $7.4 billion deal by European regulators. It was met with dismay in some corners of the IT world, including some Sun channel partners, after Oracle revealed it intends to sell directly to Sun’s top 4000 customers.
But it also put to rest lingering questions about Oracle’s intentions for Sun’s hardware products, particularly in storage. So far, Oracle has pledged to continue Sun’s storage product lines, but it’s early yet. We caught up today with Evan Powell, the CEO of Nexenta Systems, a storage vendor building clustered NAS products based on ZFS to see what he thought of Oracle’s plans.
Storage Soup: How do you see the strategy playing out, or impacting your customers?
Powell: From Nexenta’s perspective, we think the general storage audience may be missing the point, which is community, community, community. As we’ve been saying over the last several months, when there was more uncertainty about Oracle’s plans, the ZFS community is extremely vibrant. There are hundreds of thousands of users, if not more. The horse left the barn a long time ago on ZFS, in the sense that the vibrancy of the community, the class, scale and number of users, is already equal to that of any proprietary, commercialized, closed legacy file system. So it’s great that Oracle is continuing to invest in ZFS and that they decided to bring out new products, the ZFS Storage Appliance and so forth. But for us, that’s sort of the icing on the cake. The cake is community, community, community, and we know firsthand, having more deployments on ZFS than any other company including Sun, that the community is very vibrant.
Storage Soup: So do you see that being any kind of issue with Oracle, or a cultural departure for them?
Powell: In terms of their support for the community? I don’t know is the real answer. What will happen, what Oracle will do…they’re saying all the right things, which is great, but again…at some level, it’s like Java. There’s no end of users. It’s going to be okay. Same with ZFS. There’s no end of users, fully open sourced, it’s going to be okay…irrespective of what Oracle decides to do.
What I saw from their announcements is that they’re going to do the right things, including continuing to support the community, and they’re getting on board more clearly in their branding, calling their product the ZFS Storage Appliance. But again, if you look at the folks in enterprise software, that could’ve been the first question they asked: ‘How big is the user base? Okay, it’s fine.’ In the storage world, the notion of a real open solution is still a bit orthogonal to people’s thinking.
Storage Soup: With any kind of announcement like this you’re going to find contrarians and skeptics, and some people expressed skepticism that what was laid out in Oracle’s strategy webcast is exactly what will happen, or that it represents the long-term plan. Is that overthinking things?
Powell: Larry Ellison is a proven and brilliant leader. I don’t exactly know what he’s going to do, but everything he’s saying is very positive. But it’s not that important specifically what Oracle does or doesn’t do with ZFS.
Intel and Micron on Friday unveiled a new 25 nanometer lithography process for the NAND wafers used to build Flash devices, saying the process will yield denser, cheaper Flash devices for consumer and commercial use.
The announcement comes less than a year after Intel and Micron first joined up to form a joint venture called IM Flash Technologies, which started by collaborating on 34 nanometer (nm) Flash components. IM Flash Technologies also has a partnership with Hitachi GST.
Tom Rampone, vice president of the Technology and Manufacturing Group and general manager of the NAND Solutions Group at Intel, said in a press conference on Friday that the vendors are qualifying 25 nm NAND wafers and sampling them to OEM customers at Intel’s fabrication plants. Rampone showed the product at Friday’s conference (see YouTube video): a 167-square-millimeter block he said is twice as dense as Flash devices created with the 34 nm lithography process.
The first products to be built using this technology will be an 8 GB multi-level cell (MLC) consumer Flash device, Rampone said, and most of the discussion Friday revolved around its consumer applications — an ability to hold 2000 songs in that small footprint, for example.
But Intel and Micron’s press release also makes reference to the product’s uses in solid-state drives (SSDs), and the 25 nanometer process holds at least some hope for enterprise users interested in Flash but put off currently by its high prices and relatively low densities. Moreover, traditionally MLC drives have been first to market and seen as consumer-grade, but recently SSD vendors like STEC and system builders like WhipTail have come along claiming to offer enterprise-level reliability and endurance with MLC Flash.
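Some rough arithmetic makes the density and capacity claims above plausible. These are our back-of-the-envelope estimates, not figures from Intel or Micron:

```python
# Cell area scales roughly with the square of the lithography feature size,
# so shrinking from 34 nm to 25 nm fits about (34/25)^2 more cells into the
# same die area, which is consistent with the "twice as dense" claim.
scaling = (34 / 25) ** 2
assert 1.8 < scaling < 1.9   # roughly 1.85x

# "2000 songs in 8 GB" implies about 4 MB per song, i.e. a typical
# compressed audio track of a few minutes.
mb_per_song = 8 * 1024 / 2000
assert 4.0 < mb_per_song < 4.2
```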
Quantum CEO Rick Belluzzo says his company has a new “significant” OEM partner for its data deduplication backup software.
Belluzzo refused to name the new partner, saying the partner will make that announcement when it is ready. His revelation came on the same day Fujitsu made a deduplication announcement but did not refer to any partners.
Quantum sells its own DXi family of data deduplication virtual tape libraries (VTLs) and NAS appliances, and also sold dedupe through an OEM deal with EMC until EMC acquired Quantum rival Data Domain for $2.1 billion last July. Since then, Belluzzo has been talking to new prospective partners.
During Quantum’s earnings conference call with analysts Thursday night, Belluzzo said the deal was in the “final stages of completion.” In a one-on-one interview after the call, he said it was a done deal although wouldn’t give any more information except that the new OEM partner will use Quantum’s deduplication software on the partner’s hardware, just as EMC did with its Disk Library platform.
“I would just say we have an agreement, and we’re proceeding around implementation,” Belluzzo said. “It’s all happening pretty fast. The EMC-Data Domain acquisition really lit a fire under the industry in terms of people determining what their [deduplication] product roadmap should look like. We’ve been in a lot of conversations. For something to close as quickly as this, it really shows how high of a priority deduplication is becoming.”
NetApp and Hitachi Data Systems are the most obvious candidates among major storage vendors looking for a backup dedupe partner. NetApp was outbid by EMC for Data Domain, and HDS was close to sealing an OEM deal with Data Domain before Data Domain was acquired. Dell also lacks its own deduplication product, but it partners with CommVault and Symantec for dedupe software. Industry insiders also expect Dell to eventually add Data Domain devices to the list of EMC products that it resells.
Then again, maybe Quantum’s partner already launched its dedupe feature. Fujitsu says it has added deduplication to its Eternus CS disk appliances, including support for Symantec OST. Quantum has worked closely with Symantec on OST, and Fujitsu is already a Quantum tape partner. In fact, Belluzzo said on the call that Quantum expanded its tape reseller relationship with Fujitsu. When asked if Quantum’s new dedupe partner was a Tier 1 vendor, he said, “it’s a significant company but I don’t know how you’d characterize its tier.”
Quantum reported $182 million in revenue last quarter, an 11% year-over-year decline — mainly because of lost revenue from EMC and tape OEM deals. Disk and software – a category made up largely of DXi sales – came to $25 million, down from $31 million a year ago. Belluzzo says nearly all the EMC revenue was lost, but branded DXi sales increased.
Belluzzo also said Quantum’s legacy tape business was stronger than it has been in a long time, leading to three consecutive profitable quarters. “We have positioned our tape business as a healthy place,” he said, claiming Quantum is the leader in automated libraries for open systems.
Belluzzo said he’s not worried about that business now that Oracle has closed its Sun acquisition, and revealed plans to continue Sun’s StorageTek tape business. “There’s been a lot of change in the tape market in recent years,” he said, “including the continuing saga of StorageTek being acquired by Sun, rumored to be sold, and now Oracle says they’re going to keep it and go direct and not through the channel. We’re positioned as well as we’ve been in years.”
IT management software vendor SolarWinds today acquired storage management vendor Tek-Tools for $42 million in cash and stock. SolarWinds executives say by adding Tek-Tools Profiler software, SolarWinds can combine storage and virtualization management with its Orion network and applications management portfolio. Its goal is the elusive “end to end IT management solution.”
Profiler provides backup reports and monitoring and storage resource management, and in recent years made a push into virtual server management.
SolarWinds chief product strategist Kenny Van Zant said the acquisition makes sense because “silos are breaking down” between storage and networking teams and they’re looking for a common management tool. He said this trend is driven by consolidation of storage and networking around Ethernet and iSCSI SANs.
“Everyone is now plugging storage into their network, so storage has become a network device,” he said.
SolarWinds CEO Mike Bennett said his company will sell Profiler separately and will also make it available as an integrated module of Orion by the end of the year.
Both companies have headquarters in Texas. SolarWinds is based in Austin and Tek-Tools in Dallas. Bennett said about 60 Tek-Tools employees will join SolarWinds, including CEO Ken Barth, senior management, and its engineering team in Chennai, India.
SolarWinds completed an IPO last May, making it one of the few companies to go public in 2009. SolarWinds reported $32.4 million in revenue for the third quarter of 2009 with $12 million in income. It will report fourth quarter and 2009 yearly revenues Feb. 8. The vendor claims more than 88,000 customers and says Tek-Tools has about 1,300 customers.
Bennett said SolarWinds is paying $32 million in cash for Tek-Tools with the rest in stock. He expects about $4 million to $5 million in revenue from Tek-Tools software this year and an operating loss of around $3 million to $3.5 million. Bennett said Tek-Tools has been operating at a loss.
This will come as no surprise to many of you, some of whom I may have interviewed for my cloud storage feature published this month: two new analyst research reports show that despite the hype around cloud storage, its actual uptake among enterprise users has been minimal.
According to TheInfoPro’s Wave 13 survey of 309 Fortune 1000 and midsize enterprise storage professionals between August and November of last year, cloud storage showed up at the bottom of the list of technologies storage pros cited as likely to change their storage architecture in the next year.
TIP managing director Robert Stevenson, speaking in an audio slideshow showing Wave 13 highlights on TIP’s website, said that “Storage clouds…which [are] incredibly hyped in the enterprise, marketing and press, [are] really fairly low on the list of what storage pros think will change their storage architecture.” Storage clouds garnered about 5% of the responses in this category, putting it alongside file virtualization, enterprise SAS, 16 Gbps FC, and 10 GbE Storage at the bottom of TIP’s chart. Virtual tape libraries, SSD and Fibre Channel over Ethernet (FCoE) rank above “storage clouds” on the list.
Also this week, Forrester Research analyst Andrew Reichman published a report titled “Business Users are Not Ready for Cloud Storage.” According to the report, based on the results of Forrester’s Enterprise And SMB Hardware Survey, North America And Europe, conducted in the third quarter last year, users are well aware of the cloud storage concept, “reflecting the buzz in the market.”
However, whether those users had interest in deploying the concept anytime soon was an entirely different matter. Of the 1,272 decision-makers surveyed, 43% said they were “simply not interested” while another 43% said they were interested but had no immediate plans to move forward with cloud storage.
“Respondents in all geographies and of all company sizes appear to have little interest in moving their data to the cloud any time soon,” the report concludes. “There is long-term potential for storage-as-a-service, but Forrester sees issues with guaranteed service levels, security, chain of custody, shared tenancy, and long-term pricing as significant barriers that still need to be addressed before it takes off in any meaningful way.”
So what DO the analysts see as the hot topic in 2010? According to TIP, data deduplication is far and away the hottest technology for the year so far, with about 40% of respondents saying they expect it to impact their storage architecture. Block virtualization, thin provisioning and server virtualization follow with between 20 and 30 percent of responses for each of those technologies.
TIP also sees IT spending on SAN and NAS storage making a recovery this year, with NAS leading the way at 33% projected growth. “Block storage will start to show a turnaround later in the year,” said Stevenson. “It takes longer for database teams to talk to server teams and for server teams to talk to storage professionals [about what they need].”
Still, as we’ve heard CEOs like EMC’s Joe Tucci say, things aren’t getting back to 2007 levels anytime soon. The Wave 13 survey showed a continued reduction in the number of dialogues within organizations that are focused on business expansion, a pattern Stevenson said has been consistent for the last 18 months or so. Instead, key initiatives remain focused more narrowly on infrastructure-related issues like disaster recovery, archiving and regulatory compliance.
Among the vendors garnering the “exciting” label from respondents in the Wave 13 survey were EMC, which Stevenson said got a big jump with its acquisition of Data Domain; IBM, whose “excitement index” is up almost 100% over 2008 levels among Fortune 1000 respondents; HDS with its dynamic provisioning; and NetApp and Compellent, which caught the most attention in the midrange. Other vendors mentioned include 3PAR and primary storage capacity optimization vendors GreenBytes, Ocarina and StorWize.
In a followup to my post Monday about VMware and VSS integration issues as pointed out by W. Curtis Preston, Symantec emailed over the following statement from Peter Elliman, senior manager, product marketing for the Information Management Group.
Yes, we agree, that VMware should update their [sic] VSS writer code and we believe this is the best place for this issue to be resolved. We’re probably not the only one who believes this which is why only two vendors have created a work around here. VMware tools is constantly updated [sic], so when updates are provided by VMware, it lowers administration effort which is not trivial when you have 100s of VMs. With third party VSS writer code in a VM you run the risk that an upgrade from VMware tools will cause a conflict with the VSS writer there and you have separate code that has to be updated. This is why we focus on integration with VMware, rather than work-around efforts. We believe that VMware will address this issue in the future. Finally, we recommend agents when protecting mission-critical applications within VMs because not only does it assure consistency and proper log management, it also offers many more recovery options tailored to that application, e.g. Oracle tablespace recovery, or SQL Server filegroup recovery.
So there you have it. Until VMware’s VSS integration changes, we will probably still see users deploying backup software agents on guest machines, as recommended by Symantec.
Among the updates Symantec announced today to its NetBackup 7.0 and Backup Exec 2010 backup applications are enhancements to the granular backup of applications running in virtual servers, in part through integration with VMware’s new vStorage APIs for data protection.
These APIs are among the more widely hailed updates in vSphere 4 for storage pros. They promise to eliminate the cumbersome VMware Consolidated Backup (VCB) from the infrastructure and allow existing enterprise backup software tools to make backups directly from virtual machines, the same way they’ve been doing for physical servers.
Backup Expert W. Curtis Preston has been among those claiming the vStorage APIs for data protection will be a boon for improving virtual server backups, but points out in light of Symantec’s announcement that its approach of integrating NetBackup 7 with VMware’s VSS implementation leaves something to be desired.
According to Preston’s research, VMware’s VSS support will perform consistent backups of a data volume, which he calls “table stakes” in the snapshot backup market. But Preston says the VMware VSS integration can’t perform application consistent backups with Windows 2008 hosts, and in the case of either Windows 2003 or Windows 2008 hosts, it won’t notify the application when a recent backup has been made or refresh where it starts tracking incrementally changed data. (Think of it as resetting the trip speedometer in a car after a trip is finished).
“What this means is that anyone wishing to get proper backups of applications in Windows must run an agent of some kind in their guests in order to make this happen,” Preston wrote in a Jan. 11 blog post. He goes on to warn, “This means that any backup tools that are using only VMware’s infrastructure are going to have the same limitations.”
Symantec declined comment on the limitations cited by Preston. VMware officials confirmed that for Windows 2008, vSphere supports backups at the operating system level (as opposed to the application or transactional level). They also confirmed that vSphere’s integration with VSS doesn’t make the application aware it’s backed up (see the trip speedometer analogy above), but said through a spokesperson, “Back in the old file-level days there used to be an archive bit that was changed, and hence the application was aware of the backup. [But] the question is, does it really matter for the image level backup?”
“This has nothing to do with the archive bit,” Preston responded in an interview with Storage Soup. “Applications need to know when to truncate their transaction logs.” If transaction logs aren’t truncated, in the case of a database application, “they’ll eventually fill up and crash the database.”
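A toy model makes that failure mode concrete. The class below is purely illustrative (no real database works this way internally), but it shows why an application-aware backup that truncates the log keeps a database healthy, while an image-level backup that never notifies the application lets the log fill up:

```python
class ToyDatabase:
    """Illustrative model of transaction-log truncation; not a real database."""

    def __init__(self, log_capacity=5):
        self.log = []
        self.log_capacity = log_capacity

    def write(self, txn):
        if len(self.log) >= self.log_capacity:
            raise RuntimeError("transaction log full: database halts")
        self.log.append(txn)

    def backup(self, truncate_log=True):
        snapshot = list(self.log)   # consistent point-in-time copy
        if truncate_log:            # application-aware backup notifies the app,
            self.log.clear()        # so it can safely discard logged changes
        return snapshot


# Application-aware backup: the log is truncated and writes keep flowing.
db = ToyDatabase(log_capacity=5)
for i in range(4):
    db.write(f"txn-{i}")
db.backup(truncate_log=True)
for i in range(5):
    db.write(f"txn-{i}")            # room again after truncation

# Image-level backup with no notification: the log keeps growing until
# it hits capacity and the database "crashes".
db2 = ToyDatabase(log_capacity=5)
for i in range(4):
    db2.write(f"txn-{i}")
db2.backup(truncate_log=False)
crashed = False
try:
    db2.write("txn-4")
    db2.write("txn-5")
except RuntimeError:
    crashed = True
assert crashed
```

This is the gap an in-guest agent closes today: it tells the application the backup succeeded, so the application knows it is safe to truncate its logs.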
VMware plans to continue innovating around VSS backups, the spokesperson added. Preston’s blog post also mentions VMware is working on more granular VSS support.
In the meantime, I’m wondering if anyone out there reading this in blogland has personally encountered these problems, or better yet, any workarounds they would like to share.