Storage Soup


May 18, 2009  9:00 AM

Hifn adds speed and software to data reduction cards

Beth Pariseau

Hifn (now part of Exar Corp.) is taking another crack at getting major OEMs to ship products integrated with its DR line of compression, encryption and deduplication hashing acceleration cards, which could potentially spur the development of primary storage data deduplication offerings.

Prior to its acquisition by Exar, Hifn began sampling Express DR 250 and 255 cards to OEMs, but they hadn’t made their way into any announced third-party products. At this spring’s SNW, Hifn launched its own product based on the DR 255.

It was unclear why the chip boards, which perform processor-intensive data reduction and encryption in silicon, hadn’t caught on with OEMs. Hifn’s announcement today of its new DR 1600 series may tacitly answer that question with new features such as high availability and boosted performance.

The DR 1600 line consists of six new models offering different levels of performance and combinations of compression, encryption, and dedupe. The Express DR 1600, 1610 and 1620 perform LZS compression and encryption only, at speeds of up to 300 MBps, 900 MBps, and 1800 MBps, respectively. The Express DR 1605, 1615, and 1625 run at the same three levels of throughput, but offer compression, encryption and hardware-based hashing for data deduplication (hash comparisons must still be performed by an OEM in software).

Hifn has also developed new software to accompany the cards for this release, including a new API that standardizes and eases integration of the cards into storage products, making it quicker for OEMs to take them to market. The 1600 series also adds high-availability software that can fail over between cards or “pass through” traffic, meaning that if a card fails, the host can still perform compression, encryption, and dedupe in software.
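
The pass-through behavior described above suggests a data path that tries the accelerator first and falls back to software when it fails. Here is a minimal sketch of that pattern, assuming a hypothetical card handle with `compress()` and `hash()` methods (these names are illustrative, not Hifn’s actual API):

```python
import hashlib
import zlib


def reduce_block(block: bytes, card=None):
    """Compress a block and compute its dedupe fingerprint, preferring a
    hardware card but "passing through" to software if the card fails.

    `card` is a hypothetical accelerator handle; both method names are
    assumptions for illustration only.
    """
    if card is not None:
        try:
            return card.compress(block), card.hash(block)
        except IOError:
            pass  # card failed: fall through to the software path
    # Software fallback: zlib compression plus a SHA-256 fingerprint
    return zlib.compress(block), hashlib.sha256(block).hexdigest()


compressed, fingerprint = reduce_block(b"example data" * 100)
```

The same call site serves both paths, which is roughly what a failover layer has to guarantee: the rest of the stack never needs to know whether the reduction happened in silicon or in software.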

According to Zack Mihalis, director of product marketing for Hifn, the new cards are sampling to OEMs and will become generally available at the end of July. Mihalis claimed that several large OEMs are considering the cards, potentially for primary storage dedupe. EMC, NetApp and Quantum are traditionally among Hifn’s OEMs, but Mihalis declined to disclose if any of them are sampling the DR 1600 cards.

Still, some industry analysts see this as the first step toward primary storage data reduction products becoming as ubiquitous as those for backup workloads. “Hifn has some very major OEMs as clients,” said IDC analyst Benjamin Woo. “This release is very timely – in this downturn we need to be more efficient with how we deal with data.”

However, Taneja Group analyst Jeff Boles pointed out that there’s still plenty of engineering work to be done to produce primary storage dedupe products, even with some of it already completed by Hifn. “Keep in mind that Hifn is hashing at 1,800 megabytes per second, but that’s not the speed of writing out to disk,” he said. “It’s still up to someone to make maximum use of this on disk, with caching, etc. Can you use this to service a random workload? That may be an engineering feat in itself.”
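
Boles’s distinction, fast hashing in hardware versus the work that remains in software, can be sketched in a few lines. In this toy sketch the card’s hashing step is simulated with `hashlib`, and the index and chunk-store layout are purely illustrative assumptions:

```python
import hashlib


class DedupeIndex:
    """Software-side hash comparison: the card fingerprints each chunk
    (simulated here with hashlib); deciding whether a chunk is a
    duplicate, and laying unique chunks out on disk, is still the
    OEM software's job."""

    def __init__(self):
        self.seen = {}   # fingerprint -> index of the stored chunk
        self.store = []  # unique chunks, in write order

    def write(self, chunk: bytes) -> int:
        fp = hashlib.sha256(chunk).digest()  # the hardware-accelerated step
        if fp in self.seen:                  # the software hash comparison
            return self.seen[fp]
        self.store.append(chunk)
        self.seen[fp] = len(self.store) - 1
        return self.seen[fp]


idx = DedupeIndex()
a = idx.write(b"block-1")
idx.write(b"block-2")
c = idx.write(b"block-1")  # duplicate: nothing new is stored
```

Even in this toy version, the lookup, caching, and on-disk layout dominate the work once hashing is free, which is exactly the engineering gap Boles is pointing at.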

May 15, 2009  7:27 PM

BakBone broadens its message

Dave Raffo

Ten days after picking up Exchange CDP vendor Asempra, BakBone Software Thursday grabbed an entire message management division.

BakBone acquired ColdSpark for $15.9 million in cash and stock, and ColdSpark’s products will make up BakBone’s new division. ColdSpark founder and CTO Scott Brown becomes general manager of the message management group.
With Asempra, ColdSpark and BakBone’s NetVault platform, BakBone can protect data in a structured repository or file system, or in motion as it moves across messaging systems. This also takes it beyond pure backup, where it can probably be no more than a niche player competing with Symantec, EMC, IBM, Hewlett-Packard, CA, and CommVault. Now BakBone is a player in email management as well. That’s another crowded market, but there is still opportunity for a small player.

BakBone’s shopping spree comes within months of its taking a big step to remove a cloud that has hovered over the company for years. BakBone was knocked off the Toronto Stock Exchange in 2004 after accounting irregularities forced it to restate earnings for 2003 and 2004. It has been working on getting its books compliant and current since then, and finally did that in February by filing annual reports for 2007, 2008 and the first three quarters of its 2009 fiscal year. BakBone is working on getting re-listed now that its financials are caught up.

My guess is BakBone will continue to move aggressively in the coming months, whether it’s making more acquisitions or pushing out its message about messaging.


May 15, 2009  3:22 PM

Deja vu: Emulex rejects Broadcom

Dave Raffo

Emulex rebuffed Broadcom again today, advising its shareholders to reject the cash tender offer Broadcom made May 5 in its latest attempt to acquire Emulex.

If you’ve been following this story, today’s news is no surprise. Since Broadcom first went public with its offer of $9.25 per share April 21, Emulex executives have claimed the offer undervalued the stock and that Emulex could do better on its own. The Emulex board rejected Broadcom’s original offer in December, then on May 4 it publicly declined the Broadcom offer. Broadcom then decided to make its offer directly to the Emulex shareholders.

The reasons cited by Emulex today:

  • Significantly undervalues Emulex’s long-term prospects and does not adequately compensate stockholders for their shares;
  • Is opportunistic, given that Broadcom was aware of significant new non-public design wins by Emulex in converged networking prior to making its proposal on April 21, 2009;
  • Does not compensate Emulex’s stockholders for a range of other initiatives being undertaken by Emulex that will start to meaningfully impact earnings within the next year and beyond;
  • Is clearly timed to take advantage of Emulex’s depressed stock price, which has been impacted by the current unprecedented negative macroeconomic conditions;
  • Is funded in significant part by Emulex’s own cash, resulting in Broadcom offering only $5.59 per share for the operations of Emulex; and
  • Is highly conditional, creating substantial uncertainty as to whether Broadcom would be required to consummate the Offer.

Broadcom’s offer to Emulex shareholders expires June 3. The Ethernet chipmaker is looking to acquire Emulex’s Fibre Channel technology to position itself for the consolidation of storage and server networks expected to take place over the next few years.

Emulex’s stock opened today at $10.52.


May 15, 2009  9:18 AM

05-14-2009 Storage Headlines

Beth Pariseau

Another bumper crop o’ news this week.

(0:25) EPA begins long process to green storage specification

(1:40) EMC’s Storage Configuration Advisor provides policy-based monitoring, overlaps other EMC SRM offerings

(4:03) HP prepares Taxonom data classification SaaS service

(6:04) F5 Networks makes ARX file virtualization switch more tier friendly

(7:36) NetGear ups ante in SMB network-attached storage market with ReadyNAS 2100

(9:07) Sun discloses possible bribery, shareholder lawsuits
Sun details Oracle, IBM negotiations in SEC filing


May 14, 2009  8:11 PM

Analyst: SMB NAS products should cozy up to the TV

Beth Pariseau

If you’ve been paying attention to recent storage product news, you may detect a distinctly consumer-ish flavor. Several vendors have recently redoubled efforts to reach the ever-elusive small business, home office and tech-savvy consumer. So far this year, the wave of new products has included EMC’s Iomega StorCenter ix4, systems from Fabrik Inc., which was acquired by Hitachi GST, Seagate Technology’s BlackArmor NAS, and various ReadyNAS systems from NetGear.

One analyst who follows the consumer space closely says these products may be hitting the market at just the right time, but vendors still may need to tweak their approach to get the attention of shoppers at Best Buy.

“Digitization is reaching critical mass” in the home and small-office market, said ABI Research digital home group senior analyst Jason Blackwell in an interview with Storage Soup this week. Blackwell recently authored a market research report declaring that consumers are growing comfortable with home networks and network-attached devices. Products have been refined to offer easier installation and more features, and digital multimedia has become mainstream for users who want to keep music and photos on home networked storage products.

Still, storage vendors and customers may need help finding each other. Retailers don’t always provide that help yet.

“There needs to be continued education in this market, of retailers as well as consumers,” Blackwell said. “Usually [home storage systems] end up at Best Buy in the networking section. They really need to be located closer to the TVs” so consumers associate the storage boxes with what they’re good for – streaming multimedia. In the networking section, they’ll be mixed in with routers and other devices.

“Typically stores have really good salespeople for televisions, but the same cannot always be said about the networking section,” he said. “Customers are kind of left on their own.”

The products themselves can get closer to other devices as well, Blackwell said, in more ways than one – through integration and automation of media adapter cards to more easily network with those devices, and through attention to industrial design to look more like them. “Products need to look good, and provide a good overall experience for consumers,” Blackwell said.


May 14, 2009  1:03 PM

CommVault sales slip, looks to cloud for sunnier days

Dave Raffo

Even with the sales expectation bar lowered due to the economy, CommVault still missed it by a wide margin last quarter. Now CommVault CEO Bob Hammer is looking for data deduplication and management of storage clouds to pull his company out of its slump.

CommVault’s revenue of $56.1 million last quarter was down 1% from last year and down 7% from the disappointing previous quarter, and well below its previous forecast of $63 million to $67 million. CommVault’s net income of $200,000 for the quarter was down from $6.2 million in the same quarter last year.

Hammer blamed the poor results mainly on the economy, compounded by pricing discounts from his larger competitors Symantec and, to a lesser extent, EMC with its Avamar products.

“The numbers weren’t good,” Hammer told StorageSoup. “We got hit pretty hard clearly, but most of it was the economy. We found customers freezing budgets, reducing budgets, reducing capex. We also saw more competitive pricing pressures, but the big issue was the market locked up.”

The good news, Hammer says, is CommVault has already seen a thaw in spending budgets and strong interest in sales of Simpana 8 driven by deduplication. CommVault released Simpana 8 in late January, and its large OEM partners Dell and Hitachi Data Systems will begin selling it this quarter.

CommVault’s internal goals call for revenues to increase in double-digit percentages this quarter, but the company lacked the confidence to give any forecast. Hammer did say many customers’ budget restrictions have lifted.

“It’s too early to call this a big thaw, but it looks like the fundamentals are in place,” Hammer said. “The whole psychology is a lot more positive. Budgets are there and customers are initiating projects. There’s still budget scrutiny, but it seems to be a lot easier to work with customers to close the deal.”

Hammer said CommVault shuffled its workforce to try to increase revenue by placing more people in sales and reducing other areas. The vendor will also offer “more flexible” pricing and payment models to counter what Hammer calls Symantec’s “kill CommVault in the cradle” discount programs. CommVault’s average selling price dropped to around $200,000 last quarter from $250,000 the previous quarter.

Hammer said Simpana 8 gained several hundred customers in the quarter, including more than 100 for its block-level dedupe. He says the software dedupe product had a high win rate against dedupe appliances from Data Domain, Quantum and others.

“The release was extremely successful, which sounds interesting given that we missed our number,” Hammer said.

CommVault is already looking to Simpana 9, which will likely be in beta late this year and in general release in mid-2010. The concentration will be on helping service providers manage storage in the cloud. Hammer says managed service providers are already a fast-growing segment of CommVault’s customer base.

“Storage clouds represent a natural target for Simpana,” he said. “There is no universal automated platform to manage internal and external clouds in a large global enterprise. We’ve been working on several innovative concepts to enable Simpana to be the first fully automated platform to deal with key aspects of cloud computing.”


May 12, 2009  11:04 PM

Sun details Oracle, IBM negotiations in SEC filing

Beth Pariseau

In the wake of lawsuits filed by shareholders over its acquisition by Oracle, Sun Microsystems called for a meeting of shareholders to vote on the proposed merger and released background information about the negotiation process in an SEC filing today. And it’s a fascinating read.

The industry was almost immediately abuzz after the filing hit Sun’s website – Sun revealed in the document that there had been a third suitor in negotiations along with Oracle and, as widely reported, IBM. Aside from Oracle, no companies are actually named in the document. The others are referred to instead as Party A and Party B.

From widely reported details about Sun’s negotiations with IBM, it seems clear that Big Blue is Party A. Party B is much more mysterious, but a source told Bloomberg News that Party B was Hewlett-Packard Co. According to the filing, Party B did due diligence but never made a formal offer.

The details of how negotiations fell apart with Party A are interesting, with offers ranging from $8.40 to $10 per share.

On April 3, 2009, Party A indicated that it wished to bring the process to a close. On April 4, 2009, legal counsel for Party A delivered to us and our counsel two versions of a merger agreement, indicating that these agreements represented Party A’s final offer to acquire us and that such offer would expire at 6 p.m. that day if one of the two agreements were not executed by us prior to that time. One of the draft agreements proposed a price per share for our common stock of $9.40 in cash and the other proposed a price of $9.10 per share in cash. Each of the agreements contained certain material terms related to transaction certainty to which we and our advisors had previously objected. The $9.40 agreement also required us to take certain actions as a condition to Party A’s obligation to take certain steps to obtain antitrust clearance, which we had previously communicated to Party A that our management considered impossible for us to satisfy. The $9.10 agreement did not contain this condition.

Oracle paid $9.50 per share, for a total of $7.4 billion including Sun’s debts.

There has been speculation since the Oracle deal was announced about whether or not Oracle intends to maintain Sun’s hardware business going forward, or spin it off to another vendor. However, the Sun SEC filing today also disclosed that Sun will be bought whole by Oracle and operated as a wholly owned subsidiary. Another section of the document indicates Oracle had approached Sun with offers to buy just the software business, which went nowhere.

The document also goes into fine-grained detail about estimated severance for practically all of Sun’s upper management, including CEO Jonathan Schwartz ($9 million) and EVP of Systems John Fowler ($2.7 million). Whether or not they get those payoffs soon after the expected acquisition close (August 2009) is still anybody’s guess.


May 12, 2009  6:09 PM

All eyes on NetApp as earnings report approaches

Beth Pariseau

Because of how NetApp Inc.’s fiscal quarters fall, it was the first storage vendor to report results that included the month of January this year. As its next earnings call approaches on May 20, Wall Street analysts are paying close attention to see what NetApp’s results will say about March and April.

So far much of the speculation is derived from other vendors’ reports on their January quarters, which in EMC‘s case included a prediction that storage spending will remain flat in the second calendar quarter of this year, and probably in the third quarter, too.

This week, however, financial analysts revised estimates in notes to clients, predicting that NetApp’s revenues will be down, not flat, well below the Street consensus of $863-$865 million for the quarter (which would be down from $873 million for the previous quarter). Stifel Nicolaus analyst Aaron Rakers predicted the number will be closer to $830 million, while RBC Capital Markets analyst Jared Rinderer pegged his estimate at $840 million.

According to Rakers, “Derivative data points and our channel checks lead us to believe that NetApp will miss Street revenue estimates by a fair margin, albeit likely offset by another quarter of better-than-anticipated [operational expense] management.”

Specifically –

(1) EMC reported CLARiiON revenue declined by 18% [year over year (yr/yr)] and we estimate 33% sequentially. EMC did report that its Celerra revenue grew double-digits yr/yr during 1Q09 (vs. +42% yr/yr in 2008).

(2) IBM, which accounted for 6% of NetApp’s Jan 09 revenue (seasonal strength relating to IBM’s 4Q08), reported that its storage revenue declined by 20% yr/yr, or we estimate as much as 40% sequentially.

(3) Arrow and Avnet, which account for 20% of NetApp’s total revenue (~30% of indirect revenue), both highlighted continued weak enterprise spending trends over the past few weeks.

(4) Europe has been consistently highlighted as the weakest geography in terms of IT spending trends. NetApp generated 36% of its revenue from EMEA last quarter.

Rinderer said the channel had executed well, and focused more on regional weakness in EMEA. Rakers placed emphasis on NetApp’s continued efforts to cut costs; if that’s truly the only bright spot for NetApp this quarter, that’ll make it the second quarter in a row. CFO Steve Gomo opened NetApp’s previous earnings call by saying, “The financial highlight of our quarter was strong expense management.”


May 11, 2009  7:23 PM

Tandberg hitches hopes to new VTL

Beth Pariseau

Tape backup company Tandberg is battling to establish a new foothold in the disk-based data protection market following the bankruptcy of its parent holding company in Europe.

Late last month Tandberg Data’s Norwegian parent holding company filed for bankruptcy and was sold to creditor Cyrus Capital after it couldn’t repay a lapsed loan. The holding company, Tandberg Data ASA, along with an R&D arm called Tandberg Storage ASA, were an umbrella over four regional sales offices in the U.S., Germany, Singapore and Tokyo. Nothing has changed yet for those regional subsidiaries in the bankruptcy.

McClain Buggle, product manager for Tandberg Data Corp. U.S., says the U.S. organization has its own development team in Boulder, Colo., and “we are moving forward with the development of our product line and expanding our offerings.”

Today, Tandberg launched a virtual tape library (VTL) called the DPS1000 series. The VTL offers features like virtual tape stacking, in which data is stacked on virtual tape cartridges before being exported to physical tape for maximum tape utilization. Customers have the option of policy-driven tape export or native tape export that matches virtual tapes to physical tapes created by the backup application.

Tandberg’s legacy is tape systems, and this is one of its first forays into disk-based backup. The product will face hurdles in a market that’s been busily shoring up checkbox features and barriers to entry for some time now. The DPS1000 doesn’t allow the backup application to control writes to tape, an omission that has been problematic for some users of VTLs for years. APIs such as Symantec’s OpenStorage (OST) have been developed to overcome the issue with other VTL vendors.

Another key feature for VTLs today is the ability to deduplicate backup data. The importance of this feature for VTL users was the impetus for the partnership between EMC and dedupe VTL maker Quantum last year as well as IBM’s acquisition of Diligent. Buggle said Tandberg wants to offer dedupe, but is still wrestling with the “limitations on writing to tape,” which require either a method of deduping data on tape or a way to quickly reinflate data into its native format before writing it to a physical tape device. There have also been moves made elsewhere in the market on this issue, by backup software vendors CommVault with Simpana 8, and CA Inc. with ARCserve 12.5, which can both dedupe data sent to physical tape.
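
The “reinflate” path Buggle describes amounts to reassembling the native backup stream from deduplicated chunks before the tape drive ever sees it. A toy illustration of that step (the recipe/store layout here is an assumption for illustration, not how any of these products actually store data):

```python
def rehydrate(recipe, store):
    """Rebuild a backup image in its native byte order from a dedupe
    store: `recipe` lists chunk indexes in stream order, and `store`
    holds each unique chunk exactly once."""
    return b"".join(store[i] for i in recipe)


store = [b"AAAA", b"BBBB"]         # unique chunks, stored once each
recipe = [0, 1, 0, 0]              # the original stream repeated chunk 0
native = rehydrate(recipe, store)  # the full stream a tape drive expects
```

The sketch also hints at why this is hard at tape speeds: every reference in the recipe turns into a read from the dedupe store, so rehydration throughput is gated by random reads rather than sequential tape bandwidth.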

The DPS1000 is an iSCSI-based appliance, and Tandberg is looking to use it to appeal to midmarket shops, according to Buggle. “We’re not trying to compete with the enterprise guys,” he said. Buggle said the company is aware of another trend in the VTL space: a move among small and midsized companies toward disk-only interfaces such as those offered by Data Domain and ExaGrid.

According to research by IDC, VTL is an $877 million market, with the overall data protection market placed at $2.6 billion. “I understand the trend, but I don’t think the VTL fades off completely,” Buggle said. Users may yet find that adding processes at the backup software level is problematic for performance reasons, he said.

Still, storage analysts say there are table stakes in this market that Tandberg will have to catch up with if it hopes to gain significant traction. “I don’t know what hope a VTL has at this point without deduplication,” said backup expert W. Curtis Preston. “I also can’t imagine what it’s like for a company to begin developing a new dedupe product now.”


May 11, 2009  6:53 PM

Remote data protection vendors cozy up to Microsoft

Beth Pariseau

Microsoft’s annual TechEd user conference kicks off today in Los Angeles, accompanied by the usual flurry of supporting news announcements from industry vendors. Today the theme seems to be remote data protection, whether file delivery to branch offices, or data backup and disaster recovery.

Riverbed Technology Inc. is the first wide-area data services (WAN optimization/WAFS) vendor to announce that it will support a new feature called DirectAccess when it becomes available in Windows Server 2008 R2, due out in 2010. DirectAccess will create “the equivalent of a VPN tunnel” for Windows 7 remote clients attaching to a Server 2008 R2 host in the data center. Another coming feature called BranchCache will allow files to be stored locally at branches for Windows 7 clients. BranchCache will be certified to run on Riverbed’s Steelhead appliances at the branch office, according to Riverbed director of product marketing Apurva Dave.

As always when Microsoft expands into new areas, there’s the specter of the operating system vendor subsuming the value-add of smaller, more specialized players, which Dave admitted was a fear for Riverbed when it first heard about BranchCache. However, as with its Windows Storage Server product, Microsoft is pulling in partners for delivery, and Riverbed is prepared to sell BranchCache as part of “the complete picture for the branch,” which includes non-Windows and non-2008 Windows clients, Dave pointed out.

Also at TechEd, FalconStor Software Inc. will demonstrate a new product offering with partner Idera Software for backing up and single-instancing SharePoint documents. Idera’s SharePoint backup software will do the data protection; FalconStor’s File-interface Deduplication System (FDS) will do the data reduction. The co-marketed products will be available from both vendors.

Rounding out the remote data protection picture, Double-Take Software also indicated it’s on the Microsoft bandwagon today with the announcement that its GeoCluster integrates with Windows Server 2008 failover clustering and Hyper-V.

Finally, Sanbolic announced SQL Server 2008 clustering support and automation tools for the Melio clustered file system (CFS).

These announcements follow Sanbolic’s support for Hyper-V virtual servers, rolled out in January.

At that time, Scott Lowe, senior engineer for ePlus Technology, Inc. and a blogger on server virtualization, wrote that Sanbolic might be facing that old ‘coopetition’ bugaboo with Microsoft:

Clearly, Sanbolic wants to protect the value of Melio FS as Microsoft prepares to enter the clustered file system market with Cluster Shared Volumes (CSV), included in the R2 beta. It’s unclear to me whether CSV is going to be limited to virtualization only, addressing the “one-VM-per-LUN” issue, or whether Microsoft will also support CSV in other applications. By optimizing Melio FS for shared access to objects like virtual disk files and by extending support to run Melio FS in VMs on all the major platforms, Sanbolic hopes to establish Melio FS as a “de facto” standard in Windows-based clustered file systems.

