Storage Soup

April 9, 2008  5:14 PM

EMC certifies Exchange competitor

Beth Pariseau

It’s fairly routine for EMC to certify a multitude of different products as interoperable with its own, based on customer requests. But a recent press release about official compatibility between EMC and a Linux-based mail server positioned as an alternative to Microsoft Exchange made me pay more attention than I usually do to such proclamations.

One thing especially sticks out from this arrangement: several EMC customers, with plenty of Microsoft integration available from EMC’s product line, have instead chosen to go with this alternative mail server. From a startup called PostPath, no less.

Moreover, Barry Ader, EMC’s senior director of product marketing, acknowledged that there are several customers who have asked for the integration. “There are a handful I’m aware of, but there may be more,” was as specific as he would get, but he added, “They tend to be important customers to drive this kind of application work for us.”

EMC’s “important” customers tend to be large. In my book, if more than one important EMC customer is catching on to a product, it might be worth paying attention to.

In and of itself, PostPath’s application is a little bit outside our realm in storage, but it’s the way the mail server handles storage that chiefly sets it apart from Exchange. According to CEO Duncan Greatwood, PostPath uses a file system (NFS or XFS, depending on how servers are attached to storage) rather than the JET database, which allows for more efficient indexing schemes and a more organized layout of data on disk. The JET database, which was never designed for the kinds of workloads enterprise Exchange servers are seeing today, has a deadly sequential-reads-with-random-writes issue slowing its storage I/O. PostPath also does a single write when a message is received, as opposed to Exchange, which writes blocks to multiple areas of storage based on different database fields with each message.
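
Greatwood’s single-write claim is essentially the classic maildir pattern: each incoming message lands on disk as one new, sequentially written file. Here’s a minimal sketch of that delivery path (illustrative only; the function name and directory layout are invented for the example, not PostPath’s actual code):

```python
import os
import uuid

def deliver_maildir_style(spool_dir: str, message: bytes) -> str:
    """Write one incoming message as one new file -- a single
    sequential write, instead of scattering blocks across a
    database the way Exchange's JET store does."""
    os.makedirs(spool_dir, exist_ok=True)
    # Unique filename per message avoids locking the whole store
    path = os.path.join(spool_dir, uuid.uuid4().hex + ".eml")
    with open(path, "wb") as f:
        f.write(message)  # one append-style write, then done
    return path
```

Because each delivery touches exactly one new file, the file system can lay messages out contiguously, which is where the indexing and layout advantages Greatwood describes would come from.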

What all of this means is that attached to the right storage (ahem), PostPath allows email admins to offer virtually “bottomless” mailboxes to users.

Still, Greatwood acknowledges that he has an uphill battle on his hands. “Most of the Linux-based mail server alternatives to Exchange have not gone very far,” he said. But he maintains a key difference with PostPath is that the product speaks the same language as familiar Microsoft peripherals such as Outlook and Active Directory, so end users don’t have to stop using the tools they’re comfortable with. He also says that with all of Microsoft’s recent antitrust woes, especially in Europe, the company isn’t keen on crushing upstart competitors these days.

I know that storage managers (to say nothing of admins who have managed Exchange) have been looking for a better mousetrap for quite some time. And cozying up to EMC customers can’t be hurting PostPath’s cause.

April 8, 2008  4:30 PM

HP unveils unlimited online storage for SOHO market

Beth Pariseau

HP has taken the wraps off a new online storage service for consumers and small offices, called HP Upline. The service has three levels: Home and Home Office, Family Account and Professional Account. Home accounts include one license, unlimited storage, online backup and basic support for $59 per year; a family account adds three licenses and a management dashboard for $149 per year; and a professional account gets three licenses, expandable to 100, as well as priority support.

The product is limited to PCs and doesn’t include some of the more advanced features offered by other online storage services, such as file versioning. However, it does give users the ability to tag content for later search and sharing, and to publish files online through a feature called the Upline Library.

Like most online storage offerings to date, this one is small in scale and limited in features when compared with on-premises products. Most analysts and vendors say online storage will be limited by bandwidth constraints and security concerns to the low end of the market, with most services on the market looking a lot like HP Upline. Symantec has focused its backup software as a service (SaaS) within its Windows-centric Backup Exec product, traditionally sold into smaller shops; EMC’s Mozy Enterprise service, despite the name, is at this point recommended only for workstation-level backup. However, a “hybrid” approach for larger shops is now being proposed by EMC.

April 8, 2008  9:25 AM

Riverbed, Silver Peak offered third-party AutoCAD testing

Beth Pariseau

Wading into bickering between vendors is always fun. My most recent go-round with this has been the AutoCAD compatibility debate between Silver Peak and Riverbed. It began with the difficulties Riverbed users were seeing with optimizing AutoCAD 2007 and 2008 files, and progressed into a weeklong followup process culminating in a conference call between me, Riverbed VP of marketing Alan Saldich, Riverbed chief scientist Mark Day, Silver Peak director of product marketing Jeff Aaron, and Silver Peak CTO David Hughes, which led to this story.

Don’t think this drama’s over yet, either. While on that rather unusual conference call they seemed to reach a consensus that further testing is necessary on both products, neither company has stopped sending little hints my way since that the other guy’s full of it. Meanwhile, another contact I spoke with for the followup story wrote me late last week to suggest they’re both perhaps piling it higher and deeper.

“After reading the back and forth between Silver Peak and Riverbed, and finding neither firm’s claims especially credible, we’ve put forth a public offer to test in a controlled environment,” wrote James Wedding, an Autodesk consultant and blogger. “Shockingly, neither company has responded or replied. We have visitors logged from both firms, so they are reading, but no takers. Color me shocked that neither firm wants independent testing on this problem that will continue for a minimum of another year as Autodesk decides to make a change to accommodate the WAN accelerator market.” The Taneja Group has also offered to carry out testing, also with no discernible response from the vendors.

We’re ready when you are, guys.

April 8, 2008  9:25 AM

Still more followup about Atrato

Beth Pariseau

After I covered the launch of Atrato’s self-maintaining array of identical disks (SAID) product, there were some unanswered questions, which I blogged about last week. Shortly after that, I had a followup call with Atrato’s chief scientist Sam Siewart and executive vice president of marketing Steve Visconti to tie up at least some of the loose ends.

There were inconsistent reports on the size of disk drives used by the Atrato system; the officials confirmed they are using 2.5-inch SATA drives.

More detail was desired on exactly what disk errors the system can fix and how. That’s proprietary, Siewart said, which I’d anticipated, but he gave one example – the system can do region-level remapping on a drive with nonrecoverable sector areas. The system also runs continual diagnostic routines in the background and can fix such problems proactively, meaning it can find such an error and force the controller to remap the sector before a user’s I/O comes in.
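
Siewart’s description of proactive repair boils down to a background scrub loop: read every sector, and when one fails, rebuild the data and remap it to a spare location before a user’s I/O ever trips on it. A toy sketch of that idea (every name here is invented for illustration; Atrato’s actual implementation is proprietary):

```python
class Drive:
    """A toy drive model: some sectors fail on read until remapped."""
    def __init__(self, sectors):
        self.data = dict(enumerate(sectors))
        self.bad = set()        # sector numbers that fail on read
        self.remapped = {}      # logical sector -> spare slot

    def read(self, sector):
        if sector in self.bad and sector not in self.remapped:
            raise IOError(f"unrecoverable read at sector {sector}")
        return self.data[sector]

def background_scrub(drive, rebuild):
    """Walk every sector; on a read error, rebuild the data
    (e.g. from parity elsewhere in the array) and remap the
    sector to a spare -- before user I/O hits it."""
    for sector in list(drive.data):
        try:
            drive.read(sector)
        except IOError:
            drive.remapped[sector] = len(drive.remapped)  # assign spare
            drive.data[sector] = rebuild(sector)
```

The point of doing this continually in the background, as Siewart describes, is that the error is caught and repaired on the array’s schedule rather than in the latency path of an application read.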

I asked them whether their IOPS number (anywhere from 10,000 to 20,000 IOPS, depending on which news source or press release you were reading) was achieved through testing or estimated based on a 512KB block size. “Definitely tested,” they replied. The average result of those tests was about 11,000, though their mysterious government customer reported 20,000 with a tuned application. “What we need to do is have discipline in talking about approximately 11,000 and then describing how the results may vary,” Visconti said of the inconsistent numbers that appeared in the first round of Atrato coverage. The bandwidth is about 1.2 GBps.

Part of the problem when it comes to talking about this system is that so many of the parameters are variable, including price. “Pricing ranges from the low hundreds [of thousands of dollars] to upwards of 200K depending on the configuration,” Visconti said. So in a way, all of us who reported anything in that range were right. Performance is the chief configuration factor that influences price: a system populated with 160 slower, higher-capacity drives will fall closer to the $100,000 end of the range. “Most customers are opting for higher-speed 7200 RPM SATA drives,” Siewart said. Added Visconti, “We shouldn’t be quoting a precise number.”

Clarification on the 5U/3U question: 3U is the size of the storage array, but that doesn’t include the size of its controller, which might be either 2U or 3U depending on whether it’s an x3650 from IBM (2U) or a souped-up one from SGI (3U). Atrato is developing its own controller, to be announced later this year.

The array attaches to servers using a modular front end that today is shipping with a 4 Gbps FC interface that can also accommodate NFS. “We’re close on 8-gig Fibre Channel,” Siewart said, and working on iSCSI, 10 GbE and InfiniBand as well. Distance replication and mirroring also remain on the roadmap.

Xiotech is expected to announce a system very much like Atrato’s tomorrow morning at SNW. Stay tuned to our news page for coverage.

Meanwhile, it seems Atrato is looking for marketing help.

April 4, 2008  2:35 PM

EMC hashes out enterprise archiving SaaS

Beth Pariseau

If you’ve been following the data archiving and compliance markets, you’ve probably heard the consensus that the real boom in software as a service (SaaS) will come from small to midsized businesses (SMBs). That’s the prevailing wisdom among analysts, anyway, as far as I’ve heard.

But EMC revealed today at a Writers Summit in Boston that it intends to push its Fortress-based SaaS offering into the high-end space with a hybrid approach to on-site and off-site archiving.

The event today was unusual, at least compared with the rest of my experience in the industry. There were no end users or high-profile industry analyst firms represented and hardly any trade press, either. Most of the attendees were technology writers from new media such as blogs and wikis. EMC executives explained that they wanted the summit to be an interactive discussion around industry trends (read: free help for their marketing research?).

It was an odd situation for me, since I’m used to listening and asking questions at industry events, rather than offering opinions. Along the way, though, the EMC execs dropped a few nuggets about their plans. Convergence was a pervasive theme, and the SaaS plans fit into it. EMC predicted a convergence not only between traditional technologies and new mobile technologies (that’s why they bought Pi, a stealth startup with no product yet on the market) but between on-site and off-site data repositories.

The new aim of the Content Management and Archiving unit at EMC is to use Documentum to unify pieces of its archiving portfolio (CMA president Mark Lewis says EmaileXtender will be integrated into Documentum by mid-year), and also to unify those repositories. Lewis and Documentum founder Howard Shao, now EMC senior VP of CMA, said in their view there are four factors influencing this approach: enterprise content management and archiving place significant demands on outsourced infrastructures, especially when it comes to network bandwidth; companies are wary of letting sensitive, regulated data outside their firewalls; any application you’d want to deliver through SaaS is inextricable from applications that remain on-site; and that the volume and value of archival storage dictates a tiered approach.

This sparked some debate among the pundits at the meeting. Carl Frappaolo, VP of market intelligence for enterprise content management (ECM) industry association AIIM, pointed out that the biggest reason companies resist deploying ECM is complexity. “Aren’t you just adding complexity to the equation?” he asked. Shao countered that a complex problem or a complex back end doesn’t mean that management can’t be simple.

Kahn Consulting Inc.’s Barclay Blair piped up in support of Shao’s view that users will be wary of letting certain data outside their firewalls, but said “our clients would be attracted to a model that keeps the information on-site, but has the applications which manage the information being managed for them by someone else.”

Countered Frappaolo, “If EMC is doing its job right, shouldn’t users be willing to trust data to them? The whole idea is that you’re supposed to be better at security than me, and I should trust you to keep from exposing private data both inside and outside the data center.”

At any rate, the upshot according to Lewis will be a rollout of this hybrid ECM SaaS model by the end of this year. Another thing I got out of this discussion, with all its focus on security and privacy within a multitenant repository, is a clearer reason why EMC spent all that money on RSA.

April 3, 2008  9:42 AM

Isilon ‘fesses up

Dave Raffo

We’ve known for a while that Isilon had disappointing sales results in its first year as a public company. Now we know the clustered NAS vendor inflated those lackluster sales numbers through questionable sales tactics.

Isilon restated financial earnings reports Wednesday to avoid getting delisted by Nasdaq, but made embarrassing disclosures about the findings of its audit committee review in its accompanying SEC filing.

The audit review found that Isilon bumped up its sales numbers in late 2006 and last year through phantom deals and overstatements. “When persuasive evidence of an end-user did not exist, when oral or side arrangements existed, when contingencies existed with respect to the acceptance of the product by the end-user, or when resellers did not have the ability or intent to pay independent of payment by the end-user customer, this information was not properly communicated among sales, finance, accounting, legal and senior management personnel …” Isilon said in its Annual Report.

Isilon disclosed one deal with a reseller that it recorded at $1.1 million although there were conditions related to the product’s performance. The reseller ended up paying “only a portion” of that amount and returned the product. In another case, Isilon recognized revenue from a customer that “included a commitment from us to acquire software from the customer” (one of those side agreements) but the end-user “did not have the ability or intent to pay.” Isilon also said there were times it recognized revenue “when persuasive evidence of an end-user did not exist.”

On the plus side for Isilon, it made changes at the top after getting wind of the problems late last year. Founder and former CTO Sujal Patel replaced Steve Goldman as CEO, and Isilon appointed controller Bill Richter as interim CFO to replace Stu Fuhlendorf. While Isilon’s previous team seemed willing to take shortcuts, new management has taken the first tough steps toward cleaning up the mess left behind.

But Isilon’s problems aren’t over.  Its recent disclosures could make it tough to defend shareholder class action lawsuits accusing Isilon of being dishonest in filings before its IPO. And it must restore confidence among customers at a time when it faces much more competition than it did a year ago.

Isilon execs get a chance to explain their strategy for improving their prospects when they host an earnings call this afternoon.

April 2, 2008  10:31 AM

The Symantec Shuffle

Beth Pariseau

It all started, as stories usually do, with a call to PR. A little birdie had told me I might want to follow up on how Symantec is organizing its execs following the departure of Data Center Group president Kris Hagerman and others in November.

There was just one problem with that: Julie Quattro, Symantec’s former director of Global Public Relations, has also left, or at least so it would appear–her email address bounced, and she’s no longer listed on the Symantec website.

I got in touch with another contact at Symantec who confirmed Quattro left in February.  In the meantime, following up on the advice from aforementioned birdie, I asked about the replacement for Hagerman.

Turns out Symantec has realigned its execs. This part has been public information, but in case you missed it, the groups have been restructured to focus on topic areas rather than on customer size. Enrique Salem, previously group president for sales and marketing, has been promoted to Chief Operating Officer. Under him, Rob Soderberry is the SVP in charge of the storage and availability management group, which includes the Storage Foundation, CommandCentral and Veritas Cluster Server products. Deepak Mohan, who was a VP in the Backup Exec group, is now the senior vice president of the data protection group, which will join the NetBackup and Backup Exec business units together. Joseph Ansanelli will head up a data loss prevention team, and Brad Kingsbury has been put in charge of the endpoint security and management group. Finally, Francis deSouza will be in charge of the Information Foundation, compliance and security group.

Symantec’s market share and revenue numbers have slipped in recent IDC reports, but the software tracker for the fourth quarter of 2007 shows it bouncing back. Its $518 million in revenue for the quarter was an increase from $471 million in the third quarter of 2007, and up from $446 million in the fourth quarter of 2006.

A lot of moves have been made shuffling around the personnel, but we still find Symantec’s corporate hierarchy more decipherable than its many, many product lines and versions. A while back we asked them to make us a diagram of all their storage software products and how they fit together. We’re still waiting.

April 1, 2008  2:36 PM

Fact-checking Atrato

Beth Pariseau

Soon after filing a story on storage newcomer Atrato and its Self-Maintaining Array of Identical Disks (SAID) product, I started getting little pokes here and there from my analyst friends urging me to look at the story again.

The analysts had been reading stories from my esteemed fellow members of the Fourth Estate and noticed that some of the numbers weren’t matching up. Of course it’s always possible that reporters screw up, but when you have a good half-dozen or so reporting the story in slightly different ways, it could mean something different.

Take pricing, for example. I spent several minutes in my interview with CEO Dan McCormick trying to get him to tell me what the machine cost. The closest I could get was “six figures,” which seems to be what the Channel Register got, too. But they updated their article a little while later to add that they had apparently been given a number of $140,000 for 20 TB. Another source, HPCwire, had $150,000 for 20 TB.

I hadn’t focused on disk size in my article, but the specs reported there were inconsistent across news sources, too: HPCwire and CIO Today reported the system uses 2.5-inch disks, but Byte and Switch reported 1.8-inch disks. I had assumed 3.5-inch disks, but StorageMojo’s Robin Harris pointed out that the kind of density Atrato’s talking about would be impossible in a 3RU system using 3.5-inch disks.

But that’s if the system is 3RU. Atrato’s own website doesn’t get this straight, either. The solutions section quotes a spec of “over 3000 data streams in 5RU” (click on “streaming” on the right-hand side of the flash object at the top of the page), while the products section specifies “3600 data streams in 3RU.” Harris was given a preview of the product a few months ago and was originally told the box was 5RU.

In fairness, there are areas where the nature of Atrato’s product makes the kind of specs we in the storage industry are used to seeing tricky. Because the system allows customers to throttle parity, the capacity stats get a little complicated. Most of the news sources I saw either reported the same total raw capacity number, 50 TB, or got into different permutations of how the capacity is distributed according to your reserve space for parity protection. 
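
The arithmetic behind those permutations is straightforward: usable space is raw capacity minus whatever fraction you reserve for parity protection. A quick sketch of that relationship (my own back-of-the-envelope model, not Atrato's formula):

```python
def usable_capacity(raw_tb: float, parity_fraction: float) -> float:
    """Usable space left after reserving a fraction of raw
    capacity for parity protection. Illustrative only: assumes
    usable space scales linearly with the parity reserve."""
    if not 0 <= parity_fraction < 1:
        raise ValueError("parity fraction must be in [0, 1)")
    return raw_tb * (1 - parity_fraction)

# e.g. 50 TB raw with 20% reserved for parity leaves 40 TB usable
```

This is why reporters starting from the same 50 TB raw figure could all publish different "capacity" numbers without anyone being wrong: each assumed a different parity reserve.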

On the IOPS front, what I found was actually fairly consistent: either “over 11,000” or the exact number, “11,500.” The one place I saw a major discrepancy was in the details about the SRC deployment at an unnamed government customer, which claimed 20,000 sustained IOPS. Atrato’s explanation is that it quotes 11,000 to 11,500 to be safe, and that the 20,000 at the SRC customer represents the fastest speed it has seen in the field on a carefully tuned application.

But Harris took issue with the “11,500” number, saying it’s too specific to really mean much, since IOPS are dependent on a number of factors.
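
Harris’s objection is easy to illustrate: on a bandwidth-bound device, the achievable IOPS figure is just bandwidth divided by I/O size, so the same box produces very different numbers depending on the block size you assume. A rough sketch using Atrato’s quoted ~1.2 GBps (a simplified model for illustration, not a claim about how Atrato measured):

```python
def iops_at(bandwidth_bytes_per_s: float, block_size_bytes: int) -> int:
    """IOPS a purely bandwidth-limited device can sustain at a
    given block size -- showing why a bare IOPS number is
    meaningless without the I/O size attached."""
    return int(bandwidth_bytes_per_s // block_size_bytes)

GBPS = 1.2e9  # Atrato's quoted ~1.2 GBps, in bytes per second
for kb in (4, 64, 512):
    print(f"{kb}KB blocks: {iops_at(GBPS, kb * 1024):,} IOPS")
```

At 4KB blocks the same pipe supports roughly 25 times the IOPS it does at 512KB blocks, which is exactly why a number like “11,500” says little on its own.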

“One possible take [on the discrepancies] would be, how many of these things have they built?” Harris pointed out. “With contract manufacturing, you don’t start building until you get volume, and you don’t get volume until you start convincing customers you’ve got something.” In this chicken-and-egg cycle, it could be that some of the Atrato arrays shipped to date have been 5 RU and they’ve decided to make more in the 3RU size. “But either way, they should get it straight on their own website,” he said.

Atrato got it straight in the press release announcing the product, identifying it as a 3U device, twice. No mention was made of a 5U box.

Also frustrating the propeller-headed among us is the lack of in-depth technical detail from Atrato execs about exactly how the product works. They wouldn’t tell any journalists exactly what disk errors their software claims to fix, or other details such as how the product connects to servers: iSCSI, NAS or FC?

It appears Atrato has at least one potential customer commenting on this over on Robin’s blog:

It’s a pity that there doesn’t seem to be anything on their site about the connectivity options, processor redundancies, replication or clustering. If they provided a way to create a cloud of these they would probably be on the top of my solution list for permanent near-line archiving of about 60TB of data.

And it would be a pity if Atrato really is sitting on something truly revolutionary but the message just isn’t getting out there.

Then again, I’m writing about them again, aren’t I? “The conspiracy side of my brain tells me they could also be doing this to get maximum press,” Harris added. In that case, I guess we’ll find out if there really is no such thing as bad publicity.

April 1, 2008  11:20 AM

HP buys records management partner

Beth Pariseau

HP announced last night that it has bought its enterprise content management (ECM) partner Tower Software, the Australia-based maker of TRIM Context 6. TRIM is already sold with HP’s Information Access Platform (IAP, formerly RISS). Terms of the deal weren’t disclosed.

Tower’s software is tangential to digital data storage: it deals in paper records management and also offers workflow management similar to Documentum (though Documentum is a broader product), territory that doesn’t get much coverage on this site.

But HP is also framing the acquisition as an e-discovery play, according to Robin Purohit, vice president and general manager of information management for HP software. “The proposed deal will [give] HP software the broadest e-discovery capabilities and help manage the capture, collection and preservation of electronic records for government and highly regulated industries,” Purohit said.

Tower also has a good reputation when it comes to managing SharePoint, which Purohit predicted will be the next concern to hit the e-discovery market. “[The acquisition] allows HP software to address the next wave of e-discovery and compliance challenges posed by the explosion in business content stored in Microsoft SharePoint portals,” he said.

ESG analyst Brian Babineau said he agreed with that assessment, and said Tower’s work with Microsoft to integrate with SharePoint has been deeper than most. “Tower has been focused on integrating its application with other applications, from the desktop to the application server, and they’ve done a lot of work with Microsoft,” he said. An example of the integration Tower offers is the ability to mark files as TRIM records within the application, including Word and SharePoint documents.

“Everyone’s going to say they can archive SharePoint,” Babineau acknowledged. But “it’s a matter of how close you are with Microsoft.”

Tower’s going to have to get closer to HP, too, in Babineau’s estimation. Right now TRIM can draw from IAP as a content repository, but Babineau said he’d like to see TRIM and IAP work together to sort out data that’s being treated as a business record from data that’s being archived for storage management purposes, and to enforce policies on business records in tandem.

Learning this market space will also be a challenge for HP, Babineau predicted. “They need to understand the dynamics of records management and how to connect it to their software group,” he said. “They also need to figure out how to sell the technology.

“It’s not something they can’t handle, but it’s something they’ll have to learn,” he added. “As long as they can retain [Tower] people and figure out how to sell it, it’ll work.”

March 31, 2008  1:01 PM

Startup Fusion-io flashes its card

Dave Raffo

Fusion-io came out of stealth today with a PCIe flash card designed to give off-the-shelf servers SAN-like performance.

Fusion-io calls its product the ioDrive, and it’s NAND-based storage that comes in 80 Gbyte, 160 Gbyte and 320 Gbyte configurations. Fusion-io CTO David Flynn says the startup will have a 640 Gbyte card later this year. The ioDrive fits in a standard PCI Express slot, shows up to an operating system as traditional storage and can be enabled as virtual swap space.

Flynn said its access rates are more comparable to DRAM than traditional flash memory.

“This is an IO drive, we do not consider it to be a solid state disk,” Flynn said. “It does not pretend to be a disk drive. It does not sit behind SATA or a SCSI bus talking SATA or SCSI protocol to a RAID controller. It sits directly on the arteries of a system.”

Fusion-io bills its card as high-performance DAS that can reduce the need for more expensive SAN equipment. It prices the drives at $2,400 for 80 Gbytes, $4,800 for 160 Gbytes and $8,900 for 320 Gbytes.
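
For what it’s worth, a quick back-of-the-envelope pass over those list prices (my arithmetic, not Fusion-io’s) shows the largest card is slightly cheaper per gigabyte:

```python
# Fusion-io's announced ioDrive list prices (USD) by capacity in Gbytes
PRICES = {80: 2400, 160: 4800, 320: 8900}

def price_per_gbyte(capacity_gb: int) -> float:
    """Dollars per Gbyte at the announced list price."""
    return PRICES[capacity_gb] / capacity_gb

for gb in sorted(PRICES):
    print(f"{gb} Gbyte card: ${price_per_gbyte(gb):.2f}/Gbyte")
```

The 80 and 160 Gbyte cards both work out to $30/Gbyte, while the 320 Gbyte card comes in just under $28/Gbyte.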

“Dropped into a commodity off-the-shelf server, you have something that can outperform big iron,” Flynn said.

Not even the Fusion-io execs see their cards as SAN competitors, though. If it finds a place in storage, it will be as a way to run applications that require high performance, such as transactional databases or digital media, on servers that aren’t attached to SANs.

“It’s a way of extending the life of servers with direct attached storage,” said analyst Deni Connor of Storage Strategies Now. “I don’t see it as a replacement for Fibre Channel SANs, but it may prevent companies from going to Fibre Channel SANs as quickly.”
