Storage Soup

A SearchStorage.com blog.


April 2, 2008  10:31 AM

The Symantec Shuffle



Posted by: Beth Pariseau
Strategic storage vendors

It all started, as stories usually do, with a call to PR. A little birdie had told me I might want to follow up on how Symantec is organizing its execs following the departure of Data Center Group president Kris Hagerman and others in November.

There was just one problem with that: Julie Quattro, Symantec’s former director of Global Public Relations, has also left, or at least so it would appear–her email address bounced, and she’s no longer listed on the Symantec website.

I got in touch with another contact at Symantec who confirmed Quattro left in February. In the meantime, following up on the advice from the aforementioned birdie, I asked about the replacement for Hagerman.

Turns out Symantec has realigned its execs–this part has been public information, but in case you missed it, the groups have been restructured to focus on topic areas rather than on customer size. Enrique Salem, previously group president for sales and marketing, has been promoted to Chief Operating Officer. Under him, Rob Soderbery is the SVP in charge of the storage and availability management group, which includes the Storage Foundation, CommandCentral and Veritas Cluster Server products. Deepak Mohan, who was a VP in the Backup Exec group, is now the senior vice president of the data protection group, which brings the NetBackup and Backup Exec business units together. Joseph Ansanelli will head up a data loss prevention team, and Brad Kingsbury has been put in charge of the endpoint security and management group. Finally, Francis deSouza will be in charge of the Information Foundation, compliance and security group.

Symantec’s market share and revenue numbers have slipped in recent IDC reports, but the software tracker for the fourth quarter of 2007 shows it bouncing back. Its $518 million in revenues for the quarter was an increase from $471 million in the third quarter of 2007, and up from $446 million in the fourth quarter of 2006.

For all the personnel shuffling, we still find Symantec’s corporate hierarchy more decipherable than its many, many product lines and versions. A while back we asked them to make us a diagram of all their storage software products and how they fit together. We’re still waiting.

April 1, 2008  2:36 PM

Fact-checking Atrato



Posted by: Beth Pariseau
Storage

Soon after filing a story on storage newcomer Atrato and its Self-Maintaining Array of Identical Disks (SAID) product, I started getting little pokes here and there from my analyst friends urging me to look at the story again.

The analysts had been reading stories from my esteemed fellow members of the Fourth Estate and noticed that some of the numbers weren’t matching up. Of course it’s always possible that reporters screw up, but when a good half-dozen of them report the story in slightly different ways, it could mean something else is going on.

Take pricing, for example. I spent several minutes in my interview with CEO Dan McCormick trying to get him to tell me what the machine cost. The closest I could get was “six figures,” which seems to be what the Channel Register got, too. But they updated their article a little while later to add that Internetnews.com had apparently been given a number of $140,000 for 20 TB. Another source, HPCwire, had $150,000 for 20 TB.

I hadn’t focused on disk size in my article, but the specs reported there were inconsistent across news sources, too: HPCwire and CIO Today reported the system uses 2.5-inch disks, but Byte and Switch reported 1.8-inch disks. I had assumed 3.5-inch disks, but StorageMojo’s Robin Harris pointed out that the kind of density Atrato’s talking about would be impossible in a 3RU system using 3.5-inch disks.
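For the curious, here’s a rough version of the math Harris is gesturing at – a back-of-envelope sketch in which the enclosure densities and 2008-era drive capacities are my own assumptions, not Atrato’s specs:

```python
# Back-of-envelope check on the form-factor question. All numbers here are
# illustrative assumptions (typical 2008-era figures), not Atrato specs.

RAW_CAPACITY_TB = 50  # the raw capacity figure most outlets reported

# form factor -> (assumed drives that fit in ~3RU, assumed GB per drive)
candidates = {
    "3.5-inch": (16, 1000),   # ~16 large drives per 3U shelf, 1 TB each
    "2.5-inch": (160, 320),   # dense small-form-factor packing, 320 GB each
}

for form_factor, (drives, gb_each) in candidates.items():
    achievable_tb = drives * gb_each / 1000
    verdict = "plausible" if achievable_tb >= RAW_CAPACITY_TB else "not possible"
    print(f"{form_factor}: {drives} x {gb_each} GB = "
          f"{achievable_tb:.0f} TB raw -> {verdict}")
```

Under those assumptions, 3.5-inch drives top out around 16 TB raw in 3RU, nowhere near 50 TB, while smaller form factors get there comfortably.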

But that’s if the system is 3RU. Atrato’s own website doesn’t get this straight, either. The solutions section quotes a spec of “over 3000 data streams in 5RU” (click on “streaming” on the right-hand side of the Flash object at the top of the page), while the products section specifies “3600 data streams in 3RU.” Harris was given a preview of the product a few months ago, and was originally told the box was 5RU.

In fairness, there are areas where the nature of Atrato’s product makes the specs we in the storage industry are used to seeing tricky to pin down. Because the system allows customers to throttle parity, the capacity stats get a little complicated. Most of the news sources I saw either reported the same total raw capacity number, 50 TB, or got into different permutations of how the capacity is distributed according to your reserve space for parity protection.
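To illustrate why throttle-able parity makes a single “usable capacity” spec slippery, here’s a minimal sketch – the reserve percentages are invented for illustration, not Atrato’s actual settings:

```python
# Hypothetical illustration: usable capacity depends on how much of the
# 50 TB raw pool you reserve for parity protection. Percentages are made up.
RAW_TB = 50

for parity_reserve in (0.10, 0.20, 0.30):
    usable = RAW_TB * (1 - parity_reserve)
    print(f"{parity_reserve:.0%} parity reserve -> {usable:.0f} TB usable")
```

Three different (perfectly honest) capacity numbers from one box, depending on how the customer dials the knob.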

On the IOPS front, what I found was actually fairly consistent: either “over 11,000” or the exact number, “11,500.” The one place I saw a major discrepancy was in the details about the SRC deployment at an unnamed government customer, which claimed 20,000 sustained IOPS. Atrato’s explanation is that 11,000 to 11,500 is the range it quotes to be safe, and that the 20,000 at the SRC customer represents the fastest speed it has seen in the field on a carefully tuned application.

But Harris took issue with the “11,500” number, saying it’s too specific to really mean much, since IOPS are dependent on a number of factors.
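Harris’s point is easy to demonstrate: hold a (hypothetical) sustained throughput fixed and the IOPS figure swings by an order of magnitude with transfer size alone, never mind read/write mix or queue depth.

```python
# Why a bare IOPS number means little without workload details.
# The sustained throughput below is a made-up figure for illustration.
THROUGHPUT_MB_S = 180

for block_kb in (4, 8, 64, 256):
    iops = THROUGHPUT_MB_S * 1024 / block_kb
    print(f"{block_kb:>3} KB transfers -> {iops:>8,.0f} IOPS")
```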

“One possible take [on the discrepancies] would be, how many of these things have they built?” Harris pointed out. “With contract manufacturing, you don’t start building until you get volume, and you don’t get volume until you start convincing customers you’ve got something.” In this chicken-and-egg cycle, it could be that some of the Atrato arrays shipped to date have been 5RU and they’ve decided to make more in the 3RU size. “But either way, they should get it straight on their own website,” he said.

Atrato got it straight in the press release announcing the product, identifying it as a 3U device, twice. No mention was made of a 5U box.

Also frustrating the propeller-headed among us is the lack of in-depth technical detail from Atrato execs about exactly how the product works. They wouldn’t tell any journalists exactly what disk errors their software claims to fix, or other details such as how the product connects to servers–iSCSI, NAS, or FC?

It appears Atrato has at least one potential customer commenting on this over on Robin’s blog:

It’s a pity that there doesn’t seem to be anything on their site about the connectivity options, processor redundancies, replication or clustering. If they provided a way to create a cloud of these they would probably be on the top of my solution list for permanent near-line archiving of about 60TB of data.

And it would be a pity if Atrato really is sitting on something truly revolutionary but the message just isn’t getting out there.

Then again, I’m writing about them again, aren’t I? “The conspiracy side of my brain tells me they could also be doing this to get maximum press,” Harris added. In that case, I guess we’ll find out if there really is no such thing as bad publicity.


April 1, 2008  11:20 AM

HP buys records management partner



Posted by: Beth Pariseau
data compliance and archiving

HP announced last night that it has bought its enterprise content management (ECM) partner Tower Software, the Australia-based maker of TRIM Context 6. TRIM is already sold with HP’s Information Access Platform (IAP–formerly RISS). Terms of the deal weren’t disclosed.

Tower’s software is tangential to digital data storage–it deals in paper records management, which doesn’t get much coverage on SearchStorage.com, and it also offers workflow management similar to Documentum (though Documentum is a broader product).

But HP is also framing the acquisition as an e-discovery play, according to Robin Purohit, vice president and general manager of information management for HP software. “The proposed deal will [give] HP software the broadest e-discovery capabilities and help manage the capture, collection and preservation of electronic records for government and highly regulated industries,” Purohit said.

Tower also has a good reputation when it comes to managing SharePoint, which Purohit predicted will be the next concern to hit the e-discovery market. “[The acquisition] allows HP software to address the next wave of e-discovery and compliance challenges posed by the explosion in business content stored in Microsoft SharePoint portals,” he said.

ESG analyst Brian Babineau said he agreed with that assessment, and said Tower’s work with Microsoft to integrate with SharePoint has been deeper than most. “Tower has been focused on integrating its application with other applications, from the desktop to the application server, and they’ve done a lot of work with Microsoft,” he said. An example of the integration Tower offers is the ability to mark files as TRIM records within the application, including Word and SharePoint documents.

“Everyone’s going to say they can archive SharePoint,” Babineau acknowledged. But “it’s a matter of how close you are with Microsoft.”

Tower’s going to have to get closer to HP, too, in Babineau’s estimation. Right now TRIM can draw from IAP as a content repository, but Babineau said he’d like to see TRIM and IAP work together to sort out data that’s being treated as a business record from data that’s being archived for storage management purposes, and to enforce policies on business records in tandem.

Learning this market space will also be a challenge for HP, Babineau predicted. “They need to understand the dynamics of records management and how to connect it to their software group,” he said. “They also need to figure out how to sell the technology.

“It’s not something they can’t handle, but it’s something they’ll have to learn,” he added. “As long as they can retain [Tower] people and figure out how to sell it, it’ll work.”


March 31, 2008  1:01 PM

Startup Fusion-io flashes its card



Posted by: Dave Raffo
Storage

Fusion-io came out of stealth today with a PCIe flash card designed to give off-the-shelf servers SAN-like performance.

Fusion-io calls its product the ioDrive, and it’s NAND-based storage that comes in 80 Gbyte, 160 Gbyte and 320 Gbyte configurations. Fusion-io CTO David Flynn says the startup will have a 640 Gbyte card later this year. The ioDrive fits in a standard PCI Express slot, shows up to an operating system as traditional storage and can be enabled as virtual swap space.

Flynn said its access rates are more comparable to DRAM than to traditional flash memory.

“This is an IO drive, we do not consider it to be a solid state disk,” Flynn said. “It does not pretend to be a disk drive. It does not sit behind SATA or a SCSI bus talking SATA or SCSI protocol to a RAID controller. It sits directly on the arteries of a system.”

Fusion-io bills its card as high-performance DAS that can reduce the need for more expensive SAN equipment. It prices the drives at $2,400 for 80 Gbytes, $4,800 for 160 Gbytes and $8,900 for 320 Gbytes.
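A quick bit of arithmetic on those list prices shows the per-gigabyte cost (using only the figures above):

```python
# Cost per GB for each ioDrive configuration, from the quoted list prices.
prices_usd = {80: 2400, 160: 4800, 320: 8900}  # capacity in GB -> price

for gb, usd in prices_usd.items():
    print(f"{gb:>3} GB at ${usd:>5,}: ${usd / gb:>5.2f} per GB")
```

The two smaller cards work out to an even $30 per Gbyte; interestingly, the biggest card is the cheapest, at just under $28.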

“Dropped into [a] commodity off-the-shelf server, you have something that can outperform big iron,” Flynn said.

Not even the Fusion-io execs see their cards as SAN competitors, though. If the ioDrive finds a place in storage, it will be as a way to run applications that require high performance – such as transactional databases or digital media – on servers that aren’t attached to SANs.

“It’s a way of extending the life of servers with direct attached storage,” said analyst Deni Connor of Storage Strategies Now. “I don’t see it as a replacement for Fibre Channel SANs, but it may prevent companies from going to Fibre Channel SANs as quickly.”


March 28, 2008  2:26 PM

CERN black hole flak headed to court?



Posted by: Beth Pariseau
Around the water cooler

Last August I wrote about Swiss research facility CERN and its plan to store petabytes of data from its Large Hadron Collider (LHC) on commodity NAS and in tape silos for scalability and cost savings. A month ago, it came to my attention that some people think the experiments generating all that data, set to begin in May, might create a black hole that will eat the Earth.

Anybody who’s ever been exposed to TimeCube will know that people shouting about something scientific on the Internet doesn’t make it solid science or make them experts. So my first post on the black hole issue was tongue in cheek–and it still all seems far-fetched (which is what CERN apparently wants us to believe…cue spooky music).

But since that post, more people with a bit more gravitas have come forward with black hole concerns. Such as a Scientific American blogger who commented on my original post. And former U.S. nuclear safety officer Walter Wagner, who according to MSNBC has filed a lawsuit in Hawaii against CERN to stop the LHC.

There’s one element of the lawsuit story that puzzles me: the MSNBC writer says conferences on the suit are scheduled for June 16. The suit seeks, in part, a temporary restraining order to keep CERN from turning on the LHC until everybody’s satisfied it’s not going to bring about Armageddon. But last I knew, the LHC was supposed to start up in May, making that hearing on the restraining order about a month too late if something disastrous does happen…

P.S. Speaking of lawsuits (or, at least, potential lawsuits), I got a very interesting followup call to my story on Atrato this week from a man who declined to tell me who exactly he is or why he’s interested, but who claims not to have been able to find evidence of the more than 100 patents Atrato claims for its Self-Maintaining Array of Identical Disks. (An Atrato spokesperson sent a link to a Google search page when asked for a list of the patents.)

One thing this followup caller did happen to mention to me is that he’s an attorney in Minnesota. The light bulb went on…there’s another Minnesota-based company that has been rumored to be working on a product very similar to Atrato’s.

Could just be coincidence, though.


March 27, 2008  2:11 PM

Sun fires another shot at NetApp



Posted by: Dave Raffo
Storage

Although NetApp fired the first volley in its ZFS lawsuit against Sun Microsystems, Sun has been the aggressor since NetApp’s initial strike. Following NetApp’s lawsuit last September charging that Sun violated several of its patents regarding ZFS, Sun countersued and accused NetApp of violating Sun’s patents. Sun has also asked the U.S. Patent Office to re-examine several NetApp patents.

Sun filed yet another lawsuit Wednesday, alleging patent infringement related to storage management technology NetApp acquired when it bought Onaro in January.

“As NetApp attempts to extend its product line, it also expands its exposure to Sun patents,” Dana Lengkeek of Sun’s Corporate Communications office wrote in an emailed statement.

The latest lawsuit, filed in U.S. District Court for the Northern District of California, claims that software NetApp gained from Onaro uses Sun’s patented technology. Sun seeks compensation from NetApp for patent infringement and an injunction preventing NetApp from using Sun’s technology.

Sun also revealed the U.S. Patent Office granted its request to re-examine NetApp’s patent related to its “copy on write” technology.

But perhaps the harshest accusation Sun leveled against NetApp in its latest filing came in the opening paragraph of the suit. Chiding NetApp for spending only about $390 million on research and development last year and for holding “only approximately 200” patents, Sun declared: “Indeed, rather than innovate, NetApp builds on the innovation of others” and “NetApp … uses extensive amounts of open source code developed by others, without contributing any innovation of its own.”

Instead of demanding money if it wins the suit, maybe Sun should request that NetApp change its already-taken slogan “Go Further, Faster” to “NetApp: Built on others’ innovation.”

NetApp responded to the latest suit with a terse: “NetApp does not comment on ongoing litigation.”


March 27, 2008  12:42 PM

Joe Tucci’s game of catch



Posted by: Beth Pariseau
Storage

As a fanatical Red Sox fan and a storage reporter, the whole EMC-logo-on-Red-Sox-uniforms thing has been a matter of some, er, ambivalence for me. It’s also been the source of some trash talk between me and EMC acquaintances, one of whom–a Yankees fan–keeps threatening to send me one of the defiled jerseys. To which I reply I’ll be ready with a seam ripper suitable for removing the patch on the sleeve. To which my father, who raised me a Red Sox fan, replied that I would be an idiot for not keeping it as a collector’s item. But anyway.

Meanwhile, since the EMC logo was slapped on the Olde Towne Team for the Japan trip (and for the Japan trip ONLY, they promise us, but we’ll see), Joe Tucci took a jaunt to Japan with the team and hobnobbed with the players at a gala reception last week. A gala reception at which Globe Red Sox columnist Dan Shaughnessy was also present, and witnessed the following, as reported in his column today:

Highlight of the trip, hands down, was EMC CEO Joe Tucci having a catch with Hideki Okajima at a fancy reception at the Sox’ New Otani Hotel headquarters Monday. While 2007 World Series clips were shown on a Green Monster-sized LED screen, assorted clients and dignitaries – most of them Japanese – feasted on sushi and fine wines. After a few speeches and interviews with Mike Lowell, Dustin Pedroia, Kevin Youkilis, and Terry Francona, a couple of fielding mitts were produced and Tucci lined up to play catch with the Sox’ second-most-famous Japanese hurler. Standing in front of the giant screen, Okajima softly tossed to Tucci, who was about 20 feet away. Tucci made the catch, and before you could say, “Nuke LaLoosh,” gunned a wild heater that sailed far high and wide of a sprawling Okajima and punctured the precious LED screen. I will never look at the EMC logo (which was on the Sox uniforms for the Japan games) without thinking of this.

Was that karmic payback for Tucci — a Yankees fan who dismayed Sox purists everywhere? Not for me to say. But I would have killed to be a fly on the wall–especially if I could have been a fly on the wall with a camera.


March 26, 2008  10:12 AM

NetApp’s slogan snafu



Posted by: Beth Pariseau
Strategic storage vendors

Oopsie.

The UK’s Channel Register broke the story yesterday that NetApp’s new slogan, ‘Go Further, Faster,’ is kind of, um, already taken. By, er, Halliburton.

Eh, no worries. Not like that company is really well-known or well-connected or anything.

The Register weighs the two slogans:

On one hand, according to the Halliburton recruitment video, the company makes a habit of going further, faster every god damn day. That’s consistency. On the other, NetApp’s video has a 4/5 star rating on YouTube.

“Very cool!” says a commenter who we are sure is not an employee of NetApp. “Awesome,” echoes another completely random observer.

(When vendors get all nitpicky with me, I wonder how they even deal with The Register, or if they just pretend it doesn’t exist, since it’s across the pond anyway.)

And of course you know EMC bloggers are jumping up and down and singing happy tunes about this little gaffe.


March 25, 2008  9:32 AM

Another storage analyst defects to a vendor



Posted by: Beth Pariseau
Storage

Barry Murphy, formerly of Forrester Research, has been named the new director of product marketing for Mimosa, tasked with “expanding the company’s eDiscovery and content management partner ecosystem and broadening awareness for and adoption of Mimosa Systems’ award-winning content archiving platform.”

The cynically inclined might say he already did a similar thing with his last major act as a Forrester analyst, the publication of two reports on message archiving products. The reports concluded that on-premise software archives (such as Mimosa’s) are gaining more traction and are more mature in their features than hosted archiving offerings.

I don’t really believe this was anything other than coincidence–the research for such a report goes on for months and the report was obviously started well in advance of this transition. It makes sense that an analyst whose expertise was in records management and archiving would go to a vendor in that sector of the market. But sometimes the appearance of a conflict of interest can be as problematic as an actual conflict of interest. At the least, from my perspective, it’s unfortunate timing.

Murphy joins Tony Asaro, who recently resurfaced as chief strategy officer for Virtual Iron after a short stint with Dell, as the most recent storage analysts to head to vendors. It has been suggested to me that most analysts wind up at vendors or doing consulting, so maybe this is a natural lifecycle we’re seeing.

Speaking of defections, it has also been announced that Dr. David Yen has left Sun for Juniper. Yen formerly headed Sun’s storage group and was shifted to the chip group following the restructuring of the storage and server groups under John Fowler last year.


March 24, 2008  2:47 PM

Tape is dead, long live tape



Posted by: Beth Pariseau
Data storage management, tape data storage

Ever since I started covering storage, I’ve been hearing the disk vs. tape debate, usually including proclamations that tape is dead or dying.

There are good reasons to make that assertion. Disk-based backup is catching on, particularly among SMBs, and data deduplication is evening out the cost-per-GB numbers between disk and tape for many midrange applications. Disk is preferable to tape in many ways, especially because it allows faster restore times for backup and archival data. Once again, people are starting to ask: what’s the point of using tape? Dell/EqualLogic’s Marc Farley posted a funny video on his blog Friday to illustrate the question.
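The dedupe arithmetic behind that “evening out” claim is simple – here’s a sketch with made-up prices and a commonly cited backup dedupe ratio, purely to show the mechanics:

```python
# Effective disk cost per GB = raw cost / dedupe ratio. All prices invented.
DISK_USD_PER_GB = 3.00   # hypothetical disk backup system, fully burdened
TAPE_USD_PER_GB = 0.50   # hypothetical tape library, fully burdened
DEDUPE_RATIO = 10        # e.g., a 10:1 ratio often cited for backup data

effective_disk = DISK_USD_PER_GB / DEDUPE_RATIO
print(f"disk with {DEDUPE_RATIO}:1 dedupe: ${effective_disk:.2f}/GB "
      f"vs. tape at ${TAPE_USD_PER_GB:.2f}/GB")
```

With numbers in that ballpark, deduplicated disk lands in the same per-GB neighborhood as tape, which is exactly why the debate keeps flaring up.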

I’m not so sure we’ll ever really see the end of tape. When it comes to the high end, there’s simply too much data to keep on spinning disk. The cost of disk is often still higher per GB, depending on the type of disk and the type of application accessing it. And that doesn’t include power and cooling costs.

I’ve also heard lots of good reasons to give up tape. And maybe in certain markets, like SMBs, tape will die — if it hasn’t already. But whenever tape is on the ropes, another trend comes along to boost it back into relevance.  When disk took over backup, the data archiving trend kicked in, and tape’s savings in power and cooling and its shelf life for long-term data preservation came to the fore. Now, as data dedupe has disk systems vendors pitching their products for archive, too, along comes “green IT” to buoy tape.

Now, I’d like to ask the same questions Farley did, because I’m just as curious to know, and because he and I may have different audiences with different opinions. Do you think tape is dead? If not, what do you use it for? Let us know the amount of data you’re managing in your shop as well.

