Storage Soup

November 7, 2007  5:20 PM

Another former Veritas exec defects from Symantec

Beth Pariseau

Executive holdovers from Veritas are growing scarcer at Symantec, following the company’s disclosure in an SEC filing this week that Kris Hagerman, president of the Data Center Management group, has left. According to a Reuters story on the departure, Symantec has also confirmed that James Socas, senior vice president of Corporate Development and head of the mergers and acquisitions practice, has left as well.

These departures are the latest in a general exodus of former Veritas officials from Symantec since the companies merged in December 2005. Veritas CEO Gary Bloom and head of the data management group Jeremy Burton also left Symantec before Hagerman. Beyond saying “a number of factors contributed” to Hagerman’s departure, Symantec declined to specify why he walked away.

Symantec has been keeping up with frequent updates to its Veritas products, especially the “meat and potatoes” backup software products, Backup Exec and NetBackup. But its sales numbers and ranking in the market have continued to decline. On its most recent earnings call Symantec CEO John Thompson said the company would begin pruning back its Data Center products, but did not give specifics as to which products would get the axe:

Earlier this year, we started an active review of our product portfolio to ensure many of the investments made over the years are meeting our return expectations. During the September quarter we identified some non-strategic assets in the Data Center Management Group that have not met those expectations. As a result, we are taking an $87 million writedown of some assets in the data center management group acquired by Veritas during the 2003/2004 timeframe. Going forward, we will continue to evaluate our portfolio to ensure that we focus our investment efforts on a few key strategic areas that drive long-term revenue growth.

Meanwhile, what had been a vibrant brand in Veritas has started to fade from the market, according to analysts.  “Since the acquisition, Veritas has not been as visible in the industry as it was–they used to be a very engaged company and took a leadership position with technology,” according to Taneja Group founder Arun Taneja. “After a year of hiatus following the acquisition, Symantec is keeping up with products, but they’re not providing leadership like Veritas used to.”

November 5, 2007  12:52 PM

Dell-EqualLogic-EMC: three’s a crowd

Dave Raffo

For years, Dell has made noise about pushing deeper into storage outside of its partnership with EMC – all the while maintaining that partnership.

Until now, that strategy consisted of baby steps with its PowerVault SMB storage platform. But today, Dell took a $1.4 billion leap into storage with its acquisition of EqualLogic.

The deal tells us Dell is clearly interested in becoming a bona fide storage vendor, and it took the best route possible to make that happen. It also tells us it is only a matter of time until its partnership with EMC falls apart, despite an agreement that runs through 2011.

Nobody from Dell or EMC will say that. Their party line is that the EqualLogic products fall into Dell’s SMB PowerVault platform, and that Dell and EMC will continue to co-market midrange Clariion systems.

That argument doesn’t hold up for several reasons. First, EqualLogic’s higher end systems are not SMB plays. The PS3000 it launched a year ago has a starting list price of $65,000 – more than twice that of most PowerVault products. EqualLogic has always considered midrange storage titans EMC, Hewlett-Packard and Network Appliance its main competition. And EMC people privately consider EqualLogic a genuine midrange competitor. EqualLogic can help Dell sell to SMBs, but it has also been adding features such as thin provisioning and virtualization capabilities to make its SANs more enterprise friendly. Is Dell going to scrap those technologies? That’s unlikely.

Then there is the iSCSI factor. By buying EqualLogic, Dell is betting most of its storage chips on Ethernet-based iSCSI. That’s no surprise. Dell’s business is built on Ethernet. But EMC’s is built on Fibre Channel. While Fibre Channel vendors have come to sell iSCSI and accept that it has benefits, they’re not betting their business on it. The major Fibre Channel vendors have even created a new protocol — Fibre Channel over Ethernet (FCoE) — aimed at stunting iSCSI’s adoption. Down the road, the paths of Dell and EMC will diverge over iSCSI and Fibre Channel.

Finally, there is the personal relationship factor. The EMC-Dell partnership benefited from a close relationship between the companies’ respective CEOs, Joe Tucci and Kevin Rollins. Tucci even showed up unannounced at a Dell Technology Day last year to show support when angry investors were calling for Rollins’ head. Now Rollins is gone, and founder Michael Dell is back at the helm. Nobody’s saying Tucci and Dell don’t get along, but it’s not the same as Tucci and Rollins. And there’s no guarantee that EMC fits into Michael Dell’s plans for turning his company around.

The EMC-Dell marriage made great sense when it began in 2001. Both companies were staunch competitors of Hewlett-Packard and IBM, which sold both servers and storage. So instead of EMC making its own servers and Dell manufacturing storage, they partnered. And the relationship has worked out until now: Dell is responsible for about 16 percent of EMC’s storage systems revenue and around one-third of its Clariion sales. But the landscape has changed, accelerated by Dell’s purchase of EqualLogic. The main question is how long it will take for divorce papers to be filed.

November 2, 2007  1:51 PM

The Storage “Killer App”

Tory Skyers

I know, I know … whenever I see “killer app” I roll my eyes too, but I’ve been shown the light, or indoctrinated, whichever way you want to look at it. So here is my take: I’ve found storage’s killer app, and it’s hiding in plain sight.


There … I said it … EMAIL.

At work we’ve been deeply involved in identifying the platform for our next generation SAN. We’ve been busy identifying performance metrics (the benchmarks I’ve taken off some of these machines are incredible–I’ll blog about it soon!!) and precisely what we use our storage for. I’m a big VMware fan and throughout this discovery process I’ve had in the back of my head that VMware is the biggest thing we need to plan for. I was looking at VMware/host virtualization as the killer app for storage and I was wrong.

While we don’t and wouldn’t actually look at what is in end users’ mailboxes, we do see the size of those mailboxes. We have quotas for most, but some folks warrant an exception. It dawned on me during this process that we’ve been building out our VMware infrastructure mainly to provide the filler for messaging and collaboration. I’m a cynic by nature, so I decided to take a deeper look into my own email habits to see if this theory held water.

If you ever really want to surprise yourself, take a look into the dusty corners of your home machine’s email program.

So my basic conclusion is my dinky laptop hard drive is no place for my archive.pst. I need RAID 10 striped across 14 drives … for just the imbroglio (this was a great SAT word submission!!) I call my inbox. How many attachments I have, how much email I actually get, how much of it I keep, how it gets indexed by my desktop search, and finally how it gets archived all lead to a surprising portion of my work laptop hard drive dedicated to email. I also started looking at where I pull the information from to put in my email and found that almost all the work I produce gets emailed to someone, then they store it or email it to someone else.

At work I have about 800MB of active email data and about 4GB archived for the last 6 months. At home it’s triple that–I actually had to build a virtual machine to handle my third-tier email archives (I like to have my email indexed and available) which leads me to why I believe email is the killer app for storage.

My personal email is always online, indexed and searchable. If I need a piece of information and I can remember one or two unique words I stand a great chance of retrieving it from just about anywhere (I use IMAPS) that I can install Thunderbird or Evolution. It is very convenient and once I figure out how to search from my BlackBerry I won’t ever have to remember anything but keywords.

This convenience of course requires storage, and not only that, but storage that can chew through tons and tons of 2K and 4K files (I use Postfix and Cyrus IMAP on virtual machines at home) to find the bit of information I’m looking for.
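For a rough sense of scale, here’s a back-of-envelope sketch. The seek time and average message size are my own assumptions for illustration, not measurements from any real environment:

```python
# Back-of-envelope: why millions of 2K-4K mail files punish a disk.
# Assumptions (mine): ~3 KB average message, ~10 ms average random
# access per file on a 7,200 rpm SATA drive. A brute-force search that
# touches every message is dominated by seeks, not data volume -- which
# is exactly why indexing (and fast storage underneath) matters.

SEEK_MS = 10.0            # assumed average random access time per file
AVG_MSG_BYTES = 3 * 1024  # assumed average message size

def brute_force_scan(archive_bytes):
    """Estimate message count and minutes to open-and-read every message."""
    files = archive_bytes // AVG_MSG_BYTES
    minutes = files * SEEK_MS / 1000 / 60
    return files, minutes

# The 4 GB work archive mentioned above:
files, minutes = brute_force_scan(4 * 1024**3)
print(f"{files:,} messages, ~{minutes:.0f} minutes of pure seeking")
```

Under those assumptions that is nearly 1.4 million messages and close to four hours of pure seeking for a single un-indexed search pass; triple it for the home archive.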

Scale me to an enterprise level: think Ultraman. Today there are people in the workforce who don’t know what POP is (remember PINE? Ahh, the good old days…), have been using their email accounts since they could type, and email things to themselves when they want to save them. There is a new generation coming to the workforce after them that EXPECTS to be able to have their entire lives searchable and indexed, Google-style.

At work we are moving toward the SharePoint 2007/Exchange 2007/Office 2007 hegemony in the next year or two, and I am concerned that we as an industry don’t really and truly understand what collaboration does to storage requirements.

If I can now collaborate completely on my computer, how long will my organization have to keep my OneNote stuff around? Will there be some sort of e-discovery for the group whiteboard? Where will we store all this stuff?

November 1, 2007  9:37 AM

Three emerging storage ‘mega-trends’

Maggie Wright

It may be a little early in 2007 to start prognosticating about what is going to occur in 2008 and beyond. However, there are some major trends — I would almost classify them as “mega-trends” — that I see taking shape. These trends indicate that, at a higher level, storage management is shifting from managing bits and bytes to treating storage as a cheap, abundant commodity that can be used to solve specific business problems.

Nowhere is this more evident than in the increasing number of small and midsize businesses (SMBs) that are switching to online backup. Though the trend started some time ago (some vendors noticed a serious uptick in business about 18 months ago), it should only accelerate in 2008.

Backup is strategic to SMBs only in the sense that SMBs recognize they need to do it and that they need help doing it. If they can outsource it for about the same cost or slightly more than they are paying now with a high level of assurance that it will work, most will do it.

Contributing to this trend is that backup service providers are maturing to become managed service providers (MSPs). They no longer provide just online backup and support user-initiated recoveries. They are diversifying to provide an entire range of data management services that SMBs need such as archiving, data classification and different tiers of disaster recovery services.

MSPs still are at different stages in providing these services and, for now, users should still view these new service offerings with a fair amount of skepticism. However, it is reasonable to assume that by 2009 MSPs should have many of the kinks worked out and will offer more robust data management services.

Another trend that is emerging is the need for storage managers to develop a close relationship with their legal departments. This is significant because the way IT manages data going forward will be driven as much by corporate legal departments as by internal business applications. “Just keep it all” or “delete it after three years” may be good starting points for data management, but the world has become much more complicated than that.

Andrew Cohen, who handles EMC’s legal department and corporate compliance, cites cost, legal statutes, defensible data management policies and e-discovery as the specific reasons that data management polices need to evolve and for IT and legal departments to work more closely together. Yet, for storage managers to focus on broader business and legal issues, they must put into place a storage infrastructure that doesn’t require their constant attention and is self-managing and self-healing.

That leads to the last major mega-trend I see emerging in storage: clustered storage. Anyone who deals with storage on a day-to-day basis knows that storage is anything but self-managing and self-healing — especially when used in a storage network. If anything, I would characterize most current storage network designs as exactly the opposite: self-destructing and self-defeating.

Clustered storage is shaping up to take one of two forms: clustered storage systems and virtualized storage. Clustered storage systems (sometimes called grid storage) from the likes of NEC, Isilon Systems and Panasas can create one large logical storage pool and, from a best-practices point of view, are probably the best option. However, that model often requires companies to standardize on one storage vendor’s product, which may or may not fit with how they procure their storage.

Virtualized storage is accomplished using a network-based storage virtualization product such as EMC’s Invista or Incipient’s iNSP. These products aggregate existing storage systems to present one logical storage pool to the server infrastructure, and they create a common console for common storage management functions such as data migration and provisioning.

How soon these emerging mega-trends come to pass remains to be seen. But dropping storage costs, the need for tighter relationships between IT and legal, and maturing storage technologies are contributing to the likelihood of these trends getting a foothold in 2008 and accelerating from there.

October 31, 2007  1:36 PM

Just in case you thought Sun and NetApp were kidding around

Beth Pariseau

Sun has launched yet another set of countersuits against NetApp, this time in California. “Sun was legally obligated to respond in Texas to the initial suit brought on September 5, 2007 by Network Appliance to forestall competition from the free ZFS technology,” Sun said in a statement emailed to press this week. The statement continued:

Today we filed additional counterclaims in California, and specifically under the Lanham Act and California Business and Professions Code, based on Network Appliance’s false statements to the public about the alleged use of Network Appliance patents in ZFS. In parallel, we will be bringing a motion before the court in California asking that the case filed in Texas be consolidated with the case filed today for trial in the Bay Area, headquarters to both Sun and Network Appliance. Today’s filing includes counterclaims against the entirety of Network Appliance’s product line, including the entire NetApp Enterprise Fabric Attached Storage (FAS) products, V-series products using Data ONTAP software, and NearStore products, seeking both injunction and monetary damages.

Since Sun was forced to litigate, we feel California is a more appropriate venue to do so for several reasons. First, Sun and Network Appliance are both headquartered in Northern California, within 10 miles of each other. Second, most discovery will take place in California, as many of the key inventors on the patents and primary counsel for both parties are based in California. From both a judicial and economic standpoint, it makes much more sense for the case to be in California.

Sun has accused NetApp of “venue shopping” by choosing the Eastern District of Texas. A Sun statement responding to NetApp’s original suit called it “a legal jurisdiction (East Texas) long favored by ‘patent trolls.'” The choice of district has been the source of head-scratching even from people who are still reserving judgement, given that the two companies are both located in California, as Sun’s statement points out.

Who knows what the truth is. It could also be that, since the district has a history of patent litigation, NetApp felt that court would be better able to sort out the he-said-she-said than a less experienced court in California.

But the longer this goes on (and boy, has it gone on), the more I start to think that even with the technical background I’ve picked up and the familiarity I have with both companies from covering storage for years, I’m not sure I would be able to sort out who’s right here. If this ever gets to trial, I do not envy the judge or potential jurors. Not one bit.

October 26, 2007  7:41 AM

Quantum’s balancing act

Dave Raffo

Of all the storage companies competing to sell data deduplication, Quantum is unique. That’s because it is primarily a tape vendor and data deduplication was developed to replace tape.

Look at some of the other vendors involved in what Data Domain CEO Frank Slootman calls a “land grab” for deduplication customers. Data Domain, Sepaton, Diligent Technologies and FalconStor sell virtual tape libraries (VTLs), EMC and Network Appliance sell massive disk arrays, and Riverbed sells WAN optimization. Their sales forces wouldn’t know LTO-4 tape from masking tape.

Then there is Quantum, which after gobbling up rival ADIC last year will sell close to $1 billion worth of tape products this year. Quantum CEO Rick Belluzzo isn’t buying into the “tape is dead” line you hear from most de-duplication vendors.

 “Tape will continue to have an important role,” he said. “Very few customers are looking to go tapeless.”

Quantum won’t be the only tape vendor selling deduplication devices for long. Overland Storage will come out with its deduplication appliance soon. Still, most deduplication vendors disagree with Belluzzo about the long-term future of tape. Sepaton spelled backward is “no tapes,” and the company was built on the premise that tape is going away. So was Data Domain, and Slootman says when Data Domain sells appliances, “We replace tape in almost every instance.”

Belluzzo said that’s because Data Domain sells to remote office and midsized companies. Quantum’s strategy is to push into the enterprise, with the DXi7500 enterprise system coming in a few months to go with Quantum tape libraries. He says it doesn’t have to be one or the other in large shops.

“I hear our competitors say, ‘It’s clear that tape is dead.’ That has no credibility with customers,” Belluzzo said. “We still sell tape. We see tape replacement along the edge, where they collect data and replicate it to the data center. But tape plays a critical role in centralized data centers and consolidated SAN backup schemes. The whole story is, in midsized and enterprise data centers, people are buying disk and tape together.”

Quantum claims 120 customers for its DXi de-duplication appliances over the last six months. Market leader Data Domain has about 400 over that same period.

Another area where Quantum does a balancing act is with its de-duplication patent. With deduplication’s popularity rising and other vendors looking to get into the act, Quantum could license its technology and let others sell it. Data Domain paid a $5.4 million royalty for the patent earlier this year. And Quantum is suing Riverbed for patent infringement.

Belluzzo said Quantum is a product company, so licensing its technology takes a back seat. He won’t rule it out, though.

“It’s always a balance you face: do you hold onto it and let the market work around you, or do you exploit it for commercial purposes and let the market come to you?” he said. “We’re trying to balance that now.”

October 24, 2007  3:57 PM

Another day, another unencrypted backup tape lost

Beth Pariseau

I have yet to get a letter from an institution with which I do business that starts like this:

Dear Current or Former PEIA, WVCHIP, or AccessWV Member:

We are writing to you because of a recent data security incident. On October 16, 2007, a mainframe computer tape containing your and your dependents’ name, address, and social security number was reported as lost by United Parcel Service (UPS) while en route to PEIA’s data analyst.

But the longer I stay on the storage beat, the more I feel like the day is coming.

October 24, 2007  2:48 PM

Hitachi GST claims 40 percent power reduction in desktop drives

Beth Pariseau

Hitachi GST is back at it again this week with another update to its disk drives, this time with a redesign of its desktop SATA and PATA drives for power efficiency. Hitachi claims the updates to its silicon on the new Deskstar P7K500 drive can reduce the drive’s power consumption by up to 40 percent–or down to as low as 6 watts when active and 2 to 3 watts while idle.

The new specs were accomplished in a couple of ways: a new system-on-a-chip design for the power modules, and a change in the power regulator on each drive from a linear architecture to a switched one. The moves were made with an eye toward the new Energy Star 4.0 spec for PCs released in July, which allots a “budget” of 50 watts for the whole system in idle mode, of which an estimated 8.3 watts go to the disk drive. With the new 250 GB version of the Deskstar, Hitachi is claiming a draw of 3.6 watts in idle mode, and 4.8 watts for the 320, 400 and 500 GB models.
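Putting those numbers together — this is just arithmetic on the figures above, not anything from Hitachi — shows how much of the estimated disk share of the idle budget each model leaves on the table:

```python
# How much of the estimated 8.3 W disk share of the Energy Star 4.0
# idle budget the new Deskstar P7K500 models free up for the rest
# of the system (RAM, motherboard features, clock speed).

SYSTEM_IDLE_BUDGET_W = 50.0  # Energy Star 4.0 idle allotment for the whole PC
DRIVE_SHARE_W = 8.3          # estimated portion of that budget for the drive

headroom = {}
for capacity_gb, idle_w in [(250, 3.6), (320, 4.8), (400, 4.8), (500, 4.8)]:
    headroom[capacity_gb] = DRIVE_SHARE_W - idle_w
    print(f"{capacity_gb} GB model: {idle_w} W idle, "
          f"{headroom[capacity_gb]:.1f} W freed for the rest of the system")
```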

The savings won’t necessarily make a dent in anyone’s home electric bill, according to Lee Johnson, 3.5-inch Product Marketing Manager for Hitachi. “But with the additional watts left over, PC makers can use that added wiggle room to design PCs with more RAM, more features on the motherboard, or a higher processor clock speed,” she said.

Hitachi plans to add similar power-savings technology to its enterprise-class drives, but IDC’s John Rydning says that may not necessarily be practical–nor lead to significant cost savings in enterprise disk systems.

“At the enterprise level there’s not a lot of impact on the overall system by reducing idle drive power draws,” he said, noting that turning drives completely off through MAID is the way the enterprise is headed. “But if you’re a large enterprise organization with hundreds or thousands of PC workstations, this might make a difference.”

October 24, 2007  8:24 AM

SNW’s winners and losers

Maggie Wright

Last week, I met with more vendors and was briefed on more new technologies than I thought possible in a 3-day period at Storage Networking World (SNW) in Dallas, TX. However, now that I am back in the comforts of Omaha, NE (if one can ever call Omaha comfortable), here are some of the briefings and interviews that I found to be the most interesting. And some that I thought were totally unremarkable.

Sun’s director of storage marketing, Dave Kenyon, and I met under the pretense of doing an interview for an upcoming article for Storage magazine on VTLs that manage disk and tape. But, whatever Dave was on during our interview, I need to get me some of that. I’m guessing Dave was up all night with the SNW crowd and his coffee was just kicking in when we sat down for our 9:00 am interview on Wednesday morning, because he let it rip. From blasting how backup software manages disk to wondering aloud why open systems vendors and users fail to learn the same lessons that the mainframe folks learned years ago, Dave solved backup’s problems (and most of the world’s) in the 30 minutes we met.

I also met with Isilon Systems’ director of marketing, Brett Goodwin. In the last year, Isilon Systems has gone from Wall Street darling and supposed NetApp-killer to a stock price collapse and whispers on the street that their product was having problems.

Brett explained that Isilon Systems had initially set earnings expectations too high, and when it then failed to meet even the lowered expectations, it was promptly punished by Wall Street. As for the rumors about the IQ product not working well, he said it was more a matter of Isilon’s VARs selling into accounts they had little or no business selling into. Isilon Systems’ IQ series operates best with video streaming applications, not in most business environments where random file access is the norm.

On the other end of the spectrum, I had a most unremarkable briefing with SeaNodes. SeaNodes provides clustered storage software that pools unused capacity on Linux servers’ internal hard drives and shares it among those servers. Now, I thought this idea was dumb five years ago, when a company named Monosphere attempted to do something similar for Windows servers. Monosphere has since seen the light and moved on to more intelligent pursuits, so I was dumbfounded that another company would try the same thing.

In SeaNodes’ defense, at least they are shooting for the clustered, high-performance Linux server market that uses 500 and 750 GB internal drives, where the aggregate excess capacity on internal drives probably reaches hundreds of TBs. However, users should only look at this technology if they are as geeky as the people who run clustered server computing farms and would rather be saving a few terabytes of storage than trying to figure out how to squeeze time into their schedules to hit the golf course before the first snow of the season flies.

October 22, 2007  3:34 PM

SAS vs SATA: SATA on the ropes.

Tory Skyers

Not sure if I mentioned this before, but I’m a geek. I like blinking lights and shiny things. I do math and physics for fun. I’d choose a good computer magazine over Maxim . . . well, maybe not THAT much of a geek, but you get the point.

So what’s provoked my geekitude this time? SAS benchmarks!

My friend Karl and I go back and forth about SAS disk benchmarks. I follow him in his quest to get past the 200MBps ceiling on his desktop. I poke fun at his pursuit while secretly hoping he’ll find that right combo to break the 200MBps mark so I can buy it.

Further fueling my mental yoga over disks is the fact that SAS has invaded our server room at work like a plague. A good plague, but a plague all the same. I went to work one day and realized we don’t use SCSI in anything but our legacy machines. Honestly, I love it: the performance of SAS drives is great, they are small (we use 2.5-inch SAS on IBM blades), they don’t make as much noise or heat, they don’t use as much electricity, and they have reasonable capacity.

So what’s the problem? The problem is that I go home (well, sometimes, anyway) and I don’t have SAS at home, I have SATA.

Mind you, my SATA array sits behind an Areca 8-port RAID controller with 128 MB of cache on a PCI-Express based card, so it’s no slouch. But it’s not SAS, not by a long shot.

I now . . . must . . . have it. I neeeeeeeeeeed it. I don’t care what body part it’ll cost me! I want the speed and lightning response I get when I click the Start menu or do some data migration chore on a SAS-based machine.

Vendors are now offering SAS cards with no RAID 5 or write cache available for about $150. The drives are about $250, which makes a small array at home not out of the question. (I just have to come up with a compelling argument to submit to the home finance committee. BTW, consider this an official cry for help to come up with an argument that will avoid the dreaded giant red “Denied–resubmit in 90 days” stamp the chair of said home finance committee has in her possession.)

But while trying to come up with this argument, it hit me. Traditional SCSI is dead as a doornail, and I missed the funeral.

In the meantime, if SATA ever slows down in its capacity growth, it had better look out too.

If I’m willing to sacrifice a bit of space for the speed, who else out there is willing to do the same? A decent capacity SATA disk will run you $200; a 150 GB Western Digital Raptor (10k rpm SATA) will run you $220. So why bother? Why not spend the extra $30 and get SAS? The controllers are about the same cost now for quality brands, the cabling and power envelope are roughly the same, acoustics on the 2.5-inch drives are not bad and the thermal footprint is not outrageous.

And there’s a downside to size. How long would it take to rebuild a RAID 5 or 6 array made up of 4 TB drives? How would I cope if I lost 4 TB of data?
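A crude lower bound, under a sustained write rate I’m assuming purely for illustration, shows why the question is worth worrying about:

```python
# Rough lower bound on RAID 5/6 rebuild time: the replacement drive must
# be written end to end, so capacity / sustained write rate is the floor.
# Real rebuilds run slower, since the array keeps serving I/O and has to
# read all the surviving drives to reconstruct the lost data from parity.

def min_rebuild_hours(drive_tb, sustained_write_mbps):
    """Floor on rebuild time for one failed drive."""
    drive_mb = drive_tb * 1_000_000  # decimal TB -> MB, as drives are sold
    return drive_mb / sustained_write_mbps / 3600

# Hypothetical 4 TB drive at an assumed 80 MB/s effective rebuild rate:
hours = min_rebuild_hours(4, 80)
print(f"at least {hours:.0f} hours exposed with degraded redundancy")
```

For all of those hours a degraded RAID 5 set has no redundancy left at all, which is the real argument for RAID 6 as drive capacities keep growing.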

My future holds a 32 GB to 64 GB RAID 1 solid-state disk for my OS, with capacity SAS for the 3 TB that Office 2010 is going to take up. IBM has already released a 16 GB SSD for their blades with the 32 GB models soon to be widely available. Not only that, but you can set them up in RAID 1. (Every time I say “RAID 1 SSD” I have to giggle.)

Can someone give me an irrefrangible (Thanks for the SAT submission! More more!!!) argument why SAS will not someday soon be the SATA of today?
