Storage Soup


January 28, 2008  4:19 PM

EMC slaps its logo on the Boston Red Sox

Beth Pariseau

As a storage reporter who’s also a fanatical Red Sox fan, I’m in good position to comment on EMC’s latest marketing move: the agreement with the Boston Red Sox to place a small patch with the EMC logo on the Red Sox uniform shirt during the team’s trip to Japan in April.

The ‘work’ side of me understands why both the team and EMC would be interested in this joint venture. EMC has already sponsored an entire level of Fenway Park, and its logo is plastered about in many places at the old grounds. The Red Sox, under MLB’s mandate to expand its global reach, need to bring a $200 million team halfway around the world for Opening Day, and ticket prices are already $90. Doesn’t seem like they have a whole lot of choice.

But the sports fan in me remembers the flak when there was talk of displaying ads for Spider-Man 2 on the bases used in games. Heck, in Boston, the sanctity of the Green Monster–historically a billboard anyway–has been cited in decrying advertisers. The problem for Red Sox management is that they are working with a very valuable, but very finicky brand.

At the end of the day, the Red Sox are a sports franchise, and a business, and entertainment. But many people in Boston have deeper feelings about the team–it’s a cultural institution for many people, and for some, even a sacred one. Putting an advertiser’s logo on one of the bases at the same park where Ted Williams played...well, you might have an easier time convincing a churchgoer to accept corporate sponsorship on the altar. I know some Red Sox fans who fear that baseball will eventually become like NASCAR, with jerseys so bedecked in ads you can’t tell what team they’re supposed to be for. To see this happen to the Red Sox, for many people in EMC’s home state, would be agony.

Not every Red Sox fan feels this way, and I can’t speak for everyone in Boston–and certainly not Japan, where the logos will be displayed in part to announce EMC’s sponsorship of Major League Baseball. But I will say that the popular fan blog / Boston Globe subsidiary Boston Dirt Dogs posted a photo yesterday of Larry Lucchino holding up the patch at a press conference with the headline, “Nothing’s sacred.” Even though it’s only going to be in Japan, and even though it’s just one logo, it’s the first time the Red Sox uniform has displayed any corporate logo that didn’t belong to the sporting-goods company that manufactured it. I don’t count on a lot of Red Sox fans buying that it’s not a slippery slope.

To people outside the day-to-day baseball melodrama that surrounds the Red Sox, I understand why that might seem silly. And a little hypocritical, if you think about it, because recent attempts by a Boston City Councilman to remove the giant neon Citgo sign from the roof of a building in Kenmore Square in protest of Hugo Chavez met with derision from Sox purists. I can also understand why EMC would want to become another Citgo–to have its logo become another cultural icon, particularly as they try to expand into the consumer storage space, and for the first time have a message for the consumers that fill the ballpark.

Problem is, I don’t think it’ll work. Things are different than when the Citgo sign was installed. Nowadays the sign isn’t really seen as an advertisement so much as a landmark, and its visibility just over the top of the Green Monster from inside the park has made it as much a part of the landscape there as home plate. But in general, corporations and their products are not seen as friendly companions or benevolent institutions. People are going to the ballpark in Boston for entertainment, yes, but also to reconnect with an experience that feels genuine, a throwback to a simpler time. An advertising logo on a uniform that’s barely changed in 100 years isn’t going to sit well in that context.

January 24, 2008  8:29 AM

Symantec shines spotlight on Backup Exec 12

Dave Raffo

Symantec updated its NetBackup enterprise backup product last year with version 6.5, and now is preparing to upgrade its Windows-based Backup Exec software.

During Symantec’s Wednesday night earnings conference call with analysts, CEO John Thompson said Backup Exec 12 is due out around March. Thompson didn’t go deeply into details, but said the new version would more tightly integrate Symantec’s security with the backup product it acquired from Veritas. Backup Exec’s last major upgrade was version 11d in late 2005.

“It candidly does some of the things that we had envisioned when we brought the two companies together, where you can have a vulnerability alert trigger a more frequent backup process,” Thompson said of the upcoming 12.0. “So it’s our belief that we’re starting to see some of the real benefits that we had envisioned a few years ago in bringing security and security-related activity closer to where information is being either managed or stored.”
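What Thompson describes amounts to event-driven backup policy: a security alert tightens the backup schedule. A minimal sketch of the concept, with entirely hypothetical names (this is not Symantec’s API):

```python
from datetime import timedelta

class BackupPolicy:
    """Sketch: a vulnerability alert triggers a more frequent backup cycle."""

    NORMAL = timedelta(hours=24)
    ELEVATED = timedelta(hours=1)

    def __init__(self):
        self.interval = self.NORMAL

    def on_vulnerability_alert(self, severity):
        # A serious alert shortens the backup interval, so less data is
        # at risk while the vulnerability window is open.
        if severity >= 7:  # CVSS-style 0-10 score, assumed here
            self.interval = self.ELEVATED

    def on_all_clear(self):
        self.interval = self.NORMAL

policy = BackupPolicy()
policy.on_vulnerability_alert(severity=9)
print(policy.interval)  # 1:00:00
```

The point is only the coupling: the security layer emits an event, and the data protection layer reacts by changing how often it runs.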

Thompson told analysts both Backup Exec and NetBackup had strong sales last quarter, driven by a move to disk-based backup. “Every major customer that I speak to is absolutely thinking about how do they move away from tape,” he said.

Thompson did not address – nor was he asked about – two things Symantec is late on delivering: storage software as a service (SaaS) and the integration of the continuous data protection (CDP) technology it acquired from Revivio in late 2006.


January 16, 2008  5:52 PM

If EMC releases solid-state drives in a forest…

Beth Pariseau

As soon as EMC’s announcement that it had added support for solid-state drives (SSDs) to Symmetrix crossed the wire, guess who called? If you’ve been watching the storage space, you know it had to be Hitachi Data Systems (HDS), whose high-end USP array has been do-si-doing around Symmetrix in the high-end disk array market for the last year and a half.

Turnabout’s fair play for HDS–as soon as it beat EMC to thin provisioning with the announcement of the USP-V last September, EMC went on the attack while both storage giants ignored the fact that they’d been soundly beaten to the feature by startups. I had a brief chat with HDS chief scientist Claus Mikkelsen yesterday, to see what HDS had to say about EMC’s blue-ribbon finish in the race to “tier zero.”

Generally when vendors gather to pooh-pooh one another’s products, they take one of two tacks: either poke holes in the soundness of the technology (EMC’s tactic in the earliest days of USP-V) or say there’s no market for it. In this case, HDS has taken the latter approach.

“Hitachi was in the solid-state disk business and the demand was very, very slight,” Mikkelsen began. Further questioning revealed that Hitachi’s disk division was offering standalone solid-state devices in the late ’90s…not quite the same business as flash drives embedded in an array, but I heard him out.

“Currently, flash has a limited number of writes before its memory layers wear out, and the use is limited to applications which are almost 100% very random reads,” he continued. “Even if EMC ships 10,000 solid state drives this year, it’s only 0.25 percent of their total shipments.”

Sour grapes? Maybe. “The drives they’re using have a SATA interface, you should be able to just pop them into any array,” Mikkelsen sniffed. “If they’ve created a market here, we’ll just jump right in.”

But out of curiosity, I also called a user at a major telecom that is a petabyte-plus EMC shop. This user and I have gotten into the nitty-gritty about performance-tuning storage before, and performance is king in his transaction-heavy environment. If this guy isn’t buying in, I thought, then who is?

Turns out he isn’t. “I think it’s great someone’s trying to make progress in this space–it’s been ignored,” he said. But even for his blue-chip company (he didn’t want it named in conjunction with his vendor), the whopping price tag for solid state drives is too much. “It has yet to get to the point where it’ll balance against savings on Tier 1 storage,” he said, though he admitted he has yet to do an in-depth analysis. “There might be certain cases…if we were less budget constricted or the timing was right, like we were going through a product refresh, we might look at it sooner, but for me this year, it’s not going to happen.”

EMC says it has the solid-state drives in beta tests at several of its “household name” customers’ shops. But did those shops pay for the drives? Did they pay full price? Will they put them into production? We don’t know right now–EMC says they’re not available for interviews.


January 16, 2008  4:54 PM

Reyes draws 21 months

Dave Raffo

Former Brocade CEO Greg Reyes received a 21-month sentence and a $15 million fine for his role in backdating options given to employees of the SAN switch vendor.

Reyes won’t do jail time yet. U.S. District Judge Charles Breyer let Reyes go free pending an appeal. And it could’ve been worse; prosecutors had recommended that Breyer sentence Reyes to at least 30 months, fine him $41 million and ask him to repay Brocade for legal fees after his conviction last August.

Breyer said he was swayed by nearly 400 letters of support he received on Reyes’ behalf and took that into consideration when he sentenced the disgraced CEO.

And no, those letters didn’t all come from other CEOs whose companies have been investigated for improperly reporting backdated options. Breyer did at least make it clear why Reyes is facing jail time despite claims from his supporters who say he should not have been convicted because he did not personally benefit financially and there was no victim in this crime.

“This offense is about honesty,” Breyer said in handing down his sentence, reminding all that it’s not all right when a CEO breaks the law to recruit talent or place his company in a better light.

For his part, Reyes apologized, said he regretted his actions and admitted “there were many things I would have done differently” if he could turn back the clock. That didn’t sound like someone who pleaded not guilty and sought a new trial after claiming a witness changed her story, as Reyes did.


January 16, 2008  11:05 AM

Storage and VMware walk virtual tightropes

Beth Pariseau

It all started with a pretty run-of-the-mill partnership agreement. FalconStor announced its storage virtualization, snapshot and replication software will support Virtual Iron’s virtual servers last week, and my SearchServerVirtualization.com colleague Alex Barrett and I agreed to take briefings.

FalconStor is walking a tightrope here, because it’s also partnered with Virtual Iron’s large rival VMware. But FalconStor has to come up with reasons to use Virtual Iron over VMware (i.e., ways to promote that partnership). This led Alex to begin an interesting conversation with FalconStor’s vice president of business development Bernie Wu about the pros and cons of virtualizing storage with VMware vs. Virtual Iron. Wu pointed out what he’d later reprise in a separate call with me: that the use case for FalconStor’s IPStor storage virtualization software is in many ways stronger with Virtual Iron, because Virtual Iron, unlike VMware, doesn’t have its own file system.

As Burton Group senior analyst Chris Wolf patiently explained to me later, VMware’s file system means that its hypervisor (the software layer that controls the host server, guest OSes, and their interaction with the rest of the network) is handling virtual hard disk mapping on back-end storage systems. You can use VMware with raw device mapping (RDM), but then you turn off many of the features VMware users have come to like about VMware, such as VMotion. (RDM also has a slightly limited “virtual mode” as of 3.0, but that’s a tangential discussion.) This makes virtual hard disk mapping performed by storage virtualization products, whether appliances or software, at least somewhat redundant.

So I asked Wu, “What are users missing out on if they can’t use your storage virtualization software with VMware?” His first answer was large-scale data migrations.

Up until VMware’s Virtual Infrastructure 3.5, VMware had no ability to move the data it managed in its virtual hard disks on back-end storage; hence storage virtualization devices stepped in to fill the gap. With Storage VMotion in 3.5, that gap was at least partially closed. Storage VMotion is still a difficult way to do a large-scale migration, however, because it migrates data one host at a time. So storage virtualization devices, which perform migrations en masse, still have that advantage. At least, until and unless Storage VMotion adds that capability.

Aside from large-scale migrations, Wu also told me that thin provisioning is another capability IPStor offers that VMware doesn’t. That’s a big deal–VMware’s best practices recommend that users allot twice the amount of disk space they actually plan to write to; the ability to expand capacity on the fly helps everyone avoid buying 2x the amount of storage they need.
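Thin provisioning’s trick is promising capacity up front while allocating physical blocks only on first write. A toy sketch of the concept (not IPStor’s or VMware’s actual mechanism):

```python
class ThinVolume:
    """Thin-provisioned volume: capacity is promised up front, but
    physical blocks are allocated only when a block is first written."""

    def __init__(self, provisioned_blocks):
        self.provisioned = provisioned_blocks
        self.allocated = {}  # block number -> data, allocated on demand

    def write(self, block, data):
        if block >= self.provisioned:
            raise IndexError("beyond provisioned capacity")
        self.allocated[block] = data

    def used_blocks(self):
        # Physical consumption reflects only what was actually written.
        return len(self.allocated)

vol = ThinVolume(provisioned_blocks=1_000_000)  # looks like a big volume
vol.write(0, b"boot")
vol.write(42, b"data")
print(vol.used_blocks())  # 2
```

The host sees a million-block volume, but only two blocks of physical storage are consumed–which is exactly how thin provisioning lets you stop buying 2x the capacity you need.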

The Burton Group’s Wolf pointed out plenty more gaps in VMware’s storage capabilities–heterogeneous array support; heterogeneous multiprotocol support (Storage VMotion doesn’t support iSCSI yet); I/O caching; and heterogeneous replication support.

Some of these gaps will likely be filled by VMware or the storage industry. For instance, when it comes to multiprotocol support, VMware’s MO with new features has always been to support Fibre Channel first, then get around to iSCSI soon after. And what happens to the need for heterogeneous multiprotocol support if FCoE ever takes off? What of I/O caching, when and if everybody’s working with 10 Gigabit Ethernet pipes? And VMware’s launching its own management software for heterogeneous replication support (even if it’s not doing the replication itself).

So it seems that storage virtualization players will have to start coming up with more value-adds for VMware environments as time goes on.

VMware has its own tightrope to walk, too. Take replication for example–VMware supports replication from its partners, saying it doesn’t want to reinvent the wheel. But that’s the kind of thing it said when users were asking for Storage VMotion back in 2006, too.

“Deep down, I believe that VMware isn’t going to push its partners out,” Wolf said. And indeed, VMware did make a good-faith gesture last fall with the announcement of a certification program for storage virtualization partners. Wolf also pointed out, “A lot of organizations are afraid to deploy something in the data path unless their hardware vendors will support and certify it–without the support of their partners, VMware would have a tough time playing ball.”

But that might not be the case as much now as back when EMC first bought VMware in 2003 and everybody in the storage world scratched their heads and wondered why. Now, VMware has its own muscles to flex, as its billion-dollar 2007 IPO for 10 percent of the company proved.

More and more analysts are telling me that the hypervisor will become the data center operating system of the future. Over in the server virtualization world, Wolf says VMware competitors argue that the hypervisor is a commodity, and VMware says it isn’t. “In order to keep the hypervisor from becoming commoditized, they have to keep adding new features,” he said.

Which suggests to me that storage virtualization vendors should probably be working on new features, too.


January 11, 2008  10:18 AM

The eBay effect on storage

Tory Skyers

Have you ever heard of the “butterfly effect”? In essence, it is a way to conceptualize the possibility that the flapping of butterfly wings in the Amazon jungle can be the catalyst for a hurricane in Texas.

I think–now, mind you, this is just a thought–the same is going to occur on eBay with storage.

All the innovation currently going on in the enterprise storage arena–10 Gigabit Ethernet (10 GbE) and iSCSI come to mind — is going to be a catalyst that makes businesses retire storage technology faster than usual, filling the secondary market with great usable stuff.

So if you’re a budding storage geek, or an established one looking for the next challenge, eBay is going to become the place to shop. It is not for the faint of heart–that array you spent $200,000 on? Well, $1000, “Buy It Now” and nominal shipping may be able to snag it!! (A slight exaggeration, but I DID find a StorageTek Flex D280 for sale for $2000, with disks!!)

The butterfly here is progress. The tropical storm comes when these budding storage geeks enter the workforce, or bring to work the ideas they’ve tried out at home, free of uptime requirements and management intervention. That will spark a fresh round of grassroots innovation from people who can tinker untrammeled.

The Linksys WRT54GL is a great example of how a group of tinkerers can influence a large company. Check out DD-WRT. Ever wonder why so many wireless routers come with USB or eSATA ports on them nowadays? That started with some hackers wanting to add storage.

eBay has allowed me to build what I otherwise couldn’t find a solid business justification for, or create an ROI schedule around, at work. I have the ability to test various scenarios and provide services to my toughest IT customers: my wife and 9-year-old son! Not to mention I have a really geeky conversation piece.

Using myself as an example, I’ve been able to build 3 different versions of a home SAN using technology I purchased from eBay. The first was 1 Gbps Fibre Channel from Compaq. I picked up a disk shelf for $29 plus $40 shipping. The disks cost a whopping $100 for 14, shipping included. The Fibre Channel cards were $3 a pop. From that, I learned that proprietary SAN technology stinks. Open is the only way to go, so … I started the second iteration, which cost a bit more to build.

The second iteration of my SAN was a bit more of an investment in both time and money, but a tenth the cost of new. I bought a new Areca 8-port SATA array controller with onboard RAM (the only new part in my SAN). I plugged it into a dual-Opteron motherboard with guts from eBay as well, and bought a lot of 10 250 GB SATA drives (2 for spares). The drives were the deal of a lifetime: for just about $300 I got 2 terabytes of storage! Apparently they came out of a Promise array and the seller had upgraded to 500 GB drives.

At the time, I didn’t have any Gigabit Ethernet ports, so I opted to buy used 2 Gbps Fibre Channel cards and a used Fibre Channel switch. This was a bit more costly than the first SAN, so I put some of my old goodies on eBay (!) to foot the bill.

The third iteration is the one I’m currently constructing. The second SAN, or Second of Many as I like to call it (an ode to Star Trek: Voyager), is still “in production” and servicing my VMware and file-sharing needs, but I felt the need to make it more modular. So far, I’ve gotten an unmanaged Gig-E switch, first-generation TOE (TCP Offload Engine) NICs, the controller “head” and a couple of SAS disks. I’m making the switch from a SATA controller to a SAS controller to allow for mixing and matching speed and capacity on the same bus. I’ve sold some of my Fibre Channel gear and am going to try iSCSI this time.

The hurricane blows when progress has put 8 Gbps Fibre Channel and 10 Gigabit Ethernet in the datacenter en masse. This will push managed Gig-E components and 4 Gbps Fibre Channel components out into the secondary market, and make folks like me and value-conscious SMB buyers VERY happy!

How long this is going to take I’m not so sure, but if this year is truly the year of iSCSI, I would suggest you open the Web filters to allow a little eBaying at lunchtime. Type SAN into eBay and see what you come up with.


January 10, 2008  9:33 AM

What’s up with CDP for 2008

Maggie Wright

Some analysts touted CDP as the dark horse technology for corporate adoption in 2007. As we all know, that didn’t happen; the multitude of CDP technologies ended up confusing analysts, press and IT alike as they tried to sort out the differences between available CDP products and what CDP’s true value proposition was. All of these factors spoiled CDP’s debut.

However, I anticipate CDP will make a comeback in 2008 for two reasons: corporate needs for data replication and higher availability. Data replication has been around for a long time (only recently under the moniker of continuous), so it is a mature technology and well understood by storage professionals in the field.

“Higher availability” is the more important feature of CDP. Companies now must choose between high availability and semi-availability. High availability is associated with synchronous replication software and provides application recoveries in seconds or minutes, but at an extremely high cost. At the other extreme is backup software, which only delivers semi-availability: it can take hours, days or even weeks to recover data. CDP delivers higher availability, an acceptable compromise between these two extremes, as it can quickly recover data (typically under 30 minutes) to any point in time at a price that is competitive with backup software.
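CDP’s any-point-in-time recovery is easiest to see as a timestamped write journal. A minimal Python sketch of the idea (not any vendor’s implementation):

```python
class CDPJournal:
    """Continuous data protection as a write journal: every write is
    timestamped on capture, so the volume can be rebuilt as of any moment."""

    def __init__(self):
        self.entries = []  # (timestamp, block, data), appended in time order

    def record(self, ts, block, data):
        self.entries.append((ts, block, data))

    def restore_as_of(self, ts):
        # Replay all writes up to and including the requested instant.
        image = {}
        for when, block, data in self.entries:
            if when > ts:
                break
            image[block] = data
        return image

j = CDPJournal()
j.record(100, 0, "v1")
j.record(200, 0, "v2")  # overwrite of block 0
j.record(300, 1, "x")
print(j.restore_as_of(250))  # {0: 'v2'} -- the state just before the third write
```

Contrast this with a nightly backup, which could only hand back the state as of the last run; the journal can stop the replay at any instant, which is the granularity claim CDP vendors make.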

CDP also complements deduplication. While some may view CDP and deduplication as competing technologies (and in some respects they are), the real goal of data protection is data recovery.

This is where CDP and deduplication part ways. CDP captures all changes to data but keeps the data for shorter periods of time, typically 3 to 30 days, to minimize data stores. Deduplication’s primary objective is data reduction, not data recovery. Faster recoveries may be a byproduct of deduplication since the data is kept on disk but it is not the focus of deduplication so recoveries from deduplicated data do not approach the granularity that CDP provides.
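The contrast shows up in a toy model of deduplication: a content-addressed chunk store saves capacity by keeping identical chunks once, but restores only at the granularity of whole backup sets (all names here are illustrative, not a real product’s API):

```python
import hashlib

class DedupStore:
    """Content-addressed chunk store: identical chunks are stored once.
    Capacity shrinks, but recovery granularity is the backup set."""

    def __init__(self):
        self.chunks = {}   # sha256 digest -> chunk bytes (stored once)
        self.backups = {}  # backup name -> ordered list of digests

    def ingest(self, name, chunks):
        refs = []
        for chunk in chunks:
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # skip chunks already stored
            refs.append(digest)
        self.backups[name] = refs

    def restore(self, name):
        # Only whole backup sets can be restored -- no arbitrary point in time.
        return b"".join(self.chunks[d] for d in self.backups[name])

store = DedupStore()
store.ingest("mon", [b"AAAA", b"BBBB"])
store.ingest("tue", [b"AAAA", b"CCCC"])  # "AAAA" is deduplicated
print(len(store.chunks))  # 3 unique chunks stored for 4 ingested
```

Where the CDP journal keeps every write and can stop anywhere on the timeline, the dedup store keeps less data and can only hand back “mon” or “tue” in their entirety–data reduction first, recovery second.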

So what’s in store for CDP in 2008? The staying power of a new data protection technology is now largely determined by whether it is adopted by small and midsize businesses. If it’s practical and works there, it will find its way into the enterprise, because more and more enterprises work as a conglomeration of small businesses despite corporate consolidations. So, it is not a matter of if CDP will gain momentum in 2008; it is a question of how quickly it will become the predominant technology that companies use to protect all of their application data.


January 8, 2008  9:55 AM

EMC overshoots SMBs again

Dave Raffo

If EMC set out to improve its Clariion AX150, then it succeeded with the AX4 it launched today. But if it wants to offer a system optimized for SMBs, then it still has some work to do.

The AX4 follows the blueprint that storage companies used when they first started going after SMBs a few years back. They took their larger SAN systems and scaled them down in size and features. That didn’t work, and none of the large SAN vendors has made much of a dent on the vast SMB market. EMC and its partner Dell are now in their third generation of AX systems without much to show.

Meanwhile, EMC’s competitors Hewlett-Packard, Network Appliance, Hitachi Data Systems, and even Dell with its PowerVault MD3000 have delivered storage systems designed from the ground up for SMBs. And they cost less than the $8,000-plus price tag EMC puts on the AX4.

EMC counters that it delivers more capacity and technology for the money with the AX4. But SMBs want simplicity; do they really care about a Fibre Channel option or SAS/SATA intermix? Those features are aimed at EMC customers who want a storage system for a department or remote office that is compatible with their larger Clariions, not SMBs looking to network their storage for the first time.

Dell is more realistic with its positioning of the new system, which it calls the AX4-5. Dell product manager Eric Cannell says the PowerVault MD3000 is for small businesses and the AX4-5 for larger SMBs and “can scale up to the bottom of what you would consider a midrange array.” Dell also prices the system at $13,858 and up, clearly not a true SMB price point.

But the new system puts Dell in a sticky situation with its EqualLogic acquisition. Cannell declined to talk about where all Dell’s iSCSI products fit until the EqualLogic deal closes, but customers are likely confused. If the AX4-5 is so good, why did Dell spend $1.4 billion on EqualLogic?

In the long run, Dell has more at stake than EMC. When you sell the most systems with six-figure and even seven-figure price tags as EMC does, it doesn’t hurt much to lose out on $8,000 deals. But SMBs are Dell’s main line of business, and it’s crucial for Dell to get it right.


January 4, 2008  4:04 PM

Storage analyst goes to the Dell Side

Beth Pariseau

The industry’s been abuzz this week with rumors that ESG analyst Tony Asaro is headed to Dell, and today Asaro confirmed that’s the plan. He’s going to join Dell as a director of product marketing; today is his last official day as an analyst.

Asaro said his new role will be creating Dell’s storage strategy and evangelizing its storage products (some would say this is the role of analysts in the market today, anyway). When asked why he’s leaving his analyst gig, Asaro said he’s excited by the position iSCSI is taking in the market and by Dell’s direction following the $1.4 billion acquisition of EqualLogic in November. In other words, a boilerplate answer.

In fairness, Asaro has focused on iSCSI during his time as an analyst and has been bullish about that market’s future. Maybe he didn’t want to sit on the sidelines anymore, watching money roll in elsewhere. In that way, it’s refreshing to see an analyst put his career where his predictions are.

However, he’ll need to be careful to avoid the fate of another former analyst, Randy Kerns, who left the Evaluator Group to become a vice president of strategy and planning at Sun in September 2005, shortly after Sun completed a blockbuster acquisition of its own. Less than a year later, he left Sun, resurfacing in October 2006 as CTO of ProStor Systems.

Still, this news, along with Dell’s acquisition of The Networked Storage Co. in December, will be welcome to EqualLogic users concerned with customer support in the wake of the acquisition. Folding in added storage expertise shows Dell’s at least trying to make the right moves.


January 4, 2008  3:34 PM

A cluster of clusters to begin the year

Beth Pariseau

Storage vendors are always looking for the next big thing, and they bang the drums loudly when they think they’ve found it — often long before customers are ready to buy. Now they are making a lot of noise around clustered systems, particularly when it comes to selling to new types of businesses such as Web-based merchants and service providers.

Sun, EMC and IBM, which have mostly slideware at this point, have all disclosed intentions to tackle this space over the past month or so. IBM backed up the talk by acquiring grid storage system vendor XIV this week in a deal reportedly worth $300 million to $350 million.

But it’s still early in the cluster game, as Isilon painfully found out. Isilon went public on the strength of its success in the clustered NAS market, but that market apparently isn’t as big as Isilon’s execs and investors expected. Its first year as a public company was marked by disappointing revenues and a plummeting stock price.

NetApp, meanwhile, is finding it hard to cluster its traditional non-clustered NAS. More than four years after it acquired cluster technology from Spinnaker, NetApp hasn’t had great success with its ONTAP GX product. Users report that many features they’ve come to expect from NetApp aren’t working yet with GX. NetApp considers its clustered product mainly for the small high-performance computing (HPC) market at this stage. Others, such as startup Panasas, also sell clusters mainly to HPC customers.

So it will be interesting to see how much success IBM has with XIV’s Nextra systems, and what EMC and Sun come up with. 

There is one Web 2.0 company that has successfully deployed a highly parallel compute farm in massive-scale production, but it developed the technology in-house. That is Google, which built the Googleplex with nary a single storage vendor’s salesman present. But Google’s system isn’t for sale – much to the relief of storage vendors, but not their would-be customers.

