Storage Soup

December 15, 2008  2:23 PM

Latest addition to the storage swag collection

Beth Pariseau

A while back I did a riveting post for this blog covering my collection of storage s.w.a.g. (stuff we all get) picked up in my travels through the storage industry. I thought I’d follow up with a notice about the latest showpiece in my collection:


It’s a T-shirt, from EMC/Mozy. Occupying a place of honor (and a place of not-fitting) in my work area at the office, it doubles as a conversation piece and warning to intruders.

December 15, 2008  12:43 PM

Deduping slows but doesn’t stop data growth

Dave Raffo

While most organizations are likely planning to trim their IT budgets next year, a lot of them are also no doubt finding that cutting storage capacity will be difficult if not impossible.

Take Victaulic Company. The pipe-joining manufacturer purchased 30 TB of usable capacity with its new Sepaton S2100-ES2 VTL in September, and infrastructure manager Fred Railing says he’s already ordered 10 TB more because of an increase in data being backed up. And that’s with a 39:1 deduplication ratio from Sepaton’s DeltaStor software.

“[The VTL] was sized appropriately when we bought it, but the amount of data we have to back up has increased 30 percent already,” Railing says. “We keep six weeks’ worth of backup, and we’re close to capacity right now. We have to keep an eye on that to make sure it doesn’t fill up.”
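
The arithmetic behind Railing's concern is worth sketching out. The figures below come from the post, but the calculation itself is my own back-of-the-envelope estimate, not anything Victaulic or Sepaton published:

```python
# Rough VTL capacity math using the numbers quoted in the post.
usable_tb = 30        # usable capacity purchased in September
dedupe_ratio = 39     # 39:1 reduction reported from DeltaStor
growth = 0.30         # backup data has already grown 30 percent

# At 39:1, 30 TB of physical capacity can hold a large logical backup set.
logical_tb = usable_tb * dedupe_ratio
print(f"Logical backup data the VTL can hold: {logical_tb} TB")

# But the same dedupe ratio applied to 30% more source data still needs
# 30% more physical capacity -- dedupe slows growth, it doesn't stop it.
needed_tb = usable_tb * (1 + growth)
print(f"Physical capacity needed after 30% growth: {needed_tb:.0f} TB")
```

Which is roughly why a 10 TB top-up on a 30 TB system lines up with the 30 percent growth Railing describes.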

Railing says the increase in data stored hasn’t come from an acquisition or any unusual situation, and he doesn’t see it slowing down much soon.

“We have quite a few projects going on now, and we’ve added a lot more servers in the last nine months,” he said. “It always will increase, maybe not as dramatically as it has been lately, but our engineering and email data keeps growing and giving us more to back up.”

Victaulic dedupes everything it backs up, Railing says, although reducing data wasn’t the original reason for going to the VTL. He set out to reduce backup windows by using disk, and his backups have gone from 24 hours to 12.

“At first we were looking at just getting a VTL to shrink our backup windows,” he said. “We thought it was worth getting the dedupe option because we would end up buying so much more disk. Now we’re deduping everything we back up.”

December 15, 2008  9:41 AM

HDS adds solid-state drives to USP disk arrays

Beth Pariseau

Bringing up the rear among major vendors pledging support for solid-state drives (SSD), Hitachi Data Systems today said it would ship SSDs in its enterprise Universal Storage Platform and midrange USP-VM disk arrays in the first quarter of next year.

HDS did not reveal any partners for the SSDs it will ship in 73 GB and 146 GB capacities next year. The press release said HDS also plans to offer the SSDs being developed by Hitachi GST and Intel, but those products are not expected to ship until 2010. STEC is the other manufacturer of enterprise FC and SAS drives currently on the market.

HDS is ringing in the new year with a different tune than the one it sang in early 2008. Shortly after EMC announced support for SSDs in Symmetrix in January, HDS sought out reporters to throw water on the idea, saying there was no market for SSDs. (Though chief scientist Claus Mikkelsen also mentioned that HDS could support the drives and would “jump right in” if EMC had in fact “created a market” for SSDs).

December 11, 2008  4:28 PM

Shoah Foundation tames 8 PB with tape and automation

Beth Pariseau

Add this as a point in the ‘tape’ column if you’re scoring the ancient debate at home.

The Shoah Foundation, founded by Steven Spielberg to preserve Holocaust survivors’ narratives after Schindler’s List and now a part of the University of Southern California, has conducted interviews with thousands of survivors in 56 countries. The Foundation has 52,000 interviews that amount to 105,000 hours of footage.

CTO Sam Gustman says the footage was originally shot on analog video cameras, then converted to Digital Betacam and MPEGs for distribution online. It currently amounts to 135 TB. However, the Foundation is converting the footage to Motion JPEG 2000, which will create bigger files–about 4 PB of data, Gustman estimated. Each video will be copied twice, bringing the total to 8 PB.
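
As a rough sanity check on those numbers (my own estimate, not a figure from Gustman), 4 PB spread across 105,000 hours implies an average video bitrate of about 85 Mbit/s, which is in the right neighborhood for high-quality Motion JPEG 2000:

```python
# Implied average bitrate from the figures quoted in the post
# (decimal units: 1 PB = 10**15 bytes).
hours = 105_000
data_bytes = 4 * 10**15          # ~4 PB per copy, per Gustman's estimate

seconds = hours * 3600
mbit_per_s = data_bytes * 8 / seconds / 10**6
print(f"Implied average bitrate: {mbit_per_s:.0f} Mbit/s")
```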

Gustman says the Foundation received a $2 million donation of SL8500 tape libraries, Sun STK 6540 arrays and servers from Sun Microsystems in June. The Foundation runs an automated transcoding system on the servers, which uses the 140 TB of 6540 disk capacity as workspace. Sun’s SAM-FS software will automate the migration of data within the system, first to the 6540 and then to the SL8500 silo for long-term storage.

We’re hearing a lot in the industry these days about rich content applications such as this one moving to clustered disk systems, but Gustman said disk costs too much for the Foundation’s budget. He sees the potential for an eventual move to disk storage, but “disk is still too expensive–four to five times the total cost of ownership, mostly for power and cooling.”

Another advantage to the T10000 tape drives the Foundation plans to use is that they will eliminate having to migrate the entire collection to disk during copying, transcoding and technology refreshes. One T10000 drive can make copies or do conversions directly between drives in the robot, and the virtualization layer with SAM-FS means that can happen transparently.

However, as an organization charged with the historic preservation of records, Gustman agreed with others I’ve talked to about this subject in saying that there’s still no great way to preserve digital information in the long term. “The problem with digital preservation right now is that you have to put energy into it–you can’t just stick it in a box and hope it’s there 100 years from now,” he said. “Maybe there’ll be something eventually that you don’t have to put energy into, but it doesn’t exist yet.”

December 11, 2008  2:48 PM

The vagaries of disaster recovery, cont’d

Beth Pariseau

Last week, Tory Skyers wrote a post about the unforeseen complexities of disaster recovery after his PC’s motherboard fried. This week, I had an interesting discussion with somebody working with a much bigger enterprise infrastructure who also found that people, process, and sometimes luck–good and bad–can influence disaster recovery planning more than any technology.

Mark Zwartz, manager of information technologies for privately held real estate conglomerate JMB Companies, has been signed on with SunGard Availability Services for virtual server-based DR since August. Last week Zwartz gave me a deeper look behind the scenes at his disaster recovery planning process, and the ways it wasn’t so simple.

For one thing, it might not have happened at all without a contract re-negotiation with SunGard. “Our original contract was a wacky deal with [one subsidiary] that expanded to the other entities, but the contracts were so goofy and webbed into each other that if we called with a problem or needing to fail over, the people at SunGard might not have any idea which company or machines we were talking about,” he said. “We saved money on lawyers and negotiations, but quite honestly, if we had to failover, nobody at SunGard might’ve known what to turn on.”

The renegotiation of that contract happened to coincide with the beta program for the SunGard service. “That was the foot in the door to change things,” Zwartz said. Because it was a beta program, the companies got three free months of testing, which caught the attention of Zwartz’s management.

Meanwhile, JMB was in the midst of a hardware refresh, as well as rolling out virtualization. Still, “one of the hardest parts was selling virtualization–these are highly intelligent people who are handling millions if not billions of dollars, and it’s hard to explain the concept that they don’t actually ‘own’ anything [with virtual servers],” he said.

A broker at a hedge fund firm in the conglomerate was highly reliant on Outlook contacts and notes in each contact file to do her daily business. Meanwhile, the company still worked with traditional tape backup, which wouldn’t offer the granular protection to recover all of those contacts in the event of an outage.

I’m sure most of you out there in blogland know what happened next. “She was synching her BlackBerry herself, and blew out her contacts,” according to Zwartz. The only option for restoring Exchange backups was to restore the entire Exchange database from tape to a separate server, which the company had declined to buy. “They lost a significant part of a trade before that,” he said. “Nobody realized such a small thing would make her unproductive.”

Of course, without a fully staffed and replicated secondary environment, Zwartz acknowledged, it’s impossible to be disaster-proof. But the incident also had a silver lining when it came to convincing management to participate in the new SunGard program. “It would’ve cost $1,500 to get a new backup device,” he said. “It wound up taking six weeks to get one contact back and cost between $15,000 and $20,000. It was a selling point when it came to virtual servers with SunGard.”

December 9, 2008  4:26 PM

NetApp discontinues replication app

Dave Raffo

NetApp today quietly pulled the plug on its SnapMirror for Open Systems (SMOS) heterogeneous data replication software, acquired from startup Topio for $160 million in 2006.

In a press release posted on the NetApp Web site – but not distributed – the vendor said it would discontinue SMOS and close the former Topio development facility in Haifa, Israel on Jan. 15. According to the release, NetApp “has not made final employment decisions” on the 51 employees in Haifa.

NetApp acquired Topio for its Data Protection Suite, at least partly in response to EMC’s purchase of Kashya six months before. But while EMC built Kashya’s replication and CDP into RecoverPoint – a staple of its replication platform — Topio’s heterogeneous replication offering never caught on, even after NetApp re-released it as ReplicatorX and then SMOS.

NetApp’s release blames the product’s failure on a lack of interest in replication between multiple vendors’ products. “Our decision to terminate SMOS product development was based on customer priorities and actual purchase histories,” the release said. “The market for replication products for disaster recovery purposes is dominated by homogeneous, rather than multivendor, solutions. Our ‘any-to-any’ solution with SMOS was never adopted by customers in the way we anticipated.”

NetApp added that it remains committed to its other SnapMirror versions for “any-to-NetApp” data protection. SMOS customers will get three years of maintenance and technical support.

December 9, 2008  1:55 PM

Pillar pledges SSD support in 2009

Beth Pariseau

Pillar’s CEO Mike Workman dropped by our office today, and said that while Pillar retains its earlier reservations about SSDs not being best utilized behind a network loop, the company will support them next year. The systems vendor will not have an exclusive partner for the drives, Workman said, though he mentioned Intel as one supplier.

Pillar’s Axiom arrays separate disk capacity from the storage controller with components called bricks (disk) and slammers (controllers). Workman says the Axiom will support SSDs in the bricks, and the arrays’ QoS features will be updated to support moving workloads to SSDs. This can happen either according to policy or automatically (with prior user approval) when the system is under intense workload.

This is definitely a change in tune, though Workman has always said Pillar’s systems were capable of supporting SSDs and probably would. He just thought network latency was too great, and he hasn’t retreated from that position. “It’s there,” he said. “There’s no way to get around that.”

But Workman says the biggest obstacle to SSD now is price. “When we show people how they work, they say, ‘Fine,'” he said. “Then we tell them how much it costs, and that’s when they keel over.”

Despite offering an 80% utilization guarantee earlier this year, Workman said only about 15% of Pillar’s customers are at 80%. But the company hasn’t been paying out lots of guarantee money, either. The details of the offer are vague to begin with: the terms are negotiated on a case-by-case basis as part of the original sale, “to remediate any issue, as well as financial pain.”

Analysts also said users might be wary of pushing utilization that high given that it requires capacity planning to be precise. “I can’t make [customers] write data to the system,” Workman said. “The guarantee was not that they will but that they can.” He added that the average Pillar customer’s utilization currently is 62%.
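
To put that utilization gap in concrete terms (a hypothetical illustration using the percentages Workman cited, not Pillar's own accounting):

```python
# What the gap between average and guaranteed utilization means
# on a hypothetical 100 TB Axiom deployment.
raw_tb = 100
average_util = 0.62      # Workman's stated average customer utilization
guaranteed_util = 0.80   # the utilization guarantee's target

stored_now = raw_tb * average_util
stored_at_guarantee = raw_tb * guaranteed_util
print(f"Stored at 62% utilization: {stored_now:.0f} TB")
print(f"Stored at 80% utilization: {stored_at_guarantee:.0f} TB")
print(f"Capacity customers 'can' use but don't: "
      f"{stored_at_guarantee - stored_now:.0f} TB")
```

In other words, on every 100 TB of capacity, an average customer leaves roughly 18 TB of the guaranteed headroom unused.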

December 9, 2008  7:31 AM

EMC and Dell extend partnership, add NAS

Dave Raffo

With sales of EMC storage systems sold through Dell on the decline, EMC CEO Joe Tucci declared last October “there’s a lot more we could and should be doing together” to strengthen the EMC-Dell relationship.

Today, Dell and EMC say they’ve extended their agreement to co-brand Clariion midrange SAN systems to 2013 and added EMC’s Celerra NX4 to the deal. Whether that’s doing “a lot more” or not will depend on how well Dell does with the NX4, which gives Dell another NAS product to sell along with its Windows-based NAS systems. Dell will begin selling the NX4 early next year.

Dell and EMC are calling the new deal a five-year extension, although it’s really a two-year extension because their previous deal was to run through 2011.

“The EMC-Dell relationship has been extremely successful,” says Peter Thayer, director of marketing for EMC’s multi-protocol group. “If you look at the storage industry, you won’t find any relationship as successful as this.”

He’s probably right, considering how many Clariion systems Dell has sold in the seven years since the co-marketing alliance began. But the relationship hasn’t been the same since Dell acquired EqualLogic for $1.4 billion in January, giving it its own midrange SAN system.

EqualLogic’s products are iSCSI only, so Dell still relies on Clariion for Fibre Channel SANs. But it’s probably no coincidence that Dell has sold fewer Clariion systems since picking up EqualLogic. Overall, EMC’s revenues from Dell declined 26% year over year last quarter, and Dell has gone from 15.8% of EMC revenue to 10.4% in a year. Dell accounted for 35% of Clariion revenue a few years back, but less than 30% last quarter.

Wachovia Capital Markets financial analyst Aaron Rakers calls the extension good news because it could end speculation about the two vendors’ commitment to each other. “There have clearly been increased questions surrounding this partnership throughout 2008, or rather since Dell has increasingly focused its attention on driving its EqualLogic business,” he wrote in a note to clients.

Financial analyst Kaushik Roy of Pacific Growth agrees that investors have been concerned about the direction the EMC-Dell relationship is going and the extension should help, but he isn’t sure about how much.

“We will have to wait to see if the relationship bears any fruit,” Roy said of the extension. “While we are not expecting EMC’s revenues from Dell to ramp up materially, investors would be happy if revenues do not decline precipitously.”

EMC is also involved with Dell’s plans to add data deduplication next year. While Dell hasn’t disclosed any products yet, it said last month that its dedupe platform will include Quantum software and be compatible with EMC disk libraries.

December 8, 2008  6:05 PM

IDC report shows steady storage sales in third quarter, some warning signs

Beth Pariseau

IDC’s quarterly tracker numbers for the third quarter of 2008 show disk storage and storage software sales holding steady at a time when many industries are feeling the effects of recession.

According to an IDC press release, “worldwide external disk storage systems factory revenues posted 8.8% year-over-year growth totaling $4.9 billion…total disk storage systems market grew to $6.6 billion in revenues, up 1.1% from the prior year’s third quarter, driven by softness in server systems sales.”

Meanwhile, the storage software market grew year-over-year for the 20th consecutive quarter with revenues of $3.1 billion, up 11.6% over last year’s third quarter.

On the disk side, companies with server businesses showed declines. IBM disk revenue declined 18.1%, Dell dropped 8.7% and HP was down 0.5% in overall disk system revenue (including servers). Fujitsu Siemens and NEC also declined. But for external (networked) storage, HP increased 3.3% and Dell was up 8.6% over last year. Storage system-only vendors EMC (16.2%) and NetApp (13.8%) gained significantly over last year, as did Sun (up 25%).

Year-over-year numbers looked similar on the software side, with outliers like HP increasing revenue 106.2% in storage management software from one year to the next. However, nearly all the storage software vendors stumbled from the previous quarter. HP took a 19.7% sequential hit in storage management. Storage infrastructure software slipped 7.8% from the previous quarter, with NetApp revenue declining 18.3% in that category.

According to IDC’s software press release, storage software revenues in the third quarter are traditionally slower before a typically strong fourth quarter. However, the overall economy has been going in the opposite direction. Many industries look at the third quarter as the calm before the storm, with dire predictions of declines coming for next year. And having covered these trackers before, I can say anecdotally that I don’t recall seeing such sharp declines in one column (the quarterly comparison) affecting almost all companies and almost all categories.

Other industry experts have told us the worst is yet to come. According to a report issued in October by Forrester Research, the third quarter for IT companies remained relatively stable because most vendors are still working through a sales pipeline. But poor sales are predicted for all IT vendors in the fourth quarter.

In the meantime, however, while budget growth may be constrained next year, storage managers have said they aren’t expecting their daily tasks to change drastically because of the recession.

December 3, 2008  5:28 PM

Users look increasingly to storage virtualization as data grows, analyst says

Beth Pariseau

I had an interesting conversation today with TheInfoPro’s managing director of storage research Rob Stevenson about the results of his firm’s latest survey of 250 Fortune 1000 and midsize enterprise storage users. Fortune 1000 users surveyed by TIP cited block virtualization as having the biggest impact on their environment this year. TIP expects 50% of the Fortune 1000 to have virtualization in use by the end of 2009. All of this is in response to ongoing and relentless data growth.

“Impact” is difficult to define, as TIP doesn’t offer definitions or parameters to the open-ended question for users, instead letting the responses shape the definition. (Midrange users cited server virtualization as having the biggest impact, and we all know that there are good and bad impacts involved).

What really stood out to me, though, when I discussed the results (as well as the semantics of the word impact) with Stevenson, was how block virtualization is being used and for what purposes. My general impression has been that block storage virtualization has not lived up to its initial round of hype as a “silver bullet” for single-pane-of-glass management of an overall storage environment. I wondered, had that changed when I wasn’t looking?

According to Stevenson, while adoption of block virtualization has risen steadily even since this past February (the number of respondents with the technology “in use” went from 21% in February’s Wave 10 to 23% in Wave 11), those users say they’re applying it to just 2% of their overall storage capacity.

Stevenson said the users in this case were petabyte-plus shops that in the past year or so have seen storage balloon from the single petabyte range to 2.5 petabytes or more, with no signs of stopping. These admins are scrambling to consolidate storage, move to new technologies that offer better utilization, and automate tasks. Where block virtualization comes in for most of them is performing data migration while moving to new technologies or systems — hence the relatively small proportion of data being managed by block virtualization devices from day to day.

Meanwhile, midrange enterprises are increasingly looking to maximize their resources on the server side.  Close behind that, though, come utilization improvement technologies for storage like thin provisioning and data deduplication.

It’s largely a matter of consolidating resources and improving utilization rates. “But the big Fortune 1000 shops have a bigger ‘legacy drag’ of data that they have to move,” Stevenson said. Hence the use of block virtualization tools.

Stevenson pointed to continued data growth and the added complexity that comes with it: storage managers are not only managing an average of 400 TB each, compared to 200 TB each a year ago, but the number of LUNs to manage within that volume is also increasing. That drives a need for automated management. While the most popular use case for virtualization seems to be data migration, Stevenson said users are finding day-to-day data movement is also increasing, bringing these devices to the forefront once again for management.

Does this mean we could be seeing block virtualization tools proliferate once the midrange market reaches the petabyte level? (After all, as the old chestnut goes, a megabyte used to be a lot of data). That’s where Stevenson’s crystal ball grows, well, cloudier.

Right now, one tentative theory is that users whose data centers already have large amounts of data under management tend to also already have specialized staff. But as the midrange market comes up against a need to scale staff as well as technology, they may turn to service providers before turning to storage virtualization devices.

“In large data centers, we see the pooling of resources among multiple data center groups to balance workload, like moving storage networking to a networking team or data classification and archiving to server and application groups,” Stevenson said. “When it comes to midsize admins having to start ‘not doing things’, it’s probably not going to come with an increase in internal staffing–instead they may look to offshoring those tasks to cloud service providers.”

But that’s not to say large enterprises won’t be looking up at the clouds, too. “We’re still working out the ‘competing futures’ if you will,” Stevenson said.
