Storage Soup

July 21, 2009  9:06 AM

Group Logic looks to ease Mac integration headaches for file archives

Beth Pariseau

The maker of software that connects Mac workstations with Windows servers is launching a new product that it claims will prevent “bad Mac behavior” with data archive stub files.

Group Logic’s main product is ExtremeZ-IP, software used to connect Mac clients with Windows servers. According to CEO Reid Lewis, a problem can arise when Mac clients are attached to Windows file servers where a file archiving program is leaving stubs.

Apple’s Mac OS X operating system includes a feature for end users called Quick Look, which shows users a preview of documents in the OS X file system. According to Lewis, the call that Quick Look makes to the primary file share can make archiving software think the files are being called back from the stub location. “When the Mac tries to render a preview, the archive sees that as a read and bumps the file back up to primary storage.” It’s easy to imagine a scenario from there where a quick flip through all the contents of a folder could clog up the primary file server, Lewis added.
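The behavior Lewis describes can be illustrated with a toy model. This is a hypothetical sketch, not Group Logic’s or any archive vendor’s actual code; the class and file names are illustrative assumptions. It shows why an archive that treats every read as a recall request will pull every stubbed file back to primary storage when a preview pass sweeps through a folder:

```python
class ArchivedFile:
    def __init__(self, name, size):
        self.name = name
        self.size = size
        self.recalled = False  # data lives on the archive tier until recalled


class NaiveArchive:
    """Recalls a file to primary storage on ANY read, previews included."""

    def __init__(self, files):
        self.files = files
        self.primary_bytes = 0  # bytes pulled back to the primary tier

    def read(self, f):
        if not f.recalled:
            f.recalled = True
            self.primary_bytes += f.size  # the "bump back up" Lewis describes
        return b"..."  # file contents (elided)


def quick_look_folder(archive):
    # A preview renders every file the user flips through; a naive archive
    # cannot tell this metadata-driven read from a real open.
    for f in archive.files:
        archive.read(f)


files = [ArchivedFile(f"doc{i}.pdf", 10_000_000) for i in range(100)]
archive = NaiveArchive(files)
quick_look_folder(archive)
print(archive.primary_bytes)  # every stub recalled: 1,000,000,000 bytes
```

One flip through a 100-file folder of 10 MB documents drags a full gigabyte back onto the primary file server — which is the clog-up scenario Lewis sketches, and the read that ArchiveConnect aims to intercept on the client side.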

Group Logic’s new ArchiveConnect software, when installed on the Mac client, can provide a translation that allows for Quick Look while preventing stub files in the archive from being restored during a preview operation. Group Logic is charging $1.60 per GB of archive data addressed by Mac clients, and contemplating a per-client licensing scheme as well.

It’s a niche issue, said Brian Babineau, senior analyst with the Enterprise Strategy Group (ESG), and it would be easier for users if this kind of integration came directly from an archiving vendor rather than a third party.

However, he added, non-Windows applications remain an area that has largely been ignored in the enterprise archiving world to date. “We are all aware of the benefits file archiving can bring – however, Mac environments that need archiving need more than just HSM because the type of data that they store is usually different than your traditional Windows or Linux environment,” Babineau said. “Solutions that can support the applications which generate more content types and archive the data right from the application are more compelling from my standpoint.”

July 20, 2009  7:20 PM

SunGard exec says storage will be ‘final domino’ in cloud interoperability

Beth Pariseau

SunGard’s technical officer for cloud computing, Don Norbeck, talked with Storage Soup this afternoon about the service provider’s participation in the Distributed Management Task Force (DMTF) Open Cloud Standards Incubator and the “physics problem” that currently stands between IT and true application portability.

Storage Soup: Tell me about the standards body you joined and why…
Norbeck: DMTF has a good track record with previous initiatives. They brought VMware, Microsoft and Citrix to the table and got them to agree to include metadata to allow a base level of interoperability between them for the Virtualization Management Initiative (VMAN). The Open Virtualization Format (OVF) is similarly impressive to us.

SS: Did you just join the group this week? Is it a new initiative?
Norbeck: It’s relatively new – the group formed this April, and SunGard was part of that initial discussion. The news today is that we petitioned to be included in the leadership board and were just approved.

SS: Who else is participating in this standards effort?
Norbeck: Other members of the initiative include Cisco, EMC, VMware, Microsoft, HP, AMD, Rackspace, Savvio and Sun. Right now it’s an incubator discussion to define basic components of the cloud and how they should be administered. We don’t often participate in standards efforts, but we see extreme value as a service provider in being involved in this conversation early on.

SS: What kinds of things will the incubator be defining? What does it have to do with SunGard’s disaster recovery business?
Norbeck: Our first hypothesis is that there are going to be hundreds of different clouds out there with different characteristics – some optimized for speed, some for cost and some for availability. The cloud will serve two purposes: avoiding downtime and the expansion of infrastructure for peak demand. How much capacity you can spin up and how quickly you can fail over to a cloud data center depends on an up-front information exchange between the end user and the provider to tell how much and what to spin up for true application portability.

SS: I always thought cloud standards had more to do with interoperability between service providers – I thought the way users send data to service providers is already relatively well understood.
Norbeck: Before you can float workloads between service provider infrastructures, you have to figure out first how users move the workload beyond their firewall. That’s the first step. If we can all agree on application portability standards within that framework, we may be able to set something up where you can follow the sun from an electrical power perspective.

SS: Will the standard address how to move data over distance? Seems like that’s a hurdle VMware is trying to overcome right now, for example.
Norbeck: We’re still limited by distance. Network data transmission capacity is still a scarce resource. I’m not sure what the solution is – maybe enabling content delivery networks for branch offices so there are small bits of critical data everywhere, or leveraging some WAN acceleration technology in between. Storage is going to be the final domino to fall for the computing-platform model that cloud aspires to be.

SS: How would you answer those who say it’s too early at this stage of the cloud to start imposing standards?
Norbeck: With any standards effort, the proof is in the actual utilization of the standard. This effort is more at the discussion stage, in which we’re looking to agree to language that will enable our customers to utilize us better. It’s too early to impose World Wide Web (WWW) type standards on cloud computing, but it’s not too early for the conversation.

July 20, 2009  2:46 PM

EMC takes controlling interest in Data Domain

Dave Raffo

EMC today revealed it has acquired more than 82% of Data Domain shares, which means any chance of another company swooping in with a better offer is gone. Under terms of the July 8 agreement between the vendors, EMC is paying $33.50 per share for Data Domain stock for a total of $2.1 billion.

EMC reiterated in a news release today that it expects to close the deal by the end of July. It also said Data Domain will be the centerpiece of a new product division for disk backup products, headed by Data Domain CEO Frank Slootman. Slootman will report to EMC CEO Joe Tucci and Frank Hauck, EVP of the storage business. EMC forecasts the division will have $1 billion in revenue in 2010.

Tucci first laid out plans to make Data Domain the key piece of its new product division June 1 when EMC made its first offer to buy Data Domain. Data Domain rejected that offer for a NetApp bid, but accepted EMC’s next offer.

EMC didn’t say which products will be included in the new division, but it’s likely to include Avamar host-based data deduplication software and whatever backup disk libraries EMC keeps after the deal closes. The biggest question centers around the EMC Disk Library (EDL) platform: will EMC continue to offer dedupe from Quantum on the EDL, replace the Quantum software with Data Domain software, or replace the entire EDL line with Data Domain devices?

In his June 1 comments, Tucci talked about making a family out of the Disk Library platform, so you can expect that brand to survive.

July 17, 2009  12:43 PM

HP acquires scale-out NAS vendor Ibrix

Dave Raffo

Hewlett-Packard today acquired its clustered file system partner Ibrix, which is best known for selling its Fusion software through partners to studios that make animated movies.

HP did not disclose financial terms of the deal. Its press release said the transaction will likely close within 30 days, and the Ibrix business will become part of the StorageWorks division in HP’s Technology Solutions Group.

HP has resold Ibrix software with its SAN systems as well as ProLiant and BladeSystem servers. DreamWorks has used a combination of Fusion and HP hardware for rendering of its animated movies. Ibrix also counts Pixar as a customer, and has sales partnerships with EMC, Dell and IBM.

According to HP’s press release, Ibrix’s software “solidifies the company’s leadership in the emerging market of scale-out and high-performance computing storage, cloud storage, and fixed content archiving.”

We’ll have an update on SearchStorage following HP’s webinar today.

July 17, 2009  8:20 AM

07-16-2009 Storage Headlines

Beth Pariseau

It’s that time of the week again…

Stories referenced:

(0:28) IBM adds thin provisioning to DS8000, asynchronous mirroring to XIV Storage System

(2:26) DataCore Software debuts Advanced Site Recovery for physical and virtual disaster recovery

(4:06) Barracuda Networks adds data deduplication with Yosemite integration

(5:35) Hewlett-Packard launches first external 6 Gbps Serial-Attached SCSI enclosures

(7:29) 3PAR blames poor sales on tight budgets

July 16, 2009  8:49 PM

Copan CEO spins out after rocky year

Dave Raffo

Mark Ward has stepped down as CEO of Copan Systems after three-and-a-half years on the job, but the executive team says it’s “business as usual” for the MAID pioneer and archiving vendor.

A Copan executive responded to calls about Ward’s status by delivering a statement from the board confirming Ward has left while adding the CEO’s departure does not signal a change in direction.

“Mark Ward is no longer the CEO. The executive team is doing all the day-to-day activities, and it’s business as usual,” said the spokesman, who asked not to be quoted by name. “We’re committed to achieving our 2009 goals.”

Copan’s 2009 goals aren’t as lofty as they were a year ago when the company was expanding and Ward talked about taking the company public. Copan struggled when the economy crashed, forcing it to slash staff last November while waiting for funding. Copan did land $18.5 million in funding in February, but Ward said at the time he did not expect to increase staff.

A source outside of the company with knowledge of the situation said Ward departed because of a disagreement over strategy with the board. The executive who confirmed Ward’s departure said the board would not say if it was searching for a replacement. With several large storage vendors shopping, Copan would have to be considered a potential acquisition target.

Copan’s management team still includes two founders, CTO Chris Santilli and president of the federal division Will Layton.

Ward, a former sales executive at EMC and StorageTek, became Copan CEO in January of 2006, about a year and a half after the Longmont, Colo.-based vendor began shipping its first MAID disk spin-down systems.

July 15, 2009  8:18 PM

Widespread IT staffing cuts this year, according to new survey

Beth Pariseau

A new survey of some 200 IT executives across a dozen vertical markets by IT research firm Computer Economics found that 46% of respondents plan to reduce headcount this year, while 27% plan to increase headcount.

The report says healthcare and energy are faring better than other industries, with 60% of healthcare respondents and roughly half of energy and utility organizations reporting staffing increases. Retail, manufacturing and insurance will see the biggest declines, according to the report.

While capital purchases seem to be the most sensitive area for organizations with slashed IT budgets, operational expenditures are a murkier area. The question of staffing and how different organizations are addressing storage efficiency – through technology or operational improvements – seems to come down to organizational philosophy. I’ve talked to users during this economic downturn who say that their IT spending is going up because IT projects are being implemented to automate processes or cut down on spending elsewhere.

The Computer Economics report identifies finance as one industry where this phenomenon is taking place. “Certain sectors, however, are showing positive growth in their 2009/2010 IT operational budgets. These sectors include banking and finance at 4.9%, healthcare providers at 4.7%, professional and technical service firms at 4.0%, and utilities and energy at 1.3%.” These operational budget increases seem to run counter to some vendor marketing in the down economy encouraging users to trade some capital costs for a reduction in operational costs through automation.

Some vendors, like EMC Corp., have also been predicting stabilization in the economy and IT organizations by the end of this year, but the survey results show “the worst may not be over,” according to Computer Economics’ press release. “Many IT executives expect further budget reductions in the future. About 49% reported that they expect to spend less than the amount allocated in their 2009/2010 IT spending plans compared to only 9% who anticipate being able to increase their IT budgets.”

Though it’s an interesting set of data points within the ongoing discussions of the economy and storage efficiency, I would also point out that with a sample size of 200 respondents, it’s not necessarily a definitive report. I’m hoping more research like this is being done that can be compared and contrasted with these results.

July 15, 2009  12:46 PM

3PAR blames poor sales on tight budgets

Dave Raffo

If 3PAR’s results are an indication, storage spending failed to show any signs of a rebound last quarter.

3PAR said Tuesday afternoon that its revenue for last quarter was below its previous forecast, and down from the previous quarter. The storage systems vendor disclosed that it expects to report revenue in the range of $44.2 million to $44.5 million, compared to its previous guidance of $48 million to $50 million. Its revised forecast is around an 8% to 9% drop from the previous quarter and a 3% to 4% increase from last year. 3PAR also expects to report a net loss for the quarter.

3PAR reported that “sluggishness” in spending grew worse later in the quarter, which suggests budgets aren’t loosening up yet.

“The weakness was more widespread than what we saw last quarter when it was mostly Internet companies. It was more broad-based this quarter,” 3PAR CEO Dave Scott said in a conference call with analysts. “There are clear signs of budget restraints that remain in place.”

Along with tight budgets, Scott blamed the poor results on delays of customer installations of large systems previously ordered (3PAR recognizes revenue when systems are installed instead of upon taking orders). He said there was some “pricing pressure” (discounts) from competitors but said 3PAR was not losing business to rivals. He said 3PAR ran into EMC’s new V-Max system “far less than we expected to” and told an analyst on the call that talk of the New York Stock Exchange replacing 3PAR with Compellent is not true.

“The New York Stock Exchange remains a good customer, and I am unaware of the replacement of any business at New York Stock Exchange by Compellent,” he said.

Scott added, “We are clearly disappointed by our execution this quarter, and we have every intention of improving our performance in the future.”

Not much in the immediate future, though. 3PAR lowered its guidance for the current quarter as well, dropping its revenue estimate to $43 million to $47 million – below financial analysts’ $50.8 million consensus estimate.

We’ll get a better idea of whether 3PAR’s results last quarter were typical of the industry over the next few months when larger vendors report their earnings.

July 13, 2009  8:20 PM

EMC makes virtual provisioning free

Beth Pariseau

Call it a Lutheran Reformation for the 21st century. This time, instead of 95 theses nailed to a church door to challenge the Catholic Church, EMC customer and blogger Martin Glassborow posted one thesis on his blog to challenge EMC on the cost of virtual provisioning, also known as thin provisioning.

Even as storage vendors have been touting the cost savings of thin provisioning, it has cost customers extra to deploy the feature. Wrote Glassborow:

HDS and EMC are both extremely guilty in this regard, both Virtual Provisioning and Dynamic Provisioning cost me extra as an end-user to license. But this is the technology upon which all future block-based storage arrays will be built. If you guys want to improve the TCO and show that you are serious about reducing the complexity to manage your arrays, you will license for free. You will encourage the end-user to break free from the shackles of complexity and you will improve the image of Tier-1 storage in the enterprise.

(HDS might have some quibble with this – another blogger, storage consultant Chris M. Evans, points out that HDS’s Switch It On promotion offers free UVM, Dynamic Provisioning (first 10 TB only) and Tiered Storage Manager on existing USP-V deployments. Evans also notes HDS’s promo is for existing as well as new deployments; EMC told me today existing Symm deployments will also be eligible, but there appears to be some confusion about that.)

Glassborow’s wish was granted. In response, EMC blogger Barry Burke, also chief strategy officer for Symmetrix, wrote:

In his post, Martin insists that the current pricing strategies for thin provisioning from both HDS and EMC are a disincentive to the adoption of the otherwise compelling feature that makes enterprise arrays easier and more cost-effective to manage and deploy.

These very conversations have been going on within the walls of EMC, and it has been decided that Virtual Provisioning will in fact be included at no charge and with no capacity limitations for all Symmetrix V-Max and DMX 4 orders beginning this quarter.  As a result, all Symmetrix V-Max and DMX 4 customers will be able to leverage the speed and ease of storage provisioning, improved capacity utilization and the inherent benefits of wide striping afforded by Virtual Provisioning, all at no extra charge.

We’ll see if others follow suit.

We shall. And if it happens soon, call me cynical, but I will wonder about the timing of this decision on EMC’s part. As Burke notes, this isn’t the first time Glassborow has come knocking with his pricing protest (though I think he deserves credit for his good points and persistence). Should we expect to hear from another vendor on free thin provisioning?

And what about Clariion? EMC added thin provisioning to the CX4 last year, but the free thin provisioning is only available for Symmetrix so far.
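For readers new to the feature at the center of this dust-up, here is a toy model of why thin provisioning is pitched as a cost saver. It is an illustrative sketch under simplified assumptions, not EMC’s Virtual Provisioning or HDS’s Dynamic Provisioning implementation:

```python
class ThickVolume:
    """Traditional provisioning: physical capacity is reserved up front
    for the full logical size, whether or not it is ever written."""

    def __init__(self, logical_gb):
        self.physical_gb = logical_gb


class ThinVolume:
    """Thin provisioning: physical capacity is consumed only as data
    is actually written, up to the advertised logical size."""

    def __init__(self, logical_gb):
        self.logical_gb = logical_gb
        self.physical_gb = 0

    def write(self, gb):
        # Allocate backing storage on demand, never past the logical size.
        self.physical_gb = min(self.logical_gb, self.physical_gb + gb)


# A 500 GB volume that only ever holds 120 GB of data:
thick = ThickVolume(500)
thin = ThinVolume(500)
thin.write(120)
print(thick.physical_gb, thin.physical_gb)  # 500 120
```

The gap between the two numbers is the utilization improvement Burke cites — which is also why per-capacity licensing for the feature struck Glassborow as working against its own pitch.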

July 10, 2009  3:55 PM

Even without Data Domain, NetApp has backup dedupe options

Dave Raffo

NetApp came up short to EMC in its bid for Data Domain, but people in the storage industry expect there will still be an acquisition in NetApp’s future. In the swirl of speculation, NetApp is considered both a prime target as well as a buyer on the prowl.

As the Data Domain situation shows, NetApp isn’t sitting around waiting to be bought. And there are more companies that NetApp can buy than larger companies that would take on an acquisition the size of NetApp. So it’s likely that NetApp will make its next move as a buyer, and it has several options if it wants to replace Data Domain with another data protection/deduplication supplier. There are even a few companies who can give NetApp what Data Domain can’t – namely, backup software or global dedupe.

Here’s a list of candidates we think NetApp might pursue, in order of likelihood, with the advantages and disadvantages of each:


CommVault
Pros: Gives NetApp an instant storage software business in addition to deduplication. Its technology is well respected, with little if any overlap with current NetApp products.

Cons: CommVault’s market share is tiny compared to industry leaders such as Symantec and EMC, especially in large enterprise accounts. More than 10% of CommVault’s revenue comes from OEM deals with Dell and Hitachi Data Systems, who may give CommVault the boot if it goes to their storage rival NetApp.

FalconStor
Pros: Would bring NetApp solid VTL and continuous data protection (CDP) software as well as second-generation deduplication, including global dedupe.

Cons: FalconStor’s dedupe reputation took a hit when its VTL partners EMC and IBM went in other directions for dedupe. FalconStor gets at least 20% of its revenue from EMC VTLs, and a big chunk of its business comes from iSCSI — which probably isn’t of much interest to NetApp because it offers iSCSI on its current storage platform.

Quantum
Pros: Solid dedupe IP and patents acquired from Rocksoft (through ADIC), and a dedupe-based VTL platform that Quantum has improved after a rocky start. EMC has been pushing Quantum on the market as its OEM dedupe partner for nearly a year.

Cons: Comes with baggage – a lot of debt from its own acquisitions, and a lot of tape. For all its talk about dedupe, Quantum still gets most of its revenue from tape. Quantum can also be seen as an EMC castoff in the wake of its Data Domain buy.

Sepaton
Pros: Global dedupe and a VTL platform that can help NetApp fulfill its goal of becoming more of an enterprise play.

Cons: NetApp might want a more established company. Also, Hewlett-Packard OEMs Sepaton and could outbid NetApp to prevent Sepaton from getting away.

ExaGrid Systems
Pros: The startup has built a solid business in the midrange, primarily as a lower-priced option to Data Domain.

Cons: Can be seen as Data Domain-light, and would likely require much investment to become an enterprise play.
