Storage Soup

November 3, 2009  4:28 PM

EMC, Cisco make joint venture official

Dave Raffo

EMC and Cisco today officially confirmed their long-awaited private cloud venture, called Acadia.

In a joint press release, the vendors referred to Acadia as “a joint venture focused on accelerating customer build-outs of private cloud infrastructures through an end-to-end enablement of service providers and large enterprise customers.” Cisco and EMC are Acadia’s lead investors, with VMware and Intel involved as limited partners. Acadia will begin customer operations in the first quarter of next year, the vendors say.

Acadia is an offshoot of an alliance between EMC, Cisco and VMware called the Virtual Computing Environment (VCE), also officially disclosed today after months of speculation. The vendors also launched product bundles called Vblock Infrastructure Packages.

Vblock packages include Cisco’s Unified Computing System (UCS), Nexus 1000v and MDS Fibre Channel switches, EMC Symmetrix V-Max or Clariion storage systems, and VMware vSphere software. Vblock 0 is an entry-level configuration supporting 300 to 800 virtual machines, Vblock 1 is a midsized configuration for 800 to 3,000 virtual machines, and Vblock 2 is a high-end configuration that scales from 3,000 to 6,000 virtual machines.

The vendors describe Vblock Infrastructure Packages as a “better approach to streamlining and optimizing IT strategies around private clouds.”

The VCE alliance also includes joint sales, support and professional services teams. Professional services include Cloud-based Business Advisory Service, Private Cloud Strategic Impact Advisory Service, Private Cloud Architecture Impact Advisory Service, Virtual Desktop Advisory Service, Cloud Computing Strategy Service, and Vblock Design and Implementation Service.

During a webcast to discuss Acadia and VCE today, CEOs Joe Tucci of EMC and John Chambers of Cisco said the joint venture would consist of about 130 employees, but they have yet to hire its CEO.

November 2, 2009  8:43 PM

Reports resurface of EMC/Cisco joint venture

Beth Pariseau

Rumors began swirling around the time of VMworld in September that EMC and Cisco would be creating a joint venture to sell infrastructure to support VMware. Last week, two stories appeared on the news wires indicating an announcement may be imminent.

A story from the Dow Jones Newswire that appeared on the Wall Street Journal’s website said the partners are set to launch the venture this week with a product dubbed V-Block. According to this report, the new joint venture will have its own CEO.

Meanwhile, according to a Reuters report that also appeared Friday:

One part of the partnership calls for the two companies to form a joint venture that will sell vBlock as a hosted service. Customers can pay for that service based on the amount of computing power and storage that they need, accessing it via the Internet.

That joint venture will assemble computer systems for customers, integrating all necessary hardware and software to make the systems work.

According to reports and previous rumors, the joint venture would involve Cisco’s Unified Computing System and EMC storage. VMware, Cisco and EMC have had a longstanding alliance, dubbed VCE. What would make this different is that it would be a separate company with its own sales force, meaning the companies wouldn’t have to pay multiple commissions to multiple sales people for the same sale. It’s unclear what this joint venture would mean for customers that the existing partnership doesn’t offer today outside of one throat to choke for support.

October 30, 2009  9:11 AM

10-29-2009 Storage Headlines

Beth Pariseau

(0:23) Signs point to licensing problems as Apple discontinues ZFS development

(2:18) Quantum: ‘Our strategy is working’
SAN sales boosted by need for storage efficiency

(4:39) EMC lays out data archiving and eDiscovery plans

(6:19) Emulex leads its convergence strategy with Ethernet

(7:16) FalconStor expands cloud hype with HyperFS

(7:49) 3PAR strengthens software for InServ Storage

October 29, 2009  4:03 PM

FalconStor expands cloud hype with HyperFS

Dave Raffo

FalconStor Software is preparing its cloud strategy around a new file system it developed in collaboration with the Chinese Academy of Sciences.

Executives on the vendor’s earnings call Wednesday evening described HyperFS as a storage virtualization/SAN-based clustered file system that will be used to scale the storage layer of FalconStor’s cloud architecture. FalconStor CEO ReiJane Huai said HyperFS will be ready to launch next year.

FalconStor executives said they have signed an OEM deal with a large rich media content provider to sell a system built on HyperFS.

“That’s the opening shot,” VP of business development Bernie Wu added. “We are definitely going to be expanding our business with that file system. It’s massively scalable, good for cloud computing.”

October 29, 2009  3:27 PM

Emerging vendors take a SaaS approach to storage reporting

Beth Pariseau

Two startups are taking a software-as-a-service (SaaS) approach to reporting on storage assets.

Storage Fusion Ltd, a UK company spun off by a private investment firm last year, claims to be reporting on storage environments of up to 60 PB, and signed a licensing agreement with GlassHouse Technologies last fall. According to managing director Graham Wood, the 15-person company is currently working with about 50 active customers, all with more than 50 TB, not counting “one-off” analytics done with some partners. The company was not able to provide an end user for an interview.

Storage Fusion’s product, Storage Resource Analysis (SRA), consists of a series of scripts which customers download and execute to collect data on EMC, Hitachi Data Systems, Hewlett-Packard, IBM, or NetApp arrays in their environment. The data generated by the scripts are then sent back to Storage Fusion’s data center, where Storage Fusion performs the analysis and provides results to the customer via the Web. According to Wood, if users get their data uploaded before 3 p.m., the analysis will be available the same day.

The company’s Web portal provides analysis according to geographical parameters, or the total resources, allocation and utilization at each data center location; a consumer view, which shows the hosts connected to the storage; a provider view, which normalizes the view of resources across heterogeneous storage providers into one report on the total storage environment; an environmental view, which provides energy consumption statistics according to published power and cooling specs from vendors; and an optional add-on business view, which translates storage capacity data into business-relevant statistics like dollars and cents. The reporting tool also looks for “exceptions” and provides error warnings and information on “orphan” reclaimable storage. The tool can decompose virtualization layers and supports thin-provisioned arrays under its tiering tab.
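At bottom, these portal views are different rollups of the same normalized capacity records. As a rough illustration only (the record fields, locations and numbers below are invented for this sketch, not Storage Fusion's actual data model), a provider-style rollup by data center location might look like:

```python
from collections import defaultdict

# Hypothetical normalized capacity records, one per array,
# as a vendor-agnostic collector script might produce them.
records = [
    {"location": "London", "vendor": "EMC",    "raw_tb": 120, "allocated_tb": 90,  "used_tb": 54},
    {"location": "London", "vendor": "NetApp", "raw_tb": 80,  "allocated_tb": 60,  "used_tb": 21},
    {"location": "Leeds",  "vendor": "HDS",    "raw_tb": 200, "allocated_tb": 150, "used_tb": 120},
]

def provider_view(recs):
    """Roll capacity, allocation and utilization up by data center,
    across heterogeneous vendors."""
    totals = defaultdict(lambda: {"raw_tb": 0, "allocated_tb": 0, "used_tb": 0})
    for r in recs:
        site = totals[r["location"]]
        for key in ("raw_tb", "allocated_tb", "used_tb"):
            site[key] += r[key]
    # Utilization here means used capacity as a share of allocated capacity.
    return {
        loc: {**t, "utilization_pct": round(100 * t["used_tb"] / t["allocated_tb"], 1)}
        for loc, t in totals.items()
    }
```

For example, `provider_view(records)["London"]` sums the EMC and NetApp arrays into a single per-site report, which is the kind of normalization across storage providers the portal's provider view describes.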

Some customers, particularly large ones, might be wary of a third party peeking into their environment or sending data about their environment out of their data center. But Storage Fusion sales and operations director Peter White said “from a security perspective, our scripts are completely open — we hide nothing, and prior to running them, the user can look at them and see they’re just service log commands, the kind of command line utilities they execute all day.” The Web portal is also accessed via an SSL connection.

On the other side of the pond, and the other side of the customer-size spectrum, is Waltham, Mass.-based Aprigo, whose Ninja product has gotten several hundred free-version downloads since August. This first free version of the product collects common file attributes (name, type, size, and date modified) regardless of hardware vendor. Aprigo compiles those attributes in a single view, and presents them along with a cost calculator to show the dollar value of storing information on a yearly basis.
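The kind of scan Ninja performs is straightforward to picture. A minimal sketch, assuming nothing about Aprigo's actual implementation (the per-GB rate below is an arbitrary illustration, not Aprigo's number), might gather those same four attributes and feed a simple yearly cost estimate:

```python
import os
import time

def scan_files(root):
    """Collect basic file attributes (name, type, size, modified time),
    independent of which vendor's hardware the files live on."""
    inventory = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files that vanish mid-scan
            _, ext = os.path.splitext(name)
            inventory.append({
                "name": name,
                "type": ext.lower() or "(none)",
                "size": st.st_size,
                "modified": time.ctime(st.st_mtime),
            })
    return inventory

def yearly_cost(inventory, dollars_per_gb_year=3.0):
    """Translate total scanned capacity into a yearly dollar figure."""
    total_gb = sum(f["size"] for f in inventory) / (1024 ** 3)
    return total_gb * dollars_per_gb_year
```

Sorting such an inventory by size or modified date is also the raw material for the archiving and tiering justifications Zimmerman describes below.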

“It can be used for archiving or tiered storage business justification,” Aprigo CEO Gill Zimmerman said. Customers can also store up to 500 GB or 5 previous historical scans for trending reports. Aprigo is also working to put together a “community intelligence” report where users can compare themselves anonymously against other Aprigo customers.

Aprigo has a midmarket focus and doesn’t use the term SRM because it’s reporting on file data rather than physical devices, according to Zimmerman. It’s working on a collector for other SaaS-based file systems like Google Docs. The company plans a Nov. 15 release that will also report on access control lists for file systems.

Some analysts have said the SRM space, which has seen its share of ups and downs, won’t mature until services and help for customers interpreting analytics results are more widely available. Still, the SaaS or services-delivery model for SRM tools is not a new idea. Aptare has sold its backup and storage reporting tools to service providers for years; similarly, IBM’s Storage Enterprise Resource Planner (SERP) storage resource management tools are deployed through IBM Global Services. CommVault began offering a SaaS-based backup reporting service in 2008; Dell has pledged a SaaS approach to services; and Continuity Software also offers a SaaS option for its disaster recovery change management tool.

October 28, 2009  5:13 PM

Thales launches key manager, gets support from Brocade and HDS

Dave Raffo

The encryption technology from defunct Neoscale is alive and well, and – after winding its way around Europe – shipping in a key management appliance that Hitachi Data Systems has started reselling.

A few years back, Neoscale, Decru (now part of NetApp) and a few others were part of what was seen as an emerging market for tape encryption devices. The market never took off, and U.K. enterprise security vendor nCipher bought Neoscale’s assets for $1.9 million in late 2007. Thales Group, a French-based company, then acquired nCipher for $100 million in July 2008.

Thales still sells CryptoStor tape encryption devices that Neoscale developed, and brought more Neoscale IP to market in Thales Encryption Manager for Storage (TEMS). Thales said this week that Brocade has qualified TEMS for its Encryption Switch and the FS8-18 Encryption Blade that fits into Brocade’s DCX director switch.

Thales also secured a reseller deal with HDS to sell TEMS with Brocade switches. Thales is looking for similar deals with other storage vendors as it tries to become a player in the storage security game. TEMS handles encryption keys for devices from multiple vendors.

“We are attractive to Hitachi because we are security only, and not a storage and security company,” Thales director of product marketing Kevin Bocek said.

In other words, Thales is different than its rival RSA Security, which is owned by EMC.

“We’re not affiliated with a storage vendor,” Bocek said. “We’re neutral.”

Thales is part of the Key Management Interoperability Protocol (KMIP) alliance that is developing an industry standard, which Bocek expects to be ratified next year. Once that happens, he says, it will help vendors bring encryption products to market faster.

“Within a year or two, many more encryption devices will be brought into storage systems,” he said, sounding an optimistic tone for a market that hasn’t yet amounted to much.

October 28, 2009  2:12 PM

Quantum: ‘Our strategy is working’

Dave Raffo

Quantum execs are using their results from last quarter as evidence that the vendor can survive – and perhaps thrive – without EMC as a data deduplication partner.

Quantum reported a strong increase in disk and data deduplication revenue last quarter, the first since EMC’s $2.1 billion acquisition of Quantum’s dedupe rival Data Domain.

Quantum’s $28.2 million in disk and software revenue represented a 47% increase from the previous quarter and 36% from last year. That revenue includes Quantum’s StorNext clustered file system, but executives on the company’s earnings call Tuesday night said VTLs with dedupe made up a majority of the $28.2 million. They pointed to a “significant” increase in Quantum DXi disk sales, a “modest” increase in StorNext sales and slight decline in license revenue from EMC.

Quantum also cited three customer deals of more than $1 million each for its DXi7500 in the quarter. The increased disk sales helped Quantum beat Wall Street expectations with $175 million in revenue – up 9% from the previous quarter while down 19% from last year. Quantum’s $11 million profit was its second straight quarter in the black after a string of losses.

The numbers show Quantum still has a long way to go – EMC reported Data Domain alone brought in $105 million last quarter. But CEO Rick Belluzzo said Quantum gained momentum, and he hopes to take advantage of “disruption” caused by EMC-Data Domain by scoring new OEM deals and adding channel partners to transform the company.

“This company has historically been mostly about tape,” Belluzzo told SearchDataBackup after the earnings report. “Now I suspect people are watching the deduplication segment more closely.”

Belluzzo said the split with EMC prompted Quantum to chase large enterprise deals rather than rely on EMC to land those deals and pay Quantum a licensing fee.

“The EMC change did clarify things,” he said. “We made an aggressive shift in our go to market focus as the EMC relationship went through a dramatic change. We always struggled with large accounts on whether we would compete with our partner there. But after EMC bought Data Domain, we said we’re going to play to win. ”

On the earnings call, Quantum executives said they see opportunities to partner with other storage vendors looking to add deduplication. Afterward, Belluzzo said he’s still talking to new possible OEM partners.

“It’s too early to suggest the timing of a deal, but we expect to have another OEM partner,” he said. “We have a number of opportunities, although they could take different forms from the EMC deal.”

Belluzzo sees other possible openings for Quantum in the wake of the EMC-Data Domain deal. Quantum is already a Symantec OpenStorage (OST) partner, and Belluzzo says the vendor is looking to add resellers who feel left out in the wake of the EMC-Data Domain deal.

“There’s confusion in the reseller channel,” he said. “Some may have a relationship with EMC and not Data Domain, or the other way around, and there’s some insecurity around that. We get that feedback from a lot of people.”

Quantum is hoping its new DXi6500 midrange dedupe system can bolster sales through the channel.

“This was a critical quarter for Quantum given the economic downturn and changes in the deduplication landscape,” Belluzzo said. “Despite these factors, we were able to deliver strong results.”

Quantum’s stock price increased a whopping 21.5% today, up 35 cents to $1.98.

October 27, 2009  7:15 PM

Emulex leads its convergence strategy with Ethernet

Dave Raffo

Emulex used its analyst day today to officially roll out its OneConnect Universal Converged Network Adapters (UCNAs) and underscore its strategy of taking a 10-Gigabit Ethernet path to converged Fibre Channel and Ethernet networks.

Emulex said the OneConnect adapters it first talked about in February are available for partners to ship, and IBM has agreed to OEM its 10-GigE NIC and 16 Gbps Fibre Channel HBAs with the Power Systems server platform.

Emulex is taking a different path than its main rival QLogic, which already has several partners selling its single-chip 8100 Series CNAs. Emulex is releasing its 10-GigE adapter first with TCP/IP and TCP Chimney support, and hopes to deploy a pay-as-you-go strategy where customers will later upgrade with iSCSI and FCoE connectivity. Emulex gets its 10-GigE silicon through an OEM deal with ServerEngines.

Emulex is counting on 10-GigE and Intel’s Nehalem servers driving convergence, with storage connectivity to follow.

“We’ve taken an Ethernet approach rather than a storage-centric approach,” Emulex VP of corporate marketing Shaun Walsh told StorageSoup. He says the Emulex approach lets customers pay for only 10-GigE first, instead of having to pay for FC connectivity they might not use yet.

“Every NIC card has the potential to be an FCoE card,” Emulex CEO Jim McCluney said during his analyst day presentation.

Emulex customer Lars Linden, SVP of data center services for Royal Bank of Scotland, spoke at the analyst day to express his eagerness for a converged network. Linden said convergence will eventually help him simplify management, reduce cables and increase utilization, adding that it can’t arrive soon enough for him. He says RBS spends about $500,000 a year on cabling.

“I have people on staff who do nothing but cabling all day long,” he said. “They’re very clever people and I would like to have them do higher value activity, but this is table stakes for running a data center. As soon as there is a commercially available set of capabilities and technologies supporting convergence, there will be a rapid adoption.”

Linden compared the eventual move to consolidated networks to virtualization as a “game changer” technology.

Nobody expects 16-gig FC any time soon, despite Emulex’s touted design win. Emulex executives acknowledged the market moved to 8-gig FC much more slowly than it went from 2-gig to 4-gig FC, so there’s no hurry to push out 16-gig products.

“This is a future announcement,” McCluney said. “We’re not going to see any 16-gig revenue for quite some time.”

The suspicion here is that Emulex and IBM announced the 16-gig FC design win to end speculation that QLogic might replace Emulex FC HBAs on the Power Series after securing a CNA OEM win with IBM last week. Emulex is IBM’s exclusive partner for 4-gig and 8-gig HBAs on the Power Series.

“We believe investors may have questions regarding QLogic’s recent announcement that its FCoE CNAs had been qualified within IBM’s System p (Unix) server platforms given that Emulex has long been the sole-source provider of FC HBAs into these IBM server platforms,” Stifel Nicolaus Equity Research analyst Aaron Rakers wrote in a note to clients after the Emulex analyst day.

October 27, 2009  1:50 PM

Industry bloggers debate dedupe to tape

Beth Pariseau

It just wouldn’t be the storage industry if there weren’t technical debates popping up on a daily basis.

One that caught my eye today is an ongoing conversation between some storage bloggers about data deduplication to tape, and whether or not it’s a crazy idea. Or, more accurately, whether it’s “good crazy” or “bad crazy.”

Backup expert W. Curtis Preston got things started with a blog post written after he visited CommVault’s headquarters in Oceanport, N.J., and discussed CommVault’s data deduplication to tape feature added in Simpana 8. “Dedupe to tape is definitely crazy. But is it crazy good or crazy bad?” Preston wrote.

Everyone (including the CommVault folks) agrees that no one would want to do any significant portion of their restores from deduped tape.  But I also agree that if I typically do all my restores from within the last 30 days, and someone asks me for a 31 day-old file, it’s generally going to be the type of restore where the fact that it might take several minutes to complete is not going to be a huge deal.  (In the case that you did need to do a large restore from a deduped tape set, you could actually bring it back in to disk in its entirety before you initiate the restore.)

Now here’s the business case. Anyone who has done consulting in this business for a while has met the customer where everyone knows that 99% of the restores come from the last 30-60 days — and yet they keep their backups for 1-7 years.  What a waste of resources.  CommVault is saying, “Hey.  If you’re going to do that, at least dedupe the tapes.”  They showed me two business cases from two customers that doing this was saving them over $500K per year in their Iron Mountain bill.
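The business case Preston quotes is simple arithmetic: fewer tapes shipped off-site means a smaller vaulting bill. A toy model makes the shape of the savings clear (every number below is invented for illustration; none are CommVault, Iron Mountain, or customer figures):

```python
import math

def offsite_tape_cost(backup_tb, dedupe_ratio, tape_capacity_tb=0.8,
                      dollars_per_tape_year=50.0):
    """Estimate a yearly off-site vaulting cost from the tape count.

    tape_capacity_tb defaults to roughly LTO-4 native capacity;
    the per-tape vaulting rate is a made-up placeholder.
    """
    tapes = math.ceil((backup_tb / dedupe_ratio) / tape_capacity_tb)
    return tapes * dollars_per_tape_year

# With 400 TB of long-term backups, a 10:1 dedupe ratio cuts the tape
# count from 500 to 50, and the vaulting bill shrinks proportionally.
before = offsite_tape_cost(400, dedupe_ratio=1)
after = offsite_tape_cost(400, dedupe_ratio=10)
```

The counterargument in the posts that follow is that this same arithmetic concentrates risk: each retained tape now underpins many backup sets, so the cost of losing one tape rises as the tape count falls.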

Curtis made some declarative statements in that blog post, and when that happens you can expect someone in the storage blogosphere to write a post in opposition. EMC NetWorker data backup consultant Preston de Guise did the honors this time, with a response titled “Dedupe to tape is ‘crazy bad’ if the architecture is crazy.”

Yes, it’s undoubtedly the case that the CommVault approach will reduce the amount of data stored on tape, which will result in some cost savings. However, penny pinching in backup environments has a tendency to result in recovery impacts – often significant recovery impacts. For example, NetBackup gives “media savings” by not enforcing dependencies. Yes, this can result in saving money here and there on media, but can result in being unable to do complete filesystem recoveries approaching the end of a total retention period, which is plain dumb.

The CommVault approach, while saving some money on tape, will significantly expand recovery times (or require large cache areas and still take a lot of recovery time). Saving money is good. Wasting a little time during longer-term recoveries is likely to be perceived as being OK – until there’s a pressing need. Wasting a lot of time during longer-term recoveries is rarely going to be perceived as being OK.

An IT admin/blogger writing at Standalone Sysadmin picked up on de Guise’s post and had this to say:

My problem with this is tape failure. If one of the 50 individual backup tapes fails, it’s no problem. Sure, you lose that particular arrangement of the data, but it’s not that big of an issue. Unfortunate, sure, but not tragic. If you lose the 1 tape that contains the deduplicated data, though, then you immediately have a Bad Day(tm).

Essentially, you are betting on one tape not failing over the course of (in the argument of Mr Preston) 7+ years. And if something does happen in that 7 years, whether it’s degaussing, loss, theft, fire, water, or aliens, you don’t lose one backup set. You lose every backup that referenced that set of data.

So I would, if I could afford one, buy a deduplicated storage array in a heartbeat for my backup needs. But I would not trust a deduplicated archival system at all. The odds of loss are too great, and it’s not worth the savings. I’d rather cut the frequency of my backups than save money by making my archives co-dependent.

Of course, another user we talked to around the launch of Simpana 8 felt differently:

The global deduplication with Simpana 8 also extends to tape, making it the first product of its kind to allow for writes to physical tape libraries without requiring reinflation of deduplicated data. “That’s very appealing,” said Paul Spotts, system engineer for Geisinger Health, a network of hospitals and clinics in central Pennsylvania. “We added a VTL [virtual tape library] because we were running out of capacity in our physical tape libraries, but we lease the VTL, so we’re only allowed to grow so much per quarter.”

What say the rest of you?

October 27, 2009  10:37 AM

LiveDrive looks to hop across the pond with online data backup

Beth Pariseau

U.K.-based LiveDrive, a competitor to consumer online backup services like MozyHome and Carbonite, is getting U.S. distribution thanks to a new partnership with LifeBoat Distribution.

Online marketing manager Jamie Brown says LiveDrive has 300,000 unique accounts worldwide, 120,000 of them already located in the U.S. LiveDrive creates a network drive that shows up on users’ PCs. Any files sent to that L:\ drive will be backed up to LiveDrive’s cloud; data can be stored there for safekeeping or users can use LiveDrive to keep the L:\ drive synced and share data among multiple machines. Users can also access their data through LiveDrive’s Web portal, which also offers mini-applications that allow users to edit or play back photos and video.

The company has data center infrastructure in the United States through colocation, but currently all users access data through load balancers in the U.K. Brown said there are plans to expand the U.S. infrastructure organically, but the company won’t rush, saying users aren’t currently experiencing performance issues with the way the infrastructure is set up.

Brown said LiveDrive hopes to have an office in the U.S. within 12 months, though, and may also add a business-level service to compete with services like MozyPro and i365’s EVault Small Business Edition. It currently does not offer service-level agreements or geographic redundancy for consumer users.

Despite its claims about its client base, LiveDrive was unable to provide a public customer reference before the announcement this morning.

Enterprise Strategy Group analyst Lauren Whitehouse said this is becoming an increasingly tough space for new players to differentiate themselves in. “Mozy has more than a million customers, and for Symantec’s SwapDrive the number’s even greater,” she said. “LiveDrive has plenty of formidable competition.”

One factor that might hurt LiveDrive, at least in the beginning, is the fact that data must currently be accessed through the U.K. “Anyone who has discomfort sending data out to the cloud might have more discomfort knowing there’s a geographic distance there,” Whitehouse said. Even if performance isn’t bad, “there could be ramifications if there’s a dispute.”

As for the differentiation of being able to manipulate content within the cloud, Whitehouse said LiveDrive will also face competition from players like Memeo and Ricoh’s Quanp, to say nothing of photo-sharing sites like Flickr and Photobucket, both of which offer small photo-editing software suites with their services. “It’s somewhat of a Wild West situation right now with different companies trying to do a ‘land grab,’ capturing customers and then building from there,” she said, including LiveDrive in that mix.
