Storage Soup


May 10, 2010  1:33 PM

EMC releases VPlex “active-active” storage

Beth Pariseau

This morning’s big announcement at EMC World is called VPlex, which EMC says allows data to be federated over geographic distance.

VPlex was first publicly discussed at last year’s VMworld conference. At the time, EMC officials referred to it as “active-active” storage to support distance VMotion. The key difference between this and metro clusters is cache coherency, enabled by EMC’s acquisition of technology from YottaYotta three years ago. While the stretched array cluster remains locally “array aware” — integrating with EMC FAST, for example — it can propagate data as a distributed pool quickly enough to support running applications being VMotioned over distance.
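To make the cache-coherency idea concrete, here is a minimal sketch of the general technique: a write at either site invalidates the peer’s cached copy before completing, so both sites always serve current data. This is an illustration of the concept only, not EMC’s implementation.

```python
# Toy model of two-site cache coherency: writes invalidate the peer's
# cached copy, so reads at either site always see current data.
# Conceptual illustration only -- not EMC's implementation.
class SiteCache:
    def __init__(self, name):
        self.name, self.cache, self.peer = name, {}, None

    def read(self, block, backend):
        if block not in self.cache:          # miss: fetch from the shared pool
            self.cache[block] = backend[block]
        return self.cache[block]

    def write(self, block, value, backend):
        self.peer.cache.pop(block, None)     # invalidate the remote copy first
        self.cache[block] = backend[block] = value

backend = {"lun0:42": b"old"}
a, b = SiteCache("site-a"), SiteCache("site-b")
a.peer, b.peer = b, a
b.read("lun0:42", backend)                   # site B caches the block
a.write("lun0:42", b"new", backend)          # site A writes; B's copy is dropped
assert b.read("lun0:42", backend) == b"new"  # site B re-fetches current data
```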

The VPlex device is an appliance that starts at 1U and scales up to 4U, with 32 GB of cache and two quad-core processors per appliance, and it can front any of EMC’s arrays. The goal, according to Pat Gelsinger, President and Chief Operating Officer, EMC Information Infrastructure Products, is to be able to front third-party arrays as well, although Brian Gallagher, President, Symmetrix and Virtualization Product Group, said those third-party arrays are not fully supported yet.

Two separately licensed versions of VPlex are available today. VPlex Local, which covers data migrations within a local data center, starts at $77,000 as an up-front fee or $26,000 with subscription-based pricing. VPlex Metro, also available today, supports data over distances up to 100 km (5 ms latency) using synchronous replication.
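For a sense of where the 100 km / 5 ms pairing comes from, here is my own back-of-envelope arithmetic (the fiber speed is the usual rule of thumb, not an EMC figure):

```python
# Back-of-envelope latency math for synchronous replication (illustrative only).
# Light in fiber travels at roughly 200,000 km/s, i.e. ~200 km per millisecond.
fiber_speed_km_per_ms = 200.0

distance_km = 100
one_way_ms = distance_km / fiber_speed_km_per_ms   # 0.5 ms
round_trip_ms = 2 * one_way_ms                     # 1.0 ms per acknowledged write

# Synchronous replication adds at least one round trip to every write,
# on top of switch hops and array processing, so a 5 ms budget leaves
# headroom for the rest of the I/O path.
print(f"Propagation alone: {round_trip_ms:.1f} ms of the 5 ms budget")
```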

In early 2011, officials said, EMC will release VPlex Geo, which will support “thousands of virtual machines over thousands of miles” and asynchronous replication. Finally, VPlex Global, also due out next year, will support multi-site pooling using asynchronous or synchronous replication.

Stay tuned for more on this announcement and other news from the show.

May 7, 2010  12:33 PM

05-06-2010 Storage Headlines

Beth Pariseau

(0:24) Data Robotics CEO: Drobo makes RAID data storage easier

(1:49) SMB data storage hardware competition heats up with Iomega ix12-300r

(3:31) CA adds disk backup application to ARCserve Backup software

(5:06) Transaction performance management vendor integrates with EMC FAST

(6:17) NetEx gets VMware seal of approval


May 5, 2010  8:55 PM

Transaction performance management vendor integrates with EMC FAST

Beth Pariseau

With EMC World fast upon us, announcements have begun to take on an EMC theme, including one from Precise Software Inc., which says its transaction performance management software is integrated with EMC’s Fully Automated Storage Tiering (FAST) to offer transaction-by-transaction monitoring and storage tier migration.

The software has been generally available since the end of 2009 as part of the EMC Select program. Customers interested in linking critical database and other application transactions with the performance boost available from SSDs can have Precise’s software create a list of “suggestions” of what volumes and transactions could best benefit from Flash storage. An integration between Precise and Symmetrix Management Console can then ‘hand off’ that list of suggestions to FAST, which will perform the migration to higher tiers of storage accordingly. In the ‘handoff’ scenario, the storage manager would manually approve the data movement suggested by Precise.
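As a rough illustration of how such a suggest-approve-migrate handoff might work, here is a sketch; every name in it is invented for illustration, since neither Precise nor EMC publishes this as a programmable API.

```python
# Hypothetical sketch of the "suggest, approve, migrate" workflow described
# above. All names are invented; this is not Precise's or EMC's actual API.
from dataclasses import dataclass

@dataclass
class Suggestion:
    volume: str           # storage volume backing the transaction
    transaction: str      # application transaction being monitored
    avg_latency_ms: float

def suggest_flash_candidates(observations, threshold_ms=20.0):
    """Flag transactions whose storage latency might benefit from SSDs."""
    return [s for s in observations if s.avg_latency_ms > threshold_ms]

def hand_off(suggestions, approve):
    """Pass suggestions to the tiering engine; a human approves each move."""
    for s in suggestions:
        if approve(s):
            print(f"migrating {s.volume} to a higher storage tier")

observations = [Suggestion("vol_db01", "order_commit", 35.2),
                Suggestion("vol_logs", "batch_report", 8.1)]
hand_off(suggest_flash_candidates(observations), approve=lambda s: True)
```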

“EMC offers some application performance management through its Ionix IT Operations Intelligence products, but that monitoring is focused on the network rather than transactions,” Precise’s EVP of Products and Marketing Zohar Gilad said.

Of course, EMC FAST is far from the only automated storage tiering software currently available. Gilad said integration with other vendors’ storage tiering software is on the roadmap, but declined to disclose who else Precise might be working with.


May 4, 2010  7:22 PM

NetEx gets VMware seal of approval

Beth Pariseau

As certification announcements go, this one is, I think, more interesting than most, if only because it harkens back to one of the most memorable product announcements/demonstrations I saw last year.

At last year’s VMworld in San Francisco, Cisco and VMware demonstrated distance VMotion, a technology that will be key to VMware’s vision of data center federation and fluidity between public and private clouds. However, distance VMotion as of that conference had several limitations, the most significant of which from a storage perspective is the need to migrate potentially large volumes of data over distance very quickly in order to support VMotion between data centers.

VMware said last year it would support customers if they deploy distance VMotion using the Cisco network, but its support statement included extensive fine print, including a minimum network bandwidth of 622 Mbps, or an OC12 connection.
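To put 622 Mbps in perspective, a quick calculation (my arithmetic; the 8 GB VM is an arbitrary example, not a VMware figure):

```python
# How long does it take to move a VM's memory image over an OC12 link?
# Illustrative arithmetic only; the 8 GB figure is an arbitrary example.
link_mbps = 622
vm_memory_gb = 8

bits_to_move = vm_memory_gb * 8 * 1000   # GB -> megabits (decimal units)
seconds = bits_to_move / link_mbps
print(f"{vm_memory_gb} GB of memory takes ~{seconds:.0f} s at {link_mbps} Mbps")
# ~103 seconds, before retransmitting pages dirtied during the copy --
# which is why WAN optimization vendors saw an opening here.
```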

Partners were scrambling at that time to step in and solve the data migration problem (including EMC, which was developing “active-active” storage to support distance VMotion), and some of the exhibitors on the show floor, including NetEx and F5 Networks, claimed they could already solve it. At the time, however, no WAN optimization products were certified for distance VMotion with VMware.

Today, NetEx announced certification of its HyperIP software as VMware Ready, which according to a press release means “HyperIP integrates consistently with VMware technology and is ready for deployment in customer environments.” The press release doesn’t mention distance VMotion specifically, but a NetEx spokesperson said a large oil and gas company has deployed the software for distance VMotion. That customer is not open to taking questions from press, the spokesperson said.


May 3, 2010  3:13 PM

Will zettabytes of data push enterprises to the cloud?

Dave Raffo

According to IDC’s 2010 Digital Universe report, digital data grew 62% last year to 800,000 PB. IDC says the total will reach 1.2 million PB (1.2 zettabytes) this year, and will increase to 35 ZB in 10 years.

While those numbers may look staggering on a page, they probably don’t shock anybody charged with managing data storage. The really shocking – and frightening – number is that IDC says the number of IT staff available to manage all this data will only grow by a factor of 1.4 by 2020. If IDC is correct, then the dreaded “do more with less” mantra will become a long-term way of life.
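The arithmetic behind that worry is easy to reproduce from the figures above:

```python
# The gap IDC is describing, in round numbers (figures from the report above).
data_2010_zb = 1.2
data_2020_zb = 35.0
staff_growth = 1.4

data_growth = data_2020_zb / data_2010_zb   # ~29x more data by 2020
per_admin = data_growth / staff_growth      # ~21x more data per admin
cagr = data_growth ** (1 / 10) - 1          # ~40% compound annual growth
print(f"{data_growth:.0f}x the data, {per_admin:.0f}x per admin, "
      f"~{cagr:.0%} annual growth")
```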

So how will this all change the way we manage data? Chuck Hollis, global marketing CTO of EMC – which sponsored the IDC study – says the data growth will push a lot more of it to the cloud this year. Hollis says the IT staffs at large enterprises that he talks to are ready to set up private clouds to manage data.

“For tech guys, this is the year of putting your cloud strategy together,” Hollis says. “We’re way beyond the ‘What is the cloud?’ discussion, and it’s a very mature discussion with the IT guys I talk to.

“The larger enterprises say, ‘We’re big, we can do this ourselves. We can build a private cloud behind the firewall and get comfortable with it.’ They’re saying, ‘We pay the same price for this stuff – the processors, server, storage – there’s no reason I can’t do what Amazon does.’”

Hollis says as long as organizations feel they can control their data in the cloud, they’re willing to move it there.

“The cloud works when enterprise guys can be in control,” he said. “Ask them to give up control, and it’s not that attractive a proposition for them. You can’t outsource responsibility and accountability. In financial services, a trillion dollars a day floats around the global economy over the cloud. Most days we’re OK with that. Clouds, schmouds, it doesn’t matter as long as enterprise guys feel they’re in control.”

Other emerging methods of managing growth aren’t quite as mature, Hollis says. That includes data deduplication for primary data. While EMC is now the leader in backup dedupe, Hollis says the success of primary deduplication “has a lot to do with processors being fast enough to do it without impacting performance. If you have a SAP application with 10,000 demanding users, maybe [deduplication] is a false savings. The concern is, at what cost? The technology gets better year over year, but some are of the opinion this is just a temporary fix, you’re just buying yourself some time. A lot of information is not compressible, like JPEGs. You can’t compress something that’s already compressed.”
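That last point is easy to verify; here is a quick demonstration using zlib, with random bytes standing in for already-compressed JPEG data:

```python
# Hollis's point about already-compressed data, demonstrated with zlib.
import os, zlib

text = b"the quick brown fox jumps over the lazy dog " * 1000  # redundant data
jpeg_like = os.urandom(len(text))  # random bytes stand in for a JPEG payload

for label, data in (("text", text), ("jpeg-like", jpeg_like)):
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{label}: compressed to {ratio:.0%} of original size")
# Redundant text shrinks to a few percent of its size; the random
# "jpeg-like" data actually grows slightly -- no redundancy left to remove.
```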

Flash solid state storage is another area where EMC has been out front, but it’s another technology where the greatest benefits are still down the road. “If you take what processors have done in the last 10 years as far as density, price and performance, then start with flash in 2010 and forecast it out in 10 years, it could actually get cheaper than disk,” Hollis said. “That would be an interesting world.”


April 30, 2010  5:33 PM

FalconStor rearranges its OEM chairs

Dave Raffo

FalconStor Software officially reported revenue Thursday, confirming what it first said in a preliminary report April 19 – it had a lousy quarter.

FalconStor’s $17.1 million in revenue was down from $21 million a year ago, and it lost $5.5 million compared to a loss of $900,000 in the same quarter last year.

FalconStor’s problem is it sells mainly through OEM partners, and its largest partnerships were disrupted last year. Its biggest OEM partner, EMC, bought Data Domain and now sells more Data Domain data deduplication boxes and fewer of its VTLs that use FalconStor software. So FalconStor revenue from EMC declined $300,000.

Sun is another partner, but Sun was in the process of being acquired by Oracle for most of 2009, and it’s been unclear which of its products would survive the acquisition. FalconStor revenue from Sun dropped $1.1 million last quarter. FalconStor also took a hit when Hewlett-Packard acquired FalconStor partner 3Com, although FalconStor executives say they expect a rebound now that 3Com is integrated into HP. Another FalconStor partner, Copan, effectively went out of business last year before SGI acquired its assets and resurrected its archiving product.

FalconStor says it will cut spending and has imposed a hiring freeze until it becomes profitable again. More importantly, it is finding new OEM partners. As VP of business development Bernie Wu put it, “We had an unusually high level of disruption with our OEM partners last year, and we’re forming a new foundation of partnerships.”

FalconStor executives say they expect to launch two new Tier 1 OEM deals late this year. One will be for a cloud services offering. They didn’t say much about the other, but one possibility is a deal with Hitachi Data Systems for FalconStor’s File-interface Deduplication System (FDS) software.

HDS so far has taken a piecemeal approach to backup data deduplication. It resells IBM’s Diligent ProtecTier, but doesn’t push a product owned by its rival. HDS salespeople have a financial incentive to sell the new Sepaton VTLs built on HDS disk, but there’s no formal reseller deal. HDS OEMs CommVault’s Simpana software, which includes deduplication, and certifies FalconStor’s dedupe, but it lacks one main dedupe product.

During the earnings call Thursday, Wu said FalconStor had a “significant pipeline” with HDS for the FalconStor software it resells and “we expect that partnership to deepen.”


April 30, 2010  3:31 PM

4-29-2010 Storage Headlines

Dave Raffo

(0:24) Compellent zNAS adds ZFS multiprotocol storage access to Storage Center SANs

(1:38) EMC SAN failure blamed for Intermedia hosted email outages

(2:57) Nimbus Data Systems launches Nimbus S-class all-solid-state, no-disk storage system

(5:01) LSI CEO likes his chances with Oracle

(6:25) Unitrends marches to different data reduction drummer

(7:57) What’s keeping data storage out of the cloud?


April 29, 2010  1:10 AM

LSI CEO likes his chances with Oracle

Dave Raffo

Ever since Oracle said it would end its OEM deal with Hitachi Data Systems for its enterprise storage systems, people in the industry have wondered if Oracle would also sever its midrange storage OEM deal with LSI.

Oracle executives say they killed their HDS deal because they don’t make enough money selling other vendors’ storage, which doesn’t bode well for LSI.

But LSI CEO Abhi Talwalkar says he’s optimistic about continuing with Oracle. During LSI’s earnings call Wednesday evening, Talwalkar even talked about expanding the partnership.

“We are pleased with our competitive position at Oracle,” he said. “Oracle recently posted a pdf on [its] website to address the partnership with Hitachi Data Systems. We believe there have been positive developments for LSI, including the termination of the HDS relationship. This will give LSI more room to grow, and Oracle also mentions support for technology partners associated with the [Sun StorageTek] 6780 system and 6000 series, which is all leveraging LSI system technology.”

Oracle hasn’t said anything publicly either way about LSI. During Oracle’s earnings call last month, CEO Larry Ellison said OEM relationships with HDS and Symantec Veritas backup software have ended but did not mention LSI. In explaining the HDS and Symantec decisions, he said “we add no value so we are out of that business.” He did mention expanding the Sun StorageTek 7000 midrange storage as well as high-performance and high-end server platforms, but not the 6000. “Where Sun was specifically a distributor of someone else’s intellectual property and lost money doing it, we are out of that business,” Ellison said.

But what if Oracle/Sun is making money on LSI’s storage? LSI reported its second straight strong quarter for storage system sales, with revenue of $221 million, up 40% from a year ago. Besides Oracle, LSI’s OEM deals include midrange storage systems for IBM and entry-level enclosures for Dell and other smaller vendors.

Talwalkar also said LSI will launch a new 6 Gbps SAS entry-level platform in the second half of this year, with up to four times the performance and twice the capacity of its current platform.


April 28, 2010  4:27 PM

One storage pro’s response to Intermedia’s hosted email outage

Beth Pariseau

Earlier this week, we ran a story about email hosting provider Intermedia attributing a recent outage to a failure in its EMC SAN. After the story ran, we received feedback from Bob Adams, a storage systems engineer at a leading Boston teaching hospital, on the case:

“I can’t see how Intermedia can truly blame this on EMC,” Adams wrote in an email.

First of all, the EMC SAN referred to here is clearly an EMC CLARiiON based on the information provided. The fact that one of the storage processors had a failure, probably a bugcheck panic (like a Windows BSOD; CXes run a Windows OS on the SPs) due to a bug in the firmware, aka FLARE code, suggests that their SAN admin hadn’t been patching and updating the FLARE code on a regular basis as he or she should have been.

Then, the failure and having to run on one storage processor is something the CLARiiON is designed to do, for fault tolerance as well as load balancing. Again, the SAN admin was at fault; this CLARiiON was clearly over-utilized. The utilization on the storage processors has to stay within a CPU percentage range such that if an SP failed, the second SP could handle its own load plus the load of the other. Meaning, if the utilization of SPA was 75% and the utilization of SPB was 75%, there is no way SPB could handle the load if SPA failed. Which sounds like what happened here. I see this as more Intermedia’s own fault than EMC’s.
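Adams’ arithmetic checks out. In a dual-controller array the survivor has to absorb both loads, which is why the usual rule of thumb is to keep each SP below 50% busy:

```python
# Failover headroom math for a dual-controller array, per Adams' example.
def survivor_load(spa_util, spb_util):
    """Load on the surviving SP if its peer fails (1.0 = 100% of one SP)."""
    return spa_util + spb_util

print(survivor_load(0.75, 0.75))  # 1.5 -> 150% of one SP: guaranteed meltdown
print(survivor_load(0.45, 0.45))  # 0.9 -> survivable, with a little headroom
# Rule of thumb: keep each SP under ~50% busy if you want clean failover.
```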

What do you think? Comments? Operators are standing by…


April 27, 2010  9:10 PM

Unitrends marches to different data reduction drummer

Beth Pariseau

Unitrends Inc. has put its own spin on data reduction for small and midsized businesses (SMBs) that use its backup appliances.

Previously, Unitrends offered file-level compression and post-process subfile data deduplication with its products, but the company said the CPU overhead of doing subfile-level deduplication on its customers’ relatively small data sets required beefier processors and appliance hardware. This in turn might be more expensive for some small customers than just buying more disk, according to Unitrends COO Mark Campbell.

Unitrends today announced what it calls Adaptive Deduplication, and Campbell says the goal is to offer users the best storage utilization possible without compromising performance. Adaptive Deduplication adds a lightweight content-aware algorithm that evaluates the type of data (structured or unstructured) as well as its size as it comes into the system, and determines how the best data reduction ratio can be achieved. All files are compressed as they come into the system, but only larger data objects will be pulled apart for sub-file dedupe later.

“Typically structured data is better served by the compression ratio — files almost always dedupe pretty quickly,” he said. Now, if a user is making small incremental changes to a database, the system won’t have to pull apart every small block to look for additional data reduction beyond compression — it can just compress the data and move on.
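A minimal sketch of the kind of decision logic Campbell describes is below; the 1 MB threshold and the structured/unstructured split are my guesses for illustration, not Unitrends’ actual rules.

```python
# Sketch of an "adaptive" data-reduction decision per the description above.
# The threshold and classification are invented for illustration; Unitrends
# does not publish its actual algorithm.
import zlib

SUBFILE_THRESHOLD = 1 * 1024 * 1024   # hypothetical 1 MB cutoff
dedupe_queue: list[bytes] = []        # objects to pull apart post-process

def ingest(data: bytes, structured: bool) -> bytes:
    """Compress every object; queue only large unstructured objects
    for post-process sub-file deduplication."""
    if not structured and len(data) > SUBFILE_THRESHOLD:
        dedupe_queue.append(data)     # chunked and fingerprinted later
    return zlib.compress(data)        # small/structured data: compress, move on
```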

While the performance overhead of data deduplication has been a major issue with the technology since its inception, users at midsized and larger companies have been willing to pay the price for processors in order to contain unmanageable backup capacity growth.

But Campbell brings up an interesting challenge to dedupe-as-panacea: Unitrends customers are often in small shops that require as little as seven days’ data retention, and “they don’t get great ratios with traditional block-level deduplication. When disk drives are so cheap, it’s not necessarily a no-brainer to purchase next-generation hardware to push subfile dedupe.”

The compression and file-level dedupe will be included with the software that comes on all Unitrends appliances, and current customers will be able to download it beginning next month. For customers still looking for subfile dedupe, Unitrends will also come out with a new appliance heavier on processors than capacity later this year, which will make subfile deduplication more likely under Adaptive Deduplication.  “It sounds funky and weird and why not just put [a new appliance] out there, but it’s a price-performance issue,” Campbell said.

