Storage Soup


February 17, 2012  5:26 PM

HDS enhances non-disruptive data migration to VSP

Sonia Lelii

Hitachi Data Systems Corp. has added a non-disruptive data migration capability to its flagship Virtual Storage Platform (VSP) that minimizes downtime for hosts and applications when moving data to the VSP from other storage arrays.

Hitachi Nondisruptive Migration moves data from HDS’ older Universal Storage Platform (USP) to the VSP. The target array spoofs the host operating system so it thinks it’s talking to the original system even as data is moved to the target. That allows applications to continue running during migration.

Patrick Allaire, an HDS senior product marketing manager, said this capability is based on three components: a new persistent virtual Logical Device Identity (LDEV ID) that is used to manage multiple hosts, a logical identity takeover function, and Hitachi’s Universal Volume Manager (UVM).

The virtual LDEV ID is embedded in the SCSI layer of the Fibre Channel protocol. A virtual LDEV ID is created and mapped from the source to the target, then the takeover function spoofs the operating system. Finally, the Hitachi UVM copies or migrates the volume from the source to the VSP target.
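In rough pseudocode, that three-step flow looks something like the sketch below. All names and interfaces are hypothetical, not actual HDS APIs:

```python
# Hypothetical sketch of the three-step non-disruptive migration flow.
# Names and interfaces are illustrative, not actual HDS tooling.

def nondisruptive_migrate(source_volume, target_array):
    # Step 1: create a virtual LDEV ID on the target that mirrors the
    # source volume's identity at the SCSI layer.
    virtual_id = target_array.create_virtual_ldev_id(source_volume.ldev_id)

    # Step 2: logical identity takeover: the target now answers SCSI
    # inquiries as if it were the source, so hosts see no path change.
    target_array.take_over_identity(virtual_id)

    # Step 3: Universal Volume Manager copies the data in the background
    # while the host keeps issuing I/O against the same virtual identity.
    target_array.uvm_copy(source_volume, virtual_id)
```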

Typically a physical LDEV ID is used for this process, but that makes it more manually intensive. “If you don’t have a virtual persistent ID, it forces you to do a shutdown of the application when changing the data path from the old storage system to the new one,” Allaire said.

With the new virtual LDEV ID and takeover function, Hitachi claims to maintain quality-of-service levels for applications. Hitachi now supports host clustering configurations by synchronizing persistent group reservations on source volumes and keeping the host cluster operational. It also supports parallel migration of up to eight source systems to one Hitachi VSP.

February 17, 2012  3:57 PM

Actifio banks on IBM giving it a PAS with service providers

Dave Raffo

IBM and Actifio struck up a partnership this week that startup Actifio hopes will bring its Protection and Availability Storage (PAS) platform to the cloud, and that IBM sees as a way to fill data protection needs for service providers.

IBM and Actifio said they will offer bundles to cloud service providers and VARs. The packages include Actifio’s PAS data protection with IBM DS3500 Express, Storwize V7000, XIV Gen3 and SAN Volume Controller (SVC) systems.

IBM has its own backup, replication, disaster recovery and data management products, so it’s unclear why it needs Actifio. But Mike McClurg, IBM VP of global midmarket sales, said Actifio provides one tool to handle all those functions.

“We approach managed service providers from a business perspective,” he said. “How can a partnership with IBM grow their business? It’s challenging for managed service providers to find cost-effective data solutions; that requires cobbling together a lot of backup, replication, snapshot, and data management tools. Actifio is an elegant way of replacing a lot of technology and overlapping software products.”

Maybe the partnership is the beginning of a deeper relationship between the vendors. Actifio president Jim Sullivan is a former VP of worldwide sales for IBM System Storage. He maintains that the startup is keeping its partnership options open, but he is also counting on IBM to bring Actifio into deals the startup can’t land on its own.

“This is not an exclusive deal,” he said. “But we’re driving this with IBM. Showing up at service providers with IBM is a great opportunity for us to get reach and credibility.”


February 16, 2012  4:42 PM

HP goes all-flash with new LeftHand iSCSI system

Dave Raffo

Hewlett-Packard today quietly launched an all solid-state drive (SSD) version of its LeftHand iSCSI SAN array.

Unlike the server and services announcements HP made at its Global Partner Conference, HP made its storage news with little fanfare on a company blog.

The HP P4900 SSD Storage System has sixteen 400 GB multi-level cell (MLC) SAS SSDs – eight in each of the system’s two nodes. Each two-node system includes 6.4 TB of capacity, and customers can add 3.2 TB expansion nodes to scale to clusters of 102.4 TB. Expansion nodes increase the system’s IOPS as well as its capacity.
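The scaling math works out. A quick back-of-the-envelope check of those capacity figures (our arithmetic, not HP’s published breakdown):

```python
# Sanity check of the P4900 capacity figures, in integer GB to avoid
# floating-point noise.
ssd_gb = 400                         # per MLC SAS SSD
base_gb = 16 * ssd_gb                # 16 SSDs in the two-node base system
expansion_gb = 8 * ssd_gb            # 8 SSDs per expansion node
max_cluster_gb = 102_400             # 102.4 TB cluster maximum

expansions = (max_cluster_gb - base_gb) // expansion_gb
print(base_gb, expansion_gb, expansions)   # 6400 3200 30 -> 30 expansion nodes
```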

The systems use the HP SmartSSD Wear Gauge, firmware that monitors the SSDs and sends out alerts when a drive gets close to the end of its life. The monitoring firmware is part of the P4000 Management Console.
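HP hasn’t published how the gauge works internally, but threshold-based endurance alerting generally looks something like this sketch (all names, fields and thresholds hypothetical):

```python
# Illustrative sketch of threshold-based wear alerting, in the spirit of a
# wear gauge. The real firmware's counters and thresholds are internal to HP.

WEAR_ALERT_THRESHOLD = 0.90   # hypothetical: warn at 90% of rated endurance

def check_wear(drives):
    """Return drives nearing end of life based on consumed write endurance."""
    alerts = []
    for drive in drives:
        consumed = drive["bytes_written"] / drive["rated_endurance_bytes"]
        if consumed >= WEAR_ALERT_THRESHOLD:
            alerts.append((drive["id"], round(consumed * 100, 1)))
    return alerts

# Example: one drive at 92% of rated endurance triggers an alert.
drives = [
    {"id": "bay1", "bytes_written": 9.2e14, "rated_endurance_bytes": 1.0e15},
    {"id": "bay2", "bytes_written": 3.0e14, "rated_endurance_bytes": 1.0e15},
]
print(check_wear(drives))   # [('bay1', 92.0)]
```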

HP claims the monitoring and scale-out architecture solve the major problems with solid-state storage arrays. “When it comes to SSDs in general, they are great for increasing IOPS and benefitting a business with lower power/cooling requirements,” P4000 product marketing manager Kate Davis wrote in the blog. “But the bad comes with unknown wear lifespan of the drive. And then it turns downright ugly when traditional dual-controller systems bottleneck the performance that was supposed to be the good part. … Other vendors must build towers of storage behind one or two controllers – LeftHand scales on and on.”

The large storage vendors offer SSDs in place of hard drives in their arrays, and there’s no reason they can’t ship a system with all flash. But the P4900 is the first dedicated all-flash system from a major vendor. Smaller vendors such as Nimbus Data, Pure Storage, SolidFire, Violin Memory, Whiptail and Texas Memory Systems have all-SSD storage systems.

A 6.4-TB P4900 costs $199,000. The expansion unit costs $105,000.


February 16, 2012  9:21 AM

NetApp developing server-side flash software

Dave Raffo

NetApp CEO Tom Georgens says he expects server-side flash to become a key part of his company’s flash strategy. However, NetApp will take a different approach than its rival EMC.

Asked about EMC’s VFCache product during NetApp’s earnings call Wednesday, Georgens said server-side flash is “a sure thing,” but NetApp will focus on data management software that works with PCIe cards instead of selling the cards. He doesn’t rule out selling cards either, though.

“I don’t think the opportunity is simply selling cards into the host, although we may do that,” he said. “But our real goal is we’re going to bring the data that’s stored in flash on the host into our data management methodology for backup, replication, deduplication and all of those things. It isn’t as simple as we’re going to make a PCI flash card. Our focus this year is the software component and bringing that into our broader data management capability.”

With VFCache, EMC sells PCIe cards from Micron or LSI along with its own management software. NetApp appears intent on selling software that will work with any PCIe card – or at least the most popular ones. The question is whether its software can integrate as tightly with many cards as it could by focusing on one or two.
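In design terms, a card-agnostic strategy usually means coding the data management layer against a device abstraction rather than one vendor’s card. A minimal sketch of the idea, with entirely hypothetical interfaces (not NetApp’s code):

```python
# Hypothetical sketch of card-agnostic caching software: the data
# management layer codes against an interface, not a specific PCIe device.
from abc import ABC, abstractmethod

class FlashCard(ABC):
    """Minimal contract any vendor's PCIe flash card driver would satisfy."""
    @abstractmethod
    def read(self, block: int) -> bytes: ...
    @abstractmethod
    def write(self, block: int, data: bytes) -> None: ...

class HostCache:
    """Vendor-neutral cache logic layered on whatever card is present."""
    def __init__(self, card: FlashCard):
        self.card = card
        self.index = {}   # volume block -> card block

    def cached_read(self, block: int, backing_read):
        if block in self.index:
            return self.card.read(self.index[block])   # cache hit on flash
        data = backing_read(block)                     # miss: go to the array
        slot = len(self.index)
        self.card.write(slot, data)                    # populate the cache
        self.index[block] = slot
        return data
```

The tradeoff the paragraph above describes falls out of this design: the abstraction buys breadth across cards, at the cost of the deep, device-specific tuning a single-card vendor can do.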

Georgens said NetApp was correct all along in its contention that using flash as cache is more effective than replacing an array’s hard drives with solid-state drives (SSDs). NetApp’s Flash Cache card goes into the array to accelerate performance. It is included on all FAS6000 systems and available as an option on NetApp’s other FAS systems. NetApp does offer SSDs in the array, but recommends flash as cache.

“Flash is going to be pervasive,” Georgens said. “I think you’re going to see it everywhere in the infrastructure. Our position all along has been that flash as a cache is where it has the most impact. And I would say that we actually see probably more pervasive deployment of flash in our systems than anybody else in the industry.”

On the hard drive front, Georgens said the impact from shortages caused by the floods in Thailand wasn’t as bad as anticipated last quarter, although it will take another six to nine months before the “uncertainty” lifts.

“While drive vendors had little forward delivery visibility, most of the disk drives shipped in excess of initial estimates,” Georgens said. “However, not all drive types were universally available and some spot shortages impacted revenue and will likely do so in the upcoming quarter as well. … We expect the drive situation to continue to inject uncertainty into the revenue for the next nine months as availability, cost and pricing settle out in the market.”


February 15, 2012  6:16 PM

SanDisk acquires FlashSoft to accelerate flash performance

Dave Raffo

One by one, solid-state flash vendors are adding caching software to enhance their products. SanDisk picked up startup FlashSoft today in a move designed to make applications run faster with SanDisk’s and other vendors’ PCIe and solid-state drive (SSD) products.

Enterprise PCIe flash pioneer Fusion-io began the trend by acquiring IO Turbine last August, and OCZ picked up Sanrad for its PCIe caching software in January. Solid-state vendor STEC internally developed its EnhanceIO caching software, and EMC’s caching software and FAST auto-tiering technology play a big role in its VFCache server-side flash product.

The acquisition of FlashSoft leaves startups Nevex, Velobit and perhaps a few other vendors still in stealth as obvious targets for solid-state vendors. Nevex and Texas Memory Systems last week said they were jointly developing software that would speed applications running on TMS SSD storage.

FlashSoft software turns SSD and PCIe server flash into a cache for the most frequently accessed data. The company came out of stealth last June with FlashSoft SE for Windows and later added FlashSoft SE versions for Linux, VMware vSphere and Microsoft Hyper-V.
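FlashSoft hasn’t detailed its algorithms, but frequency-based caching of hot data can be sketched along these lines (a toy model, not FlashSoft’s code):

```python
# Toy illustration of frequency-based caching: keep the hottest blocks
# on flash, evict the coldest. Actual FlashSoft internals are not public.
from collections import Counter

class HotDataCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.cache = {}          # block id -> data (stands in for flash)
        self.hits = Counter()    # access frequency per block

    def read(self, block, backing_read):
        self.hits[block] += 1
        if block in self.cache:
            return self.cache[block]         # served from flash
        data = backing_read(block)           # served from primary storage
        if len(self.cache) >= self.capacity:
            coldest = min(self.cache, key=lambda b: self.hits[b])
            del self.cache[coldest]          # evict the least-used block
        self.cache[block] = data
        return data
```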

SanDisk said it will sell FlashSoft SE as standalone software and with the Lightning Enterprise SSDs and upcoming PCIe-based devices based on technology that it acquired by buying Pliant last May for $327 million. SanDisk’s SSDs are used by Dell EqualLogic, NetApp, Hewlett-Packard and others through OEM deals.

“We think this is the next step in our performance acceleration journey,” said Greg Goelz, VP of SanDisk’s enterprise storage solutions group.

Goelz said FlashSoft software was appealing because it works with any hardware, which fits SanDisk’s OEM model, and because it scales better than competing products. “We looked at how did they scale in capacity? If you move from 100 gigabytes of SSDs to terabytes, does the metadata scale exponentially? Is the overhead low? Does it have the best approach to support what’s out there today and to support the evolution from single server to virtualization and clusters? FlashSoft was well ahead of anybody in the market by a substantial lead.”

SanDisk did not disclose the purchase price for FlashSoft.


February 15, 2012  9:17 AM

Today’s data growth requires new management approaches

Randy Kerns

IT storage professionals are looking at a grim situation. The capacity they need to store their organizations’ data is beyond what they can deal with given their current resources.

The growth in data that they will have to deal with comes from several areas:

• The natural increase in the amount of data required for business continuance and expansion of current operations. This data represents normal business requirements.

• New applications or business opportunities. While this is a positive indicator for the business, it represents a potentially significant increase in the amount of data under management.

• Machine-to-machine data from pervasive computing, which arrives in volumes most IT people have not had to deal with before. The data is used for “big data” analytics or business intelligence, and it will be left to IT to manage for the data scientists.

The problem is really one of scale. Because operational expenses typically are not scaled to match the management that much data requires, there is insufficient budget to handle the onslaught.

Storage professionals are looking at different approaches to address the increased demands. These include more efficient storage systems. Greater capacity efficiency – making better use of existing capacity – is a big help. So are storage systems that support consolidation of workloads onto one platform.

Data protection is a continuing problem. The process is viewed as a necessary requirement but not as a revenue-enhancing area. Consequently, data protection needs are dramatic but often lack the financial investment to accommodate the capacity increases. This means storage pros must either find products that are more effective while fitting within the financial constraints, or re-examine the entire data protection strategy using technologies such as automated, policy-controlled archiving and data reduction.

Exploiting point-in-time (snapshot) copies on storage platforms for immediate retrieval demands, implementing backup to disk, and reducing the schedule for backups on removable media to monthly or less frequently are considerations for stretching backup budgets.

Storage professionals need to be open to new ideas for dealing with the massive influx of data. Unless the rapidly increasing capacity demand is addressed, “managed storage” becomes an oxymoron.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


February 14, 2012  4:15 PM

IBM wants EMC’s storage customers

Sonia Lelii

IBM Corp. is gunning for EMC with its XIV storage system, and “Big Blue” claims it is making a dent in EMC’s lead.

IBM last week added a multi-level cell (MLC) solid-state drive (SSD) cache to its Gen3 XIV Storage System, while it also disclosed some numbers to show it is eating into its top storage rival’s customer base.

Bob Cancilla, vice president of IBM Storage Systems, said his division has shipped 5,200 units of its Gen2 and Gen3 XIV Storage Systems combined since the end of last year, and the XIV has brought IBM’s storage division 1,300 new open-systems customers. Of those 1,300 customers, about 700 replaced EMC’s high-end enterprise Symmetrix VMAX or midrange VNX storage systems, he said.

“They are our biggest bull’s eye,” Cancilla said of EMC. “They have seen the impact.”

Cancilla acknowledged IBM “had a poor presence in the tier-one enterprise open space” before acquiring the privately held, Israel-based XIV company in January 2008. IBM re-launched the XIV system under its brand in September 2008. In the fourth quarter of 2011, XIV “was 75 percent of my shipments,” said Cancilla. More than 59 customers have 1 PB of usable storage on XIV systems, while at least 15 customers have more than 3 PB. More than 65% of XIV systems have at least one VMware host attached to them.

“We are doing a lot of work to ensure we have the latest and greatest VMware interoperability,” Cancilla added.

XIV didn’t have an SSD option until last week, and SSD is becoming a must-have feature for enterprise storage. The XIV SSD announcement came one day after EMC rolled out its VFCache (“Project Lightning”) server-side flash caching product to great fanfare. The XIV SSD tier sits between the DRAM cache and the disks in the system, so when the cache fills up, data spills over to the SSDs.

“You have 360 GBs of DRAM cache and now it goes to 6 Terabytes,” Cancilla said. “It’s a huge jump. It’s a 20x improvement in the cache capability.”
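In effect, the SSD tier catches what the DRAM cache evicts. A simplified model of that spillover behavior (illustrative only, not IBM’s implementation):

```python
# Simplified model of cache spillover: when DRAM fills, evicted entries
# drop to an SSD tier instead of being discarded. Illustrative only.
from collections import OrderedDict

class TieredReadCache:
    def __init__(self, dram_slots, ssd_slots):
        self.dram = OrderedDict()   # small, fastest tier
        self.ssd = OrderedDict()    # larger spillover tier
        self.dram_slots, self.ssd_slots = dram_slots, ssd_slots

    def get(self, key, disk_read):
        if key in self.dram:
            self.dram.move_to_end(key)                     # DRAM hit
            return self.dram[key]
        if key in self.ssd:
            return self._promote(key, self.ssd.pop(key))   # SSD hit
        return self._promote(key, disk_read(key))          # disk read

    def _promote(self, key, value):
        if len(self.dram) >= self.dram_slots:
            old_key, old_val = self.dram.popitem(last=False)
            if len(self.ssd) >= self.ssd_slots:
                self.ssd.popitem(last=False)   # drop the coldest SSD entry
            self.ssd[old_key] = old_val        # DRAM eviction spills to SSD
        self.dram[key] = value
        return value
```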

IBM offers SSDs and automatic tiering software as options for other storage systems, but this is the first SSD option for XIV. XIV systems include one tier – either SATA or high-capacity SAS drives. Although IBM’s caching option is limited to one of its products, the concept is similar to what EMC is doing throughout its storage array lineup with VFCache. It is speeding read performance for data that needs it while passing writes through to the array.

“It’s like Project Lightning, but in the array,” Silverton Consulting president Ray Lucchesi said. “It’s a similar type of functionality. The differences are IBM is using SSD instead of a PCIe card and it’s at the storage instead of the server. But all the reads go to cache and the writes get destaged to the array.”

IBM also added a mirroring capability to XIV, so customers can replicate data between Gen2 and Gen3 XIV systems.


February 13, 2012  5:18 PM

Is Starboard Storage a startup or Reldata 2.0?

Dave Raffo

Starboard Storage Systems launched today, portraying itself as a brand new startup with a new technology and architecture for unified storage. But Starboard is in many ways a re-launch of Reldata, which had been selling multiprotocol storage for years.

Starboard didn’t volunteer information about its Reldata roots, although representatives freely admitted it when asked. With its new AC72 storage system, Starboard wants to appear as a fresh, shiny company rather than one that has been around the block many times without making much of an impact on the storage world.

“It’s not a rebranding of Reldata but Starboard is not your typical startup,” said Starboard chief marketing officer Karl Chen, who joined the company after it became Starboard. “[Reldata] had great technology, so why not absorb Reldata and reduce our time to market? This way, we were able to get to market a lot faster by leveraging what Reldata had. We had the option of starting brand new or taking something that would accelerate our time to market.”

Starboard has the same CEO, CTO, engineering VP and sales chief as Reldata, and has not yet raised any new funding. Starboard’s 30 employees are a mix of Reldata holdovers and new hires. The AC72 includes Reldata intellectual property and was developed in part by Reldata engineers.

“Absolutely, there is technology that we are leveraging from Reldata to build the Starboard Storage product,” Chen said. But he points out that the Starboard product is a new architecture with a different code base. The Reldata 9240i did not support Fibre Channel, was a single-controller system and used traditional RAID. There was no dynamic pooling or SSD tier. “It’s a completely different product from what [Reldata] was selling,” Chen added.

The company also moved from Parsippany, N.J., to Broomfield, Colo., an area with a deep pool of workers with storage experience. Starboard CEO Victor Walker, CTO (and Reldata founder) Kirill Malkin, VP of engineering John Potochnik and director of sales Russell Wine were all part of Reldata. They are joined by chairman Bill Chambers, the LeftHand Networks founder and CEO who sold that iSCSI SAN company to HP for $360 million in 2008.

Starboard will continue to service Reldata 9240i systems, but will no longer sell the Reldata line.

(Sonia R. Lelii contributed to this blog).


February 10, 2012  10:04 AM

Red Hat brings GlusterFS to Amazon cloud

Dave Raffo

Red Hat has been tweaking and expanding the NAS storage products it acquired from Gluster last October. This week Red Hat brought GlusterFS to the cloud with an appliance for Amazon Web Services (AWS).

Last December, Red Hat released the Storage Software Appliance (SSA) that Gluster sold before the acquisition, replacing the CentOS operating system Gluster used with Red Hat Enterprise Linux (RHEL). This week’s release – the Red Hat Virtual Storage Appliance (VSA) for AWS – is a version of the SSA that lets customers deploy NAS inside the cloud.

The VSA is POSIX-compliant, so — unlike with object-based storage — applications don’t need to be modified to move to the cloud.

“The SSA product is on-premise storage,” Red Hat storage product manager Tom Trainer said. “This is the other side of the coin. The VSA deploys within Amazon Web Services with no on-premise storage.”

The VSA lets customers aggregate Amazon Elastic Block Store (EBS) volumes and Elastic Compute Cloud (EC2) instances into a virtual storage pool.

Trainer said Red Hat takes a different approach to putting file data in the cloud than cloud gateway vendors such as Nasuni and Panzura.

“They built an appliance that sits in the data center, captures files and puts them in an object format and you ship objects out to Amazon,” he said. “We said ‘that’s one way to do it.’ But the real problem has been having to modify your applications to run in the cloud because cloud storage has been built around object storage. If we could take two Amazon EC2 instances and attach EBS on the back end, we could build a NAS file server appliance right in the cloud. Users can take POSIX applications from their data center and install them on EC2 instances. They can take applications they had been running in the data center and run them in the cloud.”
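The raw building blocks the VSA automates can be sketched with the boto EC2 library. This is an illustration of the pieces, not Red Hat’s tooling; the instance IDs, volume size and zone are hypothetical:

```python
# Rough illustration of the building blocks: EBS volumes attached to EC2
# instances, then pooled by GlusterFS. The VSA packages and automates this.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# Create and attach an EBS volume to each instance acting as a brick server.
# (In practice, wait for each volume to reach 'available' before attaching.)
instances = ["i-aaaa1111", "i-bbbb2222"]          # hypothetical instance IDs
for instance_id in instances:
    vol = conn.create_volume(100, "us-east-1a")   # 100 GB EBS volume
    conn.attach_volume(vol.id, instance_id, "/dev/sdf")

# On the instances themselves, the attached volumes become GlusterFS bricks:
#   gluster peer probe server2
#   gluster volume create pool replica 2 server1:/brick server2:/brick
#   gluster volume start pool
```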

Red Hat prices the VSA at $75 per node (EC2 instance). Customers must also pay Amazon for its cloud service.

Trainer said Red Hat plans to support other cloud providers, and customers would be able to copy files via CIFS if they wanted to move from one provider to another. But Amazon is the only provider currently supported for the Red Hat VSA.


February 9, 2012  8:36 AM

IBM puts SSD cache in XIV

Dave Raffo

EMC rolled out its VFCache (“Project Lightning”) server-side flash caching product to great fanfare this week. IBM made a quieter launch, adding a solid-state drive (SSD) caching option to its XIV storage system.

IBM’s XIV Gen3 now includes an option for up to 6 TB of fast read cache for hot data. IBM offers SSDs and automatic tiering software as options for other storage systems, but this is the first SSD option for XIV. XIV systems include one tier – either SATA or high-capacity SAS drives.

Although IBM’s caching option is limited to one of its products, the concept is similar to what EMC is doing throughout its storage array lineup with VFCache. It is speeding read performance for data that needs it while passing writes through to the array.

“It’s like Project Lightning, but in the array,” Silverton Consulting president Ray Lucchesi said. “It’s a similar type of functionality. The differences are IBM is using SSD instead of a PCIe card and it’s at the storage instead of the server. But all the reads go to cache and the writes get destaged to the array.”
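A minimal model of that read-cache, write-destage behavior (a sketch of the concept, not IBM’s implementation):

```python
# Minimal model of the behavior Lucchesi describes: reads are served from
# flash when possible; writes go straight to the array and refresh the
# cache so later reads don't see stale data. Illustrative only.
class ReadCacheWriteThrough:
    def __init__(self):
        self.flash = {}   # stands in for the SSD read cache

    def read(self, block, array_read):
        if block not in self.flash:
            self.flash[block] = array_read(block)   # populate on a miss
        return self.flash[block]                    # reads come from cache

    def write(self, block, data, array_write):
        array_write(block, data)    # writes destage to the array first
        self.flash[block] = data    # keep the cache coherent
```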

The XIV SSD cache is also similar to what NetApp does with its Flash Cache, a product IBM sells through an OEM deal with NetApp. IBM also sells Fusion-io PCIe cards in its servers, and EMC has been selling SSDs in storage arrays since 2008. Flash is showing up in enterprise storage systems in many ways, and those options will keep expanding.

“As SSDs become more price performant, customers are putting them in for workloads that require quick response times,” said Steve Wojtowecz, IBM’s VP of storage software. “We’re seeing real-time data retrievals, database lookups, catalog files, and hot data going to SSDs and colder data going to cheaper devices.”

The other big enhancement in XIV Gen3 is the ability to mirror data between current XIV systems and previous versions of the platform. That is most helpful for migrating data from older to newer arrays, although IBM is also pushing it as a way to use XIV for disaster recovery.

