Storage Soup


February 15, 2012  9:17 AM

Today’s data growth requires new management approaches

Randy Kerns

Information technology storage professionals are looking at a grim situation. The capacity required to store their organizations’ data is beyond what they can manage with their current resources.

The growth in data that they will have to deal with comes from several areas:

• The natural increase in the amount of data required for business continuance and expansion of current operations. This data represents normal business requirements.

• New applications or business opportunities. While this is a positive indicator for the business, it represents a potentially significant increase in the amount of data under management.

• Machine-to-machine data from pervasive computing, which arrives in volumes that most IT people have not had to deal with before. This data feeds “big data” analytics and business intelligence, and it will be left to IT to manage for the data scientists.

The problem is really one of scale. Operational expenses typically do not scale with the management that much data requires, so there is insufficient budget to handle the onslaught of data.

Storage professionals are looking at different approaches to address the increased demands. These include more efficient storage systems. Greater capacity efficiency – making better use of capacity – is a big help, as are storage systems that support consolidation of workloads onto one platform.

Data protection is a continuing problem. The process is viewed as a necessary requirement but not as a revenue-enhancing area. Consequently, data protection needs grow dramatically but often lack the financial investment to accommodate the capacity increases. This means storage pros must either find products that are more effective within the financial constraints or re-examine the entire data protection strategy, using technologies such as automated, policy-controlled archiving and data reduction.

Options for stretching backup budgets include exploiting point-in-time (snapshot) copies on storage platforms for immediate retrieval demands, implementing backup to disk, and reducing the schedule for backups on removable media to monthly or less frequently.
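
As a rough illustration of the automated, policy-controlled archiving mentioned above, the sketch below walks a file tree and flags anything untouched for longer than a retention window as an archive candidate. The threshold, path and function names are assumptions for illustration, not a description of any vendor's product.

```python
import os
import time

# Minimal sketch of an age-based archive policy. The threshold and the path
# are assumptions for illustration, not settings from any product.
ARCHIVE_AFTER_DAYS = 180        # assumed policy: untouched for ~6 months
PRIMARY_PATH = "/mnt/primary"   # hypothetical primary-storage mount point


def archive_candidates(root, max_age_days):
    """Yield files whose last modification time is older than the policy window."""
    cutoff = time.time() - max_age_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    yield path
            except OSError:
                continue  # file vanished or is unreadable; skip it


if __name__ == "__main__":
    for path in archive_candidates(PRIMARY_PATH, ARCHIVE_AFTER_DAYS):
        print("archive candidate:", path)
```

A real archiving tool would move each candidate to a cheaper tier and typically leave a stub or catalog entry behind; the sketch only identifies what such a policy would select.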

Storage professionals need to be open to new ideas for dealing with the massive influx of data. Without addressing the greatly increasing capacity demand, managed storage becomes an oxymoron.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

February 14, 2012  4:15 PM

IBM wants EMC’s storage customers

Sonia Lelii

IBM Corp. is gunning for EMC with its XIV storage system, and “Big Blue” claims it is making a dent in EMC’s lead.

IBM last week added a multi-level cell (MLC) solid-state drive (SSD) cache to its Gen3 XIV Storage System, while it also disclosed some numbers to show it is eating into its top storage rival’s customer base.

Bob Cancilla, vice president of IBM Storage Systems, said his division has shipped 5,200 units of both its Gen2 and Gen3 XIV Storage Systems since the end of last year, and IBM added 1,300 new open-system customers to its storage division with the XIV. Of those 1,300 customers, about 700 replaced EMC’s high-end enterprise Symmetrix VMAX or midrange VNX storage systems, he said.

“They are our biggest bull’s eye,” Cancilla said of EMC. “They have seen the impact.”

Cancilla acknowledged IBM “had a poor presence in the tier-one enterprise open space” before acquiring the privately held, Israel-based XIV company in January 2008. IBM re-launched the XIV system under its brand in September 2008. In the fourth quarter of 2011, XIV “was 75 percent of my shipments,” said Cancilla. More than 59 customers have 1 PB of usable storage on XIV systems, while at least 15 customers have more than 3 PB. More than 65% of XIV systems have at least one VMware host attached to them.

“We are doing a lot of work to ensure we have the latest and greatest VMware interoperability,” Cancilla added.

XIV didn’t have an SSD option until last week, and SSD is becoming a must-have feature for enterprise storage. The XIV SSD announcement came one day after EMC rolled out its VFCache (“Project Lightning”) server-side flash caching product to great fanfare. The XIV SSD tier sits between the DRAM cache and the disks in the system, so when the DRAM cache fills up, data spills over to the SSDs.

“You have 360 GBs of DRAM cache and now it goes to 6 Terabytes,” Cancilla said. “It’s a huge jump. It’s a 20x improvement in the cache capability.”
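
To make the spill-over behavior concrete, here is a minimal sketch of a two-level read cache, assuming a small DRAM tier that evicts its least recently used blocks into a larger SSD tier while writes pass straight through to the backing disks. The class, tier sizes and method names are illustrative assumptions, not IBM's implementation.

```python
from collections import OrderedDict


class TwoLevelReadCache:
    """Illustrative two-level read cache: a DRAM tier spills least recently
    used blocks to an SSD tier, and writes go straight through to the backing
    disks. Sizes and names are assumptions, not IBM's design."""

    def __init__(self, backing_store, dram_slots=4, ssd_slots=16):
        self.backing = backing_store   # dict-like: block id -> data (the disks)
        self.dram = OrderedDict()      # small, fast tier
        self.ssd = OrderedDict()       # larger, slower tier
        self.dram_slots = dram_slots
        self.ssd_slots = ssd_slots

    def read(self, block):
        if block in self.dram:                 # DRAM hit
            self.dram.move_to_end(block)
            return self.dram[block]
        if block in self.ssd:                  # SSD hit: promote back to DRAM
            data = self.ssd.pop(block)
        else:                                  # miss: fetch from the disks
            data = self.backing[block]
        self._insert_dram(block, data)
        return data

    def write(self, block, data):
        # Writes go straight to the backing store; cached copies are
        # invalidated rather than updated.
        self.backing[block] = data
        self.dram.pop(block, None)
        self.ssd.pop(block, None)

    def _insert_dram(self, block, data):
        self.dram[block] = data
        if len(self.dram) > self.dram_slots:       # DRAM full: spill LRU to SSD
            old_block, old_data = self.dram.popitem(last=False)
            self.ssd[old_block] = old_data
            if len(self.ssd) > self.ssd_slots:     # SSD full: drop its oldest
                self.ssd.popitem(last=False)
```

Scaled up, that is the structure behind the numbers Cancilla cites: a far larger share of the working set stays on fast media before a read has to touch disk.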

IBM offers SSD drives and automatic tiering software as an option for other storage systems, but this is the first SSD option for XIV. XIV systems include one tier – either SATA or high-capacity SAS drives. Although IBM’s caching option is limited to one of its products, the concept is similar to what EMC is doing throughout its storage array lineup with VFCache. It is speeding read performance for data that needs it while passing writes through to the array.

“It’s like Project Lightning, but in the array,” Silverton Consulting president Ray Lucchesi said. “It’s a similar type of functionality. The differences are IBM is using SSD instead of a PCIe card and it’s at the storage instead of the server. But all the reads go to cache and the writes get destaged to the array.”

IBM also added a mirroring capability to XIV, so customers can replicate data between Gen2 and Gen3 XIV systems.


February 13, 2012  5:18 PM

Is Starboard Storage a startup or Reldata 2.0?

Dave Raffo

Starboard Storage Systems launched today, portraying itself as a brand new startup with a new technology and architecture for unified storage. But Starboard is in many ways a re-launch of Reldata, which had been selling multiprotocol storage for years.

Starboard didn’t volunteer information about its Reldata roots, although representatives freely admitted it when asked. With its new AC72 storage system, Starboard wants to appear as a fresh, shiny company rather than one that has been around the block many times without making much of an impact on the storage world.

“It’s not a rebranding of Reldata but Starboard is not your typical startup,” said Starboard chief marketing officer Karl Chen, who joined the company after it became Starboard. “[Reldata] had great technology, so why not absorb Reldata and reduce our time to market? This way, we were able to get to market a lot faster by leveraging what Reldata had. We had the option of starting brand new or taking something that would accelerate our time to market.”

Starboard has the same CEO, CTO, engineering VP and sales chief as Reldata, and has not yet raised any new funding. Starboard’s 30 employees are a mix of Reldata holdovers and new hires. The AC72 includes Reldata intellectual property and was developed in part by Reldata engineers.

“Absolutely, there is technology that we are leveraging from Reldata to build the Starboard Storage product,” Chen said. But he points out that the Starboard product is a new architecture with a different code base. The Reldata 9240i did not support Fibre Channel; it was a single-controller system and used traditional RAID blocks. There was no dynamic pooling or SSD tier. “It’s a completely different product from what (Reldata) was selling,” Chen added.

The company also moved from Parsippany, NJ to Broomfield, Colo., which has a deep workforce with storage experience. Starboard CEO Victor Walker, CTO (and Reldata founder) Kirill Malkin, VP of engineering John Potochnik and director of sales Russell Wine were all part of Reldata. They are joined by chairman Bill Chambers, the LeftHand Networks founder and CEO who sold the iSCSI SAN company to HP for $360 million in 2008.

Starboard will continue to service Reldata 9240i systems, but will no longer sell the Reldata line.

(Sonia R. Lelii contributed to this blog).


February 10, 2012  10:04 AM

Red Hat brings GlusterFS to Amazon cloud

Dave Raffo

Red Hat has been tweaking and expanding the NAS storage products it acquired from Gluster last October. This week Red Hat brought GlusterFS to the cloud with an appliance for Amazon Web Services (AWS).

Last December, Red Hat released a Storage Software Appliance (SSA) that Gluster sold before the acquisition. Red Hat replaced the CentOS operating system Gluster used with Red Hat Enterprise Linux (RHEL). This week’s release — Red Hat Virtual Storage Appliance (VSA) for AWS — is a version of the SSA that lets customers deploy NAS inside the cloud.

The VSA is POSIX-compliant, so — unlike with object-based storage — applications don’t need to be modified to move to the cloud.

“The SSA product is on-premise storage,” Red Hat storage product manager Tom Trainer said. “This is the other side of the coin. The VSA deploys within Amazon Web Services with no on-premise storage.”

The VSA lets customers aggregate Amazon Elastic Block Storage (EBS) volumes attached to Elastic Compute Cloud (EC2) instances into a virtual storage pool.

Trainer said Red Hat takes a different approach to putting file data in the cloud than cloud gateway vendors such as Nasuni and Panzura.

“They built an appliance that sits in the data center, captures files and puts them in an object format and you ship objects out to Amazon,” he said. “We said ‘that’s one way to do it.’ But the real problem has been having to modify your applications to run in the cloud because cloud storage has been built around object storage. If we could take two Amazon EC2 instances and attach EBS on the back end, we could build a NAS file server appliance right in the cloud. Users can take POSIX applications from their data center and install them on EC2 instances. They can take applications they had been running in the data center and run them in the cloud.”
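
As a rough sketch of the approach Trainer describes, the commands below (wrapped in Python for readability) join two EC2 instances into a Gluster trusted pool and create a replicated volume from one EBS-backed brick on each. The host names, brick path and volume name are placeholders, and this is a simplified outline rather than Red Hat's documented procedure.

```python
import subprocess

# Hypothetical EC2 instances with EBS volumes mounted as Gluster bricks.
NODES = ["ec2-node-1", "ec2-node-2"]   # placeholder host names
BRICK_PATH = "/ebs/brick1"             # placeholder EBS mount point
VOLUME = "vsa-pool"                    # placeholder volume name


def run(cmd):
    """Run a gluster CLI command and echo it for clarity."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def build_pool():
    # Join the second node to the trusted storage pool (run from node 1).
    run(["gluster", "peer", "probe", NODES[1]])

    # Create a two-way replicated volume from one EBS-backed brick per node.
    bricks = [f"{node}:{BRICK_PATH}" for node in NODES]
    run(["gluster", "volume", "create", VOLUME, "replica", "2"] + bricks)

    # Start the volume; clients can then mount it as ordinary NAS storage.
    run(["gluster", "volume", "start", VOLUME])


if __name__ == "__main__":
    build_pool()
```

Clients would then mount the volume and see ordinary POSIX file semantics, which is the point Trainer makes about not having to rewrite applications for object storage.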

Red Hat prices the VSA at $75 per node (EC2 instance). Customers must also pay Amazon for its cloud service.

Trainer said Red Hat plans to support other cloud providers, and customers would be able to copy files via CIFS if they wanted to move from one provider to another. But Amazon is the only provider currently supported for the Red Hat VSA.


February 9, 2012  8:36 AM

IBM puts SSD cache in XIV

Dave Raffo

EMC rolled out its VFCache (“Project Lightning”) server-side flash caching product to great fanfare this week. IBM made a quieter launch, adding a solid-state drive (SSD) caching option to its XIV storage system.

IBM’s XIV Gen3 now includes an option for up to 6 TB of fast-read cache for hot data. IBM offers SSD drives and automatic tiering software as an option for other storage systems, but this is the first SSD option for XIV. XIV systems include one tier – either SATA or high-capacity SAS drives.

Although IBM’s caching option is limited to one of its products, the concept is similar to what EMC is doing throughout its storage array lineup with VFCache. It is speeding read performance for data that needs it while passing writes through to the array.

“It’s like Project Lightning, but in the array,” Silverton Consulting president Ray Lucchesi said. “It’s a similar type of functionality. The differences are IBM is using SSD instead of a PCIe card and it’s at the storage instead of the server. But all the reads go to cache and the writes get destaged to the array.”

The XIV SSD cache is also similar to what NetApp does with its FlashCache, a product that IBM sells through an OEM deal with NetApp. IBM also sells Fusion-io PCIe cards on its servers. EMC has also been selling SSDs in storage arrays since 2008. So flash is showing up in enterprise storage systems in many ways, and those options will keep expanding.

“As SSDs become more price performant, customers are putting them in for workloads that require quick response times,” said Steve Wojtowecz, IBM’s VP of storage software. “We’re seeing real-time data retrievals, database lookups, catalog files, and hot data going to SSDs and colder data going to cheaper devices.”

The other big enhancement in XIV Gen3 is the ability to mirror data between current XIV systems and previous versions of the platform. That is most helpful for migrating data from older to newer arrays, although IBM is also pushing it as a way to use XIV for disaster recovery.


February 8, 2012  10:59 AM

Beware of IT inertia

Randy Kerns

In information technology, the part of Newton’s laws of motion regarding inertia, where a body at rest tends to stay at rest, often applies. In IT, this law means changes are difficult to make because change is often resisted. Resistance to change means missed opportunities to integrate new technologies, improve processes and become more effective.

Management can rationalize the reasons for resistance by making excuses. That does not mean these reasons are correct; it just puts perspective on why choices are made that defy logic when considered across the entirety of IT management.

I’ve heard these reasons for not making a strategic change:

Avoidance of risk. Making a change to introduce a new technology introduces risk of some type. The risk is really about potential failure and the implications of that failure on the organization and the people making the decision.

Inability to schedule the time to implement a new technology or process. “We don’t have time to do this” is a common explanation for not doing something. The conversation goes into limited budgets for staffing and not enough people to take on the extra work. The advantages that might be gained by the implementation are typically dismissed out of hand.

Limited budget to invest in the technology. The blame for this is usually placed on “executive management” or “the business” making choices that would limit IT’s ability to invest in new technology.

IT decision makers want to wait until other organizations have proven the technology works. There usually is some justification here because we all know of past products and technologies that did not last for an extended period of time. Organizations want assurance that they are not investing in a transient technology. In IT, the time expectation for something new is perceived to be 10 years.

Complexity introduced into IT over time increases risk and makes change more difficult. Over time, the effort required to work around that accumulated complexity grows, causing greater resistance to change in the future.

The case against resisting new technologies or new procedures in IT is easy to make. Many new technologies can bring value to organizations that properly implement them. These include tiered storage systems with solid-state drive (SSD) technology, storage virtualization, scale-out NAS storage, data reduction technologies, IT as a Service, and ‘big data’ analytics. Failing to move forward with technologies and processes that will have staying power means missing economic advantages. The lack of advances in IT can have a parallel effect on IT leadership.

Changes will have to be made eventually, and they may be more costly the longer they are put off. I recently heard about an organization looking to replace a data center because it was more than eight years old. The justification was that the efficiency gain of a new data center was worth the financial investment. That might be true statistically, but it does not seem to be an intelligent overall investment. Evolving through the introduction of new technologies, improvements to procedures and education of IT personnel has to be a better answer than a complete discard and restart.

But if the barriers – the arguments given to avoid introducing technology or change – are overwhelming, doing nothing becomes the path of least resistance, even though it inhibits greater efficiency. The force on a body at rest – the inertia of IT against doing something new – may not be enough to start the motion forward. Education on the technologies and their economic advantages needs to supply the net force to move IT forward.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


February 7, 2012  9:09 AM

EMC’s Gelsinger: We don’t want to sell servers

Dave Raffo

EMC executives say the storage vendor’s VFCache strategy is to work with server vendors, not to take their business.

During Monday’s VFCache launch, EMC president of information infrastructure products Pat Gelsinger said EMC’s move into server-side flash does not mean it has designs on becoming a server company. It only wants to sell the flash that goes into servers.

“From our side, this is truly cooperation [with server vendors],” Gelsinger said. “We’re not competing with them. There is no coopetition. This is just another card that goes into the server. We’re not in the server business. We’re extending the storage array on the server side and bringing the I/O stack into the server. We’re not going into the server.”

Gelsinger said the VFCache PCIe card is certified to run on servers from Cisco, Dell, Hewlett-Packard and IBM. There are no reseller or OEM deals with the server vendors for VFCache, although Gelsinger said there may be in the future.

Except for EMC’s close ally Cisco, the other three server vendors also sell storage. It will be interesting to see how they react to VFCache. But this isn’t the first time EMC has extended its technology into the server without actually selling servers. As the parent company of VMware, EMC is already a major player in server technology.

So even if EMC doesn’t want to sell servers, it wants a front-row seat to view the server world from.

“The biggest vulnerability EMC has in competing with the IBMs, HPs and Dells of the world is those other guys have access to the entire stack because they sell the servers and everything in between,” said Arun Taneja, founder of the Taneja Group analyst firm. “VMware gave EMC leverage to the server side and put the rest of the industry on notice – if you want to compete you have to buy stuff from EMC.”

David Flynn, CEO of EMC’s largest server-side flash competitor Fusion-io, maintains that EMC is trying to extend its vendor lock-in with VFCache. He wonders why EMC doesn’t just sell its management software and let customers pick their own PCIe cards to place in the server.

While it plays nice with all the top server vendors with VFCache, EMC has made it clear that Micron is its favorite PCIe flash partner. Gelsinger – who asked for a moment of silence Monday for Micron CEO Steve Appleton, who was killed in a plane crash last week – emphasized that Micron is EMC’s preferred partner for VFCache although he acknowledged LSI is also a partner in a multi-vendor arrangement.

“Micron has extraordinary I/O performance,” Gelsinger said. “This is the best technology in the industry for PCIe flash.”


February 3, 2012  5:03 PM

Cloud storage customer experiences painless migration across providers

Sonia Lelii

Data migration can be a nightmare for any company, so imagine what an IT manager feels like when his cloud storage vendor tells him, “Hey, we are planning to move about one terabyte of your data from one cloud provider to another and, we promise, you won’t experience any downtime.”

True, that is supposed to be one of the key attributes of the cloud. The storage services provider or cloud provider takes on all work and responsibility associated with data migration, and the user isn’t supposed to notice a hiccup. That’s what the IT manager at a California energy company experienced last year when the company’s storage services vendor, Nasuni Corp., moved 1 TB of its primary storage from cloud provider Rackspace to Amazon S3. The project took about six weeks and was completed in January of this year.

“Initially, I was concerned,” said the manager, who asked not to be identified because his company does not allow him to talk to the media. “The data we had in Rackspace was our working data, so it was our only copy. I was concerned about how it would work. I thought for sure I would feel a glitch here and there, but I did not.”

He said Nasuni made a full copy of the data, then replicated the changes to keep the Amazon copy current so the data could be switched later. Nasuni basically set up a system in Rackspace, and sent data copies and version history from Rackspace’s cloud to Amazon. The customer’s network was not used during the process. The energy company now has 9 TB from two data centers on Amazon S3 — 6 TB of primary production data and 3 TB of historical production backup data.
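
The pattern described here (seed a full copy, replicate the deltas until the target is current, then cut over) is generic enough to sketch. The dict-like stores and the changed_since and cutover callables below are hypothetical stand-ins, not Nasuni's internals.

```python
import time


def migrate(source, target, changed_since, cutover):
    """Seed a full copy, catch up with incremental passes, then switch over.

    `source` and `target` are assumed to be dict-like object stores keyed by
    name; `changed_since(timestamp)` returns names modified after that time;
    `cutover()` repoints clients at the target. All are hypothetical stand-ins.
    """
    start = time.time()

    # 1. Seed: full copy of every object from the old provider to the new one.
    for name, data in source.items():
        target[name] = data

    # 2. Catch up: copy whatever changed while the seed (or the previous pass)
    #    was running, repeating until a pass finds nothing new.
    last_pass = start
    while True:
        this_pass = time.time()
        changed = changed_since(last_pass)
        if not changed:
            break
        for name in changed:
            target[name] = source[name]
        last_pass = this_pass

    # 3. Cut over: clients start reading and writing against the new provider.
    cutover()
```

The repeated catch-up passes are what let the switch happen without the customer noticing a glitch: by the time of cutover, the remaining delta is negligible.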

Rob Mason, Nasuni founder and president, said the energy company’s data migration was part of Nasuni’s larger project to concentrate all of its customers’ data on either Amazon S3 or Microsoft Azure because its stress testing of 16 cloud providers showed those two providers could meet Nasuni’s SLA guarantees of 100% availability, protection and reliability. Previously, Nasuni had 85% of its customers’ data residing in Amazon with the rest spread out among about six other cloud providers. Rackspace held 10% of Nasuni’s customer data.

“We couldn’t offer our SLAs on Rackspace,” Mason said. “All our customers on our new service are either on Amazon or Azure now. For customers who wish to move from our older gateway product to the new service, which includes SLAs, if they are not already on Amazon or Azure, we will move them to one of those two providers as part of the upgrade.”

For the energy company, the goal is to have all of its data — about 15 TB — eventually residing in the cloud. “It was a constant struggle for more disk space,” the IT manager said. “And, my God, the RAID failures. It’s not supposed to fail but it did.”


January 30, 2012  8:38 AM

Storage efficiency and data center optimization

Randy Kerns

Optimizing the data center is a major initiative for most IT operations. Optimization includes using resources more effectively, adding more efficient systems with greater capabilities and consolidating systems using virtualization and advanced technologies.

The goals for optimization are reducing cost and increasing operational efficiency. Capital cost savings come from making more effective use of what has been purchased, and operational cost savings come from reducing administration and physical resources such as space, power, and cooling. Optimized operations make IT staffs more capable of addressing the demands of business expansion or consolidation.

Along with server virtualization, storage efficiency is a major focus area for data center optimization (DCO) initiatives because of the opportunity for major savings. This Evaluator Group article provides an IT perspective on measuring efficiency. Storage efficiency can be accomplished in the following ways:

• Making greater use of storage capacity through data reduction technologies (compression and deduplication) and allocation of capacity as needed (thin provisioning); a rough deduplication sketch follows this list.
• Supporting more physical capacity for a storage controller by enabling greater performance from the controller.
• Increasing performance and responsiveness of a storage system with storage tiering and intelligent caching using solid-state technology.
• Improving data protection with advanced snapshot and replication technologies and data reduction prior to transferring data.
• Scaling of capacity and performance in equal proportion (scale out) to support greater consolidation and growth.
• Providing greater automation to minimize administrative requirements.
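
As a rough illustration of the data reduction item above, the sketch below deduplicates fixed-size chunks by content hash, storing each unique chunk once and keeping only references for repeats. The chunk size and class names are assumptions for illustration, not any particular vendor's design.

```python
import hashlib

CHUNK_SIZE = 4096  # assumed fixed chunk size for illustration


class DedupStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self):
        self.chunks = {}   # sha256 digest -> chunk bytes
        self.files = {}    # file name -> ordered list of digests

    def put(self, name, data):
        digests = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # store unique chunks only
            digests.append(digest)
        self.files[name] = digests

    def get(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])

    def logical_bytes(self):
        return sum(len(self.chunks[d]) for ds in self.files.values() for d in ds)

    def physical_bytes(self):
        return sum(len(c) for c in self.chunks.values())


if __name__ == "__main__":
    store = DedupStore()
    store.put("a.bin", b"x" * 8192)   # two identical chunks
    store.put("b.bin", b"x" * 8192)   # duplicates of the same chunks
    print(store.logical_bytes(), "logical bytes vs", store.physical_bytes(), "physical")
```

The gap between the logical and physical byte counts is the capacity that deduplication gives back; compression and thin provisioning attack the same gap from different angles.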

DCO requires a strong overall strategy. Storage has a regular cadence of technology transition and product replacement, and DCO requires adding products and upgrading systems already in place. Evaluating the best product to meet requirements is a major part of the execution of the plan. There are many complex factors to consider and the decisions are not straightforward.

As DCO initiatives continue, storage efficiency will remain a competitive battleground for vendors and an opportunity for customers.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


January 27, 2012  8:20 AM

Quantum closing in on cloud backup

Dave Raffo

Quantum over the coming months will offer cloud backup through a combination of its DXi deduplication appliances, vmPro virtual backup and … well, that’s all we know so far.

Quantum CEO Jon Gacek teased what he called the “cloud offering” several times during the backup vendor’s earnings call this week but didn’t go deep into details beyond “our vmPro technology, along with our deduplication technology, is the basis of a cloud-based data protection offering that we will be introducing in the coming months.” In an interview after the call, he let on that the DXi would provide the backup, and there will likely be a service provider partner.

“We’ll probably launch with a partner first and go from there,” Gacek said.

Last October, Quantum revealed plans to let SMB customers replicate data to the cloud from a new Windows-based NAS product. But that’s apparently not the same as what Gacek talked about this week. The SMB replication uses Datastor Shield software, which is different from the DXi software.

