Storage Soup


March 30, 2011  3:26 PM

Establishing data relevance can help archiving strategy

Randy Kerns

As highlighted in many reports, the massive amounts of data being created today cause concerns for information technology (IT) pros – especially those who manage storage.

These concerns involve how to process data, what to keep and where to put it. The issues include how to present the information, where to store the data, what the requirements for that data are, and how much it will cost to retain it. Most of the newly created data is in the form of files.

The massive amount of data being created is great news for storage vendors because it means that more storage is required. But storing all this newly created data may be unsustainable for organizations because of the cost required, as well as the physical space and power that storage systems use.

All of this data may require a new approach to storage and archiving. That approach involves establishing data relevance as part of the analytics performed when data is ingested, which implies analyzing the data immediately. For example, data received from a source (monitoring equipment, feedback data, etc.) would go straight into a data analytics process. The relevant source data would go to an archiving storage system while the analytic processing continued, and the valuable intermediate information would be retained on the analytic nodes or on a shared storage system.

The source data sent to the archive would be available for data mining or reprocessing if required. The archive system would handle the data protection process – a one-time protection for new data based on the requirements established for the business.

The processing for establishing data relevance could be performed by the analytics engine or as part of advanced functions in a storage system. The data relevance engine would move data to the most appropriate location based on a set of rules applied to the analysis. Some data could be retained on primary storage, but the majority would be stored directly on a more economical archiving system.
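
To make that concrete, here is a rough sketch (in Python) of how such a relevance engine could route each ingested item to a tier. This is an illustration only, not a description of any shipping product; the scoring rule, threshold and tier names are hypothetical.

```python
# Hypothetical sketch of a rule-based "data relevance" engine that routes
# newly ingested data either to primary storage or directly to an archive
# tier. The scoring rule, thresholds and tier names are illustrative only.

from dataclasses import dataclass

@dataclass
class IngestedItem:
    name: str
    source: str          # e.g. "monitoring", "feedback", "log"
    size_bytes: int
    expected_reads: int  # how often analytics expects to revisit the raw data

def relevance_score(item: IngestedItem) -> float:
    """Toy scoring rule: frequently revisited, smaller items score higher."""
    size_penalty = item.size_bytes / (1024 ** 3)   # penalize very large objects
    return item.expected_reads - size_penalty

def place(item: IngestedItem, threshold: float = 5.0) -> str:
    """Return the target tier for an item based on its relevance score."""
    if relevance_score(item) >= threshold:
        return "primary"       # kept close to the analytics nodes
    return "archive"           # the bulk of the source data goes here

if __name__ == "__main__":
    items = [
        IngestedItem("sensor-feed-0412", "monitoring", 50 * 1024 ** 3, 1),
        IngestedItem("daily-summary", "feedback", 200 * 1024 ** 2, 20),
    ]
    for i in items:
        print(i.name, "->", place(i))
```

The point is that the placement decision happens at ingest, as part of the analytics flow, rather than as a separate data management step later.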

This may not really be a new model, but it reduces the steps and time it takes to manage the data. Making a solution like this available for IT would have high economic value and immediate benefit in dealing with the massive amounts of data being created.

March 29, 2011  12:40 PM

Permabit aims its primary dedupe at low-end NAS market

Dave Raffo

Permabit is expanding its Albireo primary data deduplication application with the Albireo Virtual Data Optimizer (VDO), which can bring dedupe to Linux-based SMB NAS systems.

Albireo, launched by Permabit last year, lets storage vendors embed inline or post-process deduplication in their systems. OEM partners BlueArc, Xiotech, and LSI Engenio have signed on to use Albireo for enterprise storage systems. VDO is aimed at products on the other end of the storage market.

Permabit CTO Jered Floyd said the original Albireo is a software library that vendors can integrate with their file systems or block storage systems. It sits out of the data path, and lets the storage system control the data. VDO is a plug-in that sits in the data path between the file server and disk infrastructure. Floyd said that lets VDO add compression if its OEM partners choose to.

“It’s different from the Albireo deduplication library, because now we do own the data,” he said. “It’s not a concern for these OEMs, who are already depending on [open source] third party software for data placement and other capabilities.”
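
For readers unfamiliar with the underlying mechanics, the sketch below shows the general idea of block-level deduplication by content fingerprinting, which any dedupe engine sitting in the data path must do in some form. It is a generic simplification, not Permabit’s implementation; the fixed 4 KB block size, SHA-256 fingerprints and in-memory index are assumptions made for brevity.

```python
# Generic illustration of inline, block-level deduplication of the kind a
# plug-in in the data path could perform. This is NOT Permabit's
# implementation; block size, hash choice and the in-memory index are
# simplifying assumptions.

import hashlib

BLOCK_SIZE = 4096          # fixed-size blocks for simplicity

class DedupeStore:
    def __init__(self):
        self.index = {}    # fingerprint -> physical block id
        self.blocks = []   # simulated physical storage

    def write(self, data: bytes) -> list[int]:
        """Split data into blocks; store only blocks not seen before."""
        refs = []
        for off in range(0, len(data), BLOCK_SIZE):
            block = data[off:off + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.index:            # new, unique block: store it
                self.index[fp] = len(self.blocks)
                self.blocks.append(block)
            refs.append(self.index[fp])         # duplicate: just add a reference
        return refs

store = DedupeStore()
refs = store.write(b"A" * 8192 + b"B" * 4096)   # first two blocks are identical
print(refs, "unique blocks stored:", len(store.blocks))
```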

NAS vendors such as Overland Storage (with its Snap platform), Buffalo Technology, NetGear, and Cisco Linksys would be candidates for VDO. Permabit CEO Tom Cook said at least 25 vendors would fit the bill, and together they make up about 18% of the NAS market. VDO can also be used in block and unified storage systems.

“They could do their own development, but they have other priorities,” Cook said of these NAS vendors. “Their other option would be a standalone dedupe appliance that sits in front of their system, and that drives dedupe right out of the market from a price standpoint.”

He expects VDO to be available to OEMs in the second half of the year.

Cook also confirmed what had been a poorly kept secret in the storage industry – LSI is working to add Albireo to its SAN platform. And he said he doesn’t expect NetApp’s pending purchase of LSI’s Engenio storage division to change that. NetApp has its own primary dedupe for its FAS storage platform, and that dedupe is one of Albireo’s main competitors.

“The NetApp [Engenio] deal presents interesting business options for us,” Cook said. “I can’t comment beyond that.”


March 24, 2011  1:10 PM

DataDirect Networks ready to aim directly at NetApp NAS

Dave Raffo

When NetApp closes its $480 million acquisition of LSI’s Engenio storage division, it will move into head-to-head competition with high performance computing storage vendor DataDirect Networks in markets where NetApp barely plays today. And DDN will soon respond by moving into NetApp’s mainstream NAS space.

DDN is preparing to launch – probably next month – a NASScaler product that Jean-Luc Chatelain, DDN’s EVP of strategy and technology, said will be “aimed at the NetApp market” rather than HPC.

“It has standard IT NAS-type behavior,” Chatelain said. “We realized the demand for the density, bandwidth, capacity and performance that we used to see in specialty machines has migrated toward the traditional NAS market. It’s the standard NFS behavior on top of high performance computing.”

The NASScaler will be DDN’s fourth file storage system, joining its xStreamScaler for media and entertainment, GridScaler for cloud and HPC, and ExaScaler for supercomputing.

DDN bills itself as the largest private storage vendor, an assessment that IDC agrees with. DDN executives claim the vendor generated $180 million in revenue in 2010 and grew about 40% in 2009 and 2010. The vendor’s storage sells into what EMC calls “big data” markets, which are the same ones NetApp intends to chase with LSI Engenio. Those markets include HPC, media and entertainment, digital security, and as a platform for cloud providers.

It will take a while before DDN can provide NetApp with solid competition in mainstream NAS, but the vendors will contend for both end-user customers and OEM partners in the HPC space. The Engenio 7900 Storage System competes with DDN’s products, and is sold by OEMs including Cray, Teradata and SGI.

“It will be interesting to see what happens now,” Chatelain said. “NetApp is not focused on the domain where we play. NetApp is not a brand name in the world of high performance computing or rich media. We are known as people committed to those verticals.”


March 21, 2011  1:20 PM

Storage tiering draws interest, with good reason

Randy Kerns

Storage tiering has generated a good amount of interest lately, largely because of the performance improvements it can bring to organizations using solid state drives (SSDs) in their storage setups.

When it comes to tiering, it’s worth listening to what storage vendors have to say. There are actual benefits, and several approaches with real competitive differentiation.

Performance and cost are the big benefits of storage tiering. The goals are to balance the workload across the different storage elements and maximize the value returned from the investment.

Using SSD in a tiering approach provides improvements in latency and transfer rate. Intelligent tiering can maximize SSD usage as a resource, which in turn can reduce the number (and cost) of SSDs in a system.

If you’re considering adding tiers or changing your tiering strategy, you might first want to look at the available methods.

Storage systems have tiers usually consisting of several types of hard disk drives (HDDs) and perhaps SSDs. There may be two types of HDDs – a high RPM drive with less capacity for higher performance and a lower RPM HDD with higher capacity. Customers seem to be moving towards SSDs for performance and larger capacity HDDs as the tiers of choice. The movement of data between tiers is based on analysis of access patterns to data.

Most of the tiering systems being offered by vendors now move data at the sub-LUN level (early implementations only moved entire volumes/LUNs). The key implementation choices concern how “real-time” the analysis is and how automated the data movement is. Another consideration is the potential contention created when data movement consumes system resources that are also needed to satisfy host requests.
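
As a minimal sketch of the concept, assume a simplified model in which each sub-LUN extent accumulates an access count over a measurement window and a background task promotes or demotes extents against fixed thresholds. The extent size, counters and thresholds below are hypothetical, not any vendor’s algorithm.

```python
# Minimal sketch of sub-LUN tiering based on access patterns. Extent size,
# the heat counter and the promotion/demotion thresholds are hypothetical
# simplifications, not any vendor's algorithm.

from collections import defaultdict

EXTENT_MB = 256            # sub-LUN granularity: data moves in extents, not whole LUNs

class TieringEngine:
    def __init__(self, promote_after=100, demote_below=10):
        self.heat = defaultdict(int)             # extent id -> access count in window
        self.tier = defaultdict(lambda: "hdd")   # every extent starts on HDD
        self.promote_after = promote_after
        self.demote_below = demote_below

    def record_io(self, extent_id: int):
        self.heat[extent_id] += 1

    def rebalance(self):
        """Run in the background: promote hot extents to SSD, demote cold ones."""
        for extent_id, count in self.heat.items():
            if count >= self.promote_after:
                self.tier[extent_id] = "ssd"
            elif count < self.demote_below:
                self.tier[extent_id] = "hdd"
        self.heat.clear()                        # start a new measurement window

engine = TieringEngine()
for _ in range(150):
    engine.record_io(7)                          # extent 7 is hot
engine.record_io(42)                             # extent 42 is barely touched
engine.rebalance()
print(engine.tier[7], engine.tier[42])           # -> ssd hdd
```

In a real array, the rebalance step is where the contention question arises: the copies between tiers consume back-end resources that host I/O also needs.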

Another tiering method is intelligent placement of sub-LUN segments of data combined with caching tiers. This approach uses analysis of data access patterns to determine where to place the data, and uses cache tiers to manage performance. The goal is to provide performance based on data access characteristics without introducing additional resource consumption for data movement.

There are also hybrid approaches in which intelligent placement and caching are combined with limited background movement. These approaches try to strike a balance, using the intelligence of the tiering to control resource usage while still capturing the potential performance gains.

The Evaluator Group has a review of major vendors’ automated tiering systems.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


March 18, 2011  8:31 PM

Nirvanix says cloud can ease data concerns in Japan

Dave Raffo

Cloud proponents look at the disasters in Japan and say even the most horrific incidents don’t have to cause people to lose data.

By cloud proponents, I mean people who sell cloud products and services. Cloud storage vendor Nirvanix has one of its five worldwide data centers in Japan. Although the Nirvanix data center is 200 miles from the area hit by the earthquake, tsunami and potential nuclear meltdown, the company has offered to move any customer data from the Japan site to one of its other data centers for free. Nirvanix also has facilities in Los Angeles, New York, Dallas, and Frankfurt, Germany.

Several of Nirvanix’s approximately 50 customers in Japan have taken the vendor up on its offer, Nirvanix CEO Scott Genereux said. But he said the offer was made mostly to give customers peace of mind. He said the data center has its own power and shouldn’t be affected by rolling blackouts expected in Japan. And even if the building were wiped out, all customer data could be restored at another site.

“Without having to touch tape filers, aging NAS filers onsite or a physical RAID box on the floor, customers can move their data,” he said. “They don’t have to wait for a repair specialist to come and add drives.

“This is the power of the public cloud. Customers can select the location of their data. You can’t do that with a private cloud. That’s just isolated storage.”

Of course, companies using private clouds or no clouds can also migrate data offsite. That’s one of the keys to good disaster recovery – getting data to a safe location in case your main location becomes unavailable. And the cloud can be a simple way to do that.

Dick Mulvihill, managing partner of Chicago-based data protection cloud provider HexiSTOR, said major disasters have eased a lot of his customers’ fears about trusting the management of their data to a separate party.

“The whole mindset of worry about data going off site to the cloud has changed,” he said. “It’s becoming a more popular option because of the simplicity and automation of the cloud. For dealing with natural and man-made disasters, using the cloud is good governance.”


March 18, 2011  12:21 PM

Dell set to roll out NAS, dedupe across storage platform

Dave Raffo

Now that Dell has closed on its Compellent acquisition, the next steps in its storage product expansion will be to add Exanet scalable NAS and Ocarina primary data deduplication to its two main platforms.

The $820 million Compellent acquisition gives Dell a second SAN product to go with its 2008 EqualLogic iSCSI SAN buy. Dell executives say the smaller 2010 acquisitions of Exanet and Ocarina will complement those SAN purchases.

Dell is close to completing the integration of Exanet’s clustered NAS file system with EqualLogic and entry-level PowerVault storage. Those products will launch at mid-year, probably at the inaugural Dell Storage Forum conference in June. Later this year, Dell expects to add Ocarina’s content-aware compression and data deduplication into the Exanet file system.

Longer range plans – probably in 2012 — include integrating the Exanet file system with Compellent storage, developing dedupe for block storage, and coming up with a common management application for EqualLogic, Compellent and Exanet.

“Exanet and Ocarina will become ubiquitous technologies across the two platforms,” said Travis Vigil, Dell’s executive director of product marketing for enterprise storage. “We’ll have a file system that will provide unified storage for EqualLogic, PowerVault and Compellent. Once you get that common file system, we can take things that EqualLogic and Compellent have done with automated tiering and load balancing within the array and do that across the storage environment.”

Vigil admitted there is overlap between EqualLogic – which plays in the low to middle of the midrange – and Compellent, which sells into the middle and higher end of the midrange. The plan is to extend Compellent higher into the enterprise to fill some of the gap left when Hewlett-Packard outbid Dell for 3PAR last year. “If you don’t have overlap, you have a gap,” Vigil said.

Vigil said Dell will not sell the Ocarina Storage Optimizer dedupe appliances that Ocarina sold before the acquisition, but will support customers who bought the products.

Dell also will continue to OEM EMC Clariion SAN, Celerra NAS and the smaller Data Domain backup devices for the current lifecycle of those products, but Vigil said “our focus is on Dell IP and the fluid data architecture.” Dell will resell but not OEM the EMC VNX unified storage systems that will eventually replace the Clariion and Celerra families.


March 16, 2011  8:36 PM

IBM Pulse shows importance of storage in IT

Randy Kerns


I’m always looking at how storage is perceived by the IT industry and its customers. It’s obvious to me that storage is critical for successful IT, but that understanding seems to wax and wane with others. One positive indicator of the importance of storage was the IBM Pulse event a few weeks ago.


IBM Pulse focuses on service management, mostly software solutions that provide services for IT. These services involve the data center, security, facilities, and asset management. Content was divided into six streams, and each stream was divided into tracks. That gave attendees the opportunity to check out a succession of sessions on topical areas, and it also made it easy to gauge the coverage of storage at this service management conference.


I was impressed by the quantity of storage-related presentations and the level of information provided. One entire track in the Service Management for the Data Center stream was dedicated to storage management. Another stream covered Security and Compliance Management and touched on storage. Product presentations were given at the level of what a systems administrator or an IT manager would need to know.


In addition to the speakers on storage topics, there were exhibits and labs for discussions and demonstrations. I had interesting and lively discussions with technical people at these events.


The importance of storage is clear among all vendors today. IBM deals with the entire scope of IT, yet a conference that is not perceived to be about storage covered a great deal of storage management software and storage systems. IBM’s GM of storage Doug Balog and vice president of storage marketing Dan Galvan were also at the event.

This shows that storage is a prime area of focus for IT vendors – so much so that the discussions cross many areas that may not have included storage in the past. It also shows that storage is not only a necessary piece of IT, but a critical one.



March 15, 2011  8:17 PM

Better MLCs should help enterprise SSD adoption

Dave Raffo

When it became evident that solid state drives (SSDs) were still moving at a trickle into the enterprise about a year ago, industry experts said it would take two things to push their adoption. One was automatic tiering software and the other was enterprise-grade multi-level cell (MLC) flash that would bring down the price.

Now most storage array vendors have automated tiering software. And with Seagate’s launch of its Pulsar SSDs today, several of the leading drive makers have MLC devices to go with the more expensive single-level cell (SLC) drives on the market. So will SSDs now ramp quickly in the enterprise?

Probably not, although their use will certainly increase at a greater rate than it has been.

Gartner analyst Joseph Unsworth said the enterprise SSD market went from around 320,000 units and $485 million in sales in 2009 to over 1 million units and over $1 billion in sales last year. He forecasts 9.4 million units and $4.2 billion in sales by 2015.
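
For context, those figures imply that enterprise SSD revenue would need to grow at roughly a third per year. A quick back-of-the-envelope check on the quoted numbers:

```python
# Back-of-the-envelope arithmetic on the Gartner figures quoted above:
# roughly $1 billion in 2010 growing to a forecast $4.2 billion in 2015.
revenue_2010, revenue_2015, years = 1.0e9, 4.2e9, 5
cagr = (revenue_2015 / revenue_2010) ** (1 / years) - 1
print(f"implied growth: {cagr:.0%} per year")   # about 33% per year
```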

Seagate’s internal projections see SSDs catching up to 3.5-inch hard drives in the enterprise in units shipped in 2013, but not making much of a dent in 2.5-inch hard drives.

“The enterprise doesn’t turn on a dime,” Unsworth said. “There are long qualifications and there’s still confusion from end users. They know SSDs are important, but traditional data center professionals aren’t saying ‘We have to have this.’ They’re deployed in application workloads where they make the most sense, like search, social media, data warehousing and streaming video.”

MLC drives are cheaper than SLC drives, but they wear out a lot faster. Until recently they were limited to consumer devices, but they are working their way into the enterprise. IBM said it will ship STEC MLC drives on midrange and enterprise storage arrays, and Seagate’s MLC drives will likely find their way into arrays from EMC and other array vendors.

Still, Unsworth added that pricing is decreasing at a slower rate for SSDs in storage systems than in servers.

He said price erosion on the server side is forecast at about 34% over the next five years, with storage expected to drop about 30%. “It will take more MLC, more [product] competition and more scale to bring prices down,” he said.


March 10, 2011  3:37 PM

Non-compliance = big fines, bad rep

Randy Kerns

The Department of Health and Human Services has levied a hefty fine of $4.3 million against Maryland health care provider Cignet Health for HIPAA violations.

This is a significant event for institutions that deal with information governed by regulations for storing and managing records. Reports stating that this is the first enforcement of the HIPAA regulations are inaccurate, but it is the first enforcement since the more stringent HITECH Act was passed. Previous enforcement actions involved regional hospitals and did not receive significant publicity.

So why did the Department of Health and Human Services strike now? HHS is being punitive with the fine and public notification because of what seems like willful disregard for protecting information. The HHS said Cignet refused to provide 41 patients with copies of their medical records and failed to respond to repeated requests from the HHS Office of Civil Rights.

But the fine also sends a clear message to other healthcare organizations to comply or face fines and — more importantly — public embarrassment.

As a quick review, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 impose requirements on access control, breach notification, and storage of information. Evaluator Group articles about meeting HIPAA compliance requirements are at www.evaluatorgroup.com.

The fine against Cignet reminds me of a conversation I had with the CIO and other senior management of a regional hospital about 18 months ago. We spoke about the archiving requirements for Electronic Medical Records (EMR) and the different retention requirements based on that type of information.

After discussing the retention requirements and the need for using storage systems that met compliance requirements that would pass an audit, the CIO said the hospital was storing all of its data on standard disk systems. When asked about meeting compliance requirements, he said he was not concerned.

He explained that the public depended on this regional hospital. If it was audited due to some complaint or had a loss of data, the public could not do without it and would have to support it. He said his budget did not allow for taking the proper measures for storing data to comply with regulations.

That was an interesting discussion. He was admitting the hospital knowingly violated the regulations regarding the privacy of data but was unwilling to even consider doing something about it. Aside from being appalled, I thought the arrogance would cause an even greater impact when an incident occurred.

Maybe for some institutions a $4.3 million fine is not a major impact, but for most it would be. I would think it would be tough to put on a budget line item.

But the damage to the institution goes beyond the impact on its budget. The bad publicity can harm its reputation and affect its support over the long term. For the healthcare information professional, the peer group will be aware of the failings. Not only will this cause the institution and its staff to be held in low regard, it may also affect future employment opportunities.

The media, customers and the Department of Health and Human Services all have long memories. Any other type of incident will cause the lack of privacy protection to be brought up repeatedly. While a fine is a one-time event, the bad reputation may be permanent.


March 9, 2011  10:46 PM

NetApp bags LSI’s Engenio storage group for $480 million

Dave Raffo

NetApp CEO Tom Georgens moved to acquire his old company today, announcing that NetApp will buy LSI Corp.’s Engenio storage business for $480 million.


No storage acquisition should be considered a surprise these days, and there had been rumblings for months that LSI was looking to sell off Engenio. But the acquirer and the price were a little unexpected. NetApp had shied away from big acquisitions since EMC outbid it for Data Domain in the middle of 2009, and it already has a storage platform that stretches from the low end through the enterprise. And $480 million seems like a low price in the wake of Hewlett-Packard’s $2.35 billion acquisition of 3PAR, EMC’s $2.25 billion pickup of Isilon and Dell’s $820 million acquisition of Compellent in the last six months.


The deal is expected to close in about 60 days.


NetApp’s management team certainly knows what it is getting. Georgens was Engenio’s CEO for two years before joining NetApp in 2007, and NetApp chief strategy officer Vic Mahadevan went from LSI to NetApp last year.


“At first I was surprised, but recognizing Tom had come from [Engenio], it started making more sense,” Evaluator Group analyst Randy Kerns said. “It makes sense when you consider NetApp got a reasonable price, new market opportunities and OEMs it didn’t have before.”


In a conference call today to discuss the deal, Georgens said Engenio’s systems will allow NetApp to go after different workloads than its FAS line. Those workloads include video capture, video surveillance, genomic sequencing and scientific research that Georgens characterized as bandwidth intensive.


“There are workloads we’re not going to service with [NetApp] Data ONTAP,” he said. “This is targeted at workloads that are separate and distinct from ONTAP.”


The deal also should strengthen NetApp’s relationship with IBM. LSI sells its Engenio storage exclusively through OEMs, and IBM sells more LSI storage than any other vendor. IBM also sells NetApp’s FAS platform. LSI’s other OEM partners include Oracle, Teradata and Dell. Georgens said NetApp will also sell LSI systems through NetApp channels.


One thing he wants to avoid is setting up a competitive situation between FAS and LSI’s platforms. NetApp executives say having one unified platform sets the company apart from its major competitors, who sell different platforms for the midrange and enterprise and for NAS and SAN customers.


Georgens said NetApp’s new portfolio will be different than EMC’s situation with its midrange Clariion and enterprise Symmetrix platforms.


“The problem with Symmetrix and Clariion is, the target markets are overlapping,” he said. “They all have replication, snapshots and other things in parallel. This [LSI] is not our SAN product. We have a SAN product called FAS. This is targeted at workloads where we’re not going to sell FAS.”


For more on this deal, see SearchStorage.com


