Storage Soup


October 17, 2011  12:56 PM

Storage tiering and caching bring different values, costs

Randy Kerns

We hear a lot these days about tiering and caching in storage systems. These are not the same thing. Some systems implement tiering across types of media, while others cache data into a solid-state device as transient storage. Other storage systems have both capabilities.

IT professionals may wonder what the differences between tiering and caching are, and whether they need to tier or cache data. There are clear differences, but the performance implications between the approaches vary primarily based on the specific storage system implementation.

Tiering storage systems use different devices such as solid-state devices, high-performance disks, and high-capacity disks. Each of these device types makes up a tier. The systems intelligently move data between the tiers based on patterns of access — a process known as automated tiering.

Tiering greatly increases the overall system performance, with access to the most active data coming from the highest performance devices. The higher performance allows the systems to support more demanding applications. Tiering also lets an organization get by with smaller amounts of the most expensive types of storage by moving less frequently accessed data to cheaper drives.
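
To make the mechanics concrete, here is a minimal sketch of an access-frequency tiering policy. It is illustrative only, not any vendor's actual algorithm; the tier names, extent granularity and thresholds are assumptions.

```python
# Minimal sketch of an access-frequency tiering policy (illustrative only,
# not any vendor's actual algorithm). Extents are promoted toward faster
# tiers when they are hot and demoted when they go cold.
from collections import defaultdict

TIERS = ["ssd", "fast_disk", "capacity_disk"]  # fastest to slowest
PROMOTE_THRESHOLD = 100   # accesses per interval (assumed value)
DEMOTE_THRESHOLD = 5      # assumed value

class TieringEngine:
    def __init__(self):
        self.tier_of = {}                      # extent id -> current tier
        self.access_count = defaultdict(int)   # accesses in current interval

    def record_access(self, extent):
        self.access_count[extent] += 1
        self.tier_of.setdefault(extent, "capacity_disk")

    def rebalance(self):
        """Run at the end of each interval; move extents between tiers."""
        for extent, count in self.access_count.items():
            idx = TIERS.index(self.tier_of[extent])
            if count >= PROMOTE_THRESHOLD and idx > 0:
                self.tier_of[extent] = TIERS[idx - 1]   # promote
            elif count <= DEMOTE_THRESHOLD and idx < len(TIERS) - 1:
                self.tier_of[extent] = TIERS[idx + 1]   # demote
        self.access_count.clear()
```

Real implementations track much finer-grained statistics and throttle migrations so data movement does not compete with production I/O.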

Caching systems use memory or solid-state devices to hold copies of highly active data so it can be accessed from the higher-performing technology. The cached copy is transient; the data also resides in a permanent location elsewhere in the storage system.

Caching may be done in RAM or in solid-state devices used specifically for caching. RAM cache can be protected by a battery or capacitor.

Caching has been used effectively to speed storage performance for many years. In the mainframe world, the caching is controlled with information communicated from the operating system. In open systems, the storage systems contain the intelligence to stage or leave copies of active data in the cache. Storage systems can cache read data only, or they can also accelerate writes.
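
For contrast with the tiering sketch above, below is a minimal sketch of a read cache with write-through behavior and LRU eviction, assuming a simple block backend exposing read(lba) and write(lba, data). It only illustrates that the cache holds a transient copy while the permanent copy stays on disk; real array caches add battery-backed write caching, destaging and prefetch.

```python
# Minimal sketch of a read cache with LRU eviction and write-through writes.
# The permanent copy always lives on the backend; the cache is transient.
from collections import OrderedDict

class ReadCache:
    def __init__(self, backend, capacity_blocks=1024):
        self.backend = backend            # object with read(lba)/write(lba, data)
        self.capacity = capacity_blocks
        self.cache = OrderedDict()        # lba -> block data, in LRU order

    def read(self, lba):
        if lba in self.cache:
            self.cache.move_to_end(lba)   # cache hit: mark most recently used
            return self.cache[lba]
        data = self.backend.read(lba)     # cache miss: stage from disk
        self._insert(lba, data)
        return data

    def write(self, lba, data):
        self.backend.write(lba, data)     # write-through: disk copy is authoritative
        self._insert(lba, data)

    def _insert(self, lba, data):
        self.cache[lba] = data
        self.cache.move_to_end(lba)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
```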

If a storage system features tiering and caching, the features need to work in concert to avoid wasted or conflicting data movement. There can be improved performance if the two capabilities work together.

IT professionals need to consider the cost/benefit tradeoffs of tiering and caching. What performance is gained versus the cost? The overall performance benefit needs to be considered in the context of the workload from the applications that use the stored information. Most of the vendors of tiered storage systems have effective tools that analyze the environment and report on the effectiveness of tiering. This is necessary to optimize performance.

There is no easy answer to the choice of tiering, caching, or doing both in a storage system. It becomes a matter of maximizing the performance capabilities of the storage system and what value it brings in consolidation, reduced costs, and overall efficiency gains. An analysis of the value gained versus the cost must be done for any individual system.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

October 14, 2011  1:16 PM

FCoE still lacking support

Brein Matturro

By Sonia R. Lelii, Senior News Writer

Brocade showcased its 1860 Fabric Adapter at Storage Networking World (SNW) in Orlando, Fla., this week. The adapter gives customers the option to implement 16 Gbps Fibre Channel, 10 Gigabit Ethernet (10 GbE) or Fibre Channel over Ethernet (FCoE) connectivity, and the company describes it as “any I/O.” But Brocade product marketing manager James D. Myers doesn’t see many companies implementing FCoE so far.

“There isn’t a lot of adoption yet,” Myers said. “They are buying a lot of converged networks but they are not turning (FCoE) on yet. There are a few early adopters. Most are hedging their bets. I think it will take upwards of a decade for FCoE to be prevalent.”

Brocade hasn’t been a huge advocate for FCoE the way its rival Cisco Systems has been. But at least one SNW attendee confirmed Myers’ thoughts. Mitchel Weinberger, IT manager for Seattle-based GeoEngineers, said he researched FCoE and found the performance gain wasn’t significant enough to justify introducing a new technology into his infrastructure. The company uses an iSCSI SAN from Compellent that connects 10 GbE switches to virtual servers.

“We don’t see the benefit,” Weinberger said. “All the studies I’ve seen say the benefits are minimal. We really didn’t see enough advantage to put Fibre Channel over Ethernet. It’s another technology for us to learn, and we don’t have the staff.”

FCoE basically encapsulates Fibre Channel frames in Ethernet frames, and its benefits include a reduction in I/O adapters, cables and switches in the data center. But the convergence of Fibre Channel and Ethernet means storage and network administrators must share management responsibilities, or one team must cede control to the other. That can be a big problem in organizations where the two groups don’t get along.
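
The encapsulation itself is straightforward. The sketch below wraps a Fibre Channel frame in an Ethernet frame with the FCoE EtherType (0x8906); it is a simplification that omits the FCoE header and its SOF/EOF delimiters, padding, and the lossless (DCB) Ethernet that FCoE requires, and the MAC addresses in the example are placeholders.

```python
# Simplified sketch of FCoE encapsulation: a Fibre Channel frame carried as
# the payload of an Ethernet frame with EtherType 0x8906. Real FCoE adds an
# FCoE header with SOF/EOF delimiters and runs over lossless (DCB) Ethernet.
import struct

FCOE_ETHERTYPE = 0x8906

def encapsulate(fc_frame: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
    if len(dst_mac) != 6 or len(src_mac) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame     # the FC frame rides as the Ethernet payload

# Example with a dummy FC frame and placeholder MAC addresses
frame = encapsulate(b"\x00" * 36, b"\x02\x00\x00\x00\x00\x02", b"\x02\x00\x00\x00\x00\x01")
```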

“It makes total sense,” said Howard Marks, chief scientist at DeepStorage.net. “Except for the politics.”


October 13, 2011  7:00 PM

Sepaton sets its sights on Big Data

Brein Matturro

By Sonia R. Lelii, Senior News Writer

Sepaton is looking to move beyond its data protection specialty into Big Data.

At Storage Networking World in Orlando, Fla., this week, new Sepaton CTO Jeffrey Tofano offered a broad description of where the vendor plans to go within the next five years. Tofano didn’t offer too many technology specifics, but said Sepaton’s plan is to position itself for the broader Big Data market.

The idea is to expand its use of NAS protocols to its backup products within the next year, then over the subsequent two years provide “solution stacks” for snapshot archiving, specialized archiving and data protection environments. The goal is to use all of that technology for nearline storage and Big Data. “Our technology is skating where the buck will be, and that’s Big Data,” said Tofano, who was previously CTO of Quantum Corp.

There still is a lot of marketing hype around the term Big Data, but Tofano puts it into two buckets: either large data sets or analytics of petabytes of data for business intelligence. Sepaton will be targeting both, he said, by using the company’s technology to “bring specialized processors closer to the storage to do clickstreaming, web logs or email logs.

“It turns out that Big Data is a perfect fit for our [technology] core, which is a scalable grid, content-aware deduplication and replication technology,” Tofano said. “Our technology is not the limiting factor. We have a lot of the pieces in place. We are not building a new box. We are refining a box to get into the Big Data market. Right now, we have a scalable repository bundled behind a Virtual Tape Library [VTL] personality.”

Tofano said the VTL market is mature, and this new direction does not mean Sepaton will get out of the backup space. Obviously, he said, it depends on revenues. “We will become more general purpose over time. We will do storage and support loads outside of data protection,” Tofano said.


October 12, 2011  7:06 PM

Dell adds Ocarina compression to object storage

Dave Raffo

Dell today revealed the first product it will release using data reduction technology from its Ocarina acquisition 15 months ago: The DX6000G Storage Compression Node (SCN) for its DX object storage system.

The DX6000G SCN is an appliance based on the Dell PowerEdge R410 server that connects to its DX6000 object storage nodes. Dell director of DX product marketing Brandon Canady said the compression appliance can reduce data by up to 90%, depending on file type. Although Ocarina technology can dedupe or compress files, the object storage appliance will only use compression. It has two modes — Fast Compression mode is optimized for performance and Best Compression mode is optimized for capacity reduction. Customers can choose one or both modes.

Canady said customers can set policies to use fast compression when data is first brought onto the storage system and then switch to the best compression after a pre-configured time period. The appliance uses different compression algorithms depending on file type.

“It’s like applying tiered intelligent compression,” Canady said. “Because we maintain metadata with the file inside of the storage device, we can employ algorithmic policies as part of the lifecycle management of content.”
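
A rough sketch of that kind of policy-driven compression is shown below. The zlib and lzma calls are generic stand-ins, not the Ocarina algorithms, and the 30-day cutoff and file-type mapping are invented for illustration.

```python
# Minimal sketch of policy-driven compression that trades speed for ratio,
# in the spirit of a "fast" mode vs. a "best" mode. zlib/lzma are stand-ins
# for the actual algorithms; the cutoff and file-type mapping are assumed.
import time
import zlib
import lzma

FAST_AGE_SECONDS = 30 * 24 * 3600   # assumed: switch to "best" after 30 days

def compress(data: bytes, ingest_time: float, filetype: str) -> bytes:
    age = time.time() - ingest_time
    if age < FAST_AGE_SECONDS:
        return zlib.compress(data, level=1)     # fast mode: optimize for speed
    if filetype in ("log", "txt", "csv"):
        return lzma.compress(data)              # best mode: text compresses well
    return zlib.compress(data, level=9)         # best mode, generic fallback
```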

List price for the DX6000G SCN will begin at about $25,000, depending on the amount of data ingested. The appliance will become generally available next week.

Dell plans to incorporate Ocarina’s compression and deduplication across its storage systems, with more reduction products expected early next year. Canady said the performance and compression modes will likely show up in all of the data reduction appliances.

“Each implementation is likely to be slightly different, but we see value in having a performance approach and a capacity approach,” he said.


October 12, 2011  2:35 PM

IBM Global Services picks Nirvanix for cloud storage

Dave Raffo

IBM Global Services unveiled a cloud storage service for archived and infrequently accessed data today, based on technology from Nirvanix.

The OEM deal with Nirvanix provides IBM with the storage portion of its IBM SmartCloud Enterprise services. IBM bills the SmartCloud storage service as best suited for unstructured data for companies in media and entertainment, healthcare and financial services. Nirvanix’s object storage is designed for content shared across geographical locations and for data that must be retained for long periods. IBM pitches SmartCloud storage as an alternative to tape for backups and archiving.

Nirvanix CEO Scott Genereux said IBM is taking Nirvanix’s existing service without modification, but he plans for the startup to eventually offer IBM services optimized for the SmartCloud service. “There will be integration at the software level,” he said. “A year from now, IBM will have compelling differentiators from anybody else who sells our technology.”

Genereux said the partnership will help IBM and Nirvanix compete against storage vendors pushing cloud implementations, and gives Nirvanix extra ammunition to go up against Amazon S3, Microsoft Azure, and Google Cloud Storage services.

“There’s a big difference when IBM walks in and tries to sell the service than when we walk in as a startup,” he said.

Genereux also said IBM is looking to deliver a true cloud service, unlike storage vendors who try to sell what he calls “cloud in a box” by cloud-washing their hardware products.

“IBM Global Services and Nirvanix are both services companies, and we’re in complete synchronization selling services in a pay-by-the-drink model,” he said.


October 7, 2011  4:32 PM

Storage technology adoption is a slow process

Randy Kerns

In the last few years we’ve seen advances in storage technology that have tremendous potential for IT customers. Some of these are enabled by investments made in developing flash solid-state drive (SSD) technology and adapting it to enterprise storage systems.

These new storage technologies improve the processes of storing and retrieving information. They reduce costs, and lead to greater storage efficiencies and capabilities.

Notable storage technologies that have been delivered relatively recently to customers include:

• Tiering products with the ability to place or move data based on a probability of access to maximize the performance of the storage system.
• All-SSD storage systems designed for near-zero latency to provide the highest level of performance in a storage system.
• Faster networks and network convergence where the pipes used in storage to move data allow greater bandwidth and bring the ability to standardize on one type of infrastructure throughout an organization.
• SSD add-in technology where the non-volatility and performance of SSDs can be exploited in a direct manner.
• Forward error-correction technology as a new way to protect data from a failure.
• Scale-out NAS systems to address enterprise demands for the volumetric increase in unstructured data to be stored.

There’s another type of product that has not received as much attention, perhaps because it is not simply a matter of using a new technology. That is the process of automating information management to make it easier for administrators to understand the value of data. This area may be more difficult to master, but new developments may yield greater economic value – primarily in operational expense – than some of the newer technologies.

An example of automated management is what’s called “Active Management.” Active Management is similar to Storage Resource Management (SRM), but in addition to monitoring storage resources, it takes automated actions based on conditions and rules defined by default or customized by the storage administrator.
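
A minimal sketch of that idea, monitoring plus rule-driven action, might look like the following; the thresholds and actions are assumed, not taken from any product.

```python
# Minimal sketch of "Active Management": monitoring plus automated actions
# driven by rules. Thresholds and actions are assumed defaults, not any
# specific product's behavior.

def check_pools(pools, rules):
    """pools: list of dicts with 'name', 'used_gb', 'capacity_gb'."""
    for pool in pools:
        utilization = pool["used_gb"] / pool["capacity_gb"]
        for rule in rules:
            if utilization >= rule["threshold"]:
                rule["action"](pool)   # act automatically, don't just report

rules = [
    {"threshold": 0.80, "action": lambda p: print(f"ALERT: {p['name']} above 80% full")},
    {"threshold": 0.90, "action": lambda p: print(f"Expanding {p['name']} by 1 TB")},
]
check_pools([{"name": "pool1", "used_gb": 920, "capacity_gb": 1000}], rules)
```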

Yet despite their clear benefits, the adoption rate for these technologies and processes will be slow. That’s mainly because IT is conservative in adopting new concepts and technologies. This does not mean that the technologies won’t be embraced and be successfully deployed. It will just take longer.

The excitement around a new technology needs to be kept in perspective with how long it takes to be deployed successfully by IT. The technology adoption rate reflects the inertia and conservatism that come with handling the critical task of storing data.

Long adoption cycles can kill start-up companies when they can’t secure the investment required to reach profitability. Many investors have an aggressive profile that defies the reality of adoption rates. Larger companies can weather the slow adoption more successfully, but with much internal angst.

But no matter how great a new storage technology may be, customers need to understand it well enough to make sound investment decisions before it can take hold in data centers.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 6, 2011  1:05 PM

FlashSoft makes quick upgrade to SSD-caching software

Dave Raffo

Startup FlashSoft, which was first out with flash caching software, is looking to stay ahead of impending competition with a major upgrade to its FlashSoft SE barely four months after making the software generally available.

FlashSoft today said FlashSoft SE 2 is in beta, with support for Linux, larger solid-state drives (SSDs) and read-only cache. FlashSoft SE improves application performance with solid-state storage by turning SSD and PCI Express (PCIe) server flash into a cache for the most frequently accessed data.

FlashSoft developers have been busy. CEO Ted Sanford claims 800 feature enhancements in version 2. Besides Linux support, the biggest improvements are an increase in the maximum cache from 256 GB to 1 TB (it now can accelerate up to 10 TB of active data), and the ability to detect a failing SSD, issue an alert, and switch the cache to pass-through mode to avoid writing data to any SSD without a redundant backup. FlashSoft SE 2 also has a new GUI that provides runtime data and statistics through the Windows Management Instrumentation (WMI) API.

The software now supports read-only caching as well as read-write caching. Sanford said read-write caching works best for most applications, but apps with large read requirements such as video can benefit from read-only. FlashSoft also increased its read cache performance for large files and data sets.
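
The fail-safe behavior described above can be sketched roughly as follows. This is not FlashSoft’s code; the ssd and primary objects and their read/write methods are assumed interfaces used only to show the pass-through fallback.

```python
# Minimal sketch of a caching layer that drops into pass-through mode when
# the SSD reports errors, so every I/O goes straight to primary storage.
# Illustrative only; 'ssd' and 'primary' are assumed block-device objects.

class CachingLayer:
    def __init__(self, ssd, primary):
        self.ssd = ssd                 # read(lba)/write(lba, data), may raise IOError
        self.primary = primary
        self.pass_through = False

    def read(self, lba):
        if not self.pass_through:
            try:
                data = self.ssd.read(lba)
                if data is not None:
                    return data                      # cache hit
            except IOError:
                self._disable_cache()
        data = self.primary.read(lba)                # miss or pass-through
        if not self.pass_through:
            try:
                self.ssd.write(lba, data)            # populate cache on miss
            except IOError:
                self._disable_cache()
        return data

    def _disable_cache(self):
        self.pass_through = True
        print("ALERT: SSD failing, cache switched to pass-through mode")
```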

In addition, FlashSoft upgraded FlashSoft SE-V for virtual servers to 2.0, adding support for virtual servers running on Linux and Windows.

FlashSoft SE pricing starts at $2,000 and depends on the number of servers accelerated and the type of SSDs used. FlashSoft sells FlashSoft SE as standalone software, but Sanford said he is working on partnerships to bundle the software with SSDs.

Sanford said the new features were mostly suggestions from early customers and organizations evaluating FlashSoft SE software. FlashSoft is probably also hearing footsteps from other flash cache competitors entering the market.

EMC’s PCIe server-side cache “Project Lightning” product that uses EMC’s FAST software to improve performance is due out by the end of the year, as is Fusion-io’s ioCache that uses software that Fusion-io acquired from early FlashSoft competitor IO Turbine. SSD vendor STEC is sampling its EnhanceIO SSD Cache Software to partners and early customers, startup VeloBit is in beta with its SSD caching software, and Nevex and perhaps other startups are on the way with competing products.

“We feel like it’s a significant accomplishment that we’re shipping a 2.0 release in advance of anybody else shipping 1.0,” Sanford said. “We believe we will continue to advance the art. Leveraging flash as cache in existing servers or being added to new servers coming out is a large market. When you can improve an application’s performance by three to five times, that’s a fundamentally strong value proposition. There will be room for significant success for several companies.”


October 4, 2011  2:59 PM

DataCore adds support for cloud as storage tier

Dave Raffo

When DataCore added automated tiering to its SANsymphony-V storage virtualization software in July, it left out support for one tier – the cloud.

Today, DataCore addressed that omission through a partnership with cloud storage gateway vendor TwinStrata.

SANsymphony-V virtualizes storage across pools of heterogeneous systems, adding management features such as thin provisioning, RAID striping, asynchronous replication, and snapshots. The new tiering feature lets customers dynamically move disk blocks among different pools of storage devices.

Beginning in late October, when a customer purchases SANsymphony-V — which DataCore calls a “storage hypervisor” — it will include a 1 TB version of TwinStrata’s CloudArray virtual appliance at no extra cost. That lets DataCore customers move data off to the cloud, although they need a subscription with a cloud storage provider such as Amazon S3 or Nirvanix. A DataCore customer can also go beyond a 1 TB gateway by upgrading the appliance through TwinStrata, which charges $4,995 for unlimited capacity. The CloudArray software deduplicates, compresses and encrypts data before moving it to the cloud.
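
The general shape of such a pipeline, dedupe, then compress, then encrypt before upload, can be sketched as below. The zlib and Fernet calls (the latter from the third-party cryptography package) are stand-ins for whatever CloudArray actually uses, and upload() is a hypothetical client callback.

```python
# Minimal sketch of a dedupe/compress/encrypt pipeline ahead of a cloud
# upload. Illustrative only: zlib and Fernet are stand-in algorithms, and
# upload() is a hypothetical callback (e.g. a PUT to S3 or Nirvanix).
import hashlib
import zlib
from cryptography.fernet import Fernet   # third-party 'cryptography' package

key = Fernet.generate_key()
cipher = Fernet(key)
seen_chunks = set()                       # fingerprints of chunks already stored

def store_chunk(chunk: bytes, upload) -> None:
    fingerprint = hashlib.sha256(chunk).hexdigest()
    if fingerprint in seen_chunks:        # dedupe: skip chunks we already hold
        return
    seen_chunks.add(fingerprint)
    payload = cipher.encrypt(zlib.compress(chunk))   # compress, then encrypt
    upload(fingerprint, payload)          # hypothetical cloud client call
```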

TwinStrata also sells its cloud gateway as an appliance. DataCore CEO George Teixeira said SANsymphony-V will also work with the hardware appliance, using it as cache to speed backups.

“Now we’ve allowed a cloud tier to be part of our storage hypervisor,” Teixeira said. “When data gets to a lower tier, it can be put on an iSCSI device that is actually a cloud disk.”

There are other cloud gateway products on the market, but Teixeira said he picked TwinStrata because it supports iSCSI while most of the others are for files and backup. As the name implies, SANsymphony-V is a SAN application, so it required a block storage gateway.

“Most of them [gateways] are doing things at the file system,” Teixeira said. “These guys [TwinStrata] present an iSCSI disk. We can use storage virtualization across all disk and this looks like another disk we are auto-tiering, so it plays into our model. This doesn’t require a lot of thinking on the part of the customer. It’s just another tier, and you can choose to pick a pay-as-you-go model.”

Although TwinStrata can handle primary storage, Teixeira said he expects his customers to use the cloud mostly for backup and archiving. “I think we have some ways to go to get to primary storage in the cloud,” he said. “We’re looking mostly at backup, archiving and scratch storage. We think most production data will stay on-premise, but why not put non-production and backup data on the cloud?”

TwinStrata is looking to use partnerships to get its gateway into the market, even if it has to give it away at first. Last month the startup began offering a free 1 TB CloudArray appliance to Veeam Backup & Replication customers who want to back up to the cloud.

Gartner research director Gene Ruth said organizations are interested in moving to the cloud, but are still looking for the best way to go about it. He said storage virtualization and gateways are two potential starting points.

“It’s hard for people to get their arms around the idea that they’ll put all their data out on the cloud, but they know they have some data they can put out there,” he said. “I see storage virtualization in general as one of those building blocks to help consolidate a cloud storage environment around a common provisioning point. It’s not the end-all, but it’s a good start.”

Ruth said he also sees gateways as key enablers of cloud storage, and not just as standalone devices.

“It seems a pretty obvious step for major vendors to put that [gateway] functionality into disk arrays and file servers,” he said. “I don’t think it’s that difficult to add a gateway to their arrays.”


October 3, 2011  1:18 PM

Storage companies have many faces

Randy Kerns

Large companies are represented to the outside world by many people filling various roles, and these people are trying to accomplish different things. A marketing person is trying to create information that will highlight the company and give customers a reason to consider or continue with its products. Sales people are on the front lines with customers, trying to influence them with the company’s offerings. Press and analyst relations people try to pass along useful information and increase awareness of the company and its products.

At any given time, all of these people serve as the face of the company they represent.

This is no different in the storage industry than in other industries. But because information is stored for a long time, customers may not interact with storage companies as frequently, which means their impressions can have a longer-term impact.

As an example, I was working with an IT director of a major company recently, and the director was upset with how a salesperson had engaged the company’s CIO. The IT director did not have a problem with the specific person, but with the company, and vowed not to buy a solution from that company for the problem we were trying to solve. It didn’t matter whether the salesperson was using company-endorsed sales practices or not. A new barrier had been created.

This isn’t an isolated incident. I find that the negative perception caused by an individual can affect a sale even when the customer knows the representative doesn’t speak for the entire company. The customer might consciously or subconsciously associate the vendor with one person, and the way that person treated the customer can color all future relations with that vendor. That could impair the customer’s ability to make objective decisions. But that is human nature, and overcoming it requires additional thought and effort.

The lesson here is that at any moment during external communications, the company employee must remain keenly aware that he or she is the “face of the company” right then. Saying outrageous things or taking a condescending attitude ultimately reflects on the company. It only takes one bad encounter to give someone a negative view. In the case of storage, this view could last for years.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


September 30, 2011  2:25 PM

Storage Headlines for September 30, 2011

Megan Kellett

Check out our Storage Headlines podcast, where we review the top stories of the past week from SearchStorage.com and Storage Soup.

Here are the stories we covered in today’s podcast:

(0:18) Violin Memory launches all-flash storage for the enterprise
(1:24) FalconStor founder Huai found dead
(2:17) Arkeia adds dedupe, SSDs to backup appliances
(3:12) QLogic takes another whack at converged storage networks
(4:01) Storage Products of the Year 2011

