Storage Soup

A SearchStorage.com blog.


December 2, 2011  10:59 AM

Physical, virtual backup still mostly a two-headed beast



Posted by: Dave Raffo
data backup, quest vranger, veeam, virtual machine backup

We received a couple of reminders this week about how important backing up virtual machines is in an organization’s data protection strategy.

First, virtual server backup specialist Veeam released Backup & Replication 6. That in itself wasn’t a huge development. Veeam revealed full details of the product back in August, and said it would be shipping by end of year. It even leaked the most important detail – support of Microsoft Hyper-V – six months ago.

The most interesting part of the launch was the reaction it brought from backup king Symantec. Symantec sent an e-mail reminding us that it, too, does virtual backup (through its V-Ray technology) and claimed “point products are complicating data protection.” Symantec released a statement saying “In the backup world, two is not better than one. Using disparate point products to backup virtual and physical environments adds complexity and increases management costs … Organizations should look for solutions that unite virtual and physical environments, as well as integrate deduplication, to achieve the greatest ROI.”

Sean Regan, Symantec’s e-Discovery product marketing manager, posted a blog extolling Symantec’s ability to protect virtual machines.

In other words, why bother with products such as Veeam and Quest Software’s vRanger for virtual machines when Symantec NetBackup and Backup Exec combine virtual and physical backup? But the established backup vendors opened the door for the point products by ignoring virtual backup for too long. Symantec didn’t really get serious about virtual backup until the last year or so.

Randy Dover, IT officer for Cornerstone Community Bank in Chattanooga, Tenn., began using Quest vRanger for virtual server backup last year although his bank had Symantec’s Backup Exec for physical servers. Dover said he would have had to put agents on his virtual machines with Backup Exec and it would have cost considerably more than adding vRanger.

“Before that, we were not backing up virtual machines as far as VMDK files,” he said. “If something happened to a VM, we would have to rebuild it from scratch. That’s not a good scenario, but basically that’s where we were.”

Dover said vRanger has cut replication and restore times for his 31 virtual machines considerably. And he doesn’t mind doing separate backups for virtual and physical servers.

“Using two different products doesn’t concern us as much,” he said. “We generally look for the best performance option instead of having fewer products to manage.”

Quest took a step towards integrating virtual and physical backup last year when it acquired BakBone, adding BakBone’s NetVault physical backup platform to vRanger.

Walter Angerer, Quest’s general manager of data protection, said the vendor plans to deliver a single management console for virtual and physical backups. He said Quest would integrate BakBone’s NetVault platform with vRanger as much as possible. It has already ported NetVault dedupe onto vRanger and is working on doing the same with NetVault’s continuous data protection (CDP).

“We are looking forward to an integrated solution for virtual, physical and cloud backup,” Angerer said. “I’m not sure if either one will go away, but we will create a new management layer. The plan is to have a single pane of glass for all of our capabilities.”

December 1, 2011  8:57 AM

Business critical applications drive storage requirements



Posted by: Randy Kerns
business continuity, business critical applications, disaster recovery, storage management

Discussions about buying storage typically begin with determining the company’s requirements, and usually focus on meeting the needs of business critical applications — also known as tier 1 applications.

As the term implies, these applications are the most critical to an organization. In most cases, downtime or interruption to business critical applications causes a significant negative impact to the company. This negative impact can be financial or an embarrassment that could lead to loss of future business.

When companies quantify the business impact of losing critical apps, they usually measure it in financial terms, such as a material loss of ‘x’ dollars per hour of unavailability. They also look at longer term impacts, such as the number of customers that will go to a competitor because of the downtime. Not only is that business lost, but the likelihood that the next transaction also goes elsewhere hurts future business.

A more jarring measurement that some IT professionals use to justify a business continuance/disaster recovery strategy is the length of outage the company could not recover from, the point at which it would be forced out of business. These numbers vary widely by industry, but they certainly get a lot of attention when measured in days or hours.
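To make the financial argument concrete, here is a minimal Python sketch of the kind of back-of-the-envelope calculation described above. All of the dollar figures and customer counts are hypothetical assumptions for illustration, not numbers from any company cited in this post.

    # Rough downtime-impact estimate for a business critical application.
    # Every figure below is a hypothetical placeholder.
    revenue_per_hour = 50_000.0      # direct revenue lost per hour of unavailability
    customers_lost_per_hour = 12     # customers assumed to defect to a competitor
    lifetime_value = 8_000.0         # projected future revenue per lost customer

    def downtime_cost(hours):
        """Direct loss plus projected future business lost to competitors."""
        direct = hours * revenue_per_hour
        future = hours * customers_lost_per_hour * lifetime_value
        return direct + future

    for outage in (1, 4, 24):
        print(f"{outage:>2}-hour outage: ${downtime_cost(outage):,.0f}")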

Storage is a key element in meeting the availability needs of business critical applications, although the amount of management required on the storage end varies by application. Requirements for storage systems used for business critical applications start with four key areas:

  • Data Protection – The potential for data loss due to operational error (from a variety of causes), corruption from the application, or a hardware malfunction is real. A recovery time objective (RTO) and recovery point objective (RPO) need to be established for business critical applications. These will dictate the frequency of protection and the number of generations retained, the data protection technology needed to meet the time and capacity requirements, and the recovery procedures (a simple sketch after this list shows how RPO and retention translate into a schedule). The data protection strategy used for a business critical application may differ from that used for secondary – or tier 2 – applications.
  • Business Continuance / Disaster Recovery – BC/DR is a storage-led implementation where the replication of data on the storage systems is the most fundamental element. A solid BC/DR plan requires storage systems that can provide coherent replication of data to one or more geographically dispersed locations. This capability is necessary to ensure the operational availability of the critical app.
  • Security – Secure environments and secure access to information are implied with business critical applications. From a storage standpoint, the control of access to information is an absolute requirement and is not always addressed adequately when developing a storage strategy. Block storage systems protect access through masking and physical connection limitations, moving the security problem to the servers. File storage for unstructured data uses a permissions set that relies on the diligence of administrators and has potential openings that must be addressed with careful consideration. This area will improve as more investments are made in storage for unstructured data.
  • Performance – Most of the time, business critical applications demand high performance. For storage, quality of service and service level agreements are defined to meet minimum requirements for operation that do not degrade or impede the application’s execution. This requires measuring and monitoring the storage to identify events and degradations that affect the application so that action can be taken. Isolating performance issues is a complex task that requires skilled storage administrators with tools that work with the storage systems and networks.
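As referenced in the Data Protection item above, here is a minimal Python sketch of how an RPO and a retention window can be turned into a protection schedule and a count of retained generations. The policy values (a one-hour RPO for tier 1, daily protection for tier 2) are illustrative assumptions only, not recommendations.

    # Translate RPO/retention settings into a protection schedule (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class ProtectionPolicy:
        rpo_hours: float        # maximum tolerable data loss
        retention_days: int     # how far back recovery points must reach
        rto_hours: float        # target time to restore service

        @property
        def backups_per_day(self) -> int:
            # Protect often enough that no more than rpo_hours of work can be lost.
            return max(1, round(24 / self.rpo_hours))

        @property
        def generations_retained(self) -> int:
            return self.backups_per_day * self.retention_days

    tier1 = ProtectionPolicy(rpo_hours=1, retention_days=14, rto_hours=2)
    tier2 = ProtectionPolicy(rpo_hours=24, retention_days=30, rto_hours=24)

    for name, p in (("tier 1", tier1), ("tier 2", tier2)):
        print(f"{name}: {p.backups_per_day} protection runs/day, "
              f"{p.generations_retained} generations retained, RTO {p.rto_hours} h")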

Organizations must give careful consideration to their storage for business critical apps. There needs to be a process for understanding the requirements and evaluating the systems that can meet them, along with a strategy for the overall business of storing and protecting information.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


November 28, 2011  4:39 PM

NetApp’s backup plan examined again



Posted by: Dave Raffo
CommVault, data deduplication, data domain, netapp, Quantum

NetApp’s failed attempt to buy Data Domain in 2009 brought a lot of speculation that the storage systems vendor would shift its attention to another backup vendor.

NetApp executives played down the speculation. They said they didn’t need a backup platform, but they wanted Data Domain because its leading position in data deduplication for backup was disruptive and driving strong revenue growth. EMC, which paid $2.1 billion to outbid NetApp for Data Domain, has continued to grow that business despite a plethora of competitors.

NetApp has since made several smaller acquisitions – the largest was LSI’s Engenio systems division – but stayed away from backup. But a few rough quarters have caused NetApp’s stock price to shrink, and now the rumors have returned that it is hunting for backup.

A Bloomberg story today pegged backup software vendor CommVault and disk and tape backup vendor Quantum as the main targets. The story was based more on speculation from Wall Street analysts than sources who said any deals were in the works, but such an acquisition wouldn’t surprise many in the industry.

“I think NetApp needs to acquire companies and technologies, and bring in talent from the outside,” Kaushik Roy, managing director of Wall Street firm Merriman Capital, told Storage Soup.

CommVault and Quantum were among the companies believed to be on NetApp’s shopping list in 2009. A few things have changed. NetApp signed an OEM deal to sell CommVault’s SnapProtect array-based snapshot software earlier this year. That deal is in its early stages. NetApp hasn’t sold much CommVault software yet, but perhaps the partnership is a test run for how much demand there is and could lead to an acquisition.

Quantum was EMC’s dedupe partner before EMC bought Data Domain. If NetApp had bought Quantum in 2009, it could have been taken as NetApp picking up EMC’s leftovers. But Quantum has revamped its entire DXi dedupe platform since then, expanded its StorNext archiving platform and acquired virtual server backup startup Pancetera. Those developments could prompt NetApp to take another look.

There are also smaller dedupe vendors out there, most notably Sepaton in the enterprise virtual tape library (VTL) space and ExaGrid in the midrange NAS target market.

However, people who suspect NetApp will make a move expect it will be a big one. CommVault would be the most expensive with a market cap of $1.9 billion and strong enough revenue growth to stand on its own without getting bought. Quantum, which finally showed signs of life in its disk backup business last quarter, has a $524 million market cap but most of its revenue still comes from the low-growth tape business.

Storage technology analyst Arun Taneja of the Taneja Group said buying CommVault would make the most sense if NetApp wants to take on its arch rival EMC in backup. While NetApp was the first vendor to sell deduplication for primary data, it is missing out on the lucrative backup dedupe market.

“NetApp needs to get something going in the data protection side,” Taneja said. “They’ve missed millions of dollars in the last two years [since EMC bought Data Domain].

“If they want to be full competitors against EMC – and what choice do they have – CommVault would be better for NetApp to buy. In one fell swoop, CommVault covers a lot of ground against EMC — backup software, dedupe technology at the target and source, and archiving, too.”


November 23, 2011  8:46 AM

Efficient storage systems and data management add value



Posted by: Randy Kerns
data management, storage efficiency

Although IT professionals and vendors often think of storage efficiency in different ways, there are usually two main methods of handling it. One is through efficient storage systems that maximize resources. The other is through data management that determines where data is located and how it is protected.

Efficient storage systems control the placement of data within the storage system and the movement of data based upon a set of rules. The systems maximize capacity and performance in several ways:

• Data reduction through data deduplication or compression
• Tiering with intelligent algorithms to move data between physical tiers such as solid state drives (SSDs) and high capacity disk drives (a minimal sketch of tiering decisions follows this list)
• Caching to maintain a transient copy of highly active data in a high speed cache
• Controlling data placement based on quality of service settings for performance guarantees
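As a concrete illustration of the tiering and caching items above, here is a minimal Python sketch of an access-frequency placement rule that promotes busy extents to SSD and demotes idle ones to capacity disk. The thresholds, tier names and workloads are assumptions for illustration, not any vendor’s actual algorithm.

    # Illustrative tiering rule: place extents by recent I/O activity.
    HOT_IOPS = 100      # promote extents busier than this
    COLD_IOPS = 5       # demote extents quieter than this

    def place_extent(current_tier: str, recent_iops: float) -> str:
        """Return the tier an extent should occupy after this monitoring interval."""
        if recent_iops >= HOT_IOPS:
            return "ssd"
        if recent_iops <= COLD_IOPS:
            return "capacity_disk"
        return current_tier    # activity in between: leave the extent where it is

    extents = {"db_index": 450.0, "mail_archive": 1.2, "home_dirs": 40.0}
    for name, iops in extents.items():
        print(name, "->", place_extent("capacity_disk", iops))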

Efficient data management requires dynamically changing the data’s location, which may mean moving data beyond a single storage system. The initial data placement and subsequent movement are based on information about the data that determines its value: performance needs and frequency of access, data protection requirements (including disaster recovery and business continuance demands), and the volume and projected growth of the data. Most importantly, the process takes into account that these factors change over time.

Managing data efficiently presumes that there are classes of storage with different performance and cost attributes, and a variable data protection strategy that can be adapted according to requirements.

When the value of data changes, the data must be moved to a more optimal location with a different set of data protection rules. The movement must be seamless and transparent so that the accessing applications are not aware of the location transitions.

Data protection changes must also be transparent so that recovery from a disaster or operational problem always involves the correct copy. Efficient data management must be automated to operate effectively without introducing additional administration costs.
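To show the idea in miniature, here is a hedged Python sketch in which each value class of data maps to a storage class and a protection policy, and a change in value triggers a move together with new protection rules. The class names and policies are illustrative assumptions, not a description of any specific product.

    # Illustrative value-based data management: placement and protection follow the
    # data's value class, and a reclassification relocates the data transparently.
    POLICIES = {
        "business_critical": {"storage": "tier1_replicated", "protection": "hourly snapshots + remote replica"},
        "active":            {"storage": "tier2_disk",       "protection": "nightly backup"},
        "dormant":           {"storage": "archive",          "protection": "weekly backup"},
    }

    def reclassify(dataset: dict, new_class: str) -> dict:
        """Apply the placement and protection rules for the dataset's new value class."""
        policy = POLICIES[new_class]
        dataset.update(value_class=new_class, location=policy["storage"],
                       protection=policy["protection"])
        return dataset

    ds = {"name": "q3_orders", "value_class": "business_critical",
          "location": "tier1_replicated", "protection": "hourly snapshots + remote replica"}
    print(reclassify(ds, "active"))   # value dropped: relocate and relax protection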

This type of data management existed in the mainframe world for a long time as Data Facility Storage Management Subsystem (DFSMS) before moving into open systems.

An interesting area that should be watched closely is migration capabilities built into storage systems that can move data across systems based on policies administrators set up. The IBM Storwize V7000 Active Cloud Engine, Hitachi Data Systems BlueArc Data Migrator and EMC VMAX Federated Live Migration are a few examples of these. The EMC Cloud Tiering Appliance also does this, but is not built into the storage system.

This will be a competitive area because there is great economic value in managing data more efficiently. Watch this area for significant developments in the future.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


November 22, 2011  8:45 AM

Brocade: 16-gig Fibre Channel switches moving fast



Posted by: Dave Raffo
16 Gbps Fibre Channel, brocade, fcoe

Brocade executives say the 16 Gbps Fibre Channel (FC) switches they rolled out earlier this year have been an immediate hit in the market, with customers upgrading at a faster pace than they did with 8 Gbps and 4 Gbps switches.

During the vendor’s earnings call Monday evening, Brocade reported nearly $40 million in revenue from 16-gig directors and switches in their first full quarter of availability. Brocade’s total FC revenue was approximately $303 million last quarter, up about 10% from the previous quarter although down about 4% from a year ago. Brocade execs pointed out that all of the major storage vendors have qualified Brocade’s 16-gig FC gear, while rival Cisco has yet to support 16-gig FC.

Brocade execs said server virtualization and PCIe-based flash are pushing customers to the higher performing FC. They also say customers are sticking to FC instead of moving to Fibre Channel over Ethernet (FCoE).

“We saw a faster-than-expected ramp of our 16-gig portfolio of products,” Brocade CEO Mike Klayko said on the call. “This is perhaps the fastest and smoothest qualification process [with OEM partners] of any new product portfolio among our OEMs.”

Jason Nolet, Brocade’s VP of data center and enterprise networking, said FC remains the “premier storage networking technology for mission-critical apps.” He said Brocade is selling FCoE in top-of-rack switches but there is “almost no take-up” in end-to-end FCoE implementations. “Because of that, Fibre Channel continues to enjoy that kind of premier place in the hierarchy of technologies for storage networking,” he said.

The Brocade executives also played up the monitoring and diagnostics built into their 16-gig switches, suggesting the vendor will make more of a push into this area. Brocade customers have turned to third-party tools for this, such as Virtual Instruments’ Performance Probe. But Virtual Instruments CEO John Thompson recently complained that Brocade has been telling its customers not to use Virtual Instruments products despite having a cooperative marketing relationship in the past. The management aspect of Brocade switches will be worth watching in the coming months.


November 17, 2011  10:17 AM

Thailand floods have NetApp treading water



Posted by: Dave Raffo
hard-drive shortage, netapp, Thailand floods

NetApp executives admit they are concerned about how the hard drive shortage caused by floods in Thailand will affect their sales over the next few months. Those concerns prompted the vendor to lower its forecast for next quarter and brought negative reactions on Wall Street.

Of course, NetApp isn’t the only company that will feel the sting of a hard-drive shortage. All storage companies will suffer from a shortage of drives, and customers will suffer too as prices go up. But NetApp also has other issues. Its revenue from last quarter was slightly below expectations, and NetApp is facing increased competition from rival EMC in clustered NAS and unified storage. There is also a potentially sticky matter concerning sales of NetApp storage into Syria for use in an Internet surveillance system – a situation that NetApp claims it had nothing to do with.

NetApp Wednesday reported revenue of $1.507 billion last quarter, slightly below analysts’ consensus expectation of $1.54 billion. NetApp said its sales were lower than expected in nine of its 46 largest accounts, causing the miss. “The rest of the business was generally positive,” NetApp CEO Tom Georgens said, although he admitted the recent rollout of the vendor’s FAS2240 system prompted customers to hold off on buying entry-level FAS2000 products.

At least four Wall Street analysts downgraded NetApp’s stock today following its results and comments on next quarter, and the price of NetApp’s shares dropped 9% in pre-market trading. NetApp executives said they have adequate hard drive supply through the end of the year, but they have a difficult time predicting what will happen after that.

“The impact of the Thailand flooding can potentially be the biggest swing factor [over the next six months],” Georgens said. “Although enterprise class drives are considered to be the least impacted, we still anticipate some amount of supply and pricing complexity. We have all heard the predictions of the industry analyst and the drive vendors themselves. Some of the information is conflicting, and most of it is changing daily in regards to scope and ultimate impact.”

On the positive side, NetApp said its FAS6000 enterprise platform revenue grew 100% year-over-year, its midrange FAS3000 series increased 34%, and the E-Series acquired from LSI Engenio earlier this year increased 11% from the previous quarter.

Georgens said NetApp’s solid state Flash Cache product is becoming common on high-end arrays, and hinted that NetApp would add data management software for caching data on server-based PCIe flash cards. “I think you’ll continue to see innovation on the flash side from NetApp, both inside and outside the array,” he said.

Still, NetApp executives were forced to deal with unpleasant topics during their earnings call:

EMC Isilon
Isilon’s scale-out NAS platform sales have spiked this year since EMC acquired Isilon and put its massive sales force behind it. But Georgens said NetApp’s new Data ONTAP 8.1 software allows greater cluster capability for FAS storage, and its E-Series (from Engenio) and StorageGRID (from Bycast) object storage platform also increase its “big data” value against Isilon.

EMC’s VNX
Georgens said NetApp’s problems haven’t been caused by EMC’s new VNX unified storage series, despite EMC’s push of its SMB VNXe product into the channel. “The VNX has not caused much of a change in dynamics in many accounts,” he said. “The VNXe in terms of EMC’s channel incentives is something that we’ve seen more of. That’s been the strongest part of our portfolio, so I don’t think they’ve slowed us down much. Nevertheless … that’s been something that’s generated more discussion within NetApp than actually the VNX itself. I think the VNX itself has been inflicting more pain on Dell than on NetApp.”

Sales to Syria
Georgens ended the call by addressing a news story reported by Bloomberg News last week that U.S. congressmen are calling for an investigation into the roles played by NetApp and Blue Coat involving sales of their products into Syria. According to Bloomberg, NetApp’s products appeared in blueprints for an Internet surveillance system being implemented in Syria by an Italian company. Georgens said NetApp did not support the sale of its storage to Syria and “we are just as disturbed that this product is in a banned country as anybody else.” He also pointed out that NetApp only sells storage, not the applications that Syria could use to intercept e-mails.

Georgens added that NetApp is helping the U.S. government in its investigation. “I can tell you we did not actively seek out, we did not choose to sell to the Syrian government, and we’re not looking for a way to circumvent U.S. law to sell to the Syrian government,” he said. “We have no interest in providing product to a banned country. I just wanted to make sure that was clear.”


November 16, 2011  6:26 PM

Crossroads launches LTFS StrongBox



Posted by: Sonia Lelii
Crossroads Systems, LTFS, StrongBox

Crossroads Systems said this week that it will start shipping its NAS-based StrongBox data vault in December. StrongBox supports the Linear Tape File System (LTFS) to provide disk-like access across a back-end LTO-5 tape library for archiving.

StrongBox will be among the first products to take advantage of LTFS, which allows tape to act like disk so that users can retrieve data from LTO-5 cartridges by searching a file system directory. As a disk-based device, StrongBox ingests data and stores it on internal disk before archiving it to an external tape library. With LTFS, users no longer have to go through the cumbersome process of manually searching for data from tape.

“LTFS in itself is great, but how that technology is appropriated is what is important,” said Debasmita Roychowdhury, Crossroads Systems’ senior product manager for StrongBox. “With StrongBox, there is no dependency on backup and archiving applications. The tape behaves just like a file system.”

StrongBox comes in two models. The T1 is a 1U server with 5.5 TB of capacity that supports 200 million files at a 160 MBps transfer rate over dual Gigabit Ethernet (GbE) ports. It can write data to LTO tape libraries or external disk arrays via dual 6 Gbps SAS ports. The T3 is a 3U device that can hold up to 14 TB of capacity and handle up to 5 billion files. It can ingest data at speeds up to 600 MBps over quad GbE ports, and write data to back-end tape libraries or disk arrays via four 6 Gbps SAS ports or four 8 Gbps Fibre Channel ports. Both models contain solid-state drives (SSDs) to back up the appliance configuration and a database containing file system mapping data. Both versions support Windows, Linux and Mac systems via the CIFS or NFS network data protocols.

StrongBox allows IT managers to mount CIFS and NFS file shares, and the device provides a persistent view of all the files, whether stored on disk or tape. Data lands on the NAS system and is held on disk until one hour after it was last modified. Thereafter, the files become read-only, policies are applied per file share, a hash is calculated for each file, and the files are moved onto tape. For retrieval, data is pulled from tape onto the StrongBox and sent to the application making the request.
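As a rough model of that ingest-and-recall flow, here is a minimal Python sketch: files land on disk, are hashed and migrated to tape once they have been idle for an hour, and are recalled to disk on read. It only mimics the behavior described above; it is not Crossroads code, and the hold time and hash choice are assumptions.

    # Toy model of the StrongBox-style flow described above (illustrative only).
    import hashlib, time

    HOLD_SECONDS = 3600                      # assumed one-hour hold on disk
    disk_cache, tape_store = {}, {}

    def ingest(path: str, data: bytes) -> None:
        disk_cache[path] = {"data": data, "mtime": time.time()}

    def migrate_idle_files(now: float) -> None:
        idle = [p for p, f in disk_cache.items() if now - f["mtime"] >= HOLD_SECONDS]
        for path in idle:
            f = disk_cache.pop(path)
            digest = hashlib.sha256(f["data"]).hexdigest()   # integrity hash before tape
            tape_store[path] = {"data": f["data"], "sha256": digest}

    def read(path: str) -> bytes:
        if path not in disk_cache:           # recall from tape on demand
            disk_cache[path] = {"data": tape_store[path]["data"], "mtime": time.time()}
        return disk_cache[path]["data"]

    ingest("/share1/report.pdf", b"...")
    migrate_idle_files(now=time.time() + HOLD_SECONDS)   # simulate an hour passing
    print(read("/share1/report.pdf"))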

LTFS allows tape to act like disk because it partitions the media: one partition holds a self-contained hierarchical file system index and a second partition holds the content. When a tape is loaded into a drive, the index and contents can be viewed by a browser or any application attached to the tape. LTFS allows any computer to read data on an LTO-5 cartridge.
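To make the two-partition idea concrete, here is a hedged Python sketch of a cartridge with an index partition that maps paths to locations in a data partition, which is what lets the tape be browsed like a file system. This is a conceptual model only, not the actual LTFS on-tape format.

    # Conceptual model of an LTFS-style cartridge: index partition + data partition.
    class Cartridge:
        def __init__(self):
            self.index = {}          # index partition: path -> (offset, length)
            self.data = bytearray()  # data partition: appended file content

        def write_file(self, path: str, content: bytes) -> None:
            self.index[path] = (len(self.data), len(content))
            self.data.extend(content)

        def list_files(self):        # what a browser shows after the tape is mounted
            return sorted(self.index)

        def read_file(self, path: str) -> bytes:
            offset, length = self.index[path]
            return bytes(self.data[offset:offset + length])

    tape = Cartridge()
    tape.write_file("/video/take1.mov", b"raw footage")
    print(tape.list_files(), tape.read_file("/video/take1.mov"))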

StrongBox has self-healing and monitoring capabilities that automatically detect media failures and degradation. If problems with the media are detected, it migrates data off the bad media to another cartridge non-disruptively. Future versions will add a dual-copy export policy, so that one tape can be shipped off for archiving, and a WAN-accelerated replication policy so that data can be replicated between two StrongBox systems.

Currently, StrongBox supports IBM and Hewlett-Packard tape libraries, and Crossroads is testing the product with libraries from Quantum and Spectra Logic. The T1 model is priced at $21,750 with 10 TB of capacity, while the T3 is priced at $30,700 with 10 TB. Support for up to 5 billion files costs another $4,560.


November 16, 2011  11:41 AM

Dell storage revenue tumbles after changes



Posted by: Dave Raffo
dell storage

Dell CEO Michael Dell likes to refer to his company as a hot storage startup because it is overhauling its storage technology through acquisitions and internally developed products. And like most startups, Dell storage is experiencing growing pains. Right now, Dell storage is actually experiencing non-growing pains.

Like its overall results last quarter, Dell storage sales were below expectations. Year-over-year storage revenue dropped 15% to $460 million, mostly due to revenue lost because of its divorce from partner EMC. Dell-developed storage revenue increased 23% year over year to $388 million, but that was down from $393 million the previous quarter.

Dell executives point to improved margins gained from owning its storage IP following acquisitions of EqualLogic and Compellent, instead of selling EMC’s storage. But Michael Dell didn’t seem thrilled when discussing his company’s storage revenues during the earnings conference call Tuesday night.

“It wasn’t completely up to our expectations,” he said. “There’s some room for improvement there. Growth in Dell IP storage was 23% year over year and it’s now 84% of our overall storage business. We had good demand from Compellent, we launched a whole new product cycle in EqualLogic. There’s definitely more to do here and … we have put a lot of new people into the organization, and they’re becoming productive, and we still remain very optimistic about our ability to grow that business.”

According to a research note issued today by Aaron Rakers, Stifel Nicolaus Equity Research’s analyst for enterprise hardware, the overall storage market grew approximately 12% year over year for the quarter. In comparison, EMC storage revenue increased 16%, Hitachi Data Systems grew 24% and IBM was up 8% over last year.

Rakers wrote that he expected Dell’s storage revenue for the quarter to be around $542 million, and he believes “overlap between EqualLogic and Compellent has been a challenge.” He added that he assumes “muted EqualLogic revenue growth” for the quarter while Dell expanded its Compellent channel into 58 countries – up from 30 countries a year ago.


November 16, 2011  8:32 AM

Making storage simple isn’t easy



Posted by: Randy Kerns
storage management, storage systems

IT managers want storage systems that are simple to administer. They measure ease of installation in time and the number of steps it takes. Ongoing administration is viewed as a negative – “Just alert me when there is a problem and give me choices of what to do about it” is a familiar response I hear when talking to IT people.

The problem is, simplicity is hard to accomplish when building storage systems. When designing a complex storage system, it is difficult to bake in automation intelligently enough to make it “simple.”

Simplicity in storage has come in many forms, including the advanced GUI you see in products such as the IBM XIV and Storwize V7000 or the automation designed into EMC’s FAST VP tiering software. These products are complicated to develop and require an understanding not only of the storage system but of the dynamics encountered in an operational environment. Designing in simplicity can be expensive, too. It requires a substantial investment in engineering and the ongoing support infrastructure to deal with problems and incremental improvements.

But simplicity in the storage system should seem natural, without overt signs of complexity. The best comment I’ve heard from an IT person was, “Why wasn’t it always done this way?”

In many customers’ minds, simplicity doesn’t translate to extra cost. A “simple” system arguably should cost more because it is more expensive to produce, but people often think it should cost less because it is … well, simple.

A potential problem for vendors comes when they highlight specific characteristics of a product to differentiate it from competitors. This seems logical – you want to show why your system is different. Unfortunately, talking up the underlying details of a product while claiming it is simple is contradictory. If it is simple, the underlying details shouldn’t have to be explained. But if the vendor marketing team does not explain them, it may not be able to distinguish the product from others. This leads to confusion as well as marketing messages that are ignored or incorrectly received.

One way out of this for vendors is to list their features and why they are different while — more importantly — giving the net effect of those differences. They should explain the value of features such as performance and reliability, and couple that with the simplicity message. Think of this type of message: simple is good, complex is bad. Tell the simplicity story and show the elements that contribute to it. And for underlying details that may be differentiators, explain them in the bigger context where their effect can be measured, independent of the simplicity that required such a large investment.

It is too easy to focus on product details while hiding the real measures needed for a storage product decision. If product details are the only message, then maybe something else is lacking.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


November 15, 2011  2:58 PM

TwinStrata delivers cloud SANs



Posted by: Sonia Lelii
DAS, NAS, private cloud, public cloud, SAN, twinstrata

TwinStrata is extending the capabilities of its CloudArray gateway device, adding support for on-premises SAN, NAS and direct-attached storage devices, as well as private clouds.

TwinStrata launched CloudArray as an iSCSI virtual appliance in May 2010, and added a physical appliance later in the year. The gateway moves data off to public cloud providers. With CloudArray 3.0, TwinStrata is trying to appeal to customers who want to expand their SAN and private clouds.

“We are broadening our ecosystem of the private and public cloud, and also leveraging existing storage as a starting point,” TwinStrata CEO Nicos Vekiarides said. “We are enabling customers to create a hybrid configuration to combine existing assets.”

Vekiarides said by letting customers use CloudArray on existing storage, they can access their data from anywhere. He claims TwinStrata is enabling a Cloud SAN, with multi-tenancy and multi-site scalability along with local speed performance, data reduction, high availability, encryption, centralized disaster recovery and capacity management.

With CloudArray 3.0, TwinStrata also has automated its caching capability. TwinStrata’s appliance uses the storage on a JBOD or host RAID controller or array for cache. Previously, the cache capacity had to be manually configured.

TwinStrata also added support for Nirvanix’s Cloud Storage Network, Rackspace and OpenStack private and public clouds.

Mike Kahn, managing director at The Clipper Group, said TwinStrata’s 3.0 release allows customers to “put a veil over existing storage so it can be used as one or more tiers of storage. And over time, they can move to a public or private cloud,” he said.

