Storage Soup

A SearchStorage.com blog.


October 13, 2011  7:00 PM

Sepaton sets its sights on Big Data



Posted by: Brein Matturro
big data, deduplication, Sepaton, vtl

By Sonia R. Lelii, Senior News Writer

Sepaton is looking to move beyond its data protection specialty into Big Data.

At Storage Networking World in Orlando, Fla., this week, new Sepaton CTO Jeffrey Tofano offered a broad description of where the vendor plans to go within the next five years. Tofano didn’t offer too many technology specifics, but said Sepaton’s plan is to position itself for the broader Big Data market.

The idea is to expand its use of NAS protocols to its backup products within the next year, then over the subsequent two years provide “solution stacks” for snapshot archiving, specialized archiving and data protection environments. The goal is to use all of that technology for nearline storage and Big Data. “Our technology is skating where the buck will be, and that’s Big Data,” said Tofano, who was previously CTO of Quantum Corp.

There still is a lot of marketing hype around the term Big Data, but Tofano puts it into two buckets: either large data sets or analytics of petabytes of data for business intelligence. Sepaton will be targeting both, he said, by using the company’s technology to “bring specialized processors closer to the storage to do clickstreaming, web logs or email logs.

“It turns out that Big Data is a perfect fit for our [technology] core, which is a scalable grid, content-aware deduplication and replication technology,” Tofano said. “Our technology is not the limiting factor. We have a lot of the pieces in place. We are not building a new box. We are refining a box to get into the Big Data market. Right now, we have a scalable repository bundled behind a Virtual Tape Library [VTL] personality.”

Tofano said the VTL market is mature, and this new direction does not mean Sepaton will get out of the backup space. Obviously, he said, it depends on revenues. “We will become more general purpose over time. We will do storage and support loads outside of data protection,” Tofano said.

October 12, 2011  7:06 PM

Dell adds Ocarina compression to object storage



Posted by: Dave Raffo
data reduction, dell, object storage

Dell today revealed the first product it will release using data reduction technology from its Ocarina acquisition 15 months ago: The DX6000G Storage Compression Node (SCN) for its DX object storage system.

The DX6000G SCN is an appliance based on the Dell PowerEdge R410 server that connects to its DX6000 object storage nodes. Dell director of DX product marketing Brandon Canady said the compression appliance can reduce data by 90%, depending on file types. Although Ocarina technology can dedupe or compress files, the object storage appliance will only use compression. It has two modes — Fast Compression mode is optimized for performance and Best Compression mode is optimized for capacity reduction. Customers can choose one or both modes.

Canady said customers can set policies to use fast compression when data is first brought onto the storage system and then switch to the best compression after a pre-configured time period. The appliance uses different compression algorithms depending on file type.

“It’s like applying tiered intelligent compression,” Canady said. “Because we maintain metadata with the file inside of the storage device, we can employ algorithmic policies as part of the lifecycle management of content.”
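
To make the policy Canady describes concrete, here is a minimal sketch of how a tiered compression rule could work. The codec choices (zlib and lzma stand in for whatever Ocarina actually uses), the 30-day window and the field names are illustrative assumptions, not Dell’s implementation:

    import lzma
    import time
    import zlib

    RECOMPRESS_AFTER_DAYS = 30   # assumed policy window; in practice it would be configurable

    def fast_compress(data: bytes) -> bytes:
        """Performance-optimized pass (zlib stands in for the real codec)."""
        return zlib.compress(data, level=1)

    def best_compress(data: bytes) -> bytes:
        """Capacity-optimized pass (lzma stands in for the real codec)."""
        return lzma.compress(data, preset=9)

    def compress_object(obj: dict) -> bytes:
        """Apply the two-mode policy to an object of the form
        {'data': bytes, 'ingested_at': epoch_seconds, 'file_type': str}.
        A real appliance would also pick the codec per file_type."""
        age_days = (time.time() - obj["ingested_at"]) / 86400
        if age_days < RECOMPRESS_AFTER_DAYS:
            return fast_compress(obj["data"])   # freshly ingested data: favor throughput
        return best_compress(obj["data"])       # aged data: favor capacity reduction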

List price for the DX6000G SCN will begin at about $25,000, depending on the amount of data ingested. The appliance will become generally available next week.

Dell plans to incorporate Ocarina’s compression and deduplication across its storage systems, with more reduction products expected early next year. Canady said the performance and compression modes will likely show up in all of the data reduction appliances.

“Each implementation is likely to be slightly different, but we see value in having a performance approach and a capacity approach,” he said.


October 12, 2011  2:35 PM

IBM Global Services picks Nirvanix for cloud storage



Posted by: Dave Raffo
Cloud storage, IBM Global Services, Nirvanix

IBM Global Services unveiled a cloud storage service for archived and infrequently accessed data today, based on technology from Nirvanix.

The OEM deal with Nirvanix provides IBM with the storage portion of its IBM SmartCloud Enterprise services. IBM bills the SmartCloud storage service as best suited for unstructured data for companies in media and entertainment, healthcare and financial services. Nirvanix’s object storage is designed for content shared across geographical locations and for data that must be retained for long periods. IBM pitches SmartCloud storage as an alternative to tape for backups and archiving.

Nirvanix CEO Scott Genereux said IBM is taking Nirvanix’s existing service without modification, but he plans for the startup to eventually offer IBM services optimized for the SmartCloud service. “There will be integration at the software level,” he said. “A year from now, IBM will have compelling differentiators from anybody else who sells our technology.”

Genereux said the partnership will help IBM and Nirvanix compete against storage vendors pushing cloud implementations, and gives Nirvanix extra ammunition to go up against Amazon S3, Microsoft Azure, and Google Cloud Storage services.

“There’s a big difference when IBM walks in and tries to sell the service than when we walk in as a startup,” he said.

Genereux also said IBM is looking to deliver a true cloud service, unlike storage vendors who try to sell what he calls “cloud in a box” by cloud-washing their hardware products.

“IBM Global Services and Nirvanix are both services companies, and we’re in complete synchronization selling services in a pay-by-the-drink model,” he said.


October 7, 2011  4:32 PM

Storage technology adoption is a slow process



Posted by: Randy Kerns
new storage technologies, solid state storage, storage adoption rates

In the last few years we’ve seen advances in storage technology that have tremendous potential for IT customers. Some of these are enabled by investments made in developing flash solid-state drive (SSD) technology and adapting it to enterprise storage systems.

These new storage technologies improve the processes of storing and retrieving information. They reduce costs, and lead to greater storage efficiencies and capabilities.

Notable storage technologies that have been delivered relatively recently to customers include:

• Tiering products with the ability to place or move data based on a probability of access to maximize the performance of the storage system (a minimal sketch of this approach follows the list).
• All-SSD storage systems designed for near-zero latency to provide the highest level of performance in a storage system.
• Faster networks and network convergence, where the pipes used in storage to move data allow greater bandwidth and bring the ability to standardize on one type of infrastructure throughout an organization.
• SSD add-in technology, where the non-volatility and performance of SSDs can be exploited in a direct manner.
• Forward error-correction technology as a new way to protect data from a failure.
• Scale-out NAS systems to address enterprise demands for the volumetric increase in unstructured data to be stored.
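
As a rough illustration of the first item, here is a minimal sketch of placing data on tiers by access frequency; the tier names and thresholds are assumptions for illustration only:

    from collections import Counter

    # Illustrative tiers, hottest first; real systems weigh far more signals than raw hit counts.
    TIERS = ["ssd", "fast_disk", "capacity_disk"]

    access_counts = Counter()   # block_id -> recent access count

    def record_access(block_id: str) -> None:
        access_counts[block_id] += 1

    def choose_tier(block_id: str, hot: int = 100, warm: int = 10) -> str:
        """Place a block on the tier its recent access frequency suggests."""
        hits = access_counts[block_id]
        if hits >= hot:
            return TIERS[0]
        if hits >= warm:
            return TIERS[1]
        return TIERS[2]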

There’s another type of product that has not received as much attention, perhaps because it is not simply a matter of using a new technology. That is the process of automating information management to make it easier for administrators to understand the value of data. This area may be more difficult to master, but new developments may yield greater economic value – primarily in operational expense – than some of the newer technologies.

An example of automated management is what’s called “Active Management.” It is similar to Storage Resource Management (SRM), but besides monitoring storage resources, it takes automated actions based on conditions and rules defined by default or through customization by the storage administrator.
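
A minimal sketch of that condition-and-action pattern might look like the following; the rules, metric names and actions are hypothetical examples, not any vendor’s actual interface:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        condition: Callable[[dict], bool]   # evaluated against a metrics snapshot
        action: Callable[[dict], None]      # fired when the condition holds

    def alert(metrics: dict) -> None:
        print(f"ALERT: pool at {metrics['pool_used_pct']}% capacity")

    def migrate_cold_data(metrics: dict) -> None:
        print("Starting automated migration of cold data to the capacity tier")

    # Default rules; an administrator could override or extend them.
    RULES = [
        Rule("capacity-warning", lambda m: m["pool_used_pct"] >= 85, alert),
        Rule("auto-migrate", lambda m: m["pool_used_pct"] >= 90, migrate_cold_data),
    ]

    def evaluate(metrics: dict) -> None:
        """One monitoring pass: unlike plain SRM reporting, matching rules take action."""
        for rule in RULES:
            if rule.condition(metrics):
                rule.action(metrics)

    evaluate({"pool_used_pct": 92})   # fires both rules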

Yet despite their clear benefits, the adoption rate for these technologies and processes will be slow. That’s mainly because IT is conservative in adopting new concepts and technologies. This does not mean that the technologies won’t be embraced and be successfully deployed. It will just take longer.

The excitement around a new technology needs to be kept in perspective with how long it takes IT to deploy it successfully. The technology adoption rate reflects IT’s inertia and conservative nature when handling the critical task of storing data.

The long adoption cycle can kill start-up companies that can’t get the investment required to reach profitability. Many investors have an aggressive profile that defies the reality of adoption rates. Larger companies can weather the conservative adoption more successfully, but with much internal angst.

But no matter how great a new storage technology may be, customers need to understand it well enough to make sound investment decisions before it can take hold in data centers.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


October 6, 2011  1:05 PM

FlashSoft makes quick upgrade to SSD-caching software



Posted by: Dave Raffo
FlashSoft, PCIe flash, solid state storage, SSD caching software

Startup FlashSoft, which was first out with flash caching software, is looking to stay ahead of impending competition with a major upgrade to its FlashSoft SE barely four months after making the software generally available.

FlashSoft today said FlashSoft SE 2 is in beta, with support for Linux, larger solid-state drives (SSDs) and read-only cache. FlashSoft SE improves application performance with solid-state storage by turning SSD and PCI Express (PCIe) server flash into a cache for the most frequently accessed data.

FlashSoft developers have been busy. CEO Ted Sanford claims 800 feature enhancements in version 2. Besides Linux support, the biggest improvements are an increase in the maximum cache from 256 GB to 1 TB (it now can accelerate up to 10 TB of active data), and the ability to detect a failing SSD, issue an alert, and switch the cache to pass-through mode to avoid writing data to any SSD without a redundant backup. FlashSoft SE 2 also has a new GUI that provides runtime data and statistics through the Windows Management Instrumentation (WMI) API.
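
The failure-handling behavior is easier to picture with a short sketch. This is not FlashSoft’s code; the class, the error counting and the dict-like devices are assumptions used only to show the pass-through idea:

    class SsdCache:
        """Caches reads on an SSD and drops to pass-through if the SSD starts failing."""

        def __init__(self, ssd, backend, max_errors: int = 3):
            self.ssd = ssd              # fast device (SSD/PCIe flash), assumed dict-like
            self.backend = backend      # primary storage, assumed dict-like
            self.max_errors = max_errors
            self.errors = 0
            self.pass_through = False

        def read(self, key):
            if not self.pass_through:
                try:
                    if key in self.ssd:
                        return self.ssd[key]        # cache hit
                    value = self.backend[key]
                    self.ssd[key] = value           # populate the cache on a miss
                    return value
                except OSError:
                    self._record_error()
            return self.backend[key]                # pass-through: SSD no longer trusted

        def _record_error(self):
            self.errors += 1
            if self.errors >= self.max_errors:
                self.pass_through = True
                print("ALERT: SSD failing; cache switched to pass-through mode")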

The software now supports read-only caching as well as read-write caching. Sanford said read-write caching works best for most applications, but apps with large read requirements such as video can benefit from read-only. FlashSoft also increased its read cache performance for large files and data sets.
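
The difference between the two modes comes down to the write path; a small, hypothetical sketch (reusing the dict-like devices from the previous example):

    def write(key, value, cache, backend, mode: str = "read-write"):
        """In read-write mode new data is cached as it is written; in read-only
        mode writes bypass the SSD and only later reads populate the cache."""
        backend[key] = value
        if mode == "read-write":
            cache[key] = value   # read-heavy workloads such as video streaming may
                                 # prefer "read-only" so writes don't churn the cache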

In addition, FlashSoft upgraded FlashSoft SE-V for virtual servers to 2.0, adding support for virtual servers running on Linux and Windows.

FlashSoft SE pricing starts at $2,000 and depends on the number of servers accelerated and the type of SSDs used. FlashSoft sells FlashSoft SE as standalone software, but Sanford said he is working on partnerships to bundle the software with SSDs.

Sanford said the new features were mostly suggestions from early customers and organizations evaluating FlashSoft SE software. FlashSoft is probably also hearing footsteps from other flash cache competitors entering the market.

EMC’s PCIe server-side cache “Project Lightning” product that uses EMC’s FAST software to improve performance is due out by the end of the year, as is Fusion-io’s ioCache that uses software that Fusion-io acquired from early FlashSoft competitor IO Turbine. SSD vendor STEC is sampling its EnhanceIO SSD Cache Software to partners and early customers, startup VeloBit is in beta with its SSD caching software, and Nevex and perhaps other startups are on the way with competing products.

“We feel like it’s a significant accomplishment that we’re shipping a 2.0 release in advance of anybody else shipping 1.0,” Sanford said. “We believe we will continue to advance the art. Leveraging flash as cache in existing servers or being added to new servers coming out is a large market. When you can improve an application’s performance by three to five times, that’s a fundamentally strong value proposition. There will be room for significant success for several companies.”


October 4, 2011  2:59 PM

DataCore adds support for cloud as storage tier



Posted by: Dave Raffo
auto tiering, cloud backup, datacore, storage cloud gateway, storage virtualization, twinstrata

When DataCore added automated tiering to its SANsymphony-V storage virtualization software in July, it left out support for one tier – the cloud.

Today, DataCore addressed that omission through a partnership with cloud storage gateway vendor TwinStrata.

SANsymphony-V virtualizes storage across pools of heterogeneous systems, adding management features such as thin provisioning, RAID striping, asynchronous replication, and snapshots. The new tiering feature lets customers dynamically move disk blocks among different pools of storage devices.

Beginning in late October, when a customer purchases SANsymphony-V — which DataCore calls a “storage hypervisor” — it will include a 1 TB version of TwinStrata’s CloudArray virtual appliance at no extra cost. That lets DataCore customers move data off to the cloud, although they need a subscription with a cloud storage provider such as Amazon S3 or Nirvanix. A DataCore customer can also go beyond a 1 TB gateway by upgrading the appliance through TwinStrata, which charges $4,995 for unlimited capacity. The CloudArray software deduplicates, compresses and encrypts data before moving it to the cloud.
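
The order of operations before data leaves the site can be sketched roughly as follows. The hashing and compression calls are stand-ins, and the encrypt and upload callables are hypothetical, not TwinStrata’s actual algorithms or APIs:

    import hashlib
    import zlib

    seen_chunks = set()   # fingerprints of chunks the cloud copy already holds

    def send_chunk(chunk: bytes, encrypt, upload) -> None:
        """Dedupe, compress, then encrypt a chunk before it leaves the site.

        `encrypt` and `upload` are hypothetical callables: encrypt would wrap a
        real cipher, and upload would wrap a PUT to a provider such as Amazon S3
        or Nirvanix. Duplicate chunks are skipped entirely.
        """
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint in seen_chunks:        # dedupe: identical chunk already stored
            return
        seen_chunks.add(fingerprint)
        compressed = zlib.compress(chunk)     # shrink before paying for WAN and cloud capacity
        upload(fingerprint, encrypt(compressed))

    # Example with trivial stand-ins:
    # send_chunk(b"block of data", encrypt=lambda b: b, upload=lambda k, v: print(k, len(v)))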

TwinStrata also sells its cloud gateway as an appliance. DataCore CEO George Teixeira said SANsymphony-V will also work with the hardware appliance, using it as cache to speed backups.

“Now we’ve allowed a cloud tier to be part of our storage hypervisor,” Teixeira said. “When data gets to a lower tier, it can be put on an iSCSI device that is actually a cloud disk.”

There are other cloud gateway products on the market, but Teixeira said he picked TwinStrata because it supports iSCSI while most of the others are for files and backup. As the name implies, SANsymphony-V is a SAN application, so it required a block storage gateway.

“Most of them [gateways] are doing things at the file system,” Teixeira said. “These guys [TwinStrata] present an iSCSI disk. We can use storage virtualization across all disk and this looks like another disk we are auto-tiering, so it plays into our model. This doesn’t require a lot of thinking on the part of the customer. It’s just another tier, and you can choose to pick a pay-as-you-go model.”

Although TwinStrata can handle primary storage, Teixeira said he expects his customers to use the cloud mostly for backup and archiving. “I think we have some ways to go to get to primary storage in the cloud,” he said. “We’re looking mostly at backup, archiving and scratch storage. We think most production data will stay on-premise, but why not put non-production and backup data on the cloud?”

TwinStrata is looking to use partnerships to get its gateway into the market, even if it has to give it away at first. Last month the startup began offering a free 1 TB CloudArray appliance to Veeam Backup & Replication customers who want to back up to the cloud.

Gartner research director Gene Ruth said organizations are interested in moving to the cloud, but are still looking for the best way to go about it. He said storage virtualization and gateways are two potential starting points.

“It’s hard for people to get their arms around the idea that they’ll put all their data out on the cloud, but they know they have some data they can put out there,” he said. “I see storage virtualization in general as one of those building blocks to help consolidate a cloud storage environment around a common provisioning point. It’s not the end-all, but it’s a good start.”

Ruth said he also sees gateways as key enablers of cloud storage, and not just as standalone devices.

“It seems a pretty obvious step for major vendors to put that [gateway] functionality into disk arrays and file servers,” he said. “I don’t think it’s that difficult to add a gateway to their arrays.”


October 3, 2011  1:18 PM

Storage companies have many faces



Posted by: Randy Kerns
storage vendors

Large companies are represented by many people filling various roles to the outside world. These people are trying to accomplish different things. A marketing person is trying to create information that will highlight the company and give customers a reason to consider or continue with its products. Sales people are on the front lines with the customers, trying to influence them with the company’s offerings. Press and analyst relations people try to pass along useful information and increase the awareness of the company and its products.

At any given time, all of these people serve as the face of the company they represent.

This is no different in the storage industry than with other types of companies. But because information is stored for a long time, customers may not interact with storage companies as frequently. That means their impressions could have a longer term impact.

As an example, I was working with an IT director of a major company recently and the director was upset with what a salesperson had done in engaging the CIO of the company. The IT director did not have a problem with the specific person, but with the company. This director vowed to not buy a solution from that company for the problem we were trying to solve. It didn’t matter whether the sales person was using company-endorsed sales practices or not. A new barrier had been created.

This isn’t an isolated incident. I find that the negative perception caused by an individual can affect a sale even when the customer knows the vendor representative doesn’t represent the entire company. The customer might consciously or sub-consciously associate the vendor with one person, and the way that the customer has been dealt with may impact all future relations with that vendor. That could impact the customer’s ability to make objective decisions. But that is human nature, and overcoming it requires additional thought and effort.

The lesson here is that at any moment during external communications, the company employee must remain keenly aware that he or she is the “face of the company” right then. Saying outrageous things or taking a condescending attitude ultimately reflects on the company. It only takes one bad encounter to give someone a negative view. In the case of storage, this view could last for years.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


September 30, 2011  2:25 PM

Storage Headlines for September 30, 2011



Posted by: Mkellett
storage headlines

Check out our Storage Headlines podcast, where we review the top stories of the past week from SearchStorage.com and Storage Soup.

Here are the stories we covered in today’s podcast:

(0:18) Violin Memory launches all-flash storage for the enterprise
(1:24) FalconStor founder Huai found dead
(2:17) Arkeia adds dedupe, SSDs to backup appliances
(3:12) QLogic takes another whack at converged storage networks
(4:01) Storage Products of the Year 2011


September 29, 2011  1:35 PM

QLogic takes another whack at converged storage networks



Posted by: Dave Raffo
16 Gbps Fibre Channel, converged networks, fcoe, storage networking

QLogic is taking the stance that having multiple personalities is the sane way to approach converged storage networking. With Fibre Channel (FC) remaining the dominant protocol and Ethernet becoming a better candidate for SANs, QLogic has new gear that supports the latest flavors of both.

The storage networking vendor updated its product platform to 16 Gbps Fibre Channel this week, including a switch that supports FC and 10 Gigabit Ethernet (10GbE) ports to give it what QLogic calls “dual personalities.” QLogic also launched its 8300 Series Converged Network Adapter (CNA) that supports Ethernet, Fibre Channel over Ethernet (FCoE) and iSCSI, and the 2600 Series 16 Gbps FC HBA.

The Universal Access Point 5900 (UA5900) can be configured to run 16 Gbps Fibre Channel or 10 GbE traffic. Customers can start with 24 device ports and grow to 68 ports by adding licenses. Four of the ports can be used as 64 Gbps Fibre Channel trunking ports, and the switches can stack to 300 device ports. The UA5900 can be a Fibre Channel or Ethernet edge switch, and, with a Converged Networking license, it can serve as a top-of-rack FCoE switch to compete with Brocade’s 8000 and Cisco’s 5548UP devices.

QLogic also said it would bring out an intelligent storage router – called the iSR6200 – with support for Fibre Channel, FCoE and iSCSI. The router is designed for SAN-over-WAN connectivity.

The UA5900 and adapters are expected to ship through QLogic’s OEM and channel partners in early 2012, with the iSR6200 expected late next year.

QLogic was one of Cisco’s early allies in delivering FCoE gear years ago, and is on its third generation of converged networking devices. But FCoE has gained little adoption and Fibre Channel isn’t going away. QLogic execs say they expect Fibre Channel to remain strong while FCoE is a longer term item for many organizations. “We expect over the longer period, FCoE will gain momentum,” QLogic director of product marketing Craig Alesso said. “But Fibre Channel is still the workhorse for most enterprises.”

When FCoE does gain momentum, what role will hardware adapters play? Intel has launched software FCoE initiators that use host processing power and work with any network adapters. Intel’s plan is to eliminate the need for CNAs, but Alesso said QLogic’s adapters will have a big role in running FCoE. He maintains that CNAs are better suited for I/O processing and server CPUs should be used for applications.

“People can run FCoE initiators, but there’s a [performance] cost,” he said. “We free up servers to do what customers want to do with servers – run multiple virtual machines and multiple applications. The CPU should be used for running applications, not the I/O. We should run the I/O. Also, with [software] initiators, you lose management. You don’t have the common look and feel among management utilities.”


September 28, 2011  12:43 PM

Arkeia adds dedupe, SSDs to backup appliances



Posted by: Dave Raffo
Arkeia, backup appliance, cloud backup, data deduplication, disaster recovery

Arkeia Software CEO Bill Evans has watched Symantec roll out a steady stream of backup appliances over the last year, and he asks, “What took so long?”

Arkeia began delivering its backup software on appliances four years ago, and this week launched its third generation of appliances. They include the data deduplication that Arkeia added to its software a year ago, solid state drives (SSDs) to accelerate updates to the backup catalog, and up to 20 TB of internal disk on the largest model.

“Since 2007, we’ve been telling everybody that appliances would be big,” Evans said. “Symantec has validated the market for us.”

Evans said about 25% of Arkeia’s customers buy appliances. Because they take less time to set up and manage, he said appliances are popular in remote offices and among organizations without much IT staff.

The new appliances are the R120 (1 TB usable), the R220 (2 TB, 4 TB or 6 TB), the R320 (8 TB or 16 TB) and the R620 (10 TB or 20 TB). The two smaller models include optional LTO-4 tape drives, while the two larger units support RAID 6 and 8 Gbps Fibre Channel to move data off to external tape libraries. They all include Arkeia Network Backup 9 software and built-in support for VMware vSphere. Arkeia’s progressive dedupe for source and target data is included with the R320 and R620, and optional with the R220. Pricing ranges from $3,500 for the R120 to $47,000 for the R620 with 20 TB.

The R620 includes 256 GB SSDs, enough to manage the backup catalog. “We would never put backup sets on SSDs, that would be too expensive,” Evans said. “But it makes sense to use SSDs to manage our catalog, which is a database of our backups. The catalog is random, and updating the catalog could be a performance bottleneck.”

“If we were simply a cloud gateway and combined SSDs and disk in a single package, we wouldn’t know what incoming data should live on SSD and what should live on disk. It all looks the same. Because we wrote the [backup] application, we could say ‘this data lives on disk and this data lives on SSD.’”
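
Evans’ point about application-level knowledge can be pictured with a tiny routing sketch; the record types and device names are illustrative assumptions:

    def placement_for(record_type: str) -> str:
        """Route writes by what the backup application knows about the data:
        catalog updates are small and random, so they go to SSD; bulk backup
        sets are large and sequential, so they stay on spinning disk. A generic
        gateway without this context would have to treat both the same."""
        return "ssd" if record_type == "catalog" else "disk"

    assert placement_for("catalog") == "ssd"
    assert placement_for("backup_set") == "disk"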

For disaster recovery, the appliances can be used to boot a failed machine by downloading software from a backup server to the failed machine. The appliances can also replicate data to cloud service providers.

