Storage Soup

April 11, 2013  7:09 AM

Don’t forget management when developing object storage systems

Randy Kerns

The massive amount of unstructured data being created has vendors pushing to deliver object storage systems.

There are many object systems available now from new and established vendors, and others are privately talking about bringing out new object systems soon.

Objects, in the context of the new generation of object storage systems, are viewed as unstructured data elements (think files) with additional metadata. The additional metadata carries information such as the required data protection, longevity, access control and notification, compliance requirements, original application creation information, and so on. New applications may directly write a new form of objects and metadata, but the current model is that of files with added metadata. Billions of files. Probably more than traditional file systems can handle.
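The files-plus-metadata model is easy to picture in code. Here is a minimal Python sketch (the field names are hypothetical, not any vendor's schema) of an object carrying the kinds of metadata described above:

```python
import hashlib
import time

def make_object(data: bytes, name: str, app: str,
                retention_years: int, copies: int) -> dict:
    """Illustrative only: pair unstructured data with policy-bearing metadata."""
    return {
        "object_id": hashlib.sha256(data).hexdigest(),  # content-derived ID
        "name": name,
        "size": len(data),
        "created": time.time(),
        "metadata": {
            "source_application": app,           # original application creation info
            "retention_years": retention_years,  # longevity / compliance requirement
            "protection_copies": copies,         # required data protection
            "access_control": ["owner"],         # simplistic ACL placeholder
        },
        "data": data,
    }

obj = make_object(b"quarterly report", "q1.pdf", "finance-app", 10, 3)
```

In a real object store the data and metadata would sit behind a REST API rather than in a dict, but the shape is the same: opaque data plus metadata that drives management decisions.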

Looking at the available object storage systems leads to the conclusion that these systems are not developed to meet real IT needs. Vendors are addressing the issue of storing massive numbers of objects (and selling lots of storage), but the real problem is organizing the information. File systems usually depend on users and applications to define the structure of information as they store it. This is usually done in a hierarchical structure that is viewed through applications, the most ubiquitous being Windows Explorer.

We need a way to make it easier to organize information according to a different set of criteria, such as the type of application, the needs of the user viewing the information, the age of the information, or other selectable attributes. Management should include controls for protection and let users selectively restore previously protected copies of information. Other information management functions should be available at the control view rather than through the management interfaces of other applications. This seems only natural, but it has not turned out this way.

Vendor marketing takes advantage of opportunities to ride a wave of customer interest. Vendors will characterize some earlier developed product as an object file system just as today almost everything that exists is being called “software-defined something.” But the solution for managing the dramatic growth of unstructured data must be developed specifically to address those needs and include characteristics to advance management of information as well as storage.

The investment in addressing object management needs to be made; otherwise, object storage systems will be incomplete. Linking information management and object storage seems like a major advantage for customers. This will be an interesting area to watch develop.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

March 31, 2013  10:03 PM

Silver Peak launches more VRX WAN optimization software

Sonia Lelii

Silver Peak Systems Inc. is building out its Virtual Acceleration Open Architecture (VXOA), which allows storage administrators to bypass network administrators when they need to improve application performance through WAN acceleration.

The company announced Web-based downloadable software products aimed at accelerating offsite data replication workloads. The Silver Peak VRX-2, VRX-4 and VRX-8 software are virtual WAN-optimization products that support VMware vSphere, Microsoft Hyper-V, Citrix Xen and KVM hypervisors. The virtual WAN optimization software is compatible with IP-based array replication software from Dell, EMC, IBM, Hitachi Data Systems, Hewlett-Packard and NetApp.

The Silver Peak VRX-2 can handle up to  of replication throughput per hour, while the VRX-4 can handle 400 GB of replication throughput per hour and the VRX-8 handles up to 1.5 TB per hour. Annual licenses cost $2,764, $8,297 and $38,731, respectively.
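Those throughput figures translate directly into replication-window math. A rough Python sketch, using only the per-hour numbers quoted above and ignoring real-world factors such as deduplication and link contention:

```python
# Published replication throughput, in GB per hour (VRX-8: 1.5 TB ~= 1500 GB).
THROUGHPUT_GB_PER_HOUR = {"VRX-4": 400, "VRX-8": 1500}

def hours_to_replicate(dataset_gb: float, model: str) -> float:
    """Hours to move a change set offsite at the rated throughput."""
    return dataset_gb / THROUGHPUT_GB_PER_HOUR[model]

# e.g. a 3 TB nightly change set:
hours_to_replicate(3000, "VRX-8")  # 2.0 hours
hours_to_replicate(3000, "VRX-4")  # 7.5 hours
```

This is the kind of terabytes-per-hour arithmetic the "shrinking RPO" pitch quoted later in the post is aimed at.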

Silver Peak CEO Rick Tinsley said the VRX-8 is positioned more for large deployments such as EMC's Symmetrix Remote Data Facility (SRDF) asynchronous product, RecoverPoint and Data Domain backup. The smaller VRX versions are tailored more for Dell EqualLogic replication.

In December 2012, Silver Peak Systems brought out Virtual Acceleration Open Architecture 6.0 WAN optimization software with expanded support for virtualization hypervisors. The WAN acceleration software, which operates on Silver Peak's NX physical and VRX virtual appliances, is part of the company's strategy to give storage administrators the ability to improve application performance and reduce bandwidth costs more efficiently, without requiring network administrators to reconfigure network switches and routers.

“Back in December, we did make enhancements to our software that made it easier for storage managers to deploy our technology, which we call our Velocity initiative, but it was not productized specifically for storage managers at that time,” according to a Silver Peak spokesperson. “This is the next phase and culmination of those Velocity developments, where these new VRX software products are uniquely priced and positioned with the storage managers in mind by addressing storage concerns such as ‘shrinking RPOs’ and how many terabytes-per-hour can be moved to an offsite location.”

In March, Silver Peak announced its Virtual Acceleration Open Architecture (VXOA) software can be used for WAN optimization in Amazon cloud deployments for off-site replication and to lower disaster recovery costs.

March 25, 2013  11:20 AM

Measuring storage value vs. storage cost

Randy Kerns

A recent conversation I had about the cost of storage made me think that talking about the cost of storage is the wrong way to approach it. The discussion should be about the value that storage delivers.

Trying to explain the complex nature of meeting specific demands for storing and retrieving information and advanced features for management and access is difficult when discussing it with someone who is focused only on how much it costs to store the information.

When comparing storage costs, there is an implicit assumption that all factors are equal in storing and retrieving information. But several factors should take priority:

• How fast must the information be stored and retrieved? The ingestion rate (how fast data arrives) and how long it takes for the data to be protected on non-volatile media with the required number of copies has a big impact on applications and potential risk. Retrieving information is about how fast the data can be accessed (latency) and the amount of IOPS or continuous transfer (bandwidth) that can be sustained.

• What type of protection and integrity are required? Information has different value and the value changes over time. Information protection may be as simple as a single copy on non-volatile storage or as complex as multiple copies with geographical dispersion. Integrity is another concern. Protection from external forces so the loss of one or more bits of data can be detected and corrected is highly valuable and often assumed without understanding what is involved. Additional periodic integrity checking is another assurance for the information. It also answers the question posed for many in IT: “How do you know that is the same data that was written?”

• The longevity of the information can have a major influence on storing and retrieving. A significant percentage of information is kept more than 10 years. Compliance requirements dictate the length of time and manner of control of information in regulated industries. Storing information on devices that have limited lifespans (such as when you can no longer purchase a new device to retrieve the information) means that other considerations must be made. If the information can be transparently and non-disruptively migrated to new technology without additional administrative effort or cost, that should be a factor in the selection process.
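The first two bullets can be turned into rough arithmetic. This Python sketch is a hedged worst-case model (it assumes the extra protection copies are written sequentially after ingest; real systems overlap these steps):

```python
def time_to_protect(dataset_gb: float, ingest_gb_per_s: float,
                    copies: int, copy_gb_per_s: float) -> float:
    """Seconds until the data sits on non-volatile media with all required copies.

    Worst-case model: additional copies are made one after another
    once the initial ingest completes.
    """
    ingest_time = dataset_gb / ingest_gb_per_s
    protect_time = (copies - 1) * dataset_gb / copy_gb_per_s
    return ingest_time + protect_time

# 100 GB arriving at 1 GB/s, with 3 required copies replicated at 0.5 GB/s:
time_to_protect(100, 1.0, 3, 0.5)  # 500.0 seconds
```

The gap between ingest time (100 s) and full protection (500 s) is exactly the window of risk the first bullet describes.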

Here’s an example of how this works with a real IT operation that needed to increase its transactions per second. Increasing the number of transactions allowed the organization to get more done over a period of time, expand its business and provide better customer service. In this case, more capacity was not the issue – the capacity for the transaction processing was modest. After evaluating where the limitations were, it was clear that adding non-volatile solid state technology for the primary database met and even exceeded the demands for acceleration. Storage selection was not based on the cost as a function of capacity ($/GB). It was based on the value returned in improving the transaction processing and gaining more value from the investments in applications and other infrastructure elements.

Storage must be evaluated on the value it brings in the usage model required. Comparing costs as a function of capacity can make for bad judgments or bad advice.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

March 21, 2013  1:02 PM

Pure Storage hears footsteps, offers money-back guarantee

Dave Raffo

With large established vendors planning to launch all-flash storage arrays, startup Pure Storage is offering a money-back guarantee to customers who want to try their systems now.

Pure calls the promotion “Love your storage” and is telling customers they can return their FlashArray for a full refund within 30 days if they’re not happy for any reason.

Matt Kixmoeller, Pure’s VP of product management, said Pure’s guarantee is different from those of other vendors, which offer refunds only if certain performance conditions are not met. He said Pure will cancel the sale unconditionally, as long as the array isn’t damaged.

“If a customer doesn’t love Pure Storage, we don’t deserve to have their money,” he said. “We don’t define love, we let the customer define love. If they’re not happy for any reason, all they have to do is raise their hands and we will return their money.”

Pure and a few other startups such as Nimbus Data, Violin Memory, Whiptail and Kaminario have had the all-flash array market to themselves for the past year or so. But that is changing. IBM already has Texas Memory Systems, EMC is preparing to make its XtremIO arrays generally available in a few months and NetApp has pre-announced its FlashRay that won’t go GA until 2014. Also, Hitachi Data Systems is working on an all-flash array and Hewlett-Packard is making its 3PAR StoreServ arrays available with all flash.

But it was likely the recent announcements that NetApp and EMC made that spurred Pure to its money-back offer. NetApp and EMC wanted to make it clear they will enter the market, which could prompt some of Pure’s would-be customers to wait.

“The reason they did pre-announcements was they want to freeze the market, but customers are smarter than that,” Kixmoeller said. “We suggest customers get one of ours and they try it out.”

Besides fending off vendors that don’t have their products out yet, Pure and the other startups find their potential customers wondering about their long-term fate. The all-flash startups are well funded, but a lot of people in the industry are waiting for the next acquisition. Hybrid flash startup Starboard Storage has publicly admitted it is for sale, as Texas Memory did before IBM acquired it. But Pure execs say they are committed to staying independent.

CEO Scott Dietzen hears so many acquisition questions that he wrote a blog this week claiming he has refused to even discuss deals with the large companies that have approached him, and has no intention of selling.

“As more companies get acquired, we get more customers asking what our long-term future is,” Kixmoeller said. “We’re committed to growing our company.”

March 19, 2013  10:01 AM

Fusion-io grabs more flash software, with other acquisitions to follow

Dave Raffo

Fusion-io CEO David Flynn said Linux and open source have emerged as the keys to software development for flash, and that is why his company this week acquired U.K.-based ID7.

ID7 developed the open source SCSI Target Subsystem (SCST) for Linux, a SCSI target subsystem that allows companies to turn any Linux box into a storage device. It links storage to the system’s SCSI drivers through Fibre Channel, iSCSI, Ethernet, SAS, Fibre Channel over Ethernet and InfiniBand to provide replication, thin provisioning, deduplication, automatic backup and other storage functions.

Fusion-io already licenses SCST for its ION Data Accelerator virtual appliance that turns servers into all-flash storage devices. But the ID7 acquisition gives Fusion-io greater control of the SCST technology, as well as the engineers who developed it.

Flynn said Linux is the crucial operating system for flash developers, and SCST is used by most vendors who build flash storage systems.

“Linux is the new storage platform and open is the new storage architecture,” Flynn said. “Anybody building a flash memory appliance is using Linux. We believe software-defined storage systems are the future, Linux is the foundation of that, and we have accumulated many key Linux kernel contributors.”

Flynn won’t say how much Fusion-io paid for ID7 or even how many engineers it will add from the acquisition. He did say he is committed to honoring ID7’s license deals, maintaining an open source version of SCST and contributing to the open source distribution.

“We believe in open systems,” he said. “We will continue to support the industry, competitors included. But our only real competitor is EMC.”

EMC positions its new XtremSF PCIe cards – sold through OEM deals with other vendors – as Fusion-io killers. The SCST web site lists EMC as a user of the technology.

Flynn said he expects Fusion-io to be an active acquirer of flash technology that it does not develop internally, such as the caching software it gained by buying startup IO Turbine for $95 million in 2011.

“Flash changes the game in a lot of ways,” Flynn said. “The industry is growing so quickly it would be silly to presume we can build everything internally.”

March 18, 2013  8:01 AM

Starboard Storage puts itself up for sale, throws sales overboard

Dave Raffo

Starboard Storage is looking for a buyer or strategic partner to license its hybrid unified storage systems, 13 months after the re-launch of the startup previously known as Reldata.

Starboard has slashed sales and marketing staff, and notified its reseller partners that it would concentrate on developing its intellectual property instead of sales until it finds a buyer or OEM partner.

Tom Major, who joined Starboard as president in January, told StorageSoup the new strategy came after the company went looking for funding. He said Starboard’s investors, venture capitalists Grazia Equity GmbH and JP Ventures GmbH, were approached by strategic partners and decided to explore an acquisition. He said Grazia and JP Ventures have invested more money into Starboard to fund the transition period.

“We received interest from outside companies,” Major said. “Then we thought, ‘Who else might be interested?’ And that list gets long.”

Major said Starboard has received “more than one, but less than five” inquiries from suitors. The board will also pursue others in the industry. “I wouldn’t say a deal is imminent, but we are having conversations,” he said. “The board has decided to focus on technology and continue to develop it. We still have a small number of sales and marketing resources, but we’re not actively seeking resellers and VARs now. We are aggressively talking to companies that could take the technology to market through an acquisition or licensing arrangement.”

He said potential suitors include established storage vendors and others looking to get into storage, particularly solid-state storage.

All of Starboard’s AC Series of multiprotocol arrays use solid state drives and DRAM to accelerate reads and writes. Lee Johns, Starboard’s VP of product management, said the vendor will upgrade its operating system over the next few months with enhanced caching algorithms, multiple write caches and the ability to compress data on the cache.

“Our IP is in being able to effectively leverage high speed and lower speed media together,” Johns said.

Starboard built on unified storage technology sold by Reldata, and several Reldata executives – including CEO Victor Walker and CTO Kirill Malkin – were part of the original Starboard team in February 2012. But the current Starboard team has a strong influence of former LeftHand Networks execs, including Major and CEO Bill Chambers. Johns also worked with LeftHand technology as director of product marketing for Hewlett-Packard after HP acquired iSCSI SAN vendor LeftHand.

March 12, 2013  3:48 PM

InfraScale tries to lure companies away from Dropbox with 1 year of free service

Sonia Lelii

InfraScale, Inc. is gunning for Dropbox. The newcomer is offering organizations a year’s worth of free online file sharing service for IT administrators who are willing to drop their Dropbox service.

InfraScale will give free FileLocker accounts with 100 GB of storage per user to Dropbox customers with between 250 and 500 employees. Dropbox is the leader in this crowded space, and InfraScale’s FileLocker is trying to set itself apart from the pack by emphasizing how rogue online file sharing accounts — also called shadow IT — present a security risk for companies.

“Dropbox says it has 95 percent of the organizations in the U.S.,” said Sheilin Herrick, InfraScale’s director of marketing. “So this is primarily for IT administrators that want to drop Dropbox.”

The offer is good until April 30.

Dropbox moved to strengthen its security features in the latest version of its business-focused Dropbox for Teams service released last month.

InfraScale has focused on security from the start with FileLocker, which launched in November 2012. FileLocker has a three-tier security model, in which the service is installed behind the company’s firewall for private cloud deployments. It also secures data in transit with a 256-bit SSL connection and uses 256-bit AES encryption for data at rest.

“We want to help IT managers deal with shadow IT,” said Stephen Gold, InfraScale’s director of business development.

The service allows IT managers to control permissions, set up bulk accounts, delete files and accounts, and apply other centralized controls. Fueled by the BYOD movement, many employees have started to deploy online file sharing products like Dropbox as a way to synchronize data with their mobile devices.

“But rogue accounts represent a serious security and compliance risk to organizations. When end-users store company files in the OFS provider’s data center in a public cloud, the files are placed outside the reach of the organization’s privacy policies and security controls,” according to an Enterprise Strategy Group report titled “Spotting and Stopping Rogue Online File Sharing.”

March 12, 2013  11:51 AM

SwiftStack enters software-defined storage race

Dave Raffo

SwiftStack, which claims to be building software-defined object storage, today said it has raised $7.6 million in seed and series A funding. CEO and founder Joe Arnold said the San Francisco-based vendor will have more to say about its product next month, describing it as Amazon “S3-style” but not S3-compatible because it uses a different API.

The funding is a tiny amount for a storage startup these days when flash vendors seem to write their own blank checks, but Arnold said it will be enough to expand the 14-person company’s sales and development teams.

According to SwiftStack’s funding release:

“The platform decouples the management from the underlying storage infrastructure, enabling customers to build pools of storage on commodity hardware. As a result, they are able to achieve greater scale, more flexibility and higher durability. SwiftStack’s storage system helps organizations with considerable amounts of data simplify operations to reduce overall operational costs.”

Arnold said several of SwiftStack’s engineers came from Rackspace, where they built a software-defined storage product based on OpenStack Swift. He said his company is going after a different target market than the other vendors selling object storage.

“We’re a software company,” he said. “We sell a decoupled storage controller that allows customers to take commodity hardware and use that to manage the infrastructure.”

By decoupled controller, he means a controller that coordinates all the nodes in a system by orchestrating data placement and providing a single pane of glass for managing the nodes.

According to information on its web site, the SwiftStack Node software can run as a service that streams monitoring information to the controller, and the SwiftStack Controller can be installed on-premises behind a company firewall.

SwiftStack will use a utility subscription pricing model. According to the web site, the first TB is free for 12 months. Beyond 1 TB, monthly subscriptions start at $10 per TB used for 100 TB and drop gradually to $3 per TB used beyond 1.3 PB.
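The published pricing gives only the endpoints ($10 per TB per month at up to 100 TB, $3 per TB beyond 1.3 PB), so the intermediate tiers in this Python sketch are hypothetical, but it shows the flat-rate-by-usage model:

```python
# Illustrative sketch of capacity-tiered subscription pricing. Only the first
# and last tiers come from the published figures; the middle two are made up.
TIERS = [
    (100, 10.0),          # up to 100 TB: $10 per TB per month
    (500, 7.0),           # hypothetical intermediate tier
    (1300, 5.0),          # hypothetical intermediate tier
    (float("inf"), 3.0),  # beyond 1.3 PB: $3 per TB per month
]

def monthly_cost(used_tb: float) -> float:
    """Flat per-TB rate selected by total usage (not a graduated bill)."""
    for ceiling, rate in TIERS:
        if used_tb <= ceiling:
            return used_tb * rate
    raise ValueError("unreachable: last tier has no ceiling")

monthly_cost(50)  # 500.0 per month
```

A graduated bill, where each tier's capacity is priced separately like tax brackets, would be computed differently; a single per-TB rate chosen by total usage is the simpler reading of the pricing described above.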

Mayfield Fund is SwiftStack’s lead investor with Storm Ventures and UMC Capital participating in the A round.

March 11, 2013  9:50 AM

Solid state storage: consider the long term

Randy Kerns

The interest in deploying solid state storage is still building, but there are already a handful of ways to introduce solid state technology into existing IT infrastructures:

• As a PCIe solid state memory card installed in a server with software to manage caching and sharing of data.
• As a caching appliance to accelerate certain applications.
• As an extended cache added to a traditional disk storage system.
• As a tier in a traditional storage system using solid state drives (SSDs) along with spinning disk drives. There may also be a traditional storage system with only SSDs installed.
• As a storage system specifically designed for all solid state, typically with solid state modules and a custom controller to manage the memory.

Solid state technology will continue to evolve over time as the value from performance acceleration and other benefits, such as reducing power, space and cooling while increasing reliability, justify further development. IT customers who purchase solid state storage systems need to realize that the systems are an investment that not only provides immediate benefits but also has a long-term positive impact. The investment may be optimized with operational changes and infrastructure improvements. The choice of product and vendor for this long-term decision must be considered carefully.

Some of the considerations include:

• Will the vendor’s system design remain operationally the same if the underlying solid state technology is updated with the latest developments? Today’s systems primarily use NAND flash, which will continue for years with improvements in durability and cost but will inevitably be replaced by another technology with greater advantages. IT should check whether the vendor can transparently introduce new solid state technology so the investment is protected. Vendors that focus only on flash may not have considered the long term.
• Does the solid state system fit seamlessly into the overall management environment? Simply put, does the management of the system work with the vendor’s other management tools, including top-level orchestration? This could require an exception now, but may change with further product development or with the next generation.
• Is the storage network attachment capable of delivering the latency and bandwidth the solid state storage system can achieve? Exploiting the high performance requires low latency, which can be achieved with a direct connection or through storage networks. You need to consider network performance and expandability for solid state storage.

Deployment of solid state storage systems will become more pervasive and will benefit from continued investment. It is important to take the long-term view when making a strategic decision about a product and vendor.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

March 5, 2013  2:56 PM

Fusion-io (Brand F) ready to rumble with EMC

Dave Raffo

Whenever EMC rolls out PCIe flash products, it paints a bull’s eye on Fusion-io.

Just as it did last year when it brought out VFCache, EMC compared benchmarks against Fusion-io today during a webcast hyping its XtremSF flash products. EMC marketing materials used in the webcast show its cards beating “Brand F” in a series of IOPS and latency results.

And as he did in response last year, Fusion-io CEO David Flynn said all the attention around XtremSF is good for his company. He pointed out that EMC is reselling PCIe cards that Fusion-io already competes with, and competes well enough to stand as the server-based flash market leader.

“We are quite flattered by EMC and its introduction of more products across the market we have created,” Flynn said. “EMC is making a renewed push to try and be relevant in server-side flash. They’ve incorporated three vendors – Micron, Virident and LSI – none of which have been competent at competing with Fusion-io. Now they’re trying to highlight those vendors’ competitive stance relative to us.”

Flynn said EMC is “cherry picking” its IOPS and latency numbers, mixing results from different partners that make them look good against Fusion-io instead of making apples-to-apples comparisons. He also said EMC’s benchmarks are more fitting for storage than for application server performance.

EMC isn’t the only vendor encroaching on Fusion-io’s turf. Most of the solid-state drive (SSD) vendors have added server-side flash, and flash array vendor Violin Memory launched its first PCIe flash cards this week. Flynn said Fusion-io’s early entrance into the market gives it an advantage not only in technology but in distribution partnerships.

“It’s one thing to have a component, it’s something else to have access to a market,” he said. “We have a sales team, but we also have partnered with the server vendors. All the server vendors and [storage vendor] NetApp have aligned themselves with Fusion-io. The only systems companies not aligned with Fusion-io are EMC and Oracle. Only EMC is an enemy, and we have them to thank for others aligning themselves with us. It’s a case of ‘My enemy’s enemy is my friend.’”

Fusion-io is making some impressive IOPS claims of its own. Flynn said the vendor will demonstrate one of its 365 GB ioDrive2 cards hitting 9.6 million IOPS on March 26 during a Technology Open House at its Salt Lake City, Utah, headquarters. He said that performance is enabled by Fusion-io APIs that integrate flash into host systems, as well as the vendor’s Auto-Commit Memory software. The APIs allow flash to bypass operating system bottlenecks, while Auto-Commit Memory is designed to maintain flash persistence in nanoseconds running on Fusion-io’s directFS, eliminating duplicate work between the host file system and the flash memory software.
