Storage Soup


January 20, 2012  1:07 PM

Nexenta gets $21 million in funding, seeks world domination with open storage

Dave Raffo

Nexenta scored a $21 million funding round this week, and the open-source ZFS-based software vendor will use the money to expand globally and market its new virtual desktop infrastructure (VDI) product.

Nexenta’s NexentaStor software runs on commodity servers, turning them into multiprotocol storage systems. Nexenta CEO Evan Powell said Nexenta software was sold in $300 million worth of its partners’ hardware deals last year. The startup has more than 250 resellers. The largest is Dell, which uses Nexenta software in its Compellent zNAS product.

Powell said 50% of Nexenta’s sales are already international, yet the vendor has only one person working outside the U.S. – in Beijing. He plans to add staff in China and open offices in Japan, the Netherlands and probably other countries.

On the product front, the vendor is preparing to launch NexentaVDI, a virtual appliance that integrates with VMware View. NexentaVDI lets customers quickly provision storage for virtual desktops and helps optimize performance by letting administrators set IOPS thresholds per desktop.
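Nexenta hasn’t said how those per-desktop thresholds are enforced. A common mechanism for capping IOPS is a token bucket, sketched below in Python; the class and parameter names are hypothetical illustrations, not NexentaVDI’s API.

```python
import time

class IopsLimiter:
    """Toy per-desktop token bucket -- hypothetical illustration only."""
    def __init__(self, iops_limit):
        self.rate = iops_limit        # tokens (I/Os allowed) added per second
        self.tokens = iops_limit      # start with a full bucket
        self.last = time.monotonic()

    def allow_io(self):
        """Return True if an I/O may proceed now, else False (queue it)."""
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One limiter per virtual desktop, e.g., 50 IOPS each:
desktops = {f"vd-{i}": IopsLimiter(50) for i in range(100)}
```

A noisy desktop simply drains its own bucket and waits, rather than starving its neighbors on shared storage.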

Nexenta previewed the VDI software during VMworld Europe in Copenhagen last October. NexentaVDI is in beta, and Powell said he expects to launch it around April.

Powell said another change is coming this year: he expects to see Nexenta software running on more solid-state device (SSD) storage systems. NexentaStor has been optimized to run on SSDs, but the hardware will continue to come from partners.

“As a software company, we can remove the pernicious vendor lock-in on storage,” Powell said. “Storage is one of the last bastions of lock-in business models. Customers want to know how much they’re going to pay for storage in the future, and there’s a pent-up demand to get back at storage vendors who have exploited their customers for 10 or 20 years. We publish our prices and we don’t lock you in [to hardware]. But users like to buy arrays, they want to buy a box, plug it in, see the lights blink, and they have storage. So we reach out to vendors who sell arrays.”

Nexenta could lose its biggest array partner, however. Dell has made it clear that it is integrating clustered NAS technology it acquired from Exanet into Compellent SAN arrays to make them multiprotocol systems. After that, will Dell need Nexenta?

Powell is hoping that Dell will continue offering zNAS as an option for Compellent. He said one prospective customer is looking at a multi-petabyte deployment including zNAS. “I believe there’s room for both proprietary scale-out NAS with Exanet and zNAS with NexentaStor,” Powell said.

We’ll have to wait to see if Dell agrees.

January 19, 2012  5:40 PM

All-flash storage array startup WhipTail secures funding

Dave Raffo

WhipTail, the all-flash storage array vendor tucked away in Whippany, N.J., closed a Series B funding round and revealed a high-profile customer this week.

WhipTail did not disclose the amount of its funding, but industry sources say it was about $9.5 million. That’s not in the same ballpark as the $35 million and $40 million funding rounds its rival Violin Memory secured last year, but WhipTail CEO Dan Crain said his company is close to profitable with close to 100 employees and is picking up about 20 customers per quarter.

“We are well-capitalized,” Crain said.

WhipTail bills its XLR8r as a cost-effective enterprise all-flash array built on multi-level cell (MLC) flash drives. The vendor goes after customers with virtual desktop infrastructures (VDI), but Crain said it serves many types of industries.

AMD’s System Optimization Engineering Department said it replaced 480 15,000 rpm Fibre Channel drives with WhipTail’s solid-state arrays for a 50-times improvement in latency and a 40% performance increase.
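For rough context, the spindle math below uses the article’s drive count plus a common rule-of-thumb figure of roughly 180 IOPS per 15K drive – an assumption, not a number from AMD or WhipTail.

```python
spindles = 480
iops_per_15k_drive = 180               # rule-of-thumb estimate, not AMD's figure
print(spindles * iops_per_15k_drive)   # ~86,400 IOPS across the whole array

# A single MLC flash array can exceed that total while serving I/Os in
# microseconds rather than milliseconds, which is broadly consistent with
# the 50x latency improvement AMD reported.
```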

AMD did not say how much flash capacity it bought from WhipTail, but Crain said its average deal is in the 25 TB to 30 TB range.

WhipTail isn’t the only all-flash array vendor out there. Nimbus Data, SolidFire, Texas Memory Systems and Violin all have all-SSD systems, Pure Storage is in beta, and the large storage vendors will likely follow. Unlike a lot of the all-flash vendors, though, Crain said, “We don’t compete on price. We solve a myriad of problems around performance.

“The field is still narrow for credible SSD manufacturers. The storage industry inherited NAND, and there is a lot of science and engineering that has to go into making NAND work in the enterprise,” he said. “We understand this stuff. We treat NAND and flash memory like flash, we don’t treat it like a hard disk.”


January 18, 2012  6:56 PM

Symantec gobbles up LiveOffice for $115 million

Sonia Lelii

Symantec Corp. plans to use its newly acquired LiveOffice, a cloud-based archiving service, to provide end users with better search and data analysis capabilities for legal documents stored in the cloud.

Symantec announced Monday that it acquired LiveOffice for $115 million in a transaction completed on Jan. 13. The deal gives Symantec an in-house, cloud-based archiving tool for legal documents, e-mail, file-sharing services and communications on social media sites such as Facebook, LinkedIn and Twitter. Symantec and LiveOffice have had an OEM relationship since 2010, and the archiving service was rebranded as EnterpriseVault.cloud in April 2011.

LiveOffice already has some level of integration with Symantec’s Enterprise Vault and Clearwell eDiscovery platform to provide email storage management, legal discovery and regulatory compliance. Now Symantec can more tightly integrate LiveOffice with Clearwell so end users can perform more detailed data analysis and generate narrower results when searching for legal documents. The archiving tool serves as the knowledge repository while the eDiscovery platform provides the analysis capability.

“When you are looking for these legal documents, it’s like trying to find a needle in a haystack,” said Brian Dye, vice president for Symantec’s Information Intelligence Group. “Many times in these cases what you are looking for boils down to four or five documents. If you can get tighter and tighter results, you are transferring less data.”

Symantec also plans to build a common user interface and account provisioning tool for LiveOffice and its anti-spam Symantec MessageLabs Email Security.cloud service.

“We don’t have a time frame [for delivering the enhancements] right now,” Dye said. “We will have one quickly.”

LiveOffice has nearly 20,000 customers, Forrester analyst Brian Hill wrote in a blog about the deal. The company “historically marketed to small- and mid-sized financial services firms. Over the past couple of years, however, the vendor has steadily bolstered its archiving and broader information governance functionality, lined up productive partnerships with major technology vendors, and met with success in selling to larger organizations across a wider set of vertical markets,” Hill wrote.


January 18, 2012  7:58 AM

Tuning storage and cars

Randy Kerns

There are similarities between the advances in storage systems and the advances we’ve seen in automobiles. When you’ve spent most of your life working on both, the similarities become noticeable.

Storage systems today focus on improving simplicity – simplicity from the standpoint of being easy to install and operate. Installation simplicity is measured by the number of steps, or the time it takes, to provision volumes or file systems.

Beyond that, storage systems simplify management with self-tuning capabilities. The tiering built into many of the more sophisticated storage systems is an example of simplified management. Tiering can be automated movement of data between different classes of storage devices – most commonly solid-state devices (SSDs) and high-capacity disks. It can also be done with caching on solid-state technology or DRAM. Most of these tiering features operate automatically, as the toy sketch below illustrates.
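A minimal sketch of the idea, assuming a simple promote-on-heat policy; real arrays use decaying heat maps, scheduled migration windows and far finer-grained policies than this illustration.

```python
from collections import Counter

access_counts = Counter()       # per-extent access tally for this interval
ssd_resident = set()            # extents currently living on the SSD tier
PROMOTE_THRESHOLD = 100         # accesses per interval (arbitrary choice)

def record_io(extent_id):
    """Called on every I/O to track how hot each extent is."""
    access_counts[extent_id] += 1

def migration_pass():
    """Periodically rebuild the SSD tier from this interval's heat data."""
    hot = {e for e, hits in access_counts.items() if hits >= PROMOTE_THRESHOLD}
    ssd_resident.clear()
    ssd_resident.update(hot)    # promote hot extents; the rest age to disk
    access_counts.clear()       # start a fresh monitoring interval
```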

These developments mean administrators no longer need specific training to handle many storage systems. The phrase used to describe the person who manages this storage is “IT generalist.” This development changes the requirements for IT staffing.

The analogy between storage systems and automobiles may be superficial, but it makes for an interesting discussion. Tuning used to be a complex process. Tuning up an automobile meant setting the points and adjusting the timing using marks on the flywheel – settings that electronic ignitions have made obsolete. No more tuning is required – or possible – in most cars. Adjusting the carburetor was another seemingly never-ending task: choke control settings, air mixture valve settings, and don’t forget balancing a multi-barrel carburetor. Fuel injection systems have changed all that. There are no adjustments now.

There are also many other monitoring and reporting systems in cars. Rather than listening or doing examinations (sometimes called an Easter-egg hunt) to find a problem, it can be located through on-board diagnostics. All of this makes it much more difficult to make adjustments and “fix it yourself.” Few people have detailed knowledge of the systems in their cars. Fewer still would know what to do about a problem.

So the car now has an IT generalist who can take the car to a specialist who owns the right equipment when there is an issue. With a storage system, the vendor support group — with the right tools — will diagnose and make repairs. As for tuning the storage system, there are systems that allow that to be done. But it takes a specialist with the correct training and tools to do it.

Overall, this is better for IT. The savings in training and personnel costs are evident. But there’s still that Ford with a 289-cubic-inch engine with a Holley carburetor that needs some minor adjustments.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


January 17, 2012  6:59 AM

EMC gives Project Lightning a name

Dave Raffo

EMC failed to push its Project Lightning server-based PCIe solid-state flash product out the door by the end of 2011 as the vendor pledged, but industry sources say it will officially launch soon under the name of VFCache.

EMC recently registered a trademark for the VFCache name under the description: “Computer hardware, namely, data caching devices including flash memory devices and computer software for data storage and data management.” The vendor previewed Project Lightning at EMC World in May and released it to beta later in the year. The only surprises left are what partners – if any – EMC will use for the product. VFCache is expected to work with EMC’s FAST tiering software, but whose PCIe flash will EMC use?


January 11, 2012  10:53 AM

Dell upgrades Compellent’s capabilities

Dave Raffo

Along with launching a new backup deduplication appliance, Dell made other storage additions and enhancements today in London at its first European Dell Storage Forum. The biggest rollout besides the DR4000 backup box was an upgrade to Compellent Storage Center 6.0 software, with new 64-bit support that doubles the memory the system can use.

The upgrade – along with extended VMware support – is part of Dell’s strategy to make the Compellent Fibre Channel SAN platform a better fit for the enterprise. The 64-bit support is a precursor to the addition of Ocarina Networks’ primary data reduction technology to Compellent systems, because deduplicating and compressing data will require more processing power. Another advantage of the 64-bit support is that it enables automated tiering at smaller block sizes, which makes tiering more efficient – the quick illustration below shows why granularity matters.
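A back-of-the-envelope look at tiering granularity; the page sizes here are illustrative assumptions, not Dell’s published values.

```python
hot_io = 4 * 1024                       # one hot 4 KB block
for page in (2 * 1024 * 1024, 512 * 1024):
    cold = page - hot_io                # cold data promoted alongside it
    print(f"{page // 1024} KB page -> {cold // 1024} KB of cold data rides along")

# 2048 KB page -> 2044 KB of cold data rides along
# 512 KB page  -> 508 KB of cold data rides along
```

Smaller pages mean less cold data wastes SSD capacity each time a hot block is promoted, at the cost of more metadata to track – presumably part of where the added memory headroom helps.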

Compellent also now supports the full copy offload and hardware-assisted locking features that are part of VMware vSphere Storage APIs for Array Integration (VAAI). The storage vendor also added a Dell Compellent Storage Replication Adapter (SRA) for VMware’s Site Recovery Manager 5, and vSphere 5 Client Plug-in and Enterprise Manager to help manage virtualized storage pools with the latest version of vSphere.

Randy Kerns, senior strategist at Evaluator Group, an IT analyst firm, said the 64-bit support will enable Compellent to better take advantage of next-generation Intel chip advances. He said that’s a nice benefit because of Compellent’s architecture and licensing model. “People underestimate the importance of this, but Compellent is about storage as an application and the applications are loaded on powerful Intel servers,” he said. “With Compellent, you buy a license and you don’t have to re-buy a license when you upgrade. This also lets them track the new technology brought out by Intel and leverage Intel’s research and development.”

Before the Dell acquisition, Compellent sold into the midrange. Dell already has the EqualLogic platform for the midrange and is looking for something more competitive with EMC VMAX, Hewlett-Packard 3PAR, IBM DS8000, Hitachi Data Systems Universal Storage Platform and NetApp FAS6200 systems. But to become a true enterprise option, Compellent may have to scale beyond its current two-controller limit.

“When Dell did not get 3PAR, Compellent was the only option left worth looking at, but it doesn’t go high enough,” said Arun Taneja, consulting analyst for the Taneja Group. “Dell is feverishly working on taking Compellent upstream. One of the elements needed is 64-bit support. But to compete with the likes of 3PAR and VMAX, Compellent has to go to more than two controllers. What if Dell cannot take it to four or eight controllers, what are they going to do? The next 12 months will be telling. For five years, the Compellent people have been telling me they can go beyond two controllers. We’ll find out if they were telling the truth.”

Dell also added support for 10-Gigabit Ethernet Force10 switches on its EqualLogic iSCSI SAN platform and support for Brocade 16 Gbps Fibre Channel switches with Compellent.


January 10, 2012  9:08 AM

OCZ grabs Sanrad for PCIe caching software

Dave Raffo

Not surprisingly, the first storage acquisition of 2012 involved solid-state flash. That technology figured prominently in 2011 acquisitions, and the trend is certain to accelerate this year with larger companies buying technology from smaller vendors.

OCZ Technology kicked off the year’s M&A Monday by dropping $15 million on privately held Sanrad. The acquisition is part of OCZ’s push into enterprise flash, specifically PCIe cards.

Sanrad has been around since 2000. It started off selling iSCSI SAN switches, and then adapted those switches for storage and server virtualization. But OCZ is most interested in the software that runs on those switches. Sanrad last September launched VXL software that caches data on flash solid-state storage.

VXL runs as a virtual appliance and distributes data and flash resources to virtual machines. The software enables more efficient caching and lets customers distribute flash across more VMs without a performance hit. VXL does not require an agent on each VM and supports the VMware vSphere, Microsoft Hyper-V and Citrix Xen hypervisors.
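Sanrad hasn’t published VXL’s internals, but the general shape of agentless, host-side flash caching can be sketched as a block-level LRU cache keyed by VM and address. Everything below is a hypothetical illustration, not VXL’s actual design.

```python
from collections import OrderedDict

class FlashReadCache:
    """Toy LRU read cache standing in for a host-side flash cache."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()             # (vm, lba) -> block data

    def read(self, vm, lba, backend_read):
        key = (vm, lba)
        if key in self.cache:                  # hit: serve from flash
            self.cache.move_to_end(key)
            return self.cache[key]
        data = backend_read(vm, lba)           # miss: fetch from the array
        self.cache[key] = data
        if len(self.cache) > self.capacity:    # evict least-recently-used
            self.cache.popitem(last=False)
        return data
```

Because the cache interposes at the block layer and is keyed per VM, nothing needs to run inside the guests – the same agentless property the VXL description claims.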

Sanrad’s StoragePro software lets administrators manage storage across servers or storage devices as a single pool. Sanrad sold StoragePro with its V Series virtualization switches.

During OCZ’s earnings call Monday evening, CEO Ryan Petersen said the Sanrad software will be packaged with OCZ’s Z-Drive PCIe SSDs. The move can be seen as a competitive answer to Fusion-io’s acquisition of caching software startup IO Turbine last year.

Peterson said the Sanrad acquisition is part of OCZ’s strategy to see PCIe “as more than simply a component and truly as a storage system, which includes things like having VMware, virtualization capability, and support for vMotion, where there is mobility among the virtual machines of the cache …”

Petersen didn’t mention any plans for Sanrad’s switches. He said Sanrad’s revenue was in the “low single-digit millions” over the past few years, indicating low sales despite OEM deals with Brocade and Nexsan.

OCZ also revealed a new PCIe controller platform developed with chip maker Marvell. The new Kilimanjaro platform will be used in the next version of the Z-Drive, the R5. That card will have a PCIe 3.0 interface and can deliver about 2.4 million 4 KB IOPS and approximately 7 GBps of bandwidth per card, according to OCZ and Marvell. OCZ is demonstrating the R5 in an IBM server at CES and Storage Visions in Las Vegas this week.

It is also demonstrating new 6 Gbps SATA-based SSD controllers based on its 2011 acquisition of Indilinx.

OCZ’s push into the enterprise is beginning to pay off. Petersen said OCZ’s enterprise-class SSD revenue increased approximately 50% year over year last quarter and now makes up approximately 21% of its SSD sales.


January 5, 2012  9:13 AM

Life after RAID

Randy Kerns

Recent developments point to a change in how we protect against the loss of a data element on a failed disk. RAID is the venerable method used to guard against damage from a lost disk, but RAID has limitations – especially with large-capacity drives that can hold terabytes of data. New developments address RAID’s limitations by providing advantages that are not specific to disk drives.

The new protection technology has been called several things. The name most associated with university research is information dispersal algorithms, or IDA. Probably the more correct term, as the technology has been implemented, is forward error correction, or FEC. Another name, based on implementation details, is erasure codes.

The technology can address the loss of a disk drive, which RAID was designed to protect against. It can also prevent the loss of a data element when data is distributed across geographically dispersed systems. Implementations allow a choice of how much protection to apply across the data. A commonly used example is a protection setting of 12 of 16, which means only 12 of 16 data elements are needed to recreate the data from a lost disk drive. The sketch below shows the underlying idea.
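To make the 12-of-16 idea concrete, here is a minimal, self-contained k-of-n erasure coding sketch using polynomial evaluation over the prime field GF(257). This construction is for illustration only; the products named below use optimized Reed-Solomon variants over GF(2^8), not this code.

```python
P = 257  # small prime field; every byte value 0..255 is a field element

def encode(data, n):
    """Treat the k data values as polynomial coefficients and evaluate
    at x = 1..n, producing n shares. Any k shares recover the data."""
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(shares, k):
    """Recover the k coefficients from any k surviving shares via
    Lagrange interpolation over GF(257)."""
    shares = shares[:k]
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(shares):
        basis, denom = [1], 1                 # build the i-th Lagrange basis
        for j, (xj, _) in enumerate(shares):
            if j == i:
                continue
            # Multiply basis by (x - xj); track the scalar denominator.
            basis = [(a - xj * b) % P
                     for a, b in zip([0] + basis, basis + [0])]
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P  # Fermat inverse of denom
        for d in range(k):
            coeffs[d] = (coeffs[d] + scale * basis[d]) % P
    return coeffs

data = [42, 7, 255, 0, 13, 99, 1, 2, 3, 4, 5, 6]   # k = 12 data elements
shares = encode(data, 16)                          # n = 16 (a 12-of-16 setting)
assert decode(shares[4:], 12) == data              # any 4 losses are survivable
```

Losing any four of the 16 shares – four drives, or even four sites – still leaves enough information to rebuild the original twelve values.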

Vendors with products that use FEC/erasure codes include Amplidata, Cleversafe, and EMC’s Isilon and Atmos. Each uses a slightly different implementation, but they are all a form of dispersal and error correction.

The main reason to use erasure codes is protection from multiple failures: several drives in a disk storage system could fail before data loss would occur. If data is stored at different geographic locations, several locations can be unavailable without data being lost. This makes erasure codes a good fit for cloud storage.

Other advantages include shorter rebuild times after a data element fails and less performance impact during a rebuild. A disadvantage of erasure codes is that they can add latency and require more compute power for small writes.

One of the most potentially valuable benefits of erasure codes is the reduction in service costs for disk storage systems. With a protection ratio that has a long-term coverage probability (meaning enough simultaneous failures to lose data are unlikely over a long period), a storage system may never require a failed device to be replaced during its economic lifespan. That would reduce the service cost; for a vendor, it reduces the amount of warranty reserve.

This form of data protection is not prevalent today and it will take time before a large number of vendors offer it. There are good reasons for using this type of protection and there are circumstances when it is not the best solution. Storage pros should always consider the value it brings to their environment.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


December 29, 2011  10:25 AM

Storage wrap: take a last glance at 2011, early peek at 2012

Dave Raffo

In case you weren’t paying attention to the storage world for most of 2011, don’t worry, we have you covered.

Our series of stories looking back at the highlights of 2011 will catch you up on what you may have missed:
Solid state, cloud make mark on storage in 2011
Hard drive, SSD consolidation highlights 2011 storage acquisitions
Top cloud storage trends of 2011
Compensation rose for storage pros in 2011

And if you want to get a jump on 2012, we have your back there too. These look-ahead stories highlight the key storage issues and technologies for the coming year:
What 2012 has in store for storage
2012 preview: more flash with auto tiering, archiving, FCoE

We’ve also gathered useful information to help you do your job more efficiently if you work with data:
Popular storage tips of 2011
Top data deduplication tips of 2011
Top remote data replication tips of 2011
Top disaster recovery outsourcing tips of 2011
Top SMB backup tips of 2011


December 28, 2011  2:06 PM

Rear view mirror metrics don’t tell full story

Randy Kerns

I read all the reports on how the storage industry is doing. These cover many segments of storage hardware and software, sometimes in great detail. The reports often draw on data self-reported by vendors about their product shipments.

They draw comparisons with the previous quarter, the same quarter of the previous year and through the calendar year. These give us an idea of where we’ve been and how the different segments have fared.

But these results look in the rear view mirror. They do not tell us how any of these vendors or the industry will do in the future. Determining future performance requires looking out the windshield.

A forecast is usually based on a projection of trends that occurred in the past. These projections are often used in planning and estimating around investments, ordering, staffing and other elements critical to business decisions with tremendous financial implications.

Even forecasts that are meant to look through the windshield are usually based on past trends. One technique for projecting future trends is to look at what occurred in recent years and assume the pattern will continue. That may be a bad assumption, and it can bring serious consequences.

Others use surveys to predict the opportunity, but surveys can also mislead. A survey’s accuracy depends on how the questions are asked and who responds to them. There is another factor I can relate to from personal experience: the quality of the answers depends on when the questions are answered. There can be bad days…

I’ve found that conversations with IT professionals lead to a deeper understanding of what their problems are, and what they are doing. With enough of these conversations, a general direction emerges that can be used as guidance in a particular area with much greater confidence. There’s no sure-fire means, however. The best that can be done is to understand the limitations of the input you receive and use multiple inputs.

Another measure for me is gauging what the vendors believe the storage market is doing. This is much easier because the briefings, product launches and press releases represent investments that are evidence of their belief in the opportunity. Lately, the briefings and announcements have increased – even with the holidays and year-end distractions approaching. Things do look good in the storage world – out through the windshield.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

