Storage Soup


July 31, 2014  10:32 PM

IBM Fellow Andrew Walls discusses flash strategy

Carol Sliwa
Storage

IBM Fellow Andrew Walls thinks most if not all active data will eventually reside on flash storage, and when Walls speaks, IBM Storage listens.

Walls is responsible for setting the strategy for the company’s storage portfolio and defining the architecture for next-generation flash arrays and storage class memories. He received the highest distinction of his career this year when IBM named him a fellow, an honor only 257 people have achieved in the company’s history and that only 87 active employees (out of more than 400,000) currently hold.

On the eve of next week’s Flash Memory Summit in Santa Clara, California, Walls took time out today to discuss IBM’s flash strategy with SearchStorage.com. Interview excerpts follow:

What do you see as the optimal architecture for next-generation flash?

Andrew Walls: I’ve been with the acquisition of Texas Memory Systems since its beginning, and before that, I was the chief architect and CTO for our flash strategy. Through the years, we’ve adapted to what’s been happening, and I think the tipping point is there, and we really are at a point where data reduction combined with the technology that we have today can be used to put all active data, or most active primary data, on flash.

So, I see the future being to continue that reduction in overall cost per gigabyte, based on data reduction and next-generation flash, as well as enabling things like [triple-level cell] TLC, if possible, to continue lowering the cost, perhaps as a tiering strategy. And I see looking at next-generation storage class memories, also with tiering, to decrease the latency by using phase-change memory or resistive RAM as the tier for hot data.

I certainly see continuing with the [multi-level cell] MLC flash that is there, continuing to reduce the cost, do more features and data reduction, but then looking at other technologies to see if they can also be used in all-flash arrays to improve the performance and further decrease the cost.
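To make the tiering strategy Walls describes concrete, here is a minimal sketch of a policy that steers the hottest data to a storage class memory tier and colder data to flash. The function, tier names and access-count thresholds are hypothetical illustrations, not IBM's design.

```python
def place_on_tier(accesses_per_day: int) -> str:
    """Assign data to a storage tier by access frequency.
    Thresholds are invented for illustration; hotter data
    earns the lower-latency, more expensive medium."""
    if accesses_per_day > 10_000:
        return "storage class memory (e.g., phase-change memory or ReRAM)"
    if accesses_per_day > 100:
        return "MLC flash"
    return "TLC flash (read-mostly, small write-cycle budget)"

for rate in (50_000, 500, 5):
    print(f"{rate:>6} accesses/day -> {place_on_tier(rate)}")
```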

Do you think TLC is realistic for enterprise storage? Will it be TLC or 3D NAND?

Walls: I think 3D NAND for sure is going to come in. Back in 2008, 2009, we were working closely with different companies, and we said MLC was going to come into the enterprise. There were a lot of people who said, ‘No, it’s going to be [single-level cell] SLC for a long time.’ And we were the first to really bring MLC into enterprise storage.

When I look at TLC, it’s even more of a challenge, of course. In some cases, you’re talking about a few hundred cycles. But we are looking to see if we can bring it in through innovative ways . . . You could think of maybe read-mostly applications, or a tiered architecture where most of the hot accesses are serviced out of DRAM or out of MLC, and you’d have some TLC. We think the benefits are enough that it really [merits] a serious look to see if it can also be used to further reduce the costs.

IBM’s all-flash arrays use eMLC flash in contrast to a lot of purpose-built flash arrays that use cheaper MLC drives. How important is the type of flash these days now that manufacturers have figured out ways to improve the reliability and endurance? Why is IBM still using eMLC?

Walls: It is true to a certain extent that the flash manufacturers have figured out how to improve the endurance of the devices themselves. However, as the geometries continue to shrink, the endurance that you get out of the 20-nanometer and 15- or 16-nanometer bare MLC flash is only 3,000 write/erase cycles. That’s all that the manufacturer will guarantee.

So, we believe that in this generation, with the [FlashSystem] 840, the eMLC allows us . . . to get a 10x improvement in endurance without having to worry about it, and to pass that on to our customers. We think eMLC right now is a very valuable add, and other competitors use it as well. There are many who don’t, but I think one has to be careful to see how they make sure the flash isn’t going to wear out. We believe eMLC right now is an important part of our strategy.
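To put those endurance figures in perspective, here is a back-of-the-envelope lifetime calculation. The 3,000-cycle MLC rating and the 10x eMLC improvement come from the interview; the drive capacity, daily write volume and write amplification factor are assumptions chosen purely for illustration.

```python
# Rough flash lifetime estimate from rated write/erase cycles.
# Capacity, write rate and write amplification are assumed values.
CAPACITY_GB = 1_000          # assumed drive capacity
DAILY_WRITES_GB = 5_000      # assumed host writes per day (5 drive writes/day)
WRITE_AMPLIFICATION = 2.0    # assumed controller overhead factor

def lifetime_years(rated_cycles: int) -> float:
    """Years until the rated cycle budget is exhausted."""
    writable_gb = rated_cycles * CAPACITY_GB / WRITE_AMPLIFICATION
    return writable_gb / DAILY_WRITES_GB / 365

print(f"Bare MLC, 3,000 cycles: {lifetime_years(3_000):.1f} years")
print(f"eMLC at 10x, 30,000 cycles: {lifetime_years(30_000):.1f} years")
```

Under these assumptions, bare MLC lasts well under a year of heavy writes while the 10x eMLC budget stretches past eight years, which is why the endurance multiplier matters as geometries shrink.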

July 31, 2014  2:52 PM

Categorizing solid state storage systems

Randy Kerns
Flash arrays, Solid-state storage, Storage

There are many types of solid state and flash storage system implementations. At Evaluator Group, we regularly field questions and work on projects regarding solid state storage with our IT clients. In addition to providing performance explanations and an evaluation guide for solid state, we find it necessary to categorize the different implementations to aid our clients’ understanding.

The categorizations do not necessarily match how vendors would position their products. The important point is that the categorization has served us well in communicating with our IT clients. However, we understand that nothing is static in the area of storage. Like the technology, these explanations will evolve with new developments.

Here are the categories and explanations that have worked well so far:

  1. All-solid state (flash) storage systems – These are new system designs built for solid state from the start. These designs optimize performance for the given amount of system hardware.
  2. Hybrid arrays in a new design system – Hybrid arrays use both solid state (usually in the form of flash SSDs) and hard disk drives (HDDs), with the idea that large-capacity HDDs will decrease the overall price of the system.  As a new design, all I/O goes through the SSDs and the HDDs serve as backing storage.
  3. All-solid state (flash) storage systems based on traditional storage systems with major modifications – These are traditional storage systems designed for spinning disks but modified to take advantage of solid state with the addition of embedded software. Evaluator Group looks at the design changes made to determine their significance.
  4. Hybrid arrays based on traditional storage systems – This large segment includes the traditional storage systems designed for spinning disks where solid state drives (SSDs) are added for cache and/or tiered storage.  In these systems, a small percentage of SSDs will quickly max out the performance of the system, increasing aggregate system performance by 2x to 4x.

As technology evolves, there will be changes to these categories. Certainly, acquisitions will occur, changing what vendors offer and how products are positioned.  Over time, more extensive changes will be made to traditional systems that are limited by their spinning-disk designs. A toy sketch of how a system might be sorted into these categories follows.
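As a rough illustration of the four categories, here is a minimal sketch; the category labels come from the list above, while the function and its boolean inputs are hypothetical, invented for this example.

```python
def categorize(all_solid_state: bool, new_design: bool) -> str:
    """Sort a storage system into one of the four categories by its
    media mix and design heritage. A toy model; real product
    positioning involves far more nuance."""
    if all_solid_state and new_design:
        return "1. All-solid state system designed for flash from the start"
    if not all_solid_state and new_design:
        return "2. Hybrid array in a new design (SSDs front all I/O)"
    if all_solid_state and not new_design:
        return "3. All-solid state system on a modified traditional design"
    return "4. Hybrid array based on a traditional spinning-disk design"

# Example: a retrofitted disk array populated entirely with SSDs
print(categorize(all_solid_state=True, new_design=False))
```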

The biggest evolution of all will be the introduction of new solid state technology. Forward-thinking system designers have anticipated this and will seamlessly (and optimally) advance to the new technology when the economics are favorable.  This is one of the reasons we use the broader solid state storage terminology rather than referring only to the current implementation, NAND flash.  We will adapt the categorization we use with our IT clients to fit each new implementation. Meanwhile, it is great to see the continued advances in technology and implementations for storage systems.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


July 30, 2014  3:36 PM

Overland builds higher performance, snapshots into its GuardianOS

Sonia Lelii
Overland Storage, SnapServer, Storage

Overland Storage has upgraded its GuardianOS operating system, which powers its SnapServer Dx1 and Dx2 NAS, to make its devices more compatible with Windows in mixed environments. The upgrade is also designed to boost performance, and includes a new BitTorrent-powered sync-and-share feature for mobile devices.

The GuardianOS integrates replication, thin provisioning, snapshots, backup, file sharing and security for the SnapServer Dx1 and Dx2. The Dx1 is a 1U system that scales to 160 TB while the Dx2 is a 2U server that scales to 384 TB.

The software’s Windows-only Tree feature improves permission handling and authentication in mixed Windows and Mac environments. Each time a Windows or Mac user opens a file, the updated file will be written with Windows data attributes. Typically, a Mac system will switch the data attributes when a file is opened for updates.

“They will remain in Windows attributes because you want to keep a certain attribute type,” said Jeremy Zuber, Overland’s product marketing manager. “If attributes are flip-flopped, you can run into issues.”

The GuardianOS also has been enhanced with the Lightweight Directory Access Protocol (LDAP), allowing administrators to set permissions and specify access to directories through name lookups to and from a unique user identifier (UID). The software also uses Server Message Block (SMB) 2.0 for improved read and write performance for Windows clients and servers accessing SnapServer storage.
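For readers unfamiliar with the mechanism, here is a minimal sketch of the kind of name-to-UID lookup LDAP enables, using the open-source ldap3 Python library. The server address, credentials, base DN and attribute names are assumptions for illustration, not details of GuardianOS itself.

```python
from ldap3 import Server, Connection, ALL

# Connect to a hypothetical directory server (address and credentials assumed)
server = Server("ldap://ldap.example.com", get_info=ALL)
conn = Connection(server, user="cn=admin,dc=example,dc=com",
                  password="secret", auto_bind=True)

# Resolve a user's name to a numeric UID, the kind of lookup an
# administrator relies on when mapping directory users to permissions
conn.search("dc=example,dc=com", "(cn=Jane Smith)",
            attributes=["uid", "uidNumber"])
for entry in conn.entries:
    print(entry.uid, entry.uidNumber)

conn.unbind()
```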

The operating system’s snapshot capability has been upgraded for higher performance with a more efficient copy-on-write process.
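To illustrate the general copy-on-write idea (a generic sketch, not Overland's implementation, which the company has not detailed publicly): before a block is overwritten, its old contents are copied aside, so the snapshot can keep presenting the original data while new writes proceed.

```python
class CowVolume:
    """Toy copy-on-write volume: a block is preserved on its first
    overwrite after a snapshot, keeping the snapshot consistent."""

    def __init__(self, blocks):
        self.blocks = dict(blocks)   # live data: block number -> bytes
        self.snapshot = None         # preserved originals, if snapshotted

    def take_snapshot(self):
        self.snapshot = {}           # cheap: nothing is copied yet

    def write(self, block_no, data):
        # Copy the old block aside only on its first post-snapshot write
        if self.snapshot is not None and block_no not in self.snapshot:
            self.snapshot[block_no] = self.blocks.get(block_no)
        self.blocks[block_no] = data

    def read_snapshot(self, block_no):
        # Serve the preserved copy if the block changed, else live data
        if self.snapshot is not None and block_no in self.snapshot:
            return self.snapshot[block_no]
        return self.blocks.get(block_no)

vol = CowVolume({0: b"old"})
vol.take_snapshot()
vol.write(0, b"new")
assert vol.read_snapshot(0) == b"old" and vol.blocks[0] == b"new"
```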


July 25, 2014  1:51 PM

Unitrends adds Hyper-V support to re-branded virtual backup

Dave Raffo
PHD Virtual, Storage, Unitrends

Unitrends has gone GA with its first new version of the PHD Virtual Backup application since acquiring PHD Virtual last December. Virtual Backup 8, released last week, gained support for Microsoft Hyper-V and lost the PHD brand.

The product is now called Unitrends Virtual Backup. It joins a roster of Unitrends-branded software that includes Unitrends Enterprise Backup (UEB), Unitrends ReliableDR (previously a PHD product) and the Unitrends Certified Recover Suite, which bundles the other three apps. Unitrends also sells a series of integrated appliances that run UEB software.

Unitrends claims Virtual Backup 8 has more than 140 enhancements, most of them around making the product easier to use in hopes of taking on Veeam Software for virtual backups.

“Simplicity has always been the No. 1 reason customers pick us,” said Joe Noonan, Unitrends senior product manager. “As we gain larger customers and get into deployments of 1,000 VMs or more, simplicity takes on new requirements.”

The application redesign includes the ability to back up and recover data in four clicks. It is also the first version of Virtual Backup with Microsoft Hyper-V support built in. PHD launched a separate version for Hyper-V earlier this year, but Virtual Backup now supports VMware, Citrix and Microsoft hypervisors in one application.

Virtual Backup does not work with Unitrends integrated backup appliances yet, but Noonan said that is on the roadmap. “The first step is to centrally manage everything,” he said. “Then we will integrate it all under the hood.”


July 24, 2014  11:36 AM

Quantum looks towards StorNext for a pick-me-up

Dave Raffo
Object storage, Storage

With both disk and tape backup target markets stagnating, Quantum is counting on its StorNext platform for growth.

Quantum’s “scale-out storage” – StorNext file system and Lattus object storage appliances and software – increased 41 percent year over year to $18.1 million last quarter, while Quantum’s total revenue of $128.1 million dropped four percent (the decline does not count a $15 million one-time royalty payment received last year).

The StorNext revenue was the same as DXi disk backup appliance revenue for the quarter, and it wasn’t that long ago that DXi was considered Quantum’s best chance for rapid growth.

StorNext revenue includes part of a $3 million-plus deal with what Quantum CEO Jon Gacek called a “leading consumer electronics company” for StorNext and Lattus products for managing video workflows. Approximately half of that revenue was recognized last quarter.

Quantum added 75 new StorNext and Lattus customers. It is going after media and entertainment companies, and Gacek said it landed a $200,000-plus deal at a major studio, a deal with an international broadcaster and a follow-up sale with a TV shopping network in the quarter. Quantum is also looking to replace aging Apple Xsan systems with its StorNext Pro high-performance systems for post-production studios and small broadcasters.

Quantum also increased the number of partners selling StorNext by 12 percent since last year.

The amount of time Gacek spent talking about StorNext during the Wednesday earnings call was disproportionate to its overall contribution to the company, he admitted in an interview afterwards. But he said StorNext is the Quantum platform clearly on the rise.

“Tape is still the biggest part of our business, so I get in trouble internally when I do that [talk up StorNext],” he said. “But in the world of software, cloud and unique value, StorNext is where we have the most differentiation and overall growth.”

Gacek said the StorNext 5 release that pumped up the software’s performance and scalability has been a big boost. Quantum is also looking to capitalize on the emergence of object storage, which it sells through an OEM partnership with Amplidata on its Lattus appliances.

Video storage is a big use case for StorNext and Lattus. Gacek said Quantum previously sold to all major sports leagues, and now is looking to expand to the individual teams as they use more video for scouting and in-game entertainment.

Quantum added around 90 new DXi customers in the quarter, and Gacek said it won close to 55 percent of its deals with the disk backup appliance. He hopes DXi will get a boost from the DXi6900 launched this week. The 6900 is built on StorNext 5 software and scales from 17 TB to 510 TB.

“We said we’re going to grow at the same rate as the market,” Gacek said of DXi. According to IDC’s latest purpose-built backup appliance tracker, that market decreased by 2.5 percent in the first quarter of 2014, although the research firm predicts it will return to growth.

As for its bread and butter, Quantum’s branded tape automation revenue dropped nine percent and OEM tape automation – sold through partnerships with vendors including Dell, Hewlett-Packard and IBM – fell 24 percent from last year. Gacek said the branded business probably gained share despite the decline, but OEM revenue – a minority of Quantum’s tape business – has been a sore spot.

“The OEM business was not up to expectation,” Gacek said. “That has been our weakest part over the last couple of quarters. Branded business will be our share grabber.”

Quantum’s $128.1 million total revenue was above the midway point of its guidance. It projected revenue of $130 million to $135 million this quarter.


July 23, 2014  9:00 AM

EMC’s Tucci will meet with investors but resists break-up

Dave Raffo
EMC, Storage, VMware

EMC CEO Joe Tucci today said he plans to meet with the investor group reportedly calling for the storage conglomerate to break itself up, although he maintained that his company benefits from its current structure.

Earlier this week, news reports said Elliott Management Corp. has taken a $1 billion-plus stake in EMC (around two percent of the company) and would like to see it spin off parts, particularly its majority-owned VMware.

“We have not heard from Elliott Management other than a call to us saying they intend to be one of EMC’s largest investors,” Tucci said during EMC’s earnings call. “I have agreed to meet with them. We are always open and welcome a dialog with all of our shareholders, and we respectfully listen to their ideas and beliefs as we form our strategic direction. … I want to hear what their proposals are, and I’m sure they would like to hear some of our plans.”

EMC’s revenue of $5.9 billion last quarter was slightly ahead of expectations, and Tucci said during the call that customer surveys show “our strategic relevance has never been greater and is rising.” He also said EMC would accelerate its share buyback and buy $3 billion of stock in 2014 instead of the previously planned $2 billion.

The EMC Federation includes VMware, RSA Security and Pivotal, all of which play largely outside of EMC’s core storage market. The Wall Street Journal and New York Times reported this week that Elliott will push EMC to break off some of those pieces to increase overall shareholder value.

Tucci said he agreed with Elliott that EMC stock is undervalued but expressed doubt that spinning off any pieces – especially VMware – would make the EMC Federation more valuable. He pointed out large competitors such as IBM, Cisco and Oracle also have products and services spanning the IT spectrum.

“Without a doubt, we have great assets and strong strategic vision – splitting them up, spinning out one of our most strategic assets – I don’t know of another technology company that has done that and been successful,” he said.

Calls to break up EMC are not new. Tucci made a strong case for EMC’s holding on to the 80 percent of VMware that it owns in May at an investor conference, claiming the EMC companies were “better together.”

EMC’s storage revenue increased one percent over last year, with its high-end enterprise storage down 14 percent. Tucci and EMC storage CEO David Goulden said they expect the recent launch of the new VMAX platform to raise enterprise sales by the end of this year.

“The pause ahead of this product (new VMAX) was hurting us,” Tucci said.


July 15, 2014  3:27 PM

Private clouds signal a change in what we call IT

Randy Kerns
Public Cloud, Storage

We have been working with IT clients who deploy private clouds as part of their operations. The reasons include implementing a change in the delivery of IT services, a new infrastructure for dealing with the overwhelming influx of unstructured data, and a way to deploy and support new applications developed for mobile technology. Each of these reasons makes sense and can fit into an economic model given the projected demands.

The types of private clouds also vary. The simplest we have seen from IT clients is a private cloud that is an on-premises object storage system used as a content repository. The most common uses of these are:

• A large-scale repository used by new or modified applications that must deal with large amounts of unstructured data.
• A large, online storage area where data is moved off primary (or secondary) storage to less expensive storage with different data protection characteristics.
• A repository for data from external sources used for analytics. This data may not require long-term retention.

Most of these operations have built in the ability to use public cloud resources, and they use the term hybrid cloud. Public cloud use may take the form of a long-term archive of data (deep archive) where the access time is much more relaxed, sharable information in the form of file sync-and-share implementations, or an elastic storage area to handle large influxes of data that may not be retained for extended periods. Usually there is a mechanism, typically a gateway device or software, to handle the transfer and security of data to and from the public cloud. Storage system vendors are starting to provide built-in storage gateways to manage data movement to the cloud.
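As a rough sketch of what such a gateway does, the following uses the AWS boto3 library to move files that have not been read recently out to an S3 bucket. The bucket name, directory and age threshold are assumptions for illustration; a production gateway would also handle encryption, retries and metadata tracking.

```python
import os
import time

import boto3

# Hypothetical settings; a real gateway would make these configurable
LOCAL_DIR = "/data/archive-candidates"
BUCKET = "example-deep-archive"       # assumed bucket name
COLD_AFTER_SECONDS = 90 * 24 * 3600   # untouched for roughly 90 days

s3 = boto3.client("s3")

def tier_cold_files():
    """Upload files not accessed recently to the public cloud tier."""
    now = time.time()
    for name in os.listdir(LOCAL_DIR):
        path = os.path.join(LOCAL_DIR, name)
        if not os.path.isfile(path):
            continue
        if now - os.path.getatime(path) > COLD_AFTER_SECONDS:
            s3.upload_file(path, BUCKET, name)  # copy to the cloud tier
            os.remove(path)                     # reclaim local capacity

if __name__ == "__main__":
    tier_cold_files()
```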

These are all justifiable reasons and usages for traditional IT operations to deploy private clouds. But what do you call IT that has changed operations to achieve this? Continuing to use the IT term is simple, but it does not convey the fact that IT has fundamentally transformed its operations and the value it provides. IT as a Service (ITaaS), which is the outcome goal of most transformations, delivers significant value and is different from IT of the past. The term cloud is an ambiguous identity used in many different contexts and probably will not have staying power over time.

Historically, what is known as IT has changed names over time, representing major transitions in the industry. Many remember DP, or Data Processing, as the term for the centralized IT of the past. Some of us even go back to an earlier point when the term was EAM, an artifact acronym for Electric Accounting Machine. There was also the term IS, which stood for Information Systems and later Information Services. Information Technology is the current, broadly accepted term for business, but a change is in order with the services and capability transitions underway.

What should the new identity be? Maybe there should be a naming contest. Probably the worst outcome would be for one vendor’s marketing organization to drive its own descriptive name. A name chosen that way would promote that vendor’s vision and how its products serve the necessary requirements, perhaps backed by a full-court press with paid professionals promoting the name. The new identity needs to convey the data center transformation that has occurred. Hopefully, the name will not have “cloud” in it.

The name change will occur and probably start as one thing and quickly evolve. A few years from now, it will be commonplace. The identity change is a step in the logical progression of computing services (keeping in mind the value is in deriving information from data). Terms such as client-server will become footnotes in the history of the industry. Other ideas that were detours down a rough road will seem like another learning experience everyone had to go through. This is an interesting period to see transitions occurring.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


July 14, 2014  12:35 PM

Box raises $150 million but can it overcome the Snowden effect?

Sonia Lelii
Box, Dropbox, Storage

Online storage provider Box last week raised another $150 million from two investment firms, pushing the total amount of its funding to $564 million as it prepares to go public.

While it can be seen as a good sign that the vendor can raise that much money in one round, you also have to wonder about the long-term viability of a company that has taken in more than $500 million in funding and has yet to break even.

The Los Altos, California-based company also got a vote of support from research firm Gartner, which named Box a leader in the enterprise file synchronization and sharing market along with Citrix, EMC and Accellion. It’s definitely a milestone for a company to rise to the top of a crowded market that at one point included more than 50 startups.

Box, however, remains unprofitable, with a reported $168 million loss for the past year. It has an expensive business model with a high burn rate, as it tries to convert users who are lured into the service with free storage into paying customers.

Box also has to mature beyond basic sync-and-share, which appears to be turning into a feature rather than a full-blown product as companies integrate the technology into other cloud offerings. Box has opportunities in integrating its technology with other enterprise applications, such as Salesforce.com.

“The basic sync-and-share is a feature,” said Terri McClure, senior analyst at Enterprise Strategy Group. “[Box’s] advanced collaboration and data management is starting to become more compelling. They have a rich API set that allows integration into enterprise applications. They certainly are one of the market leaders in terms of functionality. Dropbox has an API strategy but it’s not as far along.”

McClure said that although Box faces the tough and expensive challenge of converting free users into paying customers, the company is making strong gains in that area. Seven percent of its 25 million users are now paying customers, translating to 1.75 million users. It also has 34,000 companies paying for accounts.

“It’s likely that a good number of those 1.75 million users are corporate users,” McClure said. “When (you are) seeing losses of $168 million against revenue of $124 million, it is easy to point fingers and call it questionable. But can this model work? Yes, over time and with the right investments.”

She said Box needs to build some on-premises functionality into its product to move into the enterprise and will have to make heavy investments in security. McClure said the company also needs a European data center.

“Today, Box stores all customer data and files in the United States,” McClure wrote in a research brief. “Box did not report a geographic revenue breakout in the S1 but given the geopolitical and regulatory environment, companies outside the U.S. are hesitant to begin or are prohibited from storing data in the U.S. Cisco and IBM have discussed the fact that security concerns (specific to NSA spying) are inhibiting international sales of hardware. So you can say it’s impacting cloud SaaS and storage.”

Cloud companies also are dealing with what McClure calls the “Snowden effect.” Users are concerned that if a cloud provider holds their data and the encryption keys, then the data can be turned over to the federal government if it is subpoenaed.

“Concerns would be mitigated if Box offers users a method of holding and managing their own key,” McClure said.
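As an illustration of the customer-held-key approach McClure describes, here is a minimal sketch using the Python cryptography library: the file is encrypted before upload, so the provider stores only ciphertext and never sees the key. Where the key lives is an assumption; in practice it would sit in the customer's own key management system.

```python
from cryptography.fernet import Fernet

# The customer generates and keeps this key; the provider never sees it
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"contents of quarterly-financials.xlsx"
ciphertext = cipher.encrypt(plaintext)

# Only `ciphertext` is uploaded. A subpoena served on the provider
# yields ciphertext alone; decryption requires the customer-held key.
assert cipher.decrypt(ciphertext) == plaintext
```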


July 11, 2014  2:36 PM

Microsoft Scouts InMage, adds it to Azure roster

Dave Raffo
Storage

Microsoft today acquired cloud disaster recovery vendor InMage, which it will use as part of its Windows Azure cloud services.

InMage Scout combines continuous data protection backup software with replication to move data off-site and to the cloud for DR.
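Continuous data protection generally works by journaling every write with a timestamp so a volume can be rolled back to an arbitrary point in time. The sketch below shows that core idea in miniature; it is a generic illustration, not InMage's actual implementation, which has not been published.

```python
import time

class CdpJournal:
    """Toy continuous data protection journal: every write is logged,
    so the volume can be reconstructed as of any moment."""

    def __init__(self):
        self.log = []  # append-only list of (timestamp, block, data)

    def write(self, block_no, data):
        self.log.append((time.time(), block_no, data))

    def restore_as_of(self, when):
        """Replay writes up to `when` to rebuild the volume state."""
        volume = {}
        for ts, block_no, data in self.log:
            if ts > when:
                break
            volume[block_no] = data
        return volume

journal = CdpJournal()
journal.write(0, b"v1")
checkpoint = time.time()
journal.write(0, b"v2")                   # a later bad update
print(journal.restore_as_of(checkpoint))  # recovers {0: b"v1"}
```

The same journal, shipped continuously to a remote site, is what makes replication for disaster recovery possible.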

In a blog announcing the acquisition, Microsoft VP Takeshi Numoto wrote that Scout already lets customers migrate to Azure, but Microsoft will integrate the software more deeply with its cloud. Microsoft will sell Scout through Azure Site Recovery, which supports replication and recovery of an entire site directly to Azure.

“This acquisition will accelerate our strategy to provide hybrid cloud business continuity solutions for any customer IT environment, be it Windows or Linux, physical or virtualized on Hyper-V, VMware or others,” Numoto wrote. “This will make Azure the ideal destination for disaster recovery for virtually every enterprise server in the world. As VMware customers explore their options to permanently migrate their applications to the cloud, this will also provide a great on-ramp.”

He added that Microsoft will work with managed service providers who sell InMage’s ScoutCloud DR as a service.

It appears that Microsoft’s strategy for InMage technology is similar to the one it has followed with the cloud storage gateways it acquired from StorSimple in 2012. Although StorSimple supported other large public clouds before the acquisition, Microsoft has integrated the gateways more tightly with Azure and now sells them only to customers who have Azure subscriptions.

The InMage acquisition certainly fits into the “cloud-first” strategy Microsoft CEO Satya Nadella laid out for employees Thursday.

InMage had $36 million in venture funding. Microsoft did not disclose the acquisition price.


July 11, 2014  12:50 PM

CA’s arcserve goes out for a spin

Dave Raffo
ArcServe, Backup software, Data protection, Storage

When CA Technologies launched its arcserve Unified Data Protection (UDP) platform in May, it was considered a new direction for the backup and recovery platform. It turns out that direction is away from CA. This week the arcserve team revealed plans to spin out of CA, which signed an agreement with Marlin Equity Partners to divest the assets of the data protection business.

The move is similar to Syncsort’s data protection team spinning out of the parent company last year, eventually re-branding itself as Catalogic Software.

Mike Crest, GM of CA’s data management business unit, will become CEO of the new arcserve company. It will be headquartered in Minneapolis (CA is in New York), and all 500 or so members of the arcserve product team are expected to join the new company.

The reason for the spinoff? CA is focused on large enterprise customers (it will retain its mainframe backup software), while arcserve software is best suited to SMBs and small enterprises.

Crest wrote in a letter to arcserve customers:

“For CA Technologies, the divestiture of arcserve is part of a portfolio rationalization plan to sharpen the company’s focus on core capabilities such as Management Cloud, DevOps and Security across mainframe, distributed, cloud and mobile environments. As part of that plan, CA has a strong commitment to thoughtfully placing divested assets, such as arcserve, in environments that benefit customers, partners, employees and shareholders.”

As arcserve VP of product marketing Christophe Bertrand put it, “the markets we serve are not the traditional markets CA serves today. CA will continue to sharpen its focus and manage its portfolio accordingly. It made perfect sense to look at this as the next step for the arcserve business.”

Bertrand and arcserve VP of product delivery Steve Fairbanks said the new company will build its technology around the new UDP platform. UDP combines previous arcserve data protection products — Backup, D2D and High Availability and Replication — under a common interface along with new features. When UDP was released, Fairbanks called it a re-invention of arcserve.

Bertrand and Fairbanks said they are convinced Marlin will provide all the backing arcserve needs to succeed.

“Marlin has indicated they want to invest in arcserve as a platform,” Fairbanks said. “As we grow organically and we grow revenue, we will re-invest back in the business and grow the size of the company over time.”

