Storage Soup


July 22, 2013  9:10 AM

Hand-me-down storage strategies evolve with solid state

Randy Kerns

When talking to clients about their IT needs and storage vendors about system development, I notice a change in the expected longevity of storage systems. This change is largely due to the rise of solid state storage systems.

Many of our high-end enterprise clients have had an established practice of using disk-based systems as primary storage for three years. After that, they move tier 1 application data to a new storage system and relegate the old system (now an ancient three years old) to secondary storage for two more years.

The five-year lifespan and the plan to update or upgrade storage systems was based on the reliability of the system (meaning how component failure rates change over time), the advantages of improved technology every few years, and the cost of maintenance once the warranty period expires. This has been built into depreciation schedules and business plans.
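
To make the arithmetic concrete, here is a minimal sketch of how that five-year lifecycle maps onto a straight-line depreciation schedule. The purchase price is hypothetical; the point is only the shape of the schedule.

    # Hypothetical straight-line schedule for the lifecycle described above.
    # The $500,000 purchase price is illustrative, not vendor pricing.
    purchase_price = 500_000
    lifespan_years = 5                     # 3 years primary + 2 years secondary
    annual_depreciation = purchase_price / lifespan_years
    for year in range(1, lifespan_years + 1):
        role = "primary" if year <= 3 else "secondary"
        book_value = purchase_price - annual_depreciation * year
        print(f"Year {year} ({role}): book value ${book_value:,.0f}")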

Now I’m seeing a new dynamic around how long people expect their storage systems to provide value. This is an outcome of the availability and maturation of solid-state storage systems, and of plans to use them with tier 1 applications and virtualization. Most vendors have made great strides in addressing wear-out issues, and the reliability improvements are driving down service cost estimates for flash. Some vendors are quoting longer warranty periods or reduced maintenance costs.

Our IT clients see dramatic performance improvements from all-SSD systems. They are purchasing more systems and capacity while still getting the benefits of the initial system they deployed. Depreciation schedules can change, and the main driver for purchasing is no longer replacement but addition.

All-flash storage systems are changing the thinking around the longevity of storage systems. The planned hand-me-down process may no longer be the norm. Some vendors see the longevity as a competitive opportunity and are aggressively pursuing it with a longer warranty period, lower maintenance costs, or a more easily negotiated warranty extension. Other vendors have finance people who see maintenance revenue as a birthright annuity, and they are not changing. This is a competitive area, and using negative marketing to hold onto that annuity revenue will only work until competitors have made enough inroads with customers to take away the business.

This will be another interesting storage trend to watch develop. It will be an important competitive area, and it will distinguish the vendors that move forward from those whose innovation and opportunity are held back by protecting that financial annuity.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

July 9, 2013  12:56 PM

Appliances add options — and decisions — for storage purchasers

Randy Kerns

We’re seeing many different vendor initiatives for appliances or special-purpose applications that run inside storage systems. In contrast to general purpose storage systems, these appliances consist of special-purpose applications running on servers. The most common type of appliance is for backup. Another common type is the archive system.

For backup systems, appliances add deduplication and compression to reduce the amount of data stored, and replication for making disaster recovery copies. Some integrated appliances come with the backup application’s media server software preloaded.
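
Deduplication implementations vary widely by vendor. Purely as a sketch of the idea, not any product’s actual algorithm (products typically use variable-size, content-defined chunking), a minimal fixed-chunk version looks like this:

    import hashlib
    import zlib

    def dedupe_and_compress(data, chunk_size=4096, store=None):
        """Keep only unique, compressed chunks; return the chunk-key recipe."""
        store = store if store is not None else {}
        recipe = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            key = hashlib.sha256(chunk).hexdigest()
            if key not in store:            # new chunk: compress and keep it
                store[key] = zlib.compress(chunk)
            recipe.append(key)              # duplicate chunks cost only a key
        return recipe, store

Backing up the same data twice adds almost nothing to the store; only the list of chunk keys is recorded again.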

With archive systems, value-add features can include data movement software with a policy engine for selective controls and replication, functions for retention management, a search function with indexing, and compliance features such as immutability, audit trails, and security.

The value of these appliances comes from the integration enabling all the pieces to work together effectively for quick deployment, usually by IT staff. The appliance should be supported as a single system rather than independent elements. (If that’s not the case, that can be a problem.) The system may even be less expensive than putting the individual elements together.

The trade-offs of using general purpose storage with servers and software versus an integrated appliance are well understood. One issue raised is who has the authority to make the purchase. The decision to improve or deploy new backup technology may rest with the backup team, and selecting an appliance or system whose primary usage is backup may be much easier than using some of the primary storage under someone else’s control. The same holds true for archiving, which in most large IT organizations is a separate initiative with separate funding and not part of the backup area of responsibility.

The line between these trade-offs and roles blurs as vendors begin to offer storage systems that can run selected applications on the storage controller. Now IT is faced with another choice: a storage system that can load an application alongside its embedded storage control software. Implemented as a virtual machine in storage, it really is a similar choice to using general purpose storage with the application on a server, only now the server is in the storage system.

This raises issues around flexibility, support, customization, and potential lock-in. The organizational boundaries for purchasing will still hold and will be the determining factor in many environments. But it is another option to consider.

The option may be exploited effectively by system integrators and resellers who can put together solutions combining several functions that would previously have required multiple appliances. With service and support, this is an opportunity for integrators and VARs to deliver more value to their customers.

Appliances, storage systems with the option to load application software, and general purpose storage with separate servers and applications are all possibilities. Not all fit the needs of an IT organization. It’s important to understand the basic needs or problems to be solved, look at the options and then develop a strategy that will deliver the best overall solution.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


July 3, 2013  11:21 AM

Struggling FalconStor shuffles CEOs again

Dave Raffo

FalconStor has been looking for a buyer for seven months. Now it may be looking for a CEO, too.

Jim McNiel suddenly resigned as CEO and president Monday and gave up his seat on the board. FalconStor named Gary Quinn interim CEO. The company did not say if it is conducting a search for a permanent CEO.

FalconStor may not need to find a full-time replacement. It could give the job to Quinn eventually or it may not need one at all. The storage software vendor last November revealed it hired investment banker Wells Fargo to pursue strategic options, which could be a sale or funding. Quinn could merely be a caretaker until there is a sale. Then again, McNiel’s resignation could be a sign that there are no serious bidders for FalconStor.

McNiel replaced FalconStor founder ReiJane Huai as CEO in 2010 after Huai’s role in a customer bribery scandal came to light. Huai committed suicide a year later. FalconStor agreed to pay $5.8 million in June 2012 to settle criminal and civil charges that it bribed JP Morgan Chase to buy its software.

The bribery scandal came with FalconStor already reeling following the loss of OEM partners for its virtual tape library (VTL) software. Most of the major storage players used FalconStor software before developing or acquiring their own data deduplication software for backup.

FalconStor lost $3.4 million in the first quarter of 2013 and its $15.3 million in revenue was down 21% from the same quarter last year. With $28.2 million in operating cash, the vendor cannot sustain those kinds of losses for long without getting acquired or finding funding.

But in some ways, FalconStor is operating as if it will be around long-term. It hired Rob Zecha as chief product officer in April, putting him in charge of product management, quality assurance, software development, engineering, and research and development. And last month, FalconStor signed an OEM deal to sell its VTL software on IBM hardware.

Quinn joined FalconStor in April 2012 as vice president of North America sales and marketing and was promoted to executive vice president and COO in April 2013. He previously held executive positions at software vendor CA Technologies and in Suffolk County, NY government.


June 28, 2013  8:01 AM

Common cloud storage topics: archiving and gateways

Randy Kerns

Two storage aspects always enter the discussion when talking about the cloud: archiving and the use of gateways to adapt on-premises environments.

Archiving information to a cloud location is interesting for several reasons:

  • The growth of file data precipitates a discussion of what to do with all the files over time. IT is hesitant to establish and enforce deletion policies, and business owners are either unmotivated or unwilling to address the increasing capacity. Moving files that are rarely or never accessed off to a cloud can free up IT resources related to storage systems, data protection processes and administration (a minimal sketch of this idea appears after the list). It can also allow IT to transfer the charge for storing the inactive data back to the business owners, which may prompt them to establish retention policies.
  • Retained backups are a good fit for the cloud. These are kept to satisfy retention governance or business practices, not to recover individual files that are damaged or deleted. The long-term nature of retained backups and the unlikely event they will be required for immediate usage make them good candidates for cloud storage.
  • Potential sharing of information within an enterprise with geographic access requirements looks to be an area that IT can address with a software service using cloud providers.
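
As a minimal sketch of the first idea in this list, moving rarely accessed files off to a cloud-backed location can reduce, at its simplest, to an access-time sweep. The mount point and one-year threshold are hypothetical, and a real tool would leave a stub behind, handle name collisions, and honor retention policy:

    import os
    import shutil
    import time

    ARCHIVE_MOUNT = "/mnt/cloud-archive"   # hypothetical gateway mount point
    COLD_AFTER = 365 * 24 * 3600           # untouched for a year

    def archive_cold_files(root):
        """Move files not accessed in a year to the cloud-backed mount."""
        cutoff = time.time() - COLD_AFTER
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.stat(path).st_atime < cutoff:
                    # Sketch only: no stub file, no collision handling.
                    shutil.move(path, os.path.join(ARCHIVE_MOUNT, name))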

Examining how to move data to the cloud usually leads IT to two primary choices: applications written for specific usages that manage cloud storage directly, or a gateway device used with existing applications. Both approaches are viable, but a gateway device is the least disruptive to existing environments. The gateway presents itself as a traditional NAS system with NFS and CIFS/SMB access and manages moving data to and from a cloud service provider as required.

Gateways can be storage systems or appliances that include the hardware and pre-loaded, pre-configured software, or a software application that IT must install, configure, and support. Some software gateway applications may be delivered as virtual machines, which reduces the installation effort. No matter which delivery method is used, the gateway must have a policy engine for IT administrators to set the rules for data movement and to manage access to the cloud service provider, along with billing and chargeback capabilities.
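
What that policy engine amounts to differs by product. Purely as a hypothetical illustration, an administrator’s rules might reduce to an ordered list of predicates where the first match wins:

    from dataclasses import dataclass

    @dataclass
    class FileInfo:
        path: str
        days_since_access: int
        size_bytes: int

    # Hypothetical rules; a real gateway adds schedules, quotas, and the
    # billing and chargeback accounting mentioned above.
    POLICIES = [
        (lambda f: f.days_since_access > 180, "tier-to-cloud"),
        (lambda f: f.size_bytes > 10 * 2**30, "tier-to-cloud"),
        (lambda f: True, "keep-local"),      # default rule
    ]

    def placement(f):
        return next(action for predicate, action in POLICIES if predicate(f))

    print(placement(FileInfo("/share/q1-report.pdf", 400, 2**20)))  # tier-to-cloud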

There are several factors that can differentiate gateways:

  • The performance of moving and retrieving data must match the usage requirements. If a gateway is used for more active data, performance can be a big concern. The more typical usage for inactive data reduces the performance concerns, except for initial deployments where there may be a large amount of data to move to the cloud location at the start.
  • Cost of storage and retrieval grows over time with the usage and the amount of data offloaded to the cloud (a simple cost model follows the list).
  • Migration of data from one cloud provider to another is another concern as the provider landscape changes. Changes in a provider’s pricing or poor service may necessitate a change. The ability to seamlessly transfer data between cloud providers lessens the IT impact and addresses a major concern.
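
To illustrate the cost point above, here is a toy model of why charges grow with the amount of data offloaded. The rates are invented for the example, not any provider’s pricing:

    # Illustrative rates only: capacity charges accumulate as data is
    # offloaded; retrievals add egress fees on top.
    PRICE_PER_GB_MONTH = 0.01   # storage
    EGRESS_PER_GB = 0.09        # retrieval

    def monthly_cost(stored_gb, retrieved_gb):
        return stored_gb * PRICE_PER_GB_MONTH + retrieved_gb * EGRESS_PER_GB

    stored = 0.0
    for month in range(1, 13):
        stored += 500           # offload 500 GB per month
        print(month, round(monthly_cost(stored, retrieved_gb=50), 2))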

Archiving data into the cloud, whether with application software developed for that purpose or with gateways (sometimes called on-ramps) for existing applications, is usually done as a funded project. Either the project is a new application deployment or a re-engineered business operation, or it is a funded initiative inside IT to improve operations. Ultimately, the funding of the program dictates the action, because it requires strategy, planning, and project management in addition to the operational process changes.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


June 20, 2013  8:51 AM

EMC eyes ScaleIO — a good fit for ViPR?

Dave Raffo

Word is coming out of Israel that EMC is deep in talks to acquire software-based storage startup ScaleIO for a price tag in the range of $200 million to $300 million. ScaleIO came out of stealth last December with software that it positions as a virtual storage appliance for enterprises and cloud service providers.

ScaleIO would fit the software-defined storage strategy that EMC laid out for its ViPR software last month at EMC World. EMC’s vision for ViPR combines the ability to pool storage across any hardware with cloud management capabilities. ScaleIO’s Elastic Converged Storage (ECS) can help on both fronts.

ECS agents install on servers running hypervisors, databases and other applications. The software aggregates capacity on those servers, turning them into a large storage network. The concept is similar to the virtual storage appliance (VSA) approach taken by Hewlett-Packard’s StoreVirtual VSA, built on LeftHand technology, and VMware’s vSphere Storage Appliance, but those products are for small- and medium-sized businesses and small enterprises.

ECS would be a fit for public cloud storage, which tends to be more server-based than SAN- and NAS-focused enterprise storage. ScaleIO founder and CEO Boaz Palgi claimed at launch that he had one customer using ECS on 260 nodes. He said customers can add nodes on the fly and the software automatically rebalances performance.
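
ScaleIO has not published how its placement and rebalancing work, so the following is only a sketch of one common technique, consistent hashing, that shows why adding a node can rebalance a cluster by moving just a small fraction of the data:

    import bisect
    import hashlib

    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    class ConsistentHashRing:
        """Sketch: with N nodes, adding one moves roughly 1/(N+1) of the data."""

        def __init__(self, nodes, vnodes=100):
            # Each node appears at many pseudo-random points on the ring.
            self.ring = sorted((_hash(f"{n}:{v}"), n)
                               for n in nodes for v in range(vnodes))
            self.keys = [h for h, _ in self.ring]

        def node_for(self, block_id):
            # The first ring position at or after the block's hash owns it.
            i = bisect.bisect(self.keys, _hash(block_id)) % len(self.keys)
            return self.ring[i][1]

    ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
    print(ring.node_for("volume7/block42"))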

It’s too soon to say how much of ScaleIO technology would end up in ViPR, but it’s unlikely that EMC would make a software buy now without considering it part of its software-defined strategy.

“This possible move leaves us to think more deeply about EMC’s willingness to aggressively step in front of potential meaningful disruptive secular changes,” Stifel Nicolaus Equity Research financial analyst Aaron Rakers wrote in a report to clients today. Rakers added that he sees the possible acquisition as “another move to position itself against the evolution toward open source solutions, public cloud providers and other software vendors pushing into software-defined storage.”

ScaleIO, which is based in Israel but has a Palo Alto, Calif. office, raised $12 million in funding from Greylock Partners, Norwest Venture Partners (NVP) and private investors.


June 18, 2013  3:02 PM

Nutanix leaves empty seat on software-defined storage bandwagon

Dave Raffo

There is at least one vendor that declines to define its software-driven storage technology as software-defined storage.

Despite its reliance on software for differentiation, hyper-converged storage vendor Nutanix refrains from using the software-defined storage tag that many vendors are embracing.

Greg Smith, Nutanix senior director of product and technical marketing, said Nutanix qualifies for that label because it delivers storage as software natively in the virtualization tier so “the virtualization manager no longer has to manage software, it becomes invisible. But we think there’s enough tangible value to customers that we do not need to resort to that level of marketing.”

Nutanix shifted marketing gears with its product name, switching from the Nutanix Complete Cluster to Nutanix Virtual Computing Platform. Smith explained the change by saying “as our product category evolves, we need to describe a class of solutions that converges compute and storage at a platform or appliance level.”

However Nutanix is marketing its systems, it seems to be working. Nutanix claims it is on an annualized run rate of over $80 million in revenue, which means it had more than $20 million in sales in the first quarter of 2013. That’s a healthy number for a startup, especially in a quarter when storage disk sales declined year-over-year for the first time in four years.


June 11, 2013  3:56 PM

NetApp scales up Clustered Ontap

Dave Raffo

NetApp unleashed Clustered Data Ontap 8.2 today, using the launch to again make its case as the king of software-defined storage.

Since EMC revealed its ViPR software-defined storage technology plan last month, NetApp executives have claimed they do many of the same things in Clustered Data Ontap.

Part of 8.2 is about making its quality of service more granular and improving scalability to support 69 PB of storage across 24 controller nodes, 49,000 LUNs, and 12,000 NAS volumes serving over 100,000 clients. It supports 20 PB in a single container. NetApp also added SnapVault support, which was not available in Clustered Ontap 8.1.

The more important piece of the upgrade is enhanced storage virtual machines (SVMs), which have some of the capabilities that EMC is claiming for ViPR.

SVMs are virtualized storage arrays defined in Ontap that run inside NetApp FAS or V-Series controllers. Customers can grow them, shrink them, or move them on demand. Hundreds of SVMs can run on one piece of hardware, according to Brendon Howe, NetApp VP of product marketing.

SVMs evolved from the VirtualFiler, or vFiler, that NetApp added with Ontap 7. SVMs, however, are not tied to the underlying hardware. They can be moved across devices while retaining full Ontap storage services, Howe said.

Unlike NetApp’s Data Ontap Edge virtual storage appliances (VSAs) that run on server hardware virtualized by VMware vSphere, SVMs run on storage arrays.

Data Ontap Edge also plays a role in NetApp’s software-defined storage strategy, and the vendor plans to use Edge to deploy clustered Ontap on x86 hardware.

NetApp has supported storage virtualization through its V-Series controllers since 2006, before anybody called that software-defined storage. But Howe said NetApp has taken a different approach to pooling storage than other vendors, and its version fits the software-defined storage label.

“Software-defined storage is closely tied to traditional storage virtualization,” he said. “We’ve assured that all rich management capabilities of our storage systems are made available in that virtual layer, instead of federating systems. We said, ‘What if you pool systems and don’t sacrifice any functions of any systems in that pool?’

“Software-defined is an emerging discussion these days. I think it’s a discussion of how you enable services to be dynamically provisioned.”


June 10, 2013  12:40 PM

Battle for control of sTec heats up

Dave Raffo

Solid-state storage vendor sTec’s management team is in a fight to maintain control of the company ahead of next month’s board of directors election. The vendor and unhappy investment firm Balch Hill Partners today sent letters to shareholders making their cases for the candidates they have nominated for election.

Balch Hill is the largest independent shareholder of sTec, with approximately 10 percent of its common stock. The unhappy investors want to remove sTec’s CEO, former CEO and another director from the board. The Balch Hill letter called for shareholders to hold the sTec board accountable and charged it with destroying more than $1.3 billion of shareholder value. It also called for accountability for the company’s poor financial performance, pointing out that revenues dropped 56% in the last quarter from the previous year and 77% from two years ago. During the same time, sTec increased research and development spending 68% as it moves from an OEM sales model to selling direct and through VARs.

Balch Hill said that, if elected, its nominees would immediately hire an interim CEO and launch a search for a permanent CEO, re-engage large OEM customers, return the business model to an OEM focus, re-evaluate the company’s PCIe, SATA, I/O software and other business initiatives, and explore a sale of sTec if it cannot stand on its own.

STec chairman Kevin Daly sent a letter on the vendor’s behalf, claiming sTec is making progress with its new business plan and its goal of generating more than $200 million of revenue in 2014 (its 2012 revenue was $168.3 million, down from $308 million in 2011).

Balch Hill also said that sTec’s operating losses of more than $103 million last year and $25 million more in the first quarter of 2013 raise concerns that the vendor will run out of money by mid-2014.

“We believe the Company has lost incredible market share in the wake of increasing competition because the Board first failed to anticipate such market share losses and then, in response to rising competition, decided to pursue a flawed strategy that is focused on going after its direct end users (its customer’s customers) rather than trying to repair its relationships with its large storage OEM customers …” the letter read.

The Balch Hill letter also called sTec’s spending “excessive and ineffective.”

Balch Hill has nominated Adam Leventhal, Clark Masters and Eric Singer to replace three of sTec’s eight directors at the July 12 election. The investment firm is seeking to remove CEO Mark Moshayedi, his brother and former CEO Manouch Moshayedi, and Matthew Witte from the board. Manouch Moshayedi stepped down as CEO last year after the Securities and Exchange Commission (SEC) charged him with insider trading. The Balch Hill letter also claimed “Mark Moshayedi has a significant cloud over him regarding his questionable trading practices and the continued underperformance of the Company under his leadership.” Mark Moshayedi has not been charged by the SEC but was a subject of the original SEC investigation that brought charges against his brother. Mark Moshayedi was sTec’s president and COO during the period in which the SEC says his brother committed insider trading.

Balch Hill wants Witte removed because he has failed to “launch a proper CEO search” as chairman of the nominating and corporate governance committee. Witte’s committee has also failed to independently investigate the SEC’s claims, Balch Hill charged.

In an interview with SearchSolidStateStorage last month, Mark Moshayedi said his brother’s case “has nothing to do with the company. The company and I have been cleared of any wrongdoing. Obviously, Manouch has his case, and it’s something he’s dealing with.”

Daly’s letter today defended sTec’s new business plan that took effect this year.

“We are beginning to see significant traction from our new go-to-market strategy and are successfully diversifying our customer base to include enterprise end customers, a segment we believe is key to our growth objectives and long-term success,” Daly wrote.

“We believe that the successful execution of this carefully crafted strategy, with specifically delineated milestones over the course of the next few years, will deliver long-term value to all sTec shareholders.”

Daly also proposed adding Alan Baratz to the board as it expands from seven to eight directors. Baratz has served in senior management positions at Cisco, Sun, and storage startup Neopath Networks. Daly said sTec has offered to add two of Balch Hill’s candidates to the board, calling that “more than reasonable representation given their combined approximate 9.8% ownership position …” but that Balch Hill declined the offer.


June 7, 2013  11:58 AM

Panzura’s $25 million may be its last funding round

Sonia Lelii

Cloud storage gateway startup Panzura pocketed $25 million in a Series D funding round this week, and CEO Randy Chou said he expects it will be the last funding the company needs before becoming profitable.

Chou said he expects to double the company’s headcount to 200 people with the funding. He also plans to expand Panzura’s presence in Asia-Pacific and increase research and development for its products.

Panzura is one of several startups that popped up in the past few years selling gateways to move data to public clouds. The main function of their controllers is to translate object storage so it works with applications written to communicate with file and block storage. Others included Ctera Networks Ltd., TwinStrata, Nasuni Corp., and StorSimple Inc., which was acquired by Microsoft in late 2012.

Chou said Panzura’s business has accelerated since Microsoft purchased StorSimple last year. “The market picked up as a whole in the third quarter last year,” he said.

He said 75 percent of Panzura’s leads come from partners such as EMC, Hewlett-Packard, Hitachi Data Systems, Nirvanix, Dell, Google and Amazon.

In December, Panzura clinched a multi-million dollar deal with the Executive Office for U.S. Attorneys, a huge win that Chou said he hopes will open up doors for Panzura in other areas of government. He attributed the deal largely to Panzura’s Storage Controller receiving the Federal Information Processing Standard (FIPS) 140-2 security validation. Its product also supports Advanced Encryption Standard (AES) encryption. The company also has a deal with the Department of Justice.

Founded in July 2008, Panzura raised $6 million in a Series A funding in September 2008 and another $12 million in October 2010. Venture capital firm Meritech Capital Partners led the latest round with participation from previous investors Matrix Ventures, Khosla Ventures, Opus Capital and Chevron Technology Ventures.


June 5, 2013  8:35 AM

Beware software-defined lock-in

Randy Kerns

“Avoid vendor lock-in” has long been a mantra of competing vendors and their marketing promotions. Vendor lock-in is equated with a lack of choices or an impediment to making a change in the future. The lack of choices results in:

• Paying more for the next product or solution
• Failure to keep up with and benefit from new technology
• Reduced support or concern from the vendor in solving a problem.

These may be more fear-mongering from competing vendors than reality, although some companies have demonstrated the reprehensible behavior generally attributed to having customers “locked in.”

Recent marketing hype in the information systems and management industry has focused on “software-defined something” as a means to avoid vendor lock-in. In this case, the vendors mean lock-in with hardware. Other valuable attributes are added to the software-defined message, but the most basic argument is the flexibility of running software on generic (general purpose) hardware.

In the case of software-defined storage (which has a wide range of meanings depending on which vendor is talking), the software seeks to take the value out of storage systems. The message is that removing the value from the storage system and using generic hardware and devices removes lock-in to a particular vendor’s storage system. With the messaging that vendor lock-in is bad and costs more, the software-defined argument builds its value message on that fear.

But the real question is: did the lock-in just move somewhere else? Rather than a storage system that is replaceable, albeit with the effort to migrate data and change operational procedures, the lock-in may move to the software. In this case, the software determines where to place data and has control of all the fragments distributed across physical devices. The software in the storage system (embedded software or firmware, in an earlier generation’s lexicon) and software-defined storage are doing relatively the same thing at one level.
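
To see the equivalence, reduce “the software determines where to place data” to its essence: a placement map. The toy function below is purely illustrative; the dependency it creates is the point, because without the map, the fragments sitting on raw devices are unreadable:

    import zlib

    DEVICES = ["dev0", "dev1", "dev2"]   # hypothetical generic devices

    def place_fragments(object_id, n_fragments):
        """Spread an object's fragments round-robin across devices."""
        start = zlib.crc32(object_id.encode()) % len(DEVICES)
        return {i: DEVICES[(start + i) % len(DEVICES)]
                for i in range(n_fragments)}

    print(place_fragments("vm-image-42", 6))

Whether the map lives in an array controller’s firmware or in a software-defined layer, whoever holds it holds the lock-in.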

If lock-in (as defined earlier) is being moved from a vendor storage system to software, the impacts of that lock-in need to be evaluated. One consideration is the long-term financial impact. Software has a support cost, either from a vendor or from the IT staff in the case of open source. Additionally, some software is licensed based on capacity. These charges continue for as long as the software is in use. Storage systems are typically purchased and have a warranty that is often negotiated as part of the sale. It is common to get a four-year or five-year warranty. After that time, there is a maintenance charge. Some of the value-added features of the storage system are separately licensed, which may be annualized or capacity-based. This is a competitive area, however, and some vendors include the value-add software for their systems in the base price.
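
A back-of-the-envelope comparison of the two cost shapes makes the point; all figures below are invented for illustration:

    # Ten-year cost of the two lock-in shapes described above.
    def system_cost(purchase, warranty_years, annual_maint, years=10):
        """Storage system: pay up front, maintenance after the warranty."""
        return purchase + annual_maint * max(0, years - warranty_years)

    def software_cost(per_tb_year, start_tb, growth, years=10):
        """Capacity-licensed software: charges recur and grow with the data."""
        return sum(per_tb_year * start_tb * (1 + growth) ** y
                   for y in range(years))

    print(f"system:   ${system_cost(250_000, 5, 30_000):,.0f}")
    print(f"software: ${software_cost(1_000, 100, 0.25):,.0f}")

With capacity-based licensing and steady data growth, the recurring charges can eventually dwarf a one-time purchase plus post-warranty maintenance.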

Storage systems have had a consistent price decline over the years, transferring the economics of improving technology and the effect of competition to the customers. Software typically does not have commensurate price reductions. It is seen as an annuity for the vendor for maintaining and updating.

The vendor lock-in message triggers emotion and rapid conclusions that may not represent reality. Deeper analysis of specific situations is required. The value of “compartmentalizing” information handling, allowing technology transitions or transformations rather than massive infrastructure changes that become inhibitors, cannot be discarded from consideration. The vendor lock-in message is not really that simple, and attributing the next new thing as the answer is not well thought out.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

