Storage Soup


July 3, 2013  11:21 AM

Struggling FalconStor shuffles CEOs again

Dave Raffo

FalconStor has been looking for a buyer for seven months. Now it may be looking for a CEO, too.

Jim McNiel suddenly resigned as CEO and president Monday and gave up his seat on the board. FalconStor named Gary Quinn interim CEO. The company did not say if it is conducting a search for a permanent CEO.

FalconStor may not need to find a full-time replacement. It could give the job to Quinn eventually or it may not need one at all. The storage software vendor last November revealed it hired investment banker Wells Fargo to pursue strategic options, which could be a sale or funding. Quinn could merely be a caretaker until there is a sale. Then again, McNiel’s resignation could be a sign that there are no serious bidders for FalconStor.

McNiel replaced FalconStor founder ReiJane Huai as CEO in 2010 after Huai’s role in a customer bribery scandal came to light. Huai committed suicide a year later. FalconStor agreed to pay $5.8 million in June 2012 to settle criminal and civil charges that it bribed JP Morgan Chase to buy its software.

The bribery scandal came with FalconStor already reeling following the loss of OEM partners for its virtual tape library (VTL) software. Most of the major storage players used FalconStor software before developing or acquiring their own data deduplication software for backup.

FalconStor lost $3.4 million in the first quarter of 2013 and its $15.3 million in revenue was down 21% from the same quarter last year. With $28.2 million in operating cash, the vendor cannot sustain those kinds of losses for long without getting acquired or finding funding.

But in some ways, FalconStor is operating as if it will be around long-term. It hired Rob Zecha as chief product officer in April, putting him in charge of product management, quality assurance, software development, engineering and research and development. And last month, FalconStor signed an OEM deal to sell its VTL software on IBM hardware.

Quinn joined FalconStor in April 2012 as vice president of North America sales and marketing and was promoted to executive vice president and COO in April 2013. He previously held executive positions at software vendor CA Technologies and Suffolk County, NY.

June 28, 2013  8:01 AM

Common cloud storage topics: archiving and gateways

Randy Kerns

Two storage topics always enter the discussion when talking about the cloud: archiving and the use of gateways to adapt on-premises environments.

Archiving information to a cloud location is interesting for several reasons:

  • The growth of file data precipitates a discussion of what to do with all the files over time. IT is hesitant to establish and enforce deletion policies, and business owners are either unmotivated or unwilling to address the increasing capacity. Moving files that are rarely or never accessed off to a cloud can free up IT resources related to storage systems, data protection processes and administration. It can also allow IT to transfer the charge for storing the inactive data back to the business owners, which may prompt them to establish retention policies. (A sketch of this cold-file sweep follows the list.)
  • Retained backups are a good fit for the cloud. These are kept to satisfy retention governance or business practices, not to recover individual files that are damaged or deleted. The long-term nature of retained backups and the unlikely event they will be required for immediate use make them valuable targets for cloud storage.
  • Sharing information across an enterprise with geographically dispersed access requirements looks to be an area IT can address with a software service built on cloud providers.
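
To make the first bullet concrete, here is a minimal sketch of a cold-file sweep in Python, assuming an S3-style target reached through boto3. The bucket name, age threshold and delete-after-archive behavior are all illustrative assumptions, not a description of any product:

```python
# Sketch: archive files untouched for a year to object storage.
# Hypothetical bucket and threshold; boto3 and credentials assumed.
import os
import time
import boto3

ARCHIVE_AGE_SECS = 365 * 24 * 3600   # "rarely or never accessed" cutoff
s3 = boto3.client("s3")

def archive_cold_files(root, bucket):
    cutoff = time.time() - ARCHIVE_AGE_SECS
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # st_atime is the last recorded access time (may be coarse
            # or disabled on mounts using noatime)
            if os.stat(path).st_atime < cutoff:
                key = os.path.relpath(path, root)
                s3.upload_file(path, bucket, key)
                os.remove(path)  # reclaim primary storage once archived

archive_cold_files("/shares/projects", "example-archive-bucket")
```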

Examining how to move data to the cloud usually leads IT to two primary choices: applications written for specific uses that manage cloud storage themselves, or a gateway device used with existing applications. Both approaches are viable, but a gateway is the less disruptive to existing environments. The gateway presents itself as a traditional NAS system with NFS and CIFS/SMB access and manages moving data to and from a cloud service provider as required.

Gateways can be storage systems or appliances that include the hardware and pre-loaded, pre-configured software, or a software application that IT must install, configure and support. Some software gateways are delivered as virtual machines, which reduces the installation effort. No matter which delivery method is used, the gateway must have a policy engine that lets IT administrators set rules for data movement and manage access to the cloud service provider, along with billing and chargeback capabilities.
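
As an illustration of what such a policy engine's rules might look like, here is a hypothetical sketch in Python. The rule fields (suffix match, idle days, provider, chargeback code) are our assumptions, not any particular gateway's schema:

```python
# Sketch of a gateway tiering policy engine. Rule fields are hypothetical;
# real gateways expose similar controls through their own management tools.
from dataclasses import dataclass

@dataclass
class Rule:
    match_suffix: str     # which files the rule applies to
    min_idle_days: int    # move to cloud after this many idle days
    provider: str         # target cloud service
    chargeback_code: str  # cost center billed for the cloud capacity

RULES = [
    Rule(".bak", 30, "provider-a", "IT-BACKUP"),
    Rule(".mp4", 90, "provider-b", "MARKETING"),
]

def placement(filename: str, idle_days: int) -> str:
    for rule in RULES:
        if filename.endswith(rule.match_suffix) and idle_days >= rule.min_idle_days:
            return f"{rule.provider} (bill to {rule.chargeback_code})"
    return "keep on local NAS tier"

print(placement("q2-campaign.mp4", 120))  # -> provider-b (bill to MARKETING)
```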

There are several factors that can differentiate gateways:

  • Performance in moving and retrieving data must match the use case. If the gateway is used for more active data, performance can be a big concern. The more typical use for inactive data reduces that concern, except in initial deployments where a large amount of data may need to move to the cloud at the start.
  • Cost of storage and retrieval grows over time with usage and the amount of data offloaded to the cloud. (A toy cost model follows this list.)
  • Migration of data from one cloud provider to another is a concern as the provider landscape changes. Changes in a provider’s pricing or poor service may necessitate a switch. The ability to seamlessly transfer data between cloud providers lessens the impact on IT and addresses a major worry.
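
To make the cost-growth point concrete, here is a toy year-one cost model. The per-GB prices and data volumes are invented for illustration, not any provider's rates:

```python
# Toy cost model for the "cost grows with usage" point above.
PRICE_PER_GB_MONTH = 0.03      # storage (assumed rate)
PRICE_PER_GB_RETRIEVED = 0.09  # egress/retrieval (assumed rate)

def yearly_cost(start_tb, monthly_growth_tb, retrieved_tb_per_month):
    total = 0.0
    stored = start_tb * 1024          # TB -> GB
    for _ in range(12):
        total += stored * PRICE_PER_GB_MONTH
        total += retrieved_tb_per_month * 1024 * PRICE_PER_GB_RETRIEVED
        stored += monthly_growth_tb * 1024
    return total

# 50 TB archived up front, 5 TB added per month, 1 TB pulled back monthly
print(f"${yearly_cost(50, 5, 1):,.0f} in year one")  # roughly $30,000
```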

Archiving data into the cloud, whether with application software developed for that purpose or with gateways (sometimes called on-ramps) in front of existing applications, is usually done as a funded project. Either the project is a new application deployment, a business operation being re-engineered, or a funded initiative inside IT to improve operations. Ultimately, the funding of the program dictates the action, because it requires strategy, planning and project management in addition to the operational process changes.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


June 20, 2013  8:51 AM

EMC eyes ScaleIO — a good fit for ViPR?

Dave Raffo

Word is coming out of Israel that EMC is deep in talks to acquire software-based storage startup ScaleIO for a price tag in the range of $200 million to $300 million. ScaleIO came out of stealth last December with software that it positions as a virtual storage appliance for enterprises and cloud service providers.

ScaleIO would fit the software-defined storage strategy that EMC laid out for its ViPR software last month at EMC World. EMC’s vision for ViPR combines the ability to pool storage across any hardware with cloud management capabilities. ScaleIO’s Elastic Converged Storage (ECS) can help on both fronts.

ECS agents install on servers running hypervisors, databases and other applications. The software aggregates capacity on those servers, turning them into a large storage network. The concept is similar to the virtual storage appliance (VSA) approach taken by Hewlett-Packard’s StoreVirtual VSA, built on LeftHand technology and VMware’s vSphere Storage Appliance, but those products are for small- and medium-sized businesses and small enterprises.

ECS would be a fit for public cloud storage, which tends to be more server-based than SAN- and NAS-focused enterprise storage. ScaleIO founder and CEO Boaz Palgi claimed at launch that he had one customer using ECS on 260 nodes. He said customers can add nodes on the fly and the software automatically rebalances performance.
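
ScaleIO has not published its placement internals, so as a purely generic illustration of why a cluster can add nodes on the fly and rebalance while moving only a small fraction of data, here is a toy consistent-hashing placement in Python (not ScaleIO's actual algorithm):

```python
# Generic illustration: with consistent hashing, adding one node to a
# 260-node cluster relocates only about 1/261 of the data chunks.
import hashlib
from bisect import bisect

def ring(nodes, vnodes=100):
    points = []
    for n in nodes:
        for v in range(vnodes):
            h = int(hashlib.md5(f"{n}:{v}".encode()).hexdigest(), 16)
            points.append((h, n))
    return sorted(points)

def owner(points, key):
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    i = bisect(points, (h, "")) % len(points)  # first point at or past h
    return points[i][1]

keys = [f"chunk-{i}" for i in range(10000)]
before = ring([f"node-{i}" for i in range(260)])
after = ring([f"node-{i}" for i in range(261)])  # add one node on the fly
moved = sum(owner(before, k) != owner(after, k) for k in keys)
print(f"{moved / len(keys):.1%} of chunks relocate")
```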

It’s too soon to say how much of ScaleIO technology would end up in ViPR, but it’s unlikely that EMC would make a software buy now without considering it part of its software-defined strategy.

“This possible move leaves us to think more deeply about EMC’s willingness to aggressively step in front of potential meaningful disruptive secular changes,” Stifel Nicolaus Equity Research financial analyst Aaron Rakers wrote in a report to clients today. Rakers added that he sees the possible acquisition as “another move to position itself against the evolution toward open source solutions, public cloud providers and other software vendors pushing into software-defined storage.”

ScaleIO, which is based in Israel but has a Palo Alto, Calif. office, raised $12 million in funding from Greylock Partners, Norwest Venture Partners (NVP) and private investors.


June 18, 2013  3:02 PM

Nutanix leaves empty seat on software-defined storage bandwagon

Dave Raffo

There is at least one vendor that declines to define its software-driven storage technology as software-defined storage.

Despite its reliance on software for differentiation, hyper-converged storage vendor Nutanix refrains from using the software-defined storage tag that many vendors are embracing.

Greg Smith, Nutanix senior director of product and technical marketing, said Nutanix qualifies for that label because it delivers storage as software natively in the virtualization tier so “the virtualization manager no longer has to manage software, it becomes invisible. But we think there’s enough tangible value to customers that we do not need to resort to that level of marketing.”

Nutanix shifted marketing gears with its product name, switching from the Nutanix Complete Cluster to Nutanix Virtual Computing Platform. Smith explained the change by saying “as our product category evolves, we need to describe a class of solutions that converges compute and storage at a platform or appliance level.”

However Nutanix is marketing its systems, it seems to be working. Nutanix claims it is on an annualized run rate of over $80 million in revenue, which means it had more than $20 million in sales in the first quarter of 2013. That’s a healthy number for a startup, especially in a quarter when disk storage sales declined year-over-year for the first time in four years.


June 11, 2013  3:56 PM

NetApp scales up Clustered Ontap

Dave Raffo

NetApp unleashed Clustered Data Ontap 8.2 today, using the launch to again make its case as the king of software-defined storage.

Since EMC revealed its ViPR software-defined storage technology plan last month, NetApp executives have claimed they already do much of the same in Clustered Data Ontap.

Part of 8.2 is about making quality of service more granular and improving scalability to support 69 PB of storage across 24 controller nodes, 49,000 LUNs and 12,000 NAS volumes serving more than 100,000 clients. It supports 20 PB in a single container. NetApp also added support for SnapVault, which was not available in Clustered Ontap 8.1.

The more important piece of the upgrade is enhanced storage virtual machines (SVMs), which have some of the capabilities that EMC is claiming for ViPR.

SVMs are virtualized storage arrays defined in Ontap that run inside NetApp FAS or V-Series controllers. Customers can grow them, shrink them, or move them on demand. Hundreds of SVMs can run on one piece of hardware, according to Brendon Howe, NetApp VP of product marketing.

SVMs evolved from the VirtualFiler, or vFiler, that NetApp added with Ontap 7. SVMs, however, are not tied to the underlying hardware. They can be moved across devices while retaining full Ontap storage services, Howe said.
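
As a toy illustration of the concept (not NetApp's implementation), an SVM can be modeled as an object whose identity and volumes are decoupled from the controller it currently occupies:

```python
# Toy model of the SVM idea: virtual arrays that grow, shrink and move
# across physical controllers. Names and fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class SVM:
    name: str
    capacity_tb: int
    controller: str          # current physical home
    volumes: list = field(default_factory=list)

    def resize(self, delta_tb: int):
        # grow or shrink on demand; data services are unaffected
        self.capacity_tb = max(0, self.capacity_tb + delta_tb)

    def move(self, new_controller: str):
        # identity, volumes and policies travel with the SVM,
        # not with the hardware underneath it
        self.controller = new_controller

svm = SVM("tenant-a", capacity_tb=20, controller="fas-01")
svm.resize(+10)
svm.move("fas-02")
print(svm)
```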

Unlike NetApp’s Data Ontap Edge virtual storage appliances (VSAs) that run on server hardware virtualized by VMware vSphere, SVMs run on storage arrays.

Data Ontap Edge also plays a role in NetApp’s software-defined storage strategy, and the vendor plans to use Edge to deploy clustered Ontap on x86 hardware.

NetApp has supported storage virtualization through its V-Series controllers since 2006, before anybody called that software-defined storage. But Howe said NetApp has taken a different approach to pooling storage than other vendors and its version fits the software-defined storage label.

“Software-defined storage is closely tied to traditional storage virtualization,” he said. “We’ve assured that all rich management capabilities of our storage systems are made available in that virtual layer, instead of federating systems. We said, ‘What if you pool systems and don’t sacrifice any functions of any systems in that pool?’

“Software-defined is an emerging discussion these days. I think it’s a discussion of how you enable services to be dynamically provisioned.”


June 10, 2013  12:40 PM

Battle for control of sTec heats up

Dave Raffo

Solid-state storage vendor sTec’s management team is in a fight to maintain control of the company ahead of next month’s board of directors election. The vendor and unhappy investment firm Balch Hill Partners today sent letters to shareholders making their cases for the candidates they have nominated for election.

Balch Hill is the largest independent shareholder of sTec, with approximately 10 percent of its common stock. The unhappy investors want to remove sTec’s CEO, former CEO and another director from the board. The Balch Hill letter called for shareholders to hold the sTec board accountable and charged it with destroying more than $1.3 billion of shareholder value. It also called for accountability for the company’s poor financial performance, pointing out that revenues dropped 56% in the last quarter from the previous year and 77% from two years ago. During the same time, sTec increased research and development spending 68% as it moves from an OEM sales model to selling direct and through VARs.

Balch Hill said, if elected, its nominees would immediately hire an interim CEO and launch a search for a permanent CEO, re-engage large OEM customers and return the business model to an OEM focus, re-evaluate the company’s PCIe, SATA, I/O software and other business initiatives, and explore a sale of sTec if it cannot stand on its own.

STec chairman Kevin Daly sent a letter on the vendor’s behalf, claiming sTec is making progress with its new business plan and its goal of generating more than $200 million of revenue in 2014 (its 2012 revenue was $168.3 million, down from $308 million in 2011).

Balch Hill also said that sTec’s operating losses of more than $103 million for last year and $25 million more in the first quarter of 2013 raise concerns that the vendor will run out of money by mid-2014.

“We believe the Company has lost incredible market share in the wake of increasing competition because the Board first failed to anticipate such market share losses and then, in response to rising competition, decided to pursue a flawed strategy that is focused on going after its direct end users (its customer’s customers) rather than trying to repair its relationships with its large storage OEM customers …” the letter read.

The Balch Hill letter also called sTec’s spending “excessive and ineffective.”

Balch Hill has nominated Adam Leventhal, Clark Masters and Eric Singer to replace three of sTec’s eight directors at the July 12 election. The investment firm is seeking to remove CEO Mark Moshayedi, his brother and former CEO Manouch Moshayedi and Matthew Witte from the board. Manouch Moshayedi stepped down as CEO last year after the Securities and Exchange Commission (SEC) charged him with insider trading. The Balch Hill letter also claimed “Mark Moshayedi has a significant cloud over him regarding his questionable trading practices and the continued underperformance of the Company under his leadership.” Mark Moshayedi has not been charged by the SEC but was a subject of the original SEC investigation that brought charges against his brother. Mark Moshayedi was sTec’s president and COO during the period covered by the SEC’s insider trading charges against his brother.

Balch Hill wants Witte removed because he has failed to “launch a proper CEO search” as chairman of the nominating and corporate governance committee. Witte’s committee has also failed to independently investigate the SEC’s claims, Balch Hill charged.

In an interview with SearchSolidStateStorage last month, Mark Moshayedi said his brother’s case “has nothing to do with the company. The company and I have been cleared of any wrongdoing. Obviously, Manouch has his case, and it’s something he’s dealing with.”

Daly’s letter today defended sTec’s new business plan that took effect this year.

“We are beginning to see significant traction from our new go-to-market strategy and are successfully diversifying our customer base to include enterprise end customers, a segment we believe is key to our growth objectives and long-term success,” Daly wrote.

“We believe that the successful execution of this carefully crafted strategy, with specifically delineated milestones over the course of the next few years, will deliver long-term value to all sTec shareholders.”

Daly also proposed adding Alan Baratz to the board as it expands from seven to eight directors. Baratz has served in senior management positions at Cisco, Sun, and storage startup Neopath Networks. Daly said sTec has offered to add two of Balch Hill’s candidates to the board, calling that “more than reasonable representation given their combined approximate 9.8% ownership position …” but that Balch Hill declined the offer.


June 7, 2013  11:58 AM

Panzura’s $25 million may be its last funding round

Sonia Lelii

Cloud storage gateway startup Panzura pocketed $25 million in a Series D funding round this week, and CEO Randy Chou said he expects it will be the last funding the company needs before becoming profitable.

Chou said he expects to double the company’s headcount to 200 people with the funding. He also plans to expand Panzura’s presence in Asia-Pacific and increase research and development for its products.

Panzura is one of several startups that popped up in the past few years selling gateways to move data to public clouds. The main function of their controllers is to translate object storage to work with applications written to communicate with file and block storage. Others included Ctera Networks Ltd., TwinStrata, Nasuni Corp., and StorSimple Inc., which was acquired by Microsoft in late 2012.

Chou said Panzura’s business has accelerated since Microsoft purchased StorSimple last year. “The market picked up as a whole in the third quarter last year,” he said.

He said 75 percent of Panzura’s leads come from partners such as EMC, Hewlett-Packard, Hitachi Data Systems, Nirvanix, Dell, Google and Amazon.

In December, Panzura clinched a multi-million dollar deal with the Executive Office for U.S. Attorneys, a huge win that Chou said he hopes will open up doors for Panzura in other areas of government. He attributed the deal largely to Panzura’s Storage Controller receiving the Federal Information Processing Standard (FIPS) 140-2 security validation. Its product also supports Advanced Encryption Standard (AES) encryption. The company also has a deal with the Department of Justice.
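
For context on the AES point: gateways in this class typically encrypt data before it leaves the customer's site. Here is a minimal client-side AES-256-GCM sketch using Python's third-party cryptography package; the key handling and framing are illustrative assumptions, not a description of Panzura's implementation:

```python
# Minimal sketch of client-side AES-256-GCM encryption ahead of a cloud
# upload. Illustrative only; requires the "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # keep this in a key manager
aesgcm = AESGCM(key)

def seal(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                  # must be unique per message
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def unseal(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

blob = seal(b"case file contents")
assert unseal(blob) == b"case file contents"
```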

Founded in July 2008, Panzura raised $6 million in a Series A funding in September 2008 and another $12 million in October 2010. Venture capitalist Meritech Capital Partners led the latest round with participation from previous investors Matrix Ventures, Khosla Ventures, Opus Capital and Chevron Technology Ventures.


June 5, 2013  8:35 AM

Beware software-defined lock-in

Randy Kerns

“Avoid vendor lock-in” has long been a mantra of vendors and their marketing promotions. Vendor lock-in is equated with a lack of choices or an impediment to making a change in the future. The lack of choices results in:

• Paying more for the next product or solution
• Failure to keep up with and benefit from new technology
• Reduced support or concern from the vendor in solving a problem.

These may be more fear-mongering from competing vendors than reality, although some companies have demonstrated reprehensible behavior toward customers they regard as “locked in.”

Recent marketing hype in the information systems and management industry has focused on “software-defined something” as a means to avoid vendor lock-in. In this case, the lock-in meant is hardware lock-in. Other valuable attributes get added to the software-defined message, but the most basic argument is the flexibility of running software on generic (general-purpose) hardware.

In the case of software-defined storage (which has a wide range of meanings depending on which vendor is talking), the software seeks to take the value out of storage systems. The message is that removing the value from the storage system and using generic hardware and devices removes lock-in to a particular vendor’s storage system. Combined with the messaging that vendor lock-in is bad and costs more, the software-defined argument builds an appealing value story.

But the real question is: did the lock-in just move somewhere else? Rather than a storage system that is replaceable, albeit with the effort of migrating data and changing operational procedures, the lock-in may move to the software. In this case, the software determines where to place data and controls all the fragments distributed across physical devices. The software in the storage system (embedded software, or firmware in an earlier-generation lexicon) and software-defined storage are doing relatively the same thing at one level.

If lock-in (as defined earlier) is being moved from a vendor storage system to software, the impacts of that lock-in need to be evaluated. One consideration is the long-term financial impact. Software has a support cost, either from a vendor or from the IT staff in the case of open source. Additionally, some software is licensed based on capacity. These charges continue for as long as the software is in use. Storage systems are typically purchased with a warranty that is often negotiated as part of the sale; it is common to get a four-year or five-year warranty. After that time, there is a maintenance charge. Some value-added features of the storage system are separately licensed, which may be annualized or capacity-based. This is a competitive area, however, and some vendors include the value-add software for their systems in the base price.
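
A toy five-year comparison makes the trade-off concrete; every number below is an invented assumption, not a market price:

```python
# Toy five-year cost comparison for the trade-off above. All prices are
# invented assumptions for illustration, not market quotes.
def storage_system_cost(purchase, warranty_years, maint_per_year, years=5):
    # purchased system: warranty covers early years, maintenance after
    return purchase + maint_per_year * max(0, years - warranty_years)

def capacity_license_cost(per_tb_year, start_tb, growth_tb_year, years=5):
    # capacity-based software license: the charge recurs and grows
    return sum(per_tb_year * (start_tb + growth_tb_year * y)
               for y in range(years))

print(storage_system_cost(purchase=250_000, warranty_years=4,
                          maint_per_year=30_000))        # -> 280,000
print(capacity_license_cost(per_tb_year=400, start_tb=100,
                            growth_tb_year=50))          # -> 400,000
```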

Storage systems have had a consistent price decline over the years, transferring the economics of improving technology and the effect of competition to the customers. Software typically does not have commensurate price reductions. It is seen as an annuity for the vendor for maintaining and updating.

The vendor lock-in message triggers emotion and rapid conclusions that may not reflect reality. Deeper analysis of specific situations is required. The value of “compartmentalizing” information handling, allowing technology transitions rather than massive infrastructure changes that become inhibitors, cannot be discarded from consideration. The vendor lock-in message is not really that simple, and declaring the next new thing to be the answer is not well thought out.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


May 24, 2013  8:09 AM

Scale-out NAS becoming an enterprise fixture

Randy Kerns

Enterprise systems with scale-out capability have been making an impact in IT environments and are a consideration in almost every evaluation of client storage strategies.

Although there are scale-out implementations of block and object storage, NAS has been the primary focus for enterprise scale-out storage deployments. Scale-out products range from the enterprise down to the SMB market. Some high-end scale-out NAS systems such as EMC Isilon and Hitachi Data Systems HNAS have made a transition from high performance computing (HPC) to enterprise IT.

Benefits of using scale-out NAS include:

• Performance scales in parallel with capacity so increases in capacity do not cause performance impacts requiring additional administrative effort to diagnose and correct.
• The continued increase in unstructured data can be addressed within a single administrative system, without increasing administrative effort and cost.
• New technology elements can be introduced and older ones retired without having to offload and reload data.

Not all NAS systems offered today are scale out. Traditional dual-node controller NAS systems still fit many customer needs, and are usually kept as separate platforms from scale-out systems. It is easier to design a new scale-out NAS system than to adapt an existing design and maintain the high-value features, although NetApp has shown that new technology can be introduced and adapted with its Clustered Data ONTAP systems.

A common approach to scale-out NAS is to adapt a distributed file system originally used in HPC and research environments. Considering the success vendors are having with their scale-out NAS offerings, it seems inevitable that a majority of enterprise NAS systems will be multi-node, scale-out systems.

Vendors have several terms for scale-out NAS and scale-out storage in general. A look at vendor product offerings turns up the terms clustered NAS, federated systems and distributed systems. These are mostly marketing terms aimed at creating a unique identity for each vendor’s products. They are more likely to create confusion.

While scale-out block storage may be more difficult to implement than NAS because of the host interface connection and greater latency demands, the implementations provide the same value to IT customers. The measure is the number of nodes in the system and how the nodes are organized, such as in pairs or in an N+1 protection arrangement.
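
As generic arithmetic on that measure (no particular vendor's numbers), the protection arrangement directly sets the usable fraction of raw capacity:

```python
# Generic arithmetic on node organization, following the point above.
def usable_fraction(nodes: int, scheme: str) -> float:
    if scheme == "pairs":   # mirrored pairs: half the raw capacity
        return 0.5
    if scheme == "n+1":     # one node's worth of capacity held for protection
        return (nodes - 1) / nodes
    raise ValueError(scheme)

for n in (4, 8, 16):
    print(n, "nodes:", f"pairs={usable_fraction(n, 'pairs'):.0%}",
          f"N+1={usable_fraction(n, 'n+1'):.0%}")
```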

Scale-out NAS, and scale-out storage in general, is becoming prevalent because of the value it delivers. Vendors will continue to develop products that meet customer needs, and more scale-out systems should be expected.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


May 23, 2013  9:46 AM

NetApp CEO: We invented software-defined storage

Dave Raffo

Software-defined storage is gaining a lot of attention these days, especially after EMC revealed plans for ViPR at EMC World earlier this month. Now EMC rival NetApp is taking credit for being a “pioneer” of the technology long before anybody from EMC or any other storage vendor used the term.

During NetApp’s earnings call with analysts Tuesday, CEO Tom Georgens cited the storage virtualization capability of the Data Ontap operating system as a prime example of software-defined storage. NetApp V-Series gateways can virtualize storage arrays from other major vendors.

“NetApp pioneered this value proposition with our Data OnTap operating system,” Georgens said. “For the last decade, we’ve been able to run OnTap on our hardware and other people’s hardware through V-Series.”

Georgens listed the ability to run OnTap in private clouds with Amazon Web Services and as a virtual machine in OnTap Edge as other examples of software-defined storage.

“This concept has been coined software-defined storage … only NetApp can deliver on the promise of software-defined storage today,” he said.

NetApp representatives have been making similar claims since EMC announced its ViPR software-defined storage offering at EMC World earlier this month. Because the definition of software-defined storage varies according to who’s defining it, Georgens offered his definition: “flexible storage resources that can be deployed on a wide range of hardware and provisioned and consumed based on policy directly by the application and development teams.”

Clustered OnTap, more than a decade in development after NetApp acquired clustered technology from Spinnaker, is part of NetApp’s software-defined storage story. Georgens said NetApp has almost 1,000 clustered customers.

Georgens also proclaimed NetApp a flash leader. He said NetApp has shipped 44 petabytes of flash in its arrays. However, its FlashRay all-flash array remains a roadmap item while others have had all-flash systems on the market for at least a year.

NetApp has to convince two sets of people that it is a technology innovator: customers and investors.

As customers go, NetApp’s revenue of $1.72 billion last quarter increased one percent from the same quarter last year and five percent over the previous quarter. That’s not bad, considering several large competitors said their sales slipped from last year, but not exactly a home run.

NetApp has struggled to keep investors happy. To make amends, NetApp announced a 900-person layoff, a quarterly dividend of 15 cents a share and a $1.6 billion increase in its stock repurchase program that brings the total to $3 billion. The moves came after a Bloomberg story claimed Elliott Management Corp. – which owns 16 million NetApp shares – called for NetApp to change its board and take steps to boost shareholder value. One of Elliott’s concerns was that NetApp’s technology hasn’t kept up with its rivals.

That puts Georgens in the position of announcing layoffs while pledging to be a tech leader.

“Last week was a difficult one for employees,” he said of the layoffs. “We are faced with the challenge of continuing to execute against our growth strategy while achieving our business and financial objectives in the context of a low-growth IT spending environment.”

His juggling act will be interesting to watch in the coming months.

