Storage Soup


August 7, 2013  9:42 PM

Caringo joins the Amazon API bandwagon

Sonia Lelii

Caringo Inc. recently launched CloudScaler 2.0, adding Amazon S3 support and transparent disaster recovery options for cloud service providers. The company now joins a growing list of vendors, including Cleversafe Inc. and Amplidata, that support Amazon APIs.

“This allows S3 API calls to pass through to the APIs of our object storage. S3 has done a very good job of building an ecosystem of supported applications,” said Adrian Herrera, Caringo’s senior director of marketing. “We definitely see a demand for (Amazon S3 APIs). We see it in the enterprise and also see demand from the service provider side.”

The updated version also offers customized and transparent disaster recovery capabilities, including more control over content location for disaster recovery and access, and automated local and geographic distribution of objects to multiple locations. This is particularly important for companies with compliance and regulatory concerns.

This most recent announcement follows Caringo’s integration of its CAStor object storage software with CTERA Cloud Attached Storage gateways and the CTERA Portal cloud storage platform.

The solution is now available via a perpetual license or an on-demand pricing model.

CloudScaler 2.0 offers enterprise features for cloud service providers, including CAStor’s WORM functionality, integrity seals and Elastic Content Protection, which provides replication and erasure coding simultaneously to meet any storage SLA. The 2.0 version allows most existing applications that support Amazon S3 to work seamlessly once reconfigured to send requests to CloudScaler. The cloud storage infrastructure can be expanded with no service downtime, with automated storage balancing.
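That reconfiguration typically amounts to pointing an existing S3 client at a different endpoint. Below is a minimal sketch in Python, assuming CloudScaler exposes an S3-compatible HTTPS endpoint; the hostname, bucket and credentials are placeholders rather than Caringo’s actual values.

    # Illustrative only: repointing a standard S3 client at a hypothetical
    # CloudScaler endpoint. Hostname, bucket and credentials are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://cloudscaler.example.com",   # hypothetical gateway URL
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # The application code itself is unchanged; the same S3 calls now go to CloudScaler.
    s3.put_object(Bucket="archive", Key="report.pdf", Body=b"example contents")
    print(s3.list_objects_v2(Bucket="archive")["KeyCount"])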

CloudScaler 2.0 also is interoperable with Citrix Cloud Portal Business Manager, which allows administrators to build clouds and manage and deliver any cloud or IT service through a single service platform. They can provision storage running in the cloud and aggregate cloud infrastructure.

“We plug in as a connector to give administrators a dashboard,” said Herrera. “We plug in the infrastructure into those management features.”

August 7, 2013  7:53 AM

Hidden costs of open source software

Randy Kerns

I had a long discussion with an IT director for a large company this past week, and we talked about his views on open source software. Basically, he considers open source information management software a much greater expense than purchased or site-licensed software.

That view was based on several factors:

• Information management software has a long usage period. The expectation is the software will be in use for at least a decade as part of the company’s operational procedures.

• Open source software requires integration and customization by the IT staff. Given recent staff reductions, his team did not really have that time to spare.

• First-level and sometimes second-level support would have to be performed by his staff to get the required responsiveness. He said the cost of paying for support from a third party negated the benefits of open-source software, and was difficult to defend when budgets were being scrutinized.

• The IT organization did not want to budget staff time for open source support. Past experience had shown this to require much more effort than forecast.

The bottom line was the economics did not really show benefit after taking all the operational expenses into account. This realization led to a policy of using information management software from established vendors with support and a history of longevity for their products.

He said it was a problem when staff brought open source software into the company. This created issues in the long term. Besides the economic impact, this caused a dependency on a single individual for keeping that application working. This led to establishing a policy where an individual who brought in unapproved software would be terminated immediately.

I told him the concept of open source and not having to pay a usage fee sounded so appealing – especially with so many bright people contributing to it and creating highly valuable software. He agreed with that but said it was still an economics issue and it was also much safer (meaning defensible to executives) to not use it.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


August 2, 2013  1:43 PM

Quantum still idling with DXi disk

Dave Raffo

The DXi data deduplication disk system still isn’t working out for Quantum as planned.

The DXi platform and StorNext archiving software are the main pillars of Quantum’s strategy to become a systems company instead of just a tape vendor. But the DXi is stuck in neutral. Quantum reported its disk revenue fell 4% last quarter due to poor enterprise sales, marking the second straight poor quarter for DXi. That was a drag on Quantum’s product revenue, which fell 8.3% to $86 million, while overall revenue grew 5% to $148 million.

Quantum CEO Jon Gacek said the DXi problem was a combination of large deals that didn’t close and fewer sales than expected to existing customers. He said the DXi 8500 enterprise system faltered while the DXi 6000 midrange revenue grew 23% over last year.

“Our real issue this quarter and last has been the DXi,” he said on the earnings call. “We did well with new customers and in the midrange, but we didn’t do as well with our installed base and bigger deals. These last two quarters have been more of a struggle.”

Gacek added “We still think the market is there,” and that he is happy with how the DXi stacks up against EMC Data Domain and other competitors. The problem, he said, is mainly that Quantum lacks the market reach to get into enough deals. He pointed out that EMC is in nearly every deal because it is a much larger company than Quantum.

Besides EMC, Quantum faces a newer competitor in Symantec, which is having success selling its backup software on pre-packaged hardware appliances. When Symantec sells a NetBackup appliance, its customer no longer needs a separate disk target such as the DXi. According to the most recent market figures from IDC, Symantec’s disk appliance revenue grew 149.3% to $102.3 million in the first quarter of this year. Quantum’s $18.3 million in revenue grew only 2.5%, compared to overall market growth of 16.5%. Quantum was fifth behind EMC, Symantec, IBM and Hewlett-Packard with 3.1% market share, and it doesn’t look like it gained share last quarter.

“Symantec is 50 percent of the backup market, and their product does impact us for sure,” Gacek said in an interview after the earnings call. “But their product does not scale as much or have high performance. Is it good enough? For a bunch of enterprises it is. We have them below us and EMC on the same level and above us. But there’s still a lot of market.”

Take away a one-time $15 million intellectual property deal with Microsoft, and Quantum would have missed its revenue target by $2 million. Still, Gacek said the quarter had its high points. While Quantum’s DXi revenue faltered, the company beat estimates for income and finished $3 million in the black for the quarter on solid tape sales and 13% StorNext growth. He’s still counting on better DXi sales and expects Quantum’s new Lattus object-based storage system to pick up steam soon. But he spent much of the earnings call defending the DXi results.

“Without sounding whiny, everybody’s always unhappy about something,” he said of investors and analysts. “Last year the DXi was killing it and we were getting hammered because of tape. Now DXi is off and tape is doing well, and DXi is what everybody wants to talk about.”


July 30, 2013  3:57 PM

NetApp, Cisco add FlexPod for Hadoop, make name changes

Dave Raffo

NetApp and Cisco expanded their FlexPod lineup today, re-naming their main and SMB reference architectures and adding a version specifically for Hadoop.

The FlexPod lineup now consists of FlexPod Datacenter (formerly FlexPod), FlexPod Express (formerly ExpressPod) for SMBs and FlexPod Select for data-intensive workloads. FlexPod Select for Hadoop is the first Select configuration, validated for Hadoop workloads with Cloudera and Hortonworks. It is also the first FlexPod to include NetApp E Series storage along with its flagship FAS unified storage platform.

FlexPod is an architecture built on existing products, so there is no new technology involved in these configurations. Even the new FlexPod Select looks a lot like the NetApp Open Solution for Hadoop that NetApp has sold for about a year now. Both FlexPod Select and Open Solution include NetApp FAS and E Series storage, the file system and the Hadoop application. With Open Solution, customers pick their own servers and networking, while the FlexPod version includes Cisco servers (UCS) and switching (Nexus, Catalyst and/or MDS). FlexPod Select uses FAS2220 and E5460 storage while Open Solution for Hadoop uses FAS2200 and E2600.

Other FlexPod updates include the Nexus 7000 switch and NetApp SnapProtect data protection software in the Datacenter version.

Brandon Howe, NetApp VP of product and solutions marketing, said future FlexPod Select architectures will focus on “high performance, bandwidth-hungry apps” such as video and other high-performance computing (HPC) workloads. He said the Select platform might also include all-flash versions.

FlexPod Express for SMBs has the same configuration as before. “Only the name is changing,” said Cisco VP of data center marketing Jim McHugh.

Howe said the name change came about because customers often did not understand ExpressPod was part of the FlexPod family. “People were saying, ‘we really like ExpressPod, it would be great if you could do this with FlexPod, too,’” he said.

Cisco and NetApp claim 2,400 FlexPod customers in the three years since they formalized the reference architectures. They see that as a success, although EMC president David Goulden mocked that number last week during EMC’s earnings call. Goulden said EMC has sold more than 3,600 VSPEX reference architectures since April 2012.

“In other words, in less than half the time we have sold more systems than another less flexible reference architecture that has been on the market for several years,” Goulden said.

Cisco is neutral in the EMC-NetApp reference architecture faceoff because it is also an EMC VSPEX partner and is a partner in the VCE company set up by EMC, VMware and Cisco to sell Vblock converged stacks.


July 29, 2013  2:20 PM

STec goes Micro while counting the days to HGST pickup

Dave Raffo

Western Digital/HGST’s planned acquisition of solid state drive (SSD) vendor sTec isn’t changing sTec’s product roadmap.

While waiting for the deal to close, sTec today launched what it claims is the first Micro (1.8-inch) SAS SSD, available in 200 GB and 400 GB capacities. STec is also now offering 256-bit AES-XTS encryption on the Micro SAS drive as well as its 2.5-inch SAS SSDs.

Western Digital said last month it has agreed to acquire sTec for its HGST enterprise drive subsidiary for $340 million and expects the deal to close by the end of 2013.

The Micro SAS drive is designed for blade servers and other high-density systems. Mark Rochlin, VP of sTec’s government and defense business, said the dense form factor and encryption make it a good fit for government and military customers.

There are two versions of the Micro SAS drive – encrypted and not encrypted.

Rochlin said security was especially important in the wake of the Bradley Manning and Edward Snowden incidents. “Security and information for the government is a big deal,” Rochlin said. “Vulnerabilities created by those breaches are every bit as dangerous as a bullet or a bomb. The encryption option makes it safe so if these devices fall into the wrong hands they can’t be exploited.”

Pricing for the Micro SAS SSD starts at $1,800 for 200 GB.

STec said the new drives will be available through its usual channel, although it’s uncertain what will happen to that channel after sTec becomes part of OEM-centric HGST.

“The reason HGST gave for buying sTec was our complementary technology, and HGST doesn’t have a product like this,” said Swapna Yasarupu, sTec’s director of SSD product marketing. “It’s business as usual for us. We’re continuing our channel reseller program.”


July 22, 2013  9:10 AM

Hand-me-down storage strategies evolve with solid state

Randy Kerns

When talking to clients about their IT needs and storage vendors about system development, I notice a change in the expected longevity of storage systems. This change is largely due to the rise of solid state storage systems.

Many of our high-end enterprise clients have had an established practice of using disk-based primary storage systems for three years in that role. After that, they move tier 1 application data to a new storage system and relegate the old system (now an ancient three years old) to secondary storage for two more years.

The five-year lifespan and the plan to update or upgrade storage systems were based on the reliability of the system (meaning how the component failure rate changes over time), the advantages of improved technology every few years, and the cost of maintenance once the warranty period expires. This has been built into depreciation schedules and business plans.

Now I’m seeing a new dynamic around how long people expect their storage systems to provide value. This is an outcome of the availability and maturing of solid-state storage systems and plans to use them with tier 1 applications and virtualization. Most vendors have made great strides in addressing wear-out issues and the reliability improvements are driving down the service cost estimates for flash. Some vendors are quoting longer warranty periods or reduced maintenance costs.

Our IT clients see the dramatic performance improvements from all-SSD systems. They are purchasing more systems and capacity while still getting the benefits from the initial system they deployed. The depreciation schedules can change and the main driver for purchasing is not for replacement, but for addition.

All-flash storage systems are changing the thinking around the longevity of storage systems. The planned hand-me-down process may no longer be the norm. Some vendors see the longevity as a competitive opportunity and are aggressively pursuing it with a longer warranty period, lower maintenance costs, or a more easily negotiated warranty extension. Other vendors have finance people who see maintenance revenue as a birthright annuity, and they are not changing. This is a competitive area, and using negative marketing to hold onto that annuity revenue will only work until competitors have made enough inroads with customers to take away the business.

This will be another interesting storage trend to watch develop. It will be an important competitive area, and it will expose vendors whose innovation and opportunity are held back by protecting that financial annuity while the competition moves forward.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


July 9, 2013  12:56 PM

Appliances add options — and decisions — for storage purchasers

Randy Kerns

We’re seeing many different vendor initiatives for appliances or special purpose applications that run inside storage systems. In contrast to general purpose storage systems, these appliances consist of special purpose applications running on servers. The most common type of appliance is for backup. Another type includes archive systems.

For backup systems, appliances add deduplication and compression to reduce the amount of data stored, and replication for making disaster recovery copies. Some integrated appliances come with the media server software and the backup application preloaded.
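As a rough illustration of the deduplication idea these appliances rely on, the toy sketch below hashes fixed-size chunks and stores only the chunks it has not seen before. Real appliances use variable-size chunking and far more sophisticated indexing, so treat this as conceptual only.

    # Toy fixed-block deduplication: split data into chunks, hash each chunk,
    # and keep only chunks that have not been stored before.
    import hashlib

    CHUNK_SIZE = 4096
    store = {}                     # chunk hash -> chunk bytes (deduplicated store)

    def write_stream(data):
        """Store `data` and return the list of chunk hashes that reference it."""
        refs = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:            # only new chunks consume space
                store[digest] = chunk
            refs.append(digest)
        return refs

    backup1 = write_stream(b"A" * 8192 + b"B" * 4096)
    backup2 = write_stream(b"A" * 8192 + b"C" * 4096)   # mostly duplicate data
    print(len(store), "unique chunks stored for", len(backup1) + len(backup2), "chunk references")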

With archive systems, value add features can include data movement software with a policy engine for selective controls and replication, functions for retention management, a search function with indexing, and compliance features such as immutability, audit trails, and security.

The value of these appliances comes from the integration enabling all the pieces to work together effectively for quick deployment, usually by IT staff. The appliance should be supported as a single system rather than independent elements. (If that’s not the case, that can be a problem.) The system may even be less expensive than putting the individual elements together.

The trade-offs of using general purpose storage with servers and software versus an integrated appliance are well understood. One issue raised is who has the authority to make the purchase. The decision to improve or deploy new backup technology may rest with the backup team, and selecting an appliance or system whose primary usage is backup may be much easier than using some of the primary storage under someone else’s control. The same holds true for archiving, which is usually a separate initiative with separate funding and not part of the backup area of responsibility in most large IT organizations.

The line between these trade-offs and roles blurs as vendors begin to offer storage systems with the ability to run selected applications on the storage controller. Now IT is faced with another choice: the purchase of a storage system that can have an application loaded in addition to the storage control function with embedded software. Implemented as a virtual machine in storage, it really is a similar choice to using general purpose storage with the application on a server, only now the server is in the storage system.

This raises issues around flexibility, support, customization, and potential lock-in. The organizational boundaries for purchasing will still hold and be the determining factor in many environments. But, it is another option to consider.

The option may be exploited effectively by system integrators and resellers who can put together solutions that combine several functions that would have been multiple appliances before. With service and support, this is an opportunity for integrators and VARs to deliver more value to their customers.

Appliances, storage systems with the option to load application software, and general purpose storage with separate servers and applications are all possibilities. Not all fit the needs of an IT organization. It’s important to understand the basic needs or problems to be solved, look at the options and then develop a strategy that will deliver the best overall solution.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


July 3, 2013  11:21 AM

Struggling FalconStor shuffles CEOs again

Dave Raffo

FalconStor has been looking for a buyer for seven months. Now it may be looking for a CEO, too.

Jim McNiel suddenly resigned as CEO and president Monday and gave up his seat on the board. FalconStor named Gary Quinn interim CEO. The company did not say if it is conducting a search for a permanent CEO.

FalconStor may not need to find a full-time replacement. It could give the job to Quinn eventually or it may not need one at all. The storage software vendor last November revealed it hired investment banker Wells Fargo to pursue strategic options, which could be a sale or funding. Quinn could merely be a caretaker until there is a sale. Then again, McNiel’s resignation could be a sign that there are no serious bidders for FalconStor.

McNiel replaced FalconStor founder ReiJane Huai as CEO in 2010 after Huai’s role in a customer bribery scandal came to light. Huai committed suicide a year later. FalconStor agreed to pay $5.8 million in June 2012 to settle criminal and civil charges that it bribed JP Morgan Chase to buy its software.

The bribery scandal came with FalconStor already reeling following the loss of OEM partners for its virtual tape library (VTL) software. Most of the major storage players used FalconStor software before developing or acquiring their own data deduplication software for backup.

FalconStor lost $3.4 million in the first quarter of 2013 and its $15.3 million in revenue was down 21% from the same quarter last year. With $28.2 million in operating cash, the vendor cannot sustain those kinds of losses for long without getting acquired or finding funding.

But in some ways, FalconStor is operating as if it will be around long-term. It hired Rob Zecha as chief product officer in April, putting him in charge of product management, quality assurance, software development, engineering, and research and development. And last month, FalconStor signed an OEM deal to sell its VTL software on IBM hardware.

Quinn joined FalconStor in April 2012 as vice president of North America sales and marketing and was promoted to executive vice president and COO in April 2013. He previously held executive positions at software vendor CA Technologies and with Suffolk County, NY.


June 28, 2013  8:01 AM

Common cloud storage topics: archiving and gateways

Randy Kerns

Two storage aspects always enter the discussion when talking about the cloud: archiving and the use of gateways to adapt on-premises environments.

Archiving information to a cloud location is interesting for several reasons:

  • The growth of file data precipitates a discussion of what to do with all the files over time.  IT is hesitant to establish and enforce deletion policies and business owners are either unmotivated or unwilling to address the increasing capacity. Moving files that are rarely or never accessed off to a cloud can free up IT resources related to storage systems, data protection processes and administration. It can also allow IT to transfer the charge for storing the inactive data back to the business owners, which may prompt them to establish retention policies.
  • Retained backups are a good fit for the cloud. They are kept to satisfy retention governance or business practices, not to recover individual files that are damaged or deleted. The long-term nature of retained backups, and the low likelihood that they will be needed for immediate use, make them valuable targets for cloud storage.
  • Potential sharing of information within an enterprise with geographic access requirements looks to be an area that IT can address with a software service using cloud providers.

Examining how to move data to the cloud usually leads IT to two primary choices: applications written specifically to use and manage cloud storage, or a gateway device used with existing applications. Both approaches are viable, but a gateway device is the least disruptive to existing environments. The gateway presents itself as a traditional NAS system with NFS and CIFS/SMB access and manages moving data to and from a cloud service provider as required.

Gateways can be storage systems or appliances that include the hardware and pre-loaded, pre-configured software or a software application that IT must install, configure, and support. Some software gateway applications may be delivered as virtual machines, which reduces the installation effort.  No matter which delivery method is used, the gateway must have a policy engine for IT administrators to use to set the rules regarding movement of data and management of the access to the cloud service provider along with billing and chargeback capabilities.
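To make the policy idea concrete, here is a minimal sketch of the kind of rule such an engine applies: files not accessed for a set number of days are copied from the NAS share to a cloud bucket, and the moved capacity is totaled for chargeback. The paths, bucket name and threshold are hypothetical, not any vendor’s actual policy format, and real gateways do this transparently.

    # Hypothetical tiering rule: copy files not accessed in 180 days to cloud
    # object storage. Share path, bucket and threshold are placeholders.
    import os
    import time
    import boto3

    AGE_LIMIT_DAYS = 180
    SHARE_PATH = "/exports/projects"        # NAS share presented by the gateway
    BUCKET = "cold-archive"                 # cloud bucket behind the gateway

    s3 = boto3.client("s3")
    cutoff = time.time() - AGE_LIMIT_DAYS * 86400
    moved_bytes = 0

    for root, _, files in os.walk(SHARE_PATH):
        for name in files:
            path = os.path.join(root, name)
            st = os.stat(path)
            if st.st_atime < cutoff:        # not read recently: candidate for cloud tier
                key = os.path.relpath(path, SHARE_PATH)
                s3.upload_file(path, BUCKET, key)
                moved_bytes += st.st_size

    print("Tiered %.1f GB to %s for chargeback reporting" % (moved_bytes / 1e9, BUCKET))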

There are several factors that can differentiate gateways:

  • The performance of moving and retrieving data must match the workload’s requirements. If the gateway is used for more active data, performance can be a big concern. More typical usage for inactive data reduces the performance concerns, except for initial deployments where there may be a large amount of data to move to the cloud location at the start.
  • Cost of storage and retrieval grows over time with the usage and the amount of data offloaded to the cloud.
  • Migration of data from one cloud provider to another is another concern as the landscape changes with cloud providers. Changes in providers’ pricing or poor service may necessitate a change. The ability to seamlessly transfer data between cloud providers lessens the IT impact and addresses a major concern.

Archiving data into the cloud, whether with application software developed for that purpose or with gateways (sometimes called on-ramps) for existing applications, is usually done as a funded project. Either the project is a new application deployment or a business operation being re-engineered, or it is a funded initiative inside IT to improve operations. Ultimately, the funding of the program dictates the action, because it requires strategy, planning, and project management in addition to the operational process changes.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


June 20, 2013  8:51 AM

EMC eyes ScaleIO — a good fit for ViPR?

Dave Raffo

Word is coming out of Israel that EMC is deep in talks to acquire software-based storage startup ScaleIO for a price tag in the range of $200 million to $300 million. ScaleIO came out of stealth last December with software that it positions as a virtual storage appliance for enterprises and cloud service providers.

ScaleIO would fit the software-defined storage strategy that EMC laid out for its ViPR software last month at EMC World. EMC’s vision for ViPR combines the ability to pool storage across any hardware with cloud management capabilities. ScaleIO’s Elastic Converged Storage (ECS) can help on both fronts.

ECS agents install on servers running hypervisors, databases and other applications. The software aggregates capacity on those servers, turning them into a large storage network. The concept is similar to the virtual storage appliance (VSA) approach taken by Hewlett-Packard’s StoreVirtual VSA, built on LeftHand technology and VMware’s vSphere Storage Appliance, but those products are for small- and medium-sized businesses and small enterprises.

ECS would be a fit for public cloud storage, which tends to be more server-based than SAN- and NAS-focused enterprise storage. ScaleIO founder and CEO Boaz Palgi claimed at launch that he had one customer using ECS on 260 nodes. He said customers can add nodes on the fly and the software automatically rebalances performance.

It’s too soon to say how much of ScaleIO technology would end up in ViPR, but it’s unlikely that EMC would make a software buy now without considering it part of its software-defined strategy.

“This possible move leaves us to think more deeply about EMC’s willingness to aggressively step in front of potential meaningful disruptive secular changes,” Stifel Nicolaus Equity Research financial analyst Aaron Rakers wrote in a report to clients today. Rakers added that he sees the possible acquisition as “another move to position itself against the evolution toward open source solutions, public cloud providers and other software vendors pushing into software-defined storage.”

ScaleIO, which is based in Israel but has a Palo Alto, Calif. office, raised $12 million in funding from Greylock Partners, Norwest Venture Partners (NVP) and private investors.

