Storage Soup


August 8, 2013  5:10 PM

Nirvanix facing tough battle against Amazon S3

Sonia Lelii

Nirvanix, first out of the gate as a pure-play cloud storage provider, is facing a tough battle against Amazon S3 and other large cloud providers, and there are indications the company is struggling.

Nirvanix has switched CEOs twice since last December, when Scott Genereaux left to become a senior vice president at Oracle. Dru Borden replaced Genereaux, then Debra Chrapaty took over for Borden in May. Although Nirvanix has raised $70 million in funding, industry sources say it is reducing spending as it struggles to compete with larger rivals that can afford to offer lower cloud pricing.

“The problem is they are still a small player,” said Henry Baltazar, a senior analyst for infrastructure and operations professionals at Forrester Research Inc. “They were early [in the market] but they don’t have the resources that Google, Microsoft and Amazon have. Those companies have more resources and bandwidth. It’s a very difficult market to sell services.”

Baltazar said startups in the cloud services market generally have a difficult time competing against behemoths like Amazon. Nirvanix did sign a five-year OEM agreement with IBM in 2011, forming a partnership to be part of IBM’s SmartCloud Enterprise storage services portfolio. However, IBM spent $2 billion to acquire SoftLayer Technologies in June. SoftLayer offers cloud storage among other cloud services, so it competes with Nirvanix.

One venture capitalist who is familiar with Nirvanix said it cannot afford to play the how-low-can-you-go pricing game that Amazon, Google and Microsoft participate in. Those companies have other large revenue streams and are trying to gain a stronghold in the cloud to expand. The cloud is Nirvanix’s only business, and it can’t give its services away.

“The problem is with the business model and not the people at Nirvanix,” said the VC, who asked to not be identified. “How will they compete with Amazon, which is practically giving away its cloud storage for free? And Microsoft and Google are doing the same.”

San Diego-based Nirvanix was spun off from early Internet storage service provider Streamload in 2007. Nirvanix offers public, hybrid and private cloud storage services with usage-based pricing, accessible via HTTP through the Nirvanix Web Services API (based on REST and SOAP protocols) or through the Nirvanix Cloud NAS gateway. It currently has 50 employees and manages the data centers used to host customers’ data.
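For readers unfamiliar with the model, a REST-style object storage API like Nirvanix’s is consumed over plain HTTP. The sketch below shows the general pattern in Python; the endpoint, paths and field names are hypothetical placeholders, not the actual Nirvanix API.

```python
import requests

# Hypothetical endpoint and parameter names -- the real Nirvanix Web
# Services API defines its own session, path and response conventions.
BASE = "https://storage.example.com/ws"

# REST-style storage APIs typically authenticate once for a session
# token, then pass that token on every subsequent call.
resp = requests.get(BASE + "/Authentication/Login",
                    params={"appKey": "APP_KEY",
                            "username": "user",
                            "password": "secret"})
token = resp.json()["SessionToken"]  # field name is illustrative

# Uploads and downloads are then ordinary HTTP requests against paths.
with open("report.pdf", "rb") as f:
    requests.post(BASE + "/Upload",
                  params={"sessionToken": token,
                          "destFolderPath": "/backups"},
                  files={"file": f})
```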

August 8, 2013  8:26 AM

Fusion-io fumbles again

Dave Raffo

This has been a tough year for server-side flash pioneer Fusion-io, and it probably has yet to hit bottom.

Fusion-io on Wednesday reported disappointing sales for last quarter, and expects things to get worse this quarter. That follows its January disclosure of a slowdown in buying from its largest customers, Facebook and Apple, and the sudden resignation in May of CEO David Flynn, who was replaced by board member Shane Robison.

On Wednesday, Fusion-io reported $106 million in revenue for last quarter, which was about the same as a year ago and about $4 million below Wall Street expectations. Fusion-io lost $23.8 million in the quarter. The guidance for this quarter was even worse: Fusion-io now expects $80 million to $90 million in revenue compared to $118.1 million in the same quarter last year and a consensus expectation of $123 million from financial analysts.

Several factors are hampering Fusion-io. The main problem is that Apple and Facebook, which made up most of its revenue in its early days, have not picked up spending on Fusion-io products as quickly as expected. Robison said Facebook bought more than expected last quarter and made up 36% of total revenue, but will fall off again this quarter. Apple accounted for less than 10% of revenue last quarter, and Fusion-io has no clear expectation of when or if it will pick up. When Fusion-io first reported the slowdown in spending by Facebook and Apple in January, it claimed they would resume spending in the second half of this year.

Fusion-io is trying to get other large customers on board to make up for the Facebook and Apple slowdowns. Robison mentioned LinkedIn, Pandora, Spotify, Alibaba, Alipay, Salesforce, China Mobile and the U.K. National Health Service as newer customers the vendor is focusing on.

“Because of the way people build out their big data centers, this is by definition a lumpy business,” Robison said on the company’s earnings call. “The best way for us to sort of dampen the lumpiness is to have a dozen of these [large] customers instead of just two or three.”

Fusion-io also is selling arrays it acquired from NexGen at a slower rate than expected. When Fusion-io picked up NexGen for $119 million in April, Flynn claimed it would have meaningful sales this quarter. But Robison said the NexGen hybrid arrays, now called ioControl, are not expected to account for much revenue this quarter.

Other problems include conflict with its server OEM partners around pricing and the timing of its ioScale high-density PCIe card release in January. Robison said Fusion-io announced that product before its OEM partners were ready for it, and it is being qualified now.

There is also much more competition in the PCIe flash market than when Fusion-io began. EMC is pushing its XtremSF caching software with cards from Virident and Micron, and its marketing revolves mostly around how those cards are better than Fusion-io’s. Another rival, LSI, claims an unidentified leading social networking company is using its PCIe flash cards.

Robison downplayed the competition, saying the market was large enough for several vendors and not all PCIe flash companies are direct competitors in Fusion-io’s markets.

“We’re the leaders,” he said. “And if we can just get a few of our execution things lined up well here, well, I think we’ll be in good shape.”

Fusion-io shareholders aren’t convinced of that. The stock price dropped more than 20% in pre-market trading today. The more important issue is how its customers and partners will react.


August 7, 2013  9:42 PM

Caringo joins the Amazon API bandwagon

Sonia Lelii

Caringo Inc. recently launched CloudScaler 2.0, adding support for the Amazon S3 API and transparent disaster recovery options for cloud service providers. The company joins a growing number of vendors, such as Cleversafe Inc. and Amplidata, that support Amazon APIs.

“This allows S3 API calls to pass through to the APIs of our object storage. S3 has done a very good job of building an ecosystem of supported applications,” said Adrian Herrera, Caringo’s senior director of marketing. “We definitely see a demand for [Amazon S3 APIs]. We see it in the enterprise and also see demand from the service provider side.”

The updated version also offers customized and transparent disaster recovery capabilities, including more control over content location for disaster recovery and access, and automated local and geographic distribution of objects to multiple locations. This is particularly important for companies with compliance and regulatory concerns.

This announcement follows Caringo’s news that it has integrated its CAStor object storage software with CTERA Cloud Attached Storage gateways and the CTERA cloud storage portfolio.

The solution is available via a perpetual license or an on-demand pricing model.

CloudScaler 2.0 offers enterprise features for cloud service providers that include CAStor’s WORM functionality, integrity seals and Elastic Content Protection. It offers replication and erasure coding simultaneously for any storage SLA. The 2.0 version allows most existing applications that support Amazon S3 to work seamlessly once they are reconfigured to send requests to CloudScaler. The cloud storage infrastructure can be expanded with no service downtime and with automated storage balancing.
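In practice, being “reconfigured to send requests to CloudScaler” usually just means pointing an S3 client at a different endpoint. Here is a minimal sketch using the boto3 library; the endpoint URL and credentials are placeholders, not actual Caringo values.

```python
import boto3

# Point a standard S3 client at an S3-compatible service instead of AWS.
# The endpoint URL and credentials below are placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://cloudscaler.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Existing S3 application code then runs unchanged against the new endpoint.
s3.create_bucket(Bucket="archive")
s3.put_object(Bucket="archive", Key="logs/2013-08-07.gz", Body=b"example")
objects = s3.list_objects_v2(Bucket="archive").get("Contents", [])
print([o["Key"] for o in objects])
```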

CloudScaler 2.0 also is interoperable with Citrix Cloud Portal Business Manager, which allows administrators to build clouds and to manage and deliver any cloud or IT service through a single service platform. They can provision storage running in the cloud and aggregate cloud infrastructure.

“We plug in as a connector to give administrators a dashboard,” said Herrera. “We plug the infrastructure into those management features.”


August 7, 2013  7:53 AM

Hidden costs of open source software

Randy Kerns

I had a long discussion with an IT director for a large company this past week, and we talked about his views on open source software. Basically, he said he considered open source for information management software a much greater expense than purchasing or site licensing.

That view was based on several factors:

• Information management software has a long usage period. The expectation is the software will be in use for at least a decade as part of the company’s operational procedures.

• Open source software requires integration and customization by the IT staff. Given recent staff reductions, his team did not really have that time to spare.

• First-level and sometimes second-level support would have to be performed by his staff to get the required responsiveness. He said the cost of paying for support from a third party negated the benefits of open-source software, and was difficult to defend when budgets were being scrutinized.

• The IT organization did not want to budget staff time to include open source support. Past experiences had shown this to require much more input than had been forecast.

The bottom line was that the economics did not show a benefit after taking all the operational expenses into account. This realization led to a policy of using information management software from established vendors with support and a history of longevity for their products.

He said it was a problem when staff brought open source software into the company because it created issues in the long term. Besides the economic impact, it created a dependency on a single individual to keep that application working. That led to a policy under which an individual who brought in unapproved software would be terminated immediately.

I told him the concept of open source and not having to pay a usage fee sounded so appealing – especially with so many bright people contributing to it and creating highly valuable software. He agreed with that but said it was still an economics issue and it was also much safer (meaning defensible to executives) to not use it.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


August 2, 2013  1:43 PM

Quantum still idling with DXi disk

Dave Raffo

The DXi data deduplication disk system still isn’t working out for Quantum as planned.

The DXi platform and StorNext archiving software are the main pillars in Quantum’s strategy to become a systems company instead of just a tape vendor. But the DXi is stuck in neutral. Quantum reported its disk revenue fell 4% last quarter due to poor enterprise sales, marking the second straight poor quarter for DXi. That was a drag on Quantum’s product revenue, which fell 8.3% to $86 million, while overall revenue grew 5% to $148 million.

Quantum CEO Jon Gacek said the DXi problem was a combination of large deals that didn’t close and fewer sales than expected to existing customers. He said the DXi 8500 enterprise system faltered while the DXi 6000 midrange revenue grew 23% over last year.

“Our real issue this quarter and last has been the DXi,” he said on the earnings call. “We did well with new customers and in the midrange, but we didn’t do as well with our installed base and bigger deals. These last two quarters have been more of a struggle.”

Gacek added, “We still think the market is there,” and said he is happy with how the DXi stacks up against EMC Data Domain and other competitors. The problem, he said, is mainly that Quantum lacks the market reach to get into enough deals. He pointed out that EMC is in nearly every deal because it is a much larger company than Quantum.

Besides EMC, Quantum faces a newer competitor in Symantec, which is having success selling its backup software on pre-packaged hardware appliances. When Symantec sells a NetBackup appliance, its customer no longer needs a separate disk target such as the DXi. According to the most recent market figures from IDC, Symantec’s disk appliance revenue grew 149.3% to $102.3 million in the first quarter of this year. Quantum’s $18.3 million revenue grew only 2.5% compared to the overall 16.5% market growth. Quantum was fifth behind EMC, Symantec, IBM and Hewlett-Packard with 3.1% market share, and it doesn’t look like it gained share last quarter.

“Symantec is 50 percent of the backup market, and their product does impact us for sure,” Gacek said in an interview after the earnings call. “But their product does not scale as much or have high performance. Is it good enough? For a bunch of enterprises it is. We have them below us and EMC on the same level and above us. But there’s still a lot of market.”

Take away a one-time $15 million intellectual property deal with Microsoft, and Quantum would have missed its revenue target by $2 million. Still, Gacek said the quarter had its high points. While Quantum’s DXi revenue faltered, the company beat estimates for income and finished $3 million in the black for the quarter on solid tape sales and 13% StorNext growth. He’s still counting on better DXi sales and expects Quantum’s new Lattus object-based storage system to pick up steam soon. But he spent much of the earnings call defending the DXi results.

“Without sounding whiny, everybody’s always unhappy about something,” he said of investors and analysts. “Last year the DXi was killing it and we were getting hammered because of tape. Now DXi is off and tape is doing well, and DXi is what everybody wants to talk about.”


July 30, 2013  3:57 PM

NetApp, Cisco add FlexPod for Hadoop, make name changes

Dave Raffo

NetApp and Cisco expanded their FlexPod lineup today, re-naming their main and SMB reference architectures and adding a version specifically for Hadoop.

The FlexPod lineup now consists of FlexPod Datacenter (formerly FlexPod), FlexPod Express (formerly ExpressPod) for SMBs and FlexPod Select for data-intensive workloads. FlexPod Select for Hadoop is the first Select configuration, validated for Hadoop workloads with Cloudera and Hortonworks. It is also the first FlexPod to include NetApp E Series storage along with its flagship FAS unified storage platform.

FlexPod is a reference architecture built on existing products, so there is no new technology involved in these configurations. Even the new FlexPod Select looks a lot like the NetApp Open Solution for Hadoop that NetApp has sold for about a year now. Both FlexPod Select and Open Solution include NetApp FAS and E Series storage, the file system and the Hadoop application. With Open Solution, customers pick their own servers and networking, while the FlexPod version includes Cisco servers (UCS) and switching (Nexus, Catalyst and/or MDS). FlexPod Select uses FAS 2220 and E5460 storage, while Open Solution for Hadoop uses FAS2200 and E2600.

Other FlexPod updates include the Nexus 7000 switch and NetApp SnapProtect data protection software in the Datacenter version.

Brandon Howe, NetApp VP of product and solutions marketing, said future FlexPod Select architectures will focus on “high performance, bandwidth-hungry apps” such as video and other high-performance computing (HPC) workloads. He said the Select platform might also include all-flash versions.

FlexPod Express for SMBs has the same configuration as before. “Only the name is changing,” said Cisco VP of data center marketing Jim McHugh.

Howe said the name change came about because customers often did not understand ExpressPod was part of the FlexPod family. “People were saying, ‘we really like ExpressPod, it would be great if you could do this with FlexPod, too,’” he said.

Cisco and NetApp claim 2,400 FlexPod customers in the three years since they formalized the reference architectures. They see that as a success, although EMC president David Goulden mocked that number last week during EMC’s earnings call. Goulden said EMC has sold more than 3,600 Vspex reference architectures since April 2012.

“In other words, in less than half the time we have sold more systems than another less flexible reference architecture that has been on the market for several years,” Goulden said.

Cisco is neutral in the EMC-NetApp reference architecture faceoff because it is also an EMC Vspex partner and is a partner in the VCE company set up by EMC, VMware and Cisco to sell Vblock converged stacks.


July 29, 2013  2:20 PM

STec goes Micro while counting the days to HGST pickup

Dave Raffo

Western Digital/HGST’s planned acquisition of solid state drive (SSD) vendor sTec isn’t changing sTec’s product roadmap.

While waiting for the deal to close, sTec today launched what it claims is the first Micro (1.8-inch) SAS SSD, available in 200 GB and 400 GB capacities. STec is also now offering 256-bit AES-XTS encryption on the Micro SAS drive as well as on its 2.5-inch SAS SSDs.
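On the drive itself the AES-XTS cipher runs in hardware, but a small software sketch shows the shape of the mode: each disk sector is encrypted independently under a tweak derived from its sector number, so identical plaintext encrypts differently at different locations. This sketch uses Python’s cryptography package; the key handling and sector layout are illustrative only.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# XTS uses two AES keys supplied as one blob (2 x 256-bit = 64 bytes here).
# A real self-encrypting drive keeps these keys inside its controller.
key = os.urandom(64)

def encrypt_sector(sector_number: int, plaintext: bytes) -> bytes:
    # The 16-byte tweak binds ciphertext to its position on the disk,
    # so sectors holding identical data still encrypt differently.
    tweak = sector_number.to_bytes(16, "little")
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

sector = b"\x00" * 512                        # a zeroed 512-byte sector
assert encrypt_sector(42, sector) != encrypt_sector(43, sector)
```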

Western Digital said last month it has agreed to acquire sTec for its HGST enterprise drive subsidiary for $340 million and expects the deal to close by the end of 2013.

The Micro SAS drive is designed for blade servers and other high-density systems. Mark Rochlin, VP of sTec’s government and defense business, said the dense form factor and encryption make it a good fit for government and military customers.

There are two versions of the Micro SAS drive: encrypted and non-encrypted.

Rochlin said security was especially important in the wake of the Bradley Manning and Edward Snowden incidents. “Security and information for the government is a big deal,” Rochlin said. “Vulnerabilities created by those breaches are every bit as dangerous as a bullet or a bomb. The encryption option makes it safe so if these devices fall into the wrong hands they can’t be exploited.”

Pricing for the Micro SAS SSD starts at $1,800 for 200 GB.

STec said the new drives will be available through its usual channel, although it’s uncertain what will happen to that channel after sTec becomes part of OEM-centric HGST.

“The reason HGST gave for buying sTec was our complementary technology, and HGST doesn’t have a product like this,” said Swapna Yasarupu, sTec’s director of SSD product marketing. “It’s business as usual for us. We’re continuing our channel reseller program.”


July 22, 2013  9:10 AM

Hand-me-down storage strategies evolve with solid state

Randy Kerns

When talking to clients about their IT needs and storage vendors about system development, I notice a change in the expected longevity of storage systems. This change is largely due to the rise of solid state storage systems.

Many of our high-end enterprise clients have had an established practice of using disk-based primary storage systems in that role for three years. After that, they move tier 1 application data to a new storage system and relegate the old system (now an ancient three years old) to secondary storage for two more years.

The five-year lifespan and the plan to update or upgrade storage systems were based on the reliability of the system (meaning how component failure rates change over time), the advantages of improved technology every few years, and the cost of maintenance once the warranty period expires. This has been built into depreciation schedules and business plans.
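To make the economics concrete, here is a minimal sketch of that classic 3+2-year schedule with straight-line depreciation and maintenance fees starting after the warranty expires. All the dollar figures are invented for illustration; nothing here comes from any client or vendor.

```python
# Illustrative numbers only -- a sketch of the classic 3+2 year schedule.
purchase_price = 500_000      # hypothetical array cost
warranty_years = 3            # no maintenance fees while under warranty
annual_maintenance = 75_000   # hypothetical post-warranty contract
lifespan_years = 5            # three years tier 1, two years secondary

for year in range(1, lifespan_years + 1):
    depreciation = purchase_price / lifespan_years   # straight-line
    maintenance = annual_maintenance if year > warranty_years else 0
    role = "tier 1" if year <= 3 else "secondary"
    print(f"year {year} ({role}): depreciation ${depreciation:,.0f}, "
          f"maintenance ${maintenance:,.0f}")
```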

Now I’m seeing a new dynamic around how long people expect their storage systems to provide value. This is an outcome of the availability and maturing of solid-state storage systems and plans to use them with tier 1 applications and virtualization. Most vendors have made great strides in addressing wear-out issues, and the reliability improvements are driving down the service cost estimates for flash. Some vendors are quoting longer warranty periods or reduced maintenance costs.

Our IT clients see the dramatic performance improvements from all-SSD systems. They are purchasing more systems and capacity while still getting the benefits of the initial system they deployed. The depreciation schedules can change, and the main driver for purchasing is not replacement but addition.

All-flash storage systems are changing the thinking around the longevity of storage systems. The planned hand-me-down process may no longer be the norm. Some vendors see the longevity as a competitive opportunity and are aggressively pursuing it with longer warranty periods, lower maintenance costs, or more easily negotiated warranty extensions. Other vendors have finance people who see maintenance revenue as a birthright annuity, and they are not changing. This is a competitive area, and using negative marketing to hold onto that annuity revenue will only work until competitors have made enough inroads with customers to take away the business.

This will be another interesting storage trend to watch develop. It will be an important competitive area, and it will distinguish the vendors that move forward from those whose innovation and opportunity are held back by protecting that financial annuity.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


July 9, 2013  12:56 PM

Appliances add options — and decisions — for storage purchasers

Randy Kerns

We’re seeing many different vendor initiatives for appliances or special purpose applications that run inside storage systems. In contrast to general purpose storage systems, these appliances consist of special purpose applications running on servers. The most common type of appliance is for backup; another is for archiving.

For backup systems, appliances add deduplication and compression to reduce the amount of data stored, and replication for making disaster recovery copies. Some integrated appliances include the media server software preloaded along with the backup application.
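The deduplication at the heart of these appliances boils down to content addressing: split incoming data into chunks, store each unique chunk once (compressed), and keep per-file lists of references. Here is a toy fixed-size-chunk sketch; real appliances use variable-size chunking and far more robust metadata.

```python
import hashlib
import zlib

CHUNK_SIZE = 4096
store = {}   # chunk fingerprint -> compressed chunk bytes

def write(data: bytes) -> list:
    """Store data; return the 'recipe' of chunk fingerprints."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:               # dedupe: skip chunks already held
            store[fp] = zlib.compress(chunk)
        recipe.append(fp)
    return recipe

def read(recipe: list) -> bytes:
    return b"".join(zlib.decompress(store[fp]) for fp in recipe)

backup1 = b"A" * 8192 + b"B" * 4096
backup2 = b"A" * 8192 + b"C" * 4096       # mostly identical to backup1
r1, r2 = write(backup1), write(backup2)
assert read(r1) == backup1 and read(r2) == backup2
print(len(store), "unique chunks stored for two backups")  # 3, not 6
```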

With archive systems, value add features can include data movement software with a policy engine for selective controls and replication, functions for retention management, a search function with indexing, and compliance features such as immutability, audit trails, and security.
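The policy engine in an archive system is essentially rule matching over file metadata. A toy sketch of that idea follows; the fields, thresholds and actions are hypothetical, not any particular vendor’s policy language.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FileMeta:
    path: str
    last_access: datetime
    size_bytes: int
    legal_hold: bool = False    # compliance: immutable while held

# Each policy is (predicate over metadata, action); first match wins.
POLICIES = [
    (lambda f: f.legal_hold, "retain"),   # never move or expire held files
    (lambda f: datetime.now() - f.last_access > timedelta(days=365), "archive"),
    (lambda f: f.size_bytes > 10 * 2**30, "archive"),   # very large files
]

def decide(f: FileMeta) -> str:
    for predicate, action in POLICIES:
        if predicate(f):
            return action
    return "keep"

print(decide(FileMeta("/proj/q1.dat", datetime(2011, 1, 1), 2**20)))  # archive
```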

The value of these appliances comes from the integration enabling all the pieces to work together effectively for quick deployment, usually by IT staff. The appliance should be supported as a single system rather than independent elements. (If that’s not the case, that can be a problem.) The system may even be less expensive than putting the individual elements together.

The trade-offs between using general purpose storage with servers and software versus an integrated appliance are well understood. One issue raised is who has the authority to make the purchase. The decision to improve or deploy new backup technology may rest with the backup team, and selecting an appliance or system whose primary usage is backup may be much easier than using some of the primary storage under someone else’s control. The same holds true for archiving, which in most large IT organizations is a separate initiative with separate funding and not part of the backup area of responsibility.

The line between these trade-offs and roles blurs as vendors begin to offer storage systems with the ability to run selected applications on the storage controller. Now IT is faced with another choice: the purchase of a storage system that can have an application loaded in addition to the storage control function with embedded software. Implemented as a virtual machine in storage, it really is a similar choice to using general purpose storage with the application on a server, only now the server is in the storage system.

This raises issues around flexibility, support, customization, and potential lock-in. The organizational boundaries for purchasing will still hold and will be the determining factor in many environments. But it is another option to consider.

The option may be exploited effectively by system integrators and resellers who can put together solutions that combine several functions that would have been multiple appliances before. With service and support, this is an opportunity for integrators and VARs to deliver more value to their customers.

Appliances, storage systems with the option to load application software, and general purpose storage with separate servers and applications are all possibilities. Not all fit the needs of an IT organization. It’s important to understand the basic needs or problems to be solved, look at the options and then develop a strategy that will deliver the best overall solution.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


July 3, 2013  11:21 AM

Struggling FalconStor shuffles CEOs again

Dave Raffo

FalconStor has been looking for a buyer for seven months. Now it may be looking for a CEO, too.

Jim McNiel suddenly resigned as CEO and president Monday and gave up his seat on the board. FalconStor named Gary Quinn interim CEO. The company did not say if it is conducting a search for a permanent CEO.

FalconStor may not need to find a full-time replacement. It could give the job to Quinn eventually, or it may not need one at all. The storage software vendor revealed last November that it hired investment banker Wells Fargo to pursue strategic options, which could mean a sale or funding. Quinn could merely be a caretaker until there is a sale. Then again, McNiel’s resignation could be a sign that there are no serious bidders for FalconStor.

McNiel replaced FalconStor founder ReiJane Huai as CEO in 2010 after Huai’s role in a customer bribery scandal came to light. Huai committed suicide a year later. FalconStor agreed to pay $5.8 million in June 2012 to settle criminal and civil charges that it bribed JP Morgan Chase to buy its software.

The bribery scandal came with FalconStor already reeling following the loss of OEM partners for its virtual tape library (VTL) software. Most of the major storage players used FalconStor software before developing or acquiring their own data deduplication software for backup.

FalconStor lost $3.4 million in the first quarter of 2013 and its $15.3 million in revenue was down 21% from the same quarter last year. With $28.2 million in operating cash, the vendor cannot sustain those kinds of losses for long without getting acquired or finding funding.

But in some ways, FalconStor is operating as if it will be around long-term. It hired Rob Zecha as chief product officer in April, putting him in charge of product management, quality assurance, software development, engineering, and research and development. And last month, FalconStor signed an OEM deal to sell its VTL software on IBM hardware.

Quinn joined FalconStor in April 2012 as vice president of North America sales and marketing and was promoted to executive vice president and COO in April 2013. He previously held executive positions at software vendor CA Technologies and with Suffolk County, NY.

