The eighth annual Flash Memory Summit in Santa Clara, Calif., was once again an interesting mix of solid-state builders and buyers. While much of the exhibit hall featured exhibitors showing off components and testing gear in hopes of snagging OEM deals, there were still plenty of vendors with end-user—or close to end-user—products to browse.
There weren’t a lot of enterprise product announcements that we hadn’t already covered on SearchSolidStateStorage.com, but the real “star” of the show was a new flash architecture that should be showing up in products soon. The technology is called 3D NAND flash. It was described in a keynote presentation given by Samsung’s Dr. E.S. Jung, executive vice president and general manager of the company’s semiconductor R&D center. The concept of 3D flash is analogous to the perpendicular recording technique that’s been used to dramatically increase the capacity of hard disk drives.
The goal of the 3D architecture is to solve some of the problems that currently dog NAND flash as flash lithography gets denser. As the cells that store data on flash chips get closer and closer, the likelihood of interference among cells becomes greater, which can affect the reliability and performance of the flash. With 3D, cells still reside next to each other, but expansion is achieved by stacking cells. The stacked cells can keep their distance from each other to lessen the likelihood of interference while still increasing the chip’s capacity.
Jung said that Samsung has been able to create architectures with 24 layers of cells to create 128 Gb chips; he foresees using this technique to ultimately build 1 Tb chips. He said that 3D technology will yield 10X endurance, use about half the power of traditional side-by-side cell architectures, run about 20% faster and be 50% smaller.
There are a number of other companies working on 3D flash designs, including Toshiba. Joel Hagberg, vice president of marketing for Toshiba’s storage products business unit, said there are a number of different approaches to 3D flash that Toshiba is currently considering. In another keynote, Gil Lee of Applied Materials described some of the challenges of turning 3D flash into a product by making the transition into manufacturing. Lee noted that the manufacturing process wouldn’t require “cutting edge lithography” but would need other modifications to current processes.
Several flash chip vendors indicated that we could expect to see wider production of products based on 3D NAND flash technology in 2014.
SPA officially launched at the eighth Flash Memory Summit in Santa Clara, Calif. No, SPA isn’t an upscale new Silicon Valley health resort where the nouveau Web riche renew their inner entrepreneurs with mud baths and herbal teas in between Pilates sessions.
SPA stands for Storage Products Association, a seemingly general name that could serve as a catchall for hundreds of storage and storage-related vendors. But this SPA is a much more exclusive club with just four members—and that gang of four accounts for nearly 100% of the market the new organization represents.
Seagate, WD, Toshiba and HGST are the founding, and likely only, members of this club, which has come together to promote hard disk technology. The purpose of the SPA, according to the association’s published materials, is to “promote the use and understanding of rotating magnetic media hard drive (RHD) technologies…” including “solid-state hybrid drive (SSHD) and hard disk drive (HDD) technologies.”
The hybrid drives seem to be the hook for this announcement to come during a conference that specifically does not focus on hard drive technologies. David Burke, product marketing director for solid state hybrid technology at Seagate, explained that the SPA is not a standards-setting group; it will focus on education, awareness and advocacy.
“Both of the technologies will play large roles for years to come,” said Burke, referring to the marriage of flash and hard drives that the new association is promoting. Burke said that there isn’t a clear understanding of the benefits of fitting out a hard drive with a relatively small amount of flash storage that would be used as a cache to greatly improve performance. He said the SPA hopes to deliver a “more balanced message in the industry.”
The formation of the organization seems to be an early or preemptive move to ensure that hard disk technology doesn’t get overlooked or misunderstood as solid-state storage implementations threaten to monopolize the storage media conversation. The SPA’s initial focus on hybrid technology is at once an admission of the incursion of solid state and a declaration of HDD relevance. Products based on hybrid technology have been around for several years, but they haven’t had a profound impact on either the HDD or flash markets, according to Burke. While hybrids have been targeted at the mobile market, SPA members hope that newly released enterprise products will make their way into servers, appliances and arrays and help tip the balance.
Toshiba’s Joel Hagberg, vice president of marketing for the company’s storage products business unit, said hybrid’s relatively slow adoption can be attributed to a couple of factors. First, there was only a single source for the technology for a while, so laptop and mobile makers were reluctant to incorporate it. But even after multiple sources emerged, Intel’s stringent Ultrabook compliance requirements called for at least 20 GB of solid-state storage, while most hybrids were built with 4 GB of flash. Both sides have since come around a bit, with Intel relaxing its standard and the disk makers upping the amount of flash in the drives to 8 GB and 16 GB, with 32 GB models likely to appear soon.
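The principle behind a hybrid drive is a small flash tier acting as a cache in front of a much larger disk tier, so hot blocks are served at flash speed. As a rough conceptual sketch only (not any vendor’s actual firmware; the block counts and names here are invented), an LRU-style read cache looks like this in Python:

```python
from collections import OrderedDict

class HybridReadCache:
    """Toy model of a hybrid drive's flash read cache (illustrative only)."""
    def __init__(self, flash_blocks):
        self.capacity = flash_blocks   # size of the flash tier, in blocks
        self.flash = OrderedDict()     # LBA -> data, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, lba, disk):
        if lba in self.flash:              # fast path: served from flash
            self.flash.move_to_end(lba)
            self.hits += 1
            return self.flash[lba]
        self.misses += 1                   # slow path: go to spinning disk
        data = disk[lba]
        self.flash[lba] = data             # promote the hot block into flash
        if len(self.flash) > self.capacity:
            self.flash.popitem(last=False)  # evict least recently used block
        return data

# Usage: a skewed workload mostly lands in the small flash tier.
disk = {lba: f"block-{lba}" for lba in range(1000)}
cache = HybridReadCache(flash_blocks=10)
for _ in range(100):
    for lba in (1, 2, 3):                  # hot working set
        cache.read(lba, disk)
print(cache.hits, cache.misses)            # prints: 297 3
```

Even this toy model shows why a few gigabytes of flash can lift perceived performance: once the working set is promoted, nearly every read avoids the disk.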
It may seem odd that a group of disk manufacturers feels the need to bury differences and band together when they still enjoy a huge advantage over solid-state by any measure: capacity, units and revenue. All the members are also very actively involved with solid-state technologies. But they may have seen how tape storage, without special attention, was left to drift off into oblivion. Now, as tape vendors try to prove their value, especially with groundbreaking capabilities like LTFS, they must first climb back into user consciousness and rebuild tattered ties before they can renew interest in tape technologies.
Tegile Systems pulled in $35 million in funding this week to help beef up sales and marketing for its Zebi storage arrays. Fittingly for a hybrid storage startup, the round included strategic investments from flash provider SanDisk and hard drive vendor Western Digital.
Tegile already uses hard drives and solid-state drives (SSDs) from Western Digital’s HGST subsidiary, and those SSDs include SanDisk flash.
Rob Commins, Tegile VP of marketing, describes Zebi as “an all-flash array with a hybrid twist. It looks like all-flash in the SAN but the customer can put spinning disk behind it to reduce dollars per gigabyte cost. Most customers do that, as opposed to going to Violin or Pure for all-flash array and then calling up somebody like NetApp for a bit bucket beside it. Customers can performance optimize with flash and capacity in the same array.”
Customers can buy a Zebi unified storage array with all flash, but Commins said most use a blend of flash and spinning disk. He said the startup has more than 300 customers and more revenue than it has in outside funding. Tegile’s total of $47.5 million in three funding rounds is peanuts compared to some other flash startups.
“Until this round, we’ve been running on $12.5 million,” Commins said. “We take a different approach than competitors like Nimble and Pure – they get cash, bring in a bunch of people, build as fast as you can, then round out the edges as you scale. Ours is more of a measured approach. With this round we’ll mostly build up the customer facing side – sales, marketing and customer support. We’ll increase development, but not at the same scale.”
Meritech Capital led the round. Meritech focuses on late-stage startups, with previous investments in Facebook, Fusion-io, Riverbed, Greenplum, Netezza and Salesforce among others who have either gone public or been acquired. Previous Tegile investor August Capital also joined the new round.
Commins said Tegile has been doubling revenue every quarter, and “can see profitability in our sights.” But he said Meritech’s involvement doesn’t mean the company will try for an initial public offering (IPO).
“We’ll go where the market leads us,” he said.
The disruption of Microsoft’s cloud services this week is likely to shake some customers’ confidence in the cloud, even among those untouched by the latest outage.
The Microsoft SkyDrive cloud storage service was one of several applications that were down for some users this week, along with Outlook and People. The apps went down Wednesday and some were still down most of Thursday. They are running normally today but that doesn’t mean customer confidence is restored.
“I worry whenever I hear of any type of outage,” said Cynthia Weaver, assistant vice president of IT at the Detroit-based Walbridge construction firm and a Microsoft Azure cloud storage customer. “Any outage is a big deal. Any is too many. You never know when you are in the middle of an important business process.”
Weaver said her company did not experience problems from the outage because it has not installed the version of Office 365 that includes SkyDrive. Walbridge has at least 8 TB of data in the cloud managed via the Microsoft StorSimple 7020 gateway, a device it installed in 2011. Walbridge uses StorSimple for primary storage for user files, such as Word documents and engineering design files.
This isn’t Microsoft’s first cloud outage. In February, the Azure storage cloud went down when Microsoft let the Secure Sockets Layer (SSL) certificate that secures customer data traffic for each of the main storage types expire.
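An expired certificate is exactly the kind of failure routine monitoring can catch before customers do. As an illustrative sketch (not how Microsoft operates Azure; the hostname and threshold are placeholders), a script can pull a server’s certificate and warn when the notAfter date is approaching, using the date format Python’s ssl module returns:

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> float:
    """Parse a cert's notAfter string, e.g. 'Aug 29 12:00:00 2025 GMT',
    and return the number of days until it expires (negative if expired)."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

def check_host(hostname: str, warn_days: int = 30) -> None:
    """Fetch a host's TLS certificate and warn if expiry is near."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    days = days_until_expiry(cert["notAfter"])
    if days < warn_days:
        print(f"WARNING: {hostname} certificate expires in {days:.0f} days")

# Example: check_host("storage.example.com")  # hypothetical hostname
```

A cron job running something like this against every public endpoint turns a service-wide outage into a routine renewal ticket.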
“That was a rookie mistake,” Weaver said of Microsoft’s February slip-up. “(But) lately they have really stabilized their service.”
Other cloud providers have also had service disruptions, and Weaver said application downtimes do happen more often than they are reported in the media.
“I wish they were [reported more]. It would keep them accountable,” she said.
When storage revenue slipped in the early part of 2013, vendors predicted sales would pick up in the second half.
We saw some evidence of that this week when Brocade and NetApp reported earnings for last quarter. However, there are still no signs of a full-blown recovery.
Brocade and NetApp finished their last quarters at the end of July, which brings them one month into the second half of the year. Both improved over the previous quarter, but didn’t exactly hit home runs.
Brocade, which depends on storage array sales for its Fibre Channel switches, reported $314 million in SAN revenue last quarter, down two percent from last year. That’s not great, but Brocade had expected SAN sales to fall by eight percent to 11 percent in the quarter, following a seven percent year-over-year drop in the previous quarter. And the storage results were better than the eight percent drop in its network switching revenue.
NetApp’s $1.52 billion in total revenue was a bit below expectations, mainly because its $931 million product revenue missed the mark by more than $20 million. But NetApp executives said their five percent year-over-year growth in total revenue and four percent rise in product revenue ran ahead of the overall storage market this year. It certainly outpaced NetApp’s one-percent growth in its previous quarter.
Brocade CEO Lloyd Carney said storage sales are clearly on the upswing.
“We tend to be a proxy for the overall storage market, and there are definitely indicators that there is a rebound in sight,” Carney said during Brocade’s earnings call. “The storage market is still softer than it was a year ago, but we are receiving positive indications from our partners that end user demand is improving.”
Brocade’s forecast of an increase in SAN revenue of one percent to four percent from last quarter is a sign of that soft recovery.
Brocade CFO Daniel Fairfax added that while storage sales are not back to normal, the “SAN is dead” predictions that followed Brocade’s last earnings call were premature.
“We’re a little bit cautious because we haven’t seen the green light that things have returned to normal in storage,” he said. “But we’re optimistic.”
NetApp CEO Tom Georgens said on his earnings call that he was pleased with his company’s sales last quarter, but wouldn’t attribute that to an industry-wide rebound. He said NetApp sales were driven by its recently released clustered Ontap operating system and new flash products.
Georgens said NetApp has sold more than 300 EF540 all-flash arrays that launched this year, sales of its Flash Pool caching technology increased nearly 50% year-over-year, and more than 60% of NetApp FAS arrays ship with either Flash Cache or Flash Pool.
He said NetApp’s results came “despite the macro uncertainties and constrained IT spending environment.” When asked if the storage market was picking up, Georgens pointed out that other storage companies have not made big strides yet.
“This was a pretty strong market share quarter for us,” he said. “I want to be careful about the industry at large … I wouldn’t go as far as to say that we’ve got an economic tailwind driving us. I think things probably feel a little bit better than they were a year ago, but I wouldn’t use the word dramatic turnaround or anything like that.”
NetApp counts on the federal government for a big piece of its sales, and Georgens said he remains concerned about the sequester fallout.
“I think there’s a little bit of concern about how it’s actually going to play out there,” he said.
New FalconStor CEO Gary Quinn said the vendor has cut its ties with investment banker Wells Fargo, which the software vendor hired in December to pursue an acquisition of the company or other funding options.
Quinn, who replaced Jim McNiel as FalconStor CEO in June, said the company has found several ways to raise money outside of an acquisition. He also said the goal is to return to profitability by the end of the year.
FalconStor reached an agreement with Hale Capital Partners to make an investment between $7.5 million and $15 million, signed a joint development agreement with flash array vendor Violin Memory that can bring FalconStor $12 million over the next 18 months, and agreed to sell its stake in a joint venture to sell the Blue Whale file system for around $3 million.
FalconStor is no longer retaining Wells Fargo as its financial advisor. Quinn said FalconStor would continue to listen to acquisition offers “but at this time, we’d prefer to digest these alternatives and focus our company around moving ahead.”
It’s been a long time since FalconStor has moved ahead. It just completed its 14th straight money-losing quarter. Quinn became interim CEO when McNiel left suddenly in late June. FalconStor removed the interim tag from Quinn’s title in July. McNiel’s predecessor, ReiJane Huai, resigned as CEO in 2010 after his role in a customer bribery scandal became known. Huai committed suicide in 2011.
According to FalconStor’s earnings report released Thursday, the vendor’s revenue of $14 million last quarter dropped 15% from the previous year. It lost $4.6 million and finished the quarter with $21.9 million in cash and assets.
“We have some work to perform to get the company to a consistent, profitable, growing and financially healthy entity,” Quinn said.
He added that FalconStor set a goal of “returning the company to profitability and positive cash flow by the end of 2013” by reducing spending and increasing revenue.
Quinn did not talk much about FalconStor’s product and technology plans on the earnings call, and would not say if the company received any acquisition offers.
Nirvanix has switched CEOs twice since last December, when Scott Genereaux left to become a senior vice president at Oracle. Dru Borden replaced Genereaux, then Debra Chrapaty took over for Borden in May. Although Nirvanix has raised $70 million in funding, industry sources say it is reducing spending as it struggles to compete with larger rivals that can afford to offer lower cloud pricing.
“The problem is they are still a small player,” said Henry Baltazar, a senior analyst for infrastructure and operations professionals at Forrester Research Inc. “They were early [in the market] but they don’t have the resources that Google, Microsoft and Amazon have. Those companies have more resources and bandwidth. It’s a very difficult market to sell services.”
Baltazar said startups in the cloud services market generally have a difficult time competing against behemoths like Amazon. Nirvanix did sign a five-year OEM agreement with IBM in 2011, forming a partnership to be part of IBM’s SmartCloud Enterprise storage services portfolio. However, IBM spent $2 billion to acquire SoftLayer Technologies in June. SoftLayer offers cloud storage among other cloud services, so it competes with Nirvanix.
One venture capitalist who is familiar with Nirvanix said it cannot afford to play the how-low-can-you-go pricing game that Amazon, Google and Microsoft participate in. Those companies have other large revenue streams and are trying to gain a stronghold in the cloud to expand. The cloud is Nirvanix’s only business, and it can’t give its services away.
“The problem is with the business model and not the people at Nirvanix,” said the VC, who asked to not be identified. “How will they compete with Amazon, which is practically giving away its cloud storage for free? And Microsoft and Google are doing the same.”
San Diego-based Nirvanix was spun off from early Internet storage service provider Streamload in 2007. Nirvanix offers public, hybrid and private cloud storage services with usage-based pricing, accessible via HTTP through the Nirvanix Web Services API (based on REST and SOAP protocols) or the Nirvanix Cloud NAS gateway. It currently has 50 employees and manages the data centers used to host customers’ data.
This has been a tough year for server-side flash pioneer Fusion-io, and it probably has yet to hit bottom.
Fusion-io Wednesday reported disappointing sales for last quarter, and expects things to get worse this quarter. This comes after it revealed in January a slowdown of buying from its largest customers Facebook and Apple, and then CEO David Flynn suddenly resigned and was replaced by board member Shane Robison in May.
On Wednesday, Fusion-io reported $106 million in revenue for last quarter, which was about the same as a year ago and about $4 million below Wall Street expectations. Fusion-io lost $23.8 million in the quarter. The guidance for this quarter was even worse: Fusion-io now expects $80 million to $90 million in revenue compared to $118.1 million in the same quarter last year and a consensus expectation of $123 million from financial analysts.
Several factors are hampering Fusion-io. The main problem is that Apple and Facebook, which made up most of its revenue in its early days, have not picked up spending on Fusion-io products as fast as expected. Robison said Facebook bought more than expected last quarter and made up 36% of total revenue, but will fall off again this quarter. Apple accounted for less than 10% of revenue last quarter, and Fusion-io has no clear expectation for when or if it will pick up. When Fusion-io first reported the slowdown in spending by Facebook and Apple in January, it claimed they would resume spending in the second half of this year.
Fusion-io is trying to get other large customers on board to make up for the Facebook and Apple slowdowns. Robison mentioned LinkedIn, Pandora, Spotify, Alibaba, Alipay, Salesforce, China Mobile and U.K. National Health as newer customers the vendor is focusing on.
“Because of the way people build out their big data centers, this is by definition a lumpy business,” Robison said on the company’s earnings call. “The best way for us to sort of dampen the lumpiness is to have a dozen of these [large] customers instead of just two or three.”
Fusion-io also is selling arrays it acquired from NexGen at a slower rate than expected. When Fusion-io picked up NexGen for $119 million in April, Flynn claimed it would have meaningful sales this quarter. But Robison said the NexGen hybrid arrays, now called ioControl, are not expected to account for much revenue this quarter.
Other problems include conflict with its server OEM partners around pricing and the timing of its ioScale high-density PCIe card release in January. Robison said Fusion-io announced that product before its OEM partners were ready for it, and it is being qualified now.
There is also much more competition in the PCIe flash market than when Fusion-io began. EMC is pushing its XtremSF caching software with cards from Virident and Micron, and its marketing revolves mostly around how those cards are better than Fusion-io’s. Another rival, LSI, claims an unidentified leading social networking company is using its PCIe flash cards.
Robison downplayed the competition, saying the market was large enough for several vendors and not all PCIe flash companies are direct competitors in Fusion-io’s markets.
“We’re the leaders,” he said. “And if we can just get a few of our execution things lined up well here, well, I think we’ll be in good shape.”
Fusion-io shareholders aren’t convinced of that. The stock price dropped more than 20% in pre-market trading today. The more important issue is how its customers and partners will react.
Caringo Inc. recently launched its CloudScaler 2.0, adding Amazon S3 support and transparent disaster recovery options for cloud service providers. The company now joins a growing list of companies, including Cleversafe Inc. and Amplidata, that support Amazon APIs.
“This allows S3 API calls to be written through the APIs of our object storage. S3 has done a very good job of building an ecosystem of supported applications,” said Adrian Herrera, Caringo’s senior director of marketing. “We definitely see a demand for (Amazon S3 APIs). We see it in the enterprise and also see demand from the service provider side.”
The updated version also offers customized and transparent disaster recovery capabilities, which include more control over content location for disaster recovery and access, and automated local and geographic distribution of objects to multiple locations. This is particularly important for companies that have compliance and regulation concerns.
This most recent announcement follows Caringo’s announcement that it has integrated CAStor object storage software with CTERA Cloud Attached Storage gateways and the CTERA Portal cloud storage solution.
The solution is now available via a perpetual license or on-demand pricing model.
CloudScaler 2.0 offers enterprise features for cloud service providers that include CAStor’s WORM functionality, integrity seals and Elastic Content Protection. It offers replication and erasure coding simultaneously for any storage SLA. The 2.0 version allows most existing applications that support Amazon S3 to work seamlessly once reconfigured to send requests to CloudScaler. The cloud storage infrastructure is expanded with no service downtime and automated storage balancing.
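In practice, S3 compatibility means an application keeps its existing S3-style requests and simply points them at a different endpoint. A minimal sketch of that idea (the endpoint, bucket and key names are hypothetical, not drawn from Caringo’s documentation):

```python
from urllib.parse import quote

def s3_object_url(endpoint: str, bucket: str, key: str) -> str:
    """Build a path-style S3 object URL. Only the endpoint changes when an
    application moves between Amazon S3 and an S3-compatible store."""
    return f"https://{endpoint}/{bucket}/{quote(key)}"

# The same application code can target either backend:
amazon = s3_object_url("s3.amazonaws.com", "my-bucket", "backups/db.tgz")
private = s3_object_url("cloudscaler.example.com", "my-bucket", "backups/db.tgz")
print(amazon)   # https://s3.amazonaws.com/my-bucket/backups/db.tgz
print(private)  # https://cloudscaler.example.com/my-bucket/backups/db.tgz
```

Because only the hostname differs, an S3-aware application can usually be redirected to a compatible store through configuration alone, which is what makes the S3 API a de facto standard for object storage.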
CloudScaler 2.0 also is interoperable with the Citrix Cloud Portal Business Manager, which allows administrators to build clouds, and manage and deliver any cloud or IT service through a single service platform. They can provision storage running in the cloud and aggregate cloud infrastructure.
“We plug in as a connector to give administrators a dashboard,” said Herrera. “We plug the infrastructure into those management features.”
I had a long discussion with an IT director for a large company this past week, and we talked about his views on open source software. Basically, he said he considered open source for information management software a much greater expense than purchasing or site licensing.
That view was based on several factors:
• Information management software has a long usage period. The expectation is the software will be in use for at least a decade as part of the company’s operational procedures.
• Open source software requires integration and customization by the IT staff. Given recent staff reductions, his team did not really have that time to spare.
• First-level and sometimes second-level support would have to be performed by his staff to get the required responsiveness. He said the cost of paying for support from a third party negated the benefits of open-source software, and was difficult to defend when budgets were being scrutinized.
• The IT organization did not want to budget staff time to include open source support. Past experiences had shown this to require much more input than had been forecast.
The bottom line was the economics did not really show benefit after taking all the operational expenses into account. This realization led to a policy of using information management software from established vendors with support and a history of longevity for their products.
He said it was a problem when staff brought open source software into the company, and it created issues in the long term. Besides the economic impact, it caused a dependency on a single individual for keeping that application working. This led to a policy under which an individual who brought in unapproved software would be terminated immediately.
I told him the concept of open source and not having to pay a usage fee sounded so appealing – especially with so many bright people contributing to it and creating highly valuable software. He agreed with that but said it was still an economics issue and it was also much safer (meaning defensible to executives) to not use it.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).