Storage Soup

September 22, 2015  3:31 PM

Tegile expands all-flash portfolio through SanDisk partnership

Dave Raffo
Nexenta, SanDisk

SanDisk is putting its investments in private storage companies to good use. Two of the companies it has invested in – Nexenta and Tegile Systems – have signed on as OEM partners for SanDisk’s InfiniFlash all-flash storage platform.

Nexenta is a software vendor that is porting its ZFS-based NexentaStor application onto the InfiniFlash platform, which consists of proprietary NAND cards.

Tegile is expanding its all-flash platform with its IntelliFlash HD product, combining its software and controller with the SanDisk InfiniFlash array. Tegile launched its home-built all-flash arrays in June 2014, and also sells hybrid flash systems combining hard disk drives and solid-state drives.

Tegile VP of marketing Rob Commins said because the IntelliFlash system scales far higher than Tegile’s other all-flash arrays, there won’t be much overlap among customers. Tegile’s all-flash minimum capacities range from 12 TB to 48 TB in an array, while the IntelliFlash system starts at 127 TB and scales to more than 10 PB of usable capacity in a 42U rack.

Commins said the average price of Tegile’s all-flash platform is around $100,000 while the IntelliFlash system will average around $250,000 to $300,000.

“We said that’s a nice logical extension of capacity optimized media,” Commins said of the IntelliFlash platform. “We can pull out our disk drives and use IntelliFlash HD as cheap and deep capacity.

“Our premise is there will always be performance optimized media and capacity optimized media. We’ll eventually go to PCIe and NVDIMM to keep going cheaper and deeper on the capacity layer.”

Tegile’s software stack will enable its IntelliFlash system to support block and file storage. Tegile supports Fibre Channel, iSCSI, NFS and SMB protocols.

Tegile expects IntelliFlash to cost around $1.50 per GB of raw capacity, and as little as 50 cents per usable GB after dedupe and compression when it is released in early 2016.
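
Those two price points imply an effective data reduction ratio of roughly 3:1. A quick sketch of the arithmetic (the reduction ratio and the example configuration below are inferred for illustration, not figures Tegile supplied):

```python
# Arithmetic implied by the quoted raw and usable $/GB figures.
raw_price_per_gb = 1.50        # $ per GB of raw flash
usable_price_per_gb = 0.50     # $ per usable GB after dedupe and compression

# The same dollars buy 3x the usable capacity -> implied ~3:1 data reduction.
implied_reduction = raw_price_per_gb / usable_price_per_gb
print(implied_reduction)       # 3.0

# Hypothetical example: an entry configuration of 127 TB, treated as raw capacity.
raw_tb = 127
print(raw_tb * 1000 * raw_price_per_gb)   # ~$190,500 at $1.50 per raw GB
print(raw_tb * implied_reduction)         # ~381 TB effective after 3:1 reduction
```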

Commins said the IntelliFlash system should be a good fit for big data analytics and oil/gas exploration companies. “It’s a real nice screamer, but at super high capacity,” he said.

September 19, 2015  3:42 PM

New LTO-7 tape specification is now available for licensing

Carol Sliwa

Hard disk drives (HDDs) are up to 8 TB and 10 TB, and flash storage may be all the rage, but tape keeps rolling along.

Hewlett-Packard (HP), IBM and Quantum – the Linear Tape-Open (LTO) Program Technology Provider Companies (TPCs) – announced this week that the seventh generation specifications of the LTO Ultrium format are available for licensing by storage mechanism and media manufacturers.

The new LTO-7 specification lists the maximum compressed capacity at 15 TB per tape cartridge, more than double the 6.25 TB compressed capacity of the prior LTO-6 generation. The specification assumes a compression ratio of 2.5 to 1.

The compressed data transfer rate soars from 400 megabytes per second (MBps) with LTO-6 to 750 MBps with the new LTO-7 technology. That means users potentially could transfer more than 2.7 TB per hour per drive with LTO-7, up from 1.4 TB per hour per drive with LTO-6.
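
The per-hour figures follow directly from the rated transfer rates; here is a quick back-of-the-envelope check (decimal units, and assuming the drive sustains its rated compressed rate for a full hour):

```python
# Back-of-the-envelope figures behind the LTO-7 numbers above (decimal TB).

def tb_per_hour(mbps):
    """TB moved in one hour at a sustained rate of `mbps` megabytes per second."""
    return mbps * 3600 / 1_000_000   # 1 TB = 1,000,000 MB

print(tb_per_hour(750))   # LTO-7: 2.7 TB per hour per drive
print(tb_per_hour(400))   # LTO-6: ~1.44 TB per hour per drive

# Capacity: 15 TB compressed at the assumed 2.5:1 ratio means 6 TB native per
# LTO-7 cartridge; LTO-6's 6.25 TB compressed corresponds to 2.5 TB native.
print(15 / 2.5)     # 6.0 TB native (LTO-7)
print(6.25 / 2.5)   # 2.5 TB native (LTO-6)
```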

Paving the way for the higher capacity and data transfer rates were technology enhancements such as stronger magnetic properties and a doubling of the read/write heads in an advanced servo format, allowing the drive to write more data to the same amount of tape within the cartridge.

The new LTO-7 generation carries forward features of prior generations, including partitioning to enhance file control and space management with the Linear Tape File System (LTFS), hardware-based encryption, and write-once, read-many (WORM) functionality.

An LTO-7 Ultrium drive can read data from LTO-7, LTO-6 and LTO-5 cartridges and write data to an LTO-7 or LTO-6 cartridge.

Vendors that have already announced product support for LTO-7 include Quantum and Spectra Logic. Quantum expects LTO-7 technology to be available in its Scalar i6000 and Scalar i500 libraries in December, with other platforms to follow, and the company currently offers an LTO-7 pre-purchase program for interested customers.

The LTO-7 specification’s 15 TB compressed capacity and 750 MBps data transfer rate are slightly lower than the figures the LTO Program projected last year with the release of its extended roadmap. The September 2014 roadmap indicated the LTO-7 generation would provide a compressed capacity of 16 TB per tape cartridge and a compressed data transfer rate of 788 MBps.

The newly updated LTO Ultrium roadmap lists the following maximum compressed capacities and data transfer rates for future generations:

LTO-8: Up to 32 TB and 1,180 MBps

LTO-9: Up to 62.5 TB and 1,770 MBps

LTO-10: Up to 120 TB and 2,750 MBps

The LTO Program notes that the roadmap “is subject to change without notice and represents goals and objectives only.”

The LTO Program plans to provide further insight into the LTO roadmap and technology at the Storage Decisions conference on November 3-4 in New York, at the SC15 supercomputing conference running November 15-20 in Austin, Texas, and at the Government Video Expo on December 1-3 in Washington, D.C.

September 17, 2015  7:26 AM

Dell’Oro: Hyperscale DAS use drives storage revenue growth

Carol Sliwa

Market research firm Dell’Oro Group’s mid-year snapshot showed that total storage systems revenue is on track to grow 1% in 2015, driven largely by sales to hyperscale service providers of direct-attached storage (DAS) devices for servers.

The Redwood City, California-based company said total storage systems revenue approached $10 billion in the second quarter – a 1% increase compared to the same time frame in 2014. Revenue for internal storage rose 3%, while sales in the larger external storage segment stayed flat in the quarter, as high-end systems continued to experience a year-to-year decline, according to the recently released Dell’Oro report.

EMC maintained the top spot for overall storage revenue through the first half of the year, and Hewlett-Packard (HP) was No. 2. IBM dropped from third place at the end of 2014 to fifth place in the aftermath of the sale of its x86 server line. Dell and NetApp were third and fourth respectively.

Rapidly growing Huawei snuck ahead of Hitachi into fifth place in total storage systems revenue for the second quarter, but Dell’Oro said Huawei often has a strong second quarter after a seasonally weak first quarter.

Dell’Oro’s numbers varied a bit from those released by IDC earlier this month. IDC put total disk storage sales at $8.8 billion for the second quarter for a 2.1 percent increase over the second quarter of 2014. IDC said external storage sales declined 3.9 percent. In vendor market share, IDC had IBM in fourth place ahead of NetApp. IDC agreed with Dell’Oro that hyperscale storage is growing rapidly, putting it at a 26 percent increase over the second quarter of 2014.

Flash continued to factor into a higher percentage of total capacity for both internal and external storage systems. Dell’Oro estimated that flash drives represented 8% to 10% of the total capacity of hybrid arrays, and nearly 75% of midrange and high-end external storage systems included some flash. Dell’Oro expects the percentage to approach 100 within a few years.

Shipments of Fibre Channel (FC) and Ethernet ports for networked external storage systems remained even at about 50% each, and Dell’Oro expects the breakdown to stay the same for at least the next year.

For FC, the big trend was 16 Gbps taking share from 8 Gbps, as 69% of the switch ports and more than 20% of the adapter ports shipped at the higher data transfer rate in the second quarter. But Dell’Oro said total SAN revenue, including FC switches and adapters, dropped 5% from the first quarter to the second, to $550 million (the lowest level since Q2 of 2009), and the 1.9 million port shipments represented a 7% decrease.

Dell’Oro attributed the SAN revenue decline to the resurgence of DAS as well as new storage alternatives, such as scale-out architectures, software-defined storage, hyperconverged infrastructure and cloud storage. Ethernet-based storage has also grown, although it still trails block-based storage in revenue, Dell’Oro said.

With Ethernet storage networking, 40 Gbps made inroads on 10 Gbps, but Dell’Oro expects the 40 Gbps Ethernet pattern to be short-lived as options such as 50 Gbps, 75 Gbps and 100 Gbps emerge in future years.

September 14, 2015  3:04 PM

Survey finds companies’ disaster recovery testing is inadequate

Sonia Lelii

Despite all the talk about disaster recovery testing, most organizations still don’t do it enough. And recovery point objectives (RPOs) are still way too high to facilitate adequate DR, according to a survey conducted by cloud vendor CloudVelox.

CloudVelox, which offers automated disaster recovery in the cloud, interviewed 343 IT executives responsible for DR at organizations in nine vertical markets. The surveyed organizations ranged from fewer than 100 employees to more than 1,000.

The survey found 58 percent of the respondents ran DR tests once a year or less. Another 33 percent tested their DR infrequently or never, while 26 percent tested it quarterly and 16 percent did it monthly.

These results should not be surprising because other recent surveys have had similar results, including one conducted by our parent company TechTarget.

So why aren’t people testing more often? Fifty-six percent of the CloudVelox respondents said their DR testing was infrequent because they didn’t have adequate internal resources. Another 34 percent found the process complex, while 19 percent did not find it to be a priority and 12 percent said it costs too much.

Respondents also said their traditional DR solutions don’t offer adequate RPOs. One-third said their RPO was more than 12 hours, 46 percent said it was between two and 12 hours, and only 21 percent said it was two hours or less.

“The fact that RTO and RPO in this day and age is still in the two-to-12-hour range shows that disaster recovery is broken,” said Vasu Subbiah, CloudVelox’s vice president of products. “And IT does not have the resources. The average IT spend for disaster recovery is between five to seven percent. If they test less frequently, then mistakes are compounded when they try to recover in the future.”

CloudVelox, formerly called CloudVelocity, offers cloud-based disaster recovery, cloud data migration, and testing and development in the cloud. The July 2015 survey covered verticals including oil and gas, basic materials, industrial, consumer goods and services, healthcare, telecommunications, utilities and finance.

The survey also found variations by vertical. For instance, the oil and gas industry had the highest average RPO, with 70 percent stating theirs was 12 hours or more, and the lowest test frequency, with 80 percent of those surveyed saying they test once a year or less. Thirty percent of all the industries included in the survey stated they had an RPO of 12 or more hours.

In healthcare, 69 percent tested once a year or less. Consumer services and healthcare were the most willing to embrace cloud-based DR if they could automate network and security controls in the cloud: 65 percent of respondents in consumer services and 64 percent in healthcare would do cloud DR if they had the option of automation.

One in four respondents said they experienced failures or delays more than half of the time when they tested their secondary data center. Fifty-three percent said network connectivity was the most common cause of failure when testing their disaster recovery environment. Another 37 percent cited wrong configurations and 33 percent cited missing patches.

Network and security concerns often are singled out as barriers to cloud adoption. CloudVelox’s survey found that 55 percent of respondents would use cloud DR if they could automate their on-premises network and security controls in the cloud, while the other 45 percent would not consider the cloud even if they had on-premises network and security controls.

September 8, 2015  2:40 PM

External storage sales down as market shifts to hyperscale and server-based storage

Sonia Lelii

External storage sales are shrinking.

Total worldwide enterprise storage systems factory revenue grew to $8.8 billion during the second quarter of 2015, according to IDC. However, sales are tilting more toward hyperscale data centers and server-based storage. External storage capacity — SAN and NAS — still represents the largest portion of the market, but sales dropped 3.9 percent compared to the second quarter of 2014.

Total disk revenue grew 2.1 percent, and capacity shipments were up 37 percent year over year to 30.3 exabytes during the quarter.

EMC was still the largest storage systems supplier with 29.9 percent of worldwide external storage revenues, while IBM, NetApp and HP were in a statistical tie for second with revenue shares of 11.1 percent, 10.9 percent and 10.5 percent, respectively. Dell and Hitachi were in a statistical tie for fifth, with Dell earning 6.6 percent and Hitachi earning 6.5 percent of worldwide external storage revenues during Q2.

Most of the top vendors declined in year-over-year revenue, with NetApp, IBM and Dell suffering the largest declines. NetApp dropped 19.6 percent, finishing at $615 million in Q2 this year compared to $765 million in Q2 2014. IBM revenue fell 11 percent, coming in at $631 million compared to $712 million in Q2 2014. Dell slipped 9.9 percent, falling to $313 million compared to $414 million in Q2 2014.

EMC’s revenue declined 4 percent to $1.7 billion compared to $1.764 billion a year ago, and Hitachi slipped 1.9 percent to $366 million. HP was the only one of the top six vendors to increase year over year, and it barely went up, rising 0.2 percent to $597 million. The rest of the industry increased 9.3 percent year over year and grabbed 24.6 percent market share. IDC put overall external storage revenue at $5.7 billion for the quarter.

Although all of its revenue comes from external storage, EMC also led the total worldwide enterprise storage systems market, accounting for 19.2 percent of all revenues in 2Q15. HP held the No. 2 position with 16.2 percent of spending during the quarter and had the highest growth at eight percent. Dell accounted for 10.1 percent of global spending. Storage systems sales by original design manufacturers (ODMs) selling directly to hyperscale data center customers accounted for 11.5 percent of global spending during the quarter, and server-based storage grew 10 percent to $2.1 billion.

“Revenue growth was strongest within the group of original design manufacturers that sell directly to hyperscale data centers,” IDC storage research director Eric Sheppard said in the press release. “This portion of the market was up 25.8 percent year over year to $1 billion.”

September 7, 2015  4:48 PM

Formation Data Systems CEO offers take on hot storage trends

Carol Sliwa

Formation Data Systems CEO Mark Lewis has strong opinions on the direction that storage needs to take.

He sees the adoption of on-demand, “as-a-service” cloud models as the future, in contrast to the traditional networked storage model that “so many players out there in startup storage land” continue to follow.

Lewis founded Formation Data Systems in September 2012 after failed attempts to create a “ubiquitous data virtualization layer” at EMC with Invista and at Compaq/Hewlett-Packard with VersaStor. Formation raised $24.2 million in Series A funding in December 2013 from Pelion Ventures, Third Point Ventures, Dell Ventures and Mayfield.

The FormationOne Dynamic Storage Platform is data and storage virtualization software that runs on commodity x86 server hardware, whether bare metal or virtual machines (VMs), at multiple service levels, from archive to tier 1. The objective was to create a “consistent data layer” to enable capabilities such as snapshots, replication and deduplication across blocks, files and objects.
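
A rough way to picture that "consistent data layer" is a single content-addressed store that every access method writes through, so a service such as deduplication exists once rather than once per protocol. The sketch below is purely conceptual, with invented names; it is not Formation's implementation:

```python
# Toy illustration of a "consistent data layer": block, file and object
# front ends all write through one content-addressed store, so deduplication
# (and, by extension, snapshots or replication) is implemented once.
# Purely conceptual -- not FormationOne's actual design.
import hashlib

class DataLayer:
    def __init__(self):
        self.chunks = {}   # fingerprint -> bytes (stored once, i.e. deduplicated)

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(key, data)   # identical payloads share one chunk
        return key

    def get(self, key: str) -> bytes:
        return self.chunks[key]

layer = DataLayer()

# A "block" write, an "object" PUT and a "file" write of identical payloads
# all land on the same deduplicated chunk underneath.
k1 = layer.put(b"same payload")
k2 = layer.put(b"same payload")
assert k1 == k2 and len(layer.chunks) == 1
```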

Lewis contrasts Formation’s approach to the model followed by EMC, which he said must write management code separately for siloed platforms such as Data Domain, VNX, Isilon, Symmetrix and XtremIO.

In the following interview excerpts, Lewis addressed some of the hottest technologies:

What is your strategy on hyperconvergence?

Lewis: My belief is that, from a market framework, the storage market in aggregate is going to go through two disruptions. At the entry level, we see hyperconverged, and I would characterize that as Nutanix, SimpliVity, et al, which has been going on for four or five years now. We’ll do very, very well at the entry to mid-tier and what I call single application, VDI frameworks because it’s very economical. It will replace a lot of low-end SANs, iSCSI, low-end NFS clusters, stuff like that because at that end, why do you need even storage and servers separated?

We believe at the high end that hyperconverged is not that interesting. When you’re going to need an elastic system that may operate against hundreds of applications, many, many use cases, the idea of converging the ratios of servers, network and storage and having to have it all in one box actually is economically suboptimal. So we believe that with larger scale systems, you really do want to consolidate as you have around networking, compute and storage in elastic deliverable pools because you might start out with a small amount of storage and a large amount of compute and then have to grow the storage or change the networking. And when you have hundreds and hundreds of potentially scale-out applications, those ratios aren’t the same. We believe that the new unified platform storage – we call it dynamic storage – becomes the disruptor for the mid- to high-end market vs. legacy large SANs and what not.

Which vendors or technologies have you gone up against with pilot customers?

Lewis: We’ve gotten most of our deal flow through people who have tried Ceph and been unable to be successful there or found that it was far too much work . . . Other than that, we have some people that were presently on Amazon or [Amazon Web Services] AWS, and for scaling and other flexibility reasons want to build some or all of their own data centers. These would be startup software-as-a-service companies.

Then again, it’s less competition and more selection of alternatives. Some will say, ‘Well, I’m just not ready to do anything different.’ And so the alternative is to do nothing. We’ll see how it shakes up.

How do you differ from Ceph and vendors that claim to be software-defined, with the ability to run on any server hardware?

Lewis: By any definition that I’ve seen of the word, we are software defined. I believe that’s kind of like saying we’re defined as being a car or something. It’s accurate but not descriptive or helpful. It’s been so overused. I see people rebranding their old arrays saying, ‘We’re software, and we run an Intel processor in there,’ even though it’s unique, and ‘We’re going to be software-defined.’

We’re different in both technology and customer enlightenment and focus. We are trying to build something that will ultimately get categorized as modern enterprise storage – not technology, not open source.

Ceph started its life as open source software. Really cool stuff. Really technical. But really not very usable within enterprise storage . . . We looked at Ceph as the potential framework for Formation, but it didn’t have the enterprise-type technology we felt was needed. We are trying to appeal to people that need enterprise storage features and still would like to have it done within a private cloud. You have to be able to snapshot, to have quality of service guarantees, multi-tenancy, policy-based management, things like that.

August 28, 2015  1:27 PM

Drilling down into copy data management

Randy Kerns

Copy data management (CDM) is a relatively new term for many in Information Technology.  At first literal consideration, its meaning seems self-evident.  However, it is really a topical area that vendors address with new products and terminology.

Making copies of data for IT applications is a fundamental task. The how and why have been evolutionary processes. New developments have come from vendors to deliver solutions to manage and automate CDM.

The “why” of making copies starts with the basic function of data protection.  Protection is from a disaster (which also includes an orchestrated recovery process) or from corruption or deletion due to application, user, or hardware error.  The copy can also be used to create a point-in-time record of information for business or governance reasons.

Another reason for making a copy is to use that data for more than just the primary application. This could be for test/development, analytics, or simply because the application owner or administrator feels safer having another copy.  Especially in the case of test/development and analytics, another copy insulates the primary application from problems. Besides corruption and deletion, these problems can include potential performance impacts to the primary application.

Making data copies comes at a cost. The different types of copy mechanisms (the “how” of making copies) include making full copies of data or making snapshot copies where only changed data is represented along with the snapshot tables/indexes. The copies can be local, remote or both. Full copies will take time to create and require additional storage capacity. Snapshot copies can grow in capacity over time. All copies not only eventually consume storage space for usage but also consume time and space in backup processes. Copies of data must also be managed, especially snapshots which tend to proliferate.
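
A toy model makes the cost difference between the two mechanisms concrete (illustrative only; real snapshot implementations track changed blocks in the storage system's own metadata):

```python
# Toy comparison of copy costs: a full copy duplicates every block up front,
# while a snapshot only consumes space for blocks changed after it was taken.
# Illustrative only -- not any particular vendor's implementation.

volume = {i: f"block-{i}" for i in range(1000)}   # 1,000-block primary volume

# Full copy: pays for all 1,000 blocks immediately.
full_copy = dict(volume)
print("full copy blocks:", len(full_copy))         # 1000

# Snapshot: starts as metadata only; space grows as the primary changes.
snapshot_delta = {}                    # preserved old versions of changed blocks
for i in range(50):                    # 50 blocks overwritten after the snapshot
    snapshot_delta[i] = volume[i]      # keep the pre-change data for the snapshot
    volume[i] = f"new-block-{i}"
print("snapshot blocks consumed:", len(snapshot_delta))   # 50, not 1,000
```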

This sets the stage for copy data management with the goal to orchestrate and automate the management of copies of data and to minimize the impacts on capacity utilization and copy actions. There have been two approaches to address CDM: software to manage copies/processes and a combination of software and hardware to create a “golden” copy to leverage for other needs.  The details and merits of each require a more involved evaluation. Managing copies has the potential to improve IT operational processes (including disaster recovery) and minimize costs.

There are a number of considerations, however. CDM crosses responsibility areas from an IT perspective.

  • The first area to consider is the backup administrator. The administrator often uses deduplication software or hardware to reduce the size of copies, and no longer sees the proliferation of copies as the problem it once was. Why there are multiple copies being created does not concern backup administrators, and they do not need to be the champion of making changes.
  • A storage administrator will manage the storage system and that usually includes managing the snapshot, copy and replication functions. A storage administrator is concerned with the amount of space consumed and will utilize snapshots as a means to reduce space requirements without challenging the application owner/administrator on the need for copies.
  • Application owner/administrators sometimes make complete copies of data (databases for example) rather than snapshots to fit their usage. Usually, they will not inform the storage administrator about usage as long as there is enough capacity available.   Integration with applications for automation enhances the value of CDM.

Snapshot management with tools outside of storage system element managers is a relatively new task for storage administrators. A useful tool is critically important for effective adoption and to gain confidence for the administrator. The tool manages the lifecycle of a snapshot copy, but the administrator would not think of it in that way.

Consolidating administration of copies – complete or snapshots, local or remote, including cloud – to a single tool has potentially high value. The more difficult part is making changes in the operational and personnel responsibilities. Those who gain from consolidation of these functions may also influence budgeting for the solution.

CDM represents a new tool and embracing a new tool is sometimes difficult.  It does not help that there have been inconsistent descriptions from vendors in their effort to market their solution as unique.

Looking forward, CDM could become part of the element manager for storage systems as an integrated function that works for that specific system.  This method is probably too limiting in achieving overall value. The process needs to be applied across IT, inclusive of copies at remote or cloud locations. The best way is likely to integrate CDM with overall orchestration software. This will take a long time given the change required for IT. Meanwhile, we will continue to see individual products that provide value for copy data management.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

August 27, 2015  4:07 PM

EMC pronounces XtremIO its billion-dollar baby

Dave Raffo
EMC, XtremIO

EMC says its XtremIO all-flash array has cracked the $1 billion cumulative bookings mark in 588 days – or roughly six quarters – while remaining on track to do $1 billion of business in calendar 2015.

Perhaps because EMC’s traditional storage systems aren’t exactly going like gangbusters these days, it celebrated its billion-dollar baby with a blog post and provided updated 2014 market share numbers from Gartner. The Gartner numbers also stick a pin in Pure Storage’s planned IPO announcement. Pure was demoted from second to third in market share after its S-1 revenue figures came in below what it had led Gartner to believe they were.

EMC said it took VMware five years and Isilon scale-out NAS more than 11 years to hit $1 billion in sales. XtremIO is coming off a $300 million quarter that included more than 40 orders of $1 million or more, and 40 percent of its customers were repeat buyers.

Gartner puts EMC’s 2014 revenue at $443.6 million, giving it 34 percent of the $1.29 billion all-flash array market. IBM was second with $233.3 million and 18 percent share with Pure third at $149.4 million and 11.5 percent. Pure was listed at second with $276.3 million when Gartner released its numbers earlier this year, but Gartner edited its charts to reflect the reported revenue from Pure’s IPO filing.

EMC hails XtremIO’s scale-out performance, inline always-on data services, copy data management and application integration as reasons for its  success.

Because you don’t hear copy data management cited often as an all-flash selling point, we asked senior director of XtremIO marketing Andy Fenselau to explain. He said XtremIO’s copy data management comes from its use of in-memory snapshots and metadata.

“For us, a copy is not a traditional back-end storage copy, it’s a simple in-memory copy,” he said. “It’s instant, takes no additional space until new blocks are written, and only unique compressed blocks are written. And it’s full performance. Customers present them to application teams for self-service.”
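
Mechanically, that kind of copy can be pictured as duplicating a pointer table rather than data blocks, with space consumed only when new unique blocks are written. Here is a minimal sketch of the general idea (conceptual only; not XtremIO's actual metadata layout):

```python
# Sketch of a metadata-style copy: the "copy" duplicates only the pointer
# table (logical block -> block fingerprint); data blocks are shared, and
# new space is consumed only when either side writes new unique blocks.
# Conceptual illustration only -- not XtremIO's actual data structures.

blocks = {"fp-a": b"data-a", "fp-b": b"data-b"}   # shared, deduplicated block store
prod_volume = {0: "fp-a", 1: "fp-b"}              # logical block -> fingerprint

copy_volume = dict(prod_volume)   # the "copy": instant, metadata only
print(len(blocks))                # 2 -- no data blocks were duplicated

# A new write to the copy adds one unique block; everything else stays shared.
blocks["fp-c"] = b"data-c"
copy_volume[1] = "fp-c"
print(len(blocks))                # 3 -- space grows only with new unique data
```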

Fenselau  said about 51 percent of XtremIO’s revenue comes from customers using it for databases, business applications and analytics. Server virtualization, private cloud and VDI are also common use cases.

“We were expecting a lot of enterprise adoption, but we’re also seeing a wonderful amount of midmarket adoption,” he said.


August 25, 2015  4:05 PM

Scality raises another $45 million in Series D funding

Sonia Lelii
Object storage, Scality

Object storage vendor Scality has raised $45 million in investment that will be used to expand its North American sales force, continue international expansion and build out its reseller program. The company, which is targeting an IPO in 2017, has raised a total of $80 million since its founding in 2009.

“We started a satellite office in Japan and we will continue to invest there. We started a satellite office in Singapore and we will expand there as well as the existing market in North America and Europe,” said Leo Leung, Scality’s vice president of marketing.

This latest funding round includes a new investor and partner BroadBand Tower, Inc., which is expected to expand Scality’s presence in Japan.

“They believe in the technology and the company,” Leung said. “They were one of the first companies in Japan to push some early trends such as virtualization. Now they are pushing software defined storage.”

Other investors include Menlo Ventures, Idinvest, the Digital Ambition Fund, Iris Capital, Omnes Capital and Galileo Partners. Also, 65 percent of Scality employees participated in the latest Series D funding round. The company has 160 employees worldwide.

Leung said Scality is planning an 80 percent channel-based and 20 percent direct sales force model.

Scality also will invest in building out its internal support for resellers, which include Hewlett-Packard and Dell.

“We have larger system vendors (but) just because we are in the price books doesn’t mean they are actually selling it,” he said. “And we are going to make more investments on the tech side. We do substantial research and most of it is research in technology. There are some new things coming down the road. There are still some very hard problems out there, especially when it comes to multi-geographies and security and interoperability.”

Scality’s Ring software uses a decentralized, distributed architecture, providing concurrent access to data stored on x86-based hardware. Ring’s core features include replication and erasure coding for data protection, auto-tiering, and geographic redundancy inside a cluster.
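
For a sense of why erasure coding complements replication: the simplest possible erasure code is a single XOR parity chunk over k data chunks, which survives the loss of any one chunk at 1/k capacity overhead, versus the 200 percent overhead of keeping three full replicas. A minimal sketch of that idea follows (Ring's actual coding scheme is more general than this single-parity example):

```python
# Minimal erasure-coding illustration: k data chunks plus one XOR parity chunk
# can survive the loss of any single chunk. Real systems use more general codes
# that tolerate multiple losses; this shows only the basic idea.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data_chunks = [b"AAAA", b"BBBB", b"CCCC"]    # k = 3 data chunks
parity = reduce(xor_bytes, data_chunks)      # 1 parity chunk (~33% overhead)

# Lose chunk 1; rebuild it from the survivors plus the parity chunk.
survivors = [data_chunks[0], data_chunks[2], parity]
rebuilt = reduce(xor_bytes, survivors)
assert rebuilt == data_chunks[1]

# Compare: 3-way replication of the same data stores 3x the bytes (200% overhead).
```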

August 24, 2015  8:35 AM

Declining storage sales are the new normal

Dave Raffo
Brocade, Hewlett-Packard, NetApp

NetApp, Brocade and Hewlett-Packard last week all reported storage revenues that were better than expected, impressing Wall Street analysts and investors. Yet in each case their revenue declined from last year. The better-than-expected numbers were achieved because of lowered expectations due to sagging storage sales.

NetApp beat its previous forecast in its first quarter following a CEO change, but its revenue continued to shrink from the previous year as it transitions to its clustered Data ONTAP operating system.

NetApp revenue of $1.33 billion decreased 10 percent from last year, although it came in above the mid-point of the vendor’s previous forecast. Product revenue (outside of maintenance and service fees) of $664 million was down 25 percent from last year and 27 percent from last quarter, and below expectations.

“We did what we said we would, but we’re clear that we have a lot more work to do,” new CEO George Kurian said on the earnings conference call with analysts.

Kurian, who replaced Tom Georgens as CEO in June, added, “We have a heightened sense of urgency in working with our customers to enable their move to the modern architectures delivered by our portfolio.”

He said NetApp is “aggressively pivoting” towards a product portfolio consisting mainly of software-defined storage, flash, converged infrastructure and hybrid cloud.

One of NetApp’s major challenges is to upgrade customers from the Data ONTAP operating system to clustered Data ONTAP. Kurian said shipments of clustered systems grew by around 115 percent last quarter, and clustered Data ONTAP was deployed in 65 percent of the FAS arrays shipped, compared to 25 percent a year ago. But those clustered deployments are predominantly with new customers, and clustered Data ONTAP still accounts for only 15 percent of NetApp’s total installed base. That is up from 11 percent in the previous quarter, but there is still a long way to go.

“The percentage of our installed base that has migrated to clustered Data ONTAP has been small,” Kurian said.

Brocade’s storage sales usually reflect those of the large storage vendors who sell Brocade switches and large directors as part of their SANs. Brocade’s storage revenue of $309 million decreased five percent from last year, which was a little better than expected after EMC, NetApp, HP, Hitachi Data Systems and IBM all reported flat or declining storage product revenue.

Brocade CEO Lloyd Carney said his company took a “prudent view of the storage business” with its forecast for last quarter, but he sees the Fibre Channel market stabilizing and believes it “will remain durable for many years.” He also said IP storage and flash are pushing sales of network storage switching.

Carney said as long as data grows, companies will have to add storage.

“I worry about this space when people stop buying more storage,” he said. “When the overall storage market stops growing, then I start to worry. As long as overall storage market continues to grow … year over year, there’s going to be the need for more either Fibre Channel storage or IP-based storage.”

HP’s storage revenue of $784 million was down two percent from last year, which is better than HP has done in recent years. CEO Meg Whitman called it a “strong quarter” for storage.

As usual, HP’s best storage performer was its 3PAR StoreServ platform, which grew in double digits. HP converged storage (3PAR, StoreOnce, and StoreAll) revenue of $393 million grew eight percent, and now makes up most of the vendor’s storage sales.

“We’ve turned the corner in storage,” Whitman said. “3PAR is fulfilling the promise that we’ve all known 3PAR has had for many, many years. So listen, we’re feeling good about this business. We’ve got good momentum, and I think you’re going to see continued strength here over the next few quarters.”

NetApp, HP and Brocade all said they are receiving a boost from flash in storage arrays.

NetApp said revenue from all-flash arrays increased 140 percent from last year and HP reported 400 percent growth in 3PAR all-flash storage, although those all-flash products were in early stages last year. Brocade VP of storage networking Jack Rondini said around 70 percent to 80 percent of flash systems use Fibre Channel and will help keep Fibre Channel relevant.

“The attachment of flash continues to be one of the most disruptive factors in the data center,” Rondini said.
