Storage Soup

March 31, 2016  8:34 PM

Red Hat Ceph optimized for SanDisk InfiniFlash storage

Carol Sliwa

Red Hat Ceph Storage software is now officially tested, optimized and certified to run on SanDisk’s InfiniFlash storage system thanks to a strategic partnership between the two vendors.

The alliance – announced this week – is Red Hat’s first partnership involving an all-flash array. Ross Turk, director of storage product marketing at Red Hat, said running Ceph software-defined storage on performance-optimized hardware has been a hot topic of discussion at events and conferences.

Turk said via an email that he could foresee the combination of Ceph on a high-performance, low-latency all-flash array “broadening the use cases” for Ceph to workloads such as analytics, high-speed messaging and video streaming. The original sweet spots for Ceph tended to center on capacity-optimized workloads such as active archives and rich media as well as OpenStack block storage.

“The applications running on top of OpenStack can always take advantage of a higher performing storage foundation,” Turk said.

Turk said engineers tuned over 70 individual parameters in Red Hat Ceph Storage – the vendor’s supported distribution of open source Ceph software – for optimal IOPS, latency and bandwidth characteristics with SanDisk’s InfiniFlash. The vendors are currently working on reference architectures that document recommended configurations.
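Red Hat has not published the full list of tuned parameters, but flash-oriented Ceph tuning of that era typically touched OSD, journal and filestore settings. The sketch below is purely illustrative: the option names are real Ceph settings of that generation, while the values and the small rendering helper are hypothetical and are not the configuration Red Hat and SanDisk shipped.

```python
# Hypothetical example: rendering a handful of illustrative flash-oriented
# Ceph OSD tuning options into a ceph.conf fragment. The option names are
# real Ceph (Hammer/Infernalis-era) settings, but the values are placeholders,
# not the roughly 70 parameters Red Hat and SanDisk actually tuned.
flash_osd_tuning = {
    "osd op threads": 16,                  # more worker threads for fast media
    "osd disk threads": 4,                 # background scrub/snap-trim threads
    "filestore max sync interval": 10,     # seconds between filestore syncs
    "filestore queue max ops": 5000,       # deeper queue for low-latency SSDs
    "journal max write entries": 1000,     # larger journal write batches
    "journal max write bytes": 1 << 30,    # 1 GiB per journal write burst
}

def render_ceph_conf(section: str, options: dict) -> str:
    """Format an options dict as a ceph.conf section."""
    lines = [f"[{section}]"]
    lines += [f"{key} = {value}" for key, value in options.items()]
    return "\n".join(lines)

if __name__ == "__main__":
    print(render_ceph_conf("osd", flash_osd_tuning))
```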

Although the Red Hat-SanDisk alliance is a joint engineering/development and go-to-market effort, customers purchase the products separately – either directly from SanDisk and Red Hat or through their respective channel partners, according to Turk.

Support operates similarly. Turk said customers get support for Red Hat Ceph Storage from Red Hat and support for InfiniFlash from SanDisk. But he added that Red Hat’s global support and services team is trained on InfiniFlash, and SanDisk’s support team is trained on Red Hat Ceph Storage.

“Each of us is prepared to refer customers when appropriate,” said Turk.

SanDisk had been contributing to the open source Ceph project for more than two and a half years, according to Gary Lyng, senior director of marketing and strategy for data center solutions at SanDisk. He noted that SanDisk and Red Hat already have active joint customers.

“We believe that, with Red Hat’s proven leadership with open source technologies, the adoption of Ceph as a mainstream platform in the enterprise and cloud is possible,” Lyng wrote in an email.

He said the SanDisk-Red Hat alliance underscores a number of key areas of momentum in the storage industry, including the adoption of flash for massive-capacity workloads, software-defined storage, scale-out platforms and flexible, information-centric infrastructures.

Lyng said, although this week’s announcement focused only on Red Hat Ceph Storage, “additional collaboration is natural” given their tight relationship. Turk said the vendors are exploring potential uses for SanDisk technologies and Red Hat Gluster Storage, his company’s supported distribution of the open source Gluster distributed file system.

Red Hat’s partnerships also extend to manufacturers of servers, processors, networking gear, hard disks, flash drives and controllers. Vendors include Fujitsu, Intel, Mellanox and Super Micro. Red Hat has worked with Intel to optimize Ceph on flash, according to Turk.

Lyng said SanDisk has been formally building out a technology partner ecosystem for its data center portfolio since last fall. Vendors with which SanDisk has collaborated include IBM, Nexenta and Tegile. Western Digital agreed to acquire SanDisk last October for $19 billion.

March 31, 2016  11:42 AM

EMC adds weight to ScaleIO

Dave Raffo
EMC, ScaleIO

Private and hybrid cloud storage is a big piece of EMC’s strategy, and this week the vendor upgraded one of the key building blocks for that strategy with the release of ScaleIO 2.0.

ScaleIO is block storage that can run on commodity hardware, although EMC began packaging it on hardware nodes in late 2015. It is designed to add enterprise storage features to direct attached storage, allowing for easy upgrades. The software has multi-tenant support for building cloud storage.

David Noy, EMC’s VP of product management for emerging technologies, said ScaleIO is gaining traction with three kinds of customers: service providers building out public clouds to compete with Amazon, large enterprises building private clouds with Amazon-like features, and large financial services firms looking to build block storage systems on commodity hardware.

“The appeal of ScaleIO is the ability to plop in a commodity server with some drives to add capacity to your block storage,” Noy said.

Features added in ScaleIO 2 are designed to take advantage of the hardware nodes that EMC sells it on, as well as fit the types of customers using it the most.

They include security enhancements such as IPv6 support, Secure Sockets Layer (SSL) connections between components and the ability to integrate with Active Directory and LDAP. ScaleIO 2.0 also adds in-flight checksums, read flash cache capabilities, phone-home support and a maintenance mode that diverts I/O arriving during maintenance to a temporary location and moves the data back when the node comes back online.
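EMC has not detailed ScaleIO’s internals here, but the maintenance-mode behavior described above can be illustrated conceptually: writes that arrive while a node is being serviced are captured in a temporary location and replayed once the node returns. The Python below is a toy sketch of that idea under those assumptions; it is not ScaleIO code or its API.

```python
# Toy illustration of the maintenance-mode idea described above:
# while a node is in maintenance, incoming writes are captured in a
# temporary journal; when the node comes back, the journal is replayed.
# Conceptual sketch only, not EMC ScaleIO's actual mechanism.

class Node:
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}   # block address -> data
        self.in_maintenance = False

class MaintenanceMirror:
    def __init__(self, node: Node):
        self.node = node
        self.journal: dict[int, bytes] = {}  # temporary location for diverted writes

    def enter_maintenance(self) -> None:
        self.node.in_maintenance = True

    def write(self, address: int, data: bytes) -> None:
        if self.node.in_maintenance:
            self.journal[address] = data      # divert I/O to the temporary copy
        else:
            self.node.blocks[address] = data  # normal write path

    def exit_maintenance(self) -> None:
        # Move the diverted data back once the node is online again.
        self.node.blocks.update(self.journal)
        self.journal.clear()
        self.node.in_maintenance = False

if __name__ == "__main__":
    node = Node("sds-1")
    mirror = MaintenanceMirror(node)
    mirror.enter_maintenance()
    mirror.write(42, b"new data written during maintenance")
    mirror.exit_maintenance()
    assert node.blocks[42].startswith(b"new data")
```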

EMC also expanded ScaleIO support for next-generation applications such as containers, CoreOS and OpenStack.

For companies that don’t want ScaleIO shipped on a hardware node, the software is also available on a trial basis as a free download.

March 29, 2016  1:48 PM

NetApp prepares SANtricity for Splunk

Dave Raffo

NetApp today rolled out an upgraded version of its SANtricity software for its E and EF Series of high performance arrays, with the focus on making Splunk and other data analytics applications run faster.

NetApp acquired the E Series platform from LSI in 2011, and added the EF all-flash version in 2013. The latest SANtricity release is designed to accelerate performance for high IOPS and low latency applications.

“E-Series is our main product line for these third-platform applications,” said Lee Caswell, NetApp vice president of solution and services marketing. “It gives you consistently low-latency response times.”

NetApp claims the latest version of SANtricity can:

· Increase Splunk search performance by 69% versus commodity servers with internal disks
· Drive 500% better Hadoop performance during data rebuild with Dynamic Disk Pools versus commodity servers with RAID
· Reconstruct a 400GB solid-state drive in 15 minutes for NoSQL databases with commodity servers and direct attached storage
· Encrypt data at rest with less than one percent performance impact vs. the 70% impact from commodity servers with internal disk drives
· Build one architecture for hot, warm, cold and frozen tiers instead of different storage architectures for each tier.

NetApp is also partnering with Arrow on pre-configured E Series bundles for enterprise Splunk.

While SANtricity isn’t for flash-only storage, the EF Series seems the better fit when talking about high IOPS, low latency workloads. This release comes in the wake of two new flash systems from competitors that also target analytics – EMC’s DSSD D5 and Pure Storage’s FlashBlade.

“Flash enables a new level of performance for enterprise storage for big data applications,” Caswell said.

March 28, 2016  7:27 AM

IDC: Backup appliance revenue grew for 2015

Carol Sliwa

Factory revenue for backup appliances hit $1.05 billion in the fourth quarter and $3.35 billion for 2015 – up 2.5% over 2014 – according to statistics released last week by International Data Corp. (IDC).

IDC Research Manager Liz Conner said the fourth-quarter figure marked only the second time that revenue from what IDC calls the Purpose-Built Backup Appliances (PBBA) market has hit $1 billion for a quarter. The $1.05 billion represented a 4.1% increase over fourth-quarter revenue for 2014.

Conner said via a prepared statement that backup appliance vendors have been adapting to industry trends by “putting greater emphasis on backup and deduplication software, meeting recovery objectives, [offering] the ability to tier to the cloud” and improving ease of use.

Worldwide PBBA capacity grew to 1,160 petabytes (PB) in the fourth quarter, a spike of 25.6% compared to the same period in 2014, according to IDC. Annual capacity rose to 3.3 exabytes, a 23.1% increase over 2014.

EMC continued to dominate the backup appliance market, generating more than one-third of its annual backup appliance revenue in the fourth quarter alone. Its fourth-quarter revenue of $707.9 million represented 67.7% market share and 10.6% growth over Q4 2014. For 2015, EMC’s revenue exceeded $2 billion, as the company captured 61.4% market share.

Symantec was a distant second with $479 million in annual revenue and 14.3% market share in 2015. Symantec closed out the year with $125.1 million in fourth-quarter revenue – an 8.1% increase over the same quarter in 2014. In January, Symantec completed the sale of its Veritas division, which includes the NetBackup and Backup Exec products, to The Carlyle Group.

Rounding out the top five in annual revenue were IBM ($165.7 million), Hewlett Packard Enterprise (HPE, with $150.3 million) and Barracuda ($88.1 million). IBM’s revenue was down 19.5% compared to 2014, but HPE (11.7%) and Barracuda (32.6%) saw substantial growth.

Behind EMC and Symantec in the fourth quarter of 2015 were IBM ($41.6 million), HPE ($40.7 million) and Dell ($23.6 million). Dell’s revenue grew 16.5% in Q4 2015 versus Q4 2014.

For its PBBA market sizing, IDC includes products that integrate the data movement engine (backup application) with the appliance as well as products that serve only as a target for incoming backup application data.

March 24, 2016  8:24 PM

Commvault adds big data and cloud support to its data platform

Sonia Lelii

Commvault Systems Inc. this month rolled out the latest version of its backup and data management portfolio to add support for Hadoop, Greenplum and IBM’s General Parallel File System (GPFS).

It also extended support for its Commvault IntelliSnap product to include NEC and Nutanix Acropolis.

The company’s portfolio comprises Commvault Software for data protection, recovery and archiving, and the Commvault Data Platform, formerly known as Simpana. The Data Platform is an open architecture that supports APIs throughout the stack and focuses on making copies of data readily accessible in their native format for third-party applications. It allows customers to access native copies of data without having to wait for a restore from a backup repository.

Chris Van Wagoner, Commvault’s chief strategy officer, said customers can use Commvault Software to manage big data configurations. It can also recover data across the whole system or across selected nodes, components or data sets.

“The architectures of Greenplum, Hadoop and GPFS are node-based,” he said. “They are distributed rather than a vertical stack. Our platform is more for traditional data centers, so we had to do some work on our architecture. We had to make changes to our platform to mirror the multi-node architecture.”

The company also extended API support for Amazon S3, REST and NFS interfaces. The Commvault Data Platform also now offers customers a scale-out storage option that can run on any commodity hardware to support petabyte scale environments.

“We introduced the ability to provide search and index support for data without having to move the data,” Van Wagoner said. “For some use cases, we can go out through connectors and index data in the Salesforce application without dragging the data back to our platform. We can index and search in place without having to move it.”

Commvault also can protect data inside VMware, Hyper-V, Xen, Red Hat Enterprise Virtualization (RHEV) and Nutanix Acropolis hypervisors, and protect workloads as they move from those hypervisors to the Microsoft Azure and Amazon Web Services (AWS) public clouds.

“We can pick up VMware workloads and restore them into the Amazon cloud,” Van Wagoner said. “We give customers the true promise of portability and the ability to move data between cloud providers and between private and public clouds. We now also support Nutanix and its hypervisor.”

Commvault snapped a string of four quarters of year-over-year revenue declines when it reported revenue of $155.7 million last quarter, up 2% from the same quarter a year earlier. Software revenue of $71.4 million increased 24% year over year. While Commvault continued to make money during its sales declines, its $13.2 million in income last quarter was its highest take in a year.

March 24, 2016  9:31 AM

Violin Memory fuses with Stream, creates FlashSync

Dave Raffo
flash storage, Violin Memory

Violin Memory is trying to push its all-flash arrays into the financial services market through a partnership with U.K. software vendor Stream Financial.

The vendors have combined to launch what they call a data appliance portal. The product, FlashSync, combines Violin’s Flash Storage Platform (FSP) arrays with servers and Stream’s Data Fusion federated query software. The target market is investment banks and other financial services firms that have to crunch billions of rows of data from various database sources. Violin and Stream Financial claim FlashSync is more efficient and cost-effective than pouring all of the data into a massive warehouse or doing everything in-memory where data is not persistent.

FlashSync has four configurations: Micro (24 CPUs, 7 TB capacity, 250 billion rows of data), Small (36 CPUs, 11 TB, 500 billion rows), Medium (72 CPUs, 22 TB, 750 billion rows) and Large (144 CPUs, 44 TB, 1.5 trillion rows).

The systems allow customers to access data at the source and write it to high-performance flash memory in a persistent manner. Data Fusion allows queries across various sources as if they were one system, with the goal of running faster queries on the data that drives business decisions.
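Data Fusion itself is proprietary, but the federated-query pattern it implements can be shown in miniature: push a sub-query down to each source, then merge the partial results as if they came from one system. The sketch below uses throwaway in-memory SQLite databases purely as stand-ins for the disparate sources a bank might query; the table and column names are invented for illustration.

```python
# Minimal illustration of the federated-query idea: query several independent
# data sources and merge the results as if they were one system. SQLite is
# used here only as a stand-in for the disparate databases a bank might hold;
# this is not Stream Financial's Data Fusion software.
import sqlite3

def make_source(rows):
    """Create a throwaway in-memory database holding (desk, notional) rows."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE trades (desk TEXT, notional REAL)")
    conn.executemany("INSERT INTO trades VALUES (?, ?)", rows)
    return conn

def federated_exposure(sources):
    """Push the aggregation down to each source, then merge the partials."""
    totals: dict[str, float] = {}
    for conn in sources:
        for desk, subtotal in conn.execute(
            "SELECT desk, SUM(notional) FROM trades GROUP BY desk"
        ):
            totals[desk] = totals.get(desk, 0.0) + subtotal
    return totals

if __name__ == "__main__":
    london = make_source([("rates", 1_000_000.0), ("fx", 250_000.0)])
    new_york = make_source([("rates", 750_000.0), ("equities", 400_000.0)])
    print(federated_exposure([london, new_york]))
    # {'rates': 1750000.0, 'fx': 250000.0, 'equities': 400000.0}
```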

Carlo Wolf, Violin’s vice president of the Europe, the Middle East and Africa (EMEA) region, said the partnership came about after a bank tested Data Fusion and liked its performance but needed it to scale to billions of rows of data. “You just can’t do that with traditional disk arrays,” Wolf said.

FlashSync will be sold by Violin channel partners, with the array vendor providing support for the storage and Stream Financial tackling software support issues.

Wolf said a large U.K. financial services firm has done a trial with FlashSync, and the product will initially roll out in the U.K. He said he expects FlashSync to eventually hit the U.S. market but no U.S. channel partners have signed on yet.

Violin has been struggling financially and failed in an attempt to find a buyer for the company. CEO Kevin DeNuccio said Violin’s strategy will be to find partners to help bring products to market.

March 18, 2016  7:53 AM

Worldwide storage revenue down in Q4, barely up for 2015

Sonia Lelii

Despite slipping in the fourth quarter, overall worldwide storage revenue increased 2.2% in 2015.

External (networked) storage declined 2.3% for the year, according to International Data Corporation’s worldwide quarterly enterprise storage systems tracker. Hewlett Packard Enterprise (HPE) was the only major vendor with increases in the fourth quarter and 2015 overall. HPE finished the year second behind EMC in overall storage systems revenue and third behind EMC and NetApp in external storage.

Total 2015 revenue of $37.16 billion was up from $36.36 billion in 2014. EMC led the way with $7.13 billion for 19.2% market share, but EMC’s revenue fell 5.9% from 2014 and its market share slipped from 20.8%. HPE’s revenue of $5.77 billion increased 12.6% from 2014 and its share rose from 14.1% to 15.5%. Dell, IBM and NetApp completed the top five. IBM took the biggest hit with a 23.2% decline, partially because of the sale of its x86 server business to Lenovo.
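The share figures are consistent with the revenue numbers quoted above, as a quick back-of-the-envelope check shows:

```python
# Quick sanity check of the IDC share figures quoted above,
# computed from the revenue numbers in this post.
total_2015 = 37.16   # $ billions, worldwide storage revenue, 2015
emc_2015 = 7.13
hpe_2015 = 5.77

print(f"EMC share: {emc_2015 / total_2015:.1%}")   # ~19.2%
print(f"HPE share: {hpe_2015 / total_2015:.1%}")   # ~15.5%
```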

Revenue from external storage – SAN and NAS – dropped 2.4% to $24.08 billion in 2015. All of EMC’s systems revenue comes from external storage, and it had 29.6% market share – down from 30.7% in 2014. NetApp is also external storage-only, and its market share slipped from 12.7% to 11.1% after a revenue drop from $3.13 billion to $2.68 billion.

NetApp’s 14.3% decline was the biggest fall of all vendors for external storage. HPE improved 2.7% to $2.41 billion in external storage revenue, and its market share improved from 9.5% to 10%. IBM and Hitachi Data Systems completed the top five. External storage revenue from all other vendors rose eight percent and made up 31.5% of the market.

In the fourth quarter, total enterprise storage systems garnered $10.38 billion in revenue, compared with $10.62 billion in the same quarter of 2014, a 2.2% decline. The total worldwide external enterprise storage systems market generated $7 billion, compared with $7.12 billion the previous year.

Revenue from flash inside storage arrays grew, however. The all-flash array market generated $955.4 million, a 71.9% increase from the previous year. Hybrid flash array revenue came to $2.9 billion, which is 28% of the overall market.

EMC had $2.23 billion in revenue in the fourth quarter, giving it 21.5% share of the overall market and 31.7% of the external market. EMC’s revenue fell 5.2% from the previous year.

HPE, Dell, IBM and NetApp followed in overall storage. In external storage, IBM, HPE, NetApp and HDS rounded out the top five behind EMC. HPE revenue grew 7.9% overall and 2.6% in external storage in the fourth quarter.

NetApp took the biggest hit. Its revenue dropped 14.8% in the quarter and its external storage market share fell from 10.6% to 9.3%. External storage revenue from all other vendors grew 6.5% and their combined 30.1% share would rank second behind EMC.

March 17, 2016  9:35 AM

Pivot3 closes funding round to develop ‘new toys’

Dave Raffo

Hyper-converged vendor Pivot3 closed a $55 million funding round this week to support expansion following a merger with flash array vendor NexGen Storage.

Pivot3 CEO Ron Nash said the bulk of the funding would go towards integrating NexGen technologies into Pivot3 products and vice versa. The two companies merged in January.

“We’re putting the biggest part of it in product development,” Nash said of the new funding. “We like the products NexGen has, but we like even more what we could do jointly given the base of both companies and both sets of technologies. We have a bunch of products we can put together.

“We have more things to play with and I want it to go faster.”

Nash said quality of service and dynamic provisioning are key NexGen technologies that you can expect to see in Pivot3 products, while Pivot3’s erasure coding could end up in NexGen arrays. Multiple-hypervisor support is another roadmap item for Pivot3, which currently supports only VMware ESX. Nash said the current headcount of around 230 will probably rise about 20% this year, and the vendor will keep “equal size” offices in Houston and Austin, Texas, and Boulder, Colorado. “We’re not calling any of those offices headquarters,” he said.

Nash said funding will also be spent on sales and marketing.

Pivot3 has raised $247 million in funding since its inception in 2003, including a $45 million round in February 2015. Previous investors Argonaut Private Equity and S3 Ventures were involved in the current round along with several undisclosed investors. Nash said none of the funding came from strategic investors.

That funding total isn’t much compared to hyper-converged rivals such as Nutanix ($312 million) and SimpliVity ($276 million). SimpliVity raised $175 million in one round in 2015, and Nutanix raised $140 million in a 2014 round.

“Those companies put so much into sales and marketing,” Nash said. “I put a lot more in product development, playing for the long term. Hurling it into sales and marketing will be a short-term boost but you have to invest in products.”

March 15, 2016  12:46 PM

Hybrid cloud still an unfulfilled goal for most

Dave Raffo

Storage vendors often talk about this being a transformation period in storage. EMC, whose executives use that term as much as anybody, conducted an analysis of its customers to see just where they stand in the transformation.

EMC and its VMware subsidiary conducted IT transformation workshops across 18 industries to gauge their customers’ progress. The workshops focused on helping organizations identify gaps in their IT transformation, determine goals and prioritize their next steps.

While organizations generally were far along in virtualization, they have a long way to go in streamlining infrastructure and moving to hybrid cloud architectures:

  • More than 90% are in the evaluation or proof of concept stage for hybrid clouds, and 91% have no organized, consistent way of evaluating workloads for the hybrid cloud.
  • 95% say it is critical to have a services-oriented IT organization without silos, but less than 4% operate that way.
  • 76% of organizations have no developed self-service portals or service catalog – crucial pieces to building a private cloud.
  • 77% want to provision infrastructure resources in less than a day, but most say it takes between a week and a month to do so.
  • Only organizations in the top 20th percentile can do showback and chargeback to bill the business for services consumed, and 70% say they don’t know what resources each business unit is consuming.

So if so many organizations want to streamline their IT and build hybrid clouds, why have so few done so? If you guessed cost, you’re probably on the right track.

“There are usually two limiting factors,” said Barbara Robidoux, VP of marketing for EMC global services. “They’re all being told they have to hold costs down, especially on the legacy side. If they’re going to go forward and modernize any aspect, that costs money, yet they’re being told to hold costs down. So to some degree, you’re stuck. We’re hearing ‘We need help with ROI analysis to see how we can save money on infrastructure.’ The other thing is a lack of skills and know-how. That’s pretty disruptive.”

March 15, 2016  9:17 AM

HPE turns 3PAR array from all-flash to hybrid

Sonia Lelii
Flash Array, HPE

For years we’ve seen storage vendors take systems designed for hard disk drives (HDDs) and add solid-state drives (SSDs) to them. Now Hewlett Packard Enterprise is taking hardware that ran all SSDs and allowing customers to use HDDs inside of it.

HPE’s new 3PAR StoreServ 20840 array uses the same hardware as the all-flash StoreServ 20850 launched in 2015. But while the 20850 only supports flash, the 20840 holds up to 1,920 HDDs. Both systems can use from six to 1,024 SSDs.

The 3PAR StoreServ 20840 scales up to eight controller nodes and includes 52 TB of total cache (48 TB of it optional flash cache) and 6,000 TB of raw capacity. The system supports any combination of SAS and nearline SAS hard drives and SSDs.

“The system is the exact same hardware as the all-flash 20850 and we will keep that system around for customers who want to go entirely with flash. This 20840 is for our customers that happen to support spinning disk on the back end,” said Brad Parks, director of strategy for HPE storage. “The 20840 is flash optimized but not all-flash limited.”

The new 3PAR storage model is part of the HPE 3PAR StoreServ 20000 enterprise SAN family that the company launched in June 2015. In August 2015, HPE announced the 20450 all-flash system and the 20800 all-flash starter kit.

HPE last week also launched the enterprise-level StoreOnce 6600 and midrange StoreOnce 5500 data deduplication disk backup appliances. Both are based on HPE’s latest ProLiant servers. The 6600 scales from 72 TB to 1,728 TB of usable capacity, while the midrange 5500 scales from 36 TB to 864 TB of usable capacity in a highly dense footprint designed for data deduplication in large and midrange data centers and regional offices.

HPE StoreOnce supports HPE Data Protector, Veritas NetBackup, Backup Exec via OST, Veeam and BridgeHead software. The systems also work with StoreOnce Recovery Manager Central, which takes application-consistent snapshots on the HPE 3PAR StoreServ array and copies the changed blocks directly to any HPE StoreOnce appliance. The process is known as flat backup.

“Snapshot data moves directly over the network to StoreOnce without engaging the third-party software,” Parks said.
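HPE’s own tooling handles this through Recovery Manager Central, but the general flat-backup flow (snapshot the array, then ship only the blocks that changed since the previous snapshot to the backup target) can be sketched generically. The Python below is a conceptual illustration only, with invented block and volume structures; it is not HPE’s implementation or the StoreOnce Catalyst protocol.

```python
# Generic sketch of a "flat backup" flow: take an application-consistent
# snapshot, diff it against the previous snapshot, and copy only the changed
# blocks to the backup target. Conceptual illustration only.
import hashlib

def snapshot(volume: dict[int, bytes]) -> dict[int, str]:
    """Record a fingerprint of every block in the volume."""
    return {addr: hashlib.sha256(data).hexdigest() for addr, data in volume.items()}

def changed_blocks(prev: dict[int, str], curr: dict[int, str]) -> set[int]:
    """Blocks that are new or whose contents changed since the last snapshot."""
    return {addr for addr, digest in curr.items() if prev.get(addr) != digest}

if __name__ == "__main__":
    volume = {0: b"base block", 1: b"unchanged block"}
    baseline = snapshot(volume)

    volume[0] = b"modified block"       # application keeps writing
    volume[2] = b"brand new block"
    current = snapshot(volume)

    to_copy = changed_blocks(baseline, current)
    print(f"blocks to ship to the backup appliance: {sorted(to_copy)}")  # [0, 2]
```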

Jason Buffington, senior analyst at Enterprise Strategy Group, said these latest models demonstrate that HPE is offering a consolidated approach for data protection in both data centers and remote sites.

“What we are seeing is organizations have multiple workloads with specific data protection solutions,” he said. “That is the macro trend that HPE is trying to address. StoreOnce is trying to address a way to centralize storage data protection.”
