Storage Soup

A SearchStorage.com blog.


April 23, 2014  4:46 PM

Pure vacuums up $225 million in fresh funding



Posted by: Dave Raffo
Storage

When Pure Storage pocketed $150 million in funding last August, CEO Scott Dietzen said that gigantic round would fuel rapid growth for the all-flash array vendor in the face of increasing competition from EMC and other large storage vendors.

Apparently, the $150 million wasn’t enough to fund that growth spurt. Today Pure closed an even bigger round, picking up another $225 million to bring its total funding to $470 million. That’s either a lot of growth or a lot of money being burned.

In his blog today and in an interview with Storage Soup, Pure Storage president David Hatfield explained why the company went back so soon for so much money. He said it wasn’t out of necessity: Pure has not spent most of its last round and could be cash-flow positive if the leadership team wanted that. But Pure wants to keep growing its engineering team, international sales reach, brand support and channel.

Hatfield said current and new investors were eager to pump more money into Pure, so Pure took it.

The title of Hatfield’s blog includes the term “Building a War Chest,” and that tells you what you need to know about the all-flash storage market today. EMC, NetApp, IBM, Hitachi Data Systems (HDS), Hewlett-Packard and Dell are all pushing flash either in hybrid or all-flash systems. Then there are the all-flash pioneers such as Pure, Nimbus and Violin Memory vying to push spinning disk out of the enterprise. It’s easily the most competitive storage market today.

As for growth, Hatfield said Pure is “adding two or three people a day,” including new members of its large executive team. On the product front, Pure is in beta with replication, its major missing software piece. Hatfield said there are plans to continue scaling up the platform to reach hundreds of TB on a system, increase interoperability with third-party software applications and move beyond tier one storage.

On the customer front, Pure claims its revenue grew 700 percent in 2013 over 2012 and has been increasing more than 50 percent sequentially each quarter. Pure said it shipped more than 1,000 FlashArrays in 2013.

Hatfield said despite the large vendors’ talk about flash and their new all-flash systems, they are still committed to spinning disk while Pure is pure flash. “EMC would rather sell a $1.5 million VMAX instead of a $300,000 [all-flash] XtremIO,” he said. “We’re competing with hybrid models. They’re selling disk first, then flash as a tier. We have a two-plus year lead on technology. As [legacy vendors] try to close the technology gap, they have a business dilemma. Their multi-billion dollar disk franchise is at risk. We have the ability to attack it, and not feather in flash as a performance tier.”

They have a huge war chest to fund that attack. The latest round included new investor Wellington Management Company as well as previous investors T. Rowe Price Associates, Tiger Global, Greylock Partners, Index Ventures, Redpoint Ventures, and Sutter Hill Ventures.

April 23, 2014  3:10 PM

Micron introduces higher endurance SSDs



Posted by: Sonia Lelii
Storage

Micron Technology unveiled the M500DC SATA solid-state drive (SSD), targeted at both mission-critical storage and cloud-based Web 2.0 storage, as the company tries to grab more of the data center market by appealing to cost-conscious customers as well as those who want performance and endurance.

The M500DC, which is part of Micron’s M500 portfolio, is designed with the endurance to handle transactional databases, virtualization, big data and content streaming. The M500DC SATA SSD is built on the company’s MLC NAND flash technology and custom firmware. It’s integrated with Micron’s Extended Performance and Enhanced Reliability Technology (XPERT) feature suite, an architecture that integrates storage media and controller to extend drive life for demanding data center workloads.

“This product casts a wide net,” said Matt Shaine, Micron’s product marketing manager for enterprise SSD. “Our customer use cases for this product are all over the map in terms of capacity, performance and endurance at an attractive price point. Our data center customers give us a lot of feedback on requirements, and they essentially fall into two groups.”

Shaine said one group looks more at an affordable price than at features, performance and endurance. The other group values mixed-use random performance, full data protection and data-path protection.

The SSD combines a 6Gbps Marvell controller with Micron’s 20-nm MLC NAND.  There are some server-type features such as die-level redundancy for physical flash failures, onboard capacitors for power-loss protection and advanced signal processing to extend the life of the NAND.

The SSD comes in both 1.8-inch and 2.5-inch form factors and capacities of 120 GB, 240 GB, 480 GB and 800 GB. The 800 GB model delivers sequential reads of 425 MBps and sequential writes of 375 MBps, with random performance of 65,000 read IOPS and 24,000 write IOPS. Its write endurance is rated at 1.9 PB.

The 480 GB SSD carries the same 1.9 PB endurance rating and the same 425 MBps sequential reads and 375 MBps sequential writes. Its random reads run at 63,000 IOPS and random writes at 35,000 IOPS.

Micron claims the new SSD can sustain one to three full drive writes per day over five years, reducing the need to replace drives frequently.

“This is a more rugged drive that can handle longer workloads,” Shaine said.
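Those endurance claims are easy to sanity-check: a total-bytes-written rating divides out to drive writes per day. A minimal sketch, assuming decimal units and the five-year service life cited above:

```python
# Illustrative drive-writes-per-day check for the published M500DC
# endurance figures. Assumes decimal units (1 PB = 1,000,000 GB) and
# the five-year service life Micron cites.

def drive_writes_per_day(endurance_pb: float, capacity_gb: float,
                         years: float = 5.0) -> float:
    """Convert a total-bytes-written rating into drive fills per day."""
    total_writes_gb = endurance_pb * 1_000_000   # PB -> GB
    days = years * 365
    return total_writes_gb / (capacity_gb * days)

for capacity_gb in (800, 480):
    dwpd = drive_writes_per_day(1.9, capacity_gb)
    print(f"{capacity_gb} GB drive: ~{dwpd:.1f} drive fills per day")

# 800 GB drive: ~1.3 drive fills per day
# 480 GB drive: ~2.2 drive fills per day
```

Both results land inside the “one to three drive fills per day” range Micron quotes.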


April 21, 2014  10:51 AM

Object storage as common denominator



Posted by: Dave Raffo
Storage

Conversations at the recent National Association of Broadcasters (NAB) conference led me to the conclusion that object storage is becoming a common denominator between block and file storage within companies in this vertical market. I noticed a separation in the storage systems used for different business groups in a company.

That separation is happening because the groups have different storage requirements. Block storage has a variety of performance, capacity, and resiliency needs. File storage, whether on a block storage system or with NAS systems, differs in scale, performance and economics. The businesses have evolved separately, and the accounting for storage expenses has never moved to a service model.

Broadcasters at the conference talked about using object storage to build a hybrid cloud or private cloud. The distinction between the two was that hybrid clouds also include the use of public clouds.

These use cases mirror the situation other industries faced before they deployed object storage systems. Broadcast and entertainment companies use object storage for content distribution, content repositories, and sharing data with file sync-and-share software along with high-performance file transfer software.

Ultimately, there were no real differences in the needs of the different groups. Their storage characteristics include massive scale in both capacity and number of files stored. Object storage can address these needs and can be deployed as a common solution, providing economies in both acquisition and operational costs. The object storage system could even be deployed as a service, charging users through a capacity-on-demand model. The economics overcame traditional parochialism.
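That capacity-on-demand service model amounts to metering each group’s usage and billing it back. A minimal sketch, with hypothetical group names, usage figures and a made-up flat rate:

```python
# Minimal capacity-on-demand chargeback sketch. The rate and the
# per-group usage numbers are hypothetical, purely for illustration.

RATE_PER_TB_MONTH = 25.00  # assumed flat rate in $/TB/month

monthly_usage_tb = {       # metered capacity per business group
    "content-distribution": 420.0,
    "content-repository": 1350.0,
    "file-sync-share": 85.0,
}

for group, used_tb in monthly_usage_tb.items():
    charge = used_tb * RATE_PER_TB_MONTH
    print(f"{group:<22} {used_tb:>8.1f} TB  ${charge:>10,.2f}")
```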

This could be thought of as “technology as the unifier.” Not exactly, though, because there remains a need for “special usage” storage to satisfy other requirements. Block systems and NAS systems with certain characteristics are still required, and that is unlikely to change much. So it could be said that object storage is the common denominator for meeting new storage demands.


April 21, 2014  6:46 AM

Splunk’s app for VMware deepens NetApp support



Posted by: Sonia Lelii
Cloud storage, Storage, VMware

Data analytics and security vendor Splunk made it easier to use its software with NetApp and VMware with the latest version of its Splunk App for VMware.

The San Francisco-based vendor’s software collects data from applications, operating systems, servers and storage, and uses the data for operational intelligence.

The upgraded Splunk App for VMware provides an automated drill-down into data from the NetApp Data ONTAP operating system in VMware environments.

Splunk correlates and maps data across virtualization and storage tiers to handle storage latency and capacity problems.

Leena Joshi, Splunk’s senior director for solutions marketing, said Splunk singled out NetApp ONTAP because of the company’s open APIs and because “a lot of our customers have NetApp installations.”

“We already supported NetApp but what we have done is made the process automated,” Joshi said. “We just made it easier. We have taken advantage of [NetApp’s] open APIs to map VMDK file names to NetApp ONTAP.”

The app provides capabilities such as analytics for root-cause discovery, capacity planning and optimization, chargeback, outlier detection, troubleshooting, and security intelligence. It also helps forecast future CPU, memory and disk requirements for VMware vCenter and ESXi hosts.
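Splunk hasn’t detailed the forecasting math here, but resource forecasts of this kind are typically a trend fit over historical utilization samples, projected forward. A minimal, hypothetical sketch (not Splunk’s implementation):

```python
# Hypothetical capacity-forecast sketch -- not Splunk's implementation.
# Fits a least-squares linear trend to daily disk-usage samples and
# projects the trend forward.

def forecast_usage(samples: list[float], days_ahead: int) -> float:
    """Project a linear trend over daily samples days_ahead into the future."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + days_ahead)

disk_used_gb = [510, 523, 531, 548, 560, 571, 583]  # made-up daily samples
print(f"Projected usage in 30 days: {forecast_usage(disk_used_gb, 30):.0f} GB")
```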


April 18, 2014  12:44 PM

IBM, VMware add options for cloud DR



Posted by: Dave Raffo
Storage

The cloud has been a boon for disaster recovery, bringing the technology to smaller companies while allowing vendor newcomers such as Axcient, Zetta, Zerto and Quorum to make names for themselves.

But this week two large vendors rolled out cloud DR. VMware added Disaster Recovery to its vCloud Hybrid Service (vCHS) and IBM added its Virtual Server Recovery (VSR) DR service to its SoftLayer cloud.

VMware has had DR on its roadmap since it launched vCloud Hybrid Service in late 2013. The vendor maintains five data centers in the U.S. and U.K. for the service.

Customers install a virtual appliance on-site, and use VMware’s data centers to replicate and fail over VMDKs. VMware said it can deliver a 15-minute recovery point objective (RPO) and subscriptions start at $835 a month for 1 TB of storage. Customers pick which data center location they want to use. The service includes two free DR tests per year.
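An RPO bounds how much recent data can be lost in a failover, so checking whether a replication schedule meets it is simple arithmetic. A hedged sketch, not VMware’s tooling:

```python
# Illustrative RPO check -- not VMware's tooling. The worst-case
# staleness of the latest replica (replication interval plus the time
# to ship the changed data) must fit inside the RPO.

def meets_rpo(replication_interval_min: float,
              transfer_time_min: float,
              rpo_min: float = 15.0) -> bool:
    """True if worst-case replica staleness stays within the RPO."""
    return replication_interval_min + transfer_time_min <= rpo_min

print(meets_rpo(10, 3))   # True: 13-minute worst case fits a 15-minute RPO
print(meets_rpo(15, 5))   # False: 20-minute worst case misses it
```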

“We identified DR as one of the key canonical uses of the hybrid cloud,” said Angelos Kottas, director of product marketing for VMware’s Hybrid Cloud unit. He added there is a “pent-up demand for a public cloud service optimized by the hybrid cloud.”

IBM will make its three-year-old Virtual Server Recovery (VSR) service available on its SoftLayer cloud for the first time. IBM claims it can recover workloads running on Windows, Linux and AIX servers within minutes.

Carl Brooks, a 451 Research analyst, said VMware is playing catchup to Amazon and other cloud services while IBM is shifting its business model with the new DR services.

“IBM is doing this now with SoftLayer,” he said. “It shows that IBM is changing its business model to include the cloud rather than traditional data center infrastructure, which is anti-cloud. It’s still on the Big Blue environment, still using Tivoli management software, but now SoftLayer is driving it.

“It’s business as usual but better for IBM. For VMware, it’s a new frontier.”


April 17, 2014  10:02 AM

Poor storage sales give IBM the blues



Posted by: Dave Raffo
Storage

IBM storage revenue declined for the 10th straight quarter, yet the results disclosed Wednesday night were hardly business as usual for Big Blue. IBM’s 23 percent year-over-year decline in storage revenue was much steeper than the normal drops in the six percent to 12 percent range.

When IBM sold its x86 server business to Lenovo in January, industry watchers wondered what impact that would have on storage because server sales often drive storage sales. It’s probably too early to blame the full drop in storage revenue on the server divestiture. Perhaps the more disturbing big-picture trend for IBM is that all of its major hardware platforms declined significantly last quarter.

CFO Martin Schroeter said IBM’s flash storage revenue grew, but high-end storage revenue fell substantially. That would be IBM’s DS8000 enterprise array series, which competes mainly with EMC’s VMAX and Hitachi Data Systems’ Virtual Storage Platform (VSP).

There has been speculation since the Lenovo server sale that IBM would divest its storage hardware business, but Big Blue isn’t throwing in the towel on storage yet. Schroeter said the vendor has taken actions to “right-size” the storage business to the market dynamics, which likely means cutting staff and product lines. IBM is expected to launch upgrades to its DS8000, Storwize and XIV platforms over the next few months, and has promised further developments to its FlashSystem all-flash array line.

“IBM will remain a leader in high-performance and high-end systems, in storage and in cognitive computing and we will continue to invest in R&D for advanced semiconductor technology,” Schroeter said.

IBM’s storage software was a different story. IBM said its Tivoli software revenue grew seven percent, with increases across storage, security and systems management. Security was the big gainer there with double-digit growth, which means storage software likely increased less than the overall seven percent. Still, compared to IBM’s storage hardware, Tivoli storage software is booming.


April 11, 2014  1:31 PM

Data’s growth spurt still gaining steam



Posted by: Dave Raffo
Storage

IDC released its annual EMC-sponsored report this week that tries to quantify and forecast the amount of digital data generated in the world. The report includes the usual facts, some fun and others scary, along with predictions and recommendations for IT people.

Facts
• From 2013 to 2020, the digital universe will grow from 4.4 zettabytes to 44 zettabytes created and copied annually. It more than doubles every two years, growing about 40% each year (the short calculation after this list checks these rates). A zettabyte is one billion terabytes.
• Enterprises were responsible for 85% of the digital universe in 2013, although two-thirds of the bits were created or captured by consumers or workers.
• Less than 20% of the digital universe had data protection in 2013, and less than 20% was stored or processed in a cloud. IDC predicts 40% of data will touch the cloud by 2020.
• The digital universe is growing faster than the storage available to hold it. In 2013, available storage capacity could hold only 33% of the digital universe and will be able to store less than 15% by 2020.
• Most of the digital universe is transient – for example, unsaved movie streams, temporary routing information in networks, or sensor signals discarded when no alarms go off.
• This year, the digital universe will equal 1.7 MB a minute for every person on earth.
• While the digital universe is doubling every two years in size, the number of IT professionals on the planet may never double again. The number of GB per IT professional will grow by a factor of eight between now and 2020.
• Mobile devices created 17 % of digital data in 2013, and that will rise to 27% by 2020.
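The headline growth figures are internally consistent, as a few lines of arithmetic confirm:

```python
# Checking IDC's figures: 4.4 ZB in 2013 growing to 44 ZB in 2020.

start_zb, end_zb, years = 4.4, 44.0, 7            # 2013 -> 2020
annual_growth = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied annual growth: {annual_growth:.0%}")   # ~39%, i.e. roughly 40%

two_year_multiple = (1 + annual_growth) ** 2
print(f"Two-year multiple: {two_year_multiple:.2f}x")  # ~1.93x, about doubling
```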

Recommendations for organizations:
• Designate a C-level position in charge of developing new digital business opportunities, either by creating a new position or upgrading the responsibilities of the CIO or another executive.
• Develop and continuously revise an executive-team understanding of the new digital landscape for your enterprise, and ask questions such as: Who are the new digital competitors? How are you going to cooperate with others in your industry to anticipate and thwart digital disruption? What are the short- and long-term steps you must take to ensure a smooth and timely digital transformation?
• Re-allocate resources across the business based on digital transformation priorities, invest in promising data collection and analysis areas, and identify the gaps in talent and skills required to deal with the influx of more data and new data types.


April 8, 2014  12:48 PM

Dell upgrades AppAssure; promises more data protection news



Posted by: Dave Raffo
Storage

Dell is planning to make a series of rollouts around its data protection products, beginning with today’s launch of AppAssure 5.4.

The AppAssure release is the first major release of the backup and replication product in 18 months. The new features focus mainly on replication. They include the ability to replicate among more than two sites, set different policies for on-site and off-site copies of the data, set replication schedules for each target and throttle bandwidth. AppAssure 5.4 also adds dynamic deduplication cache sizing, which lets users select dedupe cache sizes based on available memory, and a new GUI. A dedupe cache size can be set for a core (AppAssure server) and all the repositories on that core.
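Dell hasn’t published the sizing logic, but picking a dedupe cache from available memory is easy to sketch. A hypothetical policy, with made-up numbers, just to illustrate the idea:

```python
# Hypothetical dedupe-cache sizing sketch -- not AppAssure's actual
# policy. Takes a fraction of free memory, with a floor for small hosts
# and a ceiling so the core server isn't starved of RAM.

def dedupe_cache_gb(free_memory_gb: float,
                    fraction: float = 0.25,
                    floor_gb: float = 1.5,
                    ceiling_gb: float = 64.0) -> float:
    """Pick a dedupe cache size from available memory."""
    return max(floor_gb, min(free_memory_gb * fraction, ceiling_gb))

for mem_gb in (8, 64, 512):
    print(f"{mem_gb} GB free -> {dedupe_cache_gb(mem_gb):.1f} GB dedupe cache")
```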

AppAssure now also has a nightly mount check that verifies data can be recovered inside applications such as Microsoft Exchange and SQL.

While AppAssure was primarily an SMB product when Dell acquired it in Feb. 2012, many of the new features are designed for managed service providers.

Eric Endebrock, product management leader for Dell data protection software, said the AppAssure software has also moved upstream to small enterprises since Dell began selling it on DL4000 integrated appliances in late 2012. He said AppAssure installations were typically no more than a few terabytes before the appliance, but now Dell sees installations running in the 40 TB to 80 TB range.

“We still serve the middle market, but the AppAssure-based appliance has brought us into larger deals,” he said.

Endebrock said the AppAssure rollout is the first of several moves Dell will make with backup products. He said Dell will move to capacity-based pricing across all its data protection software and will offer backup software acquired from Quest and AppAssure in a suite within a few months.

“Since the beginning of last year we’ve been working on bringing all of our applications together and planning our strategy,” he said. “We’re now starting to announce a lot of that work.”

AppAssure 5.4 pricing starts at $1,199 per core.


April 7, 2014  11:23 AM

Veeam snaps in support for NetApp storage



Posted by: Dave Raffo
Storage

Veeam Software is months away from launching Backup & Replication 8 for virtual machine backup, but the vendor today revealed the upgrade will support NetApp storage arrays and data protection applications.

The integration means Veeam’s Backup & Replication Enterprise Plus customers can back up from storage snapshots on NetApp arrays, and all Backup & Replication customers can recover virtual machines, individual files and application items from NetApp production storage through Veeam’s Explorer for Storage Snapshots.

Doug Hazelman, Veeam’s VP of product strategy, said the VM backup specialist is integrated with NetApp’s primary storage as well as its Snapshot, SnapVault and SnapMirror applications.

“With backup from storage snapshots, we can initiate a snap on a primary storage array, back up on an array and send the snapshot into SnapVault,” Hazelman said. “We get application-consistent VMs. Now we’re application consistent on backups as well as SnapVault.”

Veeam is far from the first backup software vendor to support snapshots on NetApp arrays. CommVault, Symantec, Asigra and Catalogic are among those who support NetApp snapshots. Even EMC – NetApp’s chief storage rival – is adding support for snapshots on NetApp NAS in its new version of NetWorker software.

Veeam first supported array-based snapshots for Hewlett-Packard’s StoreVirtual and 3PAR StoreServ arrays in Backup & Replication 6.5 in 2012, with the promise to support more storage vendors. NetApp is the second storage vendor Veeam supports.

Hazelman said Veeam picks its array partners according to customer demand as well as “how easy it will be to work with that vendor.” He would not say which array vendor Veeam will support next.

Veeam’s Backup & Replication 8 is expected to be generally available in the second half of this year.


April 4, 2014  3:28 PM

LSI doubles the flash capacity on its Nytro cards



Posted by: Sonia Lelii
Storage

LSI Corp. introduced the latest model in its Nytro product family, the Nytro MegaRAID 8140-8e8i card, which accelerates application performance and provides RAID data protection for direct-attached storage (DAS) environments.

The LSI Nytro MegaRAID cards are part of the Nytro product portfolio of PCIe flash accelerator cards. The newest card doubles the capacity to 1.6 TB of usable onboard flash compared to the previous Nytro MegaRAID cards.

The Nytro MegaRAID 8140-8e8i card integrates an expander into the architecture to provide scale-out server environments with connectivity for up to 236 SAS and SATA devices through eight external and eight internal ports. The 16 SAS ports serve both hard disk drives and JBOD connectivity.

“We are seeing a lot of demand in scale-out DAS,” said Jason Pederson, senior product manager for Nytro solutions at LSI. “The demand we see so far is from a lot of Web hosting companies. The card will be available in the second quarter. We are in the final stages of testing.”

The earlier MegaRAID 8110 and 8120 cards support up to 128 devices and up to 800 GB of onboard flash.

The MegaRAID design is geared towards scale-out servers and high capacity storage environments. LSI first launched its Nytro Architecture product family in April 2012, combining PCIe flash technology and intelligent caching. LSI claims it has shipped more than 100,000 Nytro cards worldwide since introducing the products.

The card’s 1.6 TB of onboard flash for intelligent data caching allows servers, particularly in hyperscale environments such as cloud computing, Web hosting and big data analytics, to maximize application performance where data traffic is heavy.

The company also has introduced Nytro flexible flash, which lets the onboard flash be split between data store and cache: for example, 10 percent for data stores and 90 percent for cache, all of it for storage and none for cache, or all of it devoted to cache.
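That split is just a partition of the card’s onboard flash. A tiny illustrative sketch, not LSI’s configuration interface:

```python
# Illustrative 'flexible flash' split -- not LSI's configuration
# interface. Partitions the card's 1.6 TB of onboard flash between
# cache and data store.

FLASH_TB = 1.6

def split_flash(cache_fraction: float) -> tuple[float, float]:
    """Return (cache_tb, datastore_tb) for a given cache fraction."""
    assert 0.0 <= cache_fraction <= 1.0
    cache_tb = FLASH_TB * cache_fraction
    return cache_tb, FLASH_TB - cache_tb

for frac in (0.9, 0.0, 1.0):        # the three splits described above
    cache_tb, store_tb = split_flash(frac)
    print(f"cache {cache_tb:.2f} TB / data store {store_tb:.2f} TB")
```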

