Storage Soup


April 21, 2014  10:51 AM

Object storage as common denominator

Randy Kerns

Conversations at the recent National Association of Broadcasters (NAB) conference led me to the conclusion that object storage is becoming a common denominator between block and file storage within companies in this vertical market. I noticed a separation in the storage systems used for different business groups in a company.

That separation is happening because the groups have different storage requirements. Block storage has a variety of performance, capacity and resiliency needs. File storage, whether on block storage systems or on NAS systems, differs in scale, performance and economics. The businesses have evolved separately, and the accounting for storage expenses has never moved to a service model.

Broadcasters at the conference talked about using object storage to build a hybrid cloud or private cloud. The distinction between hybrid and private cloud was that hybrid clouds also include the use of public clouds.

The different use cases mirror the situation other industries faced before they deployed object storage systems. Broadcast and entertainment companies use object storage for content distribution, content repositories, and sharing data through file sync-and-share software along with high-performance file transfer software.

Ultimately, there were no real differences in the needs of the different groups. Their storage characteristics include massive scale in both capacity and the number of files stored. Object storage has the capabilities to address these needs, and it can be deployed as a common solution that delivers economies in both acquisition and operational costs. The object storage system could also be deployed as a service, charging users through a capacity-on-demand model. The economics overcame traditional parochialism.

This could be thought of as “technology as the unifier.” Not exactly, though, because there remains a need for “special usage” storage to satisfy other requirements. Block systems and NAS systems with certain characteristics are still required, and that is unlikely to change much. So it could be said that object storage is the common denominator for meeting new storage demands.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

April 21, 2014  6:46 AM

Splunk’s app for VMware deepens NetApp support

Sonia Lelii

Data analytics and security vendor Splunk made it easier to use its software with NetApp and VMware with the latest version of its Splunk App for VMware.

The San Francisco-based Splunk’s software collects data from applications, operating systems, servers and storage, and uses the data for operational intelligence.

The upgraded Splunk App for VMware provides an automated drill-down into data from the NetApp Data ONTAP operating system in VMware environments.

Splunk correlates and maps data across virtualization and storage tiers to handle storage latency and capacity problems.

Leena Joshi, Splunk’s senior director for solutions marketing, said Splunk singled out NetApp ONTAP because of the company’s open APIs and because “a lot of our customers have NetApp installations.”

“We already supported NetApp but what we have done is made the process automated,” Joshi said. “We just made it easier. We have taken advantage of (NetApp’s) open APIs to map VMDK file names to NetApp Data ONTAP.”

The app provides capabilities such as analytics for root-cause discovery, capacity planning and optimization, chargeback, outlier detection, troubleshooting, and security intelligence. It also helps forecast future CPU, memory and disk requirements for VMware vCenter and ESXi hosts.
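
As a toy illustration of that cross-tier correlation, the short Python sketch below maps per-VM latency samples onto the datastores behind them and flags the busy one. This is not Splunk's query language or data model; the VM names, datastore names and 20 ms threshold are all invented for the example.

    from collections import defaultdict
    from statistics import mean

    # Toy cross-tier correlation: roll per-VM latency samples up to the datastore
    # each VM lives on, then flag datastores whose average latency looks high.
    vm_to_datastore = {"vm-01": "ds_netapp_a", "vm-02": "ds_netapp_a", "vm-03": "ds_netapp_b"}
    latency_ms = [("vm-01", 41.0), ("vm-02", 38.5), ("vm-03", 4.2), ("vm-01", 44.8)]

    by_datastore = defaultdict(list)
    for vm, ms in latency_ms:
        by_datastore[vm_to_datastore[vm]].append(ms)

    for ds, samples in sorted(by_datastore.items()):
        flag = "  <-- investigate" if mean(samples) > 20 else ""
        print(f"{ds}: avg latency {mean(samples):.1f} ms{flag}")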


April 18, 2014  12:44 PM

IBM, VMware add options for cloud DR

Dave Raffo

The cloud has been a boon for disaster recovery, bringing the technology to smaller companies while allowing vendor newcomers such as Axcient, Zetta, Zerto and Quorum to make names for themselves.

But this week two large vendors rolled out cloud DR. VMware added Disaster Recovery to its vCloud Hybrid Service (vCHS), and IBM added its Virtual Server Recovery (VSR) DR service to its SoftLayer cloud.

VMware has had DR on its roadmap since it launched VMware vCloud Hybrid Service in late 2013. The vendor maintains five data centers in the U.S. and U.K. for the service.

Customers install a virtual appliance on-site and use VMware’s data centers to replicate and fail over VMDKs. VMware said it can deliver a 15-minute recovery point objective (RPO), and subscriptions start at $835 a month for 1 TB of storage. Customers pick which data center location they want to use. The service includes two free DR tests per year.

“We identified DR as one of the key canonical uses of the hybrid cloud,” said Angelos Kottas, director of product marketing for VMware’s Hybrid Cloud unit. He added there is a “pent-up demand for a public cloud service optimized by the hybrid cloud.”

IBM will make its three-year-old Virtual Server Recovery (VSR) service available on its SoftLayer cloud for the first time. IBM claims it can recover workloads running on Windows, Linux and AIX servers within minutes.

Carl Brooks, a 451 Research analyst, said VMware is playing catchup to Amazon and other cloud services while IBM is shifting its business model with the new DR services.

“IBM is doing this now with SoftLayer,” he said. “It shows that IBM is changing its business model to include the cloud rather than traditional data center infrastructure, which is anti-cloud. It’s still on the Big Blue environment, still using Tivoli management software, but now SoftLayer is driving it.

“It’s business as usual but better for IBM. For VMware, it’s a new frontier.”


April 17, 2014  10:02 AM

Poor storage sales give IBM the blues

Dave Raffo

IBM storage revenue declined for the 10th straight quarter, yet the results disclosed Wednesday night were hardly business as usual for Big Blue. IBM’s 23 percent year-over-year decline in storage revenue was much steeper than the usual drops in the six percent to 12 percent range.

When IBM sold its x86 server business to Lenovo in January, industry watchers wondered what impact that would have on storage because server sales often drive storage sales. It’s probably too early to blame the full drop in storage revenue on the server divestiture. Perhaps the more disturbing big-picture trend for IBM is that all of its major hardware platforms declined significantly last quarter.

CFO Martin Schroeter said IBM’s flash storage revenue grew, but high-end storage revenue fell substantially. That would be IBM’s DS8000 enterprise array series, which competes mainly with EMC’s VMAX and Hitachi Data Systems’ Virtual Storage Platform (VSP).

There has been speculation since the Lenovo server sale that IBM would divest its storage hardware business, but Big Blue isn’t throwing in the towel on storage yet. Schroeter said the vendor has taken actions to “right-size” the storage business to the market dynamics, which likely means cutting staff and product lines. IBM is expected to launch upgrades to its DS8000, Storwize and XIV platforms over the next few months, and has promised further developments to its FlashSystem all-flash array line.

“IBM will remain a leader in high-performance and high-end systems, in storage and in cognitive computing and we will continue to invest in R&D for advanced semiconductor technology,” Schroeter said.

IBM’s storage software was a different story. IBM said its Tivoli software revenue grew seven percent, with increases across storage, security and systems management. Security was the big gainer there with double-digit growth, which means storage software likely increased less than the overall seven percent. Still, compared to IBM’s storage hardware, Tivoli storage software is booming.


April 11, 2014  1:31 PM

Data’s growth spurt still gaining steam

Dave Raffo

IDC released its annual EMC-sponsored report this week that tries to quantify and forecast the amount of digital data generated in the world. The report includes the usual facts – some fun and others scary — along with predictions and recommendations for IT people.

Facts
• From 2013 to 2020, the digital universe will grow from 4.4 zettabytes to 44 zettabytes created and copied annually. It more than doubles every two years, growing roughly 40% each year (see the short calculation after this list). A zettabyte is one billion terabytes.
• Enterprises were responsible for 85% of the digital universe in 2013, although two-thirds of the bits were created or captured by consumers or workers.
• Less than 20% of the digital universe had data protection in 2013, and less than 20% was stored or processed in a cloud. IDC predicts 40% of data will touch the cloud by 2020.
• The digital universe is growing faster than the storage available to hold it. In 2013, available storage capacity could hold only 33% of the digital universe and will be able to store less than 15% by 2020.
• Most of the digital universe is transient – for example, unsaved movie streams, temporary routing information in networks, or sensor signals discarded when no alarms go off.
• This year, the digital universe will equal 1.7 MB a minute for every person on earth.
• While the digital universe is doubling every two years in size, the number of IT professionals on the planet may never double again. The number of GB per IT professional will grow by a factor of eight between now and 2020.
• Mobile devices created 17% of digital data in 2013, and that will rise to 27% by 2020.
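
The doubling and 40% figures follow directly from the 4.4 ZB and 44 ZB endpoints. Here is a minimal compound-growth check in Python; the inputs come from the list above, and the calculation is purely illustrative:

    import math

    # Back-of-the-envelope check of IDC's headline figures.
    start_zb, end_zb = 4.4, 44.0      # digital universe in 2013 and 2020, in zettabytes
    years = 2020 - 2013               # seven-year forecast window

    growth = (end_zb / start_zb) ** (1 / years) - 1
    doubling_time = math.log(2) / math.log(1 + growth)

    print(f"Implied annual growth: {growth:.0%}")               # ~39%, i.e. roughly 40% a year
    print(f"Implied doubling time: {doubling_time:.1f} years")  # ~2.1 years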

Recommendations for organizations:
• Designate a C-level position in charge of developing new digital business opportunities, either by creating a new position or upgrading the responsibilities of the CIO or another executive.
• Develop and continuously revise an executive-team understanding of the new digital landscape for your enterprise, asking questions such as: Who are the new digital competitors? How are you going to cooperate with others in your industry to anticipate and thwart digital disruption? What are the short- and long-term steps you must take to ensure a smooth and timely digital transformation?
• Re-allocate resources across the business based on digital transformation priorities, invest in promising data collection and analysis areas, and identify the gaps in talent and skills required to deal with the influx of more data and new data types.


April 8, 2014  12:48 PM

Dell upgrades AppAssure; promises more data protection news

Dave Raffo

Dell is planning to make a series of rollouts around its data protection products, beginning with today’s launch of AppAssure 5.4.

The AppAssure release is the first major release of the backup and replication product in 18 months. The new features focus mainly on replication. They include the ability to replicate among more than two sites, set different policies for on-site and off-site copies of the data, set replication schedules for each target, and throttle bandwidth. AppAssure 5.4 also adds a new GUI and dynamic deduplication cache sizing, which lets users select dedupe cache sizes based on available memory. A dedupe cache size can be set for a core (AppAssure server) and all the repositories on that core.
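
Dell has not published how AppAssure sizes its cache, so the following Python sketch is hypothetical: it only illustrates the general idea of deriving a dedupe cache size from available memory, and the fraction and bounds are invented for the example.

    import psutil  # third-party package, used here only to read available memory

    # Hypothetical illustration of sizing a dedupe cache from available RAM.
    # The fraction and bounds are arbitrary assumptions, not AppAssure's actual policy.
    MIN_CACHE_GB = 1.5
    MAX_CACHE_GB = 64.0
    CACHE_FRACTION = 0.25  # hand the cache a quarter of currently free memory

    def pick_dedupe_cache_gb() -> float:
        available_gb = psutil.virtual_memory().available / 1024 ** 3
        return round(min(MAX_CACHE_GB, max(MIN_CACHE_GB, available_gb * CACHE_FRACTION)), 1)

    print(f"Suggested dedupe cache size: {pick_dedupe_cache_gb()} GB")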

AppAssure now also has a nightly mount check that verifies data can be recovered inside applications such as Microsoft Exchange and SQL.

While AppAssure was primarily an SMB product when Dell acquired it in Feb. 2012, many of the new features are designed for managed service providers.

Eric Endebrock, product management leader for Dell data protection software, said the AppAssure software has also moved upstream to small enterprises since Dell began selling it on DL4000 integrated appliances in late 2012. He said AppAssure installations typically ran no more than a few terabytes before the appliance, but Dell now sees installations in the 40 TB to 80 TB range.

“We still serve the middle market, but the AppAssure-based appliance has brought us into larger deals,” he said.

Endebrock said the AppAssure rollout is the first of several moves Dell will make with backup products. He said Dell will move to capacity-based pricing across all its data protection software and will offer backup software acquired from Quest and AppAssure in a suite within a few months.

“Since the beginning of last year we’ve been working on bringing all of our applications together and planning our strategy,” he said. “We’re now starting to announce a lot of that work.”

AppAssure 5.4 pricing starts at $1,199 per core.


April 7, 2014  11:23 AM

Veeam snaps in support for NetApp storage

Dave Raffo

Veeam Software is months away from launching Backup & Replication 8 for virtual machine backup, but the vendor today revealed the upgrade will support NetApp storage arrays and data protection applications.

The integration means Veeam’s Backup & Replication Enterprise Plus customers can back up from storage snapshots on NetApp arrays, and all Backup & Replication customers can recover virtual machines, individual files and application items from NetApp production storage through Veeam’s Explorer for Storage Snapshots.

Doug Hazelman, Veeam’s VP of product strategy, said the VM backup specialist is integrated with NetApp’s primary storage as well as its Snapshot, SnapVault and SnapMirror applications.

“With backup from storage snapshots, we can initiate a snap on a primary storage array, back up on an array and send the snapshot into SnapVault,” Hazelman said. “We get application-consistent VMs. Now we’re application consistent on backups as well as SnapVault.”
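
As a rough illustration of the ordering Hazelman describes (snapshot the primary array, back up from that snapshot, then hand the snapshot to a vault target), here is a self-contained Python sketch. The classes and method names are placeholders, not Veeam's or NetApp's APIs.

    # Placeholder classes standing in for the primary array and a SnapVault-style target.
    class PrimaryArray:
        def __init__(self) -> None:
            self.snapshots = []

        def create_snapshot(self, name: str) -> str:
            self.snapshots.append(name)   # near-instant, array-side snapshot
            return name

    class VaultTarget:
        def __init__(self) -> None:
            self.copies = []

        def receive(self, snapshot: str) -> None:
            self.copies.append(snapshot)  # stands in for a SnapVault-style transfer

    def backup_from_storage_snapshot(array: PrimaryArray, vault: VaultTarget) -> str:
        # 1. Quiesce the VMs just long enough for an application-consistent state (omitted here).
        # 2. Take the snapshot on the array so the VMs can resume immediately.
        snap = array.create_snapshot("nightly-snap")
        # 3. Read backup data from the array snapshot rather than from the running VMs.
        backup_image = f"backup-of-{snap}"
        # 4. Forward the snapshot to the vaulting tier for longer-term retention.
        vault.receive(snap)
        return backup_image

    print(backup_from_storage_snapshot(PrimaryArray(), VaultTarget()))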

Veeam is far from the first backup software vendor to support snapshots on NetApp arrays. CommVault, Symantec, Asigra and Catalogic are among those that support NetApp snapshots. Even EMC, NetApp’s chief storage rival, is adding support for snapshots on NetApp NAS in the new version of its NetWorker software.

Veeam first supported array-based snapshots for Hewlett-Packard’s StoreVirtual and 3PAR StoreServ arrays in Backup & Replication 6.5 in 2012, with the promise to support more storage vendors. NetApp is the second storage vendor Veeam supports.

Hazelman said Veeam picks its array partners according to customer demand as well as “how easy it will be to work with that vendor.” He would not say which array vendor Veeam will support next.

Veeam’s Backup & Replication 8 is expected to be generally available in the second half of this year.


April 4, 2014  3:28 PM

LSI doubles the flash capacity on its Nytro cards

Sonia Lelii

LSI Corp. introduced the latest model in its Nytro product family, the Nytro MegaRAID 8140-8e8i, a card that accelerates application performance and provides RAID data protection for direct-attached storage (DAS) environments.

The LSI Nytro MegaRAID cards are part of the Nytro product portfolio of PCIe flash accelerator cards. The newest card doubles the capacity to 1.6 TB of usable onboard flash compared to the previous Nytro MegaRAID cards.

The Nytro MegaRAID 8140-8e8i card integrates an expander into the architecture to give scale-out server environments connectivity for up to 236 SAS and SATA devices through eight external and eight internal ports. The 16 SAS ports support both hard disk drives and JBOD connectivity.

“We are seeing a lot of demand in scale-out DAS,” said Jason Pederson, senior product manager for Nytro solutions at LSI. “The demand we see so far is for a lot of Web hosting companies. The card will be available in the second quarter. We are in the final stages of testing.”

The earlier MegaRAID 8110 and 8120 cards support up to 128 devices and 800 GB of onboard flash apiece.

The MegaRAID design is geared towards scale-out servers and high capacity storage environments. LSI first launched its Nytro Architecture product family in April 2012, combining PCIe flash technology and intelligent caching. LSI claims it has shipped more than 100,000 Nytro cards worldwide since introducing the products.

The card’s 1.6 TB of onboard flash for intelligent data caching allows server solutions, particularly in hyperscale environments such as cloud computing, Web hosting and big data analytics, to maximize application performance where data traffic is heavy.

The company has also introduced Nytro flexible flash, which lets the onboard flash be split between data stores and cache: for example, 10 percent for data stores and 90 percent for cache, all of the flash for storage and none for cache, or all of it used as cache.
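
For illustration only, the arithmetic of those splits on the new card's 1.6 TB of onboard flash works out as follows:

    # Illustrative arithmetic: how 1.6 TB of onboard flash divides under the three
    # flexible-flash configurations mentioned above.
    CARD_FLASH_TB = 1.6

    for store_pct, cache_pct in [(10, 90), (100, 0), (0, 100)]:
        print(f"data store: {CARD_FLASH_TB * store_pct / 100:.2f} TB, "
              f"cache: {CARD_FLASH_TB * cache_pct / 100:.2f} TB")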


April 2, 2014  8:04 AM

Open Source Storage hits the comeback trail

Dave Raffo

An early open-source storage player is back, seven years after going out of business mainly because it was ahead of its time.

Open Source Storage (OSS) re-launched in January with an undisclosed amount of funding from private investors and has since released two product lines.

OSS first launched in 2001 and was gone by 2007 despite landing a few big customers including Facebook.

“We were the guys who pioneered this,” said OSS CEO Eren Niazi, who was 23 when he founded the company. “We started the open source storage movement.”

He said the movement stalled because the storage ecosystem did not warm to OSS. “A lot of tier one vendors didn’t want to work with us and investors didn’t want to back us,” he said. “The business came to a halt.

“Seven years later, people say, ‘Open source storage, I get it, it’s exactly what I need.’”

Of course, OSS faces a lot more competition in 2014 than it did in 2007. Today there are open source storage options such as OpenStack, Hadoop, Ceph, and products built on ZFS. Still, adoption remains low as vendors such as Red Hat, SwiftStack, Cloudera, Nexenta, Inktank and now OSS are trying to break out.

OSS’s Open Cluster software can run on solid-state or SAS drives and supports block or file storage. Niazi said OSS has more than 30 customers, with others evaluating the software. He said his products are used mostly by enterprises “with large data demands and large deployments” that are trying to reduce their costs.

OSS products are based on its N1.618 Plug and Play Middleware and open-source software. Last month it brought out Open Cluster ZX, which scales to 1,024 nodes. Open Cluster ZX is built for virtual servers based on OpenStack as well as NAS, object storage and virtual machine-aware storage. OSS this week added its Open Cluster Cloud Series, designed for virtual servers, cloud-based services, high-performance computing and big data analytics. The cloud series comes in two-node and four-node bundles.


April 1, 2014  2:55 PM

Which storage features add value?

Randy Kerns

Looking at advanced features is always a critical step when reviewing storage systems because of the value these features bring. There are a large number of storage system-based features, and implementations vary among vendors. But taking a step back, it is interesting to examine where these features really belong. They were developed in storage systems to fill a need, and each feature could be applied at a single point regardless of the different hosts accessing the information.

To start a discussion about where the features really belong, let’s examine the more commonly used ones. This is not a call for change, because change is unlikely. It is a discussion that may aid in understanding how these features can help.

Encryption. Encryption should be done at the time an application creates or modifies data, before the data is transmitted out of the application’s control. This means the data would already be encrypted when it is transmitted over any network to a storage location. Access to the data from another server would require authentication and the encryption keys to decrypt it. The storage system is not the best place to encrypt, because applications can access the data without any encryption controls; encryption at the storage system mainly protects against physical theft of devices.
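
A minimal sketch of encrypting before data leaves the application, using the Python cryptography package (an assumption made only for this example); key management, which is the hard part, is deliberately omitted:

    from cryptography.fernet import Fernet  # pip install cryptography

    # Minimal sketch of encrypt-before-it-leaves-the-application.
    # In practice the key would come from a KMS, not be generated inline.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b"customer-order: 12345, total: 99.50"
    ciphertext = cipher.encrypt(record)      # encrypt while the data is still in the app

    with open("order.enc", "wb") as f:       # whatever lands on storage is already ciphertext
        f.write(ciphertext)

    # Any reader needs both access to the file and the key to recover the plaintext.
    with open("order.enc", "rb") as f:
        assert cipher.decrypt(f.read()) == record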

Data protection through backup. Decisions about backup should be made by the application owner or business unit. In most environments, IT makes a broad-based decision about data protection as a standard policy and applies it to data, usually on a volume basis. The actual value of the data and the corresponding protection requirements may not be known (or kept current) by IT. Data protection should be controlled and initiated at the application level. The same holds true for restoration.

Data protection for Disaster Recovery and Business Continuance. Recovering from a disaster or continuing operations during a disaster or interruption is a complex process that requires coordination between applications and storage systems. Storage systems replicate copies of data to other storage systems, and the process to fail over or recover is orchestrated according to a set of rules. If applications could make the copies themselves (simultaneous writes to more than one location) without impacting operations (performance, etc.), the storage system would still store the remote copy, but the recovery process and coordination would be handled by the application at a single point, with knowledge of the information requirements.
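
The simultaneous-writes idea can be sketched in a few lines of Python. This is a deliberate simplification that ignores the consistency, failure-handling and performance concerns the paragraph raises, and the directory names are placeholders:

    from pathlib import Path

    # Simplified illustration of application-driven copies to two locations.
    # Real implementations must handle partial failures, ordering and performance
    # impact; none of that is attempted here.
    PRIMARY = Path("primary_site")
    SECONDARY = Path("dr_site")

    def write_everywhere(name: str, payload: bytes) -> None:
        for site in (PRIMARY, SECONDARY):
            site.mkdir(exist_ok=True)
            (site / name).write_bytes(payload)  # the application writes both copies itself

    write_everywhere("journal.dat", b"application state at time T")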

Compliance management. This is a set of features that help meet different regulatory requirements. Each of those represents a challenge. Only a few storage systems do all of these:

  • Retention controls – these protect data from being deleted until a certain time, event, and/or approval. They can be enforced in software if the software is the only method of accessing the data. Because that may defeat the flexibility of how data is stored for archiving, the storage system may be the best location for this function, with software controlling the operation (a simple retention check is sketched after this list).
  • Immutability – protection from data alteration has been implemented in storage for some time.  With many sources of access, immutability at the storage system would still seem to be the best location.
  • Audit trail for access to data – requirements to track access to data can be done in the application if there is no way data could be accessed otherwise. Typically there may be other means, so having the storage system handle the audit trail is probably the best solution.
  • Validation of integrity – this requirement is a typical feature of storage systems anyway so continuing this in the storage system would be expected.
  • Secure deletion – this is implemented as a digital overwrite of the data on the storage device.  Because of device characteristic knowledge, this should continue to be part of the storage system.
  • Legal hold – this has the same issues as retention controls.
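
As noted in the retention controls item, here is a generic, time-based retention check in Python; the policy table and function are illustrative, not any vendor's implementation:

    from datetime import datetime, timezone
    from typing import Optional

    # Generic sketch of a time-based retention check. The policy entries are invented
    # for the example; real systems also handle event-based and approval-based holds.
    RETENTION = {"finance/ledger-2013.csv": datetime(2021, 1, 1, tzinfo=timezone.utc)}

    def delete_allowed(path: str, now: Optional[datetime] = None) -> bool:
        now = now or datetime.now(timezone.utc)
        hold_until = RETENTION.get(path)
        return hold_until is None or now >= hold_until  # refuse deletes during the hold

    print(delete_allowed("finance/ledger-2013.csv",
                         now=datetime(2014, 4, 1, tzinfo=timezone.utc)))  # False: under hold
    print(delete_allowed("finance/ledger-2013.csv",
                         now=datetime(2022, 1, 1, tzinfo=timezone.utc)))  # True: hold expired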

It’s unlikely that we’ll see any substantial change in the way things work regarding storage features in existing IT environments. New implementations for private clouds may do things differently, which means applications would have different requirements. It will be interesting to see what develops as private clouds are deployed within traditional IT environments.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

