Storage Soup


July 13, 2016  1:07 PM

Rubrik, Pure Storage converge for performance

Dave Raffo

Rubrik, a startup selling converged data management, is trying to piggyback on the success of all-flash arrays through a partnership with Pure Storage.

Rubrik’s early focus is on data protection, although it has designs on primary data eventually. It is pushing its r300 and r500 Series appliances with Pure Storage’s FlashArray//m platform. The vendors claim common customers are seeing ingest rates of more than 250 TB per hour from Pure arrays per Rubrik appliance.

Rubrik CEO Bipul Sinha said the partnership with Pure includes collaboration between their sales teams, plus co-engineering that allows Rubrik appliances to read data directly from Pure SANs. That’s a unique feature for Rubrik, which connects to other storage systems through VMware APIs. Common customers can manage Rubrik and Pure systems through the same browser.

Both systems dedupe data – Pure handling primary data and Rubrik backup data.

“The systems complement each other,” Sinha said. “The biggest benefit is the faster ingest. We make sure we extract data from the primary array without hurting performance of the primary data. We have seen dramatic throughput from data coming from Pure to Rubrik.”

Sinha said Rubrik will have a similar partnership with Pure’s FlashBlade when the unstructured data system ships.

Pure and Rubrik name ExponentHR, Castilleja School, Red Hawk Casino, Phreesia, and Wabash as customers using Pure flash arrays and Rubrik appliances for data protection.

With Cisco Live running this week, there were other storage partnerships in the news. Backup software vendor Commvault launched reference architectures combining its data protection software with Cisco UCS servers and added support for Cisco HyperFlex hyper-converged appliances. Commvault software will detect and classify new virtual machines on HyperFlex, and also load balance across backup servers.

Flash array vendor Tegile Systems expanded its IntelliStack converged infrastructure with Cisco UCS. Tegile customers can use IntelliStack software to orchestrate and automate management through Cisco UCS Director. UCS Director manages compute, network, virtualization, storage and public clouds connected to UCS.

July 8, 2016  1:11 PM

Backblaze B2 takes on Amazon, Google and Azure

Sonia Lelii

Cloud provider Backblaze has gone GA with B2 Cloud Storage, a cloud service that aims to compete with heavy hitters such as Amazon, Google and Microsoft Azure at a lower cost.

Backblaze B2 Cloud Storage, which went into private beta in September 2015 and public beta that December, is an object storage service in the cloud. It provides cloud storage for backup, disaster recovery, replication and copying data from tape, and it can work with on-premises or public cloud infrastructure.

“If you want compute or database and you use Amazon S3 storage, then you want to stay with Amazon,” Backblaze CEO Gleb Budman said. “But if you don’t need all the other things that Amazon offers and you care about cost then we will provide the storage. B2 is four to six times lower in cost than Amazon S3.”

Backblaze came out of stealth seven years ago with its Backblaze Storage Pod, the high-density, low-cost hardware behind an online backup service that offered unlimited storage at $5 a month. It was marketed as online backup for consumers and businesses with no more than a few employees.

The company builds its own systems for its cloud, called Storage Pods, 4U servers that each contain 67 TB of capacity. Backblaze is now making its cloud service available through a web interface, a command-line interface or an API that can be integrated with applications.
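
The B2 native API is a simple HTTPS/JSON interface. Below is a minimal sketch of the three-call upload flow as B2 documented it at the time (authorize, get an upload URL, upload); the account ID, application key, bucket ID and file name are placeholders.

```python
import base64
import hashlib
import requests

ACCOUNT_ID, APP_KEY, BUCKET_ID = "account", "key", "bucket"  # placeholders

# 1. Authorize: a basic-auth call returns an API base URL and a session token.
creds = base64.b64encode(f"{ACCOUNT_ID}:{APP_KEY}".encode()).decode()
auth = requests.get(
    "https://api.backblazeb2.com/b2api/v1/b2_authorize_account",
    headers={"Authorization": f"Basic {creds}"}).json()

# 2. Ask for an upload URL for the target bucket.
up = requests.post(auth["apiUrl"] + "/b2api/v1/b2_get_upload_url",
                   headers={"Authorization": auth["authorizationToken"]},
                   json={"bucketId": BUCKET_ID}).json()

# 3. Upload: B2 wants the file name and a SHA-1 checksum in the headers.
data = open("backup.tar", "rb").read()
resp = requests.post(up["uploadUrl"], data=data, headers={
    "Authorization": up["authorizationToken"],
    "X-Bz-File-Name": "backup.tar",
    "Content-Type": "b2/x-auto",
    "X-Bz-Content-Sha1": hashlib.sha1(data).hexdigest()})
print(resp.json()["fileId"])
```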

Backblaze partners Synology, CloudBerry Lab and Ortana Media Group use B2 as a cloud target. Synology’s network-attached storage (NAS) users can sync folders between Backblaze B2 Cloud Storage and the Synology NAS system. After installing the 2.1.0 Cloud Sync package, customers can select Backblaze B2 as their cloud destination.

CloudBerry Lab, which provides cloud-based backup and file management services to small and medium-sized businesses, has integrated Backblaze B2 with CloudBerry Backup for Windows Server. Support for B2 in CloudBerry Explorer and CloudBerry Managed Backup Service is on the road map.

Ortana Media Group’s Cubix is a scalable software platform for controlling media workflow, such as post production, broadcast and archiving. Backblaze B2 is integrated in Cubix as a storage destination for a customer’s digital assets. Cubix defines storage rules and can automatically route the assets to the appropriate storage media and location.


July 1, 2016  8:12 AM

EMC to Nexsan: We said ‘Unity’ first

Carol Sliwa

In the latest shot fired in the David-and-Goliath dispute between Nexsan Technologies and EMC over the Unity trademark, EMC claimed it began using the Unity brand name in 2014. The first scheduled court appearance for the lawsuit is less than two weeks away.

Nexsan filed a complaint on May 6 in the United States District Court in Boston, claiming it has priority to the Unity trademark. EMC submitted counterclaims on June 20 alleging trademark infringement and unfair competition. The vendors are scheduled to make their first court appearance on July 14 before U.S. District Court Judge William Young.

The battle started in the spring, after Nexsan and EMC executed major product launches for their respective Unity products. Nexsan’s press release hit the wires on April 26 for its Unity storage product, which combines NAS, SAN and enterprise private-cloud file system synchronization. Nexsan had submitted applications for the terms “Unity” and “Nexsan Unity” to the U.S. Patent and Trademark Office (USPTO) on March 22, 2016.

EMC filed its applications for the “Unity” and “EMC Unity” trademarks with the USPTO on April 29 – the same day it sent a letter to Nexsan reserving the “right to commence legal action” if Nexsan didn’t stop using the product name Unity and withdraw its trademark applications. EMC launched its Unity mid-range array, combining block and file storage, on May 2 at EMC World 2016.

In its June 20 court filing, EMC claimed it started using the Unity and EMC Unity trademarks publicly more than two years ago in connection with an extension of its VNX storage system. Examples cited in the court document include:

–May 5, 2014: Blog post by Chad Sakac at http://virtualgeek.typepad.com/virtual_geek/2014/05/vnx-architectural-evolution-keeps-rolling-vnxe-3200-project-liberty.html

–March 19, 2015: Unity product presentation to a customer and large reseller. At least three additional customer presentations took place in March 2015, according to EMC.

–Dec. 14, 2015 to March 14, 2016: EMC provided 21 partners and customers with versions of the Unity product, “featuring the UNITY marks.” Several of those original beta testers subsequently purchased the products, according to EMC.

EMC also claimed that “at least as early as” May 2014 it “has spent a significant amount of money to advertise” the Unity products. As a result, “purchasers and potential purchasers of data management software immediately associate the distinctive” Unity trademarks with EMC, according to the company’s court filing.

So, if EMC had been using the name Unity for so long, why did the company wait until April 29 to file its trademark application with the USPTO?

EMC declined comment for this blog post, citing a policy not to comment on pending litigation.

Nexsan also declined comment yesterday on EMC’s counterclaims. But, in a May conversation, Nexsan CEO Bob Fernander said he had no knowledge of any 2015 EMC customer presentations in which the term “Unity” may have been used.

“You’ve got to do it publicly,” Fernander said at the time. “And the definition of public is kind of common sense, we think. And your commerce becomes another next step in the process of making it publicly known that you’re using a mark. So we’re scratching our heads on that one.”

Fernander said he first learned of the EMC Unity product when he received a call from a reseller in May, asking him if he knew that EMC had just launched a product bearing the same name as Nexsan’s product.

Lisa Tittemore, an attorney at Boston-based Sunstein Kann Murphy & Timbers LLP, the firm that filed Nexsan’s federal court complaint, said in May, “Priority is based on who uses or files the mark first, so you can have rights based on filing first or based on using first. That’s literally what this lawsuit is about.”

Through its court filing, EMC claimed it exercised valid trademark rights and had priority over Nexsan since its “adoption, promotion, beta testing and offering for sale” of its Unity products pre-dated Nexsan’s trademark application filing dates or “any other dates upon which Nexsan could rely.”

EMC claimed, via the document, that Nexsan’s Unity products were not generally available until June 2016 at the earliest and that Nexsan had not indicated it had closed any sales for the Unity system.


June 30, 2016  8:06 PM

Red Hat Storage VP sees different uses for Ceph, Gluster

Carol Sliwa

Red Hat Storage showed off updates to its Ceph and Gluster software and laid out its strategy for working with containers at this week’s Red Hat Summit in San Francisco.

We caught up with Ranga Rangachari, vice president and general manager of Red Hat Storage, to discuss the latest product releases, industry trends and the company’s future storage direction. Interview excerpts follow.

Which of the Red Hat storage products – Gluster or Ceph – is seeing greater adoption?

Rangachari: Both of them are. It’s very much a workload-driven conversation. Ceph is part and parcel to the OpenStack story. In the community, [the majority] of the OpenStack implementations were using Ceph as the storage substrate. Gluster is classic file serving – second-tier storage, deep archiving. If I were to take a look at the big picture, it’s right down the middle. Some days, you might have a lot more OpenStack conversations. Other days, you might have a lot more archiving, file services or virtualization conversations.

Red Hat just unveiled a technology preview of the Ceph file system (CephFS). Why does Red Hat need another file-based storage option?

Rangachari: The tech preview is focused on OpenStack. The advantage of this approach is customers can now use Ceph for file, object and block for OpenStack. Ceph has good penetration within the OpenStack market for block storage. We expanded it to object, and file is the third leg to the stool.
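
Ceph’s object interface is exposed through the S3-compatible RADOS Gateway, so any stock S3 client can exercise it. A minimal sketch, assuming a gateway at a placeholder endpoint with placeholder credentials:

```python
import boto3

# Point a standard S3 client at the RADOS Gateway instead of AWS.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",  # assumed RGW endpoint
    aws_access_key_id="RGW_ACCESS_KEY",          # placeholder credentials
    aws_secret_access_key="RGW_SECRET_KEY")

s3.create_bucket(Bucket="openstack-backups")
s3.put_object(Bucket="openstack-backups", Key="glance/image-01",
              Body=b"image bits go here")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```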

Do you envision CephFS in use only in conjunction with OpenStack?

Rangachari: It’s tough to predict, but for the foreseeable future, it’s going to be mainly focused on OpenStack. File systems need a lot of time from a testing and a maturity standpoint before we throw it out and say, ‘Start using it for general-purpose workloads.’ . . . We have not yet formulated any detailed plans around what else we could do with CephFS beyond OpenStack.

What’s the future direction for Ceph and Gluster?

Rangachari: One area that we are focused on is the ability to manage our portfolio through a single management plane. The other area is interfacing and integrating with leading ISV applications, especially in the object storage space. The first wave of our ecosystem was around the hardware vendors, whether it’s server vendors or SSDs and those types of things.

With all the disruption in the storage industry, which events are having the greatest impact on Red Hat storage strategy?

Rangachari: One is flash SSDs [solid-state drives]. One of the biggest holdbacks a year ago was that the cost per TB was pretty expensive when it comes to SSDs and flash. But now I think Moore’s law in a way is taking shape. You’re seeing the processing and the capacity increase and the price dramatically drop. That’s one area that we are paying very close attention to, and the SanDisk [InfiniFlash] announcement was the first step in that direction.

The other thing that we are seeing is containers. In the conversations that we are having with customers, that’s becoming the next wave in infrastructure and the next wave in how applications are developed and delivered.


June 30, 2016  3:02 PM

Turnaround specialist Walsh takes over IBM storage

Dave Raffo

IBM is bringing in industry veteran Ed Walsh to try to light a fire under its struggling storage division.

Walsh will take over as general manager of IBM Storage and Software Defined Infrastructure on July 11. He joins IBM from Catalogic Software, where he was CEO since 2014.

Walsh worked for IBM Storage from 2010-13 and has been CEO of four storage startups. Walsh became CEO of Catalogic nine months after it spun out of Syncsort. Under Walsh, Catalogic has broadened the storage arrays it supports – adding support for IBM and EMC systems to go with its original NetApp support.

An email from an IBM spokesperson referred to Walsh as a “change agent” and noted his “ability to drive transformation and lead teams to embrace a new direction.” Walsh is expected to try to rally IBM’s storage business around its FlashSystem all-flash platform, Spectrum storage software and its Cleversafe object storage acquisition.

Walsh faces a different type of challenge at IBM than he is used to. He is considered a turnaround specialist for startups, and his tenure usually ends with a sale to a larger vendor. At IBM he will be tasked with waking a sleeping giant, and he will more likely be buying companies than selling his own.

IBM’s storage revenue has declined in each of the last four years, dropping from $3.7 billion in 2011 to $2.4 billion in 2015. Its storage hardware revenue was $433.5 million in the first quarter of this year, down 6% year over year.

According to IDC, IBM stood fifth behind EMC, NetApp, Hewlett Packard Enterprise and Hitachi in networked storage sales in the first quarter with 7.9% market share. IBM’s full year 2015 market share was 10%, according to IDC.

IBM has fared better in the all-flash market, ranking second by IDC for 2015. However, IDC put IBM’s first quarter all-flash revenue at $67.4 million, up 54% in a market that grew 87.4%. IBM ranked fifth in all-flash revenue for the first quarter behind EMC, NetApp, Pure Storage and HPE on IDC’s list.

Walsh was IBM’s vice president of marketing and strategy for storage after selling primary data compression vendor Storwize to Big Blue in 2010. Walsh was also CEO of data deduplication pioneer Avamar from 2005 until selling the company to EMC in November 2006. He stayed with EMC to run the Avamar division until February 2007. He was CEO of server virtualization startup Virtual Iron from 2009 until selling that company to Oracle in 2010.

Walsh was also VP of sales, marketing and alliance for Fibre Channel switch vendor CNT Technologies from 2001 to 2005.

Walsh replaces Greg Lotko, who held the GM job on an interim basis and will become vice president of development for IBM Storage.

Catalogic today named Ken Barth as its CEO to replace Walsh. Barth has been on the Catalogic board since the 2013 spinout. He was CEO of storage resource management vendor TekTools from 1996 until SolarWinds acquired the company in 2010.


June 30, 2016  2:59 PM

Webscale Networks airs out multi-cloud DR

Paul Crocetti

Webscale Networks is well aware that its e-commerce customers can’t tolerate much downtime.

To that end, the website infrastructure provider recently expanded its capabilities to build a multi-cloud disaster recovery service. Webscale Networks lets customers implement backup instances of their cloud deployments in separate regions or with separate cloud providers. The provider offers a service-level agreement (SLA) that guarantees customers will have their site running at an alternate location within 60 minutes, with no more than 15 minutes of data loss.

“Customers were interested in DR that was cross-cloud,” said Jay Smith, CTO and founder of Webscale.

Smith said Webscale’s SLA is conservative in its stated recovery time objectives and recovery point objectives, and he sees them decreasing over time.

“We’re able to meet SLAs with room to spare,” Smith said.

Webscale Networks, based in Mountain View, Calif., began operations in 2012 under the name Lagrange Systems. It now claims over 40 customers, mainly mid-market e-commerce companies.

The Webscale Multi-Cloud DR service allows customers to fail over to another cloud provider, with minimal data loss and minimal outage time, said CEO Sonal Puri. In addition, if downtime occurs, Webscale can automatically fail over to a scheduled alternate region.

Webscale Multi-Cloud DR provides two options for disaster recovery — Webscale Cloud Backup and Webscale Cloud Mirror.

With Webscale Cloud Backup, customers can make a copy of their entire back end — the application and data server — periodically. For e-commerce applications that require more frequent backup, Webscale Cloud Mirror allows customers to keep a near real-time replica of their back end in an alternate location. Webscale Cloud Mirror also ensures that application delivery controllers remain consistently available regardless of the status of either the data server or application layer, according to the company.
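
Webscale has not published how its failover works, but the shape of such a service is easy to sketch: probe the primary site, and after several consecutive failed probes, promote the standby in the alternate region or cloud. The URL, threshold and helper below are illustrative only.

```python
import time
import requests

PRIMARY = "https://shop.example.com/health"  # hypothetical health endpoint
FAILURE_LIMIT = 3                            # consecutive misses before failover

def fail_over_to_alternate():
    # A real service would repoint DNS or a load balancer at the standby
    # back end kept current by periodic backup or near-real-time mirroring.
    print("promoting alternate region")

misses = 0
while True:
    try:
        ok = requests.get(PRIMARY, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    misses = 0 if ok else misses + 1
    if misses >= FAILURE_LIMIT:
        fail_over_to_alternate()
        break
    time.sleep(60)  # one probe a minute keeps detection well inside a 60-minute RTO
```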

Webscale Multi-Cloud DR, with Cloud Backup, is included with the Webscale Pro and Enterprise platforms. Webscale Multi-Cloud DR, with Cloud Mirror, is included in Webscale Enterprise or available as an additional service for Webscale Pro.

Puri said much of the Webscale Networks value lies in its back-end DR services across cloud providers and regions.

“We are unique because of the back end,” she said.


June 29, 2016  2:40 PM

Veeam changes CEO; founder steps aside

Dave Raffo

Veeam Software, which went from a niche virtual machine backup software vendor to an industry leader in less than a decade, changed its top leadership team Tuesday. The changes come as Veeam prepares to make a stronger run at enterprise sales and helping customers move to the cloud.

Ratmir Timashev stepped down as CEO, and co-founder Andrei Baronov shifted from VP of software engineering into the new CTO post. Both will help Veeam with market strategy and product development. Veeam veteran William Largent was promoted to CEO, and the vendor added former VMware executive Peter McKay, who will run day-to-day operations as president and COO.

Veeam chief marketing officer Peter Ruchatz said the vendor had close to $500 million in billings last year and the transition is designed to help it reach its goal of $1 billion annual billings by 2019.

“We’re constantly thinking about how we can take the business to the next level,” he said. “We have a couple of things coming together now. Over the past 12 to 18 months, we’ve pursued opportunities beyond what Veeam’s business was in the beginning, which was the SMB market. Now we’re focused on availability for the enterprise.

“The changes we’ve made are starting to come to fruition and they will take Veeam to the next growth level. Those new opportunities also bring complexity, so we decided we should bring on external management.”

Largent joined Veeam in 2008 as executive vice president. He previously worked with Timashev and Baronov at Aelita Software, which Quest Software acquired in 2006. He has also been CEO of Applied Innovation. Largent will move from Veeam’s Columbus, Ohio, office to its global headquarters in Baar, Switzerland.

McKay comes to Veeam from VMware, where he was senior vice president and general manager of the Americas. He was CEO of startups Desktone, Watchfire and eCredit – all acquired by larger companies – before joining VMware.

Ruchatz said McKay will run the day-to-day business. “He has experience in large corporations,” Ruchatz said. “He also knows how startups work. He knows how to scale and where we need to be.”

Ruchatz said Timashev will help plan Veeam’s strategic moves, and Baronov will remain involved in product direction.

Veeam has sold into the enterprise for the past year or so, but mostly to departments inside large companies. Ruchatz said the vendor is ramping up its sales team to go after larger footprints inside enterprises. It is planning an August product launch that expands its availability platform. Cloud connectivity will play a large role in the new product, which will include disaster recovery orchestration. Veeam is also expected to add deeper integration with public clouds such as Amazon and Microsoft Azure. The changes will include more subscription-based pricing.

Veeam cracked the Gartner Magic Quadrant leaders category for data center backup and recovery software this year for the first time. Gartner listed Veeam as a leader along with Commvault, IBM, Veritas Technologies and EMC.

Veeam, a private company, claims its bookings revenue grew 24% year-over-year for the first quarter of 2016, including a 75% growth in deals above $500,000. Veeam claims an average of 3,500 new customers each month and said it finished March with 193,000 customers worldwide.

Newcomer McKay previously served as an executive-in-residence for Insight Venture Partners, which has a minority holding in Veeam. However, Ruchatz said Veeam has no plans to seek venture funding or become a public company.

“Nothing changes on the investment side,” Ruchatz said. “We enjoy being a private company and have the flexibility to make big moves. We’re running a profitable company and the market knows it. We don’t need further funding. In fact, we have enough to start looking at making potential acquisitions.”


June 28, 2016  6:18 PM

Zerto disaster recovery products get boost with $20M investment

Paul Crocetti

With a new round of $20 million in funding, business continuity/disaster recovery software vendor Zerto plans to double its engineering force by the end of the year to accelerate product releases.

“It extends what we can do and how long we can continue to be as aggressive as we are,” said Rob Strechay, Zerto’s vice president of product.

The Series E1 financing, led by Charles River Ventures (CRV), follows the $50 million Series E round headed by Institutional Venture Partners that was announced in January. The vendor has raised $130 million in total financing.

Strechay joined Zerto from Hewlett Packard Enterprise after the January funding round. He said Zerto’s engineering head count will be close to 160 by the end of 2016. He anticipates two product releases in 2017 that will extend Azure, failback and cloud capabilities.

At the end of May, Zerto detailed its next Zerto Virtual Replication release, code-named “Athena.” That product is due late in 2016. It will include support for replication into the Azure cloud. Zerto also unveiled a mobile application for monitoring BC/DR.

Zerto has been expanding internationally. Last week, Zerto opened an office in Singapore, with six employees in support, sales and marketing, Strechay said. The company also has support services in its Boston and Israel offices, meaning it now offers support across the globe.

Zerto is expanding its office outside London in the United Kingdom, the base of its European operations. Strechay said he expects no immediate impact from the Brexit vote, but the vendor may need to revisit pricing following the drop in the value of the pound.

Zerto is also looking to accelerate its sales and marketing in Asia and the Pacific with the new funding.

Strechay said CRV approached Zerto to extend the Series E funding and provided the vast majority of this round.

“[CRV general partner Murat Bicer] understood the value proposition, the leadership the company was taking,” Strechay said. “He really wanted in.”

Zerto claims four consecutive years of at least 100% sales growth.


June 28, 2016  2:30 PM

EMC pitches isolated data recovery to thwart cyber attacks

Dave Raffo

Who’s protecting the data in your data protection storage? That’s a question EMC wants you to think about as the scope of security threats increases.

EMC recommends – and has customers using – an isolated data center disconnected from the network to keep out threats such as ransomware and other types of cyber attacks. This involves locking down systems used for recovery and limiting exposure to create an air gap between the recovery zone and production systems.

An air-gapped device never has an active unsecured connection. EMC’s isolated data recovery makes the recovery target inaccessible from the network and restricted to users who are cleared to access it. In most cases, the target is a Data Domain disk backup system that is off the grid most of the time.

EMC’s isolated recovery includes VMAX storage arrays, Data Domain disk backup and RecoverPoint or Vplex software. A Fibre Channel connection between the VMAX and Data Domain ports is recommended.

The air gap is created by closing ports when they are not in use and limiting the open ports to those needed to replicate data. VMAX SRDF creates a crash-consistent copy of the production environment, and its SymACL access control is used to restrict access and prevent remote commands from being executed from production arrays.

RecoverPoint and Vplex can be used with EMC XtremIO and VNX arrays to handle replication and provide crash consistent copies.

The process allows companies to keep a secure, isolated gold copy. When a new gold copy is replicated, analytics run to compare it to the most recently copied version. If this validation process reveals corruption in the new version, an alert goes out and an emergency script is triggered to invalidate the replication and lock down the isolated recovery system. A known-good gold copy can then be restored to a recovery host in the isolated recovery area.
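
EMC has not published its validation analytics, but the idea can be sketched simply: fingerprint the files in the new gold copy, compare them against the manifest saved with the previous copy, and lock down if too much has changed. The paths, the threshold and the alert_and_lock_down helper below are hypothetical.

```python
import hashlib
import json
import pathlib

def manifest(root):
    # SHA-256 fingerprint for every file under the gold copy root.
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in pathlib.Path(root).rglob("*") if p.is_file()}

previous = json.load(open("gold_manifest.json"))
current = manifest("/isolated/gold_copy")

# Files that existed before but now hash differently; mass churn in data
# that should be stable is a signature of corruption or ransomware.
changed = [f for f, digest in current.items()
           if f in previous and previous[f] != digest]

if len(changed) > 0.2 * len(previous):   # arbitrary example threshold
    alert_and_lock_down(changed)         # hypothetical helper: alert, close ports
else:
    json.dump(current, open("gold_manifest.json", "w"))
```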

“We think we’re ahead of the curve here,” said Alex Almeida, manager of EMC’s data protection technical marketing.

He said the key to the air gap process is “traffic cannot reach that isolated system from outside. We can shut down ports to that system.”

Almeida said EMC built its first isolated recovery network at the request of the CIO of “a well-known consumer brand.” The storage vendor has since received requests from other companies, mainly in the healthcare and financial services industries.

“We have sold dozens of these things,” he said.

EMC had been quiet about its air gapping process until now, but went public with it today when it released the EMC Global Data Protection Index 2016, research that included scary numbers about the frequency of data loss from a survey of 2,200 IT decision-makers.

Those numbers include:

  • 36% of businesses surveyed have lost data as the result of an external or internal security breach.
  • Fewer than 50% of organizations are protecting cloud data against corruption or against deletion. Many incorrectly believe their cloud provider protects data for them.
  • 73% admitted that they were not confident their data protection systems will be able to keep pace with the faster performance and new capabilities of flash storage.
  • Only 18% said they were confident that their data protection solutions will meet their future business challenges.

User error and product malfunctions have always been a problem and cyber theft and denial of service attacks have been around for years. But newer tactics such as cyber extortion and cyber destruction through use of ransomware and other means are looming as expensive threats to large companies.

“Data protection now requires a business to defend backup copies against malicious attack,” said Chris Ratcliffe, senior vice president of marketing for EMC’s Core Technologies Division. “It’s no longer good enough to have storage as a last resort. You need a solution to protect your storage as a last resort.”


June 28, 2016  9:52 AM

How to measure flash storage’s true value

Randy Kerns

Flash storage – or, to use the broader term, solid-state storage – suffers from an inadequate measure of value. Flash storage provides a step-function improvement in the ability to store and retrieve information, but the value of processing with flash storage compared to access from electro-mechanical devices is not easy to express.

Many in the industry still use a “data at rest” measure, which is the cost of storing data. That fails to represent more valuable characteristics such as access time and longevity. The data at rest measure, given in dollars per GB, can be misleading and does not convey real economic value. If that were the only measure for information storage, you would use magnetic tape for everything, because it is the least expensive medium.

Some vendors also use a dollars-per-IOPS measure for all-flash storage systems. This measure does not represent the value of what flash can accomplish because it is an aggregate number: it represents the total number of I/Os a system can do, which could also be achieved with thousands of short-stroked disk drives. It does not directly reflect the improvement in response time, which is the most meaningful measure in accelerating applications and getting more work done.
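
A back-of-the-envelope illustration of that point, using assumed round numbers rather than real pricing: two systems can show the same dollars per IOPS while differing by an order of magnitude on the response time users actually feel.

```python
# Hypothetical systems: ~1,000 short-stroked drives vs. one all-flash array.
disk_array = {"price": 500_000, "iops": 200_000, "latency_ms": 10}
flash_array = {"price": 625_000, "iops": 250_000, "latency_ms": 0.5}

for name, s in (("disk", disk_array), ("flash", flash_array)):
    print(f'{name}: ${s["price"] / s["iops"]:.2f}/IOPS at {s["latency_ms"]} ms')
# Both come out to $2.50/IOPS, yet the flash array services each I/O
# 20x faster, which is the improvement that actually accelerates applications.
```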

So if these measures are inadequate, what is the best way to gauge the value of flash storage? It actually varies depending on the case. Flash can provide key improvements, including consolidation, acceleration, reduction in physical space/power/cooling, longevity, and reduced tuning. Let’s look at these:

  • Consolidation – The greater performance levels of flash storage allow for the deployment of more diverse workloads on a single system. With larger capacity flash storage systems, workloads running on multiple spinning disk systems can be consolidated to a single flash storage system. The value of consolidation includes a reduction of the number of systems to manage and the physical space required.
  • Acceleration – The first deployments of flash systems focused on accelerating applications (mostly databases), and virtual machine or desktop environments. Acceleration enabled more transactions and improvements in the number of VMs and desktops supported. The successes here drove the shift to more widespread use of solid-state storage technology.
  • Physical space – Flash technology increases the capacity per chip and requires less physical space. Even flash packaged in solid-state drives has eclipsed the capacity points of hard disk drives. With flash storage, more information can be contained in a given physical space than was previously possible, and technology gains are still improving in this area. This is important for most organizations, where information storage represents a large physical presence.
  • Power and cooling – Storage devices using flash technology consume less power and generate less heat (requiring less cooling) than devices with motors and actuators. There is an obvious reduction in cost from this improvement. But this becomes more important when physical plant limitations prevent bringing in more power and cooling to the data center.
  • Longevity – Probably the least understood added value from flash storage is the greater longevity in usage for the flash devices and the economic impact that brings. The reliability and wear characteristics are different from electro-mechanical devices, and have reached a point where vendors are giving seven- and 10-year guarantees and even some lifetime warranties with ongoing support contracts. This dramatically changes the economics from the standpoint of total cost of ownership over the long lifespan. The key driver is the disaggregation of the storage controller or server from the flash storage enclosures, which allows controllers to be updated independently. This has led to some “evergreen” offerings by vendors, which realize the economic value in this area.
  • Reduction in tuning – One of the most often reported benefits (which can be translated to economic value) from deployment of flash storage is the reduction in performance tuning required. There is no longer a need to chase performance problems or move data to balance workloads across actuator arms.

It is clear that a data at rest measure is inadequate. Nevertheless, price is always an issue, and the cost of flash storage continues to decline at a steep rate because of the investment in technology. Data reduction in the form of compression and deduplication is also, for the most part, a given in flash storage, multiplying the capacity stored per unit by 4:1 or 5:1 in most cases. Continued technology advances will improve costs even more.
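
A quick worked example of how data reduction reshapes the data at rest number, using an assumed street price purely for illustration:

```python
raw_cost_per_gb = 2.00  # hypothetical all-flash price, $/GB raw
for ratio in (4, 5):
    effective = raw_cost_per_gb / ratio
    print(f"{ratio}:1 reduction -> ${effective:.2f}/GB effective")
# 4:1 -> $0.50/GB, 5:1 -> $0.40/GB, before counting the consolidation,
# power, cooling and tuning savings described above.
```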

The $/GB data at rest measure is difficult for many to stop using, even though it misrepresents true value. People use it because it is easy and because it has become a habit after years of measuring value that way. However, it is wrong. There needs to be another relatively simple measure that encompasses all the values noted earlier. It may take a while for that to come about. In the meantime, we will continue to look at economic value, build TCO economic models, and explain real value as part of evaluating solutions for handling information.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).

