Storage Soup


June 29, 2016  2:40 PM

Veeam changes CEO; founder steps aside

Dave Raffo
Veeam

Veeam Software, which went from a niche virtual machine backup software vendor to an industry leader in less than a decade, changed its top leadership team Tuesday. The changes come as Veeam prepares to make a stronger run at enterprise sales and at helping customers move to the cloud.

Ratmir Timashev stepped down as CEO and his co-founder Andrei Baronov shifted from VP of software engineering into the new CTO post. They will help Veeam with market strategy and product development. Veeam veteran William Largent was promoted to CEO, and the company added VMware executive Peter McKay, who will run day-to-day operations as president and COO.

Veeam chief marketing officer Peter Ruchatz said the vendor had close to $500 million in billings last year and the transition is designed to help it reach its goal of $1 billion annual billings by 2019.

“We’re constantly thinking about how we can take the business to the next level,” he said. “We have a couple of things coming together now. Over the past 12 to 18 months, we’ve pursued opportunities beyond what Veeam’s business was in the beginning, which was the SMB market. Now we’re focused on availability for the enterprise.

“The changes we’ve made are starting to come to fruition and they will take Veeam to the next growth level. Those new opportunities also bring complexity, so we decided we should bring on external management.”

Largent joined Veeam in 2008 as executive vice president. He previously worked with Timashev and Baronov at Aelita Software, which Quest Software acquired in 2006. He has also been CEO of Applied Innovation. Largent will move from Veeam’s Columbus, Ohio, office to its global headquarters in Baar, Switzerland.

McKay comes to Veeam from VMware, where he was senior vice president and general manager of the Americas. He was CEO of startups Desktone, Watchfire and eCredit – all acquired by larger companies – before joining VMware.

Ruchatz said McKay will run the day-to-day business. “He has experience in large corporations,” Ruchatz said. “He also knows how startups work. He knows how to scale and where we need to be.”

Ruchatz said Timashev will help plan Veeam’s strategic moves, and Baronov will remain involved in product direction.

Veeam has sold into the enterprise for the past year or so, but mostly to departments inside large companies. Ruchatz said the vendor is ramping up its sales team to go after larger footprints inside enterprises. It is planning an August product launch that expands its availability platform. Cloud connectivity will play a large role in the new product, which will include disaster recovery orchestration. Veeam is also expected to add deeper integration with public clouds such as Amazon and Microsoft Azure. The changes will include more subscription-based pricing.

Veeam cracked the Gartner Magic Quadrant leaders category for data center backup and recovery software this year for the first time. Gartner listed Veeam as a leader along with Commvault, IBM, Veritas Technologies and EMC.

Veeam, a private company, claims its bookings revenue grew 24% year-over-year for the first quarter of 2016, including a 75% growth in deals above $500,000. Veeam claims an average of 3,500 new customers each month and said it finished March with 193,000 customers worldwide.

Newcomer McKay previously served as an executive-in-residence for Insight Venture Partners, which has a minority holding in Veeam. However, Ruchatz said Veeam has no plans to seek venture funding or become a public company.

“Nothing changes on the investment side,” Ruchatz said. “We enjoy being a private company and have the flexibility to make big moves. We’re running a profitable company and the market knows it. We don’t need further funding. In fact, we have enough to start looking at making potential acquisitions.”

June 28, 2016  6:18 PM

Zerto disaster recovery products get boost with $20M investment

Paul Crocetti
Storage

With a new round of $20 million in funding, business continuity/disaster recovery software vendor Zerto plans to double its engineering force by the end of the year to accelerate product releases.

“It extends what we can do and how long we can continue to be as aggressive as we are,” said Rob Strechay, Zerto’s vice president of product.

The Series E1 financing for Zerto disaster recovery, led by Charles River Ventures (CRV), follows the $50 million Series E financing headed by Institutional Venture Partners announced in January. The vendor has raised $130 million in total financing.

Strechay joined Zerto from Hewlett Packard Enterprise after the January funding round. He said Zerto’s engineering head count will be close to 160 by the end of 2016. He anticipates two product releases in 2017 that will extend Azure, failback and cloud capabilities.

At the end of May, Zerto detailed its next Zerto Virtual Replication release, code-named “Athena.” That product is due late in 2016. It will include support for replication into the Azure cloud. Zerto also unveiled a mobile application for monitoring BC/DR.

Zerto has been expanding internationally. Last week, Zerto opened an office in Singapore, with six employees in support, sales and marketing, Strechay said. The company also has support services in its Boston and Israel offices, meaning it now offers support across the globe.

Zerto is expanding its office in the United Kingdom outside London, which is the base of its European operations. Strechay said he expects no immediate impact from the Brexit vote, but the vendor may need to look at pricing following the drop in the value of the pound.

Zerto is also looking to accelerate its sales and marketing in Asia and the Pacific with the new funding.

Strechay said CRV approached Zerto to extend the Series E funding and provided the vast majority of this round.

“[CRV general partner Murat Bicer] understood the value proposition, the leadership the company was taking,” Strechay said. “He really wanted in.”

Zerto claims four consecutive years of at least 100% sales growth.


June 28, 2016  2:30 PM

EMC pitches isolated data recovery to thwart cyber attacks

Dave Raffo
EMC

Who’s protecting the data in your data protection storage? That’s a question EMC wants you to think about as the scope of security threats increases.

EMC recommends – and has customers using – an isolated data center disconnected from the network to keep out threats such as ransomware and other types of cyber attacks. This involves locking down systems used for recovery and limiting exposure to create an air gap between the recovery zone and production systems.

An air-gapped device never has an active unsecured connection. EMC’s isolated data recovery makes the recovery target inaccessible to the network and restricted from all users who are not cleared to access it. In most cases, it’s a Data Domain disk backup system that is off the grid most of the time.

EMC’s isolated recovery includes VMAX storage arrays, Data Domain disk backup and RecoverPoint or Vplex software. A Fibre Channel connection between the VMAX and Data Domain ports is recommended.

The air gap is created by closing ports when not in use, and limiting the open ports to those needed to replicate data. VMAX SRDF creates a crash consistent copy of the production environment, and its SymACL Access Control is used to restrict access and prevent remote commands from being executed from production arrays.

RecoverPoint and Vplex can be used with EMC XtremIO and VNX arrays to handle replication and provide crash consistent copies.

The process allows companies to keep a secure and isolated gold copy. When a new gold copy is replicated, analytics are run to compare it to the most recently copied version. If this validation process reveals a corruption in the new version, an alert goes out and an emergency script is triggered to invalidate the replicated copy and lock down the isolated recovery system. A good gold copy can be restored to a recovery host in the isolated recovery area.
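
The flow described here boils down to a loop: replicate, validate the new copy against the last known-good version, then either promote it or lock the vault down. Below is a minimal, generic sketch of that validation step in Python, with made-up names and a deliberately naive corruption check; it is illustrative only and is not EMC's tooling.

# Minimal, self-contained sketch of the gold-copy validation idea described
# above. This is not EMC's implementation; the corruption check is a stand-in
# for the analytics EMC says it runs against the previous gold copy.
import hashlib
from dataclasses import dataclass

@dataclass
class GoldCopy:
    name: str
    data: bytes

    def fingerprint(self) -> str:
        return hashlib.sha256(self.data).hexdigest()

def validate_and_promote(vault: dict, new_copy: GoldCopy) -> bool:
    """Return True if the new copy is promoted to gold, False if the vault is locked down."""
    previous = vault.get("gold")
    # Naive placeholder check: treat a sharply shrunken copy as suspect.
    looks_corrupt = previous is not None and len(new_copy.data) < 0.5 * len(previous.data)
    if looks_corrupt:
        print(f"ALERT: {new_copy.name} failed validation; locking down the isolated recovery system")
        vault["locked"] = True
        return False
    vault["gold"] = new_copy
    print(f"Promoted {new_copy.name} as gold copy (sha256 {new_copy.fingerprint()[:12]}...)")
    return True

# Example run with hypothetical copies
vault = {"locked": False}
validate_and_promote(vault, GoldCopy("copy-1", b"x" * 1000))
validate_and_promote(vault, GoldCopy("copy-2", b"x" * 100))   # fails the check and triggers lockdown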

“We think we’re ahead of the curve here,” said Alex Almeida, manager of EMC’s data protection technical marketing.

He said the key to the air gap process is “traffic cannot reach that isolated system from outside. We can shut down ports to that system.”

Almeida said EMC built its first isolated recovery network at the request of the CIO of “a well-known consumer brand.” The storage vendor has since received requests from other companies, mainly in the healthcare and financial services industries.

“We have sold dozens of these things,” he said.

EMC has been quiet about its air gapping process until now, but went public with it today when it released the EMC Global Data Protection Index 2016 research that included scary numbers about the frequency of data loss from a survey of 2,200 IT decision-makers.

Those numbers include:

  • 36% of businesses surveyed have lost data as the result of an external or internal security breach.
  • Fewer than 50% of organizations are protecting cloud data against corruption or against deletion. Many incorrectly believe their cloud provider protects data for them.
  • 73% admitted they were not confident their data protection systems will be able to keep pace with the faster performance and new capabilities of flash storage.
  • Only 18% said they were confident that their data protection solutions will meet their future business challenges.

User error and product malfunctions have always been a problem and cyber theft and denial of service attacks have been around for years. But newer tactics such as cyber extortion and cyber destruction through use of ransomware and other means are looming as expensive threats to large companies.

“Data protection now requires a business to defend backup copies against malicious attack,” said Chris Ratcliffe, senior vice president of marketing for EMC’s Core Technologies Division. “It’s no longer good enough to have storage as a last resort. You need a solution to protect your storage as a last resort.”


June 28, 2016  9:52 AM

How to measure flash storage’s true value

Randy Kerns
flash storage

Flash storage, or solid-state storage to use the broader term, suffers from an inadequate measure of value. Flash storage provides a step-function improvement in the ability to store and retrieve information. The value of processing with flash storage compared to access from electro-mechanical devices is not easy to express.

Many in the industry still use a “data at rest” measure, which is the cost of storing data. That fails to represent more valuable characteristics such as access time and longevity. The data at rest measure, given as dollars per GB, can be misleading and does not convey real economic value. If that is the only measure to use for information storage, then you should use magnetic tape for all operations because it is the least expensive media.

Some vendors also use a dollars per IOPS measure for all-flash storage systems. This measure does not represent the value of what flash can accomplish because it is an aggregate number. This means it represents the total number of I/Os a system can do, which could be from thousands of short-stroked disk drives. It does not directly reflect the improvement in response time, which is the most meaningful measure in accelerating applications and getting more work done.
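
A quick back-of-the-envelope comparison shows why the aggregate number misleads. The prices, IOPS counts and latencies below are invented purely for illustration:

# Two hypothetical systems with identical $/IOPS but very different response times.
systems = [
    # (name, price in dollars, aggregate IOPS, typical response time in ms)
    ("Short-stroked HDD array (hypothetical)", 400_000, 200_000, 5.0),
    ("All-flash array (hypothetical)",         400_000, 200_000, 0.5),
]

for name, price, iops, latency_ms in systems:
    print(f"{name}: ${price / iops:.2f} per IOPS, {latency_ms} ms response time")

# Both work out to $2.00 per IOPS, yet the flash system answers each I/O ten
# times faster, which is the improvement the $/IOPS measure never captures.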

So if these measures are inadequate, what is the best way to gauge the value of flash storage? It actually varies depending on the case. Flash can provide key improvements, including consolidation, acceleration, reduction in physical space/power/cooling, longevity, and reduced tuning. Let’s look at these:

  • Consolidation – The greater performance levels of flash storage allow for the deployment of more diverse workloads on a single system. With larger capacity flash storage systems, workloads running on multiple spinning disk systems can be consolidated to a single flash storage system. The value of consolidation includes a reduction of the number of systems to manage and the physical space required.
  • Acceleration – The first deployments of flash systems focused on accelerating applications (mostly databases), and virtual machine or desktop environments. Acceleration enabled more transactions and improvements in the number of VMs and desktops supported. The successes here drove the shift to more widespread use of solid-state storage technology.
  • Physical space – Flash technology increases the capacity per chip and results in less physical space required. Even flash packaged in solid-state drives has eclipsed the capacity points of hard disk drives. With flash storage, more information can be stored in a given physical space than was previously possible, and technology gains are still improving in this area. This is important for most organizations where information storage represents a large physical presence.
  • Power and cooling – Storage devices using flash technology consume less power and generate less heat (requiring less cooling) than devices with motors and actuators. There is an obvious reduction in cost from this improvement. But this becomes more important when physical plant limitations prevent bringing in more power and cooling to the data center.
  • Longevity – Probably the least understood added value from flash storage is the greater longevity in usage for the flash devices and the economic impact that brings. The reliability and wear characteristics are different from electro-mechanical devices, and have reached a point where vendors are giving seven- and 10-year guarantees and even some lifetime warranties with ongoing support contracts. This dramatically changes the economics from the standpoint of total cost of ownership over the long lifespan. The key driver of this is the disaggregation of the storage controller or server from the flash storage enclosures, which allows controllers to be updated independently. This has led to some “evergreen” offerings by vendors, which actualizes the economic value in this area.
  • Reduction in tuning – One of the most often reported benefits (which can be translated to economic value) from deployment of flash storage is the reduction in performance tuning required. This means there is no longer a need to chase performance problems and move data to balance workloads with actuator arms.

It is clear that a data at rest measure is inadequate. Nevertheless, price is always an issue, and the cost for flash storage continues to decline at a steep rate because of the investment in technology. Data reduction in the form of compression and deduplication also is a given for the most part in flash storage, multiplying the capacity stored per unit by 4:1 or 5:1 in most cases. The continued technology advances will improve costs even more.
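
A rough calculation shows how much data reduction alone changes a data-at-rest comparison. The raw $/GB figures below are placeholders for illustration, not market prices:

# Effective cost per usable GB once data reduction is factored in.
def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Divide the raw media cost by the reduction ratio (e.g. 4 for 4:1)."""
    return raw_cost_per_gb / reduction_ratio

flash_raw = 1.50   # hypothetical raw $/GB for flash
disk_raw = 0.30    # hypothetical raw $/GB for spinning disk

# At 4:1 or 5:1 reduction the flash data-at-rest gap narrows sharply, before
# any consolidation, power, cooling or tuning benefits are even counted.
for ratio in (4, 5):
    print(f"Flash at {ratio}:1 reduction: ${effective_cost_per_gb(flash_raw, ratio):.3f}/GB "
          f"vs disk at ${disk_raw:.2f}/GB with no reduction")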

The $/GB data at rest measure is difficult for many to stop using, even though it misrepresents true value. People do it because it is easy and it is a habit after years of measuring value that way. However, it is wrong. There needs to be another relatively simple measure to encompass all the values noted earlier. It may take a while for that to come about. In the meantime, we will continue to look at economic value, do TCO economic models, and explain real value as part of evaluating solutions for handling information.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


June 27, 2016  10:36 AM

Nasuni customers hit with cloud outage

Sonia Lelii

Cloud provider Nasuni Corp. experienced two cloud disruptions recently that affected about 20 percent of its customers and about 10 percent of workloads, according to company spokesperson Fred Pinkett.

Pinkett, senior director of product marketing and management at Nasuni, said the first cloud outage, a four-hour problem on June 15, was a performance degradation in the company’s API servers. Then there was another feature disruption, tied to the global file-locking system (GFS), on June 16 that lasted about one hour and 45 minutes.

“The GFS uses some servers for the API and does the health check and it caused (a problem with) the GFS,” said Pinkett.

The second disruption was shorter than the first. Amazon Web Services (AWS) happened to be going through a performance degradation while Nasuni was doing a health-check rebuild process.

“Customers had intermittent access during that time to those storage volumes that use global file locking,” Nasuni said in a prepared statement. “No data was lost or corrupted. In addition, Nasuni is taking measures to prevent a similar feature disruption in the future by rolling out cross-regional locking and by helping customers configure their filers so that, in the case of a locking disruption, they will be able to read and write to all data in their local cache.”

Nasuni executives said the company has served billions of global locks and believes that cloud-based global file locking is the best architecture for locking files across many locations sharing many terabytes of data.

Nasuni argues that systems relying on a device to act as a lock server are extremely difficult to scale globally and are vulnerable to disruption if the device fails, in which case the responsibility for fixing an outage lies with the customer’s IT organization.

“Nasuni, on the other hand, proactively monitors its service and takes full responsibility for fixing issues as they arise,” according to the company statement.

Nasuni uses a hybrid approach to the cloud. Its Nasuni Filers at customer locations hold frequently accessed data in cache while sending older data to the Microsoft Azure and AWS public clouds.

The global file-locking capability facilitates use of the controller in multiple locations within a company by maintaining data integrity. It prevents users from accessing a file that is already open and creating multiple, conflicting file versions.
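
The concept is simple to sketch, even though Nasuni's cloud-based service is far more involved: before any location opens a file for writing, it must win a lock that every other location respects. The toy example below illustrates the idea only and is not Nasuni's design:

# Generic illustration of a shared global lock service for files.
class GlobalLockService:
    def __init__(self):
        self._locks = {}        # file path -> owning site

    def acquire(self, path: str, site: str) -> bool:
        owner = self._locks.get(path)
        if owner is None or owner == site:
            self._locks[path] = site
            return True
        return False            # another site holds the lock; open read-only instead

    def release(self, path: str, site: str) -> None:
        if self._locks.get(path) == site:
            del self._locks[path]

locks = GlobalLockService()
print(locks.acquire("/projects/plan.docx", "Boston"))     # True: Boston can edit
print(locks.acquire("/projects/plan.docx", "Singapore"))  # False: prevents a conflicting version
locks.release("/projects/plan.docx", "Boston")
print(locks.acquire("/projects/plan.docx", "Singapore"))  # True once Boston releases the lock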

Nasuni isn’t the first cloud gateway to add file locking, but the capability is considered a key step toward becoming a primary storage system.


June 23, 2016  6:50 AM

Dell extends Nutanix hyper-converged OEM deal

Dave Raffo
Dell, EMC, Hyper-convergence, Nutanix, VMware

LAS VEGAS — Dell answered one question about its post-EMC merger product lineup Wednesday at the Nutanix .NEXT conference.

Alan Atkinson, Dell vice president of storage, appeared on stage alongside Nutanix CEO Dheeraj Pandey to reveal Dell will extend its OEM deal to sell Nutanix hyper-converged systems. That revelation comes with Dell poised to pick up a bunch of new hyper-converged products from EMC and VMware.

“Customers love the XC Series (using Nutanix software),” Atkinson said. “We started with one product and now we have seven products. I’m thrilled to say we’ve reached an agreement to extend our OEM relationship.  We’ll keep rolling it forward.”

Atkinson left the stage without saying how long the OEM extension will run, but a Nutanix spokesperson said the extension is for “multiple years.”

Dell and Nutanix originally signed a three-year OEM deal in 2014. Nutanix has since added a similar deal with Lenovo.

The Dell-Nutanix relationship was threatened by Dell’s pending $67 billion acquisition of EMC. Both EMC and its majority-owned VMware have hyper-converged products that compete with Nutanix. EMC sells VxRail and VxRack systems built on VMware’s VSAN and EMC ScaleIO software. Dell already re-brands VSAN Ready Node systems and resells EMC’s VSAN and ScaleIO boxes.

Atkinson said he has been asked about the future of the Dell-Nutanix deal often since Dell disclosed plans to acquire EMC last October.

“That’s a question I get once an hour,” he said at .NEXT.

Even Pandey has been asking. “We’ve been talking to Michael (Dell) on an everyday basis,” he said.

The Dell-EMC deal is expected to close in August.

In its Securities and Exchange Commission filing detailing its plans to become a public company, Nutanix listed the Dell-EMC deal as a risk to its business.

“Dell is not just a competitor but also is an OEM partner of ours and the combined company may be more likely to promote and sell its own solutions over our products or cease selling or promoting our products entirely,” the filing reads. “If EMC and Dell decide to sell their own solutions over our products, that could adversely impact our OEM sales and harm our business, operating results and prospects, and our stock price could decline.”


June 20, 2016  4:26 PM

EMC, Veritas lead in worldwide PBBA revenues

Sonia Lelii
Storage

Unlike overall disk storage, purpose-built backup appliances’ total worldwide factory revenue increased in the first quarter of 2016, according to the International Data Corporation (IDC) Worldwide Quarterly Tracker.

IDC shows the backup appliance market growing 6.2 percent year-over-year to $762.2 million in the first quarter. That’s in contrast to overall disk storage, which declined seven percent to $8.2 billion in the quarter, according to IDC. External storage, which includes SAN and NAS, declined 3.7 percent to $5.4 billion for the quarter.

EMC led the pack in overall growth for backup appliances.  Total worldwide PBBA capacity shipped during this period was 886 PB, an increase of 40% compared to last year.

EMC generated $427 million in revenue and held 56% of the market share for PBBAs, compared to $377 million in revenue and 52.5% market share in the first quarter of 2015. That was a 13.2 percent growth rate.

No. 2 Veritas generated $139 million in revenue and held 18.2% market share in the first quarter of this year, compared to $133 million in revenue and 18.5% market share in the first quarter of 2015, when it was part of Symantec. The company’s PBBA revenue grew 4.4 percent year-over-year.

Hewlett Packard Enterprise came in third with $35 million in revenue or 4.5 percent market share, compared to $32.5 million in revenue and 4.5 percent market share in the first quarter of 2015. HPE experienced 6.3 percent growth between the first quarter this year and the first quarter in 2015.

IBM backup appliance revenue fell 26% to $30 million, or 3.9 percent market share, compared to $41 million last year. Dell generated $25 million, or 3.2 percent market share, compared to $18.3 million in revenue in the same period last year.

Dell is in the process of completing a $67 billion acquisition of EMC. The combined companies had 59.2% of the backup appliance market.

Total PBBA open systems factory revenue increased 8.3 percent year-over-year in the first quarter with revenues of $703.2 million. Mainframe revenues declined 14%. Last year, the PBBA market experienced a downturn in early 2015 before sales picked up later in the year. 

IDC defines a PBBA as a standalone disk-based solution that utilizes software, disk arrays, and server engines or nodes used as a target for backup data, specifically data coming from a backup application; the appliance can also be tightly integrated with the backup software to catalog, index, schedule and perform data movement.

The PBBA products are deployed in standalone configurations or as gateways.


June 16, 2016  9:35 AM

Cavium pays $1 billion for QLogic, storage

Dave Raffo
Cavium, Qlogic, Storage Networking

After months of shopping itself, storage networking vendor QLogic has a buyer.

Networking semiconductor vendor Cavium said Wednesday it will pay $1 billion for the FC and Ethernet adapter company in a deal expected to close in the third quarter of 2016. The deal adds to the evidence that storage networking fits best as part of a larger Ethernet-dominated networking company.

Cisco kicked off the trend of combining networking and storage when the Ethernet giant started selling Fibre Channel SAN switches in 2003. The FC connectivity market has seen a great deal of consolidation since then, with FC vendors either going away or joining forces with Ethernet companies.

Avago Technologies acquired QLogic rival Emulex for $606 million in 2015. That deal was sandwiched between Avago’s purchases of storage and networking chip firm LSI and networking connectivity vendor Broadcom.

Brocade, which began as a Fibre Channel switch company, moved into Ethernet network switching with a $2.6 billion acquisition of Foundry Networks in 2008. FC remains the bulk of Brocade’s business, but the vendor also added wireless networking vendor Ruckus for $1.2 billion in April to make it a broader enterprise play.

Now we have the two FC switch vendors Brocade and Cisco and the two main FC adapter providers QLogic and Emulex selling storage as part of larger networking companies.

Cavium is a player in the networking, communications and cloud markets but will use QLogic to fill one gap in its product line.

“Storage has been one of the areas that has been an aspirational market for us,” Cavium CEO Syed Ali said Wednesday evening on a webcast to discuss the QLogic deal. “We never had penetration into the mainstream storage market.”

Ali said owning QLogic can help Cavium push its own products such as its ThunderX data center and cloud processor into deals with storage vendors such as EMC and NetApp.  It also gives Cavium an instant storage software stack.

“QLogic has an extensive software stack that we don’t have for the mainstream market,” he said. “That software stack takes years to build out.”

Ali said Cavium will kill QLogic legacy products such as its FC switches because “it’s not a great idea for a silicon company to also be a switch company,” but he said he is bullish on QLogic’s FC adapters. He pointed to the ongoing transition from 8 Gbps FC to 16 Gbps, the move to 32 Gbps expected in 2017-18, and the rise of all-flash arrays as drivers of FC business. He said all-flash arrays have a 70% to 80% FC attach rate.


June 10, 2016  5:34 PM

IDC: NetApp jumps to No. 2 in Q1 all-flash

Carol Sliwa
Storage

EMC remained No. 1 and NetApp jumped to No. 2 in all-flash array (AFA) revenue for the first quarter of 2016, according to IDC’s new market share statistics.

EMC had $245.6 million in revenue and 30.9% AFA market share, while NetApp took in $181.1 million and accounted for 22.8%. Rounding out the top five were Pure Storage at 15.0%, Hewlett Packard Enterprise (HPE) at 12.4% and IBM at 8.5%.

NetApp benefitted from a change to IDC’s AFA taxonomy with the release of the June 2016 Tracker. Previously, IDC only recognized NetApp’s EF-Series as an AFA. With the new taxonomy, NetApp’s All Flash FAS also qualifies under a new “Type 3” category.

IDC defines an AFA as “any network-based storage array that supports only all-flash media as persistent storage and is available as a unique SKU.” With the new IDC AFA taxonomy, IDC for the first time identified three types of AFAs. Type 1 and Type 2 would have fit the old taxonomy, but Type 3 is new, according to Eric Burgener, an IDC storage research director.

Type 1: Arrays that were originally “born” as AFAs. Examples include EMC XtremIO, IBM FlashSystem, Kaminario K2, Pure Storage FlashArray//m, NetApp SolidFire SF Series, Tegile IntelliFlash HD, and Violin Memory Flash Storage Platform. Although NetApp acquired SolidFire in December, IDC won’t begin crediting NetApp with the SolidFire revenue until the second quarter, according to Burgener.

Type 2: Arrays that originated as hybrid designs but have undergone significant flash optimization, do not support HDDs, and include some high-performance hardware unique to the all-flash configuration, such as controllers that are faster than those included in the vendor’s hybrid flash arrays (HFAs). Examples include: Hitachi Data Systems (HDS) VSP F Series, Hewlett Packard Enterprise (HPE) 3PAR StoreServ 8450, and Tegile IntelliFlash T3800.

Type 3: Arrays that originated as hybrid designs but have undergone significant flash optimization. Type 3 arrays do not support HDDs and do not include hardware, other than flash media, that is different than the hardware the vendor ships on its HFAs. Examples include: EMC VMAX All Flash, Fujitsu DX200F, NEC M710F, and NetApp All Flash FAS.

Burgener said IDC changed its AFA taxonomy for several reasons.

“No. 1 is the level of flash optimization that we were starting to see from the systems that began life years ago as hybrids were producing performance that was not really very distinguishable from purpose-built AFAs,” he said. “Two years ago, there was a big difference in terms of the latencies and the ability to sustain consistent performance. But now, there’s not that big a difference.”

Burgener said storage vendors were selling their all-flash configurations, whether hybrid or not, in a “directly competitive manner to purpose-built AFAs.” Because the products target the same customers in essentially the same market, it made sense to include them, he said.

But, if a storage array ships with only flash drives and can support HDDs, IDC considers it to be a hybrid flash array.

“That gets rid of this whole question: ‘Well, what if I put a disk in later?’ ” Burgener said. “We’ve clarified that now. We just said, ‘Look, let’s simplify it.’ ”
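
Boiled down, the taxonomy amounts to a short decision rule. The sketch below uses shorthand attribute names of our own for the criteria Burgener describes; it is illustrative, not an IDC artifact:

# Shorthand rendering of the classification rules summarized above.
def classify_array(supports_hdds: bool, born_all_flash: bool,
                   unique_all_flash_hardware: bool) -> str:
    if supports_hdds:
        return "Hybrid flash array (HFA)"   # even if it ships with only flash drives
    if born_all_flash:
        return "AFA Type 1"                 # originally "born" as an AFA
    if unique_all_flash_hardware:
        return "AFA Type 2"                 # hybrid origin, unique all-flash hardware
    return "AFA Type 3"                     # hybrid origin, same hardware as the vendor's HFAs

print(classify_array(supports_hdds=False, born_all_flash=True,  unique_all_flash_hardware=False))  # AFA Type 1
print(classify_array(supports_hdds=False, born_all_flash=False, unique_all_flash_hardware=False))  # AFA Type 3
print(classify_array(supports_hdds=True,  born_all_flash=False, unique_all_flash_hardware=False))  # Hybrid flash array (HFA)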

Most of NetApp’s new AFA revenue was due to its All Flash FAS (AFF), according to Burgener.

Burgener said there might have been pent-up demand for a more flash-optimized All Flash FAS with NetApp’s ONTAP 8.2 release. But he expects the pent-up demand will wane and NetApp’s growth rate will drop by the fourth quarter.

IDC went back and adjusted its historical numbers and forecasts to reflect the new AFA taxonomy. Under the old taxonomy, 2015 revenue for the total AFA market was $2.53 billion. With the new AFA taxonomy, the 2015 revenue figure for the entire market is $2.73 billion, according to Burgener.

NetApp placed fifth in 2015 AFA revenue  under the new IDC taxonomy, but moved ahead of HPE into fourth place in Q4 of 2015, Burgener said.

[Table: Q1 2016 IDC AFA market share]

[Table: 2015 IDC AFA market share, updated]


June 10, 2016  4:15 PM

HDS storage ‘freezing’ really a focus on future

Rodney Brown
Hitachi Data Systems

There has been some noise in the storage space lately, based on the English translation of a report on a Japanese IT website that said Hitachi Data Systems was “freezing” investment in its high-end storage products. Lost in this declaration is that HDS gets that arrays alone aren’t the future of enterprise storage.

On June 1, the site IT Pro Nikkei published a report based on an HDS briefing that said the vendor would be “freezing the investment in the high-end model of the storage business.” That led media sites and competitors to speculate that perhaps HDS was going to let its high-end storage business languish and die.

HDS has been quick to deny it will exit the high-end storage market. But the strategy outlined in the IT Pro Nikkei story is not new or surprising. High-end enterprise disk arrays are far from a growth market in this age of flash, cloud, hyper-convergence and software-defined storage.

HDS CTO Hu Yoshida revealed the vendor’s new strategy in a January interview on our SearchStorage site with senior writer Carol Sliwa.

In that interview, Yoshida said, right out of the gate, “The infrastructure market is not going to be a growth market. Instead of trying to compete on infrastructure, we’re going to have to compete on application enablement.” Translation: HDS’ new Lumada line for handling data from IoT sources, based on the Pentaho IoT data analytics technology HDS acquired in 2015, will play a major role in the company’s future.

Yoshida specifically calls out IoT as a vital component of the future of HDS. “We have an overall corporate strategy with Hitachi, called Social Innovation, where we are moving toward the Internet of Things (IoT), trying to build smart cities and provide more insights into data centers, telco [and] automotive,” he said.

IoT was also a big topic at the HDS Connect partner conference a year ago.

On his own blog, Hu’s Place, Yoshida this week clarified what “freezing” investment in high-end storage means for HDS. He wrote that hardware investments in the flagship high-end HDS hard disk drive arrays are no longer necessary due to the ability to use standard Intel processors and flash for performance, with storage features running in software.  HDS will shift research and development from its Virtual Storage Platform (VSP) hardware to Storage Virtualization Operating System (SVOS), flash and storage automation.

“There is no need to build separate hardware for midrange and enterprise customers,” Yoshida wrote. “They all have access to enterprise functionality and services like virtualization of external storage systems for consolidation and migration, high availability with Global-Active Device, and geo-replication with Universal Replicator.”

In April HDS added to SVOS what it apparently considers to be the last hardware puzzle piece, building NAS functionality (or, as Yoshida described it, “embedded file support in our block storage”) into its G series of hybrid flash arrays.

So the upshot is that HDS will invest in making SVOS and VSP perform better for specific applications, like IoT data storage and analytics. That sounds much less like the sky falling on HDS and a lot more like a strategic investment in what it sees as a future in which application integration, rather than storage features, becomes the major differentiator between storage vendors.

