Storage Soup


June 30, 2016  8:06 PM

Red Hat Storage VP sees different uses for Ceph, Gluster

Carol Sliwa

Red Hat Storage showed off updates to its Ceph and Gluster software and laid out its strategy for working with containers at this week’s Red Hat Summit in San Francisco.

We caught up with Ranga Rangachari, vice president and general manager of Red Hat Storage, to discuss the latest product releases, industry trends and the company’s future storage direction. Interview excerpts follow.

Which of the Red Hat storage products – Gluster or Ceph – is seeing greater adoption?

Rangachari: Both of them are. It’s very much a workload-driven conversation. Ceph is part and parcel of the OpenStack story. In the community, [the majority] of the OpenStack implementations were using Ceph as the storage substrate. Gluster is classic file serving – second-tier storage, deep archiving. If I were to take a look at the big picture, it’s right down the middle. Some days, you might have a lot more OpenStack conversations. Other days, you might have a lot more archiving, file services or virtualization conversations.

Red Hat just unveiled a technology preview of the Ceph file system (CephFS). Why does Red Hat need another file-based storage option?

Rangachari: The tech preview is focused on OpenStack. The advantage of this approach is customers can now use Ceph for file, object and block for OpenStack. Ceph has good penetration within the OpenStack market for block storage. We expanded it to object, and file is the third leg of the stool.

Do you envision CephFS in use only in conjunction with OpenStack?

Rangachari: It’s tough to predict, but for the foreseeable future, it’s going to be mainly focused on OpenStack. File systems need a lot of time from a testing and a maturity standpoint before we throw it out and say, ‘Start using it for general-purpose workloads.’ . . . We have not yet formulated any detailed plans around what else we could do with CephFS beyond OpenStack.

What’s the future direction for Ceph and Gluster?

Rangachari: One area we are focused on is the ability to manage our portfolio through a single management plane. The other area is interfacing and integrating with leading ISV applications, especially in the object storage space. The first wave of our ecosystem was around the hardware vendors, whether it’s server vendors or SSDs and those types of things.

With all the disruption in the storage industry, which events are having the greatest impact on Red Hat storage strategy?

Rangachari: One is flash solid-state drives [SSDs]. One of the biggest holdbacks a year ago was that the cost per TB was pretty expensive when it comes to SSDs and flash. But now I think Moore’s law in a way is taking shape. You’re seeing the processing and the capacity increase and the price dramatically drop. That’s one area that we are paying very close attention to, and the SanDisk [InfiniFlash] announcement was the first step in that direction.

The other thing that we are seeing is containers. In the conversations that we are having with customers, that’s becoming the next wave in infrastructure and the next wave in how applications are developed and delivered.

June 30, 2016  3:02 PM

Turnaround specialist Walsh takes over IBM storage

Dave Raffo
Catalogic, IBM Storage

IBM is bringing in industry veteran Ed Walsh to try to light a fire under its struggling storage division.

Walsh will take over as general manager of IBM Storage and Software Defined Infrastructure on July 11. He joins IBM from Catalogic Software, where he was CEO since 2014.

Walsh worked for IBM Storage from 2010-13 and has been CEO of four storage startups. Walsh became CEO of Catalogic nine months after it spun out of Syncsort. Under Walsh, Catalogic has broadened the storage arrays it supports – adding support for IBM and EMC systems to go with its original NetApp support.

An e-mail from an IBM spokesperson referred to Walsh as a “change agent” and noted his “ability to drive transformation and lead teams to embrace a new direction.” Walsh is expected to try to rally IBM’s storage business around its FlashSystem all-flash platform, Spectrum storage software and its Cleversafe object storage acquisition.

Walsh faces a different type of challenge at IBM than he is used to. He is considered a turnaround specialist for startups, and his tenure usually ends with a sale to a larger vendor. At IBM, he will be tasked with waking a sleeping giant, and he will more likely be buying companies than selling his own.

IBM’s storage revenue has declined in each of the last four years, dropping from $3.7 billion in 2011 to $2.4 billion in 2015. Its storage hardware revenue was $433.5 million in the first quarter of this year, down 6% year over year.

According to IDC, IBM stood fifth behind EMC, NetApp, Hewlett Packard Enterprise and Hitachi in networked storage sales in the first quarter with 7.9% market share. IBM’s full year 2015 market share was 10%, according to IDC.

IBM has fared better in the all-flash market, ranking second by IDC for 2015. However, IDC put IBM’s first quarter all-flash revenue at $67.4 million, up 54% in a market that grew 87.4%. IBM ranked fifth in all-flash revenue for the first quarter behind EMC, NetApp, Pure Storage and HPE on IDC’s list.

Walsh was IBM’s vice president of marketing and strategy for storage after selling primary data compression vendor Storwize to Big Blue in 2010. Walsh was also CEO of data deduplication pioneer Avamar from 2005 until selling the company to EMC in November 2006. He stayed with EMC to run the Avamar division until February 2007. He was CEO of server virtualization startup Virtual Iron from 2009 until selling that company to Oracle in 2010.

Walsh was also VP of sales, marketing and alliances for Fibre Channel switch vendor CNT Technologies from 2001 to 2005.

Walsh replaces Greg Lotko, who held the GM job on an interim basis and will become vice president of development for IBM Storage.

Catalogic today named Ken Barth as its CEO to replace Walsh. Barth has been on the Catalogic board since the 2013 spinout. He was CEO of storage resource management vendor TekTools from 1996 until SolarWinds acquired the company in 2010.


June 30, 2016  2:59 PM

Webscale Networks airs out multi-cloud DR

Paul Crocetti

Webscale Networks is well aware that its e-commerce customers can’t tolerate much downtime.

To that end, the website infrastructure provider recently expanded capabilities to build a multi-cloud for disaster recovery as a service. Webscale Networks lets customers implement backup instances of their cloud deployments in separate regions or with separate cloud providers. The provider offers a service-level agreement (SLA) that guarantees customers will have their site running at an alternate location within 60 minutes, with no more than 15 minutes of data loss.

“Customers were interested in DR that was cross-cloud,” said Jay Smith, CTO and founder of Webscale.

Smith said Webscale’s SLA is conservative with its stated recovery time objectives and recovery point objectives, and he expects them to decrease over time.

“We’re able to meet SLAs with room to spare,” Smith said.

Webscale Networks, based in Mountain View, Calif., began operations in 2012 under the name Lagrange Systems. It now claims over 40 customers, mainly mid-market e-commerce companies.

The Webscale Multi-Cloud DR service allows customers to fail over to another cloud provider, with minimal data loss and minimal outage time, said CEO Sonal Puri. In addition, if downtime occurs, Webscale can automatically fail over to a scheduled alternate region.

Webscale Multi-Cloud DR provides two options for disaster recovery — Webscale Cloud Backup and Webscale Cloud Mirror.

With Webscale Cloud Backup, customers can make a copy of their entire back end — the application and data server — periodically. For e-commerce applications that require more frequent backup, Webscale Cloud Mirror allows customers to keep a near real-time replica of their back end in an alternate location. Webscale Cloud Mirror also ensures that application delivery controllers remain consistently available regardless of the status of either the data server or application layer, according to the company.
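
To make the distinction concrete, here is a minimal sketch in Python. It is not Webscale's implementation; the interval and lag values are illustrative assumptions. It simply contrasts the worst-case data loss (effectively the RPO) of a periodic backup versus a near real-time mirror.

```python
# Illustrative sketch only -- not Webscale's implementation. A periodic backup
# can lose up to a full interval of changes; a near real-time mirror loses
# only its replication lag.
def worst_case_data_loss_minutes(approach: str,
                                 backup_interval_min: float = 60.0,
                                 mirror_lag_min: float = 1.0) -> float:
    if approach == "cloud_backup":      # periodic copy of the whole back end
        return backup_interval_min
    if approach == "cloud_mirror":      # continuously replicated back end
        return mirror_lag_min
    raise ValueError(f"unknown approach: {approach}")

print(worst_case_data_loss_minutes("cloud_backup", backup_interval_min=15.0))  # 15.0
print(worst_case_data_loss_minutes("cloud_mirror"))                            # 1.0
```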

Webscale Multi-Cloud DR, with Cloud Backup, is included with the Webscale Pro and Enterprise platforms. Webscale Multi-Cloud DR, with Cloud Mirror, is included in Webscale Enterprise or available as an additional service for Webscale Pro.

Puri said much of the Webscale Networks value lies in its back-end DR services across cloud providers and regions.

“We are unique because of the back end,” she said.


June 29, 2016  2:40 PM

Veeam changes CEO; founder steps aside

Dave Raffo
Veeam

Veeam Software, which went from a niche virtual machine backup software vendor to an industry leader in less than a decade, changed its top leadership team Tuesday. The changes come as Veeam prepares to make a stronger run at enterprise sales and helping customers move to the cloud.

Ratmir Timashev stepped down as CEO, and his co-founder Andrei Baronov shifted from VP of software engineering into the new CTO post. They will help Veeam with market strategy and product development. Veeam veteran William Largent was promoted to CEO, and the company added VMware executive Peter McKay, who will run day-to-day operations as president and COO.

Veeam chief marketing officer Peter Ruchatz said the vendor had close to $500 million in billings last year and the transition is designed to help it reach its goal of $1 billion in annual billings by 2019.

“We’re constantly thinking about how we can take the business to the next level,” he said. “We have a couple of things coming together now. Over the past 12 to 18 months, we’ve pursued opportunities beyond what Veeam’s business was in the beginning, which was the SMB market. Now we’re focused on availability for the enterprise.

“The changes we’ve made are starting to come to fruition and they will take Veeam to the next growth level. Those new opportunities also bring complexity, so we decided we should bring on external management.”

Largent joined Veeam in 2008 as executive vice president. He previously worked with Timashev and Baronov at Aelita Software, which Quest Software acquired in 2006. He has also been CEO of Applied Innovation. Largent will move from Veeam’s Columbus, Ohio office to its global headquarters in Baar, Switzerland.

McKay comes to Veeam from VMware, where he was senior vice president and general manager of the Americas. He was CEO of startups Desktone, Watchfire and eCredit – all acquired by larger companies – before joining VMware.

Ruchatz said McKay will run the day-to-day business. “He has experience in large corporations,” Ruchatz said. “He also knows how startups work. He knows how to scale and where we need to be.”

Ruchatz said Timashev will help plan Veeam’s strategic moves, and Baronov will remain involved in product direction.

Veeam has sold into the enterprise for the past year or so, but mostly to departments inside large companies. Ruchatz said the vendor is ramping up its sales team to go after larger footprints inside enterprises. It is planning an August product launch that expands its availability platform. Cloud connectivity will play a large role in the new product, which will include disaster recovery orchestration. Veeam is also expected to add deeper integration with public clouds such as Amazon and Microsoft Azure. The changes will include more subscription-based pricing.

Veeam cracked the Gartner Magic Quadrant leaders category for data center backup and recovery software this year for the first time. Gartner listed Veeam as a leader along with Commvault, IBM, Veritas Technologies and EMC.

Veeam, a private company, claims its bookings revenue grew 24% year-over-year for the first quarter of 2016, including a 75% growth in deals above $500,000. Veeam claims an average of 3,500 new customers each month and said it finished March with 193,000 customers worldwide.

Newcomer McKay previously served as an executive-in-residence for Insight Venture Partners, which has a minority holding in Veeam. However, Ruchatz said Veeam has no plans to seek venture funding or become a public company.

“Nothing changes on the investment side,” Ruchatz said. “We enjoy being a private company and have the flexibility to make big moves. We’re running a profitable company and the market knows it. We don’t need further funding. In fact, we have enough to start looking at making potential acquisitions.”


June 28, 2016  6:18 PM

Zerto disaster recovery products get boost with $20M investment

Paul Crocetti
Storage

With a new round of $20 million in funding, business continuity/disaster recovery software vendor Zerto plans to double its engineering force by the end of the year to accelerate product releases.

“It extends what we can do and how long we can continue to be as aggressive as we are,” said Rob Strechay, Zerto’s vice president of product.

The Series E1 financing for Zerto disaster recovery, led by Charles River Ventures (CRV), follows the $50 million Series E financing headed by Institutional Venture Partners announced in January. The vendor has raised $130 million in total financing.

Strechay joined Zerto from Hewlett Packard Enterprise after the January funding round. He said Zerto’s engineering head count will be close to 160 by the end of 2016. He anticipates two product releases in 2017 that will extend Azure, failback and cloud capabilities.

At the end of May, Zerto detailed its next Zerto Virtual Replication release, code-named “Athena.” That product is due late in 2016. It will include support for replication into the Azure cloud. Zerto also unveiled a mobile application for monitoring BC/DR.

Zerto has been expanding internationally. Last week, Zerto opened an office in Singapore, with six employees in support, sales and marketing, Strechay said. The company also has support services in its Boston and Israel offices, meaning it now offers support across the globe.

Zerto is expanding its office in the United Kingdom outside London, which is the base of its European operations. Strechay said he expects no immediate impact following the Brexit vote, but the vendor may need to revisit pricing following the drop in the value of the pound.

Zerto is also looking to accelerate its sales and marketing in Asia and the Pacific with the new funding.

Strechay said CRV approached Zerto to extend the Series E funding and provided the vast majority of this round.

“[CRV general partner Murat Bicer] understood the value proposition, the leadership the company was taking,” Strechay said. “He really wanted in.”

Zerto claims four consecutive years of at least 100% sales growth.


June 28, 2016  2:30 PM

EMC pitches isolated data recovery to thwart cyber attacks

Dave Raffo
EMC

Who’s protecting the data in your data protection storage? That’s a question EMC wants you to think about as the scope of security threats increases.

EMC recommends – and has customers using – an isolated data center disconnected from the network to keep out threats such as ransomware and other types of cyber attacks. This involves locking down systems used for recovery and limiting exposure to create an air gap between the recovery zone and production systems.

An air-gapped device never has an active unsecured connection. EMC’s isolated data recovery makes the recovery target inaccessible from the network and restricts it to users who are cleared to access it. In most cases, the target is a Data Domain disk backup system that is off the grid most of the time.

EMC’s isolated recovery includes VMAX storage arrays, Data Domain disk backup and RecoverPoint or Vplex software. A Fibre Channel connection between the VMAX and Data Domain ports is recommended.

The air gap is created by closing ports when not in use and limiting the open ports to those needed to replicate data. VMAX SRDF creates a crash-consistent copy of the production environment, and its SymACL access control is used to restrict access and prevent remote commands from being executed from production arrays.

RecoverPoint and Vplex can be used with EMC XtremIO and VNX arrays to handle replication and provide crash consistent copies.

The process allows companies to keep a secure and isolated gold copy. When a new gold copy is replicated, analytics are run to compare it to the most recently copied version. If this validation process reveals corruption in the new version, an alert goes out and an emergency script is triggered to invalidate the replication and lock down the isolated recovery system. A good gold copy can be restored to a recovery host in the isolated recovery area.
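
For illustration, the sketch below mirrors that flow in Python. The manifest-hashing comparison and the stubbed port, alert and lockdown steps are assumptions made for readability; they are not EMC's actual analytics or tooling.

```python
# Illustrative sketch only -- not EMC's implementation. Replication ports are
# opened just long enough to pull a new gold copy, the copy is compared to the
# last known-good version, and the recovery zone is locked down if the
# comparison looks suspicious.
import hashlib
from pathlib import Path

def copy_fingerprint(copy_dir: Path) -> dict[str, str]:
    """Hash every file in a gold copy so two copies can be compared."""
    return {
        str(p.relative_to(copy_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(copy_dir.rglob("*")) if p.is_file()
    }

def looks_corrupted(new_copy: Path, last_good: Path,
                    max_change_ratio: float = 0.5) -> bool:
    """Stand-in for the analytics step: flag the new copy if an implausibly
    large fraction of files changed since the last known-good copy."""
    old, new = copy_fingerprint(last_good), copy_fingerprint(new_copy)
    changed = sum(1 for name, digest in new.items() if old.get(name) != digest)
    return changed > max_change_ratio * max(len(old), 1)

def replication_cycle(new_copy: Path, last_good: Path) -> None:
    # The air gap: replication ports are open only for the copy itself.
    print("open replication ports")                 # stub for port/SRDF control
    print("pull crash-consistent copy from production")
    print("close replication ports")
    if looks_corrupted(new_copy, last_good):
        print("ALERT: gold copy failed validation; locking down recovery zone")
    else:
        print("gold copy promoted to latest known-good version")
```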

“We think we’re ahead of the curve here,” said Alex Almeida, manager of EMC’s data protection technical marketing.

He said the key to the air gap process is “traffic cannot reach that isolated system from outside. We can shut down ports to that system.”

Almeida said EMC built its first isolated recovery network at the request of the CIO of “a well-known consumer brand.” The storage vendor has since received requests from other companies, mainly in the healthcare and financial services industries.

“We have sold dozens of these things,” he said.

EMC has been quiet about its air gapping process until now, but went public with it today when it released the EMC Global Data Protection Index 2016 research that included scary numbers about the frequency of data loss from a survey of 2,200 IT decision-makers.

Those numbers include:

  • 36% of businesses surveyed have lost data as the result of an external or internal security breach.
  • Fewer than 50% of organizations are protecting cloud data against corruption or against deletion. Many incorrectly believe their cloud provider protects data for them.
  • 73% admitted that they were not confident their data protection systems will be able to keep pace with the faster performance and new capabilities of flash storage.
  • Only 18% said they were confident that their data protection solutions will meet their future business challenges.

User error and product malfunctions have always been a problem and cyber theft and denial of service attacks have been around for years. But newer tactics such as cyber extortion and cyber destruction through use of ransomware and other means are looming as expensive threats to large companies.

“Data protection now requires a business to defend backup copies against malicious attack,” said Chris Ratcliffe, senior vice president of marketing for EMC’s Core Technologies Division. “It’s no longer good enough to have storage as a last resort. You need a solution to protect your storage as a last resort.”


June 28, 2016  9:52 AM

How to measure flash storage’s true value

Randy Kerns
flash storage

Flash storage, or to use the broader term, solid-state storage, suffers from an inadequate measure of value. Flash storage provides a step-function improvement in the ability to store and retrieve information. The value of processing with flash storage compared to access from electro-mechanical devices is not easy to express.

Many in the industry still use a “data at rest” measure, which is the cost of storing data. That fails to represent more valuable characteristics such as access time and longevity. The data at rest measure, given as dollars per GB, can be misleading and does not convey real economic value. If that is the only measure to use for information storage, then you should use magnetic tape for all operations because it is the least expensive media.

Some vendors also use a dollars per IOPS measure for all-flash storage systems. This measure does not represent the value of what flash can accomplish because it is an aggregate number: it represents the total number of I/Os a system can do, which could also be achieved with thousands of short-stroked disk drives. It does not directly reflect the improvement in response time, which is the most meaningful measure for accelerating applications and getting more work done.

So if these measures are inadequate, what is the best way to gauge the value of flash storage? It actually varies depending on the case. Flash can provide key improvements, including consolidation, acceleration, reduction in physical space/power/cooling, longevity, and reduced tuning. Let’s look at these:

  • Consolidation – The greater performance levels of flash storage allow for the deployment of more diverse workloads on a single system. With larger capacity flash storage systems, workloads running on multiple spinning disk systems can be consolidated to a single flash storage system. The value of consolidation includes a reduction of the number of systems to manage and the physical space required.
  • Acceleration – The first deployments of flash systems focused on accelerating applications (mostly databases), and virtual machine or desktop environments. Acceleration enabled more transactions and improvements in the number of VMs and desktops supported. The successes here drove the shift to more widespread use of solid-state storage technology.
  • Physical space – Flash technology increases the capacity per chip and results in less physical space required. Even flash packaged in solid-state drives has eclipsed the capacity points of hard disk drives. With flash storage, more information can be contained in a given physical space than was previously possible, and technology gains continue to improve in this area. This is important for most organizations where information storage represents a large physical presence.
  • Power and cooling – Storage devices using flash technology consume less power and generate less heat (requiring less cooling) than devices with motors and actuators. There is an obvious reduction in cost from this improvement. But this becomes more important when physical plant limitations prevent bringing in more power and cooling to the data center.
  • Longevity – Probably the least understood added value from flash storage is the greater longevity of the flash devices and the economic impact that brings. The reliability and wear characteristics are different from electro-mechanical devices, and have reached a point where vendors are giving seven- and 10-year guarantees and even some lifetime warranties with ongoing support contracts. This dramatically changes the economics from the standpoint of total cost of ownership over the long lifespan. The key driver is the disaggregation of the storage controller or server from the flash storage enclosures, which allows controllers to be updated independently. This has led to some “evergreen” offerings by vendors, which realize the economic value in this area.
  • Reduction in tuning – One of the most often reported benefits (which can be translated to economic value) from deployment of flash storage is the reduction in performance tuning required. This means there is no longer a need to chase performance problems and move data to balance workloads with actuator arms.

It is clear that a data at rest measure is inadequate. Nevertheless, price is always an issue, and the cost of flash storage continues to decline at a steep rate because of the investment in technology. Data reduction in the form of compression and deduplication is also largely a given in flash storage, multiplying the capacity stored per unit by 4:1 or 5:1 in most cases. Continued technology advances will improve costs even more.
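
As a back-of-the-envelope illustration of how data reduction changes the data-at-rest math, consider the sketch below. The $/GB figures are made-up assumptions, not quotes, and real reduction ratios vary by workload.

```python
# Back-of-the-envelope illustration only; the $/GB figures are assumptions.
def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Cost per logical GB stored once compression and deduplication are applied."""
    return raw_cost_per_gb / reduction_ratio

flash_raw = 1.50   # assumed raw $/GB for flash capacity
disk_raw = 0.30    # assumed raw $/GB for spinning disk

# With an assumed 4:1 reduction on flash and little reduction on disk, the
# data-at-rest gap narrows sharply, before counting performance, space,
# power, cooling or longevity benefits.
print(effective_cost_per_gb(flash_raw, 4.0))   # 0.375
print(effective_cost_per_gb(disk_raw, 1.0))    # 0.30
```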

The $/GB data at rest measure is difficult for many to stop using, even though it misrepresents true value. People use it because it is easy and it is a habit after years of measuring value that way. However, it is wrong. There needs to be another relatively simple measure that encompasses all the values noted earlier. It may take a while for that to come about. In the meantime, we will continue to look at economic value, build TCO economic models, and explain real value as part of evaluating solutions for handling information.

(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm).


June 27, 2016  10:36 AM

Nasuni customers hit with cloud outage

Sonia Lelii

Cloud provider Nasuni Corp. experienced two cloud disruptions recently that affected about 20 percent of its customers and about 10 percent of workloads, according to company spokesperson Fred Pinkett.

Pinkett, senior director of product marketing and management at Nasuni, said the first cloud outage, which lasted four hours on June 15, was a performance degradation in the company’s API servers. A second feature disruption, tied to the global file-locking system (GFS), lasted about one hour and 45 minutes on June 16.

“The GFS uses some servers for the API and does the health check and it caused (a problem with) the GFS,” said Pinkett.

The second disruption was shorter than the first. Amazon Web Services (AWS) happened to be going through a performance degradation while Nasuni was doing a health-check rebuild process.

“Customers had intermittent access during that time to those storage volumes that use global file locking,” Nasuni said in a prepared statement. “No data was lost or corrupted. In addition, Nasuni is taking measures to prevent a similar feature disruption in the future by rolling out cross-regional locking and by helping customers configure their filers so that, in the case of a locking disruption, they will be able to read and write to all data in their local cache.”

Nasuni executives said the company has served billions of global locks, and they believe cloud-based global file locking is the best architecture for locking files across many locations sharing many terabytes of data.

Systems that rely on devices to act as a lock server are extremely difficult to scale globally and are vulnerable to disruption if the device fails, in which case the responsibility for fixing an outage lies with the customer’s IT organization.

“Nasuni, on the other hand, proactively monitors its service and takes full responsibility for fixing issues as they arise,” according to the company statement.

Nasuni uses a hybrid approach to the cloud. Its Nasuni Filers at customer locations hold frequently accessed data in cache while sending older data to the Microsoft Azure and AWS public clouds.

The global file-locking capability facilitates use of the controller in multiple locations within a company by maintaining data integrity. It prevents users from accessing a file that is already open and creating multiple, conflicting file versions.
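
As a rough illustration of the idea (this is not Nasuni's protocol or API), a cloud-held lock table that every location consults before opening a file might behave like the sketch below.

```python
# Purely illustrative sketch of global file-locking semantics -- not Nasuni's
# implementation. A single cloud-side table of lock holders keeps two offices
# from editing the same file at once and creating conflicting versions.
import threading

class GlobalLockService:
    """Stands in for the cloud-hosted lock server shared by every location."""
    def __init__(self) -> None:
        self._holders: dict[str, str] = {}   # path -> office currently editing
        self._mutex = threading.Lock()

    def acquire(self, path: str, office: str) -> bool:
        with self._mutex:
            if path in self._holders:
                return False                  # someone else already has the file open
            self._holders[path] = office
            return True

    def release(self, path: str, office: str) -> None:
        with self._mutex:
            if self._holders.get(path) == office:
                del self._holders[path]

locks = GlobalLockService()
print(locks.acquire("/projects/plan.docx", "boston"))   # True: Boston gets the lock
print(locks.acquire("/projects/plan.docx", "london"))   # False: London must wait
locks.release("/projects/plan.docx", "boston")
print(locks.acquire("/projects/plan.docx", "london"))   # True: now London can edit
```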

Nasuni isn’t the first cloud gateway to add file locking, but the capability is considered a key step toward serving as a primary storage system.


June 23, 2016  6:50 AM

Dell extends Nutanix hyper-converged OEM deal

Dave Raffo
Dell, EMC, Hyper-convergence, Nutanix, VMware

LAS VEGAS — Dell answered one question about its post-EMC merger product lineup Wednesday at the Nutanix .NEXT conference.

Alan Atkinson, Dell vice president of storage, appeared on stage alongside Nutanix CEO Dheeraj Pandey to reveal Dell will extend its OEM deal to sell Nutanix hyper-converged systems. That revelation comes with Dell poised to pick up a bunch of new hyper-converged products from EMC and VMware.

“Customers love the XC Series (using Nutanix software),” Atkinson said. “We started with one product and now we have seven products. I’m thrilled to say we’ve reached an agreement to extend our OEM relationship. We’ll keep rolling it forward.”

Atkinson left the stage without saying how long the OEM extension will run, but a Nutanix spokesperson said the extension is for “multiple years.”

Dell and Nutanix originally signed a three-year OEM deal in 2014. Nutanix has since added a similar deal with Lenovo.

The Dell-Nutanix relationship was threatened by Dell’s pending $67 billion acquisition of EMC. Both EMC and its majority-owned VMware have hyper-converged products that compete with Nutanix. EMC sells VxRail and VxRack systems built on VMware’s VSAN and EMC ScaleIO software. Dell already re-brands VSAN Ready Node systems and resells EMC’s VSAN and ScaleIO boxes.

Atkinson said he has been asked about the future of the Dell-Nutanix deal often since Dell disclosed plans to acquire EMC last October.

“That’s a question I get once an hour,” he said at .NEXT.

Even Pandey has been asking. “We’ve been talking to Michael (Dell) on an everyday basis,” he said.

The Dell-EMC deal is expected to close in August.

In its Securities and Exchange Commission filing detailing its plans to become a public company, Nutanix listed the Dell-EMC deal as a risk to its business.

“Dell is not just a competitor but also is an OEM partner of ours and the combined company may be more likely to promote and sell its own solutions over our products or cease selling or promoting our products entirely,” the filing reads. “If EMC and Dell decide to sell their own solutions over our products, that could adversely impact our OEM sales and harm our business, operating results and prospects, and our stock price could decline.”


June 20, 2016  4:26 PM

EMC, Veritas lead in worldwide PBBA revenues

Sonia Lelii
Storage

Unlike overall disk storage, purpose-built backup appliance (PBBA) total worldwide factory revenue increased in the first quarter of 2016, according to International Data Corporation (IDC) worldwide quarterly tracker data.

IDC shows the backup appliance market growing 6.2 percent year over year to $762.2 million in the first quarter. That’s in contrast to overall disk storage, which declined seven percent to $8.2 billion in the quarter, according to IDC. External storage, which includes SAN and NAS, declined 3.7 percent to $5.4 billion for the quarter.

EMC led the pack in overall growth for backup appliances. Total worldwide PBBA capacity shipped during this period was 886 PB, an increase of 40% compared to last year.

EMC generated $427 million in revenue and held 56% of the market share for PBBAs, compared to $377 million in revenue and 52.5% market share in the first quarter of 2015. That was a 13.2 percent growth rate.

No. 2 Veritas generated $139 million in revenue and held 18.2% market share in the first quarter of this year, compared to $133 million in revenue and 18.5% market share in the first quarter of 2015, when it was part of Symantec. The company’s PBBA revenue grew 4.4 percent year over year.

Hewlett Packard Enterprise came in third with $35 million in revenue, or 4.5 percent market share, compared to $32.5 million in revenue and 4.5 percent market share in the first quarter of 2015. HPE posted 6.3 percent growth from the first quarter of 2015 to the first quarter of this year.

IBM backup appliance revenue fell 26% to $30 million, or 3.9 percent market share, compared to $41 million last year. Dell generated $25 million in revenue, or 3.2 percent market share, compared to $18.3 million in the same period last year.

Dell is in the process of completing a $67 billion acquisition of EMC. Combined, the two companies held 59.2% of the backup appliance market.

Total PBBA open systems factory revenue increased 8.3 percent year over year in the first quarter, with revenue of $703.2 million. Mainframe revenue declined 14%. The PBBA market experienced a downturn in early 2015 before sales picked up later in the year.

IDC defines a PBBA as a standalone disk-based solution that uses software, disk arrays, and server engines or nodes as a target for backup data (specifically, data coming from a backup application), or that is tightly integrated with the backup software to catalog, index, schedule and perform data movement.

The PBBA products are deployed in standalone configurations or as gateways.

