Cloud provider Nasuni Corp. experienced two cloud disruptions recently that affected about 20 percent of its customers and about 10 percent of workloads, according to company spokesperson Fred Pinkett.
Pinkett, senior director of product marketing and management at Nasuni, said the first disruption, a four-hour outage on June 15, was caused by performance degradation in the company’s API servers. A second disruption, tied to the global file-locking system (GFS), followed on June 16 and lasted about one hour and 45 minutes.
“The GFS uses some servers for the API and does the health check, and it caused (a problem with) the GFS,” said Pinkett.
The second disruption was shorter than the first. Amazon Web Services (AWS) happened to be experiencing a performance degradation while Nasuni was performing a health-check rebuild process.
“Customers had intermittent access during that time to those storage volumes that use global file locking,” Nasuni said in a prepared statement. “No data was lost or corrupted. In addition, Nasuni is taking measures to prevent a similar feature disruption in the future by rolling out cross-regional locking and by helping customers configure their filers so that, in the case of a locking disruption, they will be able to read and write to all data in their local cache.”
Nasuni executives said the company has served billions of global locks, and they believe cloud-based global file locking is the best architecture for locking files across many locations sharing many terabytes of data.
Systems that rely on devices to act as a lock server are extremely difficult to scale globally and are vulnerable to disruption if the device fails, in which case the responsibility for fixing an outage lies with the customer’s IT organization.
“Nasuni, on the other hand, proactively monitors its service and takes full responsibility for fixing issues as they arise,” according to the company statement.
Nasuni uses a hybrid approach to the cloud. Its Nasuni Filers at customer locations hold frequently accessed data in cache while sending older data to the Microsoft Azure and AWS public clouds.
The global file-locking capability facilitates use of the controller in multiple locations within a company by maintaining data integrity. It prevents users from accessing a file that is already open and creating multiple, conflicting file versions.
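The version-conflict problem global locking solves can be sketched with a toy lock service (a hypothetical illustration, not Nasuni’s actual implementation): a single authority grants exclusive access to a file path across all filers, so a second site cannot open a file that another site already holds.

```python
# Toy sketch of cloud-based global file locking (hypothetical design,
# not Nasuni's implementation). One central service grants exclusive
# access to a path across many edge filers, preventing two sites from
# creating conflicting versions of the same file.
import threading

class GlobalLockService:
    def __init__(self):
        self._locks = {}             # path -> holder (filer id)
        self._mutex = threading.Lock()

    def acquire(self, path, filer_id):
        """Grant the lock only if no other filer currently holds it."""
        with self._mutex:
            holder = self._locks.get(path)
            if holder is None or holder == filer_id:
                self._locks[path] = filer_id
                return True
            return False             # file already open elsewhere

    def release(self, path, filer_id):
        with self._mutex:
            if self._locks.get(path) == filer_id:
                del self._locks[path]

svc = GlobalLockService()
assert svc.acquire("/projects/design.docx", "filer-boston")
assert not svc.acquire("/projects/design.docx", "filer-london")  # blocked
svc.release("/projects/design.docx", "filer-boston")
assert svc.acquire("/projects/design.docx", "filer-london")      # now free
```

The sketch also illustrates the scaling objection in the next paragraph: because every open must round-trip to one lock authority, that authority becomes the availability bottleneck when it fails.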
Nasuni isn’t the first cloud gateway vendor to add file locking, but the capability is considered a key step toward becoming a primary storage system.
LAS VEGAS — Dell answered one question about its post-EMC merger product lineup Wednesday at the Nutanix .NEXT conference.
Alan Atkinson, Dell vice president of storage, appeared on stage alongside Nutanix CEO Dheeraj Pandey to reveal Dell will extend its OEM deal to sell Nutanix hyper-converged systems. That revelation comes with Dell poised to pick up a bunch of new hyper-converged products from EMC and VMware.
“Customers love the XC Series (using Nutanix software),” Atkinson said. “We started with one product and now we have seven products. I’m thrilled to say we’ve reached an agreement to extend our OEM relationship. We’ll keep rolling it forward.”
Atkinson left the stage without saying how long the OEM extension will run, but a Nutanix spokesperson said the extension is for “multiple years.”
The Dell-Nutanix relationship was threatened by Dell’s pending $67 billion acquisition of EMC. Both EMC and its majority-owned VMware have hyper-converged products that compete with Nutanix. EMC sells VxRail and VxRack systems built on VMware’s VSAN and EMC ScaleIO software. Dell already re-brands VSAN Ready Node systems and resells EMC’s VSAN and ScaleIO boxes.
Atkinson said he has been asked about the future of the Dell-Nutanix deal often since Dell disclosed plans to acquire EMC last October.
“That’s a question I get once an hour,” he said at .NEXT.
Even Pandey has been asking. “We’ve been talking to Michael (Dell) on an everyday basis,” he said.
The Dell-EMC deal is expected to close in August.
In its Securities and Exchange Commission filing detailing its plans to become a public company, Nutanix listed the Dell-EMC deal as a risk to its business.
“Dell is not just a competitor but also is an OEM partner of ours and the combined company may be more likely to promote and sell its own solutions over our products or cease selling or promoting our products entirely,” the filing reads. “If EMC and Dell decide to sell their own solutions over our products, that could adversely impact our OEM sales and harm our business, operating results and prospects, and our stock price could decline.”
Unlike overall disk storage, purpose-built backup appliance (PBBA) total worldwide factory revenue increased in the first quarter of 2016, according to International Data Corporation’s (IDC) Worldwide Quarterly Tracker.
IDC shows the backup appliance market growing 6.2 percent year-over-year to $762.2 million in the first quarter. That’s in contrast to overall disk storage, which declined seven percent to $8.2 billion in the quarter, according to IDC. External storage, which includes SAN and NAS, declined 3.7 percent to $5.4 billion for the quarter.
EMC led the pack in overall growth for backup appliances. Total worldwide PBBA capacity shipped during this period was 886 PB, an increase of 40% compared to last year.
EMC generated $427 million in revenue and held 56% of the market share for PBBAs, compared to $377 million in revenue and 52.5% market share in the first quarter of 2015. That was a 13.2 percent growth rate.
No. 2 Veritas generated $139 million in revenue and held 18.2% market share in the first quarter of this year, compared to $133 million in revenue and 18.5% market share in the first quarter of 2015, when it was part of Symantec. The company grew 4.4 percent year-over-year in the PBBA market.
Hewlett Packard Enterprise came in third with $35 million in revenue or 4.5 percent market share, compared to $32.5 million in revenue and 4.5 percent market share in the first quarter of 2015. HPE experienced 6.3 percent growth between the first quarter this year and the first quarter in 2015.
IBM backup appliance revenue fell 26% to $30 million, or 3.9 percent market share, compared to $41 million last year. Dell generated $25 million in revenue, or 3.2 percent market share, compared to $18.3 million in revenue in the same period last year.
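The growth rates quoted above follow directly from the revenue figures; a quick check (revenue in millions, with small differences from the quoted percentages attributable to rounding in the reported revenue):

```python
# Year-over-year growth implied by the IDC revenue figures above
# (revenue in millions of dollars).
def yoy_growth(current, prior):
    """Percentage change from the prior-year quarter."""
    return (current - prior) / prior * 100

emc = yoy_growth(427, 377)        # roughly the 13.2% quoted
veritas = yoy_growth(139, 133)    # roughly the 4.4% quoted
ibm = yoy_growth(30, 41)          # roughly the 26% decline quoted
```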
Dell is in the process of completing a $67 billion acquisition of EMC. The combined companies held 59.2% of the backup appliance market.
Total PBBA open systems factory revenue increased 8.3 percent year-over-year in the first quarter to $703.2 million, while mainframe revenue declined 14%. The PBBA market experienced a downturn in early 2015 before sales picked up later in the year.
IDC defines a PBBA as a standalone disk-based solution that utilizes software, disk arrays, server engines or nodes as a target for backup data, specifically data coming from a backup application, or that can be tightly integrated with the backup software to catalog, index, schedule and perform data movement.
The PBBA products are deployed in standalone configurations or as gateways.
After months of shopping itself, storage networking vendor QLogic has a buyer.
Networking semiconductor vendor Cavium said Wednesday it will pay $1 billion for the FC and Ethernet adapter company, in a deal expected to close in the third quarter of 2016. The deal adds to the evidence that storage networking best fits as part of a larger Ethernet-dominated networking company.
Cisco kicked off the trend of combining networking and storage when the Ethernet giant started selling Fibre Channel SAN switches in 2003. The FC connectivity market has seen a great deal of consolidation since then, with FC vendors either going away or joining forces with Ethernet companies.
Avago Technologies acquired QLogic rival Emulex for $606 million in 2015. That deal was sandwiched between Avago’s purchases of storage and networking chip firm LSI and networking connectivity vendor Broadcom.
Brocade, which began as a Fibre Channel switch company, moved into Ethernet network switching with a $2.6 billion acquisition of Foundry Networks in 2008. FC remains the bulk of Brocade’s business, but the vendor also added wireless networking vendor Ruckus for $1.2 billion in April to make it a broader enterprise play.
Now we have the two FC switch vendors Brocade and Cisco and the two main FC adapter providers QLogic and Emulex selling storage as part of larger networking companies.
Cavium is a player in the networking, communications and cloud markets but will use QLogic to fill one gap in its product line.
“Storage has been one of the areas that has been an aspirational market for us,” Cavium CEO Syed Ali said Wednesday evening on a webcast to discuss the QLogic deal. “We never had penetration into the mainstream storage market.”
Ali said owning QLogic can help Cavium push its own products such as its ThunderX data center and cloud processor into deals with storage vendors such as EMC and NetApp. It also gives Cavium an instant storage software stack.
“QLogic has an extensive software stack that we don’t have for the mainstream market,” he said. “That software stack takes years to build out.”
Ali said Cavium will kill QLogic legacy products such as its FC switches because “it’s not a great idea for a silicon company to also be a switch company” but he said he is bullish on QLogic’s FC adapters. He pointed to the ongoing transition from 8 Gb per second FC to 16 Gbps and the move to 32 Gbps expected in 2017-18 and the rise of all-flash arrays as drivers of FC business. He said all-flash arrays have a 70% to 80% FC attach rate.
EMC remained No. 1 and NetApp jumped to No. 2 in all-flash array (AFA) revenue for the first quarter of 2016, according to IDC’s new market share statistics.
EMC had $245.6 million in revenue and 30.9% AFA market share, while NetApp took in $181.1 million and accounted for 22.8%. Rounding out the top five were Pure Storage at 15.0%, Hewlett Packard Enterprise (HPE) at 12.4% and IBM at 8.5%.
NetApp benefitted from a change to IDC’s AFA taxonomy with the release of the June 2016 Tracker. Previously, IDC only recognized NetApp’s EF-Series as an AFA. With the new taxonomy, NetApp’s All Flash FAS also qualifies under a new “Type 3” category.
IDC defines an AFA as “any network-based storage array that supports only all-flash media as persistent storage and is available as a unique SKU.” With the new IDC AFA taxonomy, IDC for the first time identified three types of AFAs. Type 1 and Type 2 would have fit the old taxonomy, but Type 3 is new, according to Eric Burgener, an IDC storage research director.
Type 1: Arrays that were originally “born” as AFAs. Examples include EMC XtremIO, IBM FlashSystem, Kaminario K2, Pure Storage FlashArray//m, NetApp SolidFire FS Series, Tegile IntelliFlash HD, and Violin Memory Flash Storage Platform. Although NetApp acquired SolidFire in December, IDC won’t begin crediting NetApp with the SolidFire revenue until the second quarter, according to Burgener.
Type 2: Arrays that originated as hybrid designs but have undergone significant flash optimization, do not support HDDs, and include some high-performance hardware unique to the all-flash configuration, such as controllers that are faster than those included in the vendor’s hybrid flash arrays (HFAs). Examples include: Hitachi Data Systems (HDS) VSP F Series, Hewlett Packard Enterprise (HPE) 3PAR StoreServ 8450, and Tegile IntelliFlash T3800.
Type 3: Arrays that originated as hybrid designs but have undergone significant flash optimization. Type 3 arrays do not support HDDs and do not include hardware, other than flash media, that is different than the hardware the vendor ships on its HFAs. Examples include: EMC VMAX All Flash, Fujitsu DX200F, NEC M710F, and NetApp All Flash FAS.
Burgener said IDC changed its AFA taxonomy for several reasons.
“No. 1 is the level of flash optimization that we were starting to see from the systems that began life years ago as hybrids were producing performance that was not really very distinguishable from purpose-built AFAs,” he said. “Two years ago, there was a big difference in terms of the latencies and the ability to sustain consistent performance. But now, there’s not that big a difference.”
Burgener said storage vendors were selling their all-flash configurations, whether hybrid or not, in a “directly competitive manner to purpose-built AFAs.” Because the products target the same customers in essentially the same market, it made sense to include them, he said.
But if a storage array ships with only flash drives and can support HDDs, IDC considers it a hybrid flash array.
“That gets rid of this whole question: ‘Well, what if I put a disk in later?’ ” Burgener said. “We’ve clarified that now. We just said, ‘Look, let’s simplify it.’ ”
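The taxonomy Burgener describes can be summarized as a small decision rule. The encoding below is my own illustration of the three types and the HDD cutoff, not IDC’s wording:

```python
# IDC's AFA taxonomy as a decision rule (illustrative encoding, not
# IDC's official wording). The key cutoff: any array that CAN support
# HDDs is a hybrid flash array, even if it ships all-flash.
def classify(supports_hdds, born_all_flash, unique_aff_hardware):
    if supports_hdds:
        return "HFA"          # could take a disk later -> hybrid flash array
    if born_all_flash:
        return "Type 1 AFA"   # designed from the start as all-flash
    if unique_aff_hardware:
        return "Type 2 AFA"   # hybrid-origin, flash-only, upgraded hardware
    return "Type 3 AFA"       # hybrid-origin, same hardware as the HFA

assert classify(False, True, False) == "Type 1 AFA"   # e.g., XtremIO
assert classify(False, False, True) == "Type 2 AFA"   # e.g., 3PAR 8450
assert classify(False, False, False) == "Type 3 AFA"  # e.g., All Flash FAS
assert classify(True, False, False) == "HFA"
```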
Most of NetApp’s new AFA revenue was due to its All Flash FAS (AFF), according to Burgener.
Burgener said there might have been pent-up demand for a more flash-optimized All Flash FAS with NetApp’s OnTap 8.2 release. But, he expects the pent-up demand will wane, and NetApp’s growth rate will drop by the fourth quarter.
IDC went back and adjusted its historical numbers and forecasts to reflect the new AFA taxonomy. Under the old taxonomy, 2015 revenue for the total AFA market was $2.53 billion. With the new AFA taxonomy, the 2015 revenue figure for the entire market is $2.73 billion, according to Burgener.
NetApp placed fifth in 2015 AFA revenue under the new IDC taxonomy, but moved ahead of HPE into fourth place in Q4 of 2015, Burgener said.
There has been some noise in the storage space lately, based on the English translation of a Japanese IT website report that said Hitachi Data Systems (HDS) was “freezing” investment in its high-end storage products. Lost in this declaration is that HDS understands that arrays alone aren’t the future of enterprise storage.
On June 1, the site IT Pro Nikkei published a report based on an HDS briefing that said the vendor would be “freezing the investment in the high-end model of the storage business.” That led media sites and competitors to speculate that perhaps HDS was going to let its high-end storage business languish and die.
HDS has been quick to deny it will exit the high-end storage market. But the strategy outlined in the IT Pro Nikkei story is not new or surprising. High-end enterprise disk arrays are far from a growth market in this age of flash, cloud, hyper-convergence and software-defined storage.
In a recent interview, HDS CTO Hu Yoshida said, right out of the gate: “The infrastructure market is not going to be a growth market. Instead of trying to compete on infrastructure, we’re going to have to compete on application enablement.” Translation: HDS’ new Lumada line for handling data from IoT sources, based on the Pentaho IoT data analytics technology HDS acquired in 2015, will play a major role in the company’s future.
Yoshida specifically calls out IoT as a vital component of the future of HDS. “We have an overall corporate strategy with Hitachi, called Social Innovation, where we are moving toward the Internet of Things (IoT), trying to build smart cities and provide more insights into data centers, telco [and] automotive,” he said.
IoT was also a big topic at the HDS Connect partner conference a year ago.
On his own blog, Hu’s Place, Yoshida this week clarified what “freezing” investment in high-end storage means for HDS. He wrote that hardware investments in the flagship high-end HDS hard disk drive arrays are no longer necessary due to the ability to use standard Intel processors and flash for performance, with storage features running in software. HDS will shift research and development from its Virtual Storage Platform (VSP) hardware to Storage Virtualization Operating System (SVOS), flash and storage automation.
“There is no need to build separate hardware for midrange and enterprise customers,” Yoshida wrote. “They all have access to enterprise functionality and services like virtualization of external storage systems for consolidation and migration, high availability with Global-Active Device, and geo-replication with Universal Replicator.”
In April, HDS added to SVOS what it apparently considers the last hardware puzzle piece, building NAS functionality (or, as Yoshida described it, “embedded file support in our block storage”) into its G series of hybrid flash arrays.
The upshot: HDS will invest in making SVOS and VSP perform better for specific applications, such as IoT data storage and analytics. That sounds much less like the sky falling on HDS and a lot more like a strategic investment in a future in which application integration, rather than storage features, becomes the major differentiator between storage vendors.
Newcomer iguaz.io is the latest software startup trying to deliver the Holy Grail of storage: the ability to provision and manage on-premises capacity the same way as in Amazon Web Services (AWS).
Iguaz.io calls its software a virtualized data services architecture, similar language to what copy data management vendor Actifio used when it launched in 2010 and others have since adopted. There does seem to be some copy data management in iguaz.io’s software, along with features that help developers and application owners provision and manage storage. Iguaz.io gave a peek under the hood this week but is still months away from a shipping product.
“When people go to Amazon, they don’t know anything about the infrastructure,” iguaz.io founder and CTO Yaron Haviv said. “You go through APIs and define policies. Enterprise storage today is legacy storage – you go to IT, say ‘Provision this stuff for me, this is the performance I need, go run backups against my data’ and so on. We said, let’s take Amazon features and extend it to enterprise storage. It’s all self-service. Most of the work is for the application users and developers. They create policies and provisions, just like they’re using Amazon.”
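The self-service model Haviv describes can be sketched as follows. Everything here is hypothetical for illustration (the function, field names and tiering rule are invented; iguaz.io had not published an API at the time): the application owner declares policies, and the service maps them to a placement rather than routing a ticket through IT.

```python
# Illustrative sketch of policy-driven, self-service provisioning of
# the kind Haviv describes. All names, fields and the tiering rule are
# hypothetical, not iguaz.io's actual interface.
def provision(request):
    """Map a declarative policy request to a storage placement."""
    iops = request["policy"]["max_iops"]
    # Hypothetical tiering rule: demanding workloads land on flash.
    tier = "flash" if iops > 10_000 else "disk"
    return {
        "volume": request["name"],
        "tier": tier,
        "backup": request["policy"].get("backup", "daily"),
    }

# A developer provisions capacity the way they would on AWS: by API,
# with policies, and with no knowledge of the underlying infrastructure.
placement = provision({
    "name": "analytics-feed",
    "policy": {"max_iops": 50_000, "backup": "hourly"},
})
```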
Haviv said iguaz.io software will be sold either as software-only or on an appliance, and he expects cloud providers to be target customers, along with enterprises looking to build private clouds. He gave no target ship date but said iguaz.io plans to launch by the end of 2016.
Here is what Iguaz.io promises its software will do:
- consolidate data into a high-volume, high-velocity, real-time data repository that virtualizes and transforms data on the fly, exposes it as streams, messages, files, objects or data records consistently, and stores it on different memory or storage tiers;
- seamlessly accelerate popular application frameworks including Spark, Hadoop, ELK, or Docker containers;
- offer enterprises a 10x-to-100x improvement in time-to-insights at lower costs;
- leverage deep data insights to provide best-in-class data security, a critical need for data sharing among users and business units.
Its goals include the ability to enable stateless application containers in a cloud-type approach, provide access to data from multiple applications and users, and to simplify deployment and management. Haviv said it will run on flash, NVM, in the cloud, and on block and file storage.
If you’re wondering, the vendor’s name comes from the cascading Iguazu Falls on the border of Argentina and Brazil, signifying data cascading into a single stream. The Israel-based startup was founded in 2014 and has $15 million in funding. Its other founders include CEO Asaf Somekh, formerly of Mellanox and Voltaire, and COO Yaron Segev, who co-founded all-flash array pioneer XtremIO and sold it to EMC.
Hewlett Packard Enterprise (HPE) bucked the trend of storage revenue declines in the first quarter of 2016, according to IDC’s quarterly enterprise storage systems tracker. HPE was the only member of the top six vendors to increase its storage revenue over the first quarter of the previous year.
Industry-wide total external storage (SAN and NAS) declined 3.7 percent to $5.4 billion for the quarter. Overall storage, including servers and server-based storage, declined seven percent to $8.2 billion.
HPE’s external storage revenue increased 4.6 percent to $535.7 million, ranking third behind EMC and NetApp. HPE increased market share from 9.1 percent in the first quarter of 2015 to 9.9 percent in the first quarter of 2016.
EMC revenue declined 11.8% to $1.349 billion as it awaits its $67 billion acquisition by Dell. EMC’s market share fell from 27.2% to 24.9% over the year.
NetApp had a bigger fall, with revenue dropping 15.6% to $645.5 million and market share declining from 13.6% to 11.9%.
HPE was followed by Hitachi in fourth, IBM in fifth and Dell in sixth. All three declined in revenue year-over-year, although Hitachi gained share from nine percent to 9.2 percent. All other vendors combined for $1.59 billion, a 7.8% increase from the previous year. “Others” market share grew from 26.2% to 29.3%.
In the total storage market, HPE grew 11% and jumped ahead of EMC into first with $1.42 billion. EMC, which generates all its revenue from external storage, fell 11.8% in the overall market to a 16.4% market share compared to HPE’s 17.3% share. No. 3 Dell, No. 4 NetApp, No. 5 Hitachi and No. 6 IBM all declined in overall storage revenue. Other vendors increased 5.3% but storage systems sales by original design manufacturers (ODMs) selling directly to hyperscale data center customers slipped 39.9%.
IDC put all-flash arrays at $794.8 million in the quarter, up 87.4% for the year. Hybrid flash arrays accounted for $2.2 billion and 26.5% of the overall storage market share.
HPE continued its momentum into the second quarter, according to recent earnings reports from the major vendors. HPE reported two percent year-over-year growth while EMC, NetApp and IBM all said their storage revenue declined. HPE CEO Meg Whitman said 3PAR all-flash revenue nearly doubled from last year.
“We estimate that we gained market share in the external disk for the tenth consecutive quarter and expect storage to gain shares throughout the remainder of the year on the strength of the 3PAR portfolio and new logo wins as we take advantage of the uncertainties surrounding the Dell-EMC merger,” Whitman said on HPE’s May 24 earnings call.
|Top 5 Vendors, Worldwide External Enterprise Storage Systems Market, First Quarter of 2016 (Revenues are in Millions)|
|Vendor|1Q16 Revenue|1Q16 Market Share|1Q15 Revenue|1Q15 Market Share|1Q16/1Q15 Revenue Growth|
|Source: IDC Worldwide Quarterly Enterprise Storage Systems Tracker, June 3, 2016|
Qumulo, the scale-out data-aware NAS startup founded by Isilon veterans, today added $32.5 million in funding to expand its sales operation. Qumulo’s C funding round brings its total funding to $100 million.
“We’ve had a great year,” Qumulo CEO Peter Godman said. “We’ve been launched for about a year and we have more than 60 customers who continue to deploy bigger and bigger systems.”
Godman said Qumulo’s goal is to generate three times as much revenue over the next year, so a significant amount of the funding will go toward sales and marketing. He said field sales operations will nearly double over the next year. The startup has around 135 employees.
Godman said about half of Qumulo’s sales come from customers adding systems to their original purchases. A lot of repeat buys come from media and entertainment customers, whose capacity needs expand rapidly due to new higher-definition video formats.
On the product development front, he said Qumulo will continue to push out upgrades to its Core software every two weeks.
The daunting part for Qumulo is that its competition comes from two of the largest storage vendors, EMC and NetApp. In EMC’s case, Qumulo usually goes head-to-head with the Isilon platform that Qumulo’s founders helped develop. Godman said Qumulo competes frequently with Isilon in use cases such as animated movies where high performance is required. He said Quantum StorNext is another competitor in media, but “90-plus percent of the time, we compete with NetApp and EMC.”
Allen & Company, Top Tier Capital Partners, and Tyche Partners were Qumulo investors with the C round, joining previous investors Kleiner Perkins Caufield & Byers (KPCB), Madrona Venture Group, Highland Capital Partners, and Valhalla Partners.
DR may be dying. The term DR, that is, not the actual process of disaster recovery. There is a move in the industry to replace the phrase with “IT resilience.”
At last week’s ZertoCON business continuity conference, analysts from Gartner and Forrester both threw their support behind using the term resilience over disaster recovery.
Stephanie Balaouras, vice president and research director at Forrester, said she dislikes the term “disaster recovery” because it tends to focus on catastrophic events, which can cause management to think it’s too expensive and rare.
Organizations need to move beyond disaster recovery and embrace resiliency, which is more concentrated on continuous availability and continuous improvement, Balaouras said. Customers don’t care what happened to cause an outage, they just want “always-on.”
Balaouras outlined three actions to improve IT resilience.
- Calculate the cost of downtime. Fifty-seven percent of companies have told Forrester that they haven’t calculated that expense. And downtime is more than lost revenue; it also means lost employee productivity and morale, as well as lost business opportunities. Organizations should calculate revenue and productivity losses plus customer impact, and present several loss scenarios.
- Measure availability end-to-end. Availability is not about individual components, it’s the whole service, Balaouras said. When making your business case, take everything into account. As an example, Balaouras noted that the recent New York Stock Exchange outage was human error.
- Match business objectives to the right mix of technologies. Balaouras suggests planning an evolution to active-active sites, which takes some time. Businesses should maximize virtualization investments for resiliency. And rethink failover and replication options, as the technologies are not “one size fits all.”
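The first of those actions, calculating the cost of downtime, can be sketched as a simple model (the formula and all figures below are illustrative, not from Forrester): revenue loss plus lost employee productivity over the outage window.

```python
# Illustrative downtime-cost model along the lines Balaouras suggests.
# The formula and example figures are my own, not Forrester's.
def downtime_cost(hours, revenue_per_hour, employees, loaded_hourly_rate,
                  productivity_loss=0.5):
    """Revenue loss plus lost productivity for an outage of given length."""
    revenue_loss = hours * revenue_per_hour
    productivity = hours * employees * loaded_hourly_rate * productivity_loss
    return revenue_loss + productivity

# Example: a 4-hour outage, $50,000/hour in revenue at risk, 500
# affected employees at a $60/hour loaded rate, half their
# productivity lost during the outage.
cost = downtime_cost(4, 50_000, 500, 60)  # $260,000
```

As the text notes, a real business case would add customer impact and present several such loss scenarios, not a single point estimate.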
In his keynote address, John Morency, a research vice president at Gartner, said that IT resilience is becoming the new disaster recovery.
Most Gartner clients don’t use the term “disaster recovery” anymore — they want to focus more on IT resiliency, Morency said.
Newer technologies, such as replication, continuous data protection and snapshotting, are helping organizations enhance resiliency and proactively avoid recovery situations. While recovery time objectives used to be six to 18 hours for many, they’ve dropped to four hours or below, Morency said.
In her presentation, Balaouras also stressed the importance of time. With disaster recovery, downtime is measured in hours to days, while with IT resiliency, downtime is measured in minutes to hours.
Investments in disaster recovery are seen as expensive insurance policies and there isn’t enough emphasis in DR on the everyday events that cause the majority of business disruptions, Balaouras said. IT resilience investments, on the other hand, are driven by the need to serve customers and stay competitive, and resiliency is focused on all likely business disruptions.
Which term do you prefer?