Storage Soup


August 18, 2017  11:04 AM

All-flash sales lift NetApp earnings per share, revenue

Garry Kranz

Strong all-flash sales and a new niche acquisition highlighted the NetApp earnings call this week. The vendor on Wednesday reported net revenue of $1.33 billion, up 2% year over year and above the midpoint of its guidance range.

The new pickup is Reykjavik, Iceland-based startup Greenqloud, acquired for an undisclosed sum. As with its purchase earlier this year of storage memory startup Plexistor, NetApp did not disclose acquisition details. Greenqloud’s Qstack self-service stack allows enterprises to build, deploy and manage branded cloud infrastructure across multiple hypervisors and locations.

NetApp’s all-flash business reached a $1.5 billion annualized revenue run rate, reflecting 95% growth in demand for its All Flash FAS, EF and SolidFire all-flash arrays. In prepared remarks, CEO George Kurian said NetApp has been able to “substantially outpace” the growth rate of competing vendors’ flash arrays, despite initially being late to enter the all-flash market. NetApp ranks second in all-flash revenue to market leader Dell EMC.

“The industry is in the early innings of the move from disk-based storage to flash as customers modernize existing datacenters and build next-generation datacenters to lower the total cost of ownership while gaining greater speed and responsiveness from key business applications,” Kurian said.

Kurian said NetApp has secured committed NAND flash supplies to meet requirements for its fiscal year.

NetApp product revenues increase 10%, hyper-converged gear on the horizon
Consolidated gross margins of 63.8% and increased revenues helped boost NetApp earnings per share (EPS) to 62 cents, seven cents higher than anticipated. Wall Street analysts had pegged NetApp’s revenue at $1.32 billion and its EPS at 55 cents.

NetApp product revenue was $723 million, up 10% and marking the third consecutive quarterly increase. Revenue from maintenance and services contracts fell 5% to $602 million, which NetApp blamed on changes to service pricing, several years of declining product revenue and “renewal execution issues” in 2017, perhaps related to a disruptive software upgrade to its clustered Data ONTAP operating system.

Kurian took the helm at NetApp in 2015 and charted a strategy to expand its footprint in all-flash, converged infrastructure and hybrid cloud services. At the time, development of NetApp’s all-flash FlashRay scale-out product had dragged for years, appearing only as a single node in limited availability. NetApp scrapped FlashRay when it acquired SolidFire in 2015.

Since abandoning plans to build an EVO: RAIL system in partnership with VMware, NetApp also has been conspicuously absent from the hyper-converged market. Kurian said that will change later this year.

“We have already transitioned our business away from the declining segments to the data-driven, high-growth segments of all-flash arrays, converged infrastructure, and hybrid cloud,” he said. “We will further expand our opportunity with the general availability of our hyper-converged solutions,” based on SolidFire technology.

NetApp seems to have weathered being late in the all-flash and hyper-converged sectors. It closed the quarter with $250 million in operating cash flow, a 10% increase from the previous year, and free cash flow in the quarter of $214 million, or roughly 16% of net revenues. It is sitting on $5.3 billion in cash and liquid investments, and that’s after buying back $150 million in shares and paying out $54 million in cash dividends.

It also is using the money for strategic acquisitions. Kurian said Greenqloud’s Qstack technology will “augment” development of NetApp Data Fabric for delivering hybrid cloud services.

Kurian used the NetApp earnings call to highlight a recently announced Microsoft partnership. Engineers from the two vendors are integrating NetApp’s Data Fabric to support automated storage tiering and backup technologies in Microsoft Azure, Azure Stack and Office 365. Joint product announcements with Microsoft are planned for the NetApp Insight user conference in October.

Estimates of NetApp earnings per share for the second quarter range from 64 cents to 72 cents. Net revenue is expected to range between $1.31 billion and $1.46 billion.

August 16, 2017  11:33 AM

Unitrends backup and recovery gets hyped for Nutanix AHV

Paul Crocetti

Unitrends backup products will go where some vendors have gone before, supporting the Nutanix Acropolis Hypervisor, while adding a cloud twist.

The company’s Recovery Series backup appliances and Unitrends Backup virtual appliances will feature integration for the Acropolis Hypervisor (AHV). Unitrends is extending its core data center backup and recovery capabilities for Nutanix to the purpose-built Unitrends Cloud.

That cloud backend separates Unitrends from other Nutanix partners, said Joseph Noonan, vice president of product management. Unitrends also offers the flexibility to protect all hypervisors that run on Nutanix; beyond AHV, it supports the VMware, Hyper-V and Citrix XenServer hypervisors.

Organizations can back up from Nutanix appliances directly to Unitrends appliances or to an external NAS device.

Veeam, Commvault, Rubrik and Comtrade Software are among the other data protection vendors that recently launched or will launch backup for Nutanix AHV.

Unitrends is looking to broaden its customer base with AHV support. Noonan said only a small percentage of Unitrends’ 19,000 customers use Nutanix. The Unitrends backup line has a lot of midmarket customers, and Noonan said he hopes the AHV integration brings in more small to medium enterprises.

Noonan said he is seeing organizations that are early adopters of newer technologies going hyper-converged.

“It significantly reduces footprint for them,” he said. “It’s more about TCO and simplicity.”

He said customers are also looking to reduce infrastructure costs of VMware licensing.

Nutanix executives said at the vendor’s .NEXT 2017 user conference in June that using AHV can help customers save money by avoiding VMware enterprise license agreements, even if Nutanix HCI software and appliances are considered pricey. Nutanix offers AHV as part of its hyper-converged platform with no licensing costs.

On the cloud level, Unitrends backup also integrates with Microsoft Azure and Amazon Web Services. But there are gaps with the big cloud providers, especially as they relate to small and medium enterprises, Noonan said.

“We see the Unitrends Cloud being a better fit,” Noonan said, pointing to stronger service-level agreements, holistic support, scalability, and total cost of ownership and cost predictability.

Unitrends has been in the Nutanix Elevate Technology Alliance Program since October, supporting joint customers.

“Now we’re extending it more to integration,” Noonan said.

The Unitrends backup integration with Nutanix AHV will be available later this year.


August 15, 2017  10:24 AM

MozyEnterprise backup tool finds a new key for security

Paul Crocetti

Dell EMC’s Mozy has unlocked a new encryption key security feature for its enterprise backup product.

MozyEnterprise now provides support for the Key Management Interoperability Protocol (KMIP), which automatically generates per-user encryption keys that can be managed through an on-premises key management server (KMS).

The update features a “single pane of glass to manage all those encryption keys across all applications,” said David Hartley, product management consultant in research and development at Mozy.

MozyEnterprise now has more automated, granular encryption key management, Hartley said.

Mozy backup previously offered three other encryption key options:
• Mozy default encryption key: Mozy assigns an encryption key to users, stores and manages it.
• Personal encryption key: Each user manually creates a unique personal encryption key.
• Corporate encryption key: A Mozy administrator can create a key for all users in the company or a unique one for each user group.

MozyEnterprise backup customers most often used the corporate encryption key method.

The key management under Mozy backup had leaned traditional, but now it’s more isolated and controlled, said Robert Rhame, research director of backup technologies and storage for Gartner.

“This updates their methodology, silos individual users to protect them and gives them centralized control while having granularity,” Rhame said.

Dell EMC claims more than 900 customers for MozyEnterprise, a cloud backup product that includes file sync and mobile access. It’s aimed at large companies, ones with full-time IT staff and thousands of endpoints.

A couple of Mozy’s large customers had requested the KMS option. Hartley said Mozy was happy to oblige because of the trend in enterprise IT towards the centralized management of encryption keys across multiple applications using a KMS.

The KMS enables backup administrators to create and manage per-user local encryption keys.

“At-rest security is better because you have an encryption key per user,” Hartley said.
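
Mozy has not published client code for this feature, but the shape of a KMIP workflow is easy to sketch. Below is a minimal, hypothetical example using the open source PyKMIP library against a generic KMIP-compliant key management server; the hostname and key name are invented for illustration, and this is not Mozy's actual implementation.

    # pip install pykmip -- assumes a reachable KMIP-compliant KMS
    from kmip.pie.client import ProxyKmipClient
    from kmip.core import enums

    # Hypothetical connection details; real deployments configure
    # certificates and endpoints in pykmip.conf.
    client = ProxyKmipClient(hostname='kms.example.internal', port=5696)

    with client:
        # Create a per-user 256-bit AES key on the KMS. Only the key ID,
        # not the key material, needs to live with the backup metadata.
        key_id = client.create(
            enums.CryptographicAlgorithm.AES,
            256,
            name='backup-key-user-jdoe',
        )
        key = client.get(key_id)   # fetch to encrypt/decrypt that user's backups
        client.destroy(key_id)     # destroying it revokes everything it protected

The appeal of the protocol is exactly the “single pane of glass” Hartley describes: the same server and the same calls manage keys for every KMIP-aware application, not just backup.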

KMIP is now generally available for MozyEnterprise, for free, on Windows. Mac support is expected soon.

Mozy backup finds home in Dell EMC

Mozy is part of the Dell security suite, Hartley said. EMC bought Mozy backup in 2007. Dell bought EMC in 2016.

Rhame noted that Dell EMC does a solid job with its security overall.

“This is just an additional layer of isolation,” Rhame said of the MozyEnterprise update. “Key management was a little bit — but not much — behind.”

The Dell-EMC merger has worked well so far for Mozy because EMC had an enterprise focus, and Dell brings more consumers and SMBs to the table, Hartley said. In addition, Mozy is recommending SafeNet, a Dell technology partner, for the KMS aspect.

Rhame said it’s refreshing to see this additional feature from Mozy.

“It gives you an indication that this product has legs inside the new Dell EMC,” Rhame said. Conversely, Dell EMC in April spun out its Spanning cloud-to-cloud backup to Insight Venture Partners.

Rhame said he sees Mozy positioning itself toward more remote and branch office (ROBO) backup.

“A lot of endpoint backup vendors are moving in that direction,” towards ROBO, Rhame said.

For example, Druva went ROBO with its Phoenix product.

Mozy backup products also include MozyPro for smaller businesses and MozyHome for personal use.


August 11, 2017  11:58 AM

Flash Memory Summit 2017: Flash really on fire

Dave Raffo

Any claims that “flash is on fire” at Flash Memory Summit 2017 this week drew awkward glances, nervous laughs or groans. That’s because one flash system literally caught fire, causing the exhibition hall at the Santa Clara Convention Center to close for the entire show.

The Innodisk booth caught fire hours before Flash Memory Summit 2017 opened Tuesday morning.  Damage from the fire and water from the sprinkler system that doused it prompted fire marshals to order the exhibition floor closed for the entire three-day show.

The show went on, with meetings and dozens of keynotes and panel sessions discussing all things flash for three days. Product launches went out as scheduled, but the closing of the exhibit hall disappointed vendors who had planned demos of new and emerging products.

Fire marshals have not identified the cause of the fire.

Demonstrations that never happened included the Kaminario K2.N NVMe array due to ship in spring of 2018, and E8 Storage’s shipping D-24 rack-scale NVMe array as well as its coming X24 arrays. Newcomer Liqid wanted to show off what it calls a bare-metal composable infrastructure system using hardware from OneStop Systems.

Other products scheduled for demos included Toshiba NVMe over Fabric software, several new Intel SSDs, Mellanox NVMe over Fabrics devices, Everspin 1 GB and 2 GB DDR4 form factor MRAM devices, and a host of Samsung products including a reference “Mission Peak” 1U server that can store 576 TB of SSD capacity with new form factor 16 TB drives.

“We wanted to show that we’re real, and our stuff is battle tested,” said Julie Heard, E8 Storage’s director of technical marketing.

Flash Memory Summit 2017 wasn’t a complete waste for E8. The team won a best-of-show award for most innovative flash memory technology and showed off its Game of Thrones-knockoff “Game of LUNs” poster.

Other notable Flash Memory Summit 2017 award winners included Western Digital for NAND flash, CNEX Labs and Brocade for storage networking, Excelero for software-defined storage, and Attala Systems Inc. for storage system.


August 10, 2017  9:09 AM

Primary Data DataSphere upgrade follows funding grab

Garry Kranz

Primary Data has beefed up storage analytics and cloud migration in its DataSphere virtualization platform. Now the startup is ready to dip into a fresh stash of cash totaling $40 million to heighten its profile in enterprise data storage.

Primary Data DataSphere 2.0, released this week in early access, builds on previous editions oriented mostly toward application development. The latest version embeds an artificial intelligence-based storage analytics engine that automatically moves inactive data to Amazon S3-compatible cloud object stores.

If the data once again becomes active, DataSphere transparently retrieves it from the cloud for access on local storage.

“We are able to give storage awareness to an application. Normally, you would have to write (code) for that,” Primary Data CEO Lance Smith said.

A policy catalog in 2.0, known as Objective Expressions, allows customers to prescribe the characteristics that can be applied to all data or to an individual file. To move data between cloud platforms, users need to change only the objectives for the data. Primary Data DataSphere then moves the data to the appropriate storage target.
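
Primary Data has not published the Objective Expressions syntax, but the placement logic it describes is straightforward to illustrate. The toy Python sketch below, with hypothetical tier names and attributes that are not Primary Data's actual API, shows the idea: declare what the data requires, and the engine picks the cheapest storage target that satisfies it. Relax the objective and a file drifts to object storage; tighten it and the file moves back to flash.

    # Illustrative only; not Primary Data's actual API.
    from dataclasses import dataclass

    @dataclass
    class Objective:
        max_latency_ms: float    # performance requirement
        max_cost_per_gb: float   # cost ceiling

    @dataclass
    class Tier:
        name: str
        latency_ms: float
        cost_per_gb: float

    TIERS = [
        Tier('nvme-flash', latency_ms=0.2, cost_per_gb=0.50),
        Tier('sata-disk', latency_ms=8.0, cost_per_gb=0.05),
        Tier('s3-object', latency_ms=120.0, cost_per_gb=0.02),
    ]

    def place(obj: Objective) -> Tier:
        # Cheapest tier that still meets the stated objectives.
        fits = [t for t in TIERS
                if t.latency_ms <= obj.max_latency_ms
                and t.cost_per_gb <= obj.max_cost_per_gb]
        return min(fits, key=lambda t: t.cost_per_gb)

    print(place(Objective(500.0, 0.10)).name)  # inactive data -> s3-object
    print(place(Objective(1.0, 1.00)).name)    # hot data -> nvme-flash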

“We traditionally have gone after the development and testing space, which are usually small deployments. But people are finding that our technology is so powerful that many of them are putting it in production (as a way) to save lots of money” on storage, Smith said.

Data protection and cloud mobility highlight 2.0 release

Primary Data claims DataSphere can manage and move billions of files and objects. The software consumes a customer’s block storage and converts it to the file namespace.

The enhanced storage analytics examine historical usage patterns to determine which tier of storage best meets an application’s requirements. DataSphere determines the optimal data placement based on customer-defined attributes relating to cost, data primacy or performance.

Primary Data DataSphere 2.0 includes assimilation of array-based snapshots, allowing customers to use the snapshots both to preserve changes in real time and to serve as a disaster recovery tool. DataSphere accesses snapshot APIs of underlying storage arrays to clone space-efficient copies on a WAN or public cloud. The vendor claims this feature allows it to mix and match different vendors’ storage in the same share. Additional data protection in 2.0 includes metadata backup and restore and portal protection.

Primary Data DataSphere 2.0 supports cross-domain mapping and fully integrates with Windows Active Directory and Windows Access Control Lists, allowing mixed shares between Linux and Windows.

New investment earmarked for expansion of sales teams in U.S.

Along with DataSphere’s revamped storage analytics, the data management specialist announced up to $40 million obtained in separate funding transactions. The proceeds boost the startup’s total investment haul to nearly $100 million since its 2016 launch.

Primary Data received $20 million in venture funding in a Series B round led by Pelion Venture Partners, with participation from existing investors Accel Partners and Battery Ventures. Up to $20 million in additional funding is available through a line of credit.

Smith said Primary Data will expand sales teams in growing markets, particularly Europe and North America.

“We have been hiring in North America and Europe since the start of this year to vastly grow our presence in vertical markets. We had been investing heavily in engineering up to now,” Smith said.


July 31, 2017  1:55 PM

NetApp cloud strategy includes SolidFire, SDS

Dave Raffo

Created in the 20th century to sell storage to engineers, NetApp has survived for 25 years to remain the largest standing data storage company not tied to a server vendor. Founder Dave Hitz credits that survival to the company’s “enormous capacity to change” as the IT landscape changes.

“People ask me, why are you still alive after 25 years? That’s a very real question,” Hitz, currently a NetApp executive vice president, said during a press event last month. “NetApp has survived 25 years because we have an amazing ability for radical change when we need it.”

Hitz said his company has previously pivoted to survive disruptions caused by the rise of the internet, the internet crash and virtualization. He said all posed threats to NetApp when they first developed, and NetApp adjusted its storage to take advantage. Now the NetApp cloud pivot is the current adjustment that can make or break the company.

“Each of these transitions were things that were going to kill us,” he said. “Here we are again, possibly the biggest transition of all, into cloud computing and again it’s the thing that’s going to kill us. We hear, ‘We’re all doomed, everything’s going to move into the cloud, there’s no room for NetApp.’ I don’t think it’s true. It could be true if we don’t respond.”

Of course, you don’t have to be a bull-castrating genius to figure out the cloud is the key for today’s storage companies. Every large storage company has the cloud in its strategy and barely a month goes by when we don’t see a startup come along promising to provide cloud-like storage for enterprises, and to connect on-premises storage to public clouds.

So what is the NetApp cloud strategy?

Hitz said NetApp “way underestimated how pervasive the cloud would be on all enterprise computing,” just like it misjudged how flash would impact enterprise storage. (NetApp originally bet on flash as cache instead of solid-state drives in storage arrays before bringing out its successful All-Flash FAS array in 2016.) But he said the NetApp cloud plan consists of doing what it does best — data management.

“We think data is the hardest part [of the cloud],” Hitz said. “It is very easy to go to Amazon or Azure, fire up 1,000 CPUs, run them for an hour or day or week, [and] then turn them off. It’s not easy to get them the data they need, and after they make a bunch of data, it’s not easy to get it back and keep track of it. Those are the hard parts. And that’s right in the center of our wheelhouse.”

Channeling NetApp’s history, CEO George Kurian said he saw his job when he took over in 2015 as leading the company through transition. “As the world around us changed, NetApp needed to change fundamentally,” he said.

He sees a strong NetApp cloud strategy as the key to initiating that change. “Many customers are engaged with us to help them build hybrid architectures, whether it’s between on-prem and public cloud, between two public clouds or migrating one of their sites to a colocation,” Kurian said.

Kurian cites SolidFire — an all-flash array platform built for cloud providers — as the “backbone of the next-gen data center.” NetApp acquired SolidFire in 2016 as much for its cloud platform as to fill a need for all-flash storage.

NetApp cloud software-defined storage (SDS) and services include Private Storage for Cloud, ONTAP Cloud, Data Fabric, AltaVault cloud backup and others. NetApp also has a Cloud Business Unit, which includes development, product management, operations, marketing and sales.

Senior vice president Anthony Lye joined the company last March to run the NetApp Cloud Business Unit. “The whole purpose of my organization is to build software that runs on hyper-scale platforms,” Lye said. “The software can be consumed by NetApp or non-NetApp customers, on hybrid or multi-cloud environments.”

The NetApp cloud portfolio will go a long way in determining if the vendor gets to keep its survivor status.


July 31, 2017  10:07 AM

Commvault earnings: Sales gain ahead of product launches

Sonia Lelii

Commvault Systems is inching back toward profitability after another strong quarter of growth as it continues to build out its business beyond its traditional data backup model. The company’s leaders identified the ability to land more and larger Commvault software deals as the key to greater growth.

Commvault generated $166 million in total revenue for the first 2018 fiscal quarter. That represents a 9% year-over-year increase, though it is down sequentially from the $172.9 million in total revenue the previous quarter. Commvault software revenue came in at $74.8 million, an increase of 18% year-over-year and down 4% sequentially.

The company cut its quarterly loss to $284,000 compared to the $2.6 million loss in the same quarter last year.

“We have successfully recast Commvault and repositioned our business to regain our momentum,” said CEO Bob Hammer on the Commvault earnings call last week.

Hammer said Commvault’s enterprise deals increased 10% year-over-year, with the average enterprise deal size ticking up 4% to approximately $282,000 for the quarter. “Our ability to achieve our growth objectives is dependent on a steady flow of $500,000 and $1 million plus deals,” Hammer said.

Brian Carolan, Commvault’s chief financial officer, said enterprise deals of more than $100,000 represented 63% of the Commvault software revenue.

Commvault earnings forecasts include copy management, hyper-convergence

Hammer provided a preview of Commvault software additions due to launch this year. He cited web-scale appliances for the midmarket and enhancements in active copy management, which targets orchestration of complex tasks for databases and application workloads so they can be ported between tiers of on-premises and cloud storage. This is considered critical for disaster recovery and development-test environments.

On the previous Commvault earnings call in May, Hammer discussed the company’s intention to deliver hyper-converged reference architectures for secondary storage along with an enhanced platform for the cloud, new service offerings for endpoints and Commvault managed services. It also intends to enhance its flagship Commvault Data Platform with business analytics.

“Early this fall, we plan to complement our enterprise web-scale hyper-converged solutions with Commvault web-scale appliances for the midmarket,” Hammer said. “These solutions will be fully indexed copies for backup data protection, snapshot, replication, and archive and copy management. They will have instant data accessibility, and be highly orchestrated for workload portability across hybrid IT with higher functionality, security and scalability.”

Al Bunte, the company’s chief operating officer, said Commvault is “packaging some software and some services together” that will help customers address the General Data Protection Regulation deadline in 2018.

“Obviously, it will first hit the European theaters, primarily enterprise accounts, and that will happen between now and our GO conference in November,” Bunte said on the Commvault earnings call.


July 31, 2017  6:59 AM

‘Right to erasure’ law proves major European Union GDPR hurdle

Randy Kerns

The European Union General Data Protection Regulation (GDPR) 2016/679, which takes effect in May 2018, has the potential to greatly disrupt IT storage and data management operations, along with other aspects of business.

Among the 99 articles in the GDPR, Article 17 may have the most impact on IT professionals.

Article 17 is the “right to erasure,” commonly called the “right to be forgotten.” Briefly, it means an individual can demand that a data controller erase all of their personal data without undue delay and at no cost to the person making the request. That covers all of the individual’s personal data — files, records in a database, replicated copies, backup copies and any copies that may have been moved into an archive. This right to erasure requirement is enough to make even a young IT pro consider early retirement.

The terms data controller and data processor must be understood in relation to GDPR. A data controller is “the individual or legal person who controls and is responsible for keeping and using personal information on computer or in structured manual files.” This means the IT function in a company or organization.

A data processor is the group or organization that “holds or processes personal data, but does not exercise responsibility for or control over the personal data.” This applies to a cloud where the processing is done or an IT data center where data resides. The data center can be internal or outsourced. The data controller is responsible for deciding to delete the personal data and for verifying it has been erased; the data processor executes the erasure operations but does not own the decision process. The data processor cannot hold copies of data or make them available for other uses.

A few important points to consider:

  • There is no escape clause or excuse for avoiding the right to erasure process, so organizations must plan for it.
  • The “without undue delay” interpretation is measured in days and not months. It certainly does not mean waiting until the technology becomes unreadable or older backup copies get overwritten.
  • If personal data was sent to another organization, Article 17 requires the data controller to tell the other organization to erase the personal data.
  • The requirement is more global than you may think. It applies to the personal data of individuals residing in the European Union (EU), and not where data is stored or where a company or organization is located. To do business in the EU, you must guarantee the privacy protection of individuals.
  • The penalties are enormous. Even for one instance of not erasing an individual’s data, the penalty is up to 4% of annual global turnover or €20 million (currently about $23.3 million), whichever is greater; a company with €2 billion in annual turnover thus faces exposure of up to €80 million. There is a tiered approach to fines, and failure to prove personal data was erased draws a fine of 2% of turnover or €10 million. This will force executives to be proactive with IT in ensuring they put the erasure capabilities and verification in place.

Right to erasure implications for IT

The concept of tracking down all copies of data and erasing a specific individual’s personal data seems almost impossible. Consider the simple case of personal data in a database. How many copies of that database exist and where are they? How many DBAs have made extra copies for testing and extra protection? This will be an intensive, time-consuming task. Even worse, it is not a revenue-producing function.

No specific solution exists in general usage today. Using backup catalogs is not a complete answer because other copies can be made outside of the visibility of the backup or copy data management software.

Some application vendors have put forth the practical approach of encrypting each individual’s personal data and maintaining a person-specific encryption key. Only the application software would have the visibility and knowledge of what personal data is needed to control the encryption. This would be an effective means for erasure because destroying the personal encryption key would erase all copies. This would eliminate the need to process all copies — backup, replicated, privately held and so on — for the erasure.
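
That key-destruction idea, often called crypto-shredding, is simple to sketch in code. Below is a minimal illustration in Python using the cryptography library; the in-memory dict stands in for a hardened key management server, and all names are hypothetical.

    # pip install cryptography
    from cryptography.fernet import Fernet

    user_keys = {}  # stand-in for a real key management server

    def encrypt_for_user(user_id, record):
        # Every piece of a user's personal data is wrapped with that user's key.
        key = user_keys.setdefault(user_id, Fernet.generate_key())
        return Fernet(key).encrypt(record)

    def erase_user(user_id):
        # Destroying the key renders every copy of the ciphertext, in backups,
        # replicas and archives alike, permanently unreadable.
        del user_keys[user_id]

    token = encrypt_for_user('jdoe', b'name=J. Doe;account=...')
    erase_user('jdoe')
    # Decryption is now impossible: the key no longer exists anywhere.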

There are obvious problems with the approach, though. It would require application changes and create issues with data that is shared between applications or used for other purposes. But these problems are relatively minor. New processor capabilities to do encryption, such as the IBM z14 with new, high-performance encryption and Intel Skylake x86 technologies, remove the performance impacts on applications.

Encrypting data at the application level where the content is understood makes sense, but there are downstream consequences. Data manipulation processes such as compression and deduplication would be significantly impaired if not eliminated. The loss of those data reduction techniques would increase storage capacity requirements. Data reduction could still be accomplished, but would have to move up to the application prior to the encryption to have the same effect. Discovery of information about data stored would also have to go through the applications rather than trolling the data itself.
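
The compression penalty is easy to demonstrate. In the sketch below (again Python, with the cryptography library), a highly repetitive plaintext compresses to a tiny fraction of its size, while its AES-GCM ciphertext, which is statistically indistinguishable from random bytes, barely compresses at all.

    import os
    import zlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    data = b'customer-record:ACME Corp;' * 4096      # repetitive, ~104 KB

    aes = AESGCM(AESGCM.generate_key(bit_length=256))
    ciphertext = aes.encrypt(os.urandom(12), data, None)

    print(len(zlib.compress(data)))        # a few hundred bytes
    print(len(zlib.compress(ciphertext)))  # still roughly the full ~104 KB

Deduplication suffers the same way: two users storing identical content produce entirely different ciphertexts under per-user keys, so the blocks never match.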

The magnitude of the problem of meeting the EU GDPR overall is major, and Article 17, the right to erasure, is almost overwhelming. You may find halfway approaches that work only in certain cases. But beware of these incomplete approaches: they may ultimately cost more to implement and may still result in the extreme fines when their incomplete nature is exposed. The May 2018 date is a hard deadline with no staged adoption. You need to develop a strategy now and plan your implementation.


July 28, 2017  8:04 PM

WD-Toshiba memory sale requires 2-week notice to WD

Carol Sliwa

Toshiba and Western Digital’s SanDisk subsidiary acted on a San Francisco judge’s suggestion today and agreed that Toshiba would give SanDisk two weeks’ notice before closing on a transfer or sale related to their NAND flash memory joint venture.

The WD-Toshiba agreement removes the need for the preliminary injunctive relief SanDisk had sought through the California court system, pending arbitration to determine if the planned WD-Toshiba memory business sale can take place without its consent. Toshiba and WD/SanDisk worked out the agreement prior to a scheduled hearing this afternoon in the San Francisco Superior Court.

Judge Harold Kahn approved the agreement and signed the order that requires Toshiba to publicly announce, within 24 hours, the signing of any agreement that “contemplates a closing” and send a copy to SanDisk by email. Toshiba also agreed to send a written notice by courier to WD’s chief legal officer at least two weeks before any closing occurs. Both agreements apply until 60 days after an arbitration panel is formed.

The agreement does not require Toshiba to recognize SanDisk’s claims of consent rights over the transfer of the Toshiba memory business. Toshiba said the agreement also preserved its objections to the California court having any jurisdiction in the WD-Toshiba dispute.

Tokyo-based Toshiba has been trying to sell its profitable memory business to cover losses from its struggling U.S.-based Westinghouse Electric nuclear power division. In June, the company selected a Japan-backed consortium as the preferred bidder.

But WD has claimed that no sale can take place without its consent. In May, WD filed a request for arbitration and injunctive relief with the International Court of Arbitration operated by the Paris-based International Chamber of Commerce.

Impending Toshiba memory business sale

Western Digital’s SanDisk subsidiary sought a preliminary injunction through the Superior Court of California after Toshiba announced plans to try to strike a mutually satisfactory agreement with the consortium by June 28, when its annual shareholder meeting took place. Toshiba said it hoped to close the transaction by March 2018.

SanDisk filed a claim with the California court to prevent the Toshiba memory business sale without its consent. SanDisk later sought an additional preliminary injunction “to stop retaliatory action” that it claimed Toshiba had taken to block SanDisk’s affiliates from accessing Toshiba’s facilities, databases and networks that are critical to the operation of their NAND flash joint venture.

WD issued a release two weeks ago stating the California court had directed Toshiba not to transfer its interests in the three WD-Toshiba NAND flash joint ventures without “specified advance notice” to SanDisk to ensure the matter would be preserved for arbitration.

WD CEO Steve Milligan, via a prepared statement, claimed the court’s “directive” marked a victory for WD, SanDisk and their stakeholders. “Our entire goal was to preserve and protect our rights through the binding arbitration process, and that’s precisely what the Court has done,” Milligan said.

Yasuo Naruke, a senior executive vice president at Toshiba Corp., said via a prepared statement today that the “mutually acceptable understanding” is effective for a “very limited time” and “recognizes Toshiba’s right to negotiate and sign a definitive agreement for the sale of its memory business.” Toshiba claims it does not require SanDisk’s consent to transfer its memory business.

Naruke stated: “Further, as a practical matter, we don’t expect to close a deal during the period addressed in the order. Closing a transaction of this magnitude would require many months – well beyond the limited timeframe specified in the ruling. Toshiba therefore remains focused on preparing for the ICC (International Chamber of Commerce) arbitration process, which we believe is the appropriate venue to address these issues. We look forward to successfully presenting Toshiba’s position to the tribunal, which we believe will be formed within the next month or so.”

Toshiba also noted in today’s statement that it “remains intent on soon entering into a definitive agreement for the sale of its memory business with one of the bidders.”

Toshiba’s board of directors disclosed last month that it selected a consortium including Innovation Network Corporation of Japan (INCJ), a government-supported investment fund; Development Bank of Japan (DBJ); and Bain Capital Private Equity LP. According to Western Digital, Korea-based chipmaker SK Hynix was also part of the consortium.

Prior to the Toshiba board’s announcement, potential bidders that surfaced through published reports included U.S.-based chipmaker Broadcom, Taiwan-based electronics maker Foxconn, SK Hynix and Western Digital.

During Western Digital’s earnings call Thursday night, Milligan said there had been constructive talks with Toshiba recently. Milligan said there was no “reasonable scenario” of disruption to the NAND supply his company receives from the joint venture.

“We expect to continue to secure wafer output, and from a manufacturing and operations perspective, the JV remains very healthy,” he said.

When asked if he had contingency plans if things did not work out, Milligan said, “I don’t think that that’s a reasonable scenario to be considered. I would find that to be … just a highly unrealistic assumption to assume that.”

Milligan said he and WD CFO Mark Long travelled to Japan last week to talk to Toshiba. “Our discussions were constructive, and we will continue to work to seek a solution that is in the best interest of all parties,” he said. “Rest assured, this has the full attention of our team as we are committed to the continued success of the joint venture.”


July 28, 2017  7:26 AM

Axcient and eFolder merger gathers clouds, enhances data protection

Paul Crocetti

Though they existed in the same general market, Axcient and eFolder had different technologies, different platforms and different customer bases. Now they’re one and the same.

Axcient and eFolder said Thursday that they are merging, bringing together Axcient’s cloud-based disaster recovery and data protection platform and eFolder’s cloud business continuity, cloud file sync and cloud-to-cloud backup. The new company will be called … well, we’re not sure about that yet.

“We plan on converging these technologies to become the best in breed in this category of backup and disaster recovery,” said Matt Nachtrab, who just last week was named CEO of eFolder and will run the combined company. “The technology is really driving the desire of bringing these companies together.”

The eFolder line of products, including backup and disaster recovery platform Replibit, which it acquired last year, is geared toward SMBs. Replibit enables recovery on an appliance or in the eFolder cloud. The vendor also sells Anchor file sync and Cloudfinder cloud-to-cloud backup software.

Axcient is a pioneer of disaster recovery as a service (DRaaS), said Justin Moore, its founder and CEO, who will be chief strategy officer of the combined company. In addition to DRaaS, Axcient provides copy data management, orchestration and automation under a single cloud-based service, Moore said. The vendor brings a more midmarket reach, but also hits SMBs and the enterprise.

Over time, Nachtrab said he expects the combined company’s offerings to feel more like a suite of products than separate lines.

The two companies claim more than 50,000 customers with a reach of 4,000 managed service providers (MSPs) after the merger.

Nachtrab said the new management team has not yet decided on what to call the combined company, though he threw out the idea of calling it Axcient.

“The Axcient brand name is very strong,” Nachtrab said. Either way, he said, the company will likely keep the Axcient or eFolder brands for its products and services.

The idea to merge came when Axcient went looking for funding, a quest that led it to private equity firm K1 Investment Management. K1 owned a controlling interest in eFolder and decided to fund Axcient and bring the two companies under common management.

Axcient and eFolder describe the convergence as a merger rather than one company acquiring the other. They have not disclosed financial details of the transaction.

Integration work ahead

While eFolder hosts its cloud services in its own data center, Axcient uses the public cloud. After the companies’ products are integrated, Nachtrab said eFolder will be able to move its data to the public cloud by using Axcient Fusion technology.

Axcient Fusion, an orchestration and automation software platform that launched last year, targets the midmarket through both MSPs and direct sales. And eFolder sells almost entirely through MSPs.

Kevin Hoffman, an eFolder founder, will serve as CTO of the combined company. Hoffman held that position at eFolder before the merger.

Nachtrab said the combined company has offered jobs to all Axcient and eFolder employees. If all accept, it will have more than 300 employees.

The combined company will keep its four offices for now, Nachtrab said. Axcient’s headquarters are in Mountain View, Calif., with satellite offices in Austin, Texas, and Smolensk, Russia, while eFolder is based in Denver.

The merger is the second convergence of backup and recovery companies this month.  Earlier in July, data protection vendor Arcserve acquired cloud provider Zetta and its backup and recovery technology.

