Quantum CEO Jon Gacek is out following poor sales results for the data protection and scale-out storage vendor last quarter.
The Quantum board named director Adalio Sanchez as interim CEO. Chairman Raghu Rau said he will head a search for a permanent CEO, with the help of an executive headhunter firm.
Quantum Thursday reported revenue of $107.1 million for last quarter, down from $135 million last year and more than $15 million below Wall Street expectations. Quantum lost $7.9 million in the quarter compared to a $4.1 million profit last year.
The results prompted the Quantum CEO change. Gacek joined Quantum through the 2006 acquisition of rival tape vendor ADIC. He had been ADIC’s CFO and assumed that role at Quantum. He was promoted to COO in 2010 and became the Quantum CEO the following year.
Chairman Rau described the quarter as a disappointment “that fell short of all our expectations” and a “very eventful” quarter for Quantum.
The Quantum CEO change was hardly shocking considering moves the company made over the past eight months. After years of up-and-down quarters, Quantum agreed to demands from investor VIEX Capital Advisors to change the board last March, and Rau joined then. IBM veteran Sanchez and Marc Rothman joined the board in May, pushing Gacek off the board. Rau became chairman in August. Quantum added VIEX’s Eric Singer to the board Thursday.
After Rau became chairman, he, Sanchez, Rothman and Alex Pinchev formed a committee to conduct a strategic review of Quantum.
Sanchez said his work on the review gave him a head start as interim Quantum CEO.
“I am hitting the ground running,” said Sanchez, who spent 32 years at IBM and a year at Lenovo.
Rau said Quantum is “intensely focused on taking aggressive actions” to reduce cost and he predicted increased revenue and profitability over the next six months.
Sanchez said Quantum will cut around $35 million in costs over the next year. Quantum also secured $20 million in funding from TCW Direct Lending and PNC Bank to go with $170 million in funding from those lenders a year ago.
Sanchez said the board reviewed Quantum’s strategy, go-to-market model and cost structure. He described StorNext scale-out storage as Quantum’s growth engine and data protection as its profit engine. But while Quantum is looking for LTO-8 to give the tape products a boost, CFO Fuad Ahmad said the vendor must “reorient” its strategy for its DXi disk backup platform. He said Quantum will maintain its partnership with Veeam Software to integrate backup software on DXi and tape products, but will scale back development on the DXi data deduplication appliances.
“We are a small player in that market with less than three percent market share,” Ahmad said. “While it’s a fairly profitable business for us, it is not core to what we want to do long term.”
Sanchez said Quantum will build a software-defined storage business around StorNext and its Rook.io open source project to build cloud-native file, block and object storage.
“We will reposition our company over time as a modern software-defined provider as new products roll out,” Sanchez said.
Sanchez described his priorities over the next 90 days as “re-ignite the sales engine,” reduce costs and “execution, execution, execution.”
Product revenue last quarter slipped to $63.6 million from $88.6 million last year. Scale-out storage revenue of $33.8 million was down from $46.6 million. Disk backup fell from $18.7 million to $11.7 million, and tape automation slipped from $59.7 million a year ago to $52.2 million.
Quantum executives blamed the poor results partly on industry conditions and the failure to close large deals before the end of the quarter. They expect a bit of improvement this quarter but will still fall below last year’s results. For this quarter, Quantum forecast revenue of $120 million to $125 million compared to $133 million last year. Its six-month guidance is for revenue of $250 million to $260 million.
Ransomware attacks on SMBs have increased, according to a recent survey, but backup and disaster recovery platforms can calm data protection fears.
An estimated 5% of SMBs worldwide fell victim to a ransomware attack from the second quarter of 2016 to the second quarter of 2017, according to the “State of the Channel Ransomware Report” released by backup and recovery vendor Datto. About 1,700 managed service providers (MSPs) serving more than 100,000 SMBs provided data for the ransomware report.
Ninety-seven percent of the MSPs report ransomware is becoming more frequent and 99% predict the frequency of attacks will continue to increase over the next two years.
Anxiety is rising. Among MSPs, 90% say they are “highly concerned” about ransomware, up from 88% in 2016, while 38% of SMBs say the same, up from 34% in 2016.
“There’s more of an awareness of ransomware and it being an epidemic,” Datto CTO Robert Gibbons said. However, SMBs remain far less aware of the threat than MSPs are, he said.
While CryptoLocker remains one of the top ransomware strains, the Bad Rabbit strain caused problems globally over the past month.
SMBs need to understand that downtime is often the worst element of an attack. Seventy-five percent of MSPs report their clients experienced “business-threatening downtime” after an attack.
On a positive note, though, reporting is increasing. SMB victims reported about one in three ransomware attacks to authorities, up from one in four incidents reported in 2016.
And fewer SMBs are paying the ransom, according to the report. In 2017, 35% of MSPs report SMBs paid the ransom, down from 41% in 2016. Of those that paid, 15% never recovered their data, according to the ransomware report.
“The word is getting out that if you pay the ransom, sometimes you get your data, sometimes you don’t,” Gibbons said.
A ‘multilayered portfolio’ of protection includes backup
Ransomware is getting smarter. About 30% of MSPs report a virus remained on an SMB’s system after the initial attack and hit again later. And one in three MSPs report ransomware encrypted an SMB’s backup.
So what are SMBs to do?
First of all, backup systems vary in complexity and strength. Copying files to a USB drive is one method, but not a great one. A comprehensive backup and recovery platform that follows a “3-2-1” system (three copies of data, on two different media, with one copy off-site) is much more secure.
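The 3-2-1 rule lends itself to a quick sanity check. The helper below is a hypothetical illustration of the rule as described above, not part of any vendor's tooling:

```python
def satisfies_3_2_1(copies):
    """Check a backup set against the 3-2-1 rule:
    at least 3 copies, on at least 2 distinct media types,
    with at least 1 copy stored off-site.

    Each copy is a (media_type, is_offsite) tuple.
    """
    media_types = {media for media, _ in copies}
    offsite_copies = [1 for _, offsite in copies if offsite]
    return len(copies) >= 3 and len(media_types) >= 2 and len(offsite_copies) >= 1

# A USB-drive-only routine fails the rule...
assert not satisfies_3_2_1([("usb", False)])
# ...while primary disk + local appliance + cloud copy passes.
assert satisfies_3_2_1([("disk", False), ("appliance", False), ("cloud", True)])
```

Three copies on the same medium still fail the check, which is the point of the rule: redundancy in count alone does not survive a single ransomware strain encrypting every reachable volume.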
Backup and disaster recovery is the most effective protection, according to MSPs in the ransomware report, followed by employee cybersecurity training, anti-virus software, email/spam filters, patching applications and ad/pop-up blockers.
If backup and recovery is in place, 96% of MSPs report SMBs fully recover from ransomware, according to the report. And 95% of MSPs said they feel more prepared to respond to an SMB infection.
But ransomware protection goes beyond having just one safety element in place. For example, 94% of MSPs report ransomware successfully bypassed anti-virus software.
“As no single solution is guaranteed to prevent ransomware attacks, a multilayered portfolio is highly recommended,” the report said.
MSPs blamed a lack of cybersecurity training as the leading reason for a successful ransomware attack, followed by phishing emails and malicious websites/ads.
“Employees today are largely unprepared to defend themselves against these attacks,” the ransomware report said.
Gibbons said in one type of education, a company sends out a fake phishing email, and anyone who clicks a link in it is diverted to ransomware training. Just one employee in a company of hundreds who clicks on a bad link can cause a business possibly irreparable harm from a ransomware attack.
“There are more tools available to up your minimum game,” Gibbons said.
SMBs need to stay on top of the issue, because attacks are constantly evolving. For example, in 2017, 26% of MSPs reported ransomware infections in cloud applications. Gibbons said he thinks cracking Salesforce is at the top of attackers’ radar in their continuing quest to wreak havoc among SMBs.
Like many storage and data center vendors, hyper-converged vendor Nutanix is taking the next steps to give its platform multi-cloud capabilities.
Nutanix today laid out its plans to add services for developers to its Enterprise Cloud OS software. These include a Nutanix Acropolis Object Storage Service and Acropolis Cloud Compute. The hyper-converged pioneer will also add a Nutanix App Marketplace to its Calm cloud application and orchestration service.
“The Nutanix roadmap is evolving, looking at public cloud services as a deployment model for applications,” said Greg Smith, Nutanix vice president of product and technical marketing. “We want our customers’ data center to operate like a public cloud. This is a continuation of the Nutanix journey to build an enterprise cloud that provides much of the same capabilities that customers expect from public cloud services, but in their own data centers.”
The new Nutanix Acropolis features will not be available until 2018. Smith said the marketplace will start in 2017 with 20 validated pre-defined app blueprints, and add “a significant number” soon after.
Nutanix will provide an Amazon Web Services S3-compatible API to help application development teams use Nutanix storage for on-demand object storage as they would use the public cloud. Smith said the Nutanix Acropolis Object Storage Service can store billions of objects in a single namespace.
“People want to write to S3 through a standard API,” Smith said. “We’ve embraced that interface. Now the Nutanix Cloud Storage OS can store and manage all those large unstructured data files with a single namespace.”
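The appeal of an S3-compatible interface is its flat namespace: applications address billions of objects by key, with no directory tree to scale. As a toy illustration (my own stand-in, not Nutanix code), here is an in-memory store showing those semantics, where keys look hierarchical but "folders" are just prefixes applied at listing time:

```python
class ToyObjectStore:
    """Minimal in-memory stand-in for an S3-style object store.

    Keys such as "media/video/a.mp4" look hierarchical, but every
    object lives in one flat namespace; a "folder" is nothing more
    than a shared key prefix filtered at listing time.
    """

    def __init__(self):
        self._objects = {}  # key -> body, one flat dictionary

    def put_object(self, key, body):
        self._objects[key] = body

    def get_object(self, key):
        return self._objects[key]

    def list_objects(self, prefix=""):
        # Prefix listing is how S3-style APIs simulate directories.
        return sorted(k for k in self._objects if k.startswith(prefix))


store = ToyObjectStore()
store.put_object("media/video/a.mp4", b"...")
store.put_object("media/video/b.mp4", b"...")
store.put_object("logs/app.log", b"...")
print(store.list_objects(prefix="media/"))
# → ['media/video/a.mp4', 'media/video/b.mp4']
```

In practice, an application team would point a standard S3 client (boto3's `endpoint_url` parameter, for example) at the private endpoint instead of AWS; the calls themselves stay the same, which is the portability Smith describes.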
Nutanix Acropolis Cloud Compute (AC2) consists of compute-only nodes that can run in a Nutanix cluster. AC2 nodes are for CPU-intensive applications such as in-memory analytics, large-scale web services, and Citrix XenApp. Most hyper-converged nodes include storage and compute. Nutanix does already offer capacity-only storage nodes but has not had compute-only nodes.
Smith said Nutanix will have several AC2 configuration options, and customers will still require a minimum of three storage nodes in a cluster. AC2 is built on Nutanix’s AHV hypervisor and will initially be available only on Nutanix-branded appliances. Smith said Nutanix hopes its OEM hardware partners Dell EMC and Lenovo will eventually make compute-only nodes available.
“This is to provide additional compute resources to support apps and services that require a lot of CPU but not storage with it,” Smith said. “The new compute resources will benefit application developers as well as infrastructure managers.”
The Nutanix App Marketplace will include applications defined via standards-based blueprints that developers can quickly consume in self-service fashion. These published validated blueprints will include developer tools such as Kubernetes, Hadoop, MySQL, Jenkins and Puppet. Nutanix customers can also publish apps on the marketplace to share them internally.
Hyper-converged vendor Pivot3 said its Acuity appliance is significantly expanding its enterprise footprint, with one-third of its revenue coming from deals of $500,000 or more last quarter.
The private company today said its average sales price increased 25% and overall revenue increased 50% from the previous quarter. Pivot3 reported a record number of million-dollar orders in the quarter. Now it seeks to expand deeper into the enterprise by tailoring its HCI software for cloud implementations and broadening its distribution strategy with partners Lenovo and Arrow Electronics.
Along with boosting performance on Pivot3 Acuity with NVMe, the vendor is concentrating on solving the problems of moving data in and out of the cloud. Pivot3 Acuity’s quality of service is designed to run multiple applications, which will help cloud customers. Pivot3 said deals supporting multiple use cases on its appliances more than doubled last quarter.
But data movement is another issue.
“It’s a long process, but there’s a massive economic gain if we get it right,” Pivot3 CEO Ron Nash said. “The cloud’s not monolithic. There are many clouds with many different characteristics.”
He said mastering the cloud requires a policy management engine, an orchestration engine and an analytics engine. Pivot3 has the policy management piece and has started on orchestration to move data in and out of the cloud. The analytics will determine if the policy decisions are working.
“It’s easy to say, ‘There is the goal line, that’s where we want to get to. Then let’s lay out the steps,’” Nash said. “If clouds let you spin up and spin down quickly and take peak of peaks type demand, that’s a valuable service and something you are willing to pay a lot for.”
Nash said it will take a few years to get all the pieces down, but he considers Pivot3 ahead of the other HCI vendors. In the meantime, Pivot3 is expanding its distribution process.
Pivot3’s branded appliances run on Dell servers, but it also has an OEM deal with Lenovo and a channel partnership with Cisco. Pivot3 chief marketing officer Bruce Milne said Pivot3 considers Lenovo its key partner, and last month signed a distribution deal with Arrow Electronics to sell Pivot3 Acuity software on Lenovo. Milne said around 15% of Pivot3 revenue comes from software-only deals.
Pivot3 has a go-to-market strategy similar to Nutanix’s. Nutanix sells its own branded appliances, complemented by OEM deals with Dell EMC and Lenovo and meet-in-the-channel deals with Cisco and Hewlett Packard Enterprise resellers. These types of partnerships make hyper-converged a market full of coopetition. Dell EMC, Cisco and HPE all sell hyper-converged appliances with their own software, too. Dell EMC uses VMware vSAN software on PowerEdge servers, Cisco acquired Springpath for its UCS-based HyperFlex appliances, and HPE bought hyper-converged startup SimpliVity for its software.
“We see Cisco making a lot of noise, but no accounts except for the Cisco base,” Milne said. “HPE is starting to make noise, trying to differentiate its hardware by embedding SimpliVity software in ProLiant servers. Dell EMC is coming on strong, which I’m sure concerns Nutanix. I can’t count on a competitor as a supplier on my platforms.”
SwiftStack’s leadership team appreciated being selected a visionary in Gartner’s recent Magic Quadrant for distributed file systems and object storage. The vendor even put out a press release celebrating Gartner’s inclusion of SwiftStack object storage in the report.
But SwiftStack’s chief marketer said the object storage neighborhood is not where the vendor wants to live anymore.
“We’re a software company, not an object company,” SwiftStack VP of marketing Mario Blandini said. “We just happen to have our own object storage system on premises.”
So Blandini admits SwiftStack’s cloud storage software, based on the OpenStack Swift object storage project, is indeed object. He just thinks that’s not chic. Or at least SwiftStack customers and potential customers don’t think SwiftStack object storage has the right ring to it.
“People say to us, ‘How can an object storage company be doing something cool? Don’t object storage companies all suck?’” Blandini said. “No one loves object storage because it doesn’t do enough to be transformative. There’s nothing wrong with NAS, why replace it?”
Blandini also wants to distance SwiftStack object storage from OpenStack, even though SwiftStack is the lead contributor to the OpenStack Swift project. “SwiftStack is too often known as an OpenStack company,” he said. “In reality, we are a data management platform across multiple clouds.”
The multiple cloud part, more commonly known as multi-cloud storage, is what’s cool now. Ask just about any storage vendor out there, because we’re hearing that phrase a lot more these days.
“We like the concept of multi-cloud,” Blandini said. “Do you want to be locked into one cloud provider, or be able to put your data closer to the user?”
SwiftStack object storage includes a file system gateway on top of its native object storage software. Earlier this year SwiftStack beefed up its Cloud Sync feature for moving data in and out of Amazon Web Services and the Google Cloud Platform as part of the multi-cloud plan.
This week SwiftStack added policy-based auto tiering, the ability to use erasure coding across regions, client capability to access objects in private clouds as if they were on premises, and more granular policies for determining in which nodes and regions data should reside.
Blandini said these features are merely setting up a significant product release coming around AWS re:Invent this month. He’s keeping quiet on details for now, but you can expect it to center around managing data on multiple clouds (just a wild guess).
“We architected our product to be cloud from the beginning,” he said. “I’m not saying this will kill NAS and there will be no more Fibre Channel. There always will be those things. But there’s always room for new types of storage.”
Or new labels for storage, anyway. So say goodbye to SwiftStack object storage and hello to SwiftStack multi-cloud storage.
Cloud NAS vendor Elastifile has struck an OEM deal with Dell EMC, one of the startup’s first investors.
Dell EMC will integrate Elastifile Cloud File System and CloudConnect cloud transfer and object storage tiering software on PowerEdge servers. Dell EMC will sell the appliances as part of the Dell EMC OEM Solutions program, with Elastifile providing software support. The appliances include the Elastifile license and three years of support. The vendors will formally disclose the deal Thursday.
Elastifile came out of stealth in April with its scale-out file system designed for flash hardware that spans on-premises and cloud storage. Until now, it has sold its software standalone. Andy Fenselau, Elastifile vice president of marketing, said the OEM deal makes sense because many of the startup’s early customers are using PowerEdge servers.
“You can put it on any standard server,” Fenselau said of Elastifile’s software. “But we were finding as we were ramping our business that many customers and many of our partners are joint customers and joint partners with Dell. They really wanted the Easy Button. They wanted a pre-integrated, pre-bundled solution for their on-prem deployments that they could buy from their standard OEM, in this case Dell.”
The Dell EMC Elastifile appliances scale from four to 100 nodes, with performance and capacity models available. The startup claims the performance model supports from 800,000 to 26 million IOPS, with bandwidth ranging from 3.6 GB per second to 120 GBps. Capacity models range from 100 TB to 3.5 PB.
Fenselau said the performance optimized node street price starts at 13 cents per IOPS, and the capacity optimized node costs $2 per raw GB.
The PowerEdge appliances are flash-only.
Elastifile’s Cloud File System handles active data and performance-oriented workloads. CloudConnect provides access to Amazon S3-compliant cloud services, moving inactive data for archiving or analytics into an object tier.
EMC invested in Elastifile’s first funding round in 2014, before the Dell-EMC merger. The companies’ ties go back even further. Elastifile founder and CTO Shahar Frank also founded all-flash array startup XtremIO and was chief architect at scale-out NAS vendor Exanet. EMC acquired XtremIO in 2012, and Dell bought Exanet in 2010.
Storage hardware vendors Cisco, Lenovo and Western Digital also have strategic investments in Elastifile. Those relationships could lead to more appliance partnerships.
“As customers standardize on other servers, we will work to give them what they want,” Fenselau said when asked about the possibility of working with other strategic partners.
Arcserve is on the hunt to buy more companies.
The data protection and recovery company already has made two acquisitions since becoming independent from CA Technologies in 2014. It bought cloud provider Zetta last summer, giving it a larger cloud footprint and a disaster recovery as a service (DRaaS) offering. That followed the Arcserve acquisition of email archiving specialist FastArchiver in April, which added long-term data retention on premises or in the public cloud.
Both Zetta and FastArchiver target the midmarket. New CEO Tom Signorello is planning the next Arcserve acquisition, with possible targets ranging from analytics to information management to data management or even security.
“We will be looking at targets again in the coming quarters,” Signorello said. “We are actively working with our owners on what the next logical bolt-on is. The opportunities are broad, so I don’t want to be specific.”
Signorello took over as CEO in early October, after Arcserve’s first CEO Mike Crest left to head IT services firm Optanix.
Arcserve has returned as an independent vendor at a time the data protection market is evolving into another area, the overall data management space. Companies such as Veritas Technologies and Commvault Systems are building these over-arching data management platforms that do data indexing, search, analytics, copy data management, governance, security and data mobility.
Arcserve acquisition integrations expand cloud footprint
Signorello said the integration with Zetta is “well underway.” The latest Arcserve acquisition gave it cloud data centers on the West Coast and in New Jersey, and Arcserve plans to open one in the United Kingdom. The Zetta deal also broadens Arcserve’s relationships with cloud providers Amazon Web Services (AWS) and Microsoft Azure.
Arcserve already had a relationship with AWS before the Zetta buy. The Arcserve UDP appliance allows customers to use the AWS cloud as a remote disaster recovery site. They can replicate recovery points to a Windows-based Recovery Point Server (RPS) in the AWS cloud, launch an Amazon Elastic Compute Cloud (EC2) instance and copy full recovery points to Amazon Simple Storage Service (S3).
Zetta gave Arcserve a direct-to-cloud offering, but the company is also investing more in the hybrid cloud approach.
“There are going to be enhancements in the hybrid area in the next couple of quarters,” Signorello said. “Our clients need it. They need the flexibility. All the managed services providers (MSPs) and VARs are moving in that direction.”
Before joining Arcserve, Signorello was CEO of OnX Enterprise Solutions and held vice president positions at Diebold and Unisys.
With the latest upgrades to its object storage platform, IBM recognizes that not everyone who uses object storage does so in the public cloud. They don’t all start with hundreds of terabytes of capacity either.
IBM Cloud Object Storage System now offers “compliance-enabled vaults,” so customers can create on-premises data vaults that protect unstructured data subject to regulations such as SEC Rule 17a-4 and Financial Industry Regulatory Authority (FINRA) rules from deletion or modification.
IBM also is offering a new capacity level for customers that want to get started with cloud deployments. The IBM Cloud Object Storage System now starts at 72 TB of capacity, with a concentrated dispersal mode that allows smaller-footprint systems to scale into larger ones. Despite the name, IBM Cloud Object Storage System, which is based on technology acquired from Cleversafe, is available for on-premises use or in a dedicated environment in the IBM Cloud.
IBM hopes the compliance vaults will open up opportunities for applications on on-premises object storage, while the lower-capacity option is aimed at getting more customers started with an on-site cloud, with the expectation they will move to the public cloud.
“This is a new space that they can compete in,” said Scott Sinclair, a storage analyst with Enterprise Strategy Group. “We have run studies that show on-premises object storage is less expensive than cloud-based object storage. The bottom line is, public cloud is not always cheaper.
“Some workloads are more expensive in the cloud,” Sinclair said.
There is evidence customers are learning not all workloads are meant for the public cloud, and that unexpected costs and security concerns are two red flags.
The ESG 2017 Storage Trends: Challenges and Spending report found 39% of storage decision makers using off-premises cloud resources had moved at least one workload back to on premises.
“There is evidence that organizations need to apply a pragmatic approach with regard to the location of applications and data, whether on or off premises,” Sinclair said. “Moving a workload to the public cloud is a big decision and should be treated as such.
“Additionally, what workloads are coming back? Why? Is it cost? Is it security? Is it availability? And how are these movements changing the cloud strategy within the organizations?”
He said ESG is researching those issues now.
Robert McCammon, IBM’s leader for IBM Cloud Object Storage, said the vendor is tackling security concerns with its compliance vault feature.
“This is a new software feature of our existing object storage product that is available with our software release in December,” McCammon said. “This is a new type of vault that prevents operations that are unacceptable on one of these compliance environments.”
McCammon said when it comes to compliance rules, “customers don’t have a lot of choices for storage and they have even fewer choices for object storage.”
Highlighted by the largest customer deal in its history and a big bump in enterprise bookings, Veeam reported 34% year-over-year total bookings growth in the last quarter.
There has been a push in the last year-and-a-half to go after the enterprise more aggressively, said Peter McKay, Veeam co-CEO and president. The data protection vendor reported 84% year-over-year growth in new enterprise bookings for the third quarter.
A $4.1 million deal with a European company is Veeam’s biggest ever enterprise booking, McKay said, although he would not identify the customer. The Veeam revenue report showed more $500,000 deals closed in 2017 than in the past four years combined.
“That partnership ecosystem has become a really critical part” of Veeam growth, he said.
The cloud continues to be a bigger piece of the Veeam revenue picture. It took six years for Veeam to net $50 million in bookings for its cloud business, but it has hit $54 million in three quarters this year, according to McKay. Veeam also reported a 72% year-over-year increase in cloud bookings for the third quarter.
Veeam is averaging 4,000 new customers each month. The vendor claims 267,500 customers and 16,700 service provider partners using its software.
Looking for more in Veeam revenue, platform
Veeam is shooting for $1 billion in annual bookings by 2018 and $1.5 billion by 2020. It hit $607 million in 2016, 10 years after the company launched. At its current pace, Veeam is projected to hit about $800 million in revenue by the end of the year.
“We have to have a good Q4 to get there,” McKay said of the 2018 goal.
Veeam has invested in smaller companies such as cloud data protection vendor N2WS. Veeam has an OEM agreement through which N2WS technology will be part of Veeam Availability for Amazon Web Services (AWS).
McKay said to expect a technology acquisition soon.
“We’re looking. We’re active,” McKay said, adding that he doesn’t feel acquisitions are needed for Veeam revenue to hit $1.5 billion.
McKay pointed to data management, visibility and protection as areas for growth. He said there are areas where Veeam could improve its processes, such as better figuring out how go-to-market strategies differ by country.
“We’re incredibly paranoid of taking our eyes off the ball,” McKay said.
Veeam’s competition includes Commvault and Veritas. McKay said he sees the amount of funding and competition from startups — such as Cohesity and Rubrik — as a good sign.
“It makes us better,” McKay said.
Version 10 of Veeam’s Availability Suite is due soon. That upgrade will feature continuous data protection and object storage support. Veeam also plans to add new elements to the platform that it hasn’t publicly disclosed yet, McKay said.
Veeam has recently expanded into physical backup as well as multi-cloud support for Microsoft Azure and Azure Stack, AWS, IBM Cloud and software-as-a-service applications such as Microsoft Office 365.
Western Digital CEO Steve Milligan said he remains confident his company will win its fight to prevent its NAND manufacturing joint venture partner Toshiba from selling its memory chip unit without Western Digital’s consent.
Toshiba has agreed to sell its chip unit to a group led by Bain Capital despite Western Digital’s opposition to the deal. Western Digital is trying to block the proposed $18 billion Toshiba NAND sale, claiming it violates terms of its joint venture with Toshiba.
“It continues to be our position that the transaction is not permitted without our consent. That leads to where we are today,” Milligan said during Western Digital’s earnings call Thursday night.
Western Digital gained its stake in the joint venture when it acquired flash manufacturer SanDisk, which already had the joint venture agreement with Toshiba. But when Toshiba decided to put its memory business up for sale earlier this year, it looked at buyers outside of Western Digital. Earlier this month, Toshiba said it would sell to the Bain consortium. That group includes Western Digital competitors Seagate, Kingston Technology and SK Hynix, along with Toshiba customers Dell and Apple. Toshiba will retain a stake in the NAND unit if the deal goes through.
Milligan predicted Western Digital will “ensure the longevity and continued success of the joint venture,” either through arbitration or negotiation with Toshiba. And he advised: don’t believe everything you read about the dispute over the Toshiba NAND sale, unless it comes from Western Digital.
“We are confident in our fact-based legal positions, and our right to injunctive relief,” he said.
Western Digital claims the joint venture prevents Toshiba from working with other companies to manufacture NAND, or to transfer interests in the joint venture without Western Digital’s consent.
Milligan said it may take until 2019 to get a final ruling on the Toshiba NAND sale from the International Court of Arbitration. However, he hopes SanDisk will win temporary injunctive relief by early 2018, which would block Toshiba’s planned NAND sale to the Bain group. Western Digital has filed three requests for relief with the arbitration court in 2017, each to be decided by a three-person tribunal. Western Digital has also sought arbitration to try to prevent Toshiba from moving ahead with its Fab 6 production plant without SanDisk involvement. Toshiba opened that new production center in August on its own.
Milligan said SanDisk’s consent rights “are clear and explicit” and will hold up legally, although he would rather not have to go that route.
“Just to be clear, we do not undertake litigation lightly,” he said. “We are not litigious. And it should only be a last resort, especially in the context of this joint venture relationship.” He said Western Digital is open to any reasonable terms proposed by Toshiba, but “we will not agree to terms such as SanDisk unilaterally waiving or negating its consent.”
In an interview with Bloomberg this week, Bain managing director David Gross-Loh accused Western Digital of misrepresenting its rights in its legal challenge to the Toshiba NAND sale. He also urged Western Digital to reach an agreement with Toshiba to allow the deal to proceed.
Milligan said there has been “a great deal of misinformation provided into the marketplace through various channels” about the situation. “Western Digital will continue to communicate consistently and transparently,” he said.
When asked if he had alternative plans if Western Digital is unsuccessful in arbitration, Milligan said current supply agreements will give his company NAND through 2029.