Storage Soup


August 11, 2017  11:58 AM

Flash Memory Summit 2017: Flash really on fire

Dave Raffo
flash storage

Any claims that “flash is on fire” at Flash Memory Summit 2017 this week drew awkward glances, nervous laughs or groans. That’s because one flash system literally caught fire, causing the exhibition hall at the Santa Clara Convention Center to close for the entire show.

The Innodisk booth caught fire hours before Flash Memory Summit 2017 opened Tuesday morning.  Damage from the fire and water from the sprinkler system that doused it prompted fire marshals to order the exhibition floor closed for the entire three-day show.

The show went on, with meetings and dozens of keynotes and panel sessions discussing all things flash for three days. Product launches went out as scheduled, but the closure of the exhibit hall disappointed vendors that had planned demos of new and emerging products.

Fire marshals have not identified the cause of the fire.

Demonstrations that never took place included the Kaminario K2.N NVMe array due to ship in spring 2018, and E8 Storage’s shipping D-24 rack-scale NVMe array as well as its coming X24 arrays. Newcomer Liqid wanted to show off what it calls a bare-metal composable infrastructure system using hardware from OneStop Systems.

Other products scheduled for demos included Toshiba NVMe over Fabric software, several new Intel SSDs, Mellanox NVMe over Fabrics devices, Everspin 1 GB and 2 GB DDR4 form factor MRAM devices, and a host of Samsung products including a reference “Mission Peak” 1U server that can store 576 TB of SSD capacity with new form factor 16 TB drives.

“We wanted to show that we’re real, and our stuff is battle tested,” said Julie Heard, E8 Storage’s director of technical marketing.

Flash Memory Summit 2017 wasn’t a complete waste for E8. The team won a best-of-show award for most innovative flash memory technology and showed off its Game of Thrones-knockoff “Game of LUNs” poster.

Other notable Flash Memory Summit 2017 award winners included Western Digital for NAND flash, CNEX Labs and Brocade for storage networking, Excelero for software-defined storage, and Attala Systems Inc. for storage system.

August 10, 2017  9:09 AM

Primary Data DataSphere upgrade follows funding grab

Garry Kranz
Data Analytics, Primary Data

Primary Data has beefed up storage analytics and cloud migration in its DataSphere virtualization platform. Now the startup is ready to dip into a fresh stash of cash totaling $40 million to heighten its profile in enterprise data storage.

Primary Data DataSphere 2.0, released this week in early access, builds on previous editions oriented mostly toward application development. The latest version embeds an artificial intelligence-based storage analytics engine that automatically moves inactive data to Amazon S3-compatible cloud object stores.

If the data once again becomes active, DataSphere transparently retrieves it from the cloud for access on local storage.

“We are able to give storage awareness to an application. Normally, you would have to write (code) for that,” Primary Data CEO Lance Smith said.

A policy catalog in 2.0, known as Objective Expressions, allows customers to prescribe the characteristics that can be applied to all data or to an individual file. To move data between cloud platforms, users need to change only the objectives for the data. Primary Data DataSphere then moves the data to the appropriate storage target.
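
Primary Data has not published the Objective Expressions syntax, so the following is only a minimal Python sketch of how objective-driven placement might work in principle; the Objective and StorageTarget structures, their field names and the pick_target() helper are all hypothetical. It illustrates the point above: change a file's objectives, and the mover re-selects whichever target still satisfies them at the lowest cost.

    # Hypothetical sketch of objective-driven placement; not Primary Data's API.
    from dataclasses import dataclass

    @dataclass
    class Objective:
        max_cost_per_gb: float   # dollars per GB per month the data may cost
        min_iops: int            # minimum performance the data requires

    @dataclass
    class StorageTarget:
        name: str
        cost_per_gb: float
        iops: int

    def pick_target(obj, targets):
        """Return the cheapest target that satisfies the objectives."""
        eligible = [t for t in targets
                    if t.cost_per_gb <= obj.max_cost_per_gb
                    and t.iops >= obj.min_iops]
        if not eligible:
            raise ValueError("no storage target satisfies the objectives")
        return min(eligible, key=lambda t: t.cost_per_gb)

    targets = [StorageTarget("all-flash-array", 0.30, 100_000),
               StorageTarget("s3-compatible-object", 0.02, 500)]

    # Editing only the objectives is what triggers a move to a new target.
    print(pick_target(Objective(1.00, 50_000), targets).name)  # all-flash-array
    print(pick_target(Objective(0.05, 100), targets).name)     # s3-compatible-object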

“We traditionally have gone after the development and testing space, which are usually small deployments. But people are finding that our technology is so powerful that many of them are putting it in production (as a way) to save lots of money” on storage, Smith said.

Data protection and cloud mobility highlight 2.0 release

Primary Data claims DataSphere can manage and move billions of files and objects. The software consumes a customer’s block storage and converts it to a file namespace.

The enhanced storage analytics examine historical usage patterns to determine which tier of storage best meets an application’s requirements. DataSphere determines the optimal data placement based on customer-defined attributes relating to cost, data primacy or performance.

Primary Data DataSphere 2.0 includes assimilation of array-based snapshots, allowing customers to use the snapshots both to preserve changes in real time and to serve as a disaster recovery tool. DataSphere accesses the snapshot APIs of underlying storage arrays to clone space-efficient copies on a WAN or public cloud. The vendor claims this feature allows it to mix and match different vendors’ storage in the same share. Additional data protection in 2.0 includes metadata backup and restore and portal protection.

Primary Data DataSphere 2.0 supports cross-domain mapping and fully integrates with Windows Active Directory and Windows Access Control Lists, allowing mixed shares between Linux and Windows.

New investment earmarked for expansion of sales teams in U.S.

Along with DataSphere’s revamped storage analytics, the data management specialist announced up to $40 million obtained in separate funding transactions. The proceeds boost the startup’s total investment haul to nearly $100 million since its 2016 launch.

Primary Data received $20 million in venture funding in a Series B round led by Pelion Venture Partners, with participation from existing investors Accel Partners and Battery Ventures. Up to $20 million in additional funding is available through a line of credit.

Smith said Primary Data will expand sales teams in growing markets, particularly Europe and North America.

“We have been hiring in North America and Europe since the start of this year to vastly grow our presence in vertical markets. We had been investing heavily in engineering up to now,” Smith said.


July 31, 2017  1:55 PM

NetApp cloud strategy includes SolidFire, SDS

Dave Raffo
Cloud storage, NetApp

Created in the 20th century to sell storage to engineers, NetApp has survived for 25 years to remain the largest standing data storage company not tied to a server vendor. Founder Dave Hitz credits that survival to the company’s “enormous capacity to change” as the IT landscape changes.

“People ask me, why are you still alive after 25 years? That’s a very real question,” Hitz, currently a NetApp executive vice president, said during a press event last month. “NetApp has survived 25 years because we have an amazing ability for radical change when we need it.”

Hitz said his company has previously pivoted to survive disruptions caused by the rise of the internet, the internet crash and virtualization. He said each posed a threat to NetApp when it first emerged, and NetApp adjusted its storage to take advantage. Now the NetApp cloud pivot is the adjustment that can make or break the company.

“Each of these transitions were things that were going to kill us,” he said. “Here we are again, possibly the biggest transition of all, into cloud computing and again it’s the thing that’s going to kill us. We hear, ‘We’re all doomed, everything’s going to move into the cloud, there’s no room for NetApp.’ I don’t think it’s true. It could be true if we don’t respond.”

Of course, you don’t have to be a bull-castrating genius to figure out the cloud is the key for today’s storage companies. Every large storage company has the cloud in its strategy and barely a month goes by when we don’t see a startup come along promising to provide cloud-like storage for enterprises, and to connect on-premises storage to public clouds.

So what is the NetApp cloud strategy?

Hitz said NetApp “way underestimated how pervasive the cloud would be on all enterprise computing,” just like it misjudged how flash would impact enterprise storage. (NetApp originally bet on flash as cache instead of solid-state drives in storage arrays before getting out its successful All-Flash FAS array in 2016.) But he said the NetApp cloud plan consists of doing what it does best — data management.

NetApp founder Dave Hitz

“We think data is the hardest part [of the cloud],” Hitz said. “It is very easy to go to Amazon or Azure, fire up 1,000 CPUs, run them for an hour or day or week, [and] then turn them off. It’s not easy to get them the data they need, and after they make a bunch of data, it’s not easy to get it back and keep track of it. Those are the hard parts. And that’s right in the center of our wheelhouse.”

Channeling NetApp’s history, CEO George Kurian said he saw his job when he took over in 2015 as leading the company through transition. “As the world around us changed, NetApp needed to change fundamentally,” he said.

He sees a strong NetApp cloud strategy as the key to initiating that change. “Many customers are engaged with us to help them build hybrid architectures, whether it’s between on-prem and public cloud, between two public clouds or migrating one of their sites to a colocation,” Kurian said.

Kurian cites SolidFire — an all-flash array platform built for cloud providers — as the “backbone of the next-gen data center.” NetApp acquired SolidFire in 2016 as much for its cloud platform as to fill a need for all-flash storage.

NetApp cloud software-defined storage (SDS) and services include Private Storage for Cloud, ONTAP Cloud, Data Fabric, AltaVault cloud backup and others. NetApp also has a Cloud Business Unit, which includes development, product management, operations, marketing and sales.

Senior vice president Anthony Lye joined the company last March to run the NetApp Cloud Business Unit. “The whole purpose of my organization is to build software that runs on hyper-scale platforms,” Lye said. “The software can be consumed by NetApp or non-NetApp customers, on hybrid or multi-cloud environments.”

The NetApp cloud portfolio will go a long way in determining if the vendor gets to keep its survivor status.


July 31, 2017  10:07 AM

Commvault earnings: Sales gain ahead of product launches

Sonia Lelii

Commvault Systems is inching back toward profitability after another strong quarter of growth as it continues to build out its business beyond its traditional data backup model. The company’s leaders identified the ability to land more and larger Commvault software deals as the key to greater growth.

Commvault generated $166 million in total revenue for the first quarter of fiscal 2018. That represents a 9% year-over-year increase, although it is down slightly from the $172.9 million in total revenue in the previous quarter. Commvault software revenue came in at $74.8 million, an increase of 18% year-over-year and down 4% sequentially.

The company cut its quarterly loss to $284,000 compared to the $2.6 million loss in the same quarter last year.

“We have successfully recast Commvault and repositioned our business to regain our momentum,” said CEO Bob Hammer on the Commvault earnings call last week.

Hammer said Commvault’s enterprise deals increased 10% year-over-year, with the average enterprise deal size ticking up 4% to approximately $282,000 for the quarter. “Our ability to achieve our growth objectives is dependent on a steady flow of $500,000 and $1 million plus deals,” Hammer said.

Brian Carolan, Commvault’s chief financial officer, said enterprise deals of more than $100,000 represented 63% of the Commvault software revenue.

Commvault earnings forecasts include copy management, hyper-convergence

Hammer provided a preview of Commvault software additions due to launch this year. He cited web-scale appliances for the midmarket and enhancements in active copy management, which orchestrates complex tasks for databases and application workloads so they can be ported between tiers of on-premises and cloud storage. This capability is considered critical for disaster recovery and development-test environments.

On the previous Commvault earnings call in May, Hammer discussed the company’s intention to deliver hyper-converged reference architectures for secondary storage along with an enhanced platform for the cloud, new service offerings for endpoints and Commvault managed services. It also intends to enhance its flagship Commvault Data Platform with business analytics.

“Early this fall, we plan to complement our enterprise web-scale hyper-converged solutions with Commvault web-scale appliances for the midmarket,” Hammer said. “These solutions will be fully indexed copies for backup data protection, snapshot, replication, and archive and copy management. They will have instant data accessibility, and be highly orchestrated for workload portability across hybrid IT with higher functionality, security and scalability.”

Al Bunte, the company’s chief operating officer, said Commvault is “packaging some software and some services together” that will help customers address the General Data Protection Regulation deadline in 2018.

“Obviously, it will first hit the European theaters, primarily enterprise accounts, and that will happen between now and our GO conference in November,” Bunte said on the Commvault earnings call.


July 31, 2017  6:59 AM

‘Right to erasure’ law proves major European Union GDPR hurdle

Randy Kerns

The European Union General Data Protection Regulation (GDPR) 2016/679, which takes effect in May 2018, has the potential to greatly disrupt IT storage and data management operations, along with other aspects of business.

Among the 99 articles in the GDPR, Article 17 may have the most impact on IT professionals.

Article 17 is the “right to erasure,” which is commonly called the “right to be forgotten.” Briefly, this means an individual can request that a data controller erase all of their personal data without undue delay and at no cost to the person making the request. That means erasing all of the individual’s personal data — files, records in a database, replicated copies, backup copies and any copies that may have been moved into an archive. This right to erasure requirement is enough to make even a young IT pro consider early retirement.

The terms data controller and data processor must be understood in relation to GDPR. A data controller is “the individual or legal person who controls and is responsible for keeping and using personal information on computer or in structured manual files.” This means the IT function in a company or organization.

A data processor is the group or organization that “holds or processes personal data, but does not exercise responsibility for or control over the personal data.” This applies to a cloud where the processing is done or an IT data center where data resides. The data center can be internal or outsourced. The data controller is responsible for the decision to delete personal data and for ensuring it has been erased; the data processor executes the operations but is not responsible for the decision process. The data processor cannot hold copies of data or make them available for other uses.

A few important points to consider:

  • There is no escape clause or excuse for avoiding the right to erasure process, so organizations must plan for it.
  • The “without undue delay” interpretation is measured in days, not months. It certainly does not mean waiting until the storage media become unreadable or older backup copies get overwritten.
  • If personal data was sent to another organization, Article 17 requires the data controller to tell the other organization to erase the personal data.
  • The requirement is more global than you may think. It applies to the personal data of individuals residing in the European Union (EU), and not where data is stored or where a company or organization is located. To do business in the EU, you must guarantee the privacy protection of individuals.
  • The penalties are enormous. Even for one instance of not erasing an individual’s data, the penalty is up to 4% of annual global turnover or €20 million (currently $23.336 million in U.S. dollars). There is a tiered approach to fines, but failure to prove personal data was erased is a 2% fine or €10 million. This will force executives to be proactive with IT in ensuring they put the erasure capabilities and verification in place.

Right to erasure implications for IT

The concept of tracking down all copies of data and erasing a specific individual’s personal data seems almost impossible. Consider the simple case of personal data in a database. How many copies of that database exist and where are they? How many DBAs have made extra copies for testing and extra protection? This will be an intensive, time-consuming task. Even worse, it is not a revenue-producing function.

No specific solution exists in general usage today. Using backup catalogs is not a complete answer because other copies can be made outside of the visibility of the backup or copy data management software.

Some application vendors have put forth the practical approach of encrypting each individual’s personal data and maintaining a person-specific encryption key. Only the application software would have the visibility and knowledge of what personal data is needed to control the encryption. This would be an effective means for erasure because destroying the personal encryption key would erase all copies. This would eliminate the need to process all copies — backup, replicated, privately held and so on — for the erasure.
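
As a rough illustration of that idea, here is a minimal crypto-shredding sketch in Python using the open source cryptography package's Fernet cipher. The per-person key store and record format are assumptions for the example, not any vendor's implementation; in production the keys would live in an HSM or key management service.

    # Minimal crypto-shredding sketch (pip install cryptography).
    from cryptography.fernet import Fernet

    key_store = {}  # person ID -> key; in practice an HSM or KMS, not a dict

    def store_personal_data(person_id, record):
        """Encrypt a record under that person's own key."""
        if person_id not in key_store:
            key_store[person_id] = Fernet.generate_key()
        return Fernet(key_store[person_id]).encrypt(record)

    def erase_person(person_id):
        """Right to erasure: destroying the key makes every copy of the
        ciphertext (backups, replicas, archives) permanently unreadable."""
        del key_store[person_id]

    token = store_personal_data("alice", b"alice@example.com, 1 Main St")
    print(Fernet(key_store["alice"]).decrypt(token))  # readable while key exists
    erase_person("alice")
    # Decrypting token now fails everywhere the ciphertext was copied.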

There are obvious problems with the approach, though. It would require application changes and create issues with data that is shared between applications or used for other purposes. But these problems are relatively minor. New processor capabilities to do encryption, such as the IBM z14 with new, high-performance encryption and Intel Skylake x86 technologies, remove the performance impacts on applications.

Encrypting data at the application level where the content is understood makes sense, but there are downstream consequences. Data manipulation processes such as compression and deduplication would be significantly impaired if not eliminated. The loss of those data reduction techniques would increase storage capacity requirements. Data reduction could still be accomplished, but would have to move up to the application prior to the encryption to have the same effect. Discovery of information about data stored would also have to go through the applications rather than trolling the data itself.
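
The order of operations is the crux: strong ciphertext is effectively incompressible, so data reduction only helps if it runs before encryption. A small sketch of that sequence, reusing the cryptography package from the example above plus Python's standard zlib:

    import zlib
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    record = b"name=Jane Doe;" * 1000  # highly redundant personal data

    encrypt_only = Fernet(key).encrypt(record)
    compress_then_encrypt = Fernet(key).encrypt(zlib.compress(record))

    print(len(record), len(encrypt_only), len(compress_then_encrypt))
    # The compress-then-encrypt payload is a small fraction of the plaintext,
    # while the encrypt-only payload is actually larger than the plaintext.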

The magnitude of the problem to meet the EU GDPR regulations overall is major, and Article 17, the right to erasure, is almost overwhelming. You may find halfway approaches that only work in certain cases. But beware of these incomplete approaches. They ultimately may cost more to implement and may still result in the extreme fines when the incomplete nature is exposed. The impending date is a hard deadline — there are no stages to adoption. You need to develop a strategy now and plan your implementation.


July 28, 2017  8:04 PM

WD-Toshiba memory sale requires 2-week notice to WD

Carol Sliwa

Toshiba and Western Digital’s SanDisk subsidiary acted on a San Francisco judge’s suggestion today and agreed that Toshiba would give SanDisk two weeks’ notice before closing on a transfer or sale related to their NAND flash memory joint venture.

The WD-Toshiba agreement removes the need for the preliminary injunctive relief SanDisk had sought through the California court system, pending arbitration to determine if the planned WD-Toshiba memory business sale can take place without its consent. Toshiba and WD/SanDisk worked out the agreement prior to a scheduled hearing this afternoon in the San Francisco Superior Court.

Judge Harold Kahn approved the agreement and signed the order that requires Toshiba to publicly announce, within 24 hours, the signing of any agreement that “contemplates a closing” and send a copy to SanDisk by email. Toshiba also agreed to send a written notice by courier to WD’s chief legal officer at least two weeks before any closing occurs. Both agreements apply until 60 days after an arbitration panel is formed.

The agreement does not require Toshiba to recognize SanDisk’s claims of consent rights over the transfer of the Toshiba memory business. Toshiba said the agreement also preserved its objections to the California court having any jurisdiction in the WD-Toshiba dispute.

Tokyo-based Toshiba has been trying to sell its profitable memory business to cover losses from its struggling U.S.-based Westinghouse Electric nuclear power division. In June, the company selected a Japan-backed consortium as the preferred bidder.

But WD has claimed that no sale can take place without its consent. In May, WD filed a request for arbitration and injunctive relief with the International Court of Arbitration operated by the Paris-based International Chamber of Commerce.

Impending Toshiba memory business sale

Western Digital’s SanDisk subsidiary sought a preliminary injunction through the Superior Court of California after Toshiba announced plans to try to strike a mutually satisfactory agreement with the consortium by June 28, when its annual shareholder meeting took place. Toshiba said it hoped to close the transaction by March 2018.

SanDisk filed a claim with the California court to prevent the Toshiba memory business sale without its consent. SanDisk later sought an additional preliminary injunction “to stop retaliatory action” that it claimed Toshiba had taken to block SanDisk’s affiliates from accessing Toshiba’s facilities, databases and networks that are critical to the operation of their NAND flash joint venture.

WD issued a release two weeks ago stating the California court had directed Toshiba not to transfer its interests in the three WD-Toshiba NAND flash joint ventures without “specified advance notice” to SanDisk to ensure the matter would be preserved for arbitration.

WD CEO Steve Milligan, via a prepared statement, claimed the court’s “directive” marked a victory for WD, SanDisk and their stakeholders. “Our entire goal was to preserve and protect our rights through the binding arbitration process, and that’s precisely what the Court has done,” Milligan said.

Yasuo Naruke, a senior executive vice president at Toshiba Corp., said via a prepared statement today that the “mutually acceptable understanding” is effective for a “very limited time” and “recognizes Toshiba’s right to negotiate and sign a definitive agreement for the sale of its memory business.” Toshiba claims it does not require SanDisk’s consent to transfer its memory business.

Naruke stated: “Further, as a practical matter, we don’t expect to close a deal during the period addressed in the order. Closing a transaction of this magnitude would require many months – well beyond the limited timeframe specified in the ruling. Toshiba therefore remains focused on preparing for the ICC (International Chamber of Commerce) arbitration process, which we believe is the appropriate venue to address these issues. We look forward to successfully presenting Toshiba’s position to the tribunal, which we believe will be formed within the next month or so.”

Toshiba also noted in today’s statement that it “remains intent on soon entering into a definitive agreement for the sale of its memory business with one of the bidders.”

Toshiba’s board of directors disclosed last month that it selected a consortium including Innovation Network Corporation of Japan (INCJ), a government-supported investment fund; Development Bank of Japan (DBJ); and Bain Capital Private Equity LP. According to Western Digital, Korea-based chipmaker SK Hynix was also part of the consortium.

Prior to the Toshiba board’s announcement, potential bidders that surfaced through published reports included U.S.-based chipmaker Broadcom, Taiwan-based electronics maker Foxconn, SK Hynix and Western Digital.

During Western Digital’s earnings call Thursday night, Milligan said there had been constructive talks with Toshiba recently. Milligan said there was no “reasonable scenario” of disruption to the NAND supply his company receives from the joint venture.

“We expect to continue to secure wafer output, and from a manufacturing and operations perspective, the JV remains very healthy,” he said.

When asked if he had contingency plans if things did not work out, Milligan said, “I don’t think that that’s a reasonable scenario to be considered. I would find that to be … just a highly unrealistic assumption to assume that.”

Milligan said he and WD CFO Mark Long travelled to Japan last week to talk to Toshiba. “Our discussions were constructive, and we will continue to work to seek a solution that is in the best interest of all parties,” he said. “Rest assured, this has the full attention of our team as we are committed to the continued success of the joint venture.”


July 28, 2017  7:26 AM

Axcient and eFolder merger gathers clouds, enhances data protection

Paul Crocetti
Data protection

Though they existed in the same general market, Axcient and eFolder had different technologies, different platforms and different customer bases. They’re all the same now.

Axcient and eFolder said Thursday that they are merging, bringing together Axcient’s cloud-based disaster recovery and data protection platform and eFolder’s cloud business continuity, cloud file sync and cloud-to-cloud backup. The new company will be called … well, we’re not sure about that yet.

“We plan on converging these technologies to become the best in breed in this category of backup and disaster recovery,” said Matt Nachtrab, who just last week was named CEO of eFolder and will run the combined company. “The technology is really driving the desire of bringing these companies together.”

The eFolder line of products, including backup and disaster recovery platform Replibit, which it acquired last year, is geared toward SMBs. Replibit enables recovery on an appliance or in the eFolder cloud. The vendor also sells Anchor file sync and Cloudfinder cloud-to-cloud backup software.

Axcient is a pioneer of disaster recovery as a service (DRaaS), said Justin Moore, its founder and CEO, who will be chief strategy officer of the combined company. In addition to DRaaS, Axcient provides copy data management, orchestration and automation under a single cloud-based platform, Moore said. The vendor brings a more midmarket reach, but also hits SMBs and the enterprise.

Over time, Nachtrab said he expects the combined company’s offerings to feel more like a suite of products than separate lines.

eFolder CEO Matt Nachtrab

The two companies claim more than 50,000 customers with a reach of 4,000 managed service providers (MSPs) after the merger.

Nachtrab said the new management team has not yet decided on what to call the combined company, though he threw out the idea of calling it Axcient.

“The Axcient brand name is very strong,” Nachtrab said. Either way, he said, the company will likely keep the Axcient or eFolder brands for its products and services.

The idea to merge came when Axcient went looking for funding, a quest that led it to private equity firm K1 Investment Management. K1 owned a controlling interest in eFolder and decided to fund Axcient and bring the two companies under common management.

Axcient and eFolder describe the convergence as a merger rather than one company acquiring the other. They have not disclosed financial details of the transaction.

Integration work ahead

While eFolder hosts its cloud services in its own data center, Axcient uses the public cloud. After the companies’ products are integrated, Nachtrab said eFolder will be able to move its data to the public cloud by using Axcient Fusion technology.

Axcient Fusion, an orchestration and automation software platform that launched last year, targets the midmarket through both MSPs and direct sales. And eFolder sells almost entirely through MSPs.

Kevin Hoffman, an eFolder founder, will serve as CTO of the combined company. Hoffman held that position at eFolder before the merger.

Nachtrab said they have offered jobs to all Axcient and eFolder employees. If all accept, the company will have more than 300 employees.

The combined company will keep its four offices for now, Nachtrab said. Axcient’s headquarters are in Mountain View, Calif., with satellite offices in Austin, Texas, and Smolensk, Russia, while eFolder is based in Denver.

The merger is the second convergence of backup and recovery companies this month.  Earlier in July, data protection vendor Arcserve acquired cloud provider Zetta and its backup and recovery technology.


July 25, 2017  7:41 PM

Seagate revenue slides, CEO Luczo says ‘see ya later’

Carol Sliwa

Seagate Technology changed its CEO, reported a revenue miss, and disclosed plans to cut 600 jobs — all on the same day.

The hard disk drive and systems vendor today said Dave Mosley, the company’s president and chief operating officer, will take over as CEO on Oct. 1. That’s when Steve Luczo will give up the CEO job to become executive chairman.

Luczo served as Seagate’s CEO for 16 of the past 20 years. Mosley, who has worked at Seagate since 1995, immediately joins the company board.

Seagate revenue drops

The Cupertino, Calif., storage vendor’s $2.4 billion in revenue for its fiscal fourth quarter came in below the Wall Street analysts’ consensus estimate of $2.56 billion and the $2.65 billion the company reported for the same quarter a year ago. Seagate revenue was also down for the full 2017 fiscal year, with $10.8 billion compared to last year’s $11.2 billion.

Luczo said the overall Seagate revenue results were about 5% below plan. He said roughly half the shortfall came from its cloud storage systems and the rest from weakened demand for enterprise hard disk drives (HDDs) and channel inventory management in the NAS and surveillance markets.

Seagate shipped 23.4 exabytes (EB) of enterprise HDD capacity in its fiscal fourth quarter — off from the 26.9 EB in the same quarter a year ago. The shipped capacity for mission-critical enterprise HDDs was constant at 2.2 EB, but the near-line enterprise HDD capacity fell to 21.2 EB from 24.7 EB a year ago.

Seagate’s enterprise HDD capacity reached a high-water mark in the first fiscal quarter of 2017 at 28.1 EB, including 25.7 EB for near-line and 2.4 EB for mission-critical HDDs. In the second fiscal quarter, the shipped enterprise mission-critical HDD capacity hit 2.6 EB, but the near-line HDD capacity plummeted to 21.6 EB and has declined slightly with every subsequent quarter.

Another cloud hanging over storage vendors is the ongoing NAND flash shortage. Although solid-state drives (SSDs) aren’t its main business, Seagate sells enterprise SSDs to server and storage OEMs and includes flash in the storage systems it sells.

Luczo did not specifically mention NAND flash, but he noted the end-to-end storage supply chain continued to experience price increases of two times to three times in the memory market. Seagate anticipates the situation will improve over the next several quarters, he said.

CFO Dave Morton said he was optimistic about the enterprise HDD market, due to expected growth in hyperscale and cloud storage. He said he remains confident about Seagate’s near-line HDD portfolio across a variety of capacity points, including 10 TB and 12 TB HDDs that are ramping up in production.

Seagate to cut jobs

To cope with the Seagate revenue drop, the company is reducing its global headcount, a move it expects to produce approximately $90 million in annual savings.

The company expects to complete the job cuts by the end of September, according to Seagate’s Form 8-K filing with the U.S. Securities and Exchange Commission.

Morton said the restructuring would help reduce operating expenses to about $400 million per quarter by the end of the year, with a long-term target of roughly $375 million per quarter.

The CEO change follows a series of other executive changes during the last 18 months. Those include Morton’s shift to CFO and Jeff Nygaard’s move to oversee all manufacturing in his role as senior vice president of operations. Other recent appointments included Jim Murphy as executive vice president of worldwide sales and marketing, Kate Schuelke as chief legal officer and Ravi Naik as chief information officer.

Luczo says Seagate revenue not the whole story

Although Seagate failed to achieve its main targets in Luczo’s penultimate quarter as CEO, he noted that the company had “effectively achieved” its operating margin and gross margin profitability targets for the full 2017 fiscal year that ended on June 30.

Luczo corrected an analyst who congratulated him on his retirement during the earnings call. “Much to my children’s chagrin, I have not retired,” he said, pointing out he will remain on as chairman to focus on strategic growth initiatives and other opportunities.

He quickly added, “I suppose I did retire from the CEO job,” and sounded like he will not miss it.

“As I said to someone the other day, running a disk drive company is a little bit like driving in stop-and-go traffic,” Luczo said. “Sometimes you’re going 15 miles an hour and sometimes you’re going 85 miles an hour. But you usually get to your destination on time and no one’s hurt. But it’s stressful for the driver and oftentimes for the passengers, too. So I think we have a younger driver with better reaction times now.”

Rival Western Digital will likely produce much less surprise when it reports earnings Thursday. Western Digital a month ago confirmed its previous guidance of $4.8 billion for last quarter and upgraded its profit forecast.


July 25, 2017  9:35 AM

SoftNAS Cloud NAS software paces itself when moving petabytes

Sonia Lelii

SoftNAS engineers encountered a problem early this year with Amazon Web Services and Microsoft Azure clouds that caused a delay in the SoftNAS Cloud NAS 3.5 release.

The engineers hit a snag while doing quality assurance testing on pushing petabyte-scale migrations into the cloud using the SoftNAS primary storage filer. They needed to get the testing done in one month’s time, but the public cloud’s algorithms had other ideas.

“When you are moving massive amounts of data in the cloud, you have to have parallel I/O streams,” said Rick Braddy, CEO of SoftNAS. “When you do that, (the algorithms) start stiff-arming you and telling you to slow down. When you start sending data too fast, they start penalizing you. You get error messages and you have to do a retry.”

SoftNAS engineers hit a wall with the ingest rates. The problem forced them to innovate out of necessity: at the pace the cloud algorithms were dictating, the quality assurance testing would have taken six months to put a few petabytes into the cloud.

So the engineers developed the patent-pending ObjFast technology and integrated it into the latest version of SoftNAS Cloud NAS. The company claims the software-defined storage product writes to the cloud more than twice as fast as previous SoftNAS Cloud NAS versions, effectively giving customers block-storage performance at the price of object storage.

“This basically delayed our whole product release by a quarter, so we ended up finishing the testing in the second quarter,” Braddy said.

Braddy said his team discovered in testing that each public cloud has its own pace for ingesting data, and the pacing varies depending on demand.

“It’s a shared service,” he said. “So the amount of throttling is based on everyone’s usage. What happens is if you don’t pace them correctly, they send you error messages and [you] have to go into the retry loop.”

The new ObjFast technology uses massive parallel I/Os of object data streams coupled with algorithms to pace the data ingest rate for each I/O stream. That keeps the ingest rate from going over the clouds’ maximum allowed per stream data rate.
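
ObjFast itself is patent-pending and undisclosed, but the general shape of adaptive per-stream pacing is straightforward to sketch. In the hypothetical Python below, every name, rate and the ThrottledError stand-in are assumptions for illustration: each stream sleeps just long enough to hold its target rate, backs off hard when the cloud pushes back and gently probes upward when it does not.

    import time

    class ThrottledError(Exception):
        """Stand-in for a cloud 'slow down' response (e.g. HTTP 503)."""

    class PacedStream:
        def __init__(self, rate_mb_s=50.0):
            self.rate = rate_mb_s  # current target rate for this stream

        def send(self, upload, chunk):
            """Upload one chunk, adapting this stream's rate as we go."""
            while True:
                try:
                    upload(chunk)
                except ThrottledError:
                    self.rate = max(self.rate * 0.5, 1.0)  # back off hard
                    time.sleep(1.0)                        # brief retry delay
                    continue
                self.rate = min(self.rate * 1.05, 200.0)   # probe upward gently
                time.sleep(len(chunk) / (self.rate * 1024 * 1024))  # hold rate
                return

    def fake_upload(chunk):
        pass  # stand-in for an S3 or Azure Blob block put

    # Many PacedStream instances would run in parallel threads or tasks,
    # each converging on the per-stream ceiling its cloud will tolerate.
    PacedStream().send(fake_upload, b"x" * 1024 * 1024)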

“We are not pacing their algorithm,” Braddy said. “We invented a proprietary pacing that adapts. Based on our testing, we found that Microsoft Azure’s overall ingest rate is faster than Amazon S3.”

SoftNAS Cloud NAS customers migrate PB of data

Braddy said that a year ago, customers typically moved approximately 50 TB to public clouds, but the vendor began seeing petabyte (PB) scale migrations to the cloud over the past three quarters.

The SoftNAS Cloud NAS marketplace capacities have gone from 1 TB and 20 TB instances to include 50 TB, 100 TB, 250 TB and 1 PB instances with annual licenses that can grow to 16 PB.

“We are seeing customers who are tired of being on the hardware treadmill,” Braddy said. “They have aging hardware now and, increasingly, they are being told by bosses to move to the cloud. A lot of organizations have aging Isilon scale-out file storage and they are moving away from it and into the cloud. We are also seeing a lot of file server consolidation.”

SoftNAS started selling its cloud software in 2012 for AWS, and added support for Azure and VMware vCloud Air in 2014.


July 21, 2017  11:48 AM

Symbolic IO CEO Ignomirello arrested on assault charges

Garry Kranz
Storage

Symbolic IO, identified as one of SearchStorage’s storage startups to watch in 2017, now bears watching for different reasons than the other vendors on the list.

Brian Ignomirello, 46, the Symbolic IO founder and former CEO, has been arrested in New Jersey in connection with an alleged physical attack of his girlfriend.

As of Friday morning, Ignomirello was still listed as the Symbolic IO CEO on the company’s web page. However, his attorney, Mitchell Ansell, said Ignomirello is no longer with the company and the website no longer lists him on its leadership page. It’s not clear when Ignomirello left the company.

Ignomirello was taken into custody Wednesday on an outstanding bench warrant and for violating a restraining order, said Charles Webster, a spokesman for the Monmouth County Prosecutor’s Office. Webster did not provide further details. Colts Neck, New Jersey, police told Patch that the Monmouth County Emergency Response Team, a SWAT team, was called in to grab Ignomirello.

A report in the Asbury Park Press, citing local law enforcement, said Ignomirello’s arrest stems from an incident in May in which he allegedly “knocked his girlfriend to the ground, kicked her and punched her in the face.”

Ansell, Ignomirello’s attorney, said Ignomirello was issued a summons for aggravated assault in relation to those charges. Ansell cited “vast technical violations” with the warrant that charged his client this week with violating a restraining order.

“To violate a restraining order, you have to have intent. I don’t think the allegation rises to the level of intent,” Ansell said.

Ansell disputed media reports that Ignomirello was armed and barricaded himself in his central New Jersey residence. “He was not armed and there never was a barricade,” Ansell said.

Arrest could throw startup into turmoil

Symbolic IO has won praise for its Intensified RAM Intelligent Server (IRIS) memory-based storage technology, but the charges against its founder and leader could throw the startup into turmoil.  Company officials have not yet commented publicly on the matter.

The Asbury Park Press cited a police report from a May incident in which Ignomirello’s girlfriend purportedly said he had “slapped her in the face before, causing her nose to bleed,” although she reportedly did not report that alleged assault to police.

The newspaper also reported that Ignomirello’s girlfriend received a restraining order against him approximately five years ago, but she subsequently had the order lifted. Ignomirello apologized to his girlfriend following the May incident, but she “told him to get away from her and that [she] didn’t want to talk to him,” according to the Press.

Symbolic IO came out of stealth in May 2016 with its “computational-defined” IRIS storage system, which it claims can shrink data stored in RAM by changing the manner in which binary bits get processed. SearchStorage named Symbolic IO to its list of storage startups to watch based on its technique for creating and recreating data as users request it.

Symbolic IO has received nearly $15 million in venture funding, including a $12.75 million Series A round of seed capital in December 2014. Ignomirello is a serial entrepreneur. He filed a patent in May to develop a housing device designed to turn an automobile’s rearview mirror into an interactive computing device by displaying application images on a car’s windshield.

NOTE: This story was updated Friday afternoon with comments from Ignomirello’s lawyer.

