LAS VEGAS – This blog was supposed to tell our readers about the big developments surrounding Day 1 of the NetApp Insight user conference. Sadly, today’s subject matter touches on more sobering news.
On Sunday, as NetApp users, executives and media converged on Mandalay Bay Resort and Casino for the big event, a shooter with a high-powered assault weapon wrought havoc and put the frivolity of Las Vegas in a stark context. Perched in his room on the 32nd floor at Mandalay Bay, police here say, 64-year-old Stephen Paddock indiscriminately rained a hail of bullets on neighboring concert-goers, killing nearly 60 people and wounding more than 500 others, including some critically.
Law enforcement officials have said they expect the number of fatalities to rise. Let us hope they are mistaken.
On the morning after, much of downtown Las Vegas remained on lockdown until mid-afternoon. The Strip, which normally teems day and night with shoppers, fun-seekers and those combining business with pleasure, was deserted. Major arteries were closed. Traffic was almost nonexistent. Shops remained shuttered. Those who escaped the carnage are still suffering, the frightful memory coaxing fresh tears from scores of meandering hotel guests.
The gambling capital of the world resembles a scene from the 1959 nuclear-horror flick, “On the Beach.”
Mandalay Bay has been the vendor’s preferred venue for NetApp Insight for a number of years. The 2017 event was to be the culmination of NetApp’s long, tortuous journey toward hyper-convergence. On Thursday, NetApp planned to take the main stage and take the wraps off NetApp HCI, the hyper-converged gear built with SolidFire all-flash storage. But no one here is thinking of such things now.
Yes, life indeed does go on. And sorrow has many manifestations, it seems. Even now, amid the gore and multiple crime-scene investigations, people sit meekly in the casinos, the arcades trilling incongruously against a police update being broadcast on local TV.
“I am shocked and saddened by the tragic event that occurred in Las Vegas at the Mandalay Bay last night. I am sure you all share these sentiments. My heart and the hearts of thousands of NetApp employees break for the loved ones of those affected by the terrible events,” NetApp CEO George Kurian said in a prepared statement.
NetApp separately announced that planned Insight activities are expected to resume Tuesday, beginning with a general session at which the shooting tragedy almost certainly will dominate discussion. SearchStorage.com remains at NetApp Insight, and we will update our readers as soon as we have news to bring.
From this vantage point, NetApp would do better to postpone the event and give Las Vegas time to heal its grievous wounds. Set against this moment, data storage seems less than trivial.
Software-only hyper-converged vendor Maxta said it can enable customers to migrate virtual machines from VMware ESXi to Red Hat Virtualization, and run both hypervisors on Maxta hyper-converged systems.
Red Hat Virtualization is based on open-source KVM. Maxta executives said the goal is to provide an easy migration path for customers to move from ESXi to RHV. Maxta MxSP software runs on x86 servers, and is sold stand-alone or packaged on appliances by resellers.
Why would Maxta hyper-converged customers want to migrate VMs from VMware to RHV? Mostly to avoid paying costly VMware licenses. And why does Maxta want to help customers move off VMware, by far the most popular hypervisor? Because customers are asking for it, Maxta CEO Yoram Novick said.
“We heard from customers: support multiple hypervisors, and make it easy,” Novick said. “Customers say, ‘We may be VMware today but we may go to Red Hat in the future. We may even do both at the same time.’”
And why support Red Hat’s KVM rather than open-source KVM versions in Maxta hyper-converged software?
Kiran Sreenivasamurthy, VP of product management, said Maxta’s customers want the support from Red Hat that is missing from open-source KVM versions.
“We believe Red Hat has the market share and the ecosystem to support this move,” Sreenivasamurthy said. “Why is VMware popular? It’s all about the ecosystem. You have tools for backing up and managing it. Red Hat has that similar kind of ecosystem. It’s easier to run on Red Hat than to support different [KVM] hypervisors.”
And what about Microsoft Hyper-V? That is often the second commercial hypervisor storage vendors support, after VMware.
Novick said Hyper-V support will likely follow, but there is more interest in RHV now.
“We know Hyper-V is out there, but the target market we’re going after is predominantly VMware and Red Hat KVM,” Novick said. “And we’re seeing an increasing interest in Red Hat. Certain workloads don’t make sense for VMware. But this is not about moving away from VMware. It’s about moving certain workloads to Red Hat.”
Sreenivasamurthy said the migration process is simple: customers select the VMs they want to move, and Maxta hyper-converged software automatically exports them from VMware and imports them into RHV. Following the migration, the RHV VMs can be managed through the Maxta MxInsight for KVM management console.
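Maxta hasn't published the internals of that export/import pipeline, but the general shape of a VMware-to-KVM disk migration is well known: export each VM's VMDK disk, convert it to a KVM-friendly format such as qcow2, then register the result with the target hypervisor. A rough, hypothetical sketch of the conversion step — `plan_migration` and the paths are invented for illustration, not Maxta's actual tooling, though `qemu-img convert` is the standard open-source command for this job:

```python
# Hypothetical sketch of a VMware-to-KVM disk conversion plan.
# qemu-img is the standard open-source tool for converting VMDK
# disks to the qcow2 format used by KVM; the helper, paths and
# naming here are illustrative, not Maxta's actual pipeline.

def plan_migration(vm_names, export_dir="/exports", import_dir="/rhv/images"):
    """Build the qemu-img commands that would convert each exported
    VMDK disk into a qcow2 image ready for import into RHV."""
    commands = []
    for vm in vm_names:
        src = f"{export_dir}/{vm}.vmdk"    # disk exported from ESXi
        dst = f"{import_dir}/{vm}.qcow2"   # image to register with RHV
        commands.append(
            ["qemu-img", "convert", "-f", "vmdk", "-O", "qcow2", src, dst]
        )
    return commands

if __name__ == "__main__":
    for cmd in plan_migration(["web01", "db01"]):
        print(" ".join(cmd))
```

In practice, purpose-built tools such as Red Hat's virt-v2v also rewrite guest drivers during conversion; disk format translation is only one part of the job.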
Maxta’s chief marketing officer, Barry Phillips, called the migration feature “the first step in supporting Red Hat,” which has its own hyper-converged software. Phillips said the vendors will also discuss joint go-to-market programs.
Dell EMC today enhanced its Nutanix-powered XC hyper-converged platform by adding data protection features for Microsoft Windows Hyper-V.
The news, made during Microsoft Ignite, shows Dell EMC is keeping its commitment to maintain its partnerships with vendors Dell also competes against. Nutanix competes with other Dell EMC hyper-converged infrastructure (HCI) appliances – mainly VxRail – and Hyper-V competes with Dell-owned VMware’s hypervisors.
The news also comes as IDC shows Dell EMC overtaking Nutanix for the hyper-converged market share lead. According to second-quarter figures from IDC’s converged systems tracker, released Tuesday, Dell EMC led with 29% of the $763.4 million HCI market, compared with Nutanix’s 20.9% share. Dell EMC’s HCI revenue grew 149% year over year in the quarter, against overall HCI market growth of 48.5%.
But the Dell EMC XC Series – which began as an OEM deal between Dell and Nutanix in 2014 – continues to sell well. Dan McConnell, Dell EMC VP of converged platforms, said the Dell EMC XC Series generated more than $100 million in revenue in the second quarter. Judging by IDC’s numbers, Dell EMC’s overall HCI revenue would be around $220 million.
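That back-of-the-envelope estimate is easy to verify from the IDC percentages cited above (figures in millions of dollars):

```python
# Back-of-the-envelope check of the IDC figures cited above.
hci_market = 763.4          # total Q2 HCI market, $M (IDC)
dell_emc_share = 0.29       # Dell EMC's share of that market
nutanix_share = 0.209       # Nutanix's share

dell_emc_revenue = hci_market * dell_emc_share
nutanix_revenue = hci_market * nutanix_share

print(f"Dell EMC HCI revenue: ~${dell_emc_revenue:.0f}M")   # ~$221M
print(f"Nutanix HCI revenue:  ~${nutanix_revenue:.0f}M")    # ~$160M
print(f"XC share of Dell EMC HCI: {100 / dell_emc_revenue:.0%}")
```

In other words, the XC Series' $100 million quarter accounts for a little under half of Dell EMC's total HCI revenue.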
“It’s a sizable, meaningful business that continues to grow at high rates,” McConnell said of the Dell EMC XC Series.
Dell EMC positions VxRail with VMware vSAN HCI software as the HCI appliance for VMware shops and Dell EMC XC as HCI for other hypervisors.
The new Dell EMC XC Data Protection Management Console integrates Dell EMC Avamar and Data Domain backup into the Nutanix Prism Management Console. That enables Avamar software to back up data from Hyper-V to Data Domain targets and tier data to Microsoft Azure and Dell EMC Virtustream public clouds. The Dell EMC XC management console launches as an application from inside Nutanix Prism.
McConnell said data protection policies can be set automatically and applied to any new virtual machines brought online. He said the console will eventually support the Nutanix AHV hypervisor following more development work between AHV and Avamar. But the first version due in the fourth quarter of 2017 only supports Hyper-V.
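Dell EMC hasn't detailed how the console's automatic policy assignment works, but the usual pattern for applying protection policies to newly created VMs is a set of ordered matching rules with a catch-all default, so nothing comes online unprotected. A hypothetical illustration — the rule structure, policy names and `assign_policy` helper are all invented for this sketch:

```python
# Hypothetical sketch of automatic data-protection policy assignment:
# each new VM is matched against ordered rules; the first match wins,
# with a catch-all default so no VM comes online unprotected.
# Rule predicates and policy names are invented for illustration.

RULES = [
    (lambda vm: vm["name"].startswith("sql-"), "daily-to-datadomain"),
    (lambda vm: vm.get("tier") == "archive",   "weekly-to-azure"),
    (lambda vm: True,                          "default-nightly"),  # catch-all
]

def assign_policy(vm):
    """Return the backup policy for a newly created VM."""
    for matches, policy in RULES:
        if matches(vm):
            return policy

new_vms = [
    {"name": "sql-prod-01"},
    {"name": "web-03", "tier": "archive"},
    {"name": "dev-sandbox"},
]
for vm in new_vms:
    print(vm["name"], "->", assign_policy(vm))
```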
Also, the XC Series Azure Log Analytics Solution will integrate with Microsoft Operations Management Suite-based automation tools to enable trend analysis and proactively detect problems.
McConnell said Hyper-V customers make up from 10% to 30% of XC customers in any given quarter.
“As we look to take hyper-converged infrastructure more down market, we encounter the need to run on Hyper-V a lot more,” he said.
Virgin Group founder Richard Branson stepped away from his hurricane relief efforts last week to warn against climate change inaction at Veritas Vision 2017 in Las Vegas.
Branson said the recent trail of hurricanes shows global warming’s destructive powers are a present threat.
“I hope most of you in this room would agree, knowing what you know about data and how to read data, that the impacts of Harvey, Irma and Maria send us straightforward global messages,” he said. “This is the moment to make a full and unambiguous commitment to the Paris Agreement, not a moment to retreat from it. If we fail to take action now, the price tag of inaction will be enormous. It’s already becoming enormous.”
During his Veritas Vision 2017-opening keynote, Branson emphasized how access to information is crucial when dealing with the destructive forces of nature.
“Through this time of crisis, the importance of information has become very, very clear. As communication and information collapsed, we went from the 21st Century to the 19th Century in a matter of hours,” Branson said.
Branson spoke weeks after Hurricane Irma destroyed his home on Necker Island, which is part of the British Virgin Islands. The Category 5 hurricane featured winds that reached 185 mph and destroyed several Caribbean Islands before reaching Florida.
“When infrastructure fails, it makes communication almost impossible. Even the most basic coordination becomes difficult,” he said. “We saw that first-hand when organizing aid, transportation and fuel.”
Information, Branson said, is also under assault. In politics and in business, data and information need to be used correctly. We are living in a time when more data is available to global communities in real time than ever before. Veritas Vision 2017 speakers noted that 2.5 quintillion bytes of data are now produced daily.
“We tend to take our access to information for granted, yet we are not doing enough to protect it,” he said. “As is often quoted, around 90 percent of the data today has been created in the last two years alone. But there is reason to be wary: data, information and fact are not interchangeable terms.”
Pointing to what he called unethical business practices and shady political decision-making, Branson told the Veritas Vision 2017 audience that the truth is often harder to find than ever before, and that it is up to us as governments, individuals and businesses to confront that challenge.
“It’s up to us to manage, to protect and to utilize the truth and information in order to make the world a better and much safer place,” he said. “There is another simpler lesson about the value of information. You cannot build a brand on abstract revenue targets. Brands are built on values. Brands are built on principles. They are built on beliefs.
“And brands are built on information and an in-depth understanding of the market, your competitors and your customers. Having access to information at the right moment is a huge part of that. It allows entrepreneurs to know when to take risks and how to make decisions. That is what disruption is all about.”
Branson has called for a Marshall Plan to rebuild Caribbean islands to better withstand hurricanes and to move the region away from fossil fuel dependency. He wants to build more energy-efficient infrastructure and make better use of the Caribbean’s natural wind and solar resources.
“As an investor and entrepreneur, I don’t see any downside to that,” he said. “And as our corner of the world is subjected to nature’s fury, I can’t help but ask how much more disruption is needed to show that the way we treat our planet is having serious, serious consequences.
“Scientists have highlighted [the danger of] carbon emissions and ocean surface temperatures,” Branson said. “When Irma struck, Atlantic and Caribbean waters reached temperatures of 86 degrees Fahrenheit in some places. As people in Houston, Florida and across the BVI would agree, and I suspect by this morning Puerto Rico as well, the time to act is now.”
In just a few days, the door will shut—and your outstanding data storage product will be left standing out in the cold rather than in the running for a Storage magazine/SearchStorage.com Storage Products of the Year award!
If you rolled out a new or enhanced product in the past year, point your browser to our entry form where you can read about the judging criteria and even get some tips on how to fill out the form.
I’ve been involved with the Storage Products of the Year competition for 14 of its 16 years, and I can say without any self-consciousness that there are few accolades, if any, as prestigious as these awards. Just ask any one of the 238 gold, silver and bronze award winners to date. That’s some pretty heavy metal and a mark of honor for the vendors and products that have been singled out for innovation, performance, ease of integration, ease of use, functionality and value.
Make no mistake, the competition is tough, and our panel of judges—an assembly of prominent storage analysts, consultants and editors—holds each product to the highest standards. That’s why winning is such a great honor, and why vendors display their award badges so proudly.
Sixteen years is a long time for anything in the IT world, but year in and year out, the Storage magazine/SearchStorage.com Storage Products of the Year awards have continually strived to recognize the storage products that stand out from their competition.
Some of the companies and product names that snagged awards 14 or 15 years ago may be memories now, the results of the mergers and acquisitions that help ensure cutting-edge technologies survive and thrive in the storage ecosystem. On the other hand, some companies and their products have established trails of excellence over the course of 15 Storage magazine Products of the Year competitions with repeat wins. And while names like McData and AppIQ may be distant memories today, other Products of the Year winners have managed the transition from upstart to established star. In 2002, our inaugural Products of the Year competition, Commvault won gold for its Galaxy 4.1 backup app, then came back five more times to add more gold, silver and bronze medals to its haul.
But, as they say, you’ve got to be in it to win it. The deadline is looming, so don’t delay—fill out that entry form now!
Of all the EMC executives who left the company since the Dell acquisition, the departure of David Goulden will have the greatest impact on the Dell EMC storage business.
Dell announced Goulden’s departure, effective at the end of 2017, in a news release last Friday.
Long-time EMC CEO Joe Tucci stepped aside when the $60 billion merger was completed a year ago, but Tucci was already headed toward retirement, and Michael Dell absorbed the EMC CEO duties. Goulden served as CEO of the EMC division that included its core storage products, and he led the Dell EMC group after the merger. He gave the multibillion-dollar storage business continuity in the transition from EMC to Dell EMC.
Goulden has close ties to Tucci, going back to their days together at Wang Global in the 1990s. Goulden joined EMC in 2002 and held positions in sales, marketing and new business development, as well as serving as CFO, president and finally CEO of the Information Infrastructure group. That made him instrumental in the transition period. The big question now is whether that transition period is over.
The move and the timing of Goulden’s departure should not be a big surprise. It’s common for top executives to stick around for a year or so after a big acquisition and then move on. Goulden was a candidate to replace Tucci as EMC CEO before the Dell deal came about, and he likely will pursue a CEO post at another company.
But what does this mean to the Dell EMC storage business? Goulden was among a group of key EMC executives that Dell kept on to run the enterprise group after the merger. Others included CMO Jeremy Burton, head of IT services Howard Elias and sales chief Bill Scannell. They all remain at Dell but none of them will replace Goulden. Jeff Clarke, who runs Dell’s Client Solutions (PCs) business, will take over the Dell EMC Infrastructure Group. That gives Clarke immense power inside of Dell and also puts the core Dell EMC storage business in the hands of an executive without a great deal of storage experience.
Friday’s press release quoted Michael Dell and Goulden saying the merger is going well and the time is right for a change.
“The transition of EMC into Dell EMC is complete and we’re executing in the market with great momentum,” was part of Goulden’s quote.
But is that the case? On Dell’s quarterly earnings call two weeks ago, Goulden and Dell CFO Tom Sweet talked a lot about how areas of the Dell EMC storage business need improvement. Goulden identified weakness in the midrange storage business and outlined “robust plans” to remedy sales.
Sweet added: “While the integration has gone relatively smoothly in many areas, we recognize that we still have work to do to drive profitability higher and improve velocity in our storage business.”
Dell is by far the overall networked storage revenue leader, according to market research firm IDC. In the 2017 second quarter figures IDC released last week, Dell had $1.5 billion in revenue – more than twice that of second-place NetApp. But Dell’s revenue declined a whopping 22.3% from the previous year while NetApp increased 16.7% and the total market slipped 5.4%. The combined market share of Dell and EMC fell from 34.6% in the second quarter of 2016 to 28.4% a year later.
Goulden blamed some of the Dell EMC storage share decline on the change in the quarterly calendar after the transition from EMC to Dell, but it’s hard to attribute all of that 22.3% drop to the calendar. And his comments during the last earnings call show other problems exist.
The good news for Dell EMC storage is it leads in all-flash market share, and that market grew 37.6% to $1.4 billion last quarter according to IDC. It is also making a strong push to become the leader in hyper-convergence with its VxRail appliances powered by Dell-owned VMware’s vSAN software. But with the latest change at the top of Dell EMC, who knows what the company’s storage portfolio will look like a year from now.
Cloud-based file service provider Nasuni last week closed on $38 million in funding to boost its expansion plans in research and development and its channel and go-to-market efforts.
The Goldman Sachs Growth Equity-led financing boosted Nasuni’s total to about $120 million since the company incorporated in 2009. The Nasuni cloud storage momentum should help the vendor become cash flow positive by the end of 2018 and profitable by the end of 2019, according to the company’s president, Paul Flanagan.
“We’ve got a great opportunity as enterprises are thinking about moving their unstructured data and taking advantage of not only cloud economics but cloud collaboration,” said Flanagan, who joined Boston-based Nasuni in April.
Flanagan has been a Nasuni board member and investor since 2009 through his work at Sigma Partners and Sigma Prime Ventures, and he helped to build and scale two Boston-area startups, VistaPrint and Storage Networks.
Nasuni cloud storage software ties to Amazon, Azure
Nasuni was an early cloud gateway startup with a virtual appliance designed to cache active data on premises and store less frequently accessed files in the object storage of public clouds such as Amazon Web Services and Microsoft Azure. The Nasuni cloud-native UniFS global file system provides a single namespace to manage the unstructured data. The company also sells physical appliances bundled with its software.
Tom Rose, Nasuni’s chief marketing officer, estimated that customers now keep about 70% of their data in public clouds and 30% on premises in private clouds. He said the increase in private-cloud use reflects Nasuni’s growing partnerships with Dell EMC, with its Elastic Cloud Storage, and IBM, with its Cloud Object Storage (formerly Cleversafe).
The average annual contract for a Nasuni cloud storage software subscription is now $100,000, three times more than it was two years ago, according to Rose. He said customers tend to ask for 20% more capacity when they renew subscriptions, reflecting the increase in unstructured data as well as the use of Nasuni cloud storage for additional workloads.
Flanagan said Nasuni’s revenue has grown between 80% and 90% during each of the last four years. He said the company has about 350 customers ranging from the small- to mid-sized businesses that were the company’s original focus to large enterprises that deploy hundreds of terabytes to more than a petabyte.
“We give enterprises the ability to move all that unstructured data to the cloud and be able to get cost savings by not having to do backup anymore, business continuity. They don’t have to get additional providers for that,” said Flanagan. “And on top of that, we allow these enterprises to collaborate on that data around the world in ways they weren’t able to do previously.”
He said Nasuni’s primary competitors remain traditional file storage from NetApp and Dell EMC’s Isilon, and the company only infrequently runs up against startups.
Latest Nasuni storage financing round
Flanagan said Nasuni’s $25 million financing round in December 2016 gave the company enough cash to operate through June 2018. The executive team didn’t expect to seek additional funds until later this year or the first quarter of 2018. But he said, in the ordinary course of “teeing up” investors for the future, Goldman Sachs’ David Campbell asked, “Why wait?” Campbell formerly worked in Goldman Sachs’ IT organization managing storage infrastructure, according to Flanagan.
“Anytime you can get an investor like Goldman Sachs and particularly a guy like David Campbell, who understands what we do, on our board, it was a great move for us,” Flanagan said.
Flanagan said the money would allow Nasuni to explore additional opportunities to grow and expand, whether through aligned partnerships or business development deals with technology or cloud partners. He said the cash would also help to finance “a long funnel of projects” from an engineering standpoint.
Rose said customers can expect to see new capabilities for the Nasuni cloud storage portfolio, including support for more clouds. He said Nasuni moved this year into larger new headquarters in Boston for additional space to expand and grow.
The Toshiba memory business could be sold to a Bain Capital-led consortium that reportedly includes Apple, Dell Technologies and Seagate if the parties can strike a deal that passes legal muster over the objections of flash partner Western Digital.
Toshiba’s Board of Directors voted on Wednesday to sign a memorandum of understanding with Bain Capital after receiving a new proposal from the consortium. Toshiba claimed it would try to reach a definitive agreement with the Bain-led group by month’s end, but the company also noted the memorandum is non-binding, leaving the door open for other bidders.
Toshiba has been trying to sell its profitable memory business to cover enormous losses associated with its struggling U.S.-based Westinghouse Electric nuclear power division. Bids are reportedly in the range of $18 billion to $20 billion.
In addition to the Bain-led consortium and a Western Digital-backed group, Toshiba confirmed that a group led by Hon Hai Precision Industry Co. Ltd., also known as Foxconn, a Taiwan-based electronics manufacturer, has been in the running.
Toshiba said the Bain-led consortium includes the Innovation Network Corp. of Japan and the Development Bank of Japan. But the Wall Street Journal reported the group seeking to buy the Toshiba memory business also includes Apple, Dell, Seagate and Korean chipmaker SK Hynix.
The involvement of Dell and Seagate would throw two prominent storage vendors into the mix of companies seeking a share of Toshiba’s coveted memory business. Seagate is Western Digital’s main hard disk drive (HDD) competitor, but has lagged in sales of faster NAND flash memory-based solid-state drives (SSDs). Unlike some of the leading SSD manufacturers, Seagate has had no stake in a semiconductor fab that produces NAND flash chips. Seagate declined to comment on the Toshiba memory business.
Western Digital also bidding for Toshiba memory business
Western Digital, which became a joint venture partner in the Toshiba memory business through its 2016 acquisition of SanDisk, is also part of a consortium with a bid to acquire the Toshiba memory business.
Through a prepared statement, Western Digital expressed its disappointment that Toshiba signed the memorandum with the Bain-led consortium despite its “tireless efforts to reach a resolution that is in the best interests of all stakeholders.”
Western Digital claimed it has been “flexible” and “constructive” and submitted numerous proposals to address Toshiba’s concerns, and noted “multiple courts have ruled in favor of protecting SanDisk’s contractual rights.”
San Jose, California-based Western Digital has pursued legal action through the California court system, claiming the sale of the Toshiba memory business cannot take place without SanDisk’s consent. The company sought injunctive relief after Toshiba announced in June that the Bain-led group was the preferred bidder and it planned to strike an agreement by June 28.
Just prior to a July court hearing, Toshiba and Western Digital worked out an agreement that was approved by a San Francisco Superior Court judge. The court order requires Toshiba to publicly announce within 24 hours any agreement that “contemplates a closing” to sell its share of the flash memory joint ventures and give SanDisk two weeks’ written notice before any closing occurs.
In August, the court granted SanDisk’s request for a preliminary injunction that bars Toshiba from blocking SanDisk affiliates’ access to shared databases and from refusing to ship certain engineering wafers and samples, to ensure continuing operation of their NAND flash joint ventures. Toshiba is appealing the order.
Western Digital sought injunctive relief through the San Francisco Superior Court while its arbitration requests remain pending with the International Court of Arbitration, a Paris-based institution operated by the International Chamber of Commerce. The company is seeking to undo Toshiba’s April 1 transfer of its NAND flash memory joint venture interests to the Toshiba Memory Corp. subsidiary and prevent any subsequent sale without SanDisk’s consent.
At the U.S. Open tennis tournament last week, Rafael Nadal solidified his Hall of Fame credentials and Sloane Stephens became a hall of fame candidate. And the International Tennis Hall of Fame gained more artifacts to add to the thousands it is already beginning to digitize.
The ITHF in Newport, Rhode Island, is months into a digitization project that will categorize and make searchable more than 25,000 historic tennis items. The hall uses a Dell EMC Isilon NAS array and Piction digital asset management software as its primary tools for the digitization project.
Dell Technologies donated the Dell EMC Isilon storage as part of a partnership it forged with the Hall of Fame in late 2016. The partnership also included Dell sponsoring the Hall of Fame Open tournament in Newport for five years.
Doug Stark, the ITHF museum director, said his organization decided to go digital to better manage its historic items.
“First, we want to digitize all of our collection so we know everything we have and it’s well organized,” Stark said. “The second part is, we want people around the world to be able to access this. One of the ways might be going to our web site and being able to type in and search everything on Arthur Ashe, or everything we have on the U.S. Open or any Hall of Famer.
“We also want to take the digitized assets and incorporate them into social media and produce some videos. Getting it organized is the key to getting it out to the world.”
Stark estimated it could take five to 10 years just to digitize the Hall of Fame’s current assets on the Dell EMC Isilon array, and new materials constantly come in. “We will prioritize what should be digitized and how to roll this out to the public,” he said.
The museum artifacts run the gamut from a Roger Federer hologram to more than 1,100 rackets, 250 scrapbooks of tennis greats, 3,500 video and audio recordings, 600 pieces of tennis art and a 5,000-plus book library. The museum was established in 1881 and Newport hosted the U.S. Nationals tournament – the forerunner of the U.S. Open – from 1881 through 1914. The Hall of Fame began inducting retired greats in 1955.
But Stark said space constraints and the sheer volume of its inventory mean the museum can display less than 10% of its artifacts at any time. The digitization project will greatly increase that total.
“The Hall of Fame’s mission is to preserve and promote the history of tennis,” Stark said. “[Dell EMC Isilon storage] helps us to preserve that. Now that we know how to use that, we can start to promote the history of tennis using digitization as a tool.”
Spectra Logic has added a midrange BlackPearl NAS disk appliance to augment its tape-based object storage.
Spectra Logic launched the object-based BlackPearl line as a linear tape file system gateway in 2013, integrating a RESTful interface modeled after Amazon Simple Storage Service (S3). The disk appliance released Tuesday is branded as Spectra Logic BlackPearl Network Attached Storage (NAS). It exports file and object interfaces.
The Spectra Logic BlackPearl NAS archive can be configured as a converged system to replicate inactive primary data to multiple targets, including the cloud, a backup BlackPearl disk appliance and Spectra tape libraries.
“Customers with a file domain workflow can start out with a BlackPearl [NAS] and upgrade over time to a full object or converged storage platform,” Spectra Logic CTO Matt Starr said.
The Spectra Logic BlackPearl NAS caters mostly to media and entertainment companies that run a file domain workflow. Spectra Logic integrated the code base of its Verde NAS product, notably its Network File Interface (NFI) application, which transparently moves file data to back-end object storage.
When set up as a mount point, NFI takes a snapshot of the file system and sends only deltas to an object interface or cloud bucket. Customers can retrieve the data locally using the Spectra Logic Eon browser.
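Spectra Logic hasn't published NFI's internals, but the snapshot-and-delta technique it describes is straightforward to illustrate: diff two snapshots of the file system and ship only what changed. In this sketch a snapshot is modeled as a map of file paths to content checksums, which is an assumption for illustration; the real system works at the file-system level.

```python
# Illustrative sketch of snapshot-based delta detection, the general
# technique NFI is described as using. A snapshot is modeled here as
# a dict mapping file paths to content checksums.

def compute_deltas(previous, current):
    """Return the paths that are new or modified since the last snapshot."""
    return sorted(
        path for path, checksum in current.items()
        if previous.get(path) != checksum
    )

snap_monday = {"/media/a.mov": "c1", "/media/b.mov": "c2"}
snap_tuesday = {"/media/a.mov": "c1",   # unchanged: not re-sent
                "/media/b.mov": "c9",   # modified: re-sent
                "/media/c.mov": "c3"}   # new: sent

# Only the deltas would be pushed to the object interface or cloud bucket.
print(compute_deltas(snap_monday, snap_tuesday))  # ['/media/b.mov', '/media/c.mov']
```

Sending only deltas keeps the traffic to the back-end object store proportional to what changed, not to the total size of the file system.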
“It’s the same BlackPearl hardware, but we’re pulling in more of the Verde functionality to make a more feature-rich product,” Starr said. “This system takes lesser changing data and pushes it to BlackPearl NAS, and then allows NFI to make copies” to backup targets, he noted.
An entry-level BlackPearl NAS disk appliance is a 2U rack that takes two expansion chassis. List price is $14,200 for a 2U, eight-drive building block.
The densest configuration is a 4U product that scales to 7.1 PB across nine expansion chassis and 40U of rack space. The 4U master node houses 8 TB archive hard drives.
Starr said Spectra Logic BlackPearl NAS capacity can extend to hundreds of petabytes when used in conjunction with back-end tape storage.