Arcserve Unified Data Protection customers are being told to patch the backup platform after a security provider found issues that could leave data unprotected.
The four vulnerabilities in Arcserve UDP could compromise sensitive data through access to credentials, phishing attacks and the ability for an attacker to read files from the hosting system without authentication, according to Digital Defense, the company that discovered the problems.
Digital Defense, based in San Antonio, reached out to Arcserve with technical details of the vulnerabilities, said Mike Cotton, senior vice president of engineering at the security provider, which disclosed its findings publicly last week.
“[We] walked them through scenarios with how attackers can exploit the vulnerabilities in question,” Cotton wrote in an email. “Their team was extremely professional and they were very proactive in wanting to understand where the vulnerabilities were and how precisely to fix them.”
The vulnerabilities affect Arcserve Unified Data Protection 6.5, updates 3 and 4. Update 4 launched last month. UDP, Arcserve’s flagship product, features backup, recovery, automated testing, granular reporting and hardware snapshot support.
Arcserve Unified Data Protection customers can download a patch from Arcserve Support and reach out to the company with any outstanding questions or concerns, the vendor said. Arcserve, based in Eden Prairie, Minn., also provided instructions for applying the fix manually.
“Arcserve is committed to developing data protection solutions that meet the highest security standards to protect our partners, customers and, most importantly, their data,” the data protection vendor said in a statement. “We welcome reports from security researchers and experts so we can quickly and efficiently address any vulnerabilities, which was done by our incident response team in this case.”
Cotton said installing Arcserve’s patch is the best way to address these particular flaws.
“More generally, undertaking controlled network access strategies to limit access to the administrative interfaces of key backup systems can further harden installations such as this,” Cotton wrote.
Digital Defense regularly works with vendors regarding the disclosure of zero-day vulnerabilities. When the company’s Vulnerability Research Team finds issues and validates them, it contacts the affected vendor and helps with remediation actions.
Digital Defense has found vulnerabilities in other major backup products, including Dell EMC’s Avamar in 2017, but Cotton said this is the first time the company has worked with Arcserve.
“We believe they’ve addressed the flaws in question for these vulnerabilities,” Cotton wrote, “so no further action is necessary for them.”
Scale Computing CEO Jeff Ready said the hyper-converged vendor was on the edge of profitability before it decided to expand its edge computing sales.
Scale today said it has secured $21.2 million in funding, with another $13.6 million coming by the end of the year. The $34.8 million in new funding – led by strategic partner Lenovo – brings Scale’s total funding to $95.8 million.
It was Scale’s first funding since an $18 million round in 2015, and Ready said he was working to bring the Indianapolis-based hyper-converged pioneer into the black.
“I’m not from Silicon Valley,” he said. “I want to run this as a real business. I was on a plan that said, ‘Let’s make a profitable company.’ We were close. But we’ve decided to take a step back, invest more and push profitability out a little bit, probably around a year. This opportunity was not expected two years ago.”
Scale started selling non-VMware-based hyper-convergence to SMBs in 2012, putting it on a separate track from most early hyper-converged players. That is still the bulk of its business, and Ready said Scale has more than 3,000 customer deployments. But it has discovered a lucrative market of selling to companies in industries such as retail, healthcare and manufacturing, which have many small sites with little or no IT staff. Ready said Scale’s software and appliances are a good fit in these edge sites because they are easy to manage and have self-healing capabilities.
“SMB is still our bread and butter, and it’s why we’ve ended up positioned so well for edge environments,” he said. “They look like most of our SMB customers, with zero, one or maybe two people on site. They need applications to run and take care of themselves and prefer something that encompasses the entire infrastructure stack. They need a system that runs itself; they have no people or expertise locally to babysit it.”
There is one big difference between SMB and edge customers, though. The companies with many small sites also need central management for all of them. Ready said Scale manages hundreds of clusters as one large storage pool.
Scale isn’t alone on the edge, though. VMware claims it signed a deal with a retailer to install vSAN hyper-converged software on Dell EMC VxRail appliances across 1,200 stores this year.
Ready said he has considered VMware his main competition from the start, mainly because Scale gives customers the option of not using VMware.
“VMware inevitably will be our main competitors,” he said. “I’ve always considered VMware to be the main competitor.”
Ready declined to say how many employees Scale has, but said it’s “in the hundreds” and will add at least one hundred more over the next year. Many of those will come in the sales, marketing and channel teams. He is also counting on Lenovo to expand Scale’s market reach. Scale and Lenovo this month launched a formal partnership to sell an appliance with Lenovo servers running Scale’s HC3 Edge platform.
Allos Ventures, a Scale investor since its Series A round, also participated in the Series F funding. Ready said most of Scale’s other previous investors are also involved in the current round.
Jamie Lerner’s first 90 days as Quantum CEO confirmed the idea he had about the company when he took over. Discussions with customers convinced Lerner that Quantum’s greatest strength and greatest opportunity is storage for managing rich media data and video.
Now he has a solid plan for going forward: to complement Quantum’s StorNext file system technology with software features developed internally and through acquisition. Lerner said Quantum will also redesign its sales team more around specific solutions than geographies.
“What Oracle is to data management and what Cisco is to networking, that’s what Quantum is to rich media data,” Lerner said in an exclusive interview. “We see ourselves as the leader in infrastructure for managing rich media and video.”
Lerner said Quantum will add storage services intended specifically for that rich media and video. These services differ from storage features for traditional IT applications.
“We have storage, policy management and tiering technology,” Lerner said. “Over the next six to 18 months, we will layer on data services for video. Traditional data services – deduplication, compression, snapshots, clones, replication – are rarely used on video. With video you need a totally different set of data services. You need to search not just by keyword, but by image. You need deep media catalogues to know what media assets you have, what form they’re in, who has edited them. And you need a lot of analytics for video surveillance. Are people on video having an argument or holding a weapon? Has someone left a bag for a long period? Those are the data services needed for video.”
Lerner said Quantum will make core architectural changes by layering software modules on its current product. Those new modules will mostly be subscription based and cloud-hosted. He also said Quantum is likely to become a more aggressive acquirer of smaller storage companies.
“You’ll see a combination of tuck-in acquisitions to add features, skills and capabilities, and we’ll likely buy some technologies that are complete standalone entities,” he said. “They’ll be mostly software and cloud in nature, and heavy in rich media data services – analytics, a search catalog and other areas that will bolster our ability to handle petabytes of rich media.”
Lerner said he sees Quantum’s tape products retaining a large role in long-term archiving and cloud storage. The new Quantum CEO said the vendor will continue to sell its DXi disk backup appliances, while its dedupe capabilities will be woven into other storage platforms as a feature. But the main focus of development will be around StorNext and storage for rich media and video.
“Customers have figured out how to manage Oracle, they’ve figured out how to manage their email, but they are really struggling when incorporating video into their business,” Lerner said. “That’s the fastest segment of data growth.”
When Lerner became Quantum CEO in July, the vendor was knee-deep in an internal accounting probe to find the cause of financial reporting irregularities. The probe is now complete. The main issue it found was that Quantum recorded revenue earlier than it should have, with approximately $25 million to $35 million of prematurely recognized revenue as of June 30, 2018. “We expect it to be good revenue but it was recorded too soon,” Lerner said.
Quantum detailed those findings in an SEC filing in September. Now it is restating past quarters to place the revenue in the right periods. The Quantum CEO said the restatements are not expected to affect cash flow.
He said he expects the restatements to wrap up by the end of the year, so Quantum can begin filing its quarterly earnings reports again in early 2019.
“Most of the deep concern phase is behind us,” he said. “Now we’re putting in place new loans and accounting procedures. We’re on the down slope of most of the unfortunate things that have happened to this company over the last couple of years.”
LAS VEGAS – NetApp expects about 5,000 customers and partners to gather here Monday as it lays out a roadmap for flash-enabled AI and cloud applications.
NetApp Insight 2018 marks the fifth year the event has been open to analysts, OEMs and the media. Prior to 2014, NetApp used Insight exclusively as technical training for data center managers. In 2017, NetApp Insight got off to a late start, as opening day was postponed following a mass shooting on the Las Vegas Strip.
The three-day event will cover how the NetApp Data Fabric technologies extend to broader cloud use cases, said Kris Newton, a NetApp vice president of corporate communications and investor relations.
“We know that pretty much every organization, at some level, is thinking through AI and the cloud. We’ll have lots of discussion around how our customers can optimize their move to AI and the cloud, and see real results,” Newton said.
AI use cases cut across verticals, Newton said, spurring demand for faster storage and more efficient configurations to ingest data.
“AI puts pressure on your storage. You need storage that’s lightning-fast. You can’t wait around for your storage to respond,” Newton said.
Although she didn’t reveal details, Newton said NetApp Insight will highlight the role NVMe flash and storage class memory technologies play in a modernized data center. NetApp this year added an NVMe-based model to its All Flash FAS (AFF) Series arrays, mirroring similar moves by rival Dell EMC and others. The NVMe version of FAS allows customers to upgrade an existing FAS array by upgrading the OnTap operating system.
NVMe storage uses PCI Express to send traffic directly to CPUs, providing faster data transfer than traditional iSCSI command hops.
Sales of NVMe all-flash arrays will generate about $500 million in 2018, according to a report by analyst firm IDC, based in Framingham, Mass.
Storage arrays that extend NVMe from the back end to application hosts are sometimes known as rack-scale flash. NetApp’s AFF with NVMe technically doesn’t fit that definition: NVMe runs only on the front end, while customers can continue using SAS SSDs on the back end.
It wouldn’t be a NetApp Insight conference without product news. NetApp hinted it would reveal upgrades to its OnTap-based storage for converged systems, file storage and object platforms, as well as deeper integration for multicloud support.
Another point of interest will be any new details forthcoming on NetApp’s recent joint venture in China with server maker Lenovo. Under that deal, Lenovo will sell NetApp storage under its ThinkSystem brand.
Ctera Networks CEO Liran Eshel said his cloud file system company became cash flow positive this year, but it grabbed $30 million in new funding to grow as part of a booming market.
Ctera Networks raised $30 million in Series D growth equity funding to expand its global sales and delivery organization, especially in Southeast Asia and Singapore, and continue development of its enterprise file services technology. The latest financing round boosted the startup’s overall total to $100 million since 2008.
Ctera sells enterprise file software designed to cache active data on premises and shift colder data, in compressed and encrypted form, to object storage located in private and public clouds. In addition to translating data from file-to-object format, the software offers additional capabilities such as authentication, orchestration, synchronization and sharing.
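The cache-and-tier pattern described above can be sketched in a few lines. This is a minimal, illustrative model only, not Ctera's implementation: the age threshold, the in-memory "object store" and the use of zlib for compression are all assumptions made for the example.

```python
import time
import zlib

class TieringGateway:
    """Toy model of a cloud file gateway: hot files stay in the local
    cache; files idle past a threshold move to object storage in
    compressed form. (Illustrative sketch, not a vendor implementation.)"""

    def __init__(self, cold_after_seconds=86400):
        self.cold_after = cold_after_seconds
        self.cache = {}          # path -> (data, last_access_time)
        self.object_store = {}   # path -> compressed blob

    def write(self, path, data):
        self.cache[path] = (data, time.time())

    def read(self, path):
        if path in self.cache:
            data, _ = self.cache[path]
            self.cache[path] = (data, time.time())  # refresh recency
            return data
        # Cache miss: recall from object storage and re-warm the cache.
        data = zlib.decompress(self.object_store[path])
        self.cache[path] = (data, time.time())
        return data

    def tier_cold_data(self, now=None):
        """Move files idle longer than the threshold to object storage."""
        now = now or time.time()
        for path, (data, last) in list(self.cache.items()):
            if now - last > self.cold_after:
                self.object_store[path] = zlib.compress(data)
                del self.cache[path]
```

A real gateway would add encryption before upload and translate files to object keys, but the hot/cold split above is the core idea.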
Eshel said profitability is not Ctera’s top priority now. Neither is an IPO, although Eshel said “it’s definitely something we’re looking at.”
“We are investing significantly and will continue to invest in order to get more high growth and reach more customers,” Eshel said. “We could have just remained cash flow positive and be happy with where we are. But we think there’s much more in this market, and there’s much more land grabbing to be done. That’s why we will need to invest.”
Ctera customers have the option to use their own hardware or buy cloud gateway appliances that package the software. Ctera Networks introduced more powerful new HC Series Edge Filers on Dell and Hewlett Packard Enterprise (HPE) servers last summer.
“We are able to cover additional use cases and workloads that were traditionally solved by NAS systems. Now you could replace them with a more powerful cloud gateway,” Eshel said, claiming the new HC Series Edge Filers are doing well.
Eshel said Ctera Networks generally sells its software or gateways as part of deals with other infrastructure providers. He said the company often works with vendors such as Cisco Systems, Dell EMC, HPE and IBM.
“The bigger part of our business today comes from these infrastructure providers while we go to the market with complete solutions,” Eshel said. Ctera also has strategic reselling agreements with HPE and IBM.
Ctera Networks claims to have more than doubled its enterprise software subscription revenue during the last year. The company sells to cloud providers and enterprises, and its software is currently deployed in more than 200 private clouds, according to Eshel. Some of Ctera’s largest customers include McDonald’s, WPP, and the U.S. Department of Defense.
Eshel said the new funding would finance Ctera’s ongoing work to connect hyper-converged systems to a cloud file system. Ctera’s research and development arm is based in Israel, and the company’s sales headquarters is in New York.
Ctera’s competition in the cloud gateway space includes Nasuni and Panzura, though all three vendors have expanded their product lines with capabilities beyond mere file-to-object protocol translation.
Israel-based Red Dot Capital Partners led Ctera’s Series D funding round. Red Dot receives its funding from Temasek Holdings, an investment company owned by the Singapore government. Additional investors included Singtel Innov8, the VC arm of the Singapore-based Singtel Group telecommunications company. Also participating in Ctera’s Series D round were previous investors Benchmark Capital, Bessemer Venture Partners, Cisco, Venrock, Vintage Investment Partners and Viola Group.
Other recent funding rounds in the cloud market include $94 million for file and object storage vendor Cloudian, $75 million for cloud file sharing and content collaboration specialist Egnyte, $68 million for public cloud storage provider Wasabi Technologies, and $60 million for hybrid cloud computing and data management startup Datrium.
Quest backup has vaulted into the Office 365 workspace.
NetVault Backup 12.1 includes a plug-in that enables full and incremental backup and recovery of Office 365 Exchange Online mailboxes. Customers can back up to the cloud and on premises. They can restore individual, shared and resource mailboxes. The Office 365 plug-in provides flexible restore options and customers can restore only the data they need.
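The full-versus-incremental distinction works roughly like this sketch, which tracks a per-item change number and copies only items modified since the last run. It is a hypothetical model for illustration; the actual plug-in works through the Microsoft Graph API, not this interface.

```python
class MailboxBackup:
    """Toy full/incremental mailbox backup keyed on a per-item change
    number. (Illustrative only; not Quest's implementation.)"""

    def __init__(self):
        self.last_seen = 0   # highest change number already backed up
        self.store = {}      # message id -> saved copy

    def backup(self, mailbox, incremental=True):
        """Copy items changed since the last run (or all, if full)."""
        cutoff = self.last_seen if incremental else 0
        copied = 0
        for msg_id, (body, change) in mailbox.items():
            if change > cutoff:
                self.store[msg_id] = body
                copied += 1
        if mailbox:
            self.last_seen = max(self.last_seen,
                                 max(c for _, c in mailbox.values()))
        return copied
```

An incremental pass after a full one copies only what changed, which is why incrementals are far cheaper for large mailboxes.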
Quest built the plug-in with the Microsoft Graph API. While other vendors may be using old scripting, Quest is using new technology pushed by Microsoft, said Adrian Moir, senior consultant of product management.
“It allows us to grow across the Microsoft platform a lot faster,” Moir said.
Quest backup customers can restore emails, attachments, contacts and calendars.
Good timing for Office 365 backup
Don McNaughton, vice president of sales for Quest reseller HorizonTek, said many customers are using Office 365 and need backup for the SaaS app. Adding the backup support enables NetVault to remain a single data protection offering for those customers on Office 365. Standout features include the full or incremental backup options, full mailbox recovery and granular recovery, he said.
“So the timing was good,” McNaughton said.
Customers “want everything done in one place,” Moir said. That protection includes cloud and on-premises workloads, as well as hybrid approaches.
The Quest backup update builds on what the vendor launched with its NetVault 12.0 release, which aimed for more enterprise adoption. Moir said he expects Quest to add more technology focused on Office 365.
Competition includes some vendors purely focused on SaaS backup and others that incorporate it as part of an overall data protection platform.
“It’s a crowded market. Trying to differentiate is never easy,” Moir said, adding that he feels the Quest backup product’s flexibility, API incorporation, scalability and ease of use are standouts.
Beyond backup for Office 365
The NetVault Backup update also provides a multi-tenant architecture for managed service providers. In addition, an update to its VMware plug-in features vSphere 6.7 support.
McNaughton said HorizonTek is still analyzing the potential benefits of the other updates to 12.1 beyond the Office 365 backup.
McNaughton’s company has been a Quest partner since 2010. HorizonTek has actually been selling NetVault for about 20 years, predating the platform’s arrival at Quest. Quest acquired the NetVault platform from BakBone in 2010.
“After all this time, I’m still very happy introducing it to my customers,” McNaughton said. “NetVault has done a great job keeping up as technologies come out.”
Quest backup is on top of major trends in the industry, he said, including cloud integration and keeping everything under a single pane of glass.
McNaughton said he also likes how well NetVault integrates with Quest’s new QoreStor software-defined product as well as other secondary storage platforms.
Quest claims thousands of NetVault customers.
What are high availability applications if they’re not highly available?
According to a report released this month by SIOS, in partnership with ActualTech Media, one-quarter of respondents say their high availability applications fail every month. Only 5% said they never suffer an availability failure.
“An organization’s highly available applications are generally the ones that ensure that a business remains in operation. Such systems can range from order-taking systems to CRM databases to anything that keeps employees, customers and partners working with you,” the report said. “… The news is mixed when it comes to how well HA applications are supported.”
The report, “The State of Application High Availability,” gathered responses from 390 IT professionals in the United States and focused on tier-1 mission-critical applications, including Oracle, Microsoft SQL Server and SAP/HANA.
Twenty-six percent said their availability service fails at least once a month.
“This is a difficult statistic to grasp, as it would seem that there’s a fundamental flaw somewhere that needs to be corrected,” the report said. “Fortunately, not everyone is faring this badly.”
As for the rest of the 95% that reported failures in high availability applications: 28% said failures happen every three to six months, 16% said every six to 12 months and 25% said once per year or less.
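To put those failure frequencies in availability terms: the report gives frequency but not outage duration, so assuming (purely for illustration) that each failure costs one hour of downtime, the implied availability can be computed directly.

```python
def implied_availability(failures_per_year, hours_per_outage):
    """Availability implied by a failure rate, given an assumed
    outage duration. (The SIOS report states frequency only; the
    one-hour outage figure below is an illustrative assumption.)"""
    hours_per_year = 365 * 24
    downtime = failures_per_year * hours_per_outage
    return 1 - downtime / hours_per_year

# A monthly one-hour failure yields roughly "two nines and change";
# a yearly one-hour failure gets close to "four nines".
monthly = implied_availability(12, 1)
yearly = implied_availability(1, 1)
```

Under that assumption, even the best-performing quarter of respondents falls short of the 99.999% ("five nines") often cited for tier-1 applications.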
High availability requires expertise, said Jerry Melnick, president and CEO of SIOS, a software company that manages and protects business-critical applications. That includes getting the right software to match requirements, getting the system configured correctly, plus discipline and management in how organizations approach the cloud, he said.
Is high availability up in the cloud?
As with many other uses, organizations are exploring the use of the cloud for high availability applications.
“Modern organizations are embracing the hybrid cloud and making strategic decisions around where to operate critical workloads,” the report said. “But not everyone is keen on moving applications into an off-premises environment.”
Twelve percent of respondents have not moved any high availability applications to the cloud. Twenty-four percent are running more than half of those applications in the cloud.
“Putting all those pieces together … requires a higher set of IT skills,” Melnick said.
Once an organization gets there, though, the cloud can help streamline high availability operations.
“The cloud offers a unique opportunity to cost effectively get to disaster recovery and handle disaster recovery scenarios,” Melnick said.
Sixty percent of organizations that haven’t made the full move to the cloud said they prefer to keep high availability applications on premises where they have more control over the infrastructure.
Melnick said he thinks some of those respondents will eventually move to the cloud.
Datrium’s latest $60 million funding will fuel its hybrid cloud computing and data management product line and business expansion into Europe.
The Series D funding round boosted the Sunnyvale, California-based startup’s overall total to $170 million since 2012. New CEO Tim Page closed the round as he tries to pivot the company from its SMB and midmarket roots to enterprise sales of Datrium DVX.
Former CEO Brian Biles, a Datrium founder who is now chief product officer, said the startup is having a great quarter, and Page has “re-energized a lot of our focus on go-to-market.” Page’s experience includes building out an enterprise sales organization while COO at VCE, the VMware-Cisco-EMC joint venture that produced Vblock converged infrastructure systems.
Datrium DVX first hit the market in early 2016 with server-based flash cache to accelerate data reads and separate data nodes for back end storage. DVX software orchestrates and manages data placement between the Datrium Compute Nodes and Data Nodes and provides storage features such as inline deduplication, compression, snapshots, replication, and encryption.
Separate Compute and Data Nodes
Datrium now pitches its on-premises DVX as converging “tier 1 hyper-converged infrastructure (HCI) with scale-out backup and cloud disaster recovery (DR).” But Datrium DVX is not HCI in the classic sense with virtualization, compute, and storage in the same box. The Datrium DVX system’s Compute Nodes cache active data on the front end, and separate Data Nodes store information on the back end, enabling customers to scale performance and capacity independently.
Customers have the option to buy Datrium Compute Nodes, supply their own servers, or use a combination of the two, so long as they’re equipped with solid-state drives (SSDs) to cache data. The compute nodes support VMware, Red Hat and CentOS virtual machines. Disk- or flash-based Datrium Data Node appliances handle the backend storage.
This year, Datrium added a software-as-a-service Cloud DVX option to back up data in Amazon Web Services (AWS) and CloudShift software for disaster recovery orchestration. The company claimed that more than 30% of its new customers adopted Cloud DVX within the first three months of its availability. Biles said Cloud DVX could lower backup costs in AWS because Datrium globally deduplicates data.
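Global deduplication cuts cloud backup costs because repeated data across backups is stored only once. A minimal content-addressed sketch shows the mechanism; the fixed 4-byte chunks and in-memory store are invented for the example and bear no relation to Datrium's actual chunking.

```python
import hashlib

class DedupeStore:
    """Toy content-addressed backup store: data is split into fixed-size
    chunks and each unique chunk is kept once, so data repeated across
    backups consumes no extra capacity. (Illustrative sketch only.)"""

    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = {}   # sha256 digest -> chunk bytes

    def backup(self, data):
        """Store data; return the recipe (digest list) needed to restore it."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store unique chunks once
            recipe.append(digest)
        return recipe

    def restore(self, recipe):
        return b"".join(self.chunks[d] for d in recipe)
```

Backing up a second dataset that mostly overlaps the first adds only the chunks that are actually new, which is where the cloud capacity savings come from.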
Biles characterized Datrium’s Series D funding as a “standard round” that will help to grow all parts of the company. He said Datrium currently operates in the United States and, to a lesser degree, in Canada and Japan, and the company plans to expand to Europe next year. Datrium has more than 150 employees and more than 200 customers, according to company sources.
“We have good momentum now, but we want to keep feeding that,” Biles said. He offered no estimate on when the company might become cash-flow positive. “A lot depends on the next couple of years of sales acceleration.”
Samsung Catalyst Fund led the latest funding round, with additional backing from Icon Ventures and prior investors NEA and Lightspeed Venture Partners. Icon’s Michael Mullany, a former VP of marketing and products at VMware, joined Datrium’s board of directors.
Dell EMC extended its lead over Nutanix in hyper-converged systems sales in the second quarter, although Nutanix crept ahead of Dell-owned VMware into first when the market is measured by HCI software.
That was the verdict from IDC in its worldwide converged systems tracker report released last night.
IDC measures the hyper-converged infrastructure (HCI) market two ways: by the brand of the systems and by the vendor whose software provides the core hyper-converged capabilities. Dell-owned technologies led both HCI market categories in the first quarter with Nutanix second in both. Nutanix, which moved to a software-centric reporting model earlier this year and is getting out of the hardware business, jumped up in software revenue but lost ground to Dell EMC in systems.
Overall, IDC said the HCI market grew 78% year-over-year to $1.5 billion in the second quarter. Dell EMC’s $419 million in revenue gave it 28.8% share. That represented 95.2% year-over-year growth, outgrowing the market. Nutanix placed second in branded revenue with $275.3 million, up 48.5% year-over-year and basically flat from its first-quarter branded revenue of $273 million. Nutanix had 18.9% of the branded revenue, down from 22.7% a year ago and 22.2% in the first quarter of 2018.
On the software side, Nutanix revenue grew 88.9% year-over-year to $498 million and 34.2% of the HCI market. It slipped past VMware, which grew 97% year-over-year to $496 million and 34.1% share. IDC considers Nutanix and VMware in a statistical tie because their shares are within one percentage point. VMware’s share jumped from 30.9% in the second quarter of 2017 to 34.1% a year later. But it dropped from 37.2% share in the first quarter of 2018, while Nutanix’s share increased quarter over quarter to 34.2% to catch VMware. However, Dell did receive part of Nutanix’s revenue gains because the Dell EMC XC platform uses Nutanix software through an OEM deal.
Dell had $79 million in HCI software, putting it in a statistical tie with Cisco ($77 million) and Hewlett Packard Enterprise ($72 million). Dell had 5.4% share, Cisco 5.3% and HPE 4.9% – all within one percentage point. Because Cisco and HPE sell their software on their own servers, they had the same revenue and share in systems as in HCI software. HPE had the largest year-over-year growth of any systems vendor, increasing 119.4%. However, Cisco grew more since the first quarter, jumping from $60 million to $77 million and increasing its share from 4.9% to 5.3%. HPE lost share quarter over quarter, slipping from 5% to 4.9%, even as its revenue went from $61 million to $72 million.
Hyper-convergence was the only one of the three converged markets that increased year-over-year. The certified reference systems/integrated infrastructure market declined 13.9% year-over-year to $1.3 billion in revenue. Integrated platform sales slipped to $729 million for a 12.5% decline. Dell led the certified reference systems market with $640 million, with No. 2 Cisco/NetApp at $481 million. Oracle led in integrated platforms with $441 million and 60.4% share. The HCI market is also now the largest of those three converged markets for the first time.
NetApp launched its Data Fabric architecture to adapt its storage to manage applications built for the cloud. Container orchestration had largely been a missing aspect in Data Fabric, but the vendor has taken a step to try to plug the gap.
NetApp has acquired Seattle-based StackPointCloud for an undisclosed sum. StackPointCloud has developed a Kubernetes-based control plane, Stackpoint.io, to federate trusted clusters and sync persistent storage containers among public cloud providers.
The first fruit of the merger is the NetApp Kubernetes Service, which the vendor claims will allow customers to launch a Kubernetes cluster in three clicks and scale it to hundreds of users. NetApp said it will levy a surcharge of 20% of the overall compute cost for the cluster to cover deployment, maintenance and upgrades. That equates to about $200 on $1,000 of overall compute.
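The surcharge math is straightforward, as a quick sketch shows (the function name is invented for illustration):

```python
def kubernetes_service_fee(compute_cost, rate=0.20):
    """Surcharge as described: 20% of the cluster's overall compute cost,
    covering deployment, maintenance and upgrades."""
    return compute_cost * rate
```

So $1,000 of compute for a cluster carries a $200 fee, and the fee scales linearly with the compute bill.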
The NetApp Kubernetes Service engine will allow customers to deploy containers at scale from a single user interface with underlying NetApp storage, said Anthony Lye, a NetApp senior vice president of cloud data services.
The Cloud Native Computing Foundation took over management of Kubernetes development earlier this year from Google. Docker Inc. popularized container deployments with its Docker Swarm orchestration management. Other open-source container tools include Apache Mesos and Red Hat OpenShift.
NetApp customers will still be able to use their preferred deployment framework, but Lye said Kubernetes is “the clear winner” among container orchestration systems.
He said Stackpoint completes the work NetApp started with its open source dynamic container-provisioning project, codenamed Trident. NetApp Kubernetes Service is available immediately.
Lye said his internal development teams were using the Stackpoint engine to deploy NetApp storage infrastructure at global cloud data centers run by Amazon Web Services (AWS), Google Cloud Platform (GCP) and Microsoft Azure. In addition to the big three, StackPointCloud supports Digital Ocean and Packet clouds.
“My engineers were telling me this was the best thing they’d ever seen, plus the market was telling us that storage and containers need to go together and (enterprises) are using multiple clouds. Those three reasons led us to make the acquisition,” Lye said.
The DevOps trend has been fueled by container virtualization for writing cloud-native applications with specialized microservices. Linux-based containers also are gaining attention for the ability to “lift and shift” traditional legacy applications to hybrid cloud environments. Unlike a virtual machine, a container does not require a hosted copy of a full operating system.
Built on Kubernetes Storage Classes, NetApp Trident was developed to simplify persistent-volume provisioning for OnTap-based storage, SolidFire and E Series arrays. Lye said the NetApp Kubernetes Service allows developers to run canary environments to test new applications with mixed nodes of graphics processing units and regular CPUs.
StackPointCloud launched in 2014 with bootstrapped funding. The transaction brings CEO Matt Baldwin to NetApp, along with an undisclosed number of StackPointCloud employees.
Stackpoint integration will start with NetApp HCI hyper-converged infrastructure and FlexPod converged systems. The plan is to extend NetApp Kubernetes Service across all of NetApp’s storage, Lye said. “Our strategy is to continue to build tighter connections between our cloud protocols and containers and extend the control plane from the public clouds down to support NetApp HCI or NetApp’s private clouds.”