Seagate this week launched its highest-capacity 10,000 rpm, 2.5-inch, 12 Gbps SAS hard disk drive (HDD), which includes a dash of flash. The Enterprise Performance 10K HDD is designed for enterprise workloads such as databases, online transaction processing, virtual desktop infrastructure, and file and print servers.
The new Seagate SAS HDD comes in 2.4 TB, 1.8 TB, 1.2 TB and 600 GB capacity models, and infuses 16 GB of flash cache to speed reads. Barbara Craig, Seagate senior product marketing manager of enterprise HDDs, said the Enterprise Performance 10K HDD also adds firmware-based advanced write caching that can improve random writes by approximately 60% over Seagate’s prior generation.
“It’s kind of a poor man’s SSD,” Craig said.
Craig said the flash cache makes the new Seagate SAS HDD three times faster than previous 10K drives without flash. The advanced write caching technology uses enhanced algorithms and 8 MB of non-volatile cache and media cache, she said.
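To illustrate the general idea behind this kind of write caching (Seagate has not published its firmware internals, so everything below is our own illustrative assumption), the sketch acknowledges random writes from a small cache and later flushes them to media in ascending LBA order, turning a random write pattern into a more sequential one:

```python
# Illustrative sketch of advanced write caching: random writes are absorbed
# into a small non-volatile cache and flushed in LBA order. The class,
# names and tiny cache size are assumptions, not Seagate's firmware design.

CACHE_LIMIT = 4  # pretend the cache holds 4 writes (the real cache is 8 MB)

class WriteCache:
    def __init__(self):
        self.cache = {}     # lba -> data, pending in cache
        self.flushed = []   # (lba, data) in the order written to media

    def write(self, lba, data):
        self.cache[lba] = data           # acknowledge from cache (fast)
        if len(self.cache) >= CACHE_LIMIT:
            self.flush()

    def flush(self):
        # Write cached blocks to media in ascending LBA order,
        # so the heads sweep the platter once instead of seeking randomly
        for lba in sorted(self.cache):
            self.flushed.append((lba, self.cache[lba]))
        self.cache.clear()

wc = WriteCache()
for lba in (90, 10, 50, 30):   # random-order writes
    wc.write(lba, b"x")
print([lba for lba, _ in wc.flushed])  # [10, 30, 50, 90]
```

The host sees each write acknowledged as soon as it lands in the cache; the reordering on flush is where the random-write speedup comes from.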
The Seagate SAS HDD has a five-year warranty, a mean time between failures of 2 million hours, and supports the company’s ninth generation of magnetic recording technology and enterprise firmware.
Can HDDs hold off SSDs in enterprise?
The high-speed SAS HDD market is one where enterprises increasingly consider flash solid-state drives (SSDs), but Seagate claims its new HDD holds appeal for a range of enterprise workloads.
“They may be turned off by availability of flash right now. It’s hard to get, and the prices are high,” Craig said. “Maybe some small- to medium-sized businesses or even large data centers, the performance on the 10K drive is close enough, and the cost is that much more impressive for some customers.”
John Rydning, an IDC research vice president for HDDs, said he does not envision enterprise SSDs reaching price-per-GB parity with 10,000 rpm HDDs over the next five years. Rydning predicted sustained demand for 10,000 rpm HDDs for storage workloads where they provide “good enough” performance. He wrote via an email that 10,000 rpm HDDs continue to provide a “good balance of performance and value for several storage workloads.”
The new Seagate SAS HDD ships under several model numbers. The 2.4 TB HDD without encryption is model ST2400MM0129; the self-encrypting model that supports the Federal Information Processing Standard is ST2400MM0149.
The HDDs support Seagate’s FastFormat technology to enable customers to switch between applications with block sizes of 512 bytes and 4 KB.
The new Seagate SAS HDD is currently shipping to major OEMs such as Supermicro and Huawei for qualification, according to Craig. She said channel shipments will begin in the middle of August.
Nutanix’s success selling hyper-converged software depends largely on how well it both competes with and partners with the large server vendors.
Nutanix revenue beat expectations last quarter, after the company gave a disappointing forecast three months ago. Its revenue of $192 million increased 67% and smashed its guidance of $180 million to $190 million for the quarter. The hyper-convergence vendor still lost $112 million, up from $46.8 million a year ago, despite the increase in sales, but it retains $350 million in cash and investments. Nutanix also had a higher forecast than Wall Street expected, guiding for $215 million to $220 million.
On the earnings call, Nutanix CEO Dheeraj Pandey said the vendor picked up 790 new customers in the quarter including 50 new Global 2000 customers. He credited OEM partners Dell EMC and Lenovo for helping it land bigger deals.
Nutanix executives spent much of the Thursday night earnings call discussing the company's relationships with the large server vendors it both competes and partners with.
Here’s a scorecard of Nutanix server relationships:
Dell EMC. The largest storage and server vendor is determined to be No. 1 in hyper-converged, and is looking to knock Nutanix from that perch with its VMware vSAN hyper-converged software and VxRail HCI appliances. Yet, Dell EMC also re-brands Nutanix software on its PowerEdge servers in a deal that Dell struck before it acquired EMC. Dell EMC XC Series appliances account for approximately 10% to 15% of Nutanix revenue each quarter. Nutanix did not give the exact figure for last quarter, but CFO Dustin Williams said it was below 15%.
“We compete and cooperate with Dell on a deal-by-deal basis,” Pandey said.
Nutanix revenue through Dell declined slightly last quarter from the previous quarter.
Lenovo. Lenovo doesn’t have a home-grown HCI product, and makes several vendors’ software available on its servers. Nutanix is its preferred partner, though, through an OEM deal similar to the one Nutanix has with Dell EMC. Nutanix executives said Lenovo sales rose last quarter, making up for the Dell declines.
“Lenovo is actually a great sign up for us,” Pandey said.
“Our Lenovo bookings increased sharply,” Williams said.
IBM. Nutanix and IBM last week said IBM would make Nutanix software available on RISC-based Power servers. Nutanix doesn’t have any revenue through IBM yet, but Pandey said the deal had great potential.
“I think IBM could be a dark horse,” Pandey said. “What’s interesting is, for the first time, a single control plane, a single data plane, a single hypervisor runtime can now span Intel x86 and Power microprocessor hardware.”
Cisco. Like Dell EMC, Cisco has its own HCI platform. Unlike Dell EMC, Cisco has no official partnership with Nutanix. But Nutanix and Cisco channel partners bundle Nutanix software on Cisco UCS servers. Pandey said he hopes to turn Cisco into a willing partner, even though Cisco has its own HyperFlex product.
“It’s perilous to predict what will happen in these situations,” Pandey said. “But one thing I’ve learned about the art of negotiation is that what was non-negotiable yesterday could probably become negotiable tomorrow.
“We’re hoping to have this process play out where Cisco understands what HyperFlex is, and Cisco also understands the value that we can bring to their rackmount servers. So I think there is something between us.”
Hewlett Packard Enterprise. Nutanix software is certified to run on HPE ProLiant servers, and sold in channel bundles similar to Nutanix on UCS. While Cisco has been mostly silent on Nutanix's encroachment, HPE has made it clear it does not appreciate Nutanix piggybacking on ProLiant. HPE marketing VP Paul Miller made that clear in a blog post titled, “Don’t be misled … HPE and Nutanix are not partners.” The blog urged customers to buy HPE's SimpliVity software.
Nutanix executives said little about the HPE relationship on the call, except that it is early and they hope to build a relationship with HPE through successful channel sales.
Pandey made it clear Nutanix wants to provide its software on as many platforms as possible.
“We continue to build with ubiquity by offering customers choice of hardware, choice of hypervisor, and choice of public cloud providers for secondary storage, all managed by Prism,” he said. “Building an operating system is a journey, and no more than one or two are successful each decade. It requires an immense focus in applications, interoperability, performance, security, automation and reliability, and to make it all ubiquitous, that is, location agnostic, is the biggest engineering challenge.”
A DataCore IT survey on the state of software-defined storage, hyper-converged systems and cloud storage showed a gradual uptake of some of the most heavily promoted new technologies.
For instance, flash use is growing, yet it will represent only a small percentage of overall storage capacity in 2017, according to the DataCore IT survey of 426 customers and prospective customers conducted from late 2016 through April 2017 via Survey Monkey.
The majority (76%) of the surveyed IT professionals indicated flash would represent less than 20% of their storage capacity in 2017. Among that group, 14% weren’t using flash at all, and 32% projected less than 10% of their storage capacity would be flash-based.
However, the IT survey respondents listed all-flash arrays as their top preference to overcome performance problems, followed by software acceleration on the host machine and switching to in-memory databases.
Flash also factored into the responses to the question: “What technology disappointments or false starts have you encountered in your storage infrastructure?” Flash failed to accelerate applications for 16% of the IT survey respondents.
The most cited disappointments were “cloud storage failed to reduce costs,” selected by 31%, and “managing object storage is difficult,” mentioned by 29%.
Confusion over hyper-converged
The DataCore survey indicated there’s confusion over what the term “hyper-converged” means. Close to half (41%) of survey respondents think hyper-converged means software is tightly integrated with the hypervisor but hardware agnostic. Another 27% view hyper-converged as an “integrated appliance” with “hardware and software locked together.”
The majority of the IT survey respondents (67%) have not deployed hyper-converged infrastructure (HCI), although 34% said they are strongly considering it. The primary reason for ruling out hyper-converged was lack of flexibility, followed by expense and vendor lock-in.
Among those who have deployed HCI, 6% have standardized on it, 7% have a few major deployments, and 20% have a few nodes. Top reasons for deploying or evaluating hyper-converged systems were simplifying management (48%), ease of scale-out (39%), and reducing hardware costs (35%). Leading use cases were databases, data center consolidation, enterprise applications such as customer relationship management (CRM) and enterprise resource planning (ERP), and virtual desktop infrastructure (VDI).
Surveyed IT pros noted the following business drivers for implementing software-defined storage: simplify management of different models of storage (55%), future-proof infrastructure (53%), avoid hardware lock-in from storage manufacturers (52%), and extend the life of existing storage assets (47%).
When DataCore conducted the IT survey in 2015, a lower percentage (45%) indicated they were trying to simplify management of different storage classes by automating frequent or complex storage operations.
Top use cases that IT pros noted for public cloud storage were long-term archive (35%), back up to cloud and restore on premises (33%), and disaster recovery (33%). A small percentage (11%) said they use the public cloud for primary storage, but a substantial percentage (40%) are not currently evaluating or using the cloud for storage.
For IT pros unwilling to move applications to the public cloud, the primary reasons are security (57%), sensitive data (56%) and regulatory requirements (41%). The applications they’re most willing to move to public or hybrid cloud infrastructure were select enterprise applications such as Salesforce (33%), data analytics (22%), databases (21%), and virtual desktop infrastructure (VDI), according to the survey.
DataCore said the 426 IT professionals who responded to the survey are using or evaluating software-defined storage, hyper-converged systems and cloud storage. The majority of the respondents (84%) were from North America and Europe.
NetApp quietly slipped an acquisition of storage memory software startup Plexistor into an earnings call otherwise noteworthy for strong results last quarter and a disappointing forecast for this quarter.
NetApp CEO George Kurian disclosed the Plexistor acquisition during the Wednesday night earnings call. NetApp did not include the acquisition in its press release or filing with the SEC, and provided no financial details.
Plexistor, which developed software that uses nonvolatile memory as primary storage, fits into NetApp’s strategy of trying to dominate in flash and other emerging storage technologies.
Plexistor came out of stealth in late 2015 with SDM – a software-defined memory product designed to deliver high-capacity nonvolatile storage at near-memory speed. The vendor chased customers running big data analytics and in-memory database processing.
SDM talks directly to a physical memory device, presenting DRAM and persistent storage in one namespace. It uses dual-inline memory module (NVDIMM) memory cards, NVMe flash and a spinning disk tier.
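A highly simplified sketch of that single-namespace, tiered approach might look like the following; the tier names, placement policy and demotion logic here are illustrative assumptions, not Plexistor's actual design:

```python
# Hypothetical sketch of a unified namespace spanning memory-speed and
# slower tiers, loosely in the spirit of software-defined memory.
# Tier names and the placement/demotion policy are illustrative only.

TIERS = ["nvdimm", "nvme_flash", "disk"]  # fastest to slowest

class TieredNamespace:
    def __init__(self):
        self.store = {}   # key -> (tier, value): one namespace, many tiers
        self.hits = {}    # key -> read count, used to judge hot vs. cold

    def put(self, key, value):
        # New writes land in the fastest tier first
        self.store[key] = ("nvdimm", value)
        self.hits[key] = 0

    def get(self, key):
        # Reads are served from wherever the data lives; same namespace
        tier, value = self.store[key]
        self.hits[key] += 1
        return value

    def demote_cold(self, threshold=1):
        # Periodically push rarely-read keys down one tier
        for key, (tier, value) in self.store.items():
            if self.hits[key] < threshold:
                idx = min(TIERS.index(tier) + 1, len(TIERS) - 1)
                self.store[key] = (TIERS[idx], value)

ns = TieredNamespace()
ns.put("hot", b"frequently read")
ns.put("cold", b"rarely read")
ns.get("hot"); ns.get("hot")
ns.demote_cold()
print(ns.store["hot"][0])   # nvdimm
print(ns.store["cold"][0])  # nvme_flash
```

The point of the sketch is that the caller never addresses a tier directly; placement and demotion happen behind a single namespace, which is what lets such software present DRAM and persistent media as one pool.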
In late 2016, Plexistor bundled SDM on Supermicro servers and Micron NVDIMM cards in a product called Persistent Memory over Fabric Brick (PMoF Brick). PMoF Brick was aimed at big data analytics and high-performance NoSQL databases.
It’s unclear whether NetApp will sell Plexistor as a separate product or embed the technology into its flash storage. Kurian referred to Plexistor as “a company with technology and expertise in ultra-low latency persistent memory. This differentiated intellectual property will help us further accelerate our leadership position and capture new application types and emerging workloads.”
SDM runs on servers, making it a candidate for incorporation in NetApp’s coming hyper-converged product that will use SolidFire’s all-flash technology. NetApp has yet to formally launch the hyper-converged appliance.
The Plexistor acquisition is an example of how NetApp is trying to use flash – as well as the cloud – to bounce back after a tough two-year stretch. Kurian said the vendor has turned the corner, offering its results last quarter as proof.
NetApp’s revenue of $1.48 billion last quarter increased 7.3% over the previous year and beat the consensus Wall Street analyst expectation by roughly $40 million.
Kurian called the last fiscal year “a pivotal year for NetApp. We started the year with bold commitments, and we delivered against all of them. We did what many said could not be done: return the company to growth while simultaneously expanding operating margins. With each successful step in our transformation, my confidence in our ability to create new opportunities and execute against those opportunities grows.”
NetApp’s problems may not be completely behind it, though. Its forecast for this quarter fell short of expectations. NetApp guided for $1.24 billion to $1.39 billion this quarter, which amounts to around two percent growth at the midpoint and falls below analysts’ expectations.
When Kurian became CEO two years ago, NetApp struggled with its flash strategy and with a disruptive upgrade process that slowed customer upgrades from its Data OnTap 7-Mode operating system to Clustered Data OnTap (CDOT). The vendor was stuck in a cycle of flat or declining revenue, which it didn’t snap until the final quarter of 2016.
NetApp now stands second behind Dell EMC in all-flash revenue and Kurian said most of the capacity on NetApp FAS arrays has moved to CDOT. He said 95% of FAS systems that shipped last quarter had CDOT installed. “The transition from 7-Mode to Clustered OnTap is now behind us,” he said.
The transition from disk to flash continues. Kurian called it the “early innings” in flash, and said NetApp’s all-flash revenue grew almost 140% last quarter. He said NetApp’s All-Flash FAS (AFF), EF Series and SolidFire are on pace for $1.4 billion in revenue over the next year.
“We are winning with flash and expanding our intellectual property in this market, positioning us for success in the multiyear transition from disk to flash,” he said.
As for the low guidance, Kurian said NetApp would rather err on the side of caution. “We really are giving realistic and, in some cases, conservative estimates,” he said. “We want to make sure we meet or beat every commitment we made, as we have the last four quarters.”
Pure Storage executives claim the flash pioneer is on pace for $1 billion in revenue for 2017, plus its first profitable quarter by the end of the year.
Neither goal is assured, but both seem possible following Pure’s first-quarter earnings report Wednesday. The vendor reported revenue of $183 million, up 31% from last year. It cut losses only slightly, to $62 million from $64 million a year ago, but executives claim spending decreases in the second half of the year, along with revenue growth, should bring it past break-even for the first time.
Pure forecast revenue of $214 million to $222 million this quarter and from $975 million to $1.025 billion for the year. That annual prediction includes a second-half surge in revenue, and Pure will have to hit the midpoint of its annual guidance to achieve the $1 billion goal.
Of course, there is still a lot of 2017 left. CEO Scott Dietzen said he is counting on Pure Storage flash products taking advantage of hot industry trends. He expects to cash in on the emergence of nonvolatile memory express (NVMe) solid-state drives (SSDs) with new Pure Storage FlashArray//X systems, continued growth of Pure Storage FlashBlade unstructured data storage, and the need for storage for the emerging private cloud, artificial intelligence and machine learning markets.
Dietzen said FlashBlade, which became generally available at the start of 2017, is selling at twice the rate FlashArray did when it first launched nearly six years ago. “FlashBlade is transforming the unstructured data market in the same way FlashArray revolutionized structured data,” he said.
Pure Storage FlashArray//X is another key to Pure’s achieving its goals. While Pure currently trails Dell EMC, NetApp and Hewlett Packard Enterprise in all-flash revenue, the vendor is looking to pick up organizations that want the improved speed of NVMe over current SSDs. Dell EMC, NetApp and HPE do not yet have all-NVMe systems, although all major flash storage vendors will eventually add NVMe.
“We see a new set of use cases that NVMe opens up,” said Matt Kixmoeller, Pure’s vice president of products. “Certainly, faster database type workloads … but we’re also really going after consolidation of cloud providers. A lot of the cloud vendors out there have really consolidated on server DAS over the past few years, and now we have an opportunity to go in there with NVMe and take the flash out of each of those servers and consolidate it at the top of the rack to drive more efficiencies for them.”
Dietzen said Pure is looking to become the storage of choice for “roughly 80% of enterprise workloads not currently a candidate for the public cloud.” He said enterprises are turning to private cloud in great numbers.
“While we occasionally compete with the big three public clouds … our customers use Pure’s data platform in conjunction with the public cloud, particularly for datasets that are too large to move across the internet,” he said.
NEW ORLEANS – Veeam Software opened its user conference with the launch of the latest version of its data protection software, moving deeper into cloud and physical device support.
Veeam Availability Suite v10, rolled out today at VeeamON, adds continuous data protection, support for network-attached storage and native object storage. The object storage support includes data on Amazon Web Services (AWS) and Microsoft Azure. Veeam Availability Suite v10 frees up primary backup storage with policy-driven automated data management for long-term retention and compliance. The major upgrade to Veeam’s flagship product is in technical preview.
“V 10 takes [Veeam] to a multi-cloud world full speed,” said co-founder Ratmir Timashev. “V 10 is where everything comes together.”
Veeam Continuous Data Protection (CDP) replicates data to private or managed public clouds. The default recovery point objective (RPO) setting in Veeam Availability Suite v10 is 15 seconds. CDP is commonly found in data protection products, especially those emphasizing data recovery.
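Conceptually, CDP with a fixed RPO can be thought of as a journal that ships accumulated writes to a replica before the RPO window elapses. The sketch below is a generic illustration of that idea under our own naming assumptions, not Veeam's implementation:

```python
# Conceptual sketch of continuous data protection: writes are journaled
# and shipped to a replica often enough that the recovery point never
# exceeds the RPO. Names are illustrative; this is not Veeam's code.
import time

RPO_SECONDS = 15  # the default recovery point objective in v10

class CDPJournal:
    def __init__(self):
        self.pending = []      # writes not yet replicated
        self.replica = []      # writes applied at the target
        self.last_ship = time.monotonic()

    def write(self, block):
        self.pending.append(block)
        # Ship the journal if we're at the edge of the RPO window
        if time.monotonic() - self.last_ship >= RPO_SECONDS:
            self.ship()

    def ship(self):
        # In a real product this would stream over the network;
        # here we just append to an in-memory replica
        self.replica.extend(self.pending)
        self.pending.clear()
        self.last_ship = time.monotonic()

j = CDPJournal()
j.write(b"block-1")
j.write(b"block-2")
j.ship()  # force replication (in practice a timer would do this)
print(len(j.replica))  # 2
```

At most RPO_SECONDS of writes can sit in the pending journal, which is exactly what a 15-second RPO guarantee means: a failover loses no more than the last 15 seconds of changes.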
Other enhancements and features in Veeam Availability Suite include:
- Veeam Availability for AWS. Delivered through a partnership between Veeam and cloud backup and disaster recovery provider N2W, the feature offers cloud-native, agentless backup and availability to protect and recover AWS applications and data. Availability for AWS is geared toward helping enterprises migrate to and manage a multi-cloud or hybrid cloud environment.
- Veeam Agent for Microsoft Windows. Veeam announced this feature in August, but it became generally available Wednesday. It is designed to provide availability for Windows-based physical servers, workstations and endpoints, as well as Windows workloads running in public clouds.
Veeam previously released its Agent for Linux, which provides availability for public cloud and physical workloads hosted by Linux-based servers and workstations running on premises or in the public cloud.
“You need an availability strategy that’s going to extend beyond your virtualized workloads,” John Metzger, vice president of product marketing, said during the general session at VeeamON Wednesday. “Protecting workloads is important, but ensuring availability of those workloads is critical.”
Veeam Availability Suite includes Veeam Backup & Replication and Veeam ONE.
General availability of Veeam Availability Suite v10 is projected for late 2017. Pricing is not available at this time.
Veeam earlier this week announced changes to its executive team.
Check SearchDataBackup this week for more news out of VeeamON.
NEW ORLEANS — Veeam Software has changed CEOs for the second time in less than a year.
On the eve of the VeeamON user conference this week, the data protection software vendor elevated two executives into co-CEO positions as it strives to become a billion-dollar company.
Peter McKay, previously COO and president, and Andrei Baronov, co-founder and CTO, will serve as co-CEOs. McKay will retain his title as president and Baronov will continue as CTO. Former Veeam CEO William Largent moves into a new role as chairman of the company’s Finance & Compensation Committees.
The moves come 11 months after Largent replaced Veeam’s other founder, Ratmir Timashev, as CEO, and former VMware executive McKay joined the company as COO/president. Timashev remains with Veeam as a director of the private company.
McKay will lead Veeam’s “go-to-market,” finance and human resources functions, and work with Baronov to drive future growth, according to the company. The go-to-market strategy will specifically focus on the company’s continued expansion into the enterprise and cloud segments, as well as accelerating growth into the Americas and Asia/Pacific markets.
Baronov will oversee Veeam’s research and development, market strategy and product management functions. Largent will be responsible for the oversight of all corporate governance matters, tax structure, investment management and internal audits.
Founded in 2006, Veeam has a goal of becoming a $1 billion revenue company by 2018 and a $1.5 billion company by 2020, McKay said today at VeeamON.
Veeam recently reported its 2016 revenue bookings at $607 million.
“As we continue to grow and scale our business, we need to do it the right way,” McKay said.
Veeam reported about 2,500 global employees at the end of 2016 and is looking to add 800 over the next year, McKay said. The company plans to invest $126 million in marketing in 2017, which is about 20% of its revenue.
Veeam is looking to expand in four specific areas: geographic, platform (physical, virtual and cloud), segment (increased investment in SMB, commercial and enterprise markets) and partners.
Before joining Veeam, McKay was senior vice president and general manager of the Americas at VMware. He was also CEO of startups Desktone, Watchfire and eCredit.
“Peter took the company to the next level,” Timashev said of McKay’s first year at Veeam.
Veeam is growing faster and is more innovative now, Timashev said.
Veeam claims a total of 242,000 customers, and says it is adding 4,000 customers each month.
“There is an unbelievable opportunity in front of us,” McKay said. “We have to be bold.”
Nutanix is making its hyper-converged infrastructure (HCI) software available on another server platform, this time with the server vendor’s full cooperation.
Nutanix and IBM today disclosed an OEM deal for IBM to sell Nutanix HCI software on Power Systems servers. The deal gives IBM an HCI system and brings Nutanix beyond the x86 platform where hyper-convergence is dominant.
Greg Smith, Nutanix senior director of technical marketing, said IBM will sell Nutanix HCI software on IBM-branded turnkey appliances beginning sometime in 2017. This is different than Nutanix’s recent initiatives to make its HCI software available through channel partners with Cisco and Hewlett Packard Enterprise servers. Neither Cisco nor HPE were willing partners, as both sell competitive products.
“This allows our software to run on a different class of server,” Smith said. “We have done well on x86 platforms, and this allows us to venture into a different market segment. Power systems are used for more advanced big data, machine learning and AI cognitive workloads. These are demanding applications that demand high performance.”
Unlike x86 systems running Nutanix software, the IBM HCI systems will only support the Nutanix Acropolis Hypervisor (AHV). Nutanix started out supporting only VMware hypervisors, and most of its customers still use VMware virtualization. But Smith said AHV will be incorporated on all of the IBM Nutanix systems. Smith said AHV is designed for cloud-native applications and running microservices and containers.
“The objective is for [IBM] customers to run the Nutanix AHV hypervisor,” he said.
IBM sold off its x86 server platform to Lenovo and sees no need to enter the x86 HCI market, according to IBM storage general manager Ed Walsh. In an interview with TechTarget in February, Walsh said IBM’s converged infrastructure platform provides the same benefits to customers as x86-based HCI.
New Dell EMC cloud storage has appeared on the horizon, providing a silver lining for IT-pressed healthcare shops.
Virtustream Healthcare Cloud, which launched this week at Dell EMC World 2017, is a secure compliance archive for electronic medical records. The Virtustream cloud is built on Pivotal Cloud Foundry software running atop Dell EMC storage.
The vendor said the Virtustream Healthcare Cloud hosts mission-critical data sets in a HIPAA- and HITECH-compliant environment, with managed services and guaranteed five nines of availability. Customers use software to tier data from on-premises Dell EMC storage to the Virtustream cloud.
“We’re looking to do the same things with the healthcare cloud that we’ve done in the SAP database world: consumption billing, performance with availability guarantees, built-in disaster recovery with RPOs and RTOs, and a full managed services capability on top,” said Matt Theurer, a Virtustream founder and its senior vice president of product management.
Startup Virtustream launched in 2009 to provide public cloud hosting of legacy applications that were not written for the cloud, such as SAP Hana. It became part of EMC via a 2015 acquisition. EMC subsequently ported its Rubicon project into the Virtustream cloud to create the Virtustream Storage Cloud object platform.
Rubicon turns the Virtustream cloud into a target for underlying Dell EMC storage through Isilon CloudPools, Data Domain CloudBoost and CloudArray for Dell EMC Unity and VMAX all-flash arrays.
Dell EMC also rolled out the Virtustream Enterprise Cloud Connector for the VMware vRealize Automation suite. Theurer said customers can use Virtustream as an endpoint for cloud bursting or tiering to support evolving availability, disaster recovery, performance and security requirements.
BOSTON — Storage is rarely a focal point at OpenStack Summit keynotes, so it was interesting this week to see a Cinder block storage demo — even if it failed — and Edward Snowden discussing data in the cloud.
The OpenStack Cinder demo hit a technical glitch, but the live video feed from Moscow with the former National Security Agency contractor went off without a hitch.
Snowden left the U.S. after his 2013 leak of more than a million documents revealed extensive domestic surveillance operations. He told OpenStack Summit attendees they could help the people who make the decisions on how to build the infrastructure-as-a-service layer — which he said is “increasingly becoming the bones of the Internet.
“You could use [Amazon’s] EC2. You could use Google’s Compute Engine or whatever. These are fine, right. They work. But the problem here is that they’re fundamentally disempowering,” Snowden said. “You give them money, and in exchange you’re supposed to be provided with a service. And that exists. But you’re actually providing them [with] more than money. You’re also providing them with data, and you’re giving up control. You’re giving up influence. You can’t reshape their infrastructure.
“They’re not going to change things and tailor it for your needs,” he continued. “And you end up reaching a certain point where, OK, these are portable to a certain extent. You can containerize things and then shift them around. But you’re sinking costs into an infrastructure that is not yours fundamentally.”
He cautioned that, when running on the stacks of Google or Amazon, “How do you know when it starts spying on you?” Snowden asked. “How do you know when your image has been passed to some adversarial group, whether it’s just taken by an employee and sold to a competitor, whether it’s taking a copy for the FBI, whether legally or illegally. You really don’t have any awareness of this, because it’s happening at a layer that’s hidden from you.”
Snowden said OpenStack could make users “lose that fundamental, inherent silent vulnerability of investing into things” they don’t influence, own, control or shape. He said OpenStack requires “a little bit more of a technical understanding” to build layer by layer and “continues to comply with this very free and open set of values that the open source community, in general, drives all over the place.
“We can start to envision a world where cloud infrastructures are not private in the sense of private corporations, but private in the sense of a person,” Snowden said, where a small business, a large business or a community of technologists could own, control and shape OpenStack and “lay the foundation upon which everybody builds.
“And I think that’s probably one of the most powerful ideas that shapes the history of the internet and, hopefully, will allow us to direct the future of the internet in a more free rather than a more closed way,” Snowden said.
Cinder demo problem
The Cinder block storage service factored into an OpenStack Summit demo gaffe in the context of explaining open “composable” and cloud-native infrastructure. The snafu came during an attempt to show how to run Cinder as a stand-alone service using Docker Compose to spin up containers.
John Griffith, a principal software engineer at NetApp, later explained the problem he confronted on stage: “There’s an interesting race condition that in all of our rehearsals we never hit, where the scheduler container would come up before the database container was actually ready to receive requests,” he said. “And so it would crash the scheduler container.”
Griffith said he had never encountered the problem before, despite running “this exact demo probably at least a hundred times” before the keynote.
“Unfortunately, when you’re doing a keynote live demo in front of a few thousand people, you don’t have the liberty or luxury to just [say], ‘Hey, let me try this again,’ ” Griffith said.
Kendall Nelson, an upstream developer advocate with the OpenStack Foundation, said the demo ran perfectly twice on the morning of the OpenStack Summit keynote and at least a half dozen times the day before.
Nelson said the takeaway would have been that users could deploy Kubernetes and Docker with OpenStack, and use OpenStack services such as Cinder stand-alone, without additional services such as Nova compute.
“Really, one of the most important things to take away from that, too, is the fact that Cinder actually, by itself, can be extremely easy for somebody to deploy and use,” Griffith said. “Somebody could actually download that Compose file and run that Compose file on their own and have an up-and-running Cinder deployment.”
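The race condition Griffith describes, a scheduler container starting before the database can accept requests, is commonly avoided in Docker Compose with a health check plus a conditional dependency. The fragment below is a generic illustration of that pattern, not the actual demo's Compose file; the image names and credential are placeholders, and `depends_on` conditions require the 2.x file format or a Compose implementation that follows the newer Compose Specification.

```yaml
services:
  cinder-db:
    image: mariadb:10
    environment:
      MYSQL_ROOT_PASSWORD: example     # placeholder credential
    healthcheck:
      # Mark the database healthy only once it accepts connections
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
  cinder-scheduler:
    image: cinder-scheduler:demo       # placeholder image name
    depends_on:
      cinder-db:
        condition: service_healthy     # wait for the healthcheck to pass
```

With `condition: service_healthy`, Compose holds the scheduler container back until the database's health check passes, rather than starting both as soon as their images are up.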
Griffith said developers are increasingly realizing a need for persistent block storage with containers.
“There are, of course, people that say the world should be ephemeral, and there’s no persistence. The reality is that’s not the world we live in,” Griffith said. “Databases are pretty useless if they don’t have any data in them. OpenStack has been working on storage for a long time. The container space hasn’t. So this is actually an opportunity. ‘Hey, here is a storage service. You can plug this in, and now all you have to do is focus on your APIs.'”