From Silos to Services: Cloud Computing for the Enterprise


March 10, 2018  11:47 AM

Understanding the Variety of Kubernetes Roles and Personas

Brian Gracely
Applications, containers, DevOps, Kubernetes, Services

The Road to More Usable Kubernetes – Joe Beda

Depending on who you ask, you’re very likely to get many different answers to the question, “Who is the user or operator of Kubernetes?”. In some cases, it’s the Operations team running the Kubernetes platform, managing scale, availability, security and deployments in multiple cloud environments. In some cases, it’s the Development team interacting with manifest files (e.g. Helm or OpenShift Templates) for their applications, or integrating middleware services (e.g. API Gateway, MBaaS, DBaaS). Still other cases have blended DevOps teams that are redefining roles, responsibilities and tool usage.

Since Kubernetes orchestrates containers, and containers are technology that is applicable to both developers and operators, it can lead to some confusion about who should be focused on mastering these technologies.

This past week, we discussed this on PodCTL. The core of the discussion was based on a presentation by Joe Beda, one of the originators of Kubernetes at Google, that he gave at KubeCon 2017 in Austin. While Joe covers a broad range of topics, the main focus was a matrix of roles and responsibilities that can exist in a Kubernetes environment – ClusterOps, ClusterDev, AppOps and AppDev. In some cases, Joe highlighted the specific tools or processes that are available (and frequently used) by that function. In other cases, he highlighted where this model intersects and overlaps with the approaches outlined in the Google SRE book.

Some of the key takeaways included:

  • Even though Kubernetes is often associated with cloud-native apps (or microservices) and DevOps agility, there can be very distinct differences in what the Ops-centric quadrants focus on vs. the App-centric quadrants.
  • Not every quadrant is as mature as the others. For example, the Kubernetes community has done a very good job of providing tools to manage cluster operations. In contrast, we still don't have federation-level technology to allow developers to build applications that treat multiple clusters as a single pool of resources.
  • Not every organization will assign these roles to specific people or groups, and some may be combined or overlap.
  • There is still a lot of room for innovation and new technologies to be created to improve each of these areas. Some innovation will happen within Kubernetes SIG groups, while others will be created by vendors as value-added capabilities (software or SaaS services).

It will be interesting to watch the evolution of roles as technologies like Kubernetes and containers begin to blur where applications intersect with infrastructure. Will we see it drive faster adoption of DevOps culture and SRE roles, or will a whole new set of roles emerge to better align with the needs of rapid software development and deployment?

February 28, 2018  10:42 PM

The Kubernetes Serverless Landscape

Brian Gracely
containers, events, FaaS, Fn, Functions, Kubernetes, Lambda

In the traditional world of IT tech, there are currently two trends that are like rocket ships – Kubernetes and Serverless. There's also lots of buzz around AI, ML, Autonomous Vehicles, Blockchain and Bitcoin, but I don't count those among the more traditional IT building blocks.

Kubernetes and Serverless (the AWS Lambda variety) both got launched into the market within a few months of each other, towards the end of 2014 and in early 2015. They were both going to change how newer applications would get built and deployed, and they both promised to reduce the complexities of dealing with the underlying infrastructure. Kubernetes is based on containers, and Serverless (at least in the AWS Lambda sense) is based on functions (and some undisclosed AWS technologies).

I started following the serverless trend back in the Spring of 2016, attending one of the early ServerlessConf events. I had the opportunity to speak to some of the early innovators and people that were using the technology as part of their business (here, here, here). Later I spoke with companies that were building serverless platforms (here, here) that could run on multiple cloud platforms, not just AWS. At this point, the Kubernetes world and Serverless worlds were evolving in parallel.

And then in early 2017, they began to converge. I had an opportunity to speak with the creators of the Fission and Kubeless projects. These were open source serverless projects that were built to run on top of Kubernetes. The application functions would run directly in containers and be scaled up or down using Kubernetes. The two rocket ships were beginning to overlap in functionality. Later, additional projects like Fn, Nuclio, OpenFaaS, and Riff would also emerge as open source implementations of serverless on Kubernetes. And OpenWhisk would soon add deeper integration with Kubernetes. As all of this was happening in 2017, I was wondering if a consensus would eventually be reached so that all these projects wouldn't end up fragmenting the same market space. I wondered if the Kubernetes community would provide some guidance around standard ways to implement certain common aspects of serverless or functions-as-a-service (FaaS) on Kubernetes.
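To make the "functions running in containers" idea more concrete, here is a minimal sketch of the kind of handler these frameworks wrap in a container image and scale with Kubernetes. The exact signature varies by project (Kubeless and Fission each define their own), so treat the event/context shape below as an illustrative assumption rather than any one project's API.

```python
# A minimal function-as-a-service handler sketch. Projects like Kubeless or
# Fission package code of roughly this shape into a container, expose it
# behind an HTTP route, and let Kubernetes scale the replicas up and down.
# The event/context structure here is illustrative, not any project's exact API.

def handler(event, context):
    """Echo the incoming payload with a greeting."""
    name = (event or {}).get("data", {}).get("name", "world")
    return {"message": f"Hello, {name}!"}


if __name__ == "__main__":
    # Local smoke test before packaging the function into a container image.
    print(handler({"data": {"name": "Kubernetes"}}, context=None))
```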

This past week, the Serverless Working Group in the CNCF released a white paper and some guidance about event sources. While they didn’t come out and declare a preferred project, as they have with other areas of microservices, they did begin to provide some consistency for projects going forward. They also established a working group that represents a broad set of serverless backgrounds, not just those focused on Kubernetes.

We discussed all of these serverless Kubernetes projects on PodCTL this week. We highlighted where the various projects are making good progress, as well as discussing some areas where the projects still have a long way to evolve before they will be widely adopted.

btw – there's an interesting debate happening on Twitter these days between the serverless/Kubernetes crowd and the serverless/Lambda crowd. If you want to keep up, follow where @kelseyhightower got started a couple of days ago (and follow the mentions and back and forth).

February 19, 2018  11:19 PM

Understanding the Challenges of Kubernetes

Brian Gracely
certification, Compatibility, containers, DevOps, Kubernetes, Microservices, platform

Last week I spoke about some of the critical factors that will be important for IT leaders to consider if they are bringing containers and Kubernetes into their environments. These decision criteria are elements that will have longer-term impacts on how well their application developers and IT operations teams are able to deploy and scale their containerized applications.

But the Kubernetes technology and community are moving very quickly, with lots of new features and entrants coming into the market. And anytime a market is moving quickly, there can be some short-term myths and misunderstandings that can confuse both the education process and the decision-making process. For the last couple of weeks, we've been looking at some of the more common misperceptions that exist around the Kubernetes community, and some details about how to rationalize them with the existing technology available. We did this in a two-part series (Part I, Part II).

We covered the following topics:


Myth/Misunderstanding 1 – Kubernetes is a platform.
Myth/Misunderstanding 2 – Containers are only for microservices
Myth/Misunderstanding 3 – Microservices are always “micro” (small in size)
Myth/Misunderstanding 4 – Kubernetes is only for stateless apps


Myth/Misunderstanding 5 – Architecture – Kubernetes Multi-Tenancy
Myth/Misunderstanding 6 – Architecture – Kubernetes is only for Operators

Compatibility and Certification

Myth/Misunderstanding 7 – What does “GKE Compatible” mean?
Myth/Misunderstanding 8 – Enterprises should run Kubernetes as trunk version

Open Source Communities

Myth/Misunderstanding 9 – Are OSS stats important? How to interpret them?

In going through this list, we often found that the misperceptions were not created by vendor FUD, but mostly came from a lack of experience with a broad range of potential applications or deployment models. In other cases, the misperception is untrue now, but had previously been true in the early days of Kubernetes (e.g. around v1.0, several years ago).

We know that it can be difficult to always keep up with all the changes happening in the Kubernetes community, so we hope that those two shows help to eliminate some of the confusion. Going forward, it may be useful to look at resources such as “Last Week in Kubernetes Development” or the KubeWeekly newsletter from the CNCF.

February 5, 2018  11:38 PM

5 Critical Criteria for choosing Kubernetes platforms

Brian Gracely
Code, communities, Kubernetes, Multi-cloud, UX

If 2017 began with a market full of choices for managing containers, it ended with a fairly unanimous vote from the community – Kubernetes has won the container orchestration battles. So now that nearly every vendor is claiming to support Kubernetes in 2018, what are the critical criteria that companies will use to determine which Kubernetes offering to choose for the applications that will drive their digital transformations? Let’s take a look at five that will be top-of-mind for decision-makers (in no particular order).

User-Experience / Developer-Experience

Ultimately, companies deploy modern infrastructure (e.g. containers) and application platforms to improve their ability to deploy and support modern applications; applications that are focused on improving the business (e.g. better customer experiences, better delivery of services, increased profitability, etc.). In order for this to happen, developers must be willing to use the platform. The developer experience should give developers the freedom to use the tools and interfaces they determine will make them the most productive. In some cases, these will be tools that are container-native. In other cases, developers just want to write code and push it into code repositories and automated testing systems. The best platforms will allow developers to operate in either mode, seamlessly. And subsequently, the operations teams shouldn't have to think about these different modes as independent "silos". They are just two different ways to access the same platform.

Engineering Commitment

If a company is considering an investment in a commercial platform, they are fundamentally looking for assistance in augmenting their internal development or operations teams. This means that they want to understand how a vendor's (or cloud provider's or system integrator's) engineering staff and experience will make them successful. Do they commit code to the core projects? Do they support the code for extended periods of time? Will they create bug fixes? Will they backport those fixes into upstream projects? Do they only support the projects they lead, or will they also support additional projects which give customers more flexibility and choice?


Multi-Cloud Support

Unlike some of the IaaS platforms/projects, which are primarily targeted at "private cloud" environments, Kubernetes has always been designed to be multi-cloud and to allow application portability. This is where it's important to find reference architectures for various cloud implementations (AWS, Azure, GCP, VMware, OpenStack, bare metal), as well as tools to help automate and upgrade the environments. This is also an area where some implementations are offering support for Open Service Broker capabilities, which allow Kubernetes platforms to integrate with 3rd-party services from various cloud providers. Vendors such as AWS, Red Hat, Ansible, GCP, Microsoft and Pivotal have implemented open service brokers.
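As a rough illustration of what the Open Service Broker API standardizes, the sketch below provisions a service instance by calling a broker's REST endpoint directly. In a real Kubernetes platform the service catalog controller makes this call for you; the broker URL, credentials and GUIDs here are placeholders, and the header and field names reflect my reading of the OSB spec rather than any particular broker.

```python
# Hypothetical sketch of an Open Service Broker provisioning call.
# In practice the Kubernetes service catalog issues this request on your
# behalf; the URL, credentials, and GUIDs below are made up.
import uuid
import requests

BROKER_URL = "https://broker.example.com"      # placeholder broker endpoint
AUTH = ("broker-user", "broker-password")      # brokers typically use basic auth

def provision_instance(service_id: str, plan_id: str) -> dict:
    instance_id = str(uuid.uuid4())
    resp = requests.put(
        f"{BROKER_URL}/v2/service_instances/{instance_id}",
        params={"accepts_incomplete": "true"},  # allow asynchronous provisioning
        headers={"X-Broker-API-Version": "2.13"},
        auth=AUTH,
        json={
            "service_id": service_id,           # from the broker's /v2/catalog
            "plan_id": plan_id,
            "organization_guid": "org-placeholder",
            "space_guid": "space-placeholder",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return {"instance_id": instance_id, **resp.json()}
```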


Ecosystem Integrations

While Kubernetes already has a large number of companies with certified implementations, it's still important to know which implementations are making the effort to test and validate that they work together. This means not just press-release partnerships, but proven working code, automation playbooks, and implementations that are out in the open.

Operational Skills

One of the last things to consider is how well a company's existing operational skills will match the requirements of the new platform. Are they able to integrate existing build tools (CI/CD pipelines), or does that get replaced? Can they reuse existing automation or virtualization skills? Can they get training and certifications to improve their skills, or is it mostly just code examples to sift through? Or does the vendor offer options to augment existing operational skills with managed services or automated updates delivered directly by the vendor?

February 1, 2018  7:29 PM

Understanding Helm – The Kubernetes Package Manager

Brian Gracely
Kubernetes, Microsoft

Virtual machines are a fairly easy concept to understand. A virtual machine is software that emulates an entire computer (server). It allows a physical server to be logically "segmented" into multiple software computers, or virtual machines, and gives Infrastructure/Operations teams a way to run more applications on a single physical computer than before. For the most part, virtual machines are a technology tool that is primarily used (and managed) by the Infrastructure/Operations teams, with Developers being more focused on the applications that run inside the virtual machine.

In contrast, containers are a technology that is applicable to both developers and operators, depending on the task at hand. For developers, containers provide a standard way to package an application along with its dependencies (additional libraries, security credentials, etc.). They also allow developers to run and validate applications locally on their laptops. For operators, container platforms allow applications to be immutably deployed, with their scaling and availability characteristics deterministically described.

So how does a developer "describe" to the operations team how they want the application to be deployed and run? One technology that makes this possible is called "Helm", the Kubernetes Package Manager. Helm is an official CNCF project, with a large development community supporting the project. Helm packages an application as a chart – a set of files that describes the application, its dependencies, and how the Kubernetes controller should treat the application. Helm charts can be version controlled, allowing a rollback if a problem occurs after deployment. Helm charts can also be centrally stored in an online registry-type service, such as KubeApps from Bitnami.
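For readers who haven't used Helm, the sketch below drives the handful of CLI commands that make the versioned install/upgrade/rollback workflow concrete. The release and chart names are placeholders, and the argument order assumes Helm 3 syntax (Helm 2, current when this was written, used `helm install --name <release> <chart>`).

```python
# Sketch of the Helm release lifecycle, driven from Python via the CLI.
# Release and chart names are placeholders; argument order assumes Helm 3
# ("helm install <release> <chart>"), while Helm 2 used "--name" instead.
import subprocess

def helm(*args: str) -> str:
    """Run a helm command and return its stdout."""
    result = subprocess.run(["helm", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

# Install a chart as a named release (revision 1).
print(helm("install", "my-app", "example-repo/my-chart"))

# Upgrade the release with a changed value (creates revision 2).
print(helm("upgrade", "my-app", "example-repo/my-chart",
           "--set", "replicaCount=3"))

# Inspect the revision history, then roll back if the upgrade misbehaves.
print(helm("history", "my-app"))
print(helm("rollback", "my-app", "1"))
```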

We recently sat down with Taylor Thomas, Software Engineer at Nike and @HelmPack maintainer. We walked through the basics of the Helm technology, how it interacts with commonly used container tools and Kubernetes, as well as how the Helm community is planning for the upcoming Helm v3 release. We also talked about the agenda for the upcoming Helm Summit in Portland.

During that discussion, we highlighted some great resources for anyone that wants to learn more about Helm, or start using the technology for their containerized applications.

January 8, 2018  9:45 AM

Understanding Service Meshes for Microservices

Brian Gracely
containers, Kubernetes, Load balancing, Microservices, Proxy

One of the most popular topics coming out of the CNCF’s KubeCon event in Austin was the concept of a “Service Mesh”.

There were a number of great sessions (videos) at KubeCon about Service Mesh technologies (including Istio, Envoy, Linkerd and Conduit).

This week we discussed the basics of Service Meshes on the PodCTL podcast, and I've previously discussed Istio and Linkerd on The Cloudcast.

What is a Service Mesh?

If you look at the origin of the service mesh projects that have emerged over the last year, most of them began as a necessity in the webscale world. Linkerd was created by engineers who had worked at Twitter (and who have since founded Buoyant, which also created the Conduit project). Envoy was created by the engineering team at Lyft. And Istio started as a joint project from Google, IBM and Lyft, and has since seen large contributions from Red Hat and many others.

In its most basic definition, a service mesh is application-layer routing technology for microservice-centric applications. It provides a very granular way to route, proxy and load-balance traffic at the application layer. It also provides the foundation for application-framework-independent routing logic that can be used (at a platform layer) by any microservice. This article from the Lyft engineering team does an excellent job of going in-depth on the basic use-cases and traffic flows where microservices might benefit from having a service mesh, as opposed to just using the native (L2-L4) routing from a CaaS or PaaS platform + application-specific logic.
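As a toy illustration of the kind of application-layer decision a mesh proxy makes, the sketch below performs weighted selection between two versions of the same service, roughly what an Istio or Linkerd traffic-split rule expresses declaratively. It is hand-rolled here purely to show the idea; the backend addresses are placeholders, and real mesh proxies add retries, mTLS, telemetry, and dynamic configuration on top.

```python
# Toy illustration of L7 weighted routing, the per-request decision a
# service-mesh sidecar (Envoy, Linkerd, etc.) makes. The backend addresses
# below are placeholders for two versions of the same microservice.
import random
from collections import Counter

ROUTES = [
    {"backend": "http://reviews-v1.default.svc:9080", "weight": 90},  # stable
    {"backend": "http://reviews-v2.default.svc:9080", "weight": 10},  # canary
]

def pick_backend(routes=ROUTES) -> str:
    """Choose a backend for one request, proportionally to its weight."""
    total = sum(r["weight"] for r in routes)
    point = random.uniform(0, total)
    for route in routes:
        point -= route["weight"]
        if point <= 0:
            return route["backend"]
    return routes[-1]["backend"]

if __name__ == "__main__":
    # Roughly 90% of requests should land on v1 and 10% on the canary.
    print(Counter(pick_backend() for _ in range(10_000)))
```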

Why are Service Meshes now gaining attention?

The biggest reason that we're now hearing about Service Meshes is the broader adoption of microservice architectures for applications. As more microservices are deployed, it becomes more complicated to route traffic between them, discover new services, and instrument various types of operational tools (e.g. tracing, monitoring, etc.). In addition, some companies wanted to remove the burden of certain functionality from their application code (e.g. circuit breakers, various types of A/B or canary deployments, etc.), and Service Meshes can begin to move that functionality out of the application and into platform-level capabilities. In the past, capabilities like the Netflix OSS services were language-specific (e.g. Java), which allowed teams to get similar functionality, but only if they were writing applications in the same language. As more types of applications emerge (e.g. mobile, analytics, real-time streaming, serverless, etc.), language-independent approaches become even more desirable.
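Circuit breaking is a good example of logic that used to live in language-specific application libraries (Netflix Hystrix for Java) and that a mesh can now provide at the platform layer. The minimal breaker below is only meant to illustrate the pattern itself; it is not any particular mesh's implementation, and the thresholds are arbitrary.

```python
# Minimal circuit-breaker sketch: after enough consecutive failures the
# breaker "opens" and fails fast, then allows another attempt after a
# cool-down. A service mesh applies this per route, outside application code.
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 5, reset_seconds: float = 30.0):
        self.max_failures = max_failures    # consecutive failures before opening
        self.reset_seconds = reset_seconds  # cool-down before a retry is allowed
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_seconds:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None           # cool-down elapsed, allow one retry
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0                   # any success closes the circuit
        return result
```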

Want to Get Started with Service Meshes?

Consider working through the tutorial for Istio on Kubernetes over on Katacoda. Also consider listening to these webinars about how to get Istio working on OpenShift (here, here).

January 2, 2018  10:32 PM

The Top 10 Stories from 2017

Brian Gracely
ai, AWS, Azure, Bitcoin, Blockchain, containers, Docker, HCI, Kubernetes, ml, VMware

It goes without saying that a lot happened in 2017, as tech and politics and culture intersected in many different areas. Before the year came to a close, we looked back at 10 of the stories that had the biggest impact on the evolution of IT.

Container Stuff

1. Docker >> New CEO; docker >> Moby

The company and open source project that was once the darling of the tech media went through some significant changes. Not only did they replace their long-time CEO, Ben Golub, with Steve Singh, but they also changed the identity of the project that created such a devoted open source community. In May, Docker decided to take control of the intellectual property around the name "Docker", and changed the name of the popular project from "docker" to "moby". This move provided a more distinct line between Docker, Inc (a.k.a. "big D Docker") and the open source community (a.k.a. "little d docker"), and clarified the company's future monetization plans.

2. Kubernetes winning the Container Orchestration Wars – but do we now only talk about it being boring and irrelevant?

Just a few years ago, there was a great deal of debate about container orchestration. CoreOS's Fleet, Docker Swarm, Cloud Foundry Diego, HashiCorp's Nomad, Kontena, Rancher's Cattle, Apache Mesos, and Amazon ECS were all competing with Kubernetes to gain the favor of developers, DevOps teams and IT operations. As we closed out 2017, Kubernetes has run away from the field as the clear leader. Docker, Cloud Foundry, Rancher and Mesosphere added support for Kubernetes, putting their alternative offerings on the back burner. AWS and Azure brought new Kubernetes services to market, and even Oracle jumped in the game. And early adopter Red Hat continues to do Red Hat things in the Kubernetes space with OpenShift.

And with that standardization came some level of boredom from the early adopter crowd. The annual KubeCon event was littered with talks about how Kubernetes has become "boring" and "a commodity". They have all moved on to 2018's newest favorite, "service meshes".

Cloud Stuff – When do we hit the Cloud Tipping Point?

3. Azure getting bigger, but big enough? ($20B run-rate; includes O365)

As Microsoft continues to evolve its business toward more cloud-based services and support of open source technologies, it continues to grow its Azure and Office 365 businesses. But is it growing fast enough to keep up with AWS? Microsoft claims to be on a $20B run-rate for its cloud business, but it doesn't break out individual elements such as Azure vs. O365 vs. Windows Server or management tools.

4. AWS re:Invent – can it be stopped? ($18B run-rate)

With dozens of new announcements at AWS re:Invent, the industry continues to chase the leader in cloud computing. The Wall Street Journal recently looked at ways that Amazon (and AWS?) may eventually slow down, but it’s hard to see that happening any time soon.

5. Google Cloud – still ??? (says almost 2,500 employees)

Given the growth of Alibaba Cloud in Asia and continued growth from Oracle Cloud (mostly SaaS business), does Google Cloud still belong in the discussion about “the Big 3”? Apparently Google’s cloud business is now up to 2500 employees, and has made some critical new hires on the executive staff, but when will we start to see them announce revenues or capture critical enterprise customers?

Infrastructure Stuff

6. New Blueprint Emerging for Infrastructure?

Over the last 3-5 years, no segment of the market has undergone as much change and disruption as IT infrastructure. VMware is a networking company, Cisco is a server and security company, HPE isn't sure what it once was anymore, and DellEMC is focused on "convergence" of its storage and server business. The combination of Hyper-Converged Infrastructure (e.g. Nutanix, VSAN, etc.) and "Secondary Storage" (e.g. Rubrik, Cohesity, etc.) is bringing Google and Facebook DNA to infrastructure, with an element of public cloud integration. Throw in the transition from VMs to containers, and infrastructure is setting up for more consolidation, more software-defined and more disruption in 2018.

7. VMware’s cloud direction?

The Big 3 Public Clouds have all placed their bets on some form of a hybrid cloud story – AWS + VMware; GCP + Nutanix or Cisco or VMware/Pivotal; and Azure + Azure Stack. Will VMware continue to have a fragmented story of VMs to AWS and Containers to GCP/GKE, or will it create new partnerships to make this story more consumable by the marketplace? And what ever happened to the VMware + IBM Cloud partnership from 2016?

8. Where does Serverless Fit? Does it? Is it a Public Cloud only play?

Serverless or FaaS (Functions as a Service) is rapidly climbing up the hype curve, with AWS Lambda leading the way. AWS has been taking serverless in broader directions lately, with "serverless" RDS database offerings, and "Fargate", a serverless-like offering for the infrastructure under its container services. But what about the rest of the market? Will a strong open-source option emerge for multi-cloud or on-prem offerings (e.g. OpenWhisk, Kubeless, Fission, OpenFaaS), or will serverless be the one technology that never really translates from the public cloud into other environments? Expect 2018 to be a big year for serverless, as more and more examples of applications and usage patterns begin to emerge from Enterprise companies.

Stuff we don’t understand 

9. BitCoin / Blockchain – did you bet your mortgage on it? 

Banks were allowing people to take out second mortgages and HELOCs against their houses to buy Bitcoin. Bitcoin was up 16,000%+ in 2017, and some "experts" were saying that it wasn't a bubble. And the underlying technology, Blockchain, is still somewhat of a mystery to many people, although it's supposed to be the next big thing for almost every industry. What's happening here?

10. AI & ML – Everyone’s including AI in their products

These things we know:

  • The hype around AI and ML is huge
  • When areas of AI or ML improve (e.g. auto-fill in searches), the industry assumes it's simple and moves the goal posts for what miracles of computer science we expect next.
  • Everyone is afraid of SkyNet

January 2, 2018  7:05 PM

Kubernetes Year in Review

Brian Gracely
AWS, Cloud Foundry, containers, Docker, Kubernetes, OpenShift, Oracle, Red Hat

Over at the PodCTL podcast, we’ve been discussing Kubernetes and Containers for the last 6 months on a weekly basis. Before the break, we looked back on how far the Kubernetes community had evolved in just a short period of time.

  • SETUP | GETTING STARTED: A few years ago, people said that getting started with Docker Swarm was easier than with Kubernetes. Since then, the Kubernetes community has created tools like Minikube and Minishift to run locally on a laptop, along with automation playbooks in Ansible; services like Katacoda have made online tutorials really simple; and multiple cloud offerings (GKE, AKS, EKS, OpenShift Online|Dedicated) make it simple to get a working Kubernetes cluster immediately.
  • ENSURING PORTABILITY: Nearly every Enterprise customer wants a Hybrid Cloud environment, but they need to understand how multiple cloud environments will impact this decision. The CNCF's Kubernetes Conformance model is the only container-centric framework that can assure customers that Kubernetes will be consistent from one cloud environment to another. And since it's built entirely on the APIs and tools used to build the Kubernetes technology, it allows companies to include compliance testing as part of their day-to-day development.
  • INFRASTRUCTURE BREADTH: Other container orchestrators had ways to integrate storage and networking, but only Kubernetes created standards (e.g. CNI, CSI) that have gained mainstream adoption to create dozens of vendors/cloud options. This allows dozens of networking or storage vendors (or open source projects) to easily integrate with Kubernetes and the breadth of conforming Kubernetes platforms. 
  • APPLICATION BREADTH: The community has evolved from supporting stateless apps to supporting stateful applications (and containerized storage), serverless applications, batch jobs, and custom resource definitions for vertical-specific application profiles.
  • IMPROVING SECURITY: A year ago, there were concerns about Kubernetes security. Since then, the community has responded with better encryption and management of secrets, and improved Kubernetes-specific container capabilities like CRI-O and OCI standardization. In addition, the security ecosystem has embraced new innovations around continuous monitoring, scanning, and signing images within the container registry. 
  • IMPROVING PERFORMANCE: Red Hat (and others) have started the Performance SIG in the Kubernetes community to focus on high-performance applications (HPC, Oil & Gas, HFT, etc) and profiling the required performance characteristics of these applications in containerized environments.
  • IMPROVING THE DEVELOPER EXPERIENCE: One of the themes of KubeCon 2017 (Berlin) was focusing on the developer experience, and in just a few months we're seeing standardization around the Helm format (for application packaging), Draft to streamline application development, and Kubeapps to simplify getting started with apps from a self-service catalog. We've also seen Bitnami build a parallel to their existing container catalog with applications that are packaged specifically for OpenShift's security model of non-root containers (vs. the Docker model of root-enabled containers).

All in all, it was an amazing evolution of the Kubernetes community, from roughly 1,000 people at KubeCon 2016 in Seattle to more than 4,300 at KubeCon 2017 in Austin. 2018 will bring increased competition and innovation, as well as many more customers running production applications on Kubernetes – both in their own data centers and in the public cloud.

November 30, 2017  12:17 AM

AWS Expands the Definition of Serverless

Brian Gracely
Aurora, AWS, Kubernetes, Lambda

The keynote of the AWS re:Invent show has become a firehose of new announcements and new technologies. This year was no different. But instead of focusing on the laundry list of new features or services, I wanted to focus on one specific aspect of several of the announcements.

When AWS Lambda (e.g. "Serverless") was announced in 2014, it opened several discussions. The first was focused on how this new, lighter-weight programming paradigm would be used. The second was that the usage of this service didn't require (nearly) any operational involvement, from the perspective of planning the underlying infrastructure resources (capacity, performance, scalability, high availability). In essence, it was truly on-demand. The third was the beginning of an extremely granular pricing model, which is truly pay-per-usage and not pay-to-start-the-instance.
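To show what "no infrastructure planning" looks like from the caller's side, here is a hedged boto3 sketch that invokes a Lambda function directly. The function name, region and payload are placeholders; the only prerequisites are AWS credentials and an existing function.

```python
# Invoking a Lambda function with boto3: no instances to size, provision,
# or scale, and billing is per invocation. The function name and payload
# are placeholders; this assumes AWS credentials are already configured.
import json
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

response = lambda_client.invoke(
    FunctionName="my-example-function",     # placeholder function name
    InvocationType="RequestResponse",       # synchronous call
    Payload=json.dumps({"orderId": 12345}).encode("utf-8"),
)

print(response["StatusCode"])
print(json.loads(response["Payload"].read()))
```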

During the keynote, AWS made several announcements that appear to extend the concept of “serverless-style operations” to non-Lambda AWS services.

In particular, both Fargate and Aurora Serverless are helping to remove the concerns of the underlying infrastructure planning from more complex services – Kubernetes/Container Cluster Management, and Database Management.

AWS is by no means the only cloud provider that is starting to drive this trend. Azure has a service called "Azure Container Instances" that does something similar to AWS Fargate. Google Cloud Platform has several data-centric services (e.g. BigQuery, Dataflow, Dataproc, Pub/Sub) that don't require any planning of the underlying infrastructure resources.

These new (or enhanced) services shouldn't be confused with the Serverless or Functions-as-a-Service development paradigm. But they do show the beginnings of what many in the Serverless community are beginning to call "DiffOps" (or Different Ops). It's not the all-or-nothing claim of "NoOps", but rather a recognition that the requirements on Ops teams could change significantly with these new services. And what that DiffOps will look like is still TBD. We discussed it briefly on The Cloudcast with Ryan Brown a few weeks ago.

For Enterprise Architects and Operations teams, this will definitely be a trend to watch, as it has the potential to significantly change or disrupt the way that you do your day-to-day jobs going forward.

November 19, 2017  11:58 AM

The Evolution of the Kubernetes Kommunity and Konformance

Brian Gracely
API, certification, Cisco, Compliance, CoreOS, Google, Kubernetes, Linux, Microsoft, Red Hat, VMware

This past week the Cloud Native Computing Foundation (CNCF) announced their "Certified Kubernetes Conformance Program", with 32 distributions or platforms on the initial list. This allows companies to validate that their implementations do not break the core Kubernetes APIs, and it gives them access to the Kubernetes name and logos.

Given that the vast majority of vendors are either now offering Kubernetes, or plan to offer Kubernetes in 2018, this is a valuable step forward to help customers reduce concerns they have about levels of interoperability between Kubernetes platforms.

[NOTE: We've been covering all aspects of Kubernetes on the new PodCTL podcast. The show is available via RSS feed, iTunes, Google Play, Stitcher, TuneIn and all your favorite podcast players.]

Beyond the confidence this can provide the market, the Kubernetes community should be credited for doing this in a transparent way. Each implementation needs to submit their validated test results via GitHub, and the testing uses the same automated test suite that is used for all other Kubernetes development. There’s no 3rd-party entity involved.


This may seem like a nitpick, but I think it's important to get some terminology correct, as this has confused the market with previous open source projects. We dissected an open source release in previous posts. While open source projects have been around for quite a while, and usage has grown significantly over the years, the market is still confused about how to speak about them. Let's look at a couple of examples:

#1 – “The interoperability that this program ensures is essential to Kubernetes meeting its promise of offering a single open source software stack supported by many vendors that can deploy on any public, private or hybrid cloud.” (via CNCF Announcement)

#2 – Dan Kohn said that the organization has been able to bring together many of the industry’s biggest names onboard. “We are thrilled with the list of members. We are confident that this will remain a single product going forward and not fork,” he said. (via TechCrunch)

You'll notice that terms like product and stack are interchanged with project. This happens quite a bit, which can sometimes set the wrong expectations for customers in the market who are used to certain support or lifecycle expectations from software they use. We often saw this confusion with OpenStack, which was actually many different projects pulled together under one umbrella name that could be used together or independently (e.g. the "Swift" object storage project).

It's important to remember that Kubernetes is an open source project. Things that passed the conformance test are categorized as either "distributions" or "platforms", which means they are vendor products (or cloud services). And this program doesn't cover things that plug into non-Kubernetes-API aspects like the Container Network Interface (CNI), the Container Storage Interface (CSI) or container registries.

Beyond the Conformance Tests (and Logos)

While there are very positive aspects of this program, there are other elements that still need to evolve.

Projects vs. Products (Distributions & Platforms)

It is somewhat unusual to have a certification for an open source project, especially a fast-moving one like Kubernetes, since the project isn't actually certified, but rather the vendor implementations of that project (in product form). Considering that Kubernetes comes out with a new release every three months, it will be interesting to watch how the community (and the marketplace) reacts to constantly having to re-certify, as well as the questions that will arise about backwards compatibility.

Another area which is somewhat unique is that vendors have been allowed to submit offerings before they are Generally Available in the market.

A third aspect that will be interesting to watch is how certain vendors handle support for implementations if they don't really contribute to the Kubernetes project. For example, Pivotal/VMware, Cisco and Nutanix have all announced partnerships with Google to add Kubernetes support to their platforms. Given that those three vendors have made very few public contributions to the Kubernetes project, these appear to be more like "OEM" partnerships. So how will a customer get support for these offerings? Will they always need a fix from Google, or will they be able to make patches themselves?

Long-Term Support (LTS)

One last area that will be part of the community discussion in 2018 is an LTS (Long-Term Support) strategy. With new Kubernetes releases coming out every three months, many companies have expressed an interest in a model that is more focused on stability. This eventually happened in the Linux community, and it is beginning to happen with OpenStack distributions. It will be an interesting topic to watch, as many people within the community say that LTS models stifle innovation. On the flip side are customers who might need or want to run the software, but are struggling to keep up with frequent upgrades and patching.

