From Silos to Services: Cloud Computing for the Enterprise


March 31, 2018  8:57 PM

Adding Artificial Intelligence into Software Platforms

Brian Gracely
ai, Artificial intelligence, Machine learning, ml, Zugata


It’s difficult to go a day in the tech industry without hearing a prediction (e.g. here, here, here, here, and many, many more) about Artificial Intelligence or Machine Learning. Jobs for these skills are in high demand and companies in nearly every industry are trying to figure out how to embed these capabilities into their products and platforms before their competition.

The question for many CIOs or Product Managers is, “How do we add AI into our platforms?”. Do they hire a few PhDs or Data Scientists? Do they try using one of the AI/ML services from public cloud providers like Google, AWS or Azure? Or is there some other path to success?

I recently put this question to Srinivas “SK” Krishnamurti (@skrishna09; Founder/CEO of Zugata), as his company just announced a new AI service to augment their existing SaaS platform. I wanted to understand the complexity of the technology, the difficulty of finding engineering talent, and how the embedded AI capabilities make the product more attractive.

The first thing we discussed was what types of business (or customer) problems AI/ML could potentially solve. SK highlighted that it was important to understand who would be using the software and how much experience or expertise could be assumed about their use-cases. Once this domain was well understood, it was possible to determine if or how AI/ML should be used.

The next thing we discussed was how AI/ML advances would be perceived in those use-cases. Would they create a measurable difference from what could be manually accomplished now, and would the difference be perceived as valuable enough to make the investment?

Once we got past the business value and use-cases, we began to focus on how to find the right staff to start the AI/ML process. SK shared with me that their journey lasted well over a year before they began to feel confident that their efforts would be valuable. This included hiring talent, looking at data models, building the models, and the long process of training the models with data. He said that the longest amount of time was in training the models, as they had to frequently ask themselves if they were biasing the system to get the answers they believed were needed vs. the system coming to those answers by itself.
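
As an aside on the training and bias point, a minimal sketch of the kind of sanity check SK describes is to compare a model's accuracy across subgroups and investigate any large gap. This is purely illustrative Python; the field names, labels and data are made up, not Zugata's.

    # Hypothetical sketch: compare prediction accuracy across subgroups to spot
    # potential bias introduced during training. Field names and data are made up.

    def accuracy(pairs):
        """Fraction of (predicted, actual) pairs that match."""
        return sum(1 for predicted, actual in pairs if predicted == actual) / len(pairs)

    # Each record: (subgroup, predicted_label, actual_label)
    results = [
        ("group_a", "ready_for_promotion", "ready_for_promotion"),
        ("group_a", "needs_development", "ready_for_promotion"),
        ("group_b", "ready_for_promotion", "ready_for_promotion"),
        ("group_b", "needs_development", "needs_development"),
    ]

    by_group = {}
    for group, predicted, actual in results:
        by_group.setdefault(group, []).append((predicted, actual))

    for group, pairs in sorted(by_group.items()):
        print(group, "accuracy:", round(accuracy(pairs), 2))
    # A large, persistent gap between groups is a signal to revisit the training
    # data and labels rather than trust the model's answers.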

Finally, we talked about the challenges of building AI/ML models that were influencing non-human systems (e.g. electronic trading or IT Observability systems) vs. systems that would directly impact human decisions (e.g. hiring, firing, evaluating emotional state, etc.). He said that this added yet another layer of complexity into their analysis of their models, as again they needed to make sure that a broad set of scenarios were being properly evaluated by the system.

It was clear to me that there is no single way to add AI/ML into a software platform, but SK's guidance may prove valuable to readers as they begin their own journey to improve their software platforms. I'd love to hear about any experience readers have had with their own systems.

March 31, 2018  6:25 PM

Can Open Source and IPOs Fly Together?

Brian Gracely
Cloudera, Docker, hortonworks, IPO, Open source


There was some buzz in the industry about a week ago as Pivotal finally announced their intention to go public by filing their S-1 document with the SEC. This has long been rumored to be Pivotal's plan, since they have taken $1.7B in funding (both from companies within the Dell/EMC/VMware family, as well as outside investors such as Ford, GE and Microsoft). This was the first time that Pivotal had to publicly disclose many aspects of their business (customers, revenues, costs, breakdown of the business mix, etc.), with more detail than was provided when their numbers were reported by EMC back in 2015. We previously looked into those numbers in the context of the changing landscape in the PaaS and CaaS marketplace.

There has been no timeline established for when Pivotal might attempt an IPO, and there are also rumors that Dell or VMware may take actions to avoid an IPO. Lots of speculation happening before the traditional Dell World (previously EMC World) event in May.

But all of that aside, it brings up the question of how compatible today's “open source centric” startups are with eventually growing into successful companies that reach IPO. Pivotal's James Watters has argued that Pivotal isn't an open source company. Other recent IPOs from companies that were open source centric, including Hortonworks ($HDP), Cloudera ($CLDR) and MongoDB ($MDB), have also gone the route of “open core” business models with relative success. All took large levels of VC funding (Hortonworks – $250M; MongoDB – $311M; Cloudera – $1B), but all have been able to grow revenues since their IPOs. And in most cases, those companies were primarily focused on being software-centric companies, while Pivotal has historically been more of a services-centric company that also sold software.

Venture Capitalist and Startup founder Joseph Jacks believes that there could be many more IPOs on the way. He tracks Commercial Open Source Software companies with more than $100M in revenue (note: private companies' revenues cannot be verified, as they are not publicly reported) and believes this growing list is an indication that open source is becoming more mainstream in Enterprise accounts.

Of the companies on the list, more of them have been acquired prior to IPO than have completed the IPO process. When this happens, especially if the acquiring company is not strong in open source, the original company's technology often does not remain open source. It is often very difficult to merge proprietary and open source cultures and development models, as we recently saw when DellEMC eliminated their open source focused {code} team.

Given the uniqueness of the Pivotal situation (high VC funding levels, Dell ownership levels), it’s not clear if their outcome – IPO or acquisition (or other) – is indicative of more open source centric IPOs in the future. We may have to wait for another 1-2 IPO declarations from that list before we can see any new trends emerging.


March 10, 2018  11:47 AM

Understanding the Variety of Kubernetes Roles and Personas

Brian Gracely
Applications, containers, DevOps, Kubernetes, Services

The Road to More Usable Kubernetes – Joe Beda

Depending on who you ask, you're very likely to get many different answers to the question, “Who is the user or operator of Kubernetes?”. In some cases, it's the Operations team running the Kubernetes platform, managing scale, availability, security and deployments in multiple cloud environments. In other cases, it's the Development team interacting with manifest files (e.g. Helm or OpenShift Templates) for their applications, or integrating middleware services (e.g. API Gateway, MBaaS, DBaaS). Still other cases have blended DevOps teams that are redefining roles, responsibilities and tool usage.
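
For readers who haven't seen one, the “manifest” a development team hands to the platform is a declarative description of the application that the Operations team's platform then acts on. Here is a minimal, illustrative sketch of a Kubernetes Deployment, written as a Python dict rather than YAML; the application name, labels and image are hypothetical.

    # Illustrative only: a minimal Kubernetes Deployment description, expressed as
    # a Python dict for readability. The name, labels and image are hypothetical.
    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "example-web"},
        "spec": {
            "replicas": 3,  # an Ops-centric concern: scale and availability
            "selector": {"matchLabels": {"app": "example-web"}},
            "template": {
                "metadata": {"labels": {"app": "example-web"}},
                "spec": {
                    "containers": [{
                        "name": "web",
                        "image": "registry.example.com/example-web:1.0",  # a Dev-centric concern
                        "ports": [{"containerPort": 8080}],
                    }]
                },
            },
        },
    }

    print(deployment["metadata"]["name"], "->", deployment["spec"]["replicas"], "replicas")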

Since Kubernetes orchestrates containers, and containers are technology that is applicable to both developers and operators, it can lead to some confusion about who should be focused on mastering these technologies.

This past week, we discussed this on PodCTL. The core of the discussion was based on a presentation by Joe Beda, one of the originators of Kubernetes at Google, that he gave at KubeCon 2017 Austin. While Joe covered a broad range of topics, the main focus was on a matrix of roles and responsibilities that can exist in a Kubernetes environment (see matrix image above) – ClusterOps, ClusterDev, AppOps and AppDev. In some cases, Joe highlighted the specific tools or processes that are available to (and frequently used by) each function. In other cases, he highlighted where this model intersects and overlaps with the approaches outlined in the Google SRE book.

Some of the key takeaways included:

  • Even though Kubernetes is often associated with cloud-native apps (or microservices) and DevOps agility, there can be very distinct differences in what the Ops-centric quadrants focus on vs. the App-centric quadrants.
  • Not every quadrant is as mature as the others. For example, the Kubernetes community has done a very good job of providing tools to manage cluster operations. In contrast, we still don’t have federation-level technology to allow developers to build applications that treat multiple clusters as a single pool of resources.
  • Not every organization will assign these roles to specific people or groups, and some may be combined or overlap.
  • There is still a lot of room for innovation and new technologies to be created to improve each of these areas. Some innovation will happen within Kubernetes SIG groups, while others will be created by vendors as value-added capabilities (software or SaaS services).

It will be interesting to watch the evolution of roles as technologies like Kubernetes and containers begin to blur where applications intersect with infrastructure. Will we see it drive faster adoption of DevOps culture and SRE roles, or will a whole new set of roles emerge to better align with the needs of rapid software development and deployment?


February 28, 2018  10:42 PM

The Kubernetes Serverless Landscape

Brian Gracely
containers, events, FaaS, Fn, Functions, Kubernetes, Lambda

In the traditional world of IT tech, there are currently two trends that are like rocket ships – Kubernetes and Serverless. There’s also lots of buzz around AI, ML, Autonomous Vehicles, Blockchain and Bitcoin, but I don’t count those as traditional IT building blocks.

Kubernetes and Serverless (the AWS Lambda variety) both got launched into the market within a few months of each other, between the end of 2014 and early 2015. They were both going to change how newer applications would get built and deployed, and they both promised to reduce the complexities of dealing with the underlying infrastructure. Kubernetes is based on containers, and Serverless (at least in the AWS Lambda sense) is based on functions (and some undisclosed AWS technologies).
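
For readers less familiar with the function side, a serverless function is typically just a small handler that the platform invokes per event, while provisioning, scaling and teardown are handled for you. Here is a minimal sketch in the style of an AWS Lambda Python handler; the event fields are hypothetical.

    # Minimal sketch of a function-as-a-service handler, in the style of an AWS
    # Lambda Python function: the platform calls handler(event, context) for each
    # event. The event fields used here are hypothetical.
    import json

    def handler(event, context):
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "hello, " + name}),
        }

    # Local smoke test (outside any platform): simulate a single invocation.
    if __name__ == "__main__":
        print(handler({"name": "serverless"}, context=None))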

I started following the serverless trend back in the Spring of 2016, attending one of the early ServerlessConf events. I had the opportunity to speak to some of the early innovators and people that were using the technology as part of their business (here, here, here). Later I spoke with companies that were building serverless platforms (here, here) that could run on multiple cloud platforms, not just AWS. At this point, the Kubernetes world and Serverless worlds were evolving in parallel.

And then in early 2017, they began to converge. I had an opportunity to speak with the creators of the Fission and Kubeless projects. These were open source serverless projects that were built to run on top of Kubernetes. The application functions would run directly in containers and be scaled up or down using Kubernetes. The two rocket ships were beginning to overlap in functionality. Later, additional projects like Fn, Nuclio, OpenFaaS, and Riff would also emerge as open source implementations of serverless on Kubernetes. And OpenWhisk would soon add deeper integration with Kubernetes. As all of this was happening in 2017, I was wondering if a consensus would eventually be reached so that all these projects wouldn’t fragment the same market space. I wondered if the Kubernetes community would provide some guidance around standard ways to implement certain common aspects of serverless or functions-as-a-service (FaaS) on Kubernetes.

This past week, the Serverless Working Group in the CNCF released a white paper and some guidance about event sources. While they didn’t come out and declare a preferred project, as they have with other areas of microservices, they did begin to provide some consistency for projects going forward. They also established a working group that represents a broad set of serverless backgrounds, not just those focused on Kubernetes.
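
Without reproducing the whitepaper here, the event-source guidance is essentially about agreeing on a common event envelope so a function can move between platforms without provider-specific parsing. A rough, illustrative sketch of such an envelope follows; the field names are my own shorthand, not the working group's specification.

    # Illustrative only: a generic event envelope of the sort the serverless
    # working group is trying to standardize. Field names are hypothetical.
    import json
    from datetime import datetime, timezone

    event = {
        "event_type": "com.example.object.created",   # what happened
        "source": "/storage/buckets/demo",             # where it happened
        "event_id": "a1b2c3",                          # unique per event
        "event_time": datetime.now(timezone.utc).isoformat(),
        "content_type": "application/json",
        "data": {"key": "reports/2018-03.csv", "size": 1024},
    }

    print(json.dumps(event, indent=2))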

We discussed all of these serverless Kubernetes projects on PodCTL this week. We highlighted where the various projects are making good progress, as well as some areas where they still have a long way to evolve before they will be widely adopted.

btw – there’s an interesting debate happening on Twitter these days between the serverless/Kubernetes crowd and the serverless/Lambda crowd. If you want to keep up, follow where @kelseyhightower got started a couple of days ago (and follow the mentions and the back and forth).


February 19, 2018  11:19 PM

Understanding the Challenges of Kubernetes

Brian Gracely
certification, Compatibility, containers, DevOps, Kubernetes, Microservices, platform

Last week I spoke about some of the critical factors that will be important for IT leaders to consider if they are bringing containers and Kubernetes into their environments. These decision criteria will have longer-term impacts on how well their application developers and IT operations teams are able to deploy and scale their containerized applications.

But the Kubernetes technology and community are moving very quickly, with lots of new features and entrants coming into the market. And anytime a market is moving quickly, there can be short-term myths and misunderstandings that confuse both the education process and the decision-making process. For the last couple of weeks, we’ve been looking at some of the more common misperceptions that exist around the Kubernetes community, and at how to reconcile them with the technology available today. We did this in a two-part series (Part I, Part II).

We covered the following topics:

Applications

Myth/Misunderstanding 1 – Kubernetes is a platform.
Myth/Misunderstanding 2 – Containers are only for microservices
Myth/Misunderstanding 3 – Microservices are always “micro” (small in size)
Myth/Misunderstanding 4 – Kubernetes is only for stateless apps

Architecture

Myth/Misunderstanding 5 – Architecture – Kubernetes Multi-Tenancy
Myth/Misunderstanding 6 – Architecture – Kubernetes is only for Operators

Compatibility and Certification

Myth/Misunderstanding 7 – What does “GKE Compatible” mean?
Myth/Misunderstanding 8 – Enterprises should run the trunk version of Kubernetes

Open Source Communities

Myth/Misunderstanding 9 – Are OSS stats important? How to interpret them?

In going through this list, we often found that the misperceptions were not created by vendor FUD, but mostly came from a lack of experience with a broad range of potential applications or deployment models. In other cases, the misperception is untrue now, but had previously been true in the early days of Kubernetes (e.g. v1.0, several years ago).

We know that it can be difficult to always keep up with all the changes happening in the Kubernetes community, so we hope that those two shows help to eliminate some of the confusion. Going forward, it may be useful to look at resources such as “Last Week in Kubernetes Development” or the KubeWeekly newsletter from the CNCF.


February 5, 2018  11:38 PM

5 Critical Criteria for choosing Kubernetes platforms

Brian Gracely
Code, communities, Kubernetes, Multi-cloud, UX

If 2017 began with a market full of choices for managing containers, it ended with a fairly unanimous vote from the community – Kubernetes has won the container orchestration battles. So now that nearly every vendor is claiming to support Kubernetes in 2018, what are the critical criteria that companies will use to determine which Kubernetes offering to choose for the applications that will drive their digital transformations? Let’s take a look at five that will be top-of-mind for decision-makers (in no particular order).

User-Experience / Developer-Experience

Ultimately, companies deploy modern infrastructure (e.g. containers) and application platforms to improve their ability to deploy and support modern applications; applications that are focused on improving the business (e.g. better customer experiences, better delivery of services, increased profitability, etc.). In order for this to happen, developers must be willing to use the platform. A good developer experience gives developers the freedom to use the tools and interfaces they determine will make them the most productive. In some cases, these will be tools that are container-native. In other cases, developers just want to write code and push it into code repositories and automated testing systems. The best platforms will allow developers to operate in either mode, seamlessly. And subsequently, the operations teams shouldn’t have to think about these different modes as independent “silos”. They are just two different ways to access the same platform.

Engineering Commitment

If a company is considering an investment in a commercial platform, they are fundamentally looking for assistance in augmenting their internal development or operations teams. This means that they want to understand how a vendor’s (or cloud provider’s or system integrator’s) engineering staff and experience will make them successful. Do they commit code to the core projects? Do they support the code for extended periods of time? Will they create bug fixes? Will they backport those fixes into upstream projects? Do they only support the projects they lead, or will they also support additional projects which give customers more flexibility and choice?

Multi-Cloud

Unlike some of the IaaS platforms/projects, which are primarily targeted at “private cloud” environments, Kubernetes has always been designed to be multi-cloud and to allow application portability. This is where it’s important to find reference architectures for various cloud implementations (AWS, Azure, GCP, VMware, OpenStack, bare metal), as well as tools to help automate and upgrade the environments. This is also an area where some implementations are offering support for Open Service Broker capabilities that allow Kubernetes platforms to integrate with 3rd-party services from various cloud providers. Vendors such as AWS, Red Hat, Ansible, GCP, Microsoft and Pivotal have implemented open service brokers.
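
To make the Open Service Broker piece a bit more concrete, a broker exposes a small REST API that a platform calls to list, provision and bind services. Here is a minimal sketch of fetching a broker's catalog; the broker URL, credentials and version header value are placeholders, not any particular vendor's broker.

    # Minimal sketch: list the services a broker advertises via the Open Service
    # Broker API catalog endpoint. URL, credentials and version are placeholders.
    import requests

    BROKER_URL = "https://broker.example.com"

    resp = requests.get(
        BROKER_URL + "/v2/catalog",
        headers={"X-Broker-API-Version": "2.13"},
        auth=("broker-user", "broker-password"),
        timeout=10,
    )
    resp.raise_for_status()

    for service in resp.json().get("services", []):
        plans = [plan["name"] for plan in service.get("plans", [])]
        print(service["name"], "plans:", plans)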

Ecosystem

While a large number of companies already have certified Kubernetes implementations, it’s still important to know which of those implementations are making the effort to test and validate that they work together. This is not just about press-release partnerships, but about proven working code, automation playbooks, and implementations that are out in the open.

Operational Skills

One of the last things to consider is how well a company’s existing operational skills will match the requirements of the new platform. Are they able to integrate existing build tools (CI/CD pipelines), or does that get replaced? Can they reuse existing automation or virtualization skills? Can they get training and certifications to improve their skills, or is it mostly just code examples to sift through? Or does the vendor offer options to augment existing operational skills with managed services or automated updates delivered directly by the vendor?


February 1, 2018  7:29 PM

Understanding Helm – The Kubernetes Package Manager

Brian Gracely
Kubernetes, Microsoft

Virtual machines are a fairly easy concept to understand. It’s software that emulates an entire computer (server). It allows a physical server to be logically “segmented” into multiple software-computers, or virtual machines. And they are an interesting way for Infrastructure/Operations teams to more efficiently run more applications than before on a single physical computer. For the most part, virtual machines are a technology tool that is primarily used (and managed) by the Infrastructure/Operations teams, with Developers being more focused on the applications that run inside the virtual machine.

In contrast, containers are a technology that is applicable to both developers and operators, depending on the task at hand. For developers, containers provide a standard way to package an application along with its dependencies (additional libraries, security credentials, etc.). They also allow developers to run and validate applications locally on their laptops. For operators, container platforms allow applications to be immutably deployed, with their scaling and availability characteristics deterministically described.
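
As a small example of the developer-side workflow, here is a sketch of running a packaged image locally with the Docker SDK for Python. It assumes a local Docker daemon is available, and the image and command are placeholders for whatever the developer has packaged.

    # Minimal sketch: run a container image locally to validate it, using the
    # Docker SDK for Python. Assumes a local Docker daemon; the image and command
    # are placeholders.
    import docker

    client = docker.from_env()

    # Run the container, capture its output, and remove it afterwards.
    output = client.containers.run("alpine:3.7", ["echo", "packaged app works"], remove=True)
    print(output.decode().strip())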

So how does a developer “describe” to the operations team how they want the application to be deployed and run? One technology that makes this possible is called “Helm”, the Kubernetes Package Manager. Helm is an official CNCF project, with a large development community supporting it. Helm packages applications as a set of files, called Helm charts, that describe the application, its dependencies, and how the Kubernetes controller should treat the application. Helm charts can be version controlled, to allow for rollback if a problem occurs after deployment. Helm charts can also be centrally stored in an online registry-type service, such as KubeApps from Bitnami.
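
Helm itself uses Go templates and YAML, but the core idea of a chart – versioned metadata plus manifests rendered from a small set of values – can be sketched in a few lines of Python. This is only an illustration of the concept, not how Helm is implemented; all names and values are made up.

    # Illustrative only: the chart idea (metadata + values + templated manifests)
    # sketched with Python's string.Template. Helm actually uses Go templates.
    from string import Template

    chart_metadata = {"name": "example-web", "version": "0.1.0"}                 # like Chart.yaml
    values = {"replicas": 3, "image": "registry.example.com/example-web:1.0"}    # like values.yaml

    deployment_template = Template("""\
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: $name
    spec:
      replicas: $replicas
      template:
        spec:
          containers:
          - name: $name
            image: $image
    """)

    rendered = deployment_template.substitute(name=chart_metadata["name"], **values)
    print(rendered)  # the manifest that would be handed to the Kubernetes API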

We recently sat down with Taylor Thomas, Software Engineer at Nike and @HelmPack Maintainer. We walked through the basics of the Helm technology, how it interacts with commonly used container tools and Kubernetes, as well as how the Helm community is planning for the upcoming Helm v3 release. We also talked about the agenda for the upcoming Helm Summit in Portland.

During that discussion, we highlighted some great resources for anyone that wants to learn more about Helm, or start using the technology for their containerized applications.


January 8, 2018  9:45 AM

Understanding Service Meshes for Microservices

Brian Gracely
containers, Kubernetes, Load balancing, Microservices, Proxy

One of the most popular topics coming out of the CNCF’s KubeCon event in Austin was the concept of a “Service Mesh”.

There were a number of great sessions (videos) at KubeCon about Service Mesh technologies (including Istio, Envoy, Linkerd and Conduit).

This week we discussed the basics of Service Meshes on the PodCTL podcast, and I’ve previously discussed Istio and Linkerd on The Cloudcast.

What is a Service Mesh?

If you look at the origin of the service mesh projects that have emerged over the last year, most of them began out of necessity in the webscale world. Linkerd was created by engineers who had worked at Twitter (and have since founded Buoyant, which also created the Conduit project). Envoy was created by the engineering team at Lyft. And Istio started as a project at IBM, but has since seen large contributions from Google, Lyft, Red Hat and many others.

In its most basic definition, a service mesh is application-layer routing technology for microservice-centric applications. It provides a very granular way to route, proxy and load-balance traffic at the application layer. It also provides the foundation for application-framework-independent routing logic that can be used (at a platform layer) by any microservice. This article from the Lyft engineering team does an excellent job of going in-depth on the basic use-cases and traffic flows where microservices might benefit from having a service mesh, as opposed to just using the native (L2-L4) routing from a CaaS or PaaS platform + application-specific logic.
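
To make “granular routing at the application layer” a little more concrete, here is a minimal sketch of the kind of weighted (canary) routing decision a mesh proxy makes for each request, independent of the application's language or framework. The service names and weights are made up.

    # Minimal sketch: weighted routing between two versions of a service, the kind
    # of per-request decision a service-mesh proxy makes on the application's
    # behalf. Service names and weights are hypothetical.
    import random

    routes = [
        {"destination": "reviews-v1", "weight": 90},  # stable version
        {"destination": "reviews-v2", "weight": 10},  # canary version
    ]

    def pick_destination(routes):
        """Choose a destination in proportion to its configured weight."""
        total = sum(route["weight"] for route in routes)
        roll = random.uniform(0, total)
        upto = 0
        for route in routes:
            upto += route["weight"]
            if roll <= upto:
                return route["destination"]
        return routes[-1]["destination"]

    counts = {"reviews-v1": 0, "reviews-v2": 0}
    for _ in range(1000):
        counts[pick_destination(routes)] += 1
    print(counts)  # roughly a 90/10 split across the simulated requests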

Why are Service Meshes now gaining attention?

The biggest reason that we’re now hearing about Service Meshes is the broader adoption of microservice architectures for applications. The more microservices are deployed, the more complicated it can be to route traffic between them, discover new services, and instrument various types of operational tools (e.g. tracing, monitoring, etc.). In addition, some companies wanted to remove the burden of certain functionality from their application code (e.g. circuit breakers, various types of A/B or Canary deployments, etc.), and Service Meshes can begin to move that functionality out of the application and into platform-level capabilities. In the past, capabilities like the Netflix OSS services were language-specific (e.g. Java), which allowed teams to get similar functionality, but only if they were writing applications in the same language. As more types of applications emerge (e.g. mobile, analytics, real-time streaming, serverless, etc.), language-independent approaches become more desirable.
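
As an example of the kind of logic that used to live inside each application (the Netflix OSS / Hystrix style) and that a mesh can now provide at the platform layer, here is a minimal, illustrative circuit-breaker sketch. Thresholds and timings are arbitrary examples, not any project's defaults.

    # Minimal sketch of a circuit breaker: after too many consecutive failures,
    # calls are short-circuited for a cooling-off period instead of hammering an
    # unhealthy downstream service. Thresholds and timings are arbitrary.
    import time

    class CircuitBreaker:
        def __init__(self, max_failures=3, reset_after=30.0):
            self.max_failures = max_failures
            self.reset_after = reset_after
            self.failures = 0
            self.opened_at = None

        def call(self, func, *args, **kwargs):
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_after:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None  # cooling-off period elapsed; try again
                self.failures = 0
            try:
                result = func(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.max_failures:
                    self.opened_at = time.time()  # trip the breaker
                raise
            self.failures = 0
            return result

    # Usage sketch: breaker = CircuitBreaker(); breaker.call(call_downstream_service)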

Want to Get Started with Service Meshes?

Consider working through the tutorial for Istio on Kubernetes over on Katacoda. Also consider listening to these webinars about how to get Istio working on OpenShift (here, here).


January 2, 2018  10:32 PM

The Top 10 Stories from 2017

Brian Gracely
ai, AWS, Azure, Bitcoin, Blockchain, containers, Docker, HCI, Kubernetes, ml, VMware

It goes without saying that a lot happened in 2017, as tech and politics and culture intersected in many different areas. Before the year came to a close, we looked back at 10 of the stories that had the biggest impact on the evolution of IT.

Container Stuff

1. Docker >> New CEO; docker >> Moby

The company and open source project that was once the darling of the tech media went through some significant changes. Not only did they replace their long-time CEO, Ben Golub, with Steve Singh, but they also changed the identity of the project that created such a devoted open source community. In May, Docker decided to take control of the intellectual property around the name “Docker”, and changed the name of the popular project from “docker” to “moby”. This move drew a more distinct line between Docker, Inc (a.k.a. “big D Docker”) and the open source community (a.k.a. “little d docker”), and gave the company a clearer path for its future monetization plans.

2. Kubernetes winning the Container Orchestration Wars – but do we now only talk about it being boring and irrelevant?

Just a few years ago, there was a great deal of debate about container orchestration. CoreOS’s Fleet, Docker Swarm, Cloud Foundry Diego, HashiCorp’s Nomad, Kontena, Rancher’s Cattle, Apache Mesos, and Amazon ECS were all competing with Kubernetes to gain the favor of developers, DevOps teams and IT operations. As we closed out 2017, Kubernetes has run away from the field as the clear leader. Docker, Cloud Foundry, Rancher and Mesosphere added support for Kubernetes, putting their alternative offerings on the back burner. AWS and Azure brought new Kubernetes services to market, and even Oracle jumped in the game. And early adopter Red Hat continues to do Red Hat things in the Kubernetes space with OpenShift.

And with that standardization came some level of boredom from the early adopter crowd. The annual KubeCon event was littered with talks about how Kubernetes has become “boring” and “a commodity”. They have all moved on to 2018’s newest favorite, “service meshes”.

Cloud Stuff – When do we hit the Cloud Tipping Point?

3. Azure getting bigger, but big enough? ($20B run-rate; includes O365)

As Microsoft continues to evolve its business toward more cloud-based services and support of open source technologies, it continues to grow its Azure and Office 365 businesses. But is it growing fast enough to keep up with AWS? Microsoft claims to be on a $20B run-rate for its cloud business, but it doesn’t break out individual elements such as Azure vs. O365 vs. Windows Server or management tools.

4. AWS re:Invent – can it be stopped? ($18B run-rate)

With dozens of new announcements at AWS re:Invent, the industry continues to chase the leader in cloud computing. The Wall Street Journal recently looked at ways that Amazon (and AWS?) may eventually slow down, but it’s hard to see that happening any time soon.

5. Google Cloud – still ??? (says almost 2,500 employees)

Given the growth of Alibaba Cloud in Asia and continued growth from Oracle Cloud (mostly SaaS business), does Google Cloud still belong in the discussion about “the Big 3”? Apparently Google’s cloud business is now up to 2500 employees, and has made some critical new hires on the executive staff, but when will we start to see them announce revenues or capture critical enterprise customers?

Infrastructure Stuff

6. New Blueprint Emerging for Infrastructure?

Over the last 3-5 years, no segment of the market has undergone as much change and disruption as IT infrastructure. VMware is a networking company, Cisco is a server and security company, HPE isn’t sure what it once was anymore, and DellEMC is focused on “convergence” of its storage and server business. The combination of Hyper-Converged Infrastructure (e.g. Nutanix, VSAN, etc.) and “Secondary Storage” (e.g. Rubrik, Cohesity, etc.) is bringing Google and Facebook DNA to infrastructure, with an element of public cloud integration. Throw in the transition from VMs to containers, and infrastructure is setting up for more consolidation, more software-defined approaches and more disruption in 2018.

7. VMware’s cloud direction?

The Big 3 Public Clouds have all placed their bets on some form of a hybrid cloud story – AWS + VMware; GCP + Nutanix or Cisco or VMware/Pivotal; and Azure + Azure Stack. Will VMware continue to have a fragmented story of VMs to AWS and Containers to GCP/GKE, or will it create new partnerships to make this story more consumable by the marketplace? And whatever happened to the VMware + IBM Cloud partnership from 2016?

8. Where does Serverless Fit? Does it? Is it a Public Cloud only play?

Serverless or FaaS (Functions as a Service) is rapidly climbing up the hype curve, with AWS Lambda leading the way. AWS has been taking serverless in broader directions lately, with “serverless” RDS database offerings, and “Fargate”, a serverless-like offering for the infrastructure under their container offerings (ECS, and eventually EKS). But what about the rest of the market? Will a strong open-source option emerge for multi-cloud or on-prem offerings (e.g. OpenWhisk, Kubeless, Fission, OpenFaaS), or will serverless be the one technology that never really translates from the public cloud into other environments? Expect 2018 to be a big year for serverless, as more and more examples of applications and usage patterns begin to emerge from Enterprise companies.

Stuff we don’t understand 

9. Bitcoin / Blockchain – did you bet your mortgage on it?

Banks were allowing people to take out 2nd mortgages and HELOCs against their houses to buy Bitcoin. Bitcoin was up 16,000%+ in 2017 and some “experts” were saying that it wasn’t a bubble. And the underlying technology, Blockchain, is still somewhat of a mystery to many people, although it’s supposed to be the next big thing for almost every industry. What’s happening here?

10. AI & ML – Everyone’s including AI in their products

These things we know:

  • The hype around AI and ML is huge
  • Where areas of AI or ML improve (e.g. auto-fill in searches), the industry assumes it’s simple and moves the goal posts of what miracles of computer science we expect next.
  • Everyone is afraid of SkyNet


January 2, 2018  7:05 PM

Kubernetes Year in Review

Brian Gracely
AWS, Cloud Foundry, containers, Docker, Kubernetes, OpenShift, Oracle, Red Hat

Over at the PodCTL podcast, we’ve been discussing Kubernetes and Containers for the last 6 months on a weekly basis. Before the break, we looked back on how far the Kubernetes community had evolved in just a short period of time.

  • SETUP | GETTING STARTED: A few years ago, people said that getting started with Docker Swarm was easier than with Kubernetes. Since then, the Kubernetes community has created tools like Minikube and Minishift to run locally on a laptop, along with automation playbooks in Ansible; services like Katacoda have made it really simple to learn through online tutorials; and multiple cloud offerings (GKE, AKS, EKS, OpenShift Online|Dedicated) make it simple to get a working Kubernetes cluster immediately.
  • ENSURING PORTABILITY: Nearly every Enterprise customer wants a Hybrid Cloud environment, but they need to understand how multiple cloud environments will impact this decision. The CNCF’s Kubernetes Conformance model is the only container-centric framework that can assure customers that Kubernetes will be consistent from one cloud environment to another. And since it’s built entirely on the APIs and tools used to build the Kubernetes technology, it allows companies to include conformance testing as part of their day-to-day development.
  • INFRASTRUCTURE BREADTH: Other container orchestrators had ways to integrate storage and networking, but only Kubernetes created standards (e.g. CNI, CSI) that have gained mainstream adoption across dozens of vendor and cloud options. This allows networking and storage vendors (or open source projects) to easily integrate with Kubernetes and the breadth of conforming Kubernetes platforms.
  • APPLICATION BREADTH: The community has evolved from supporting stateless apps to supporting stateful applications (and containerized storage), serverless applications, batch jobs, and custom resource definitions for vertical-specific application profiles.
  • IMPROVING SECURITY: A year ago, there were concerns about Kubernetes security. Since then, the community has responded with better encryption and management of secrets, and improved Kubernetes-specific container capabilities like CRI-O and OCI standardization. In addition, the security ecosystem has embraced new innovations around continuous monitoring, scanning, and signing images within the container registry. 
  • IMPROVING PERFORMANCE: Red Hat (and others) have started the Performance SIG in the Kubernetes community to focus on high-performance applications (HPC, Oil & Gas, HFT, etc) and profiling the required performance characteristics of these applications in containerized environments.
  • IMPROVING THE DEVELOPER EXPERIENCE: One of the themes of KubeCon 2017 (Berlin) was the developer experience, and in just a few months we’re seeing standardization around the Helm format (for application packaging), Draft to streamline application development, and Kubeapps to simplify getting started with apps from a self-service catalog. We’ve also seen Bitnami build a parallel to their existing container catalog with applications that are packaged specifically for OpenShift’s security model of non-root containers (vs. the Docker model of root-enabled containers).

All-in-all, it was an amazing evolution of the Kubernetes community, from ~1,000 people at KubeCon 2016 in Seattle to over 4,300 at KubeCon 2017 in Austin. 2018 will bring increased competition and innovation, as well as many more customers running production applications on Kubernetes – both in their own data centers and in the public cloud.

