From Silos to Services: Cloud Computing for the Enterprise


June 10, 2018  9:40 PM

Looking Ahead – Less Focus on Nodes

Brian Gracely
CoreOS, Immutable infrastructure, Lambda, operators

Back in 2013, I was introduced to a young kid who was the CTO of a startup called CoreOS. His name was Brandon Phillips, and he told us that he was building a new type of Linux that had a very small footprint, was focused on containers, and was automatically updated for security. The founders at CoreOS wanted to treat Linux hosts like tabs in a browser: they could be started or stopped very easily and would always be secure. In essence, they wanted to take the SysAdmin complexities out of the hosts. At the time, the concept didn't really connect with me, because I was still thinking about hosts as hosts. I couldn't quite grasp the idea of not having to actively manage (and upgrade) core infrastructure. Maybe it was because he was still calling it "Linux" instead of some other term.

Fast forward to late 2014, and AWS introduced a new service called Lambda, which would eventually become known by the broader concept of "Serverless". Lambda promised to let developers just write code, while AWS managed all of the underlying infrastructure resources, including security updates and scalability.
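
To make the "just write code" idea concrete, here is a minimal sketch of what a Lambda-style function looks like in Python. The handler signature matches AWS Lambda's Python runtime; the greeting logic and the "name" payload field are purely illustrative.

```python
import json

def handler(event, context):
    """Entry point invoked by the Lambda runtime for each event.

    'event' carries the trigger payload (API Gateway request, S3
    notification, etc.); 'context' exposes runtime metadata such as the
    remaining execution time. Everything below the handler -- servers,
    patching, scaling -- is the cloud provider's responsibility.
    """
    name = event.get("name", "world")  # illustrative payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, " + name}),
    }
```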

In November 2017, AWS introduced a new service called "Fargate", which attempts to abstract the node-level services under the AWS Elastic Kubernetes Service (EKS) and Elastic Container Service (ECS).

In December 2017, Microsoft introduced the Virtual Kubelet, which makes many physical nodes look like a single virtual node.

In May 2018, CoreOS open sourced the Operator Framework, which will include Operators for the elements of a Kubernetes platform, as well as embed Day 2 operations capabilities for applications.

All of these evolutions and new services (along with several others) highlight an emerging trend that is taking the automation and operations of nodes to a new level. Not only are they creating "immutable infrastructure" (pets vs. cattle), but they are also hiding many of the administrative tasks needed to manage every aspect of node-level computing. Some of these new capabilities will be valuable to developers (e.g. writing Lambda functions), while others (e.g. Operators) will be valuable to both operations teams and application ISVs.

These new technologies don’t eliminate the concepts of the operating system, or the underlying security capabilities, but they are focused on simplifying how they are managed.

While these new concepts may or may not immediately apply to your existing applications, they may apply to new applications or operational models that will alter how developers or operators do their day-to-day tasks.

May 30, 2018  8:20 PM

The Many Choices for Deploying Applications to Kubernetes

Brian Gracely
containers, DevOps, Kubernetes, OpenShift, operators, Service Broker

This past week, on the PodCTL podcast (@PodCTL on Twitter), we received a question from a listener about a frequent point of confusion: with all the different tools that exist for deploying a containerized application to Kubernetes, how is someone supposed to know which one to choose?

Tools are an interesting space in the open source world. No matter what the broader project is, two things are always true: [1] People love their tools and like them to be specific to their needs, [2] People love to build new tools, even when a previous tool already did about 75-90% of what they put into their new tool. And since tools often sit outside the scope of the main project, sprawl happens easily without disrupting the main project.

In the case of Kubernetes, there is actually some reason why there are many tools available to help deploy applications. In the simplest terms, it's because many different types of people or groups interact with Kubernetes. Kubernetes co-creator Joe Beda discussed this at a recent KubeCon, and we discussed it on PodCTL episode #28. There are developers that deploy applications to a Kubernetes cluster, and operators that also deploy applications to a Kubernetes cluster. Some applications are deployed entirely within the Kubernetes cluster, while others have to interact with services or applications that reside outside the cluster. These various tools and frameworks, from Helm Charts to OpenShift Templates to Service Brokers to Operators, are all focused on the Day 1 and Day 2 aspects of getting containerized applications onto a Kubernetes cluster.
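
As a point of reference for what all of these higher-level tools ultimately produce, here is a hedged sketch of a "Day 1" deployment done directly against the Kubernetes API with the official Python client. The image name, labels, and namespace are placeholders; Helm, Templates, and Operators end up submitting similar objects on your behalf.

```python
from kubernetes import client, config

def deploy_app():
    """Create a minimal Deployment the way a chart or template eventually
    would: by submitting an object to the Kubernetes API server."""
    config.load_kube_config()  # assumes a local kubeconfig; in-cluster code would use load_incluster_config()

    container = client.V1Container(
        name="demo-app",
        image="example.com/demo-app:1.0",  # placeholder image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_app()
```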

But beyond those tools (and more) is an emerging set of tools that are focused on trying to make it simpler for a developer to write an application that is Kubernetes-aware. Tools such as these are now beginning to be lumped into a category called "GitOps".

It's important to understand that the earlier tools used with Kubernetes tended to be replacements or augmentations for existing config-management tools such as Chef, Puppet, Ansible, Salt, and Terraform. The newer generation of tools, such as the Operator Framework, are more advanced, as they take into consideration the context of the application (deployments, upgrades, failure scenarios) and leverage the native APIs within Kubernetes to align with its deployment models. Expect to see more ISVs begin to package their applications as Operators, as it makes it easier to "embed" operational knowledge, and it gives them a consistent way to deploy their application into a Kubernetes cluster running in any private cloud or public cloud environment.
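
A rough sketch of the core loop an Operator runs, again using the Python client for brevity (real Operators built with the Operator Framework are typically written in Go with the operator-sdk). The "databases" custom resource and its group/version are hypothetical.

```python
from kubernetes import client, config, watch

GROUP, VERSION, PLURAL = "example.com", "v1alpha1", "databases"  # hypothetical CRD

def reconcile(resource):
    """Compare the declared spec to actual cluster state and converge.

    A real Operator would create StatefulSets, run upgrades, or take
    backups here -- the 'operational knowledge' being embedded."""
    name = resource["metadata"]["name"]
    print("reconciling %s -> spec: %s" % (name, resource.get("spec")))

def main():
    config.load_kube_config()
    api = client.CustomObjectsApi()
    w = watch.Watch()
    # Watch add/modify/delete events for the custom resource and reconcile each one.
    for event in w.stream(api.list_namespaced_custom_object,
                          GROUP, VERSION, "default", PLURAL):
        reconcile(event["object"])

if __name__ == "__main__":
    main()
```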


May 28, 2018  6:08 PM

The Digitally Transformed Business and Ecosystem

Brian Gracely
Business model, containers, culture, DevOps, Digital transformation, Kubernetes

A few weeks ago I had the opportunity to speak at the Gateway 2 Innovation event in St. Louis. For a regional event, it drew a very large audience (1200+) and a diverse set of speakers and topics of discussion. Since it was an executive-level event, I was asked to try to connect the dots between a technology topic (e.g. Containers, DevOps) and a business framework, to help the attendees explain it to their teams or take actions to influence a business challenge.

Since we were in St. Louis, and not Silicon Valley, I wanted to make sure that we could take a pragmatic approach, instead of the normal unicorns and rainbows – hence my contrast between the colorful skyline and the muddied waters of the mighty Mississippi River.

Source: Brian Gracely, Red Hat (2018)

The framework of the talk was to compare a traditional business structure/organization, with the evolution of how today’s disruptors see opportunity in the established models.

Source: Brian Gracely, Red Hat (2018)

 

As we begin to compare the traditional structure to a more digitally transformed business, we begin to see several changes have occurred:

  • Marketing is often distributed (omni-channel) and optimized for each specific channel (Web SEO, Mobile App, etc.)
  • The locality of physical stores, and the value of physical proximity to customers, can be displaced by the Internet.
  • Partners and Suppliers have often merged or are served through a smaller number of entities.
  • Product creation becomes split between in-house creation and those things that can be outsourced, obtained via OEM/ODM partnerships, or are created through open communities.

In addition to the transformation of the end-to-end value-chain, the paths and availability of information and feedback loops evolve. The advantages gained in the traditional model through information asymmetry begin to go away and other forms of differentiation will need to be created.

Source: Brian Gracely, Red Hat (2018)

The next area of focus was looking at how to map the evolution of these supply-chains to the underlying technologies, allowing the audience to begin to connect their technology initiatives to the challenges and changes facing the business.

Source: Brian Gracely, Red Hat (2018)

We finished the talk by looking at an example of a Digitally Transformed (and continuing to transform) industry, around all aspects of online travel. As you can see in the diagram, you can start mapping the companies to the “newer” diagrams and see the new interactions within the industry. Many of these companies have spoken publicly about how they are using Containers, Kubernetes and DevOps in production.

Source: Brian Gracely, Red Hat (2018)


May 1, 2018  8:39 PM

Understanding Kubernetes

Brian Gracely
containers, Digital transformation, Docker, Interop, Kubernetes, Microservices

This week at Interop 2018 in Las Vegas, I have the opportunity to speak on the topic of "Understanding Kubernetes". The Kubernetes ecosystem has grown so large over the last few years that it's sometimes valuable to step back and look at not only how this community evolved, but also what basic problems the technology solves. In addition to that, I'll spend some time explaining the basics of what is included in Kubernetes and what is NOT included. This is an area that is often confusing, as many people just assume that the Kubernetes project includes an SDN, storage, container registry, monitoring, logging, and many other elements. The confusion often arises because the CNCF (Cloud Native Computing Foundation), which governs the Kubernetes project, also governs a number of other projects that address those additional requirements. The CNCF represents those projects in its Cloud Native Landscape.

Given the amount of time allocated for this talk, we just focused on the basics of how Kubernetes interacts with some of those core elements (e.g. registry, networking, storage, etc.). For a much broader discussion of how the newest projects interact (e.g. Prometheus, Istio, Grafeas, OpenWhisk), it's best to keep up with all the things happening at KubeCon this week in Copenhagen (and past KubeCon events).


April 30, 2018  10:47 PM

Kubernetes as the Unified Application Platform

Brian Gracely
ai, Big Data, Cloud Foundry, Docker, HPC, iot, Kubernetes, OpenShift

Three years ago, I was working as a technology analyst and was asked to do a technical evaluation of a number of application platforms – at the time it was the intersection of mature PaaS platforms and emerging CaaS platforms.

At the time, most of these platforms were built on fragmented, homegrown orchestration technologies that packaged applications in containers and then made sure they would operate with high availability. Most of these platforms were based on some element of open source, but (for the most part) they all differed in how the applications needed to be packaged and how the applications would be orchestrated on the platform. Docker was just beginning to gain wide acceptance as a packaging standard for applications, and the market was heavily fragmented between container schedulers (Cloud Foundry/Diego, Kubernetes, Mesos, Docker Swarm, Hashicorp Nomad and a few proprietary others). The platforms all had limitations – from proprietary extensions, to limited language or framework support, to cloud deployment options.

At the time, the Cloud Foundry ecosystem had the largest following, but the platform was limited to only supporting 12-factor stateless apps and didn't yet support applications packaged using docker containers. Google had just open sourced the Kubernetes project, and Red Hat OpenShift v3 was the only commercial platform to include Kubernetes for orchestration, after replacing the homegrown packaging and orchestration technologies of OpenShift v2 with docker and Kubernetes. Given the immaturity of Kubernetes and Google's lack of experience with public open source projects, it was hard to imagine the growth that would happen to Kubernetes and that community over the next 3 years.

Fast forward less than one year and my perspectives were beginning to change. I went from believing that "Structured" platforms would dominate Enterprise adoption, to seeing that "Composable" platforms were winning in the market. Highly opinionated ("Structured") platforms were only able to address ~5% of customer applications, while more flexible ("Composable") platforms could not only address those 5% of new applications, but also large percentages of migrations for existing applications. This flexibility led to significantly better ROI for companies adopting these platforms. And recently released market numbers show that the market is choosing Composable platforms, based on Kubernetes, at more than a 2:1 ratio vs. other platform architectures over the past 3 years.

A Unified Platform for Any Application

Fast forward three years and it has become clear that Kubernetes is positioned to be the unified application platform for many types of applications, deployed across hybrid-cloud environments. These three years have not only seen Kubernetes mature, but have also seen the industry begin to acknowledge that the division between PaaS and CaaS was artificial and unnecessary. OCI-compliant containers are proving to be the dominant standard for application packaging, either directly by developers, or indirectly via CI/CD pipelines or integrated build features within application platforms. And each day, companies are moving new types of applications into production on Kubernetes.

Source: Brian Gracely (2018)

It took a little while for the market to realize that Kubernetes application platforms could do more than just new cloud-native applications, but now they are realizing that containers are also a great vehicle for migrating existing (stateful) applications as well. For many companies, this not only provides a modernization path for 60-80% of their application portfolio, but it also unlocks the ability to significantly reduce costs from previous infrastructure decisions (e.g. virtualization, lack of automation, etc.). Beyond those applications, which drive business-critical functions today, we're seeing new classes of applications being deployed on unified Kubernetes platforms – from Big Data to IoT to Mobile to HPC to Serverless. And the recently released "Custom Resource Definitions" and "Open Service Broker" extensions are expected to unlock a whole new set of vertical-market opportunities, as well as integration with emerging public cloud services such as AI/ML.
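
To illustrate what a Custom Resource Definition enables, here is a small, hedged sketch that submits an instance of a hypothetical "Database" resource through the Kubernetes API with the Python client. The group, kind, and spec fields are invented for the example; once a CRD registers the type, the API server stores and serves it like any built-in object, which is what lets vertical-market extensions and Operators plug in.

```python
from kubernetes import client, config

def create_database_instance():
    """Create an instance of a hypothetical 'Database' custom resource."""
    config.load_kube_config()
    database = {
        "apiVersion": "example.com/v1alpha1",  # hypothetical group/version
        "kind": "Database",
        "metadata": {"name": "orders-db"},
        "spec": {"engine": "postgresql", "replicas": 3},  # illustrative fields
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="example.com", version="v1alpha1",
        namespace="default", plural="databases", body=database,
    )

if __name__ == "__main__":
    create_database_instance()
```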

As the operational experience with a breadth of application classes on Kubernetes application platforms grows, these learnings will soon be codified and automated as native services within the Kubernetes platform. This will not only enable applications to be consistently deployed across any cloud environment, but will deliver as-a-Service experiences for developers that are not dependent on a specific cloud platform. Driving consistent Application Lifecycle Management across any cloud environment will significantly increase the efficiency and agility for both developers and operations teams.

AND Better Operations

While the developer and application communities have gotten onboard with Kubernetes application platforms, the ability of operations teams to consolidate many types of applications with a consistent set of operational tools (automation, networking, storage, monitoring, logging, security, etc.) is a huge benefit to their businesses. Not only are these operational tools based on lower-cost open source software, but they are also driven by broad communities of experience and expertise. The ability to learn from the community about a broad set of deployment, upgrade and Day-2 operational scenarios will help accelerate the learning curves of all operational teams, reducing their costs and time to expertise. Application platforms based on multiple orchestration technologies drive up cost and complexity for operational teams by preventing them from standardizing on tools and infrastructure, in addition to driving up expenses in hiring and training experienced personnel.

The new standard for operational teams will be to significantly reduce the gap between the speed and efficiency of current private cloud (data center) operations and similar experiences in the public cloud. The growing use of immutable infrastructure, from the OS to the platform, will be a key part of closing that gap.

Standard Foundations, Consistent Operations

In the world of security professionals, one of the most significant recent challenges has been the decomposition of the security perimeter. New devices, new cloud services, and new working patterns have forced security teams to drive consistent security policies and processes from the data center edge to every operating environment (office, branch, teleworkers, partners, cloud).

For application platforms, the same challenges are being faced as more and more companies choose to leverage both private cloud and public cloud resources to deliver on business goals. This decomposition of the data center, from a single operational model to a hybrid cloud operational model, will be a critical success factor for many IT organizations.


April 22, 2018  11:06 AM

Being a Technical Advocate / Evangelist

Brian Gracely
Blogs, cloud, Github, Open source, podcasts, YouTube

Back when I started working for technology vendors, the landscape was much different than it is today:

  • Vendors tended to focus on a specific technology area; hence their portfolio was primarily made up of various sizes and price-points of the same technology (e.g. security, database, storage, networking, etc.).
  • The ecosystem consisted of Vendors – Distributors/Resellers – Customers. Everything originated from the vendors (e.g. products, documentation, reference architectures). The flow of information was primarily a one-way path, with only the largest customers giving direct feedback (and buying influence) back to vendors.
  • Blogs, podcasts, free video distribution and GitHub didn’t exist. Meetups weren’t independently organized. Most conferences were very large and happened once or twice a year.

During that time, the people most responsible for the intersection of technology knowledge, content creation and interacting with the marketplace were "Technical Marketing Engineers" (TMEs). Vendor or reseller "Systems Engineers" also did it, but they also carried a sales quota, so they often didn't have much free time to create content. The TME job was a strange combination of technology expert, jack-of-all-trades communicator and content creator, and a role that never fit neatly into an organizational chart. The oxymoronic platypus combination of "Technical" and "Marketing" meant that the production engineering teams never really thought they did engineering, and the product marketing team never thought they had proper marketing skills. And TMEs usually needed lab equipment to be able to validate features, create demos, and write reference architectures, but ownership of lab funding was always a hot potato topic when annual budgets came around. The engineering teams would fund them if they could get some DevTest cycles out of them, and Marketing teams always wanted the work done without the burden of taking any dollars away from their typical marketing spend. This strange dichotomy of responsibility, funding and expectations led to frequent shuffling in org charts, and frustration about career paths.

Fast forward a decade+ and the markets changed significantly:

  • There is still quite a bit of information that flows from Vendors – Resellers – Customers, but there is now a ton of independently created content that originates from communities of people that share a common interest in various technology subjects. This could be independent consultants, independent developers, industry analysts, bloggers/podcasters, resellers, customers or any number of other sources.
  • The rise of blogs, podcasts, free video distribution and GitHub makes sharing and distributing ideas and code remarkably simple. People with knowledge can come from anywhere, with very little investment other than their passion and time.
  • Whether it's open source software, free tiers of public cloud services, or vendor trials, the ability to get access to technology has never been easier. The need for massive, hardware-centric labs is a thing of the past.
  • More and more technology is being created in open source communities, so the vendors have to be much more attuned to the input and demands of these communities.
  • These new technologies are emerging much faster than before. Keeping up with the new technologies and changes is proving to be more and more difficult for many companies.

With these changes taking place, companies decided that they needed to create some roles that would be more involved with these communities and customer challenges. This created the rise of "Evangelist" and "Advocate" roles. Some of these roles focus on operations-centric technologies, while others focus on developer-centric technologies. Just like the mythical Full-Stack Engineer, it is very difficult to find many people who can be an Advocate/Evangelist for both operations and developer interests.

The functions of an Advocate/Evangelist can vary significantly, based on experience, skills and interest of the hiring company. In general, they will perform these functions:

  • Have deep technical skills in a specific area, or group of technologies
  • Create presentation material and provide talks at industry events (conferences, meetups, etc.).
  • Create technology demonstrations and deliver them at industry events (conferences, meetups, etc.).
  • Have the ability to speak to a wide variety of operators, developers, and prospective customers about a wide variety of topics related to their focus domain (and sometimes others).
  • Create on-going content for blogs, podcasts, videos, and demos that are widely shared.
  • Be willing and able to travel to events around the country or around the world.
  • Be able to work on vendor-centric activities at conferences and meetups (e.g. staffing the booth at tradeshows).
  • Be able to join customer calls with account teams to be a subject-matter-expert.

And like TMEs, these roles can be great for people that enjoy wearing many hats.

  • Travel to interesting cities
  • Meet lots of interesting people
  • Work on leading-edge (and often bleeding-edge) technologies
  • Educate and inspire people

And like TMEs, these roles can be very difficult to explain and sustain, as they don't always create obvious metrics or results that can be immediately tracked back to ROI-justified business metrics.

  • Travel is (typically) a constant part of the role. And the travel can often involve many, many hours of journey for very few hours of interaction. This can be hard on your health and your family.
  • The cost of the role can become quite expensive when on-going T&E is included. It's not unusual to incur $30-100k in expenses, depending on travel distances (international), conference attendance (tickets) and the frequent lack of planning (e.g. "we need you in Europe in 2 days").
  • It’s not always clear where the roles report in an organization. It could be sales (e.g. “Field CTO”), marketing, product groups, office of CTO, etc. Each type of group may have different expectations and success metrics.

Having been an Advocate/Evangelist and managed teams of both TMEs and Advocate/Evangelists, I can say that dealing with all these dualities can be very complicated. How do you justify budgets for teams that don't carry a direct sales quota or marketing funnel? How do you create a career path for people/groups that are appreciated within a given organization? How do you prevent burnout of people that are in high demand and often don't have enough time to create content/demos/technology? How do you keep them motivated to stay at your company when other companies with newer technologies come along?

I don’t have the answers to these questions. It was often “it depends”, based on lots of individual reasons and contexts. It was often “you need to control your career path, but I’ll provide you the opportunities to learn the necessary next-skills to get there”. It was often “I know they love your work, but apparently not enough to recognize it with the proper budgets.”

These roles, whatever the name, can be great for a few years at a time. The trick is to be aware that they are fundamentally “overlay” roles that are put in place until the knowledge of a few can be transferred to the knowledge of many. The other trick is knowing when that transition is coming and planning your next steps accordingly. Maybe it’s getting on the next wave (pioneering), and maybe it’s being part of the expansion and stabilization of that trend (settler).


March 31, 2018  8:57 PM

Adding Artificial Intelligence into Software Platforms

Brian Gracely
ai, Artificial intelligence, Machine learning, ml, Zugata

Image Source: FreeImages

It’s difficult to go a day in the tech industry without hearing a prediction (e.g. here, here, here, here, and many, many more) about Artificial Intelligence or Machine Learning. Jobs for these skills are in high demand and companies in nearly every industry are trying to figure out how to embed these capabilities into their products and platforms before their competition.

The question for many CIOs or Product Managers is, “How do we add AI into our platforms?”. Do they hire a few PhDs or Data Scientists? Do they try using one of the AI/ML services from public cloud providers like Google, AWS or Azure? Or is there some other path to success?

I recently asked this question of Srinivas "SK" Krishnamurti (@skrishna09; Founder/CEO of Zugata), as his company had just announced a new AI service to augment their existing SaaS platform. I wanted to understand the complexity of the technology, the difficulty in finding engineering talent, and how to make the product more attractive with the embedded AI capabilities.

The first thing we discussed was what types of business (or customer) problems AI/ML could potentially solve. SK highlighted that it was important to understand who would be using the software and how much experience or expertise could be assumed about their use-cases. Once the domain was well understood, it was possible to determine if or how AI/ML should be used.

The next thing we discussed was how AI/ML advances would be perceived in those use-cases. Would they create a measurable difference from what could be manually accomplished now, and would the difference be perceived as valuable enough to justify the investment?

Once we got past the business value and use-cases, we began to focus on how to find the right staff to start the AI/ML process. SK shared with me that their journey lasted well over a year before they began to feel confident that their efforts would be valuable. This included hiring talent, looking at data models, building the models, and the long process of training the models with data. He said that the longest amount of time was in training the models, as they had to frequently ask themselves if they were biasing the system to get the answers they believed were needed vs. the system coming to those answers by itself.

Finally, we talked about the challenges of building AI/ML models that were influencing non-human systems (e.g. electronic trading or IT Observability systems) vs. systems that would directly impact human decisions (e.g. hiring, firing, evaluating emotional state, etc.). He said that this added yet another layer of complexity into their analysis of their models, as again they needed to make sure that a broad set of scenarios were being properly evaluated by the system.
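
One lightweight way to approach the bias question SK raised is to evaluate a trained model separately on each subgroup of interest rather than only in aggregate. The sketch below uses scikit-learn with synthetic data, so the features, labels, and group attribute are stand-ins for whatever a real system would be trained on.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 1,000 samples, 5 features, a binary label,
# and a group attribute (e.g. a department or demographic segment).
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
group = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Compare performance per group -- a large gap suggests the model (or the
# training data) handles one group's scenarios very differently.
for g in np.unique(g_test):
    mask = g_test == g
    acc = accuracy_score(y_test[mask], model.predict(X_test[mask]))
    print("group %d: accuracy %.2f" % (g, acc))
```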

It was clear to me that there is no single way to add AI/ML into a software platform, but the guidelines and guidance from SK may prove to be valuable to the readers as they begin to explore their journey to improve their software platforms. I’d love to hear of any experience that the readers have about their own systems.


March 31, 2018  6:25 PM

Can Open Source and IPOs Fly Together?

Brian Gracely
Cloudera, Docker, hortonworks, IPO, Open source

Image Source: FreeImages

There was some buzz in the industry about a week ago as Pivotal finally announced their intention to go public by filing their S-1 document with the SEC. This has long been rumored to be Pivotal's plan, since they have taken $1.7B in funding (both from companies within the Dell/EMC/VMware family, as well as outside investors such as Ford, GE and Microsoft). This was the first time that Pivotal had to publicly disclose many aspects of their business (customers, revenues, costs, breakdown of the business mix, etc.), with more detail than was provided when their numbers were reported by EMC back in 2015. We previously looked into those numbers in the context of the changing landscape in the PaaS and CaaS marketplace.

There has been no timeline established for when Pivotal might attempt an IPO, and there are also rumors that Dell or VMware may take actions to avoid an IPO. Lots of speculation happening before the traditional Dell World (previously EMC World) event in May.

But all of that aside, it brings up the question of how compatible today's "open source centric" startups are with eventually growing successful companies to IPO. Pivotal's James Watters has argued that Pivotal isn't an open source company. Other recent IPOs from companies that were open source centric, including Hortonworks ($HDP), Cloudera ($CLDR) and MongoDB ($MDB), have also gone the route of "open core" business models with relative success. All took large levels of VC funding (Hortonworks – $250M; MongoDB – $311M; Cloudera – $1B), but all have been able to grow revenues since their IPO. And in most cases, those companies were primarily focused on being software-centric companies, while Pivotal has historically been more of a services-centric company that also sold software.

Venture capitalist and startup founder Joseph Jacks believes that there could be many more IPOs on the way. He tracks Commercial Open Source Software companies with more than $100M in revenues (note: private companies' revenues cannot be verified, as they are not publicly published) and believes this growing list is an indication that open source is becoming more mainstream in Enterprise accounts.

Of the companies on the list, more of them have been acquired prior to IPO than have completed the IPO process. When this happens, especially if the acquiring company is not strong in open source, the original company's technology often no longer remains open source. It is often very difficult to merge proprietary and open source cultures and development models, as we recently saw when DellEMC eliminated their open source focused {code} team.

Given the uniqueness of the Pivotal situation (high VC funding levels, Dell ownership levels), it’s not clear if their outcome – IPO or acquisition (or other) – is indicative of more open source centric IPOs in the future. We may have to wait for another 1-2 IPO declarations from that list before we can see any new trends emerging.


March 10, 2018  11:47 AM

Understanding the Variety of Kubernetes Roles and Personas

Brian Gracely
Applications, containers, DevOps, Kubernetes, Services

The Road to More Usable Kubernetes – Joe Beda

Depending on who you ask, you’re very likely to get many different answers to the question, “Who is the user or operator of Kubernetes?”. In some cases, it’s the Operations team running the Kubernetes platform, managing scale, availability, security and deployments in multiple cloud environments. In some cases, it’s the Development team interacting with manifest files (e.g. Helm or OpenShift Templates) for their applications, or integrating middleware services (e.g. API Gateway, MBaaS, DBaaS). Still other cases have blended DevOps teams that are redefining roles, responsibilities and tool usage.

Since Kubernetes orchestrates containers, and containers are technology that is applicable to both developers and operators, it can lead to some confusion about who should be focused on mastering these technologies.

This past week, we discussed this on PodCTL. The core of the discussion was based on a presentation by Joe Beda, one of the originators of Kubernetes at Google, that he gave at KubeCon 2017 Austin. While Joe covered a broad range of topics, the main focus was on a matrix of roles and responsibilities that can exist in a Kubernetes environment (see matrix image above) – ClusterOps, ClusterDev, AppOps and AppDev. In some cases, Joe highlighted the specific tools or processes that are available to (and frequently used by) each function. In other cases, he highlighted where this model intersects and overlaps with the approaches outlined in the Google SRE book.

Some of the key takeaways included:

  • Even though Kubernetes is often associated with cloud-native apps (or microservices) and DevOps agility, there can be very distinct differences in what the Ops-centric quadrants focus on vs. the App-centric quadrants.
  • Not every quadrant is as mature as the others. For example, the Kubernetes community has done a very good job of providing tools to manage cluster operations. In contrast, we still don't have federation-level technology to allow developers to build applications that treat multiple clusters as a single pool of resources.
  • Not every organization will assign these roles to specific people or groups, and some roles may be combined or overlap.
  • There is still a lot of room for innovation and new technologies to be created to improve each of these areas. Some innovation will happen within Kubernetes SIG groups, while others will be created by vendors as value-added capabilities (software or SaaS services).

It will be interesting to watch the evolution of roles as technologies like Kubernetes and containers begin to blur where applications intersect with infrastructure. Will we see it drive faster adoption of DevOps culture and SRE roles, or will a whole new set of roles emerge to better align with the needs of rapid software development and deployment?


February 28, 2018  10:42 PM

The Kubernetes Serverless Landscape

Brian Gracely
containers, events, FaaS, Fn, Functions, Kubernetes, Lambda

In the traditional world of IT tech, there are currently two trends that are like rocket ships – Kubernetes and Serverless. There's also lots of buzz around AI, ML, Autonomous Vehicles, Blockchain and Bitcoin, but I don't count those among the more traditional IT building blocks.

Kubernetes and Serverless (the AWS Lambda variety) both launched into the market within a few months of each other, towards the end of 2014 and early 2015. They were both going to change how newer applications would get built and deployed, and they both promised to reduce the complexities of dealing with the underlying infrastructure. Kubernetes is based on containers, and Serverless (at least in the AWS Lambda sense) is based on functions (and some undisclosed AWS technologies).

I started following the serverless trend back in the Spring of 2016, attending one of the early ServerlessConf events. I had the opportunity to speak to some of the early innovators and people that were using the technology as part of their business (here, here, here). Later I spoke with companies that were building serverless platforms (here, here) that could run on multiple cloud platforms, not just AWS. At this point, the Kubernetes world and Serverless worlds were evolving in parallel.

And then in early 2017, they began to converge. I had an opportunity to speak with the creators of the Fission and Kubeless projects. These were open source serverless projects that were built to run on top of Kubernetes. The application functions would run directly in containers and be scaled up or down by Kubernetes. The two rocket ships were beginning to overlap in functionality. Later, additional projects like Fn, Nuclio, OpenFaaS, and Riff would also emerge as open source implementations of serverless on Kubernetes. And OpenWhisk would soon add deeper integration with Kubernetes. As all of this was happening in 2017, I was wondering if a consensus would eventually be reached so that all these projects wouldn't just be fragments of the same market space. I wondered if the Kubernetes community would provide some guidance around standard ways to implement certain common aspects of serverless or functions-as-a-service (FaaS) on Kubernetes.
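
Conceptually, most of these frameworks do something similar: take a small function from the developer, wrap it in a lightweight HTTP handler inside a container image, and let Kubernetes scale that container like any other workload. The sketch below is a simplification using only the Python standard library; each project has its own packaging conventions and handler signatures.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle(payload: bytes) -> bytes:
    """The developer's function -- everything else here is framework scaffolding."""
    return b"Hello, " + (payload or b"world")

class FunctionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        result = handle(body)          # invoke the user function per request
        self.send_response(200)
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)

if __name__ == "__main__":
    # In a real FaaS-on-Kubernetes setup this process runs inside a container,
    # and the platform handles routing, scale-to-zero, and event triggers.
    HTTPServer(("", 8080), FunctionHandler).serve_forever()
```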

This past week, the Serverless Working Group in the CNCF released a white paper and some guidance about event sources. While they didn’t come out and declare a preferred project, as they have with other areas of microservices, they did begin to provide some consistency for projects going forward. They also established a working group that represents a broad set of serverless backgrounds, not just those focused on Kubernetes.
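
Part of that guidance revolves around describing event sources in a common, portable format. As a small illustration of the kind of structured event envelope being discussed, here is a sketch built as a plain Python dictionary; the field names follow the CloudEvents-style conventions the working group went on to standardize, and the event type, source, and data are invented for the example.

```python
import json
import uuid
from datetime import datetime, timezone

# An illustrative, CloudEvents-style event envelope: metadata about the
# event travels alongside the payload so any FaaS runtime can route it.
event = {
    "specversion": "1.0",
    "id": str(uuid.uuid4()),
    "type": "com.example.object.created",   # hypothetical event type
    "source": "/storage/bucket-42",          # hypothetical event source
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": {"key": "reports/2018-02.csv"},
}

print(json.dumps(event, indent=2))
```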

We discussed all of these serverless Kubernetes projects on PodCTL this week. We highlighted where the various projects are making good progress, as well as some areas where the projects still have a long way to evolve before they will be widely adopted.

btw – there's an interesting debate happening on Twitter these days between the serverless/Kubernetes crowd and the serverless/Lambda crowd. If you want to keep up, follow where @kelseyhightower got started a couple of days ago (and follow the mentions and the back and forth).


