It’s been a few months since I posted anything on this blog, and especially anything having to do with Kubernetes. Part of that had to do with work projects and other technology projects, but it also had to do with moving (and updating) all of the content for the weekly Kubernetes-centric podcast that I host – PodCTL.
[Note: If you’re new to Kubernetes and wonder why I’d choose such an odd name, it’s a play on the CLI tool used by Kubernetes, called “kubectl”. By the way, there is an interesting debate around how to pronounce this tool (here, here), and then of course the CNCF had to weigh in on an “official” pronunciation so the entire community wouldn’t diverge into chaos or Game of Thrones. I mention this because we go out of our way on PodCTL to cater to all potential pronunciations of the show’s title.]
If you’re interested in Kubernetes, I’d suggest subscribing to the show. It’s available via RSS Feeds, iTunes, Google Play, Stitcher, TuneIn and all your favorite podcast players. The show focuses on Containers, Kubernetes, technologies around the CNCF communities, Cloud-native application development and a number of associated technologies.
With the move to the dedicated website, we’ve made it easier to:
- Search for shows or topics
- Have a more flexible player via the web (change speeds, skip forward/back)
- View all the past shows (all the way back to 2017)
If you’re interested in Kubernetes, the show is a mix of technology basics and more advanced design considerations. It includes technology discussions from across a broad spectrum, with guests that are actively involved in writing the actual code that is enabling Kubernetes.
2018 was a big year for Cloud Computing in terms of industry growth, major acquisitions and mergers, and shifting trends in software usage.
There is no easy way to summarize all the announcements and activities that take place in an entire year, let alone the entire Cloud Computing industry. So I’ll try and narrow it down to a few of the biggest trends and events that not only impacted 2018, but will have long-term impacts into 2019 and beyond.
Gartner IaaS MQ is down to 6 companies (AWS, Azure, GCP, Alibaba, IBM, Oracle)
For years, we’ve watched the largest cloud providers invest billions of dollars (per quarter) in data centers and infrastructure around the world. Those investments are beginning to pay off in a big way, as the Big 6 cloud providers are distancing themselves from the rest of the industry.
The Big 2 (AWS, Azure) are growing really fast
I wrote about the growth of AWS and Azure just a couple months ago. While Q3 was slightly “bumpy” for both companies, they both continue to grow at an astounding rate. This is the direct result of years of investments, as well as a willingness to not be bound to a specific set of technologies.
Big acquisitions around Open Source (Red Hat, GitHub, Hortonworks)
The biggest acquisitions and mergers in 2018 were all open source centric. IBM acquired Red Hat for $34B, Microsoft acquired GitHub for $7.5B, and Hortonworks merged with Cloudera. By themselves, each of the acquisitions addressed critical needs for each company. In addition, they made a bold statement about the importance of open source software, developer ecosystems, and the need to consolidate in order to effectively compete in a market with some very large cloud providers.
Is Open Source licensing at a cross-roads?
While the center of technology innovation has heavily shifted towards open source communities, the economics of cloud computing have been shifting more and more towards public clouds. This is leading some companies that steward popular open source projects (Redis, Confluent) to re-evaluate how they license their projects (e.g. the “Commons Clause”) in a way that prevents public cloud providers from taking the software, offering a managed service, and not contributing back to the project in a meaningful way. This is a change from the more traditional open source licenses, and feedback on the change has been mixed. It will be interesting to see if this is an anomaly, or if more projects adopt this competitive approach.
Can a new CEO improve the Google Cloud Platform?
With the departure of Diane Greene and the replacement by Thomas Kurian (formerly of Oracle), GCP will go into 2019 with a new leader and a potential culture change. Everyone will be watching to see if GCP can figure out how to break through into the Enterprise, and how much change Kurian will need to create in order to move GCP out of 3rd-place.
Kubernetes continues to dominate containers and cloud-native (see: @PodCTL podcast)
As we saw with the growth of the 2018 Seattle KubeCon event, and several major acquisitions (CoreOS, Heptio, Red Hat) that involved Kubernetes, the Kubernetes market is preparing for significant growth in 2019, as well as greater levels of competition.
A few weeks back, the Kubernetes community gathered in Seattle for their annual KubeCon conference. The event is the centerpiece of the Cloud Native Computing Foundation (CNCF), in conjunction with the CloudNativeCon event.
Growth and Breadth of the Community
When KubeCon was held in Seattle in 2016, there were just over 1,000 attendees. In Austin in 2017, the number had grown to 4,500, and by 2018 the attendee list had expanded to over 8,000 people (with a waiting list). The growth was not only seen in the number of attendees, but in the number of pre-show events to train attendees, as well as the number of sponsors and companies displaying their Kubernetes technologies and services.
Kubernetes is trying to balance stability and expandability
At a high level, the Kubernetes project is trying to balance two parallel paths:
- How to make the core as stable (and scalable) as possible?
- How to build extensibility around the core, without destabilizing it, while still allowing a broad set of use-cases?
This creates a challenging set of decisions for the architects of the project, as they want to make sure the technology is stable enough for production use-cases, but also build in enough flexibility to allow for many types of applications to run on Kubernetes. Extensibility has been built around capabilities such as CRDs, CSI, and CNI. These models allow pluggability for both storage and networking, as well as add-on projects such as Istio, Knative and others.
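To make the CRD mechanism a little more concrete, here is a small sketch. A CRD is just a declarative manifest that teaches the Kubernetes API server about a new resource type; once registered, instances of that type can be created like any built-in object. The `backups.example.com` group, `Backup` kind, and all field values below are hypothetical, invented purely for illustration; in practice you would serialize these dicts to YAML and apply them with `kubectl`.

```python
# A minimal CustomResourceDefinition manifest, expressed as a Python dict.
# The group/kind ("example.com" / "Backup") are hypothetical examples.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1beta1",  # the CRD API group as of 2018
    "kind": "CustomResourceDefinition",
    "metadata": {
        # The CRD name must be <plural>.<group>
        "name": "backups.example.com",
    },
    "spec": {
        "group": "example.com",
        "version": "v1alpha1",
        "scope": "Namespaced",
        "names": {
            "plural": "backups",
            "singular": "backup",
            "kind": "Backup",
        },
    },
}

# Once the CRD is registered, users create instances of the new type
# just like built-in resources (Pods, Deployments, etc.):
backup_instance = {
    "apiVersion": "example.com/v1alpha1",
    "kind": "Backup",
    "metadata": {"name": "nightly-db-backup"},
    "spec": {"schedule": "0 2 * * *", "retainCopies": 7},
}

# Sanity check: the CRD's metadata.name must equal plural + "." + group.
assert crd["metadata"]["name"] == (
    crd["spec"]["names"]["plural"] + "." + crd["spec"]["group"]
)
```

This is exactly the extension point that add-on projects and the Operator pattern build upon: the custom resource carries the desired state, and a controller watches for instances and acts on them.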
One area that is critical to this balance is the Operator Framework, which builds upon the extensibility of CRDs, while bringing automated operations and application lifecycle management to a broad set of applications that can run on Kubernetes.
Kubernetes will be less of the focus of KubeCon / CloudNativeCon
For the first four years of the show (KubeCon), Kubernetes has been at the core of every discussion. But as more and more add-on projects begin to make their way into greater levels of maturity within the CNCF (e.g. Istio, Envoy, Knative, Prometheus, etc.), the focus will shift up the stack and onto projects that are closer to the application. We’re already seeing expanded focus on CI/CD tools, Service Mesh, Developer frameworks, Serverless and Registry/Security integrations. While there will always be a focus on the infrastructure and automated operations for Kubernetes, I would expect to see increased focus on improved ways to enable cloud-native applications on Kubernetes.
Cloud-native Application Development is still an evolving space
While the container and the orchestrator (Kubernetes) have become standardized, the development model for applications running on Kubernetes is still evolving. While there are many efforts to make it easier to embed tools in IDEs and create new languages and frameworks (Knative, Draft, Brigade, Ballerina, CNAB, s2i, Buildpacks, ODO, etc.), there has yet to be a consensus around which is best or which will be most widely used by developers. The good news is that many projects are currently exploring this space, and many vendor offerings are emerging as well.
Enterprise Usage of Kubernetes in Production is expected to grow
Since Kubernetes first started shipping (as OSS, Commercial Software and Cloud Services) almost 4 years ago, we’ve seen it adopted by companies around the world and in nearly every industry. As the market moves beyond early adopters and begins to cross the chasm, mainstream adopters want to hear about the challenges and successes of the early adopters. For the last couple of KubeCon events, the number of companies speaking publicly (example) about their adoption has been growing. This means that concerns about adoption will be reduced, as companies now have references that they can reach out to directly.
While Google Cloud Platform has been around for many years (going back to the Google App Engine days in 2008), I’ve been attending Google Cloud events since 2016. Throughout that timeframe, it has often been difficult to understand the messaging, differentiation and overall direction of Google Cloud. On one hand, Google has always been recognized for its technology strengths and ability to scale applications at a global level. On the other hand, Google has never been known to be skilled in person-to-person communication or collaboration, as the core of its business has been automating transactions and person-to-machine (or machine-to-machine) interactions. This person-to-person communication is often considered to be at the core of successful “Enterprise IT” companies, an area where Google Cloud Platform is making some progress but is still struggling to master relative to other public cloud providers (AWS, Azure, Alibaba, Salesforce, Twilio, etc.) or even growing software companies (VMware, Red Hat, ServiceNow, etc.).
So with that backdrop, Google Cloud decided to bring in a new CEO to run the business. Former Oracle executive Thomas Kurian will be replacing Diane Greene as CEO, beginning in January 2019. Before Kurian begins his tenure, here are a few tips and suggestions to address some of the challenges that have kept Google Cloud from succeeding in the past.
- Define the Relationship between Google and Google Cloud. In the past, Google Cloud has highlighted that its DNA and much of its technology come from the parent company Google. When Google Cloud engineers talk about this relationship, they often blur the lines. Many of them came from Google, so they love to highlight all the great technologies they had access to internally in their past roles. But not all of that “greatness” is available to Google Cloud customers, and not every potential customer has a “Googley” culture to go with it. Customers care about what’s available to them, not what’s behind the curtain.
- “Beta” and “Enterprise” don’t mean the same thing. Google (or Alphabet) and Google Cloud share the same brand name. Millions (or billions) of consumers have interacted with Google services. Some of those Google services, which were beloved, got cancelled with little to no notice. Technologists remember this stuff. And as much as the folks at Google Cloud like to say, “that’s not us!”, potential customers have that concern or doubt in the back of their minds. The new CEO really needs to make it clear, maybe with some form of financial guarantee, that Google Cloud services won’t be killed off. It’s a perception problem that Google Cloud needs to address.
- Figure out how to change the rules of the game. When looking at the basic services, GCP does a reasonably good job matching up with AWS (or Azure). In some cases, GCP technology is faster (booting VMs, network latency, etc.). In some cases, GCP pricing is better (e.g. preemptible instances, sustained-use discounts). But none of those advantages were so significant that (most) customers would choose GCP over AWS. GCP is, however, still recognized as being significantly better at AI/ML, and at globally scaling applications. And Google/GCP is very good at making those technologies relatively easy to use, because there is so much AI/ML brainpower behind the scenes at Google. This is the type of technology that might allow a retail company to compete with Amazon, or a smaller company to compete with a large bank. So how could GCP help companies leapfrog to using more AI/ML/BigData services? Maybe it means they need to make the cash cow of AWS (compute & storage) essentially free. Google has so much compute & storage capacity in their data centers that the marginal cost of any additional server or storage must be very close to zero. And data gravity is everything in cloud computing. If the friction to on-board, compute and store the data is so low that getting to the AI/ML goodness is considerably faster, then maybe the rules of the game are changed for a segment of nearly every potential customer’s business.
- Consider big partnerships. Given how far behind GCP is in Enterprise sales, it might be time to consider a partnership with Microsoft/Azure or Oracle. GCP is way ahead of Azure in AI/ML capabilities, and Oracle just can’t seem to figure out how to build a modern cloud. So maybe GCP considers a partnership where they either OEM their AI/ML to Azure, or OEM their entire cloud to Oracle. Both of them have massive Enterprise installed bases that AWS is actively recruiting, and AWS has already locked up a strong partnership with VMware.
Just a few things for the new CEO to consider as he begins to figure out how to improve GCP’s current standing as the 3rd or 4th largest public cloud.
AWS re:Invent is happening this week in Las Vegas, so that means long bus lines for attendees, and dozens of blog posts about new features/services from AWS’ Jeff Barr.
As AWS has grown their portfolio of services from basic infrastructure (compute, storage, networking), to application-services (database, load-balancing, caching & queuing) to data-services (data-warehouse, Hadoop, AI/ML, streaming analytics, etc.), it can be very difficult to keep up with all the trends, announcements and features.
Is Azure catching up to AWS?
As we saw from the recent earnings announcements from both Amazon and Microsoft, the gap between AWS and Azure is closing. Which one is leading at this point depends on which services you count in the “cloud” totals, since Azure spreads its “cloud” revenues across a couple of buckets. Given how competitive the two Seattle giants are with each other (many ex-Microsoft people work for AWS), it will be interesting to see if AWS goes after Microsoft/Azure strengths in their keynote or announcements.
According to recent research from Citi, AWS has 6,224 open job requisitions.
Do AWS customers care about Amazon’s business ventures?
If you’re UPS or FedEx or DHL, announcements like this might be concerning. If you’re ADT or Brinks, announcements like this might be concerning. If you’re one of the giant insurance companies, announcements like this might be concerning. If you’re one of the giant healthcare companies, announcements like this might be concerning. If you’re in the grocery industry, announcements like this might be concerning. If you’re in any aspect of retail, announcements like this might be concerning. If you’re ESPN or FOXSports, announcements like this might be concerning.
AWS’ parent company Amazon obviously has very big ambitions to get into many diverse industries. Do these ambitions concern existing or potential AWS customers, who may find themselves competing directly or indirectly with their IT vendor?
AWS is a long-tail business, dominated by data-gravity
If you haven’t already read the “Cloudability – 2018 State of the Cloud” report, I highly recommend it; the company has a tremendous amount of insight into actual AWS customer usage. At the core of the report is this finding: “out of hundreds of cloud services, only four account for the majority (85%) of spend”. Compute (EC2) is obviously a huge portion of revenue, but the more important service is data storage. As more data flows into AWS services (S3, RDS, etc.), the more sticky those applications become to other AWS services. And since AWS doesn’t charge for inbound data/traffic, but does charge for outbound traffic, the cost of moving out of AWS is significantly higher than the cost of moving in.
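To make the asymmetry concrete, here is a back-of-the-envelope sketch. The rates are assumptions for illustration only (free ingress, and an outbound rate of $0.09/GB that is roughly in line with published internet egress pricing in 2018), not quoted AWS prices, and the 500 TB dataset is hypothetical.

```python
# Rough illustration of data-gravity economics: moving a dataset INTO the
# cloud is free, while moving it back OUT is billed per GB. All rates are
# assumptions for illustration, not official AWS pricing.
INGRESS_PER_GB = 0.00   # inbound transfer is not charged
EGRESS_PER_GB = 0.09    # assumed outbound internet rate, $/GB

def migration_cost(dataset_gb, rate_per_gb):
    """One-time transfer cost for a dataset of the given size."""
    return dataset_gb * rate_per_gb

dataset_gb = 500_000  # a hypothetical 500 TB data warehouse

cost_in = migration_cost(dataset_gb, INGRESS_PER_GB)
cost_out = migration_cost(dataset_gb, EGRESS_PER_GB)

print(f"Cost to move 500 TB in:  ${cost_in:,.0f}")    # $0
print(f"Cost to move 500 TB out: ${cost_out:,.0f}")   # $45,000
```

Under these assumed rates, the exit toll alone is a five-figure line item, before counting re-architecture work, which is exactly why data gravity makes AWS workloads sticky.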
Is serverless a strategy or a feature?
When AWS announced Lambda in November 2014, it was considered an unusual service. But it created the concept of “serverless” as a distinct service – “distinct” in that earlier public cloud “PaaS” services (e.g. Heroku, Google App Engine) had previously abstracted the underlying infrastructure from developers, but with different pricing and scaling models. Since re:Invent 2017, AWS has been pushing “serverless” more and more, beyond just Lambda, extending the concept into things like databases and other scalable services. Now in 2018, there are numerous serverless conferences around the world, and there are many advocates who will tell you that any new application should be built with serverless (or FaaS) technologies. This approach (whether correct or not) is more in the strategic camp.
Others will argue that serverless (or FaaS) is a feature or service-pattern of a broader application platform. This is where many in the Kubernetes camp are leaning with technologies like Knative.
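The pricing-model difference mentioned above is worth making concrete. A minimal sketch, using assumed rates (not official AWS pricing): an always-on VM bills by the hour whether or not requests arrive, while a FaaS function bills per request and per GB-second of execution. The traffic profile and all prices below are illustrative assumptions.

```python
# Compare the monthly cost of an always-on VM vs. a pay-per-invocation
# function. All rates are illustrative assumptions, not official pricing.
HOURS_PER_MONTH = 730

# Always-on VM: billed for every hour, idle or not.
vm_hourly_rate = 0.05                 # assumed $/hour for a small instance
vm_monthly = vm_hourly_rate * HOURS_PER_MONTH

# FaaS: billed only when invoked.
requests = 1_000_000                  # assumed invocations per month
price_per_request = 0.0000002         # assumed $/request
gb_seconds = requests * 0.125 * 0.2   # 128 MB memory, 200 ms per invocation
price_per_gb_second = 0.0000167       # assumed $/GB-second

faas_monthly = requests * price_per_request + gb_seconds * price_per_gb_second

print(f"VM (always on):  ${vm_monthly:.2f}/month")
print(f"FaaS (1M calls): ${faas_monthly:.2f}/month")
# With these assumptions, a bursty low-traffic workload is far cheaper on
# FaaS; at sustained high utilization the economics flip back toward the VM.
```

The strategic-vs-feature debate largely hinges on this model: if pay-per-invocation becomes the default way to buy compute, serverless is a strategy; if it only fits bursty workloads, it is a feature of a broader platform.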
Will AWS double-down on an on-premises offering?
Last week we asked our Cloudcast audience what they thought AWS would do to address on-premises applications. Almost two-thirds thought they would make a bigger push. If this happens, does AWS create an offering like AzureStack or Google’s GKE On-Prem, or create a hybrid-cloud offering like Red Hat OpenShift, or do they create a more robust offering that tries to significantly change the rules of on-premises IT?
How will “the Edge” evolve?
When AWS announced both “Greengrass” and enhancements to “Snowball Edge“, they signaled that they wanted to be a significant player as more computing moved to the edge. The use-cases for IoT, edge analytics, video monitoring and many others can benefit from a combination of edge computing and centralized cloud services. But it’s still very early in Edge architectures, so many vendors and cloud providers are all trying to put their stamp on these use-cases.
Which AWS partners should be worried after the keynote?
Unfortunately, this is a running joke every year. During the keynotes, there is always a subset of vendors on the trade show floor, previously “partners”, that quickly move into the competitor category because AWS announces a native service that competes with their current SaaS offering. So who will be on the list this year?
About 4 years ago, I wrote about taking this weird job working on open source software (OSS) at EMC, back before OSS became a thing that every enterprise software company thought it needed to do, and before VCs and acquirers started throwing billions of dollars at it.
One of the details that I didn’t share in that story was how I prepared for the presentation I had to make to the EMC executive staff in order to finalize the funding. At the time, EMC was a pretty stodgy old company, where all the executives wore suits (and often ties) every day, in every meeting. They were asking me to focus on open source software, which isn’t a very suit-and-tie-friendly environment. Part of me was excited about the opportunity, and part of me was seriously concerned that they would quickly allow the suit & tie culture to smother this new thing that ran counter to everything in their culture. So while I spent a good bit of time preparing my presentation, it was a different litmus test that I used to determine if this program would have a chance of succeeding.
I wore jeans and a t-shirt to the presentation. In a room full of suits and ties, I stood out like a sore thumb. I almost didn’t get into the board room, because the admin at the desk thought I was lost. But I needed a way to gauge if they would focus more on my appearance or my story. I needed to see how comfortable they would be with being uncomfortable. Ultimately they listened to the entire presentation and agreed to fund the program, which lasted about 3 years.
As I watched the initial media tour after IBM announced that it intended to acquire Red Hat, I noticed that Red Hat CEO Jim Whitehurst wore jeans in the interviews alongside IBM CEO Ginni Rometty.
[Disclosure: I am employed by Red Hat, so I’ve seen this as the day-to-day “uniform” that is always worn by Whitehurst. Also, this commentary is only on the jeans worn in the interviews, not on any financial, technical or strategic aspect of the IBM acquisition.]
While Red Hat’s CEO didn’t consult with me about his wardrobe for the interviews, I suspect that he went through a somewhat similar thought process about what he wanted to convey in this new context. It was an important signal to both Red Hat employees and the market at large about who Red Hat would continue to be: a company rooted in open source communities, with a more engineering-centric approach than IBM’s more executive- and sales-centric approach to the market.
There are lots of different approaches to job interviews.
- Dress for the job you want
- Dress for success
- How you dress doesn’t matter, it’s more about your knowledge, experience and personality
All of those might be relevant to a specific situation. But for a situation in which you’re asked to be part of a significant change, and you’re not sure if the change will be accepted, maybe it’s a good idea to wear jeans.
This week both Amazon and Microsoft announced quarterly earnings for Q3CY2018. At the corporate level, Amazon’s revenues fell below expectations, while Microsoft’s beat by a significant amount. Looking one layer deeper, AWS’ revenues slightly missed expectations (but still grew 46% year-over-year) and Azure continues to grow, but at a slightly lower rate than the previous quarter (76%, down from 89% last quarter).
What can we take away from these announcements? Well, lots of things, depending on how you look at the numbers.
- AWS is still growing very fast (+40%), especially for a division with already large revenues ($25B+).
- Amazon now completely breaks out AWS revenues (they were lumped into the “Other” category through early 2015). This is because AWS has become a $20B+/yr business, and it drives significant profits for Amazon. This is important, as it’s different from how Microsoft breaks out “Azure” revenues, which are spread across multiple buckets (Office 365 Cloud and Intelligent Cloud). We’ll explain more about that later.
- AWS’ Operating Margins grew to over 30% for the quarter, the highest level in more than 4 years. This means that while revenues aren’t growing as fast as in previous quarters, they are more profitable revenues.
- Azure’s revenues are not explicitly broken out (still under “Intelligent Cloud”), but they are assumed to be in the $7.5-7.75B range, which would make it larger than AWS. That would be a bold claim, especially without exact numbers, so let’s just say that the two Seattle cloud giants are playing the game in the same ballpark.
- Operating Margins for Azure aren’t specifically broken out, so we don’t have any real picture of the profitability of the Microsoft Cloud the way we do for AWS. Granted (see below), how costs are allocated to usage of data-center and cloud resources could vary significantly across different companies.
Anytime you look at public cloud revenues or margins, it’s also important to consider the level of capital spending required to continue the growth. To compete at the highest levels, this requires more than $1B per quarter in investment. Granted, these investments can be leveraged for many activities beyond customer-facing public cloud resources (e.g. search, retail websites, autonomous vehicle telemetry, gaming, etc.), which allows for broader economies of scale across all areas of the business.
Looking at the past two Gartner IaaS Magic Quadrants (2018 and 2017), it’s clear that the market for the largest public cloud providers is shaking out to a small number, and it is becoming much more competitive.
Google Cloud is beginning to figure out how to talk to Enterprise customers, whether it’s about core technologies like Machine Learning and AI, or exploring ways to get into customers’ data-centers with early offerings like GKE On-Prem.
IBM will be making an expanded push around open source and Hybrid Cloud with its recent acquisition of Red Hat.
And Alibaba Cloud has the footprint and cultural understanding to grow in Asian markets and potentially beyond.
What was once considered by some to be a single horse race in the public cloud is beginning to turn into a multi-horse race, between some very large and fast moving alternatives. It’ll be interesting to watch the significant moves being made by each company to see how they navigate the competitive waters, and appeal to customers that will be making architectural decisions that will impact their next 5-10 years of business survival.
Back in the day, when technologies like server virtualization were starting to change the landscape of IT (around 2007), it was not unusual for someone to build out a home lab to test new technologies. This meant buying a few servers, some SAN or NAS storage, a network switch, and likely a bunch of software licenses to make it all work. It wasn’t unusual for people to spend $5,000 to $10,000 on these home labs, on top of the ongoing electrical costs and maintenance of the system.
But as more cloud-native technologies have emerged, both in the open source communities and via public cloud services, a new trend is emerging in how people are able to learn and test. As would be expected, the trend is moving testing environments to the public cloud, with a set of online services that don’t require anything except a web browser.
All of the major public clouds (AWS, Azure and Google Cloud Platform) have a “free tier” that allows users to try any service up to a certain capacity. These free tiers are a great way to test out new services, or potentially run some lightweight applications. The free tiers have all the same features as the paid tiers, just with fewer available resources.
In addition to accessing free cloud resources, the public cloud providers also offer various levels of training resources – AWS, Azure, GCP. Some of these courses are tutorials, while others are quick-starts to get basic functionality working in a live environment.
Public Cloud Certifications
Another popular service targeting certifications for public cloud services is A Cloud Guru. We learned how they built their service on serverless technologies on The Cloudcast. Initially targeting training for basic AWS services and AWS certifications, it has expanded its offerings to include other cloud services (Azure and Google Cloud), as well as starter courses on things like Alexa Skills or serverless application programming.
Learning Open Source Cloud-Native Skills
Yet another platform that is gaining popularity is Katacoda. We recently spoke with its creator, Ben Hall, on The Cloudcast. Katacoda provides interactive tutorials for emerging technologies such as Kubernetes, Docker, Prometheus, Istio, OpenShift, Jenkins and GitHub. The platform lets users emulate being directly on a machine, with direct CLI access via the browser. One of the highlights of Katacoda is that users can follow the step-by-step tutorials exactly, or be flexible in how they use the platform. This makes it easy to learn, but also to make mistakes without having to completely start over a module.
All of these new platforms are making it much easier, and less expensive for both beginners and experts to learn and trial all of these emerging technologies.
For the last 7 or 8 years, the list of companies that have attempted to transform their business through technology has grown very long, spanning every industry and every geography. Early on, it was called “Cloud First” and attempted to emulate how Silicon Valley companies ran their IT departments. Over time, it has evolved into things like “Agile Development” or “DevOps” or “Digital Transformation”. At the core of all of these changing dynamics is the intersection of new technology that enables faster software development, and the cultural/organizational challenge of aligning to more frequent deployments. These topics are discussed in many DevOps Days events around the world. Vendors such as IBM, Red Hat and Pivotal (and others) have programs to help companies reshape their internal culture and software development processes. Consulting companies such as Thoughtworks (and many other large SIs) have also specialized in these broad transformational projects.
In researching many of the success stories, there are lots of examples of companies that were able to get critical pieces of technology to work for them. These newer technologies (e.g. CI/CD pipelines, automated testing, infrastructure-as-code, Kubernetes, serverless, etc.) are all centered around automating a previously manual process. They allow companies to more tightly couple steps in a complex software build or release process. And they allow companies to do these tasks in a repeatable way that creates predictable outcomes. The implementations of these technologies, which can often take 6 to 12 or even 18 months to get fully operational depending on existing skills or urgency of need, often create stunning results. Results like we see in the State of DevOps Report.
But one critical element is often overlooked, and rarely explicitly credited as a cause of success: the role of internal marketing and evangelism of the successes along the way. The storytelling behind the small steps of progress in the transformation.
For many engineering teams (ITOps or Developers), the idea of “storytelling” about changes often seems awkward or unusual. From their perspective, that’s a “fluff” or “worthless” activity that isn’t really addressing the difficult challenges of engineering. And yet so many transformations stall because not enough people within the organization know about the changes that are happening. These IT organizations are not that different from a technology company that’s trying to sell products in a crowded marketplace. IT organizations already have a lot on their plate, lots of previous goals to achieve and sometimes they are just not that interested in change if it doesn’t impact them individually.
The way that technology vendors offset this noise in the market is through marketing and evangelists/advocates. People that are trained to listen to the market about challenges, and showcase how specific products/technologies can solve problems. These evangelists/advocates are often not just talking about the technology, but sharing stories about how their customers were able to achieve success or overcome challenges. This is a function that many internal IT organizations would be smart to emulate.
A few resources that I’ve found useful in learning how to construct ways to convince people that change is beneficial are:
Any of the books by the Heath Brothers.
- “Made to Stick” talks about why some new ideas have staying power and others fail.
- “Switch” talks about how to convince people to make changes when it seems like it’s nearly impossible to get people to change.
- “The Power of Moments” does a great job of explaining why it’s so important to elevate the importance of certain moments and activities to help inspire people to achieve big things.
Another good, recently released book to read is “The Messy Middle” by Scott Belsky. The book looks at how to navigate through the peaks and valleys of new projects.
Both of these sets of resources will seem unusual to many in IT, but they fundamentally look at how to manage through change, and establish communication models that help get other people to want to participate and achieve common goals.
So if your IT transformation project is stalling, it’s worth taking a look at whether you’re spending enough time getting others involved and excited about the project.
This past week, while scrolling through Twitter, I saw an image (right) with the caption “Get a crash course on Containers and Kubernetes 101”. The image was from VMworld 2018 and the room was pretty full. It seemed like lots of virtualization admins were now interested in containers and Kubernetes, very new concepts at a VMworld event. Having been heavily involved in this space for the last 3+ years, and having seen thousands of container enthusiasts attend events like DockerCon, Google NEXT, KubeCon or Red Hat Summit, I had to remind myself that the technology is still in its early days. And during these early days, it’s important to provide 101-level content so people can learn and quickly get up to speed on new technologies. The great thing about today’s world vs. when many of these VM admins were learning about virtualization is that we’re no longer bound by the need to buy a bunch of expensive physical hardware or maintain a home lab. There are great learning tools like Minikube that run on your laptop, or online tutorials for basic and advanced Kubernetes scenarios.
So with the goal of helping VM admins learn more about containers and Kubernetes, we decided to focus this week’s PodCTL podcast on how their worlds are different and similar. This wasn’t intended to be a “which one is better?” comparison, but rather a look at how much similarity there is, and how many new concepts a VM admin would need to learn or adjust to in order to succeed with containers.
We discussed a number of areas:
- Control Plane
- Content Repository
- Data Plane (Hosts, OS, Apps)
- Networking, Storage, Management, Logging, Monitoring
- What is automated (by default vs. tooling)
- Availability (models)
- Stateful vs. Stateless apps
- Automated (integrated) vs. Manual tasks
- OS and Software Patching
Like any new technology, there is definitely a learning curve, but the tools and resources available in 2018 are far better than those available for learning virtualization in 2008-2011. In terms of priorities, understanding both containers and Kubernetes is probably something that virtualization admins should place high on their lists for 2018-2019, as more and more developers and even packaged applications will be using containers.
Take a listen and let us know what areas we missed, or where you think we may have gotten something incorrect. Are you trying to learn more about containers and Kubernetes? Share with us in the comments how your journey is going.