From Silos to Services: Cloud Computing for the Enterprise


June 30, 2017  7:01 PM

Open Source Software and the Grateful Dead

Brian Gracely
Open source software

For the last few weeks, I’ve been traveling quite a bit, so I’ve spent a decent amount of time on airplanes. When airplane WiFi is poor (which is quite frequent), I pass the time watching movies. Lately, I’ve been watching the excellent “Long Strange Trip” documentary on Amazon Prime, about the history of the Grateful Dead. If you like music, or history, or just enjoy good storytelling, I highly recommend the series.

As I come up on my one-year anniversary of working at Red Hat, it struck me how many parallels there are between the evolution of the Dead and how open source software communities evolve [LSD trips and being under the constant influence of drugs excluded]. The Grateful Dead have often been characterized as “a tribe of contrarians who made art out of open-ended chaos”. That phrase could easily apply to many open source communities.

[Episode 1] Committed to Constant Change

The Grateful Dead are known as a touring band, not one that spent time chasing commercial success via studio albums. Like open source software, their music was constantly evolving, and it was interpreted differently by nearly everyone who saw them perform live. As their contributions began to slow, their model was “forked” and replicated by touring bands like Phish and Widespread Panic. Similarly, open source software is less about a single project than a style of development and collaboration that is constantly evolving, with its principles copied (and evolved) by many other projects.

[Episode 2] Finding Success on Their Own Terms 

While the record labels wanted them to conform to the recording and sales models used by most other bands, the Grateful Dead decided to adopt alternative business models. At the time, selling albums would have been more profitable, but they were actually ahead of their time in focusing on live events and allowing their music to be fragmented and easily copied (bootleg tapes). Similarly, many analysts would like to see open source companies derive revenues the same way proprietary companies do, but that model hasn’t been fruitful. Successful open source companies have instead adopted support and SaaS models to drive revenues and success.

[Episode 3] Let’s Join the Band

While the Grateful Dead had 5 or 6 original members, the documentary highlights how Donna and Keith Godchaux “just decided to learn the music and join the band” in 1971. Random fans of the Dead actually joined the band and stayed with them for many years. This is not unlike how anyone can join an open source project just by showing interest and making a meaningful contribution.

[Episode 4] Who’s In Charge Here? 

For many people, the connection between Linus Torvalds and the Linux project is the model they expect all open source projects to follow. They expect a BDFL (Benevolent Dictator for Life). In most projects, the BDFL role doesn’t really exist. There might be strong leaders, but they realize that broader success requires many leaders and tribes to emerge. The same dichotomy emerged for the Grateful Dead, where Jerry Garcia was the visible leader, but he didn’t want to set all the rules for how the band (or their audience) needed to behave.

[Episode 5] and [Episode 6] I’ve yet to see these episodes (they’re saved for the next airplane flights), but judging from the previews, they appear to have similar open source parallels. They focus on the growing success of the band and how people set higher expectations than the band wanted to take on themselves. This can often happen with successful projects, where commercial expectations begin to drift from core community expectations. This is where strong leadership is needed just as much as in the early days of a project.

If you’re interested in open source software, or some insight into how communities ebb and flow, I highly recommend this documentary. And the music is obviously great too.

June 26, 2017  10:24 PM

Walmart vs. Amazon – Battling Outside the Box

Brian Gracely
Amazon, AWS, Azure, Google Cloud, Jeff Bezos, Public Cloud, Walmart

This past week, Walmart issued a statement to their retail partners, suggesting that they should not run their technology stacks on the AWS cloud. This is not an unprecedented move for Walmart, which has for many years required that partners have a physical presence in Bentonville, AR (Walmart HQ), in order to simplify meetings and reduce travel costs for Walmart.

It’s understandable that Walmart wants to keep valuable information about their business trends, and details about their partners, away from AWS (and indirectly, Amazon). This is not to imply (in any way) that customer data is collected by AWS, but there is no way to determine how much meta-information AWS can collect about usage patterns that could influence the services they offer.

What’s interesting about this statement from Walmart is that they don’t offer a Walmart-branded hosted cloud alternative to AWS. This brings up an interesting dilemma: [1] Does this create a unique opportunity for the Azure or Google clouds? [2] Does Walmart have concerns about Google’s alternative businesses (e.g. Alphabet) collecting data patterns about their partners? [3] Will Walmart partners be swayed by this edict, especially given Amazon’s growing market share in retail? [4] Will this force Walmart to get into the hosted cloud business, and do they keep enough cash on their balance sheet to compete in that market?

Back in December, I predicted that the Trump administration would pick a fight with Amazon, as a proxy for Jeff Bezos’ ownership of the Washington Post. That hasn’t materialized yet, although the year is only halfway complete.

This action by Walmart ultimately brings up the question: can non-traditional tech companies impact AWS in ways that traditional tech companies have been unable to do – e.g. slow down AWS growth? Companies such as HPE haven’t been able to slow it down, but maybe Walmart’s massive reach can have a different impact on the market. It will be interesting to see if Walmart addresses this in their quarterly reports, or begins to make it a public issue through their Office of the CTO.

Beyond Amazon vs. Walmart, this brings up yet another interesting question: will we see existing companies with large ecosystems or supply chains (e.g. automotive, healthcare, etc.) apply cloud guidance to their partners (e.g. must use XYZ cloud), or has the world of APIs completely changed what a modern supply chain looks like? The concept of “community clouds” has never really taken off in practice.


June 13, 2017  11:15 PM

OpenStack was a pivotal time in IT

Brian Gracely
AWS, Open source, OpenStack, Private Cloud, Public Cloud, VMware

This past week, we did some reflection on The Cloudcast about the evolution of technology over the last 6+ years. One of the topics we discussed was the impact that OpenStack had on the industry. People have various (strong) opinions about the level of success OpenStack has achieved, but we discussed how it changed the IT landscape in a number of significant ways.

Announced and launched in 2010, OpenStack was designed to deliver API-driven cloud infrastructure, similar to AWS EC2 compute and S3 storage. At the time, there was a split over whether the project(s) would focus on being a VMware replacement or an open version of AWS services. This was heavily debated by groups focused on both agendas.
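To make “API-driven” concrete, here is a minimal sketch using the openstacksdk Python client. The cloud name, image, flavor and network names are illustrative, and it assumes credentials are already configured in a clouds.yaml file:

```python
import openstack  # the openstacksdk client library

# A sketch only: boot a compute instance with a library call instead
# of a ticket. Assumes a "mycloud" entry in clouds.yaml; the image,
# flavor and network names are placeholders.
conn = openstack.connect(cloud="mycloud")

server = conn.create_server(
    "demo-instance",
    image="cirros-0.3.5",
    flavor="m1.tiny",
    network="private",
    wait=True,
)

print(server.status, server.access_ipv4)
```

The point is less the specific call than the model: compute, storage and networking become things a program can request, rather than tickets a team has to fulfill.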

Software Defined Infrastructure

While OpenStack was by no means the first implementation of software-defined infrastructure services (networking, storage, firewall, proxy, etc.), it was the first time this approach was significantly embraced by Enterprise-centric vendors. Until then, Enterprise vendors had continued to provide hardware-centric offerings that complemented products like VMware virtualization. Since then, API-centric infrastructure has become more commonplace in the Enterprise, especially with the emergence of containers and container platforms.

Open Source in the Enterprise

While companies like Red Hat, SUSE and Canonical had been selling commercial open source to the Enterprise for many years, OpenStack was the first time that companies like Cisco, HPE, NetApp, EMC and many others attempted to combine proprietary and open source software into their go-to-market offerings. Since then, more IT vendors have been building open source offerings, or partnering with open-source-centric companies, to bring offerings to market for customers that are demanding open-first software.

Who’s in Charge of OpenStack?

While Rackspace may have wanted to leverage all the engineering talent to take on AWS, it wasn’t able to maintain ownership of the project. The OpenStack Foundation was an early attempt at bringing together many competing vendor interests under a single governance model. Critics would argue that it tried to take on too many use-cases (e.g. PaaS, Big Data, DBaaS) and projects in the early days, but the project has continued to evolve, and many large cloud environments (Enterprise, Telco) are running on OpenStack.

Since the creation of the OpenStack Foundation, several other highly visible open source projects have created independent foundations to manage the governance of the projects (e.g. CNCF, Cloud Foundry, etc.)

Founders Don’t Always Make the Big Bucks

While OpenStack was viewed as a disruptive threat to the $1T Enterprise infrastructure industry, and heavily funded by venture capital, most of the founding individuals didn’t make out in a big way financially. Piston Cloud and Cloudscaling were sold to Cisco and EMC, respectively, with relatively small exits. SwiftStack has pivoted from just supporting OpenStack to also supporting multiple public cloud storage APIs and software-defined storage use-cases. Nebula went bankrupt. Even Mirantis has moved their focus over to Kubernetes and containers. Ironically, Red Hat has become the Red Hat of OpenStack.


May 28, 2017  5:32 PM

Do IT admins fear for their future?

Brian Gracely
Automation, Cloud Computing, DevOps, IT admin, learning, Planning, software-defined

Most tech events that I attend are fairly positive, with people talking about new technologies and how they might “change the world”. The pushback on most talks is about the viability of the technology, or who would actually attempt to use it in production.

But a couple of weeks ago at Interop, I experienced a much different vibe at several of the cloud computing talks. People in the audience were asking how this technology would replace their jobs and what they could do to prevent it.

We’ve Seen this Before

Now, this isn’t really a new sentiment. We heard it from mainframe and mini admins when open systems and client-server computing were introduced. We heard it from telecom admins when voice-over-IP was introduced. And we heard it from various infrastructure teams when virtualization and software-defined were introduced.

What seemed different about the concerns at this event was that most of the people asking questions didn’t believe they’d ever get the opportunity to expand their current skills at their current employer. In essence they were saying, “I don’t doubt that DevOps or public cloud or cloud-native apps will happen; I just don’t see how they’ll happen via the IT organization at my company.”

I’ve written before about how learning new technologies has never been more accessible (here, here, here). But I also realize that many people aren’t going to take the time to learn something new if it can’t be immediately applied to their current job. It’s sort of like taking classes in a foreign language, but not having anyone to practice the new language with.

Do we need more IT Admins?

During one of the sessions, Joe Emison (@joeemison) made the point that while developers are driving more changes within IT today, developers aren’t very good at many of the tasks that IT admins typically perform. This is leading them to leverage more and more public cloud services (see chart).

[Chart from Joe Emison’s session: public cloud services increasingly used in place of traditional IT admin tasks]

It was a sobering slide for those in attendance, especially those who had spent many years building up those skills. There was also a realization that they were part of IT organizations that had never really been measured or incentivized to optimize for speed, but rather to optimize for cost reduction and application uptime.

Double down on developers?

There really weren’t many answers for people asking about their future in a world of DevOps, public cloud, automation and more focus on developing and deploying software quickly. Most answers were focused on learning the software skills necessary to program something – whether an application or the automation tooling needed to stand up infrastructure/security/CI pipelines quickly. Those might not have been the answers IT admins wanted to hear, but they are the answers that provide some path forward. Answers that tell people to do nothing, or to just wait for the future to change, probably aren’t going to create the future that people in the audience had hoped for.


May 28, 2017  1:16 PM

3 Lessons Learned: Containers vs. Container Platforms

Brian Gracely
containers, Docker, Kubernetes, Linux, malware, OpenShift, Security, swarm

This past week I had the opportunity to present a session entitled “Managing Containers in Production: What you need to know” at the Interop conference in Las Vegas. In addition to giving the talk, I had the opportunity to watch several other presentations about containers and cloud-native applications. One session, “The Case for Containers: What, When, and Why?”, was primarily focused on Containers 101 and some examples of how you might run containers on your local machine. It highlighted for me three distinct differences between running containers locally and running them in production.

Local Containers vs. Container Platforms

One of the discussion points was getting from running a single container to running the several containers that make up an application, or several interconnected services. The suggestion was that people can just use the built-in “Swarm Mode” to interconnect these containers into clusters. While this is true, the session failed to mention the more popular way to do this: using Kubernetes. A member of the audience also asked if this could create a multi-tenant environment for their business, and they were told that there were no multi-tenant technologies for containers. It’s true that Swarm Mode does not natively support multi-tenancy. But it is incorrect that multi-tenancy isn’t supported for containers. Red Hat OpenShift delivers a multi-tenant environment for containers (via projects, etc.), built on top of Kubernetes.
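As a rough illustration of what that multi-tenancy is built on: OpenShift projects are layered on Kubernetes namespaces, which can be combined with resource quotas (plus RBAC and network policies) to isolate tenants. A minimal sketch, with a hypothetical tenant name and quota values:

```python
import subprocess

# Hypothetical tenant name and quota values; namespaces + quotas are
# the Kubernetes primitives that OpenShift projects build upon.
MANIFEST = """\
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
"""

# Apply against whatever cluster kubectl is currently configured for.
subprocess.run(["kubectl", "apply", "-f", "-"],
               input=MANIFEST, text=True, check=True)
```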

Docker Hub vs. Managed Container Registries

Throughout the talk, the speaker used Docker Hub as the source for all container images. While Docker Hub has done a great job of bringing together the containerized applications of ISVs and independent engineers, it does have its challenges. First, several independent studies have shown that many images on Docker Hub have known security vulnerabilities or viruses. This means that it’s important to know the source of container images, as well as to have a mechanism to scan/re-scan any images you use in your environment. Second, Docker Hub is a registry located across the Internet from your environment. What will you do if Docker Hub isn’t reachable from your application pipeline? This leads many companies to look at using local container registries, not only to improve availability but also to manage bandwidth requirements, which can be high for large container images. It also allows companies to better manage image sources (e.g. a corporate standard for trusted images) and scanning capabilities.
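If you do stand up a local registry, one nice property is that the standard Docker Registry HTTP API v2 makes it easy to inventory programmatically, e.g. to feed a periodic re-scan job. A minimal sketch, with a hypothetical internal registry address:

```python
import requests

# Hypothetical internal registry; any registry speaking the
# Docker Registry HTTP API v2 exposes these endpoints.
REGISTRY = "https://registry.internal.example.com"

def list_repositories():
    """Enumerate the repositories hosted in the registry."""
    resp = requests.get(f"{REGISTRY}/v2/_catalog", timeout=5)
    resp.raise_for_status()
    return resp.json()["repositories"]

def list_tags(repo):
    """List a repository's tags, e.g. to queue them for scanning."""
    resp = requests.get(f"{REGISTRY}/v2/{repo}/tags/list", timeout=5)
    resp.raise_for_status()
    return resp.json()["tags"]

for repo in list_repositories():
    print(repo, list_tags(repo))
```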

Aligning Container OS vs. Host OS

A final topic that came up from an audience question was whether or not you should align the base Linux image in the container with the OS of the host where the container is running. This is an important topic because containers are a core element of the Linux operating system. In essence, they divide the Linux running on the host into two sections: container image and container host.

For an individual’s machine, it may not matter whether the container base image and the host OS are aligned. Misalignment can easily happen if you’re using the defaults in a tool like Docker for Windows/Mac (e.g. LinuxKit or Alpine Linux) along with popular images from Docker Hub (e.g. Ubuntu Linux). But as this moves into a production environment, alignment becomes more critical. There are many elements to Linux containers and Linux hosts, and there can be differences between versions of an OS, versions of the Linux kernel, and the libraries included with each. These differences can introduce security vulnerabilities or a lack of functionality.
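Checking for that alignment can be automated. A minimal sketch that compares /etc/os-release on the host with the same file inside a container image (the image name is illustrative; this assumes a local docker CLI):

```python
import subprocess

def os_release(cmd):
    """Parse /etc/os-release output into a dict of key/value pairs."""
    text = subprocess.check_output(cmd, text=True)
    pairs = (line.split("=", 1) for line in text.splitlines() if "=" in line)
    return {key: value.strip('"') for key, value in pairs}

host = os_release(["cat", "/etc/os-release"])
# "ubuntu:16.04" is a placeholder for whatever base image you use.
image = os_release(["docker", "run", "--rm", "ubuntu:16.04",
                    "cat", "/etc/os-release"])

for key in ("ID", "VERSION_ID"):
    if host.get(key) != image.get(key):
        print(f"{key} mismatch: host={host.get(key)} image={image.get(key)}")
```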

Overall, it’s great to see container topics being widely discussed not only at DevOps and developer-centric events, but also at infrastructure-centric events like Interop. But it’s important that we discuss not only the basics, but also how the emerging best practices get put into production in a way that not only benefits developers and their applications, but also gives operators and infrastructure teams a model to keep those applications running and secure.


May 13, 2017  2:13 PM

Managing Containers in Production

Brian Gracely
CaaS, containers, DevOps, Docker, Kubernetes, Linux, OpenShift, PaaS, pipeline, SDN, Storage

Next week at Interop 2017 in Las Vegas, I’m giving a talk about managing containers. The focus of the talk is the expanded set of interactions required as engineers move from a single container running on a laptop to containers running in production. It looks at how much developers need to know about containers to get their applications working, and what operations teams need to plan for in terms of container scheduling, networking, storage and security.

Breaking down the talk, there are three critical messages to take away.

The Need for Container Platforms

Platforms that manage containers have been around for quite a while (the artist formerly known as “PaaS”), just as Linux containers have been around for much longer than docker. But as containers become more popular with developers as the native packaging mechanism for applications, it becomes increasingly important that operations teams have the right tools in place to manage those containers. Hence the need for container platforms, and the emergence of technologies like Kubernetes.

The Developer Experience Matters

As platforms transition from PaaS to CaaS, or some combination of the two, it’s important to remember that the container is just a packaging mechanism for applications. It’s critical that developers are able to use the platform to rapidly build and deploy applications. This could mean that they package the application on their laptop using a container, or push their code directly into a CI/CD pipeline. In either case, the container platform must be able to take that application and run it in a production environment. The platform shouldn’t restrict one development pattern or another.

Operational Visibility is Critical

While containers bring some interesting properties around packaging and portability, it’s important for operations teams to realize that containers have different characteristics from virtual machines. Containers may run for a few seconds or for long periods of time. This means that management, monitoring and logging tools have to be re-thought in order to be valuable in a container-platform environment.
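As one illustration of that re-thinking: rather than polling a static inventory of hosts the way VM-era tools do, container monitoring tends to subscribe to the runtime’s event stream and react as containers come and go. A minimal sketch using the docker events CLI (assumes a local docker daemon):

```python
import json
import subprocess

# Subscribe to the runtime's event stream; `docker events` emits one
# JSON object per line when given this format string.
proc = subprocess.Popen(
    ["docker", "events", "--format", "{{json .}}"],
    stdout=subprocess.PIPE, text=True,
)

for line in proc.stdout:
    event = json.loads(line)
    # React as containers appear and disappear, however briefly.
    if event.get("Type") == "container" and event.get("status") in ("start", "die"):
        name = event.get("Actor", {}).get("Attributes", {}).get("name", "?")
        print(f"container {name}: {event['status']}")
```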


April 30, 2017  6:06 PM

Serverless Moves from Functions to Events

Brian Gracely
AWS, Azure, events, Functions, Lambda

The discussions and sessions at ServerlessConf Austin ’17 were a good mix of emerging technology and existing use-cases. The high-level message is that serverless allows developers to focus entirely on their applications, while all (OK, most) of the infrastructure and operations challenges get handled by the underlying services.

But as with any emerging technology, after a year+ of learnings, the focus of “what’s next?” begins to evolve. In discussions with several early-stage users, they said that there are two big areas they expect to see highlighted in 2017/2018:

  1. The evolution of frameworks from being focused on the functions, to being focused on the events and connected services. In particular, the Serverless Framework was mentioned as one that will need to evolve its focus from functions to events.
  2. The set of corner-cases and advanced use-cases that can’t be addressed with current services and tools.

From Functions to Events – The Natural Platform Evolution

If we go back a couple of years, to when AWS Lambda was first announced, it had somewhat limited functionality and limited language support. Fast forward to today, and the number of other AWS services that can interact with Lambda has grown significantly, as has the number of supported languages. Services like API Gateway, Kinesis and CloudFormation are now capable of invoking Lambda functions.
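For a sense of what that integration looks like from the developer’s side, here is a minimal sketch of a Python Lambda handler consuming a batch of Kinesis records; the processing logic is illustrative only:

```python
import base64
import json

def handler(event, context):
    """Invoked by Kinesis; each record's payload is base64-encoded."""
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        message = json.loads(payload)
        # Real business logic would go here.
        print(record["kinesis"]["partitionKey"], message)
    return {"processed": len(event["Records"])}
```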

Moving outside the AWS ecosystem, we now see Azure beginning to expand not only its Azure Functions capabilities, but also the tools that connect Azure Functions to other services. At ServerlessConf, it was highlighted that Azure Logic Apps now supports over 120 connectors to a mix of Azure and 3rd-party services.

As we move from focusing on the code within a function to the connections with other services, the mindset begins to change from “solving technical challenges” to “thinking about business logic”. With the broad range of connectors becoming readily available in the marketplace, across many services, it’s not hard to imagine a line-of-business leader being able to bring together a new application to address a business need with very limited effort. It begins to democratize the development process.

This move from thinking about functions to thinking in terms of events and connectors also creates the possibility for more asynchronous business interactions. Instead of thinking about the end-to-end flow of a transaction, the steps of an interaction can be asynchronous and more loosely coupled. Or they can become micro-interactions, eventually allowing a more customized experience tailored to individual customers.
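A minimal sketch of that loose coupling, using AWS SQS via boto3 (the queue URL and payload shape are hypothetical): instead of invoking the next step directly and waiting, the function emits an event and returns.

```python
import json
import boto3

sqs = boto3.client("sqs")
# Hypothetical queue; downstream consumers subscribe independently.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/order-events"

def handler(event, context):
    # Do this step's work...
    order = {"order_id": event["order_id"], "status": "validated"}
    # ...then hand off asynchronously rather than calling the next
    # service and blocking on its response.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))
    return {"emitted": True}
```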

There are still many use-cases and corner-cases to be worked out as serverless moves the focus from functions to events, but it is definitely an area that the community at ServerlessConf was spending quite a bit of time thinking about and discussing possibilities.


April 29, 2017  8:43 PM

Thoughts from ServerlessConf Austin

Brian Gracely
AWS, FaaS, Functions, Google, Lambda

This past week was ServerlessConf in Austin. The event has now done a world tour over the last year, starting in New York in 2016 and coming back to the States in 2017.

We have been following the serverless space for over a year now (see @serverlesscast), but this was the first time we had a chance to attend an event in person. I went down there looking to better understand four key areas:

  1. The Community – Who is attending, what are they working on, and what level of progress have they made?
  2. The Market – Is this as an area where people are using it for real business challenges, and is there a market to make money from this technology?
  3. Cloud options vs. Open options – Most of the discussion about serverless so far has been about services delivered via public clouds (e.g. AWS Lambda, Azure Functions, Google Firebase, Auth0, Netlify, etc.). Are open alternatives emerging?
  4. Market Investment – Is this a market where a lot of VC money has already been placed, or is funding coming from somewhere else?

The Community

The events are hosted by A Cloud Guru, which not only does training for cloud services such as serverless, but has also built its entire business on serverless technologies. They have done a great job building the community and attracting end-users that are building interesting things with serverless.

The show attracted about 250 people for the Day 1 hands-on training sessions, and almost 450 people for the Day 2/3 talks and networking. The audience still seems to be people that are doing early-stage projects with serverless, and it hasn’t (yet) been overtaken by the vendors. Nearly everyone in attendance was working on some serverless project for their business, from large (beta) rollouts to entire businesses being delivered via serverless architectures.

The Market

Serverless is still in its early days, as no companies are publicly announcing serverless revenues at this point. AWS is currently the leader, as AWS Lambda has been in the market the longest and attracted the early-adopter following. But Microsoft Azure Functions had a very strong presence, showcasing not only Azure Functions but also a broad range of event-connectors through Azure Logic Apps. Google has a split portfolio between Firebase, which has been a Backend-as-a-Service for a while, and the beta Google Cloud Functions. Application frameworks like Serverless Framework and Go Sparta are also starting to emerge.

There were several businesses talking about their real-world use cases, including consumer companies like iRobot and large Enterprise companies like Accenture.

Cloud options vs. Open options

Most of the event was focused on cloud services for serverless, but a few open options are beginning to emerge. IBM has donated OpenWhisk to the Apache Foundation. The Kubernetes ecosystem has a few projects (Funktion, Fission, Kubeless) that are maturing for on-premises or cloud deployments. Also, a new project from stdlib was announced, called FaaSLang. Since one of the key value points of Serverless/FaaS is paying only for usage, it will be interesting to watch whether any on-premises or open-source offerings catch hold in the market.

Market Investment

The amount of VC funding present at ServerlessConf this week, via funded startups, wasn’t very large. There were smaller companies such as IOpipe, stdlib, Fauna, A Cloud Guru and Serverless Framework, but none of them had raised a large amount of money yet. The market is still trying to figure out how quickly developers will be attracted to this new mode of developing applications, and whether there will be any white space left after the public cloud providers roll out more serverless tools and services.


April 17, 2017  3:01 PM

Looking ahead to DockerCon 2017

Brian Gracely
containers, Docker, Google, Kubernetes, Microsoft, Multi-cloud, orchestration, Red Hat

Two years ago, the drinking game of the tech industry was “Docker, Docker, Docker.” Venture capitalists like Adrian Cockcroft were calling the growth of the Docker ecosystem “accelerating to plaid”. At the time, Docker was hosting two major events a year, in the US and Europe.

One year ago, Docker changed direction on several fronts (more on those below).

So as DockerCon 2017 approaches, what could developers and operators expect from Docker?

Hint…here’s what world famous industry prognosticator @cloud_opinion expects during the week:

[Screenshot: @cloud_opinion’s tweeted predictions for DockerCon 2017]
[Note – Rancher did announce “Project Longhorn” earlier today, focused on block storage for containers, which is often cited as a challenge for customers. But building storage systems from scratch is very, very difficult, as CoreOS learned last year with “Project Torus”, and as DellEMC’s Chad Sakac has been saying for years.]

Docker vs. OCI vs. Containerd

A couple of years ago, Docker worked with many other vendors to create the Open Container Initiative (OCI) and begin to standardize on a container format and runtime. From this work, projects like “runc” were created. Last year, Docker separated the runtime into a new project called “containerd”, which was recently donated to the CNCF. All of these efforts were targeted at creating a distinction between Docker, Inc. (the commercial company) and the docker project (open source), as well as creating a container standard that could evolve toward being stable (and boring) for the long term.

runc is now at v1.0-rc3 and close to shipping, while containerd is at v0.2 and will take time to complete and mature. Since Docker now plans to maintain its runtime on a different schedule than the containerd project, and to bundle it in the Docker EE commercial offering, it will be interesting to see how Docker messages the idea of an independent container standard vs. Docker’s own offerings.

Kubernetes vs. Docker Swarm

After the SwarmKit announcement in 2016, some analysts thought that Docker Swarm would become the leading container scheduling/orchestration framework because it was built into the Docker Engine. Much debate and discussion (here, here) ensued about how the broader docker open source community would respond to this change, both in terms of technology and in how Docker communicated roadmaps with the community.

Based on recent survey data (here, here, here, here), that prediction has not played out in the marketplace, with Kubernetes having the lead (by varying amounts). Over the past year, the Kubernetes community has grown significantly in terms of developers (1100+), projects and offerings in the marketplace.

Will Docker attack the technical merits of Kubernetes, or will they potentially announce an integration with Kubernetes platforms?

Windows Containers

Given the continued push by Microsoft to work closely with Docker, I expect we’ll hear more about Docker for Windows Containers. Microsoft has been more open to open source technologies of late, as well as working with popular container technologies (Mesos, Kubernetes, Deis Workflow, etc.). I predicted that the two of them would get together a couple of years ago, but we’ll see whether that partnership stays tactical or becomes something more strategic.

More Details on Customers or Revenues?

After taking $180M in VC funding over 7 rounds, the market is very interested in hearing some specifics about customers and, more importantly, revenues. Docker changed its pricing and bundling model earlier in the year with Docker EE, along with new details about how long it will support customers on a given release. While Docker is not a public company and is not obligated to disclose any financials, those details are desired by both customers and partners to understand the viability of this market and to determine what their long-term commitment to Docker should be.


April 14, 2017  5:51 PM

Kubernetes continues to lead the Container Industry

Brian Gracely
Cloud Foundry, Google, IBM, Kubernetes, Microsoft, OpenShift, Red Hat

This past week, monitoring company Sysdig released the findings of their latest container survey. The report used actual monitored data as its source, rather than user feedback via surveys. It focused on many different aspects of container usage, with orchestration being a key element.

[Charts: Sysdig survey results on container orchestration usage]

These results are not surprising, as we’ve seen the number of community developers (1100+) working on Kubernetes grow to 5-8x as large as any other container orchestration framework (including Mesos, Docker SwarmKit and Cloud Foundry Diego).

The other aspect that was not surprising is that the number of “DIY” scheduling frameworks – essentially people building their own container platforms with scripts and other custom tooling – continues to decrease over time. Given how quickly projects in the container ecosystem are moving (quarterly releases), along with the applications being built on top of them, teams are quickly realizing that they aren’t adding business value by constantly having to update and maintain those platforms.

The Growing Kubernetes Community

Obviously one survey does not define an industry or an industry trend. But let’s look at what has been happening around Kubernetes over just the last 6 months.

Kubernetes is Getting Easier to Use

A year ago, the container community went through some growing pains as Docker decided to embed Docker SwarmKit into the Docker Engine, rather than keeping the engine independent from the orchestration/scheduling layer. Some of the finest analysts in the industry thought this might destroy Kubernetes, claiming that Docker would just become the default orchestrator/scheduler because it was simple to set up. Instead, this seemed to bring the Kubernetes community together to focus on this challenge (which was well known at the time), and a broad range of projects has come out of this re-focus on setup and simplifying Kubernetes environments. Projects like minikube, minishift and several “Quick Start” efforts on public clouds (e.g. on GKE) have made it much easier to set up a local or remote cluster. And free training from Katacoda and the CNCF is expanding the knowledge base of certified operators and developers. Throw in 250+ meetups around the world, and the community of Kubernetes users and operators continues not only to expand, but to make it simple to get engaged and learn.


