One of the most interesting things about doing The Cloudcast (podcast) is the variety of topics we get to discuss and the perspectives from people across our industry. Over the past few weeks, we’ve done shows focused on the evolution of PaaS, trends with AWS and public cloud, and how large-scale web companies scale. Regardless of whether the perspective came from large Enterprises, Public Cloud providers, or large-scale operations teams, every one of the guests repeatedly brought up the growing demand for DevOps skills.
Let me step back and clarify a little bit. Saying a company needs “DevOps skills” is sort of like saying a company needs to buy “Cloud products”. Both Cloud and DevOps are terms that describe operational models, and in most cases they are intertwined. Cloud tends to focus on self-service, on-demand resources. DevOps focuses on agile development and agile operational models that are tightly integrated. Both aim to increase the velocity at which applications can be deployed (and updated) to help a business reach its goals.
There were a few interesting/insightful comments from those shows that got me thinking quite a bit:
“While many companies will use tools like Puppet or NewRelic (or whatever) to deploy to AWS, they have a lot of skepticism that their tools companies will be around in a few years. So these companies are encouraging their people to really understand the mechanics of how they deal with operations and deployments. Over the next 2yrs, DevOps skills will be the most in demand, especially for anyone that considers themselves in IT Ops today.” – Mat Ellis (Cloudability)
“The problems within an Enterprise are no different than the problems of a start-up, especially if you buy into the notion that every successful company over the next 10yrs will have to have a significant involvement with software (across any industry).” – Mark Imbriaco (GitHub)
Last week I wrote about how access to Cloud skills (free learning, on-demand resources) has never been easier. Knowledge of Linux, the ability to write code that interacts with APIs, working knowledge of some of the more popular open-source tools (Jenkins, Puppet/Chef/Ansible/SaltStack, etc.), and experience with some public cloud environments are all valuable whether you’re developing applications (e.g. “developer”) or creating tools (e.g. “operations”).
Start small. Learn, experiment, add in new things over time.
- Attend a local DevOps meetup. Get to know people in your area that have similar interests or existing skills that you can learn from.
- Follow some of the DevOps-related IRC channels to see what types of challenges are being addressed (here’s one from our local Triangle DevOps group).
- Get a single server/VM instance running (locally or on a public cloud).
- Set up a basic LAMP stack and get familiar with aspects of Linux or the basics of a database.
- Build a Puppet manifest that grabs the right set of installation packages, or start from an existing one and familiarize yourself with the logic and syntax. And learn about using Vagrant to keep your environments consistent.
- Set up a GitHub account and begin pushing your code or scripts to your repo.
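To make the LAMP step concrete, here’s a minimal sketch. The install commands assume a Debian/Ubuntu box (package names vary by distro and release), so they’re shown as comments; the check below them runs anywhere, no root required:

```shell
# On a fresh Debian/Ubuntu VM, the whole stack is a couple of commands
# (package names are the Debian/Ubuntu ones; adjust for your distro):
#   sudo apt-get update
#   sudo apt-get install -y apache2 mysql-server php5 libapache2-mod-php5 php5-mysql

# A quick, portable check of which pieces a box already has:
missing=""
for cmd in apachectl mysql php; do
  command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
done
if [ -z "$missing" ]; then
  echo "LAMP tooling already present"
else
  echo "still need:$missing"
fi
```

Once Apache is serving pages, dropping a one-line `phpinfo()` file into the document root is an easy way to confirm PHP is wired in.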
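For the Puppet step, a tiny manifest gives you something real to read and tweak. This sketch assumes the Debian package name `apache2`; the `puppet apply` and Vagrant lines are commented out since you may not have those tools installed yet, and the box name is just an example:

```shell
# Write out a minimal Puppet manifest: install Apache and keep it running.
cat > site.pp <<'EOF'
package { 'apache2':
  ensure => installed,
}
service { 'apache2':
  ensure  => running,
  enable  => true,
  require => Package['apache2'],
}
EOF

# With Puppet installed, apply it locally:
#   puppet apply site.pp
# With Vagrant, the same manifest makes every rebuild of your VM consistent:
#   vagrant init <some-base-box> && vagrant up
```

The `require` line is the part worth studying: it’s how Puppet expresses ordering (don’t try to start the service before the package exists), which is most of what these tools do for you.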
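And for the GitHub step, everything up to the actual push works locally. The remote URL below is a placeholder for whatever repo you create on GitHub:

```shell
# Create a throwaway directory and put a first script under version control.
dir=$(mktemp -d)
cd "$dir"
git init -q .

cat > hello.sh <<'EOF'
#!/bin/sh
echo "hello from my first repo"
EOF
chmod +x hello.sh

git add hello.sh
git -c user.email=you@example.com -c user.name="You" commit -q -m "Add first script"
git log --oneline

# After creating an empty repo on GitHub, wire it up and push
# (replace the placeholder path with your own user/repo):
#   git remote add origin git@github.com:<your-user>/<your-repo>.git
#   git push -u origin master
```

Even before you push anything, the habit of committing every script you write is the real skill here.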
As I’ve said many times, the modern data-center is a 21st-century bits factory. The story of how physical-world factories changed in the 20th century is well known, and it applies to almost any industry. There is still time to get ahead of this change and be valuable, whether you’re working in a local bits factory or using one provided by the public cloud.