From Silos to Services: Cloud Computing for the Enterprise


November 19, 2017  11:58 AM

The Evolution of the Kubernetes Kommunity and Konformance

Brian Gracely
API, certification, Cisco, Compliance, CoreOS, Google, Kubernetes, Linux, Microsoft, Red Hat, VMware

This past week the Cloud Native Computing Foundation (CNCF) announced its "Certified Kubernetes Conformance Program", with 32 distributions or platforms on the initial list. The program allows companies to validate that their implementations do not break the core Kubernetes APIs, and grants them the right to use Kubernetes naming and logos.

Given that the vast majority of vendors either now offer Kubernetes or plan to offer it in 2018, this is a valuable step toward reducing customers' concerns about interoperability between Kubernetes platforms.

[NOTE: We've been covering all aspects of Kubernetes on the new PodCTL podcast. The show is available via RSS feed, iTunes, Google Play, Stitcher, TuneIn and all your favorite podcast players.]

Beyond the confidence this can provide the market, the Kubernetes community should be credited for doing this in a transparent way. Each implementation must submit its validated test results via GitHub, and the testing uses the same automated test suite that is used for all other Kubernetes development. There's no 3rd-party entity involved.

Terminology

This may seem like a nitpick, but I think it's important to get some terminology correct, as loose terminology has confused the market with previous open source projects. (We dissected an open source release in a previous post.) While open source projects have been around for quite a while, and usage has grown significantly over the years, the market is still sloppy about how it speaks about them. Let's look at a couple of examples:

#1 – “The interoperability that this program ensures is essential to Kubernetes meeting its promise of offering a single open source software stack supported by many vendors that can deploy on any public, private or hybrid cloud.” (via CNCF Announcement)

#2 – Dan Kohn said that the organization has been able to bring together many of the industry’s biggest names onboard. “We are thrilled with the list of members. We are confident that this will remain a single product going forward and not fork,” he said. (via TechCrunch)

You'll notice that terms like product and stack are used interchangeably with project. This happens quite a bit, and it can set the wrong expectations for customers who are used to certain support or lifecycle commitments from the software they use. We often saw this confusion with OpenStack, which was actually many different projects pulled together under one umbrella name, but which could be used together or independently (e.g. "Swift" object storage).

It's important to remember that Kubernetes is an open source project. Offerings that pass the conformance test are categorized as either "distributions" or "platforms", which means they are vendor products (or cloud services). And this program doesn't cover things that plug into non-Kubernetes-API aspects such as the Container Network Interface (CNI), the Container Storage Interface (CSI) or container registries.

Beyond the Conformance Tests (and Logos)

While there are very positive aspects of this program, there are other elements that still need to evolve.

Projects vs. Products (Distributions & Platforms)

It is somewhat unusual to have a certification for an open source project, especially a fast-moving one like Kubernetes, since the project itself isn't actually certified; rather, the vendor implementations of that project (in product form) are. Considering that Kubernetes comes out with a new release every three months, it will be interesting to watch how the community (and the marketplace) reacts to constantly having to re-certify, as well as the questions that will arise about backwards compatibility.

Another area which is somewhat unique is that vendors have been allowed to submit offerings before they are Generally Available in the market.

A third aspect that will be interesting to watch is how certain vendors handle support for implementations if they don't really contribute to the Kubernetes project. For example, Pivotal/VMware, Cisco and Nutanix have all announced partnerships with Google to add Kubernetes support to their platforms. Given that those three vendors have made very few public contributions to the Kubernetes project, these appear to be more like "OEM" partnerships. So how will a customer get support for these offerings? Will they always need a fix from Google, or will they be able to make patches themselves?

Long-Term Support (LTS)

One last area that will be part of the community discussion in 2018 is a Long-Term Support (LTS) strategy. With new Kubernetes releases coming out every three months, many companies have expressed interest in a model that is more focused on stability. This eventually happened with the Linux community, and is beginning to happen with OpenStack distributions. It will be an interesting topic to watch, as many people within the community say that LTS models stifle innovation. On the flip side are customers that might need or want to run the software, but are struggling to keep up with frequent upgrades and patching.

November 19, 2017  8:34 AM

Looking ahead to AWS re:Invent 2017

Brian Gracely
Aurora, AWS, containers, GPUs, ISVs, Kubernetes, Lambda, Oracle, Service Broker, SIS

As we get closer to the annual AWS re:Invent event, it's time for all of us prognosticators to speculate on what new products and services AWS might announce.

What I’ve learned over the years is that their announcements tend to follow a few common rules:

  • It's never a great idea to be a top-level sponsor, as it means that your business is successful, which puts you on AWS's radar. I know that sounds weird, but AWS has much better insight into what happens on its platform than most IT vendors have into their ecosystems. In the past, there have usually been 1-2 of these companies that have their service replicated as a new AWS service each year.
  • If you're a business that hasn't adapted its business model in a while, you're potentially vulnerable to a new AWS service. Last year it was both managed services and virtual private servers.
  • If there is a popular technology that is primarily being used as DIY (Do-It-Yourself) today, it's not uncommon for AWS to create a bundled, more managed offering (e.g. AWS Aurora for databases).
  • AWS is less interested in maintaining the status quo, especially within IT, than it is in unlocking new potential for "builders" and business owners. This means that IT Ops teams may often feel threatened by new services that automate tasks that used to require highly skilled (and certified) IT personnel.
  • AWS chips away at highly complex problems piece-by-piece. Things like Big Data, Data Science, Machine Learning, Artificial Intelligence are huge challenges. AWS has been trying to make them modular and simplified with each new service they add.
  • Data is sticky. Data has gravity and is difficult to move. So AWS is always looking for new ways to get customer data into its services. The ingestion fees are usually $0, and the fees to take action on the data or send it back out of the system (e.g. interact with customer applications) are where AWS makes its money.
  • AWS has created a large portfolio of services and capabilities. They always like to talk about how many new features they have created. This is sometimes overblown, as any large IT vendor with a broad portfolio creates hundreds of new features each year across many products – they just usually don't talk about them in terms of feature counts. Last Week in AWS and Top Stories from AWS this Week are two excellent sources of information to keep up with new updates each week.

So given all of that, what might we expect to be announced at re:Invent?

  • More CPU types and adjusted pricing for Compute or Storage.
  • More Regions and Availability Zones, especially in Europe and Asia.
  • New networking capabilities, with a focus on higher bandwidth access into the cloud and across clouds.

Enterprise Partnerships – The biggest revenues in IT are in the Enterprise, which has been AWS’ focus for the last 3-4 years. Expect to see them continue to highlight SI partnerships to help scale delivery. Expect them to highlight some new ways that they’ll create hybrid cloud environments (example: AWS Service Broker with Red Hat OpenShift).

Lots of Data and Lots of Lambda: CTO Werner Vogels' keynote is supposedly going to focus on the intersection of data and serverless, two areas where AWS is extremely focused and two areas where their services are very sticky (read: much potential for lock-in to the AWS cloud). I expect to see many early-adopter customer stories and use-cases highlighted.

Going after "the old guard" – AWS likes to refer to large, existing IT vendors as "the old guard". Their favorite seems to be Oracle. They have been aggressively trying to offer alternatives to the Oracle DB (e.g. Aurora), as well as database migration tools into the AWS cloud. They've also gone after Oracle data warehouses with AWS Redshift. I'd expect to see them begin to target AWS Lambda at the edges of common Oracle DB capabilities (e.g. batch processing).

Containers – Containers have been a hot technology for the past 2-3 years. Many surveys show that customers are already running containers (e.g. docker) on AWS, along with homegrown Kubernetes clusters. AWS has a managed container service (AWS Elastic Container Service – ECS), but with the rising popularity of Kubernetes, I’d expect to see AWS offer a managed Kubernetes service to compete with Azure AKS and Google GKE.

Talk about Open Source: AWS has had a mixed track record on open source. They consume a lot of it, but their contributions have been scattered. Google and Microsoft have been highlighting their commitment to OSS. AWS' Adrian Cockcroft has been more visible this year, growing his open source team, so I'd expect them to highlight their commitment to open source.


October 22, 2017  3:56 PM

Is Per-Second Billing the Next Step to Unlimited Cloud Plans?

Brian Gracely
AWS, Azure, cloud, EC2, Public Cloud

As human beings, we either like having a level of certainty about how much we’re going to pay for something, or we don’t like to think about it at all. We buy an all-day pass at Disney World, or we sign up for bills to automatically be charged to our credit cards each month. Constantly having to think about the cost of an activity eventually takes away from the experience of any given activity.

In that context, one of the most complicated elements of cloud computing is pricing. We’ve covered it on The Cloudcast many times (here, here, here, here).

How much does anybody actually pay for a cloud service? There are fees to use a given service (e.g. computing), as well as fees to access the service (e.g. bandwidth), and then some fees are only charged based on certain usage patterns (e.g. outbound traffic vs. inbound traffic; intra-region vs. inter-region traffic). And then there are all the options about how to pay for the cloud service: on-demand, reserved instances, spot market, sustained usage discount, pre-emptive services, etc.
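To make the math concrete, here is a minimal sketch in Python of how those separate fee components might add up to a monthly bill. The rates and usage numbers are placeholders chosen purely for illustration, not actual provider pricing:

```python
# Hypothetical cloud bill estimator. All rates are illustrative placeholders,
# not real provider pricing; actual bills depend on region, instance type,
# tiered discounts and the pricing model (on-demand, reserved, spot, etc.).

COMPUTE_PER_HOUR = 0.10      # $ per instance-hour (assumed)
STORAGE_PER_GB_MONTH = 0.05  # $ per GB-month of block storage (assumed)
EGRESS_PER_GB = 0.09         # $ per GB of outbound traffic (assumed)
INGRESS_PER_GB = 0.00        # inbound data is typically free, to encourage ingestion

def monthly_bill(instance_hours: float, storage_gb: float,
                 egress_gb: float, ingress_gb: float) -> float:
    """Sum the independent fee components that make up one month's bill."""
    compute = instance_hours * COMPUTE_PER_HOUR
    storage = storage_gb * STORAGE_PER_GB_MONTH
    transfer = egress_gb * EGRESS_PER_GB + ingress_gb * INGRESS_PER_GB
    return round(compute + storage + transfer, 2)

# Example: two instances running all month, 500 GB of block storage,
# 200 GB sent out to users, 1 TB ingested at no charge.
print(monthly_bill(instance_hours=2 * 730, storage_gb=500,
                   egress_gb=200, ingress_gb=1000))   # -> 189.0
```

Notice how the inbound traffic adds nothing to the bill, while the compute, storage and outbound lines each accrue separately – which is exactly why the pricing feels hard to reason about.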

Per-Second Billing. Who needs it?

Recently, AWS made some noise by announcing that their EC2 (compute) and EBS (local storage) services would have the option for per-second billing. Google Cloud quickly followed by announcing similar pricing options, as well as reminding the market that some of its existing Big Data services were already billed at this granular level.

The initial reaction from the mainstream part of the market was similar to the infamous 8-minute abs vs. 7-minute abs debate. They asked who really needed per-second billing when there was already per-hour billing. Per-hour already feels very granular, especially in a market where the majority of companies buy computing in 3-5 year intervals.

But then we look at the rise of serverless computing, with its per-second billing for millions of transactions. The framework for these types of applications is already in place, albeit used by only a small fraction of the market.

And then think about all the short-run batch jobs that take place in the evenings or off-hours. Somebody has probably been looking at their cost spreadsheet from AWS and noticing that many of those runs lasted less than an hour, and yet they were still being billed for the full hour of EC2 or EBS usage. There is an opportunity to save money for those customers that have gotten sophisticated about their usage patterns.
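As a rough, hypothetical illustration of that savings opportunity (the hourly rate below is a placeholder, not a real EC2 price), consider a 12-minute nightly batch job: billed by the hour it pays for a full hour every night, while per-second billing charges only for the time actually used.

```python
import math

# Compare hourly vs. per-second billing for a short nightly batch job.
# The hourly rate is an assumed placeholder, not an actual EC2 price.
HOURLY_RATE = 0.20      # $ per instance-hour (assumed)
JOB_MINUTES = 12        # a short nightly batch run
RUNS_PER_MONTH = 30

def cost_per_hour_billing(minutes: float) -> float:
    # Usage is rounded up to the next full hour before billing.
    return math.ceil(minutes / 60) * HOURLY_RATE

def cost_per_second_billing(minutes: float) -> float:
    # Usage is billed for exactly the seconds consumed.
    return (minutes * 60) * (HOURLY_RATE / 3600)

hourly = RUNS_PER_MONTH * cost_per_hour_billing(JOB_MINUTES)      # 30 * $0.20 = $6.00
granular = RUNS_PER_MONTH * cost_per_second_billing(JOB_MINUTES)  # 30 * $0.04 = $1.20
print(f"per-hour: ${hourly:.2f}, per-second: ${granular:.2f}")
```

In this made-up case the granular model is one fifth the cost for the exact same workload, which is the kind of line item a sophisticated cost team would spot in its spreadsheet.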

So what’s next?

Maybe the end game is just very granular billing models, where the cloud providers can incentivize additional usage by bundling a certain amount for free each month. They already do this for a number of services. Maybe the serverless model has unlocked a new psychological threshold around cost modeling that AWS believes is the next frontier, similar to how we no longer think about the annual cost of Amazon Prime; we just enjoy getting FREE shipping with each order.

But maybe there is something else on the horizon. Maybe there are more bundled offerings coming. Maybe there will be an "unlimited" plan for enterprise IT, similar to how many enterprises buy ELAs (Enterprise License Agreements) today. The early approaches to all-you-can-eat cloud have had mixed reviews (and some failures), but failures have never stopped AWS in the past (remember the Fire Phone, before the Echo/Alexa?). Most CFOs would love some level of certainty around IT spending, so maybe the next frontier is to just buy blocks of "any cloud service", with some concept of "unlimited" usage.

In the bigger picture of things, the public cloud has already attacked “maintenance is hard”. Maybe the next item on their attack list is “cost modeling is hard”.


October 8, 2017  2:20 PM

Technical Support is a good 1st Step

Brian Gracely
AWS, Cloud Computing, Technical support

There is nothing worse than going home for the holidays and having a family member ask you to fix their computer, or WiFi, or smartphone, or Facebook account. It's not that the issues are that complicated; they often aren't. It's the pre-repair stories that you have to hear about what they didn't do ("nothing changed"), what they didn't click on ("Aunt Millie sends funny pictures"), or what they expect the computer to automagically do for them. And for that reason, many people assume that a role in Technical Support is 8×5 of your family's complaints over a long holiday weekend.

In reality, technical support is a great place to break into the IT industry. This guidance applies to people interested in computers, hardware, software, infrastructure or application development. My first “real” technical job was in Technical Support, at Cisco, many years ago, and I still believe it gave me the foundational skills that I’ve needed for almost every other job I’ve had in the tech industry.

So why am I bringing up technical support? This past week I had the opportunity to speak with a young woman named Tanya Selvog, who has been working her way into the technology industry through a series of training courses and self-driven initiatives. Her goal is to become a programmer/developer, but she mentioned that she’s often given the guidance that she should go into technical support first, as a way to get her foot in the door at technical companies.

I can understand the hesitation to move into technical support roles. The perception is that you're on the phone all day, taking questions from people who are upset about their current situation (e.g. computers are broken), and you're going to get the brunt of their frustration. Or, on the flip side, the belief is that your role would quickly be replaced by an automated bot or some other "intelligent system" that allows self-service for the customer. To some extent, both of those concerns are valid.

But the other reality is that technical support is going to force you to learn a technology, or several technologies, really in-depth. You not only have to know how the technologies work, but you also have to know how they behave when they are broken or misconfigured or misused. This expands your ability not only to learn, but to become a creative problem solver. Both of these skills are foundational for any role in technology. Technical support roles also force you to put yourself in the end-user's shoes and understand the challenges of technology from their perspective. This forces you to have compassion and empathy for people who don't get to go to training classes or have extra time to keep up with all the latest rumblings on Hacker News. It also gives you the opportunity to take the self-guided initiative to figure out ways to make problems go away on a broader scale. Maybe that means developing a new tool that customers can easily download to fix a problem, or making sure a problem doesn't happen in the first place.

The bottom line is that technical support can be a great role for learning technology, learning people skills and learning how to proactively scale your knowledge. I've seen plenty of people who started in the IT industry in tech support roles (at vendors, at customers, as independent contractors) and went on to much bigger roles over time because they could blend all those skills together.

Technical support isn’t for everyone, but if you’re trying to break into the industry and your first path is bumpier than expected, maybe give it a try for a year or two. The cloud is going to need a whole new set of people, with modern skills, that are going to need to keep all those devices working as expected – both for businesses and for in-laws.


September 30, 2017  2:45 PM

Dissecting an Open Source Release (Kubernetes 1.8)

Brian Gracely
Kubernetes

As more IT organizations adopt open source technologies, it’s useful to understand that community releases are somewhat different than proprietary software from a single vendor. Let’s take a look at the most recent release of the popular Kubernetes project, version 1.8.

What’s Included with Kubernetes?

Last week on PodCTL (@PodCTL) we discussed a topic that often comes up with companies that want to build a DIY platform using Kubernetes. How much is included in the Kubernetes open source project, and how many other things have to be integrated to create a functional platform for deploying applications into production?

We explored:

  • What’s included?
  • What’s not included?
  • How to manage integrations.
  • What standards exist to make it simpler?
  • What’s the difference between DIY, a Kubernetes distribution, a Kubernetes platform, or a Kubernetes cloud service?

Anatomy of a Release

The first thing you'll probably notice that's different between OSS and proprietary "releases" is that OSS often contains individual features marked as "alpha" or "beta". In the Kubernetes 1.8 release, they are broken down between alpha, beta and "stable". These relate to the level of maturity of each feature, as determined by the project's steering committee. Since OSS project software comes with DIY support (via mailing lists, Slack groups, IRC, etc.), it's up to companies to decide if they want to enable certain features.

If companies decide to use a commercial offering based on OSS, or a public cloud service based on OSS, then it’s important to understand what they will fully support vs. make available as “alpha or beta” support.

For example, from Google’s GKE service:

OSS alpha features are available on GKE Alpha clusters. GKE Alpha clusters aren't upgradable, but are managed (which makes sense; OSS alpha means the APIs will break). Every new Kubernetes release is rolled out immediately to GKE Alpha clusters. Everything else is available in regular GKE clusters, which are fully managed and supported at the cluster level.

Who Built the Features?

Oftentimes, after a release is published, people will attempt to analyze who developed certain features. Some vendors publish blogs highlighting their involvement (e.g. here, here, here). Other vendors highlight projects that add value outside the core project, as they focus more on training or services. You can always dig into the details through sites like Stackalytics or GitHub. You can also dig into the details of individual features or functional areas by reading the notes from the Special Interest Groups (SIGs).

There are lots of opinions on how to interpret stats around OSS commits. Some believe they include too much vanity, or can be gamed. Others believe that they are a good indicator of how much an individual company (or person) is investing in a project, and how much they could help customers that want official support of an OSS project.

Updates Come More Frequently

The last thing to keep in mind is that OSS releases tend to come out much more frequently than proprietary releases. In the case of Kubernetes, it's consistently been every three months. This means that if a company is running the software itself (vs. using a public cloud service), it needs to be prepared to stay up-to-date with infrastructure/platform updates. That requires skills around CI/CD and config management tools like Ansible, Chef and Puppet. This is a skill that development teams have been evolving for the past few years, but it's one that operations teams will have to improve quickly.
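As one small illustration of what staying up-to-date implies operationally, here is a hedged Python sketch that flags how far a cluster has drifted behind the latest upstream release. The version strings are examples, and the three-month cadence is the pattern described above rather than a guarantee:

```python
# Rough check of how many Kubernetes minor releases a cluster has fallen behind.
# The version strings are examples; in practice the cluster version would come
# from your own tooling and the latest release from the upstream release notes.

def minor_version(version: str) -> int:
    """Pull the minor number out of a 'v<major>.<minor>.<patch>' string."""
    return int(version.lstrip("v").split(".")[1])

def releases_behind(cluster: str, latest: str) -> int:
    return max(0, minor_version(latest) - minor_version(cluster))

cluster_version = "v1.6.4"   # example: what the ops team is running today
latest_release = "v1.8.0"    # example: the current upstream release

behind = releases_behind(cluster_version, latest_release)
if behind >= 2:
    print(f"{behind} minor releases behind -- with a ~3 month release cadence, "
          f"that is roughly {behind * 3} months of drift; plan the upgrades now.")
```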


September 24, 2017  12:21 PM

Creating a Technology Podcast

Brian Gracely
Google Play, iTunes, Podcast

Usually I use this blog to write about new technologies or technology trends. But as I've been out on the road lately, I've had quite a few people ask me how to get started in podcasting. There are lots of podcasts out there these days, covering nearly every topic. I co-host a couple of weekly podcasts (The Cloudcast and PodCTL) that are regularly in the iTunes Top 100 for technology, so I suppose I have some scars and experiences that can be shared with others. Here are some of the most common tips I'd give anybody who wants to create a technology podcast.

btw – here are some great tips from the company that I host my podcasts with (Buzzsprout)

The Basics

We’ll get into more details later, but here are the basics of every successful podcast:

  1. Be interesting. People are potentially going to give you some of their valuable time, and allow you to be right in their ears, so don’t waste that opportunity.
  2. Be consistent. If people like your content, they want to know when it’ll be available. It’s difficult to do, but focus very hard on being consistent in producing new content. The goal should probably be weekly, and around the same time each week.
  3. Be "listen-able". In a digital world, it's very simple: people won't waste their time on good/great content if the audio quality is bad. Be willing to invest in good microphones and some basic editing tools. It only costs a few hundred dollars and will often be the #1 or #2 factor in whether you build a following.

Beyond those basics, here are some other things to consider.

Figure out why you want a podcast.

When we got started with The Cloudcast, our goal was to find a way to learn about the fast-moving technologies being created in Silicon Valley. We lived in Raleigh, NC and didn't have immediate access to the people making it happen, so we decided we needed a way to facilitate that learning. Our basic premise was that we wanted to learn, and those people might be interested in having their knowledge broadcast on the Internet. It was a win-win scenario.

So for some people, the ability to have technical conversations is the motivation for having a show. For other people, maybe it’s the need to get better at “public” speaking. Still others may want to use it as a marketing vehicle for their company, using this growing new medium.

Pick a topic that you’re passionate about.

If your goal is to create the show on a regular basis, make sure to pick a topic or area that you’re passionate about. You have to bring energy to the discussion, otherwise your listeners won’t give you their time and attention.

And consider a topic/area that has some range, because otherwise you might find yourself out of things to talk about after just a few episodes.

Consider the format – scheduling is difficult.

Some shows are two people, just talking. Other shows have more co-hosts or bring on guests. The general rule of thumb is that the more people included, the more complicated it will be to schedule the show. It also makes the dynamic of the on-air conversation more difficult, as you don't have the normal visual cues that you get with face-to-face conversations.

With regard to scheduling, the best advice I can give is to pick a regular block of time that you'll record and stick to it as much as possible. This aligns with Tip #2 (above) about consistency. If you have a fixed time to record, it makes it easier for you to coordinate your schedule and it simplifies the scheduling with co-hosts and guests. People have busy schedules, and the last thing you need is 10-12 emails floating around about times when people "might" be available.

Another consideration for format is that most technology podcasts are either “educational” or “entertainment”, with the majority being the former. What this means is that you probably need to spend less time giving opinions and more time either not talking (let the guests speak) or focusing on educating in a non-biased way.

Spend some time on the audio quality

There are two pieces to the audio quality:

  1. The actual quality of the sound.
  2. The quality of the conversation, from the perspective of speech “tics” (e.g. umms, ahh, you knows, rights, etc.).

The quality of the sound is fairly easy to address these days. There are lots of lists of the best podcasting microphones, most of which can be acquired for a few hundred dollars. A couple things to consider:

  • Will you be recording in a known location (e.g. your house/apartment), or will you want to be mobile in how you record?
  • Get the “pop” filter to go in front of the microphone.
  • Get a basic stand for the microphone so you can direct the angle at which you speak into it.
  • If you use a table-top stand, get a shock mount to avoid the noise created when you nervously tap on the table while you’re talking.

It seems obvious, but try to record in a quiet location. It also helps if the location lacks echo, which can be minimized with things like rugs, furniture, curtains or other noise-cancelling mechanisms.

Regarding the quality of the actual content, this is ultimately a time vs. quality tradeoff. In 99.99% of shows, you're going to have people with the normal human tics in their speech. These will range from occasional to consistently annoying. When it comes time to publish the show, you have to decide how much time you're willing to spend editing these out (or reducing them). It can often take 1-2 hours of editing per 30 minutes of audio. In general, I'd suggest not spending much time cleaning up the natural conversation, unless it gets in the way of pulling useful information out of the show.

Recording

When it comes to recording, there are two models that most people use:

  1. Record it themselves, using something like Skype, Google Hangouts or one of the many conference services (Zoom, Bluejeans, WebEx, Highfive, etc.). We typically use Skype, just because we always have and it’s fairly ubiquitous, and we record with a tool called Audio Hijack Pro.  There are lots of recording tools out there. Just be sure that you can natively record from the app/service being used, and I highly recommend recording in AIFF format for higher fidelity. Using AIFF also comes in handy when editing.
  2. Use a 3rd-party service for podcast recordings like Zencastr. It costs some money, but it takes out lots of hassles for you (trade off time vs. money). Plus people can record directly from the web.

Now, if you decide to record it yourself, there are two tools that I highly recommend. Audacity is a very good tool for editing and mixing audio tracks. For example, you might want to add music/intro/outro tracks to the beginning and end of shows. Levelator is a great tool to make sure that all the audio on the show is at the same sound level. This avoids one person talking too loudly while another talks too softly. NOTE: Levelator only works with a few audio formats, hence why I mentioned recording in AIFF earlier. If you have to record in MP3, then use Audacity to transcode it to AIFF for you.
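If you would rather script that transcoding step than do it by hand in Audacity, here is a minimal sketch using the pydub library (an assumed alternative tool on my part, not part of the workflow described above; it requires an ffmpeg binary on the PATH):

```python
# Convert an MP3 recording to AIFF so it can be run through Levelator.
# Requires the pydub package and an ffmpeg binary on the PATH (assumed setup).
from pydub import AudioSegment

def mp3_to_aiff(mp3_path: str, aiff_path: str) -> None:
    audio = AudioSegment.from_mp3(mp3_path)   # decode the MP3 recording
    audio.export(aiff_path, format="aiff")    # write it back out as AIFF

mp3_to_aiff("raw-episode.mp3", "raw-episode.aiff")
```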

..and finally, Publishing

I’m sure there are advanced strategies for publishing, but we tend to live by one simple rule. Publish it everywhere you can afford, because you never know where your audience will be.

  1. Use one of the podcast publishers like Buzzsprout or Libsyn. They will store the files for you and give you an RSS feed. They usually charge a monthly fee based on how many hours of content you produce.
  2. Add the RSS feed to the major distributors – iTunes, Google Play, Stitcher, TuneIn, SoundCloud. NOTE: Some of these will distribute for free, while others charge a monthly fee. Still others, like Spotify, have a filtering process and aren't easy to get into.
  3. Let people know where they can find your show through various social media channels, word-of-mouth or any other marketing channel you have access to.


September 23, 2017  1:04 PM

Are Container (Standards) Wars Over?

Brian Gracely
Docker, Google, Kubernetes, Linux, Microsoft, Red Hat, swarm

The open source world is different from the proprietary world in that there really aren't formalized standards bodies (e.g. IEEE, IETF, W3C, etc.). That world is mostly de facto standards, with some governance provided by foundations like The Linux Foundation, the Cloud Native Computing Foundation, the Apache Foundation and several others.

Container Standards – Migrating to OCI

In the world of containers, there have been many implementations for customers to choose from over the past 5-7 years. On the container side, there were native Linux capabilities like cgroups and namespaces, and then simplified implementations like LXC, docker, rkt, appc and runc. A couple of years ago, a new group was formed (the Open Container Initiative – OCI) to try to unify around a common container format and runtime. It took a couple of years, but the OCI has finally come out with OCI 1.0. We discussed those details with one of the project leads, Vincent Batts from Red Hat. We dug into the list of requirements, and how they created a standard that works across Windows, Linux and Solaris operating systems.

Container Orchestration Standards – Kubernetes is Leading the Pack

Around the same time that OCI was getting started, several options were emerging for container orchestration. The PaaS platforms had all created their own homegrown orchestrators several years before. But maintaining your own orchestrator is a very difficult engineering task. The game changed when some of the web scale companies, specifically Google and Twitter, released implementations of their internal systems (Kubernetes and Mesos, respectively) into the open source communities. In addition, Docker created the Swarm orchestrator for the docker container format/runtime.


For several years, companies and customers made investments around each of these standards.  Dozens of comparative charts were made, trying to position one vs. the others. But over time, more and more developers started focusing their efforts on Kubernetes. By the current count, Kubernetes now has 4-5x as many developers as all the other projects combined. Early adopters like Google, Red Hat and CoreOS jumped into the Kubernetes project, and recently almost every major vendor has gotten in line. From VMware to Microsoft to Oracle to IBM, and a list of startups such as Heptio, Distelli, and many others.

One important thing to note about Kubernetes is that CRI-O, the Container Runtime Interface implementation for OCI-compliant runtimes, is now the default. In the past, the docker runtime was the default, but Kubernetes is now allowing more standards-based options to be used. Google's Kelsey Hightower has an excellent write-up, as well as making it part of his "Kubernetes the Hard Way" tutorial.

And beyond Kubernetes, we’re also seeing the most popular ecosystem projects being focused on Kubernetes first. From monitoring projects like Prometheus, to application service-mesh projects like Istio and Linkerd. As CNCF CTO Chris Aniszczyk recently said, “CNCF is becoming the hub of enterprise tech.”


August 27, 2017  10:03 AM

Can VMware cross the Cloud chasm?

Brian Gracely
AWS, Azure, containers, HCI, Kubernetes, Nutanix, VMware, VMworld

Around this time every year, prior to VMware's VMworld conference, I write a predictions post. Here are a few from past years – here, here, here, here, here. This year I decided to change up the format and record it as an audio podcast – The Cloudcast #308 – Can VMware cross the Cloud chasm? – with some of the smartest people in the industry covering VMware, virtualization, software-defined infrastructure and hybrid cloud. Keith Townsend (@ctoadvisor) and Stu Miniman (@stu) will be hosting theCUBE all week at VMworld, and have covered the event for the last 5-7 years.

Here is some of their pre-show coverage:

It was a great discussion that covered updates on the Dell/EMC/VMware merger, the evolution of VMware's ecosystem vs. public cloud providers like AWS, elephants in the room, and the HCI market.

  • Topic 1 – What’s new in your world and what trends are on your radar these days?
  • Topic 2 – A couple weeks ago The Register forecasted that VMworld is a shrinking event and a stark contrast to the growth of AWS re:Invent. From your perspective, what’s the state of the VMware ecosystem these days?
  • Topic 3 – With Dell being private but VMware still being public, and VMware's stock up about 2.5x since the merger ($40 to $100), is there any sense of how money is flowing within VMware (e.g. R&D) vs. flowing over to pay Dell's debts?
  • Topic 4 – You both get access to the VMware executive team during the week of VMworld. What questions do you wish you could ask them, but it’s not appropriate during the interview or Q&A formats that exist?
  • Topic 5 – Can you explain VMware’s “Cloud Strategy” to us?
  • Topic 6 – HCI (HyperConverged Infrastructure) is growing very quickly and all the vendors now have an HCI play (Nutanix, DellEMC VxRail, HPE Simplivity/Nimble, Cisco SpringPath, Red Hat HCI, etc.). Does the market need this many similar offerings? Which one of these is non-existent in 3-4 years?


August 22, 2017  7:18 PM

Digging into Containers and Kubernetes

Brian Gracely
containers, Docker, Google Play, iTunes, Kubernetes, OpenShift, orchestration, RSS

For the last 6+ years, I've hosted a weekly podcast called The Cloudcast (@thecloudcastnet), which focuses on all aspects of cloud computing. One of the challenges of having a broad focus is that you rarely ever get to dig deeply into a specific topic. And when that topic is growing extremely fast (e.g. Kubernetes), it can be frustrating to spend so little time understanding the technical and business elements.

A year ago, the buzz around “serverless” or functions-as-a-service was growing so loud that we kicked off The ServerlessCast (@serverlesscast) shows. Those allowed us to go in-depth on many topics around serverless.

For the last 2+ years, the rise of containers and container orchestration has been tremendously fast, with open source projects like docker and Kubernetes gaining market share and attracting thousands of developers. So once again, we're spawning off another new podcast to dig into this technology in depth. The show is called PodCTL (@podctl), and it can be found via RSS feed, iTunes, Google Play, Stitcher and all your favorite podcast players. This weekly show will come in two flavors – the PodCTL shows and the PodCTL "Basics" shows. The PodCTL shows will go in-depth on technical topics, feature interviews with thought leaders and developers, and review the news of the week along with areas for learning the technology. The "Basics" shows will be short, ~10-minute introductions to core technology areas, suited to beginners or business leaders looking to better understand this space.

The first 3 shows are now available:

Each show comes with show notes that provide links to all the information discussed each week, such as news stories, training, new technology announcements, etc.

Feedback to the show is always welcomed. If you enjoy the show, please subscribe, tell a friend or give it a rating/review on your favorite podcast network.


August 10, 2017  4:23 PM

Bringing DevOps to Highly Effective People

Brian Gracely
Automation, Deming, DevOps

What aspect of DevOps poses the biggest challenge for IT leaders? As I discussed DevOps with hundreds of leaders during a recent event series with Gene Kim, an author of The DevOps Handbook, I heard one answer loud and clear: it's the culture change. The concepts of blended or shared responsibilities, blameless postmortems, and speed vs. stability often run counter to the principles you've been taught about leading IT. You realize that the future of the business will be highly impacted by your ability to deliver software faster (via new features, new products, and new routes-to-market) – but you struggle to find a language or framework to communicate to your teams (and peers) about how to make DevOps and these results happen.

Far too often, we ask IT leaders to learn the language of lean manufacturing or the principles taught by W. Edwards Deming. While these approaches are valuable, and can draw a parallel between factories of physical goods and 21st-century bits factories, they require yet another learning curve before change can occur and take hold in an organization.

So with that in mind, I thought it would be useful to adapt a popular business framework, Stephen Covey's "The 7 Habits of Highly Effective People", into a model that can be used by business leaders trying to bring DevOps culture into their organizations. Let's see if this can help solve your language struggle.

1. Be proactive

Let’s start with the one constant in IT: Change happens, constantly. If you started 20 years ago, client-server was just beginning to gain momentum. If you started 10 years ago, the iPhone had barely been introduced and AWS had 2 services. If you started 5 years ago, Linux containers were still overly complicated to use and web-scale companies weren’t open sourcing important frameworks.

We’ve seen massive changes over the last 50 years regarding which companies lead their respective industries, and which market segments are the most valuable (hint: today it’s technology). Business leaders must recognize that technology is driving these changes, at a more rapid pace than ever before and be proactive at being prepared for the next round(s) of changes. Be the change agent that the business requires. Be responsible for behavior, results, and growth.

2. Begin with the end in mind

No business executive wakes up and says, “We have a DevOps problem!” Instead, you lose sleep over reducing time-to-market for new capabilities, reducing security risks, and other metrics that can be directly tied to the top or bottom line. This is why I believe that “Confidence in Shipping Software into Production” is the most important DevOps metric.

At its core, this metric begins with the end in mind: Can we get software into production safely and reliably? From there, you work backwards to determine how frequently this can happen, which leads to an examination of existing skills (people), ability to manage deployment frequency (culture), and if the right tools and platforms are in place (technology). Focus your time and energy on things that can be controlled.


3. Put first things first

You need to execute on the most important business priorities. While it's easy to imagine what a greenfield organization would do to make "DevOps" the default technology culture, this is not immediately achievable for most organizations. Their org charts are optimized for yesterday's business model and distribution strategy. They have many application platforms, often siloed for different lines of business. And they need to adapt their applications to become mobile-native in order to address emerging customer expectations.

Putting first things first, these core elements need to be in place before the business can expect to be successful at rapidly deploying software into production.

  • Automation – It is critical to build core competency in the skills and tools needed to automate repetitive tasks, both for applications and infrastructure. For many companies, it’s valuable to begin with a focus on existing applications (both Linux and Windows) and infrastructure (e.g. networking, storage, DHCP/DNS), and then evolve to automating new applications and services.
  • CI/CD Pipelines – Just as factories (since the early 1900s) have been built around assembly lines, building modern software is driven by automated pipelines that integrate source code repositories, automated testing, code analysis, and security analysis. Building skills to manage pipelines and the process around frequent software updates is critical for building a framework to manage frequently updated software applications.
  • Application Platform – Once applications have been built, they need to be deployed into production. In today's world, customers expect to get updates to their software on a frequent basis (e.g. mobile app updates each week), so it's important to have a repeatable way to deploy application updates and scale to meet the business demands on the application. Managing the day-to-day activities of applications is the role of an application platform. For many years, companies tried to build and maintain their own application platforms, but that approach is rapidly changing as companies realize that their value-add is in the applications, not the platform.

Once these elements are in place, many IT teams are ready to start containerizing their existing and modern applications.


4. Think win-win

Far too often, the DevOps discussion is framed as the tension and disconnect between Development and Operations teams. I often call it an “impedance mismatch” between the speed that developers can push new code and the speed that operators can accept the updates and make sure that production environments are ready.

Before we blame all the problems on operations being too slow, it’s important to look at why it’s believed that developers are so fast. From the 2017 State of DevOps report, we see that Gene Kim (and team) measure the speed at the point when developers push code into source control (e.g. Git, GitHub.)

They aren’t measuring the speed of design and development. Even in a microservices environment, it can take several weeks or months to actually develop the software features.

So how do teams potentially get to a win-win scenario? Here are a few suggestions:

  • As Operations teams adopt automation tools and Infrastructure-as-Code principles (e.g. using source control for automation playbooks), both development and operations begin to use common practices and processes.
  • For Development teams, insist that security people are embedded within the development process and code review. Security should not be an end-of-process step, but instead embedded in day-to-day development and testing.
  • For both teams, require that automated testing becomes part of normal updates. While many groups preach a “cloud-first” or “mobile-first” policy for new applications, they should also be embracing an “automated-always” policy.


5. Seek first to understand, then be understood

Six or seven years ago, nearly every CIO said that they wanted to try and emulate the output of web scale giants like Google in terms of operational efficiency (1000 servers per 1 engineer) and be more responsive to developers and the business. Unfortunately, at the time, it was difficult to find examples of companies outside of Silicon Valley that could actually execute at a similar level. And the technology Google used was not publicly available. But times have changed significantly over the last few years. Not only is Google’s technology (e.g. Kubernetes) readily available via open source projects, but the examples of enterprise companies implementing similar success are plentiful.

So before you send a few architects out to Silicon Valley to study with the masters, it might be more valuable to study companies similar to your own. This will surface experiences that are industry-specific, region-specific, and use-case-similar. It will also help answer the question, "But how can we do that without hiring 100+ PhD-level engineers, or $1M+ salaried employees?" Sometimes the right answer is to leverage the broad set of engineers working on popular open source projects.


6. Synergize

I’ve often said that everything someone needs to be successful in DevOps they learned in first grade. For example, “play nicely with others,” “share,” and “use your manners.” The challenge is that org charts and financial incentives (such as salaries, bonuses, and promotions) are often not aligned between Dev and Ops teams in order to accomplish these basic goals.

Some knowledge of Conway’s Law comes in handy here. If the goal is a specific output (e.g. faster deployment of software into production), make sure that the organizational model is not the top barrier to accomplishing the goal. Cross-pollination of ideas becomes critical. Teams need to share their goals, challenges, and resource availability with other teams. You want to innovate and problem solve with those who have a different point of view.


7. Sharpen the saw

It would be easy to say that IT organizations need to make sure that their teams are keeping up-to-date on training and new skills. But all too often, this becomes a budget line-item that gets ignored. The proper way to address the need for "skills improvement" is not to think about it as "training" (perhaps attend a course, get a certification), but rather to incorporate it into actual work activities.

We’re moving into an era in IT where all the rules and best practices that have been stable for the last 15 to 20 years are being re-written. This means that it’s critical to leverage modern ways to learn new skills. Encourage employees to seek the best way for them to learn (such as side projects, meetups, and online learning) and then have them bring those new skills back to the rest of the team. Make it a KPI to improve the skill levels and skill diversity of the entire team, with incentives for individuals and the overall team to get better. Bottom line: Seek continuous improvement and renewal professionally and personally.


The importance of storytelling

The 7 Habits framework has proven to be successful in helping individuals and groups improve interpersonal skills. Those skills are at the core of any cultural transformation.


Beyond the 7 habits, one more skill should be on every leader’s mind. One of the most important skills that IT leaders can leverage as they drive transformation is the ability to tell stories of success and change. The storytelling skill can inspire emotions and can help spread successes from group to group. Storytelling also helps a group to personalize its challenges, and adapt solutions to suit the group’s particular cultural nuances.

