Uncharted Waters

Oct 2 2018   11:08AM GMT

The Pipeline Problem

Profile: Matt Heusser

Tags:
Continuous delivery
DevOps
DevOps - testing / continuous delivery
devops training
future in devops

Pipelines at KWSQA

I spent part of last week at the Targeting Quality Conference in Cambridge, Ontario, Canada. The conference is hosted by the Kitchener-Waterloo Software Quality Association and is commonly known as “KWSQA-Conf.” While there, I spent some time in the DevOps tutorial with Lisa Crispin, Rob Bowyer, and Mike Hrycyk. One of the exercises was designing your own Continuous Delivery pipeline.

Given a real, large, familiar website like Facebook, Google, or Kayak, we had to come up with a bug or feature request, and then design the pipeline for that feature.

Instead of dealing with the excuses of our painful reality (the database is non-relational, the transactions need to run in batch overnight, and fill_in_problem_here), we started by removing all obstacles and designing an ideal state.

It is a good exercise.

The challenge I see is moving from “regression testing” to a continuous delivery model. The problem isn’t coming up with the ideal state, or mapping out the existing state.

The problem is minding the gap.

Defining Continuous Delivery

Lisa Crispin, DevOps Pipelines

Hrycyk gave a definition of Continuous Delivery that was deliberately vague. I don’t mean that as a criticism; his definition boiled down to “the ability and practice of moving small pieces of code to production very frequently.” That is, there is a process for moving code, a pair of programmers can roll independent changes toward production several times a day, and the process to get that code rolled out is measured in hours. Perhaps, on the long end, the process might take a business day.

Mike did make a distinction between Continuous Delivery, which can require a human to push the final deploy button, and Continuous Deployment. Under Continuous Deployment the entire pipeline is automated, so the code rolls out as soon as all the checks pass. It is possible to roll out changes “dark,” hidden behind a configuration flag. In practice that means the programmer writes an “if” statement, with the new code behind it and the older code on the “else” side. The “if” statement is typically tied to a user type, so, perhaps, the code is not dark for testers. Of course it is possible the programmer makes a mistake in that “if” statement and the changes roll automatically to production; that is where practices like continuous monitoring and the ability to roll out a (very) quick fix come into play.
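To make the dark-launch idea concrete, here is a minimal sketch in Python. The flag table, the “tester” user type, and the checkout functions are hypothetical names used only for illustration; a real team would more likely use a feature-flag service or configuration system than a hard-coded dictionary.

# A minimal sketch of rolling a change out "dark" behind a configuration flag.
# FEATURE_FLAGS, the "tester" user type, and the checkout functions are
# illustrative assumptions, not any particular product's code.

FEATURE_FLAGS = {
    "new_checkout_flow": {"tester", "internal"},   # user types that see the new code
}

def feature_enabled(flag_name, user):
    """True if the flag is switched on for this user's type."""
    return user.get("type") in FEATURE_FLAGS.get(flag_name, set())

def old_checkout(cart):
    return sum(cart)                  # existing, known-good behavior

def new_checkout(cart):
    return round(sum(cart), 2)        # new behavior, still dark for customers

def checkout(cart, user):
    # The "if" statement described above: new code on one side, old on the other.
    if feature_enabled("new_checkout_flow", user):
        return new_checkout(cart)
    return old_checkout(cart)

print(checkout([10.0, 5.5], {"type": "tester"}))    # testers exercise the new path
print(checkout([10.0, 5.5], {"type": "customer"}))  # everyone else gets the old path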

Many organizations I work with struggle to get a new release out to production at the end of a two-week sprint. Some have a regular sprint followed by a “hardening/deploy” sprint. It also varies by product type: it might be just fine for a Microsoft Windows desktop product to have four heavily tested releases a year.

And yet.

Somehow, when teams accidentally let out a show-stopping bug with an easy fix, they find a way to deploy the change rapidly.

So start there.

Defining the Pipeline

Create a map of the path from commit to in-production for the simplest possible bug fix, one where you can have high confidence the change should not have broken anything else. That process might be a simple flowchart. Lean Software Delivery teaches us that fewer steps is better: the more steps, the more opportunity for “blowback” that requires human intervention and causes delays.

A Pipeline Diagram

There’s an example. Something simple, easy, light.
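For teams that prefer a script to boxes and arrows, here is a rough sketch of what such a map might look like once written down. The stage names and make targets are illustrative assumptions, not a recommendation for any particular toolchain.

# A minimal sketch of a from-commit-to-production map for a trivial bug fix.
# The stage names and shell commands are assumptions for illustration only.

import subprocess

PIPELINE = [
    ("build",          "make build"),
    ("unit tests",     "make test"),
    ("deploy staging", "make deploy ENV=staging"),
    ("smoke check",    "make smoke ENV=staging"),
    ("deploy prod",    "make deploy ENV=production"),
]

def run_pipeline(stages):
    """Run each stage in order; stop on the first failure."""
    for name, command in stages:
        print(f"[{name}] {command}")
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            # A failed step is the "blowback" that needs human intervention.
            print(f"[{name}] failed, stopping the pipeline")
            return False
    return True

if __name__ == "__main__":
    run_pipeline(PIPELINE)

The fewer stages in that list, the fewer places for blowback to creep in.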

The classic approach is not to make test/deploy cheap (above) but instead to ship less often. In the name of “saving time,” we group more changes per release. Those changes create uncertainty and instability. The instability requires testing and other forms of risk management. The testing and risk management are expensive, so we try to deploy less often, with bigger batches. The bigger batches have more uncertainty, testing becomes more expensive, so we deploy less often … in a vicious circle.

More Than One Pipeline

Most teams I work with have several different pipelines, depending on the size of the change and the risk it introduces. They have the “usual” cadence for “regular” deploys, one for emergency changes, and sometimes an even slower cadence for large changes.

The emergency pipeline is usually too slow to use for regular rollouts. It does, however, tend to contain the skeleton of steps required for every change that goes to production. Automating the emergency rollout process therefore speeds up every deploy. Once build/deploy is fast, the team can work to add the monitoring, architecture, and other changes that enable continuous delivery.
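As one hedged illustration of that idea, the emergency path can be written as the same kind of stage list used in the earlier sketch, stripped to the skeleton every change needs; the stage names here are again assumptions.

# A rough sketch: the emergency pipeline as the required skeleton.
# Automating these few stages first speeds up every deploy; the regular
# pipeline can then grow more checks on top of the same structure.

EMERGENCY_PIPELINE = [
    ("build",        "make build"),
    ("smoke tests",  "make smoke"),
    ("deploy prod",  "make deploy ENV=production"),
    ("health check", "make check-health ENV=production"),
]

# run_pipeline(EMERGENCY_PIPELINE)  # driven by the run_pipeline() sketch above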

So start with the emergency process. Make that automatic. Grow the pipeline from there.

And leave a comment or drop me an email.

I’d love to hear how it is working for you.
