My family and I moved into a new house about a month ago. Buying houses, selling houses, and moving are all terrible. I don’t recommend any of it, especially four months after having a new child. About a week after we moved in, salespeople started coming by. This is whatever you would call the in-person version of a cold call: no warning, just a random person showing up on my doorstep, usually around dinner time, to hawk their wares. In this case, they were selling security systems.
Each salesperson used an identical tactic. It reminded me of everything I dislike about salespeople, and filled me with a touch of self-loathing for having to sell.
At one time or another in most technical careers, we get, well … a chance. Due to a re-org, a firing, or maybe a new job, the technical staff is finally free from ignorant middle management. So we make our plans.
The next generation product is going to be good. Really good.
They want to know when it’s going to be done and how much it will cost. Fuh. Not our problem.
Two months later, the company has a new manager, even more ignorant than the one before. This one promises the work will be done before we even know what we are building!
… What just happened?
The body of the paper was shorter than the title.
The title of the paper was “On the Optimization of physically distributed analysis, design, code, test, operations, and management resources.”
The body of the paper was two words:
At the time, the best advice in the software literature said that communication was expensive. That made the handoff between roles expensive, and misunderstandings and “backwash” when problems escape to the next role more expensive still. Recognizing that, we hoped to improve delivery by “getting it right,” so defects could not escape to the following step. For example, we talked about creating a specification that was the three C’s and a U: Consistent, Complete, Correct, and Unambiguous.
Around this time, a group of rebels in Utah created the Agile Manifesto, which made the claim that responding to change is more valuable than following a plan.
Eighteen years later, I’ve come to believe there is a better way to develop software, and applying it to distributed teams is easier than many people think.
Let’s talk about it.
Take any part of the development process – say testing, my bread and butter. Your team has it all figured out, with a magic automation tool. By magic, I mean exactly that. Push a button and you get instant test results, clearly articulated. Not fast. Instant. I’m talking about magic.
The automation tool never makes a mistake. It magically knows what a change will be and how it should impact the system, so it never reports a false error when the system behavior changes. Because of this, it needs no maintenance. Just change the code, click the button, see all the problems.
Now, will having that accelerate your software delivery at all?
In many cases, the answer is “no.” Even in the organizations that do benefit, the benefit might be only a two to four percent increase in throughput.
Here’s why, and how to do better.
Prior to the release of the Nest Thermostat in 2011, the Internet of Things was mostly hype. Refrigerators that texted you when the milk spoiled were not really possible. Lawnmowers that texted you that they had not been used in a while were less useful than, say, looking at the grass.
Today, Internet of Things devices are real, and valuable.
The only problem is getting them to work.
Hard-coded test directories on the server.
Hard-coded test databases in the code.
Exception handling that couldn’t actually handle the exception at the level where it was caught.
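To make those smells concrete, here is a small hypothetical sketch. The names and the connection string are invented for illustration, not taken from the actual code under review; the point is a test database baked into the source, and a catch block with no real way to handle the failure it traps.

```python
def fetch_orders(conn_string):
    # Stub standing in for a real database call; here it always fails.
    raise ConnectionError("cannot reach " + conn_string)


def load_orders():
    # Smell: a hard-coded test database shipped in production code.
    conn_string = "server=testdb01;database=orders_test"  # invented name
    try:
        return fetch_orders(conn_string)
    except Exception:
        # Smell: this level cannot retry, repair, or even report the
        # failure, so "handling" it here just hides the error and hands
        # an empty result to the rest of the system.
        return []


print(load_orders())  # prints [] -- the connection failure vanished
```

A reviewer who checks the box on code like this is signing off on errors that will surface far from their cause.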
I reported the errors to my management, and they replied that I did not understand the code review process. It was not my job to give an up-or-down vote, but to review the code. The code review had occurred, therefore I needed to check the box for code review and move the code into production. Whether or not the code was ugly was merely a matter of my opinion.
If only I had known what to say — that ugly code is bad code. Code so complex I cannot understand what is happening.
Code that bad will have errors that we cannot even find, because we cannot figure out what the code is doing.
How does code become ugly?
One change at a time.
Let’s talk about that.
The path to leadership in software usually looks something like this. A junior programmer takes a job out of college and works diligently. Every couple of years, that person hops to a new company to get a new job title and an accompanying raise. Eventually, they find a place to settle in for a while. They work into the senior role, get antsy for whatever is next, and poof, they are given a promotion and become a newly minted ‘lead’.
In that time, our developer friend learned to become competent at their trade (hopefully), became familiar with a couple of code bases, and probably has a reasonable professional network. But, where does the leadership come in?
Somehow, sometime, the concept of a spike got introduced into agile practice and language. The general idea is this: we need to make a product change but aren’t quite sure what direction to take, or how to find that direction in the first place. A spike is the black flag we raise to say we need time to work on something that isn’t _obviously_ productive. The end result should be information and a general idea of what to do next, not code and software in production.
The rules are simple; the implementation is not. What I usually see is spikes that result in a day of reading API documentation. So the question, then, is when is it a good idea to spike something out, and when might we be better served by digging into the work and learning from that experience?
The longer a function, the more complex it is, and the more likely we are to make mistakes. Knowing that makes measuring code complexity, and reducing the complexity of functions, seem like a very good idea.
So how do we measure it? The most common measure of code complexity is probably “cyclomatic complexity.”
So what exactly is “cyclomatic complexity?”
The definition both confuses me and puts me to sleep at the same time – and I’m a formally trained mathematician with coursework in graph theory.
Seriously, try to make sense of this. You can just skim; I’m going to give an explanation in English later:
The formula for the cyclomatic complexity of a function is based on a graph representation of its code. Once that graph is produced, it is simply:
M = E – N + 2
E is the number of edges of the graph, N is the number of nodes and M is McCabe’s complexity.
To compute a graph representation of code, we can disassemble its compiled instructions and build a graph by following these rules:
- Create one node per instruction.
- Connect each node to every node that could be the next instruction executed.
- One exception: treat a recursive call or function call as a normal sequential instruction. If we didn’t, we would be analyzing the complexity of the function plus all of its called subroutines.
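Applied to a toy example, the arithmetic is short. The function and its control-flow graph below are hypothetical, invented only to illustrate the formula, not drawn from any real disassembly:

```python
# Hypothetical function with a single if/else branch:
#
#   def clamp_negative(x):
#       if x < 0:     # node A (the branch)
#           x = 0     # node B (the "then" arm)
#       else:
#           pass      # node C (the "else" arm)
#       return x      # node D (the exit)
#
# Control-flow graph for that function:
nodes = {"A", "B", "C", "D"}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]

E = len(edges)  # number of edges
N = len(nodes)  # number of nodes
M = E - N + 2   # McCabe's cyclomatic complexity

print(M)  # 4 - 4 + 2 = 2: one branch yields two independent paths
```

Every additional branch adds an edge more than it adds nodes, which is why each `if`, loop, or `case` bumps the count by one.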
Here’s the Matt Heusser GoodEnough™ explanation of complexity, why it matters, and how sometimes, it doesn’t.
The October 2015 cover of Inc. Magazine featured Elizabeth Holmes, the pioneering CEO of Theranos. Holmes had discovered a way to run a hundred-plus blood tests as one – in a fraction of the time, from the prick of a finger instead of a vial per test. The technology even had the potential for home use, more like an at-home pregnancy kit than the wait-two-weeks lab tests that are the current state of the practice.
The only problem is that it wasn’t real.
None of it.
On her warpath to home testing, Holmes raised $1.3 billion in investment funds.
How did that happen?