Modern unit testing came out of Test-Driven Development (TDD). The first unit test tooling was designed for Smalltalk; JUnit and NUnit were strange, free plugins that were (shudder) open source. The people in the Alt.Net community found that the Microsoft tool stack actively resisted TDD, continuous integration, and the other technical improvements the XP and Agile communities were making to the way software was delivered.
In technology terms, that was ancient history. Let’s talk about now.
Today, when I work with a company, they have likely heard of TDD.
They just don’t use it.
Here are a few patterns I’ve seen.
Part One: The Big Legacy Monster
Most examples of TDD are simple algorithms – calculate the score for a bowling game, or convert Roman numerals to decimals. Real code is part of a greater system that interacts with the outside world. As such, it needs to connect to the session on the web, get the data in a cookie, then connect to a database, then call an API. Our little blob of code, all grown up, sits at the center of a web of external connections.
In order to test it, we need to replace those external connections with stand-in objects, called mocks. Except, of course, mocks really only test the object in isolation. "There's nothing like the real thing, baby," goes the thinking, and besides, setting up mocks is hard. So programmers skip the mocking step and write full integration tests that connect to everything, for reals.
The result is a 50-100 line "unit test" that spends nearly half of its lines on setup, calls the function once (which requires all the real connections to run end-to-end), does one or two asserts, and then spends the other half of its lines on teardown.
Programmers, looking at that, reasonably say “ugh.”
If the web service has changed, if the database is down, or if the session is invalid, the unit test “fails.”
The mocked version is not much better. Instead of the delays and flakiness, we have extra layers of complexity.
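To make the trade-off concrete, here is a minimal sketch of what the mocked version tends to look like. The function and its dependencies (`print_member_summary`, `session`, `database`, `billing_api`) are hypothetical names invented for illustration; the point is how much stand-in wiring it takes before the function can be called even once.

```python
from unittest.mock import Mock

def print_member_summary(session, database, billing_api):
    """A hypothetical grown-up blob: session -> database -> API call."""
    member_id = session.get_cookie("member_id")
    member = database.find_member(member_id)
    balance = billing_api.get_balance(member.id)
    return f"{member.name}: {balance}"

def test_print_member_summary():
    # Three mocks of setup before the single call and single assert.
    session = Mock()
    session.get_cookie.return_value = "42"
    member = Mock()
    member.id = "42"
    member.name = "Pat"
    database = Mock()
    database.find_member.return_value = member
    billing_api = Mock()
    billing_api.get_balance.return_value = "$10.00"

    assert print_member_summary(session, database, billing_api) == "Pat: $10.00"
```

The test runs fast and never touches a real server, but notice that most of it is scaffolding describing how the dependencies behave, which is exactly the "extra layers of complexity" trade.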
Given this system of forces, programmers rightly say something like: "Haha. TDD. Write a test, make it fail, make it pass. That trick never works. We'll have a couple of tests around our most important APIs."
Part The Second: Michael Jackson’s Nose
Then we come to the second problem. As a programmer on a team, I am assigned a task – to add a new condition onto a bit of code, a flag to read from the database for each member. If that flag is set, we print out a few additional rows of text, because that member has not only medical but also pharmacy insurance. If not, we don't. The right way is to pull that functionality out, make it (unit) testable, then put it back in. I could extract the parameters to the function into a class, which means changes to the calling program and a little retesting.
Or I could just pass in another boolean value, add an if statement, and cut and paste a little.
The first approach, doing it "right," will take a good programmer a half-day. Without unit tests in place, the work might require a bit of manual retesting to see if any of the changes caused damage.
The second approach, hacking in an if statement, is a ten-minute job.
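The two approaches can be sketched side by side. All names here (`print_member_card`, `pharmacy_lines`, the member fields) are hypothetical, invented to illustrate the shape of each change, not code from any real system.

```python
# The ten-minute job: pass in another boolean, add an if statement.
def print_member_card(member, has_pharmacy):
    lines = [f"Member: {member['name']}", "Medical coverage: yes"]
    if has_pharmacy:
        lines.append("Pharmacy coverage: yes")
    return lines

# Doing it "right": pull the new behavior out into its own function,
# which can be unit tested without touching the rest of the card printer.
def pharmacy_lines(member):
    if member.get("has_pharmacy"):
        return ["Pharmacy coverage: yes"]
    return []

def test_pharmacy_lines():
    assert pharmacy_lines({"has_pharmacy": True}) == ["Pharmacy coverage: yes"]
    assert pharmacy_lines({"has_pharmacy": False}) == []
```

The extracted version costs more up front because callers have to change, but it is the only one of the two that leaves a test behind.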
As a result, changes to legacy codebases are a little like Michael Jackson's nose – all the later doctors said it was in horrible shape when they got to him; all they did was add one little thing.
The cumulative impact, however, was that things just kept getting worse; the same thing happens to a legacy codebase.
Learning To Unit Test
The standard advice is to do it "right": pull the code out, test it, then change it. Today, I'm not sure that is the right advice. The ten programmers in the system who are heaping on the technical debt will continue to do so, making the codebase a little worse, and future progress a little slower – but at each step, they'll be much faster than the programmer who is trying to do the "right" thing.
Instead, I recommend programmers start by doing katas on their own. Learn to write good code in isolation on a greenfield codebase. It will be a lot easier.
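A kata like the Roman numeral conversion mentioned earlier is a good place to start: grow the function one failing test at a time. Here is a sketch of where such a kata might end up (the function name is my own; the point is the test-first rhythm, not this particular solution).

```python
def roman_to_decimal(roman):
    """Convert a Roman numeral string (e.g. "XIV") to an integer."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for i, ch in enumerate(roman):
        value = values[ch]
        # Subtractive pair: a smaller value before a larger one (IV, IX, XC...)
        if i + 1 < len(roman) and values[roman[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

def test_roman_to_decimal():
    assert roman_to_decimal("I") == 1        # the first failing test...
    assert roman_to_decimal("IV") == 4       # ...then the next...
    assert roman_to_decimal("MCMXCIV") == 1994
```

In a real kata you would write each assert first, watch it fail, and add just enough code to pass, deleting and starting over the next day if you like; the value is the practice, not the artifact.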
Then begin writing new functionality in isolation. The new code will need a "seam" to the old code – and that interface could have a test or two. Eventually this new functionality, tested well, will start to pop up all over the codebase. When a set of new features clusters around a specific part of the codebase, entire old modules will start to look like skeletons, with all the new, tested code "hanging off" them.
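A "seam" can be as small as one function call. In this hypothetical sketch (all names invented for illustration), the legacy receipt printer stays untouched except for a single call into new, test-first code:

```python
# New code, written in isolation and test-first.
def format_discount(order_total):
    if order_total >= 100:
        return "10% loyalty discount applied"
    return ""

# Old legacy code: the only change is the one call across the seam.
def print_receipt(order):
    print("RECEIPT")
    print(f"Total: {order['total']}")
    message = format_discount(order["total"])  # <-- the seam
    if message:
        print(message)

def test_format_discount():
    assert format_discount(150) == "10% loyalty discount applied"
    assert format_discount(50) == ""
```

The legacy `print_receipt` remains untested, but every new behavior that arrives through the seam does get a test, which is how the skeleton-with-tested-code-hanging-off shape emerges over time.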
That’s quite a journey. So don’t worry too much about getting code coverage to 80% or taking a month off to refactor a major piece of architecture.
Just start with the first step.