Today I’m going to tell you a secret about testing tooling, a sort of truth. It’s a kind of truth like the knowledge a partner is cheating. You need to know it, but you’d rather not, and sometimes you’ll wish you could go back to ignorance.
It’s about those automated tests that one tester is making, over in the corner. You know, the ones the team asks for with every story, the ones that may or may not run under CI.
Your testers are skipping a ton of tests.
That is, of the list of flows to automate, there are a bunch they are … Just. Not. Doing.
And they are getting away with it.
No one is checking their work.
Oh, maybe there is code review or some such. In my experience, what doesn’t happen is anyone going back and making sure the important flows are tested. Instead, the “Software Development Engineer in Test” automates the easy stuff until they run out of time, calls the feature ‘done,’ and moves on to the next thing.
This is a simple function of incentive design. No one is checking the work, so the tester starts with the work that is easy. Eventually, they run into something that is hard to automate, which they skip.
The problem is, if it was hard to automate, it was probably hard to program in the first place. That code is likely brittle. Eventually it will have to be modified; with no meaningful tests to catch the regression, something will break and, sooner or later, be found.
When it is found, the tester might no longer be on the team.
If the tester is on the team, it is possible no one notices the test was missing.
If they notice the test was missing, the tester can cry “whoopsie.”
If the tester cries whoopsie, the team will probably have no source material to go back to in order to figure out whether the test was “skipped.” If they do, they likely have better things to do than have a silly argument about some document from last year.
Thus the tester has no incentive to be diligent, and every incentive to skip.
Doesn’t this happen for all testing?
A few years ago I worked at a company that pushed a lot of data around in databases. Some of the setup conditions were complex and odd. Occasionally I felt the tug of laziness, the tug to skip something that would be annoying to set up and “should just work.”
When I felt that tug, I tested harder, and found bugs in the software every single time.
In the case of test tooling, testing by hand is usually easy; building the tooling can be hard. So you test by hand and skip the tooling. The immediate result is that working code gets to production. The reward is immediate, certain, and positive (ICP). The idea that at some time in the future there might be a problem is delayed, uncertain, and negative (DUN).
Without some counter-balancing forces, like culture and management, people will be down with ICP and never get DUN.
… Or maybe I’m wrong
At this point I expect someone to say “Not all automated tooling! We use a special handy-dandy GUI-generator thing that makes all our code super testable. Every element has an ID, and the whole team agrees explicitly on exactly what we will create before we create it. We involve the tester in the whole process, from concept to cash, so we know exactly what we are creating as a whole team.”
At my company, Excelon Development, we have had a few clients that work this way, so I know the questing beast does exist. We like these sorts of clients, as they generally are not afraid of experiments and trying new things — they keep us on our toes. Sadly, the number of them is vanishingly small.
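The “every element has an ID” practice is the load-bearing part of that quote. Here is a minimal sketch of why it matters — the toy page structure and helper names below are mine for illustration, not any real testing framework’s API. A positional locator breaks the moment a designer rearranges the layout; an ID-based locator survives.

```python
# A toy "page" modeled as a list of (tag, element_id) pairs.
# page_v2 is the same page after a designer adds a banner at the top.
page_v1 = [("div", None), ("button", "save-btn"), ("button", "cancel-btn")]
page_v2 = [("banner", None), ("div", None),
           ("button", "save-btn"), ("button", "cancel-btn")]

def find_by_position(page, index):
    """Brittle locator: depends on layout order."""
    return page[index]

def find_by_id(page, element_id):
    """Stable locator: survives layout changes as long as the ID is kept."""
    for tag, eid in page:
        if eid == element_id:
            return (tag, eid)
    return None

# The positional locator that found the save button in v1
# silently finds a <div> in v2 -- the automated check breaks.
assert find_by_position(page_v1, 1) == ("button", "save-btn")
assert find_by_position(page_v2, 1) == ("div", None)

# The ID-based locator works in both versions.
assert find_by_id(page_v1, "save-btn") == ("button", "save-btn")
assert find_by_id(page_v2, "save-btn") == ("button", "save-btn")
```

When the whole team agrees on stable IDs up front, the “hard to automate” category shrinks, and the incentive to skip shrinks with it.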
I’d like to offer a challenge to all those organizations that skip tooling and do not communicate well.
Transform yourself into organizations that do communicate well.
It is possible that some day, all the comments on this blog are about how wrong I am and the world is a better place.
Wouldn’t that be nice?