If you want to get better at software testing, you might start out looking for a test maturity model, maybe the one from the Illinois Institute of Technology that became the TMMi.
Wait! Before you click that link, stick around. I’ve got something better today.
Today I’m going to propose a way to think about tester maturity, one you can use to evaluate candidates, yourself, and yes, the overall culture of the organization. It is a model, which means it is a generalization. All generalizations have exceptions. All generalizations are wrong. Still, if it can help you test better, interview better, and hire better, then I would suggest that less wrong is better.
Here’s my proposal: A test maturity model that is less wrong.
Level Zero – Oblivious
The team does not recognize testing as a concept. The software goes to the customer, who finds problems that we fix. The software model for this is “code and fix.” This is essentially what we do with Excel spreadsheets, and I’ve seen it done a time or two with software. Sometimes the risk is small enough that this is appropriate. On large projects, it is more likely that some time is assigned for “regression testing”, say Monday morning, but if you walk by the team, no one is actually testing anything. In this age where we have product owners (who care about what) and Scrum masters (who care about ceremonies) but no development managers, it’s easy to lose track of who is doing the testing. Add a “self-organized” team that might be a little bit disorganized, and the problem becomes obvious.
Level One – The Happy Path
It’s easy enough for people to see the need for testing. All it takes is buggy software in production. At level one, there is a recognition that testing needs to occur. However, it is seen as a simplistic, unskilled activity. This kind of thinking leads to having interns test, or sending the work to the absolute cheapest possible tester, often several time zones away, which injects delay. Because the work is unskilled, the results are poor. Worst of all, level one leads to the test skill paradox: skilled testers find risks which take time to investigate, while the unskilled do not even see those risks. They finish testing in five minutes and say, “I don’t see what her problem is; testing just isn’t that hard.” In my experience, instead of learning, the organization is likely to give up on testing and devolve to level zero.
Level Two – Quick Attacks
These are easy-to-learn, common failure modes for software. Some apply to software in general, others to a specific platform. So, for example, you can learn quick attacks for the web, or for responsive design, in a few hours. Quick attacks allow the tester to find a great many bugs quickly. They work for any software; the tester does not need to understand the underlying business rules. Quick attacks also tend to find shallow bugs. When I jump into a buggy system, I often start with quick attacks. In an hour or two I’ve learned a handful of things, the programmers will rush off to fix them, and then I can start to really understand the business rules. The common risk with quick attacks is that people learn some techniques in a few hours and think that is all there is to testing.
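To make the idea concrete, here is a minimal sketch in Python. The `parse_quantity` function is a hypothetical, deliberately naive stand-in for the code under test; the attack list is a sampling of classic inputs that need no knowledge of the business rules.

```python
# A sketch of quick attacks against a single free-text field, assuming a
# hypothetical, deliberately naive parse_quantity() as the code under test.

def parse_quantity(text):
    """Parse an order-quantity field (naive on purpose)."""
    return int(text)

# Classic quick-attack inputs for any numeric text field. None of these
# require understanding the underlying business rules.
QUICK_ATTACKS = [
    "",                        # empty input
    " 3 ",                     # leading/trailing whitespace
    "-1",                      # negative where it makes no sense
    "999999999999999999999",   # absurdly large value
    "3.5",                     # decimal where an integer is expected
    "three",                   # words instead of digits
    "\u0661",                  # a non-ASCII digit (Arabic-Indic one)
]

def run_quick_attacks():
    """Return the attack inputs that make the parser raise."""
    crashes = []
    for value in QUICK_ATTACKS:
        try:
            parse_quantity(value)
        except ValueError:
            crashes.append(value)
    return crashes
```

Running this flags the empty string, the decimal, and the word as crashes within seconds. Notice that Python’s `int()` quietly accepts the negative number and the Unicode digit, which is a finding in itself, just not a crash.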
Level Three – Test Design
By level three, we recognize the impossibility of complete testing. The techniques covered here are designed to reduce an infinite number of test ideas to a powerful few. Dr. Cem Kaner’s Domain Testing Workbook covers this space. It is four hundred and eighty-eight pages long.
At this point we are looking at the rules the software operates under. In a game, that might be how the guards are notified of a prison break. In a mortgage application, it is the calculations. For insurance, we care about whether the deductible has been met, what day the claim occurred, when the person had coverage, and so on.
With level three, the tester can find all kinds of interesting problems. They can break the software, turn it into a pile of goo on the floor. The risk I see here is that those failures can be disconnected from business value. Testing is, after all, an investment. If the time spent on the testing exceeds the value the testing delivers, well … that’s a problem. Which leads us to level four.
Level Four – Risk and Coverage
Testers who achieve this level look at the impact if something goes wrong, the chance it will go wrong, and the cost to test for it. They balance the testing they do, considering elements like how often the feature has failed in the past, how much the code churns, and the cost of a rollback. This kind of testing tries to identify the right time to do the right testing for the best bang for the buck, along with how to visualize and communicate about that testing. Those are the kinds of things I was talking about in the video below.
By level four, the team can actually talk about risks and tradeoffs so the business can make an informed decision. That includes how much to invest in which kinds of testing, such as unit, API, system, and human exploration; the kinds of bugs those investments will yield; and the kinds they will be blind to.
That’s kind of a big deal.
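As a back-of-the-envelope illustration of that kind of tradeoff talk, here is a sketch of risk-based prioritization. The scoring formula (impact times likelihood, divided by cost to test) and the sample features are assumptions for illustration only, not a method the model prescribes.

```python
# A back-of-the-envelope sketch of risk-based test prioritization. The
# formula and the sample backlog are illustrative assumptions.

def risk_score(impact, likelihood, cost_to_test):
    """Testing value per hour spent; higher means test it sooner."""
    return (impact * likelihood) / cost_to_test

# Hypothetical backlog: (name, impact 1-10, likelihood 1-10, cost in hours),
# where likelihood might come from past failures, churn, and complexity.
features = [
    ("checkout payment", 10, 6, 4),       # score 15.0
    ("profile avatar upload", 2, 8, 2),   # score 8.0
    ("interest calculation", 9, 3, 6),    # score 4.5
]

def prioritize(features):
    """Order features by where testing buys the most risk reduction."""
    return sorted(features,
                  key=lambda f: risk_score(f[1], f[2], f[3]),
                  reverse=True)
```

The numbers matter far less than the conversation they force: the team has to say out loud what breaks often, what would hurt most, and what testing actually costs.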
Epilogue: On Maturity Models
Nearly ten years ago I proposed the Fishing Maturity Model. The article was an April Fools’ post. It is a joke, a hilarious sendup of the person who layers a generic maturity model on top of something they don’t understand.
This is not that. It is not a generic way to get to repeatability. Instead, I’ve actually studied fishing, er, software testing. The work here is not about reporting or repeatability or training or process or automation. It is simply how people think about finding bugs. In the fishing model, that would be how people catch fish.
That may not be everything, but I do think it is important, and too often overlooked.
If you read this far, you probably care about testing too. You love to kick ideas around. Please, tell me how I’m wrong.
This is going to be fun.