Uncharted Waters

Jul 30 2019   10:18AM GMT

RCRCRC Applied

Profile: Matt Heusser

Tags:
quality
Regression testing
Software testing

Imagine you have an assembly line of finished products to inspect, but every product is different, and you know how it differs. That means each ‘build’ carries different risks, so you can customize your inspection to address just those risks. Karen N. Johnson recognized this ten years ago when she published RCRCRC – a set of guidelines for regression testing. Karen’s RCRCRC stood for Recent, Core, Risk, Configuration Sensitive, Repaired, and Chronic. That is, when you have to regression-test, prioritize those things, sort, and test until you run out of time.

It’s easy to pooh-pooh these ideas in this day and age of microservices and continuous delivery. On the other hand, let’s get real. Plenty of organizations can’t do continuous delivery, for plenty of reasons. Some of them test physical devices with hardware that is only updated once per year. Even with a firmware update, you kind of sort of want it to mostly work out of the box, right? Others are releasing to an app store that may take days to a week to approve changes. Some simply have not developed the maturity to do continuous delivery well. After all, anyone can do continuous delivery of bugs and low uptime to production. That doesn’t mean they should.

Recently I worked on a project that used RCRCRC to mine for test ideas. Four continents, maybe 15 teams. The challenge was where to get the data. Here is my story.

Where the RCRCRC comes from

Core. The system was instrumented, so we knew how many times, and how many distinct people, had clicked on each user action in the past six months. Those two numbers differ: sorting by most popular (most people) might not give us most used (most clicks). Recent features would have low numbers, because they were only partially available during the period, and brand-new features would not show up as core at all.
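If the analytics can be exported, the sort itself is trivial. Here is a minimal sketch, assuming the instrumentation can dump click events to a CSV with made-up column names; the point is that the two rankings disagree, so look at both:

```python
# A minimal sketch, not the project's actual tooling. It assumes a CSV
# export of click events with hypothetical columns: feature, user_id.
import csv
from collections import defaultdict

clicks = defaultdict(int)   # total clicks per feature ("most used")
users = defaultdict(set)    # distinct users per feature ("most popular")

with open("click_events.csv", newline="") as f:
    for row in csv.DictReader(f):
        clicks[row["feature"]] += 1
        users[row["feature"]].add(row["user_id"])

# The two rankings will not match; review both before deciding what is "core".
by_use = sorted(clicks, key=clicks.get, reverse=True)

for feature in by_use[:20]:
    print(f"{feature}: {clicks[feature]} clicks, {len(users[feature])} users")
```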

Chronic and Risk. I searched in Jira for the defects found in the previous release by later-stage testing (earlier-stage testing might not have a ticket). Yes, I looked at every single one, doing an affinity map on them. That is a fancy way of saying that I grouped them by the underlying feature or problem that was flaky.
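Pulling that list out of Jira is scriptable, too. Here is a hedged sketch against the Jira REST search API; the instance URL, project key, version, and field choices are placeholders for whatever your instance actually uses:

```python
# A hedged sketch of pulling last release's defects out of Jira for an
# affinity map. The JQL, project key, and credentials are placeholders.
import requests

JIRA = "https://example.atlassian.net"   # hypothetical instance
JQL = 'project = APP AND issuetype = Bug AND affectedVersion = "2019.1"'

resp = requests.get(
    f"{JIRA}/rest/api/2/search",
    params={"jql": JQL, "fields": "summary,components", "maxResults": 200},
    auth=("user@example.com", "api-token"),
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    fields = issue["fields"]
    components = [c["name"] for c in fields["components"]] or ["(uncategorized)"]
    # Dump key, component, and summary to a spreadsheet-friendly line,
    # then do the actual grouping by hand.
    print(issue["key"], components, fields["summary"], sep="\t")
```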

Recent and Risk. This likely corresponds to the code that changed. I used git churn to find the code that had changed since the last release, and later the code that had changed since the last round of regression testing. Despite a personal reassurance that “nothing was changing,” git log said otherwise. And of course things were changing; otherwise we would just ship the previous version. (To be fair, some of the problems were server configuration.)

Git churn isn’t perfect. It provides a list of files with line inserts and deletes, so a whole new file will “pop” to the top of the list, while a single-line change can still create a real problem. Also, what we really want is churn in the sense of how many commits touch a file, because that can indicate code-try-fix behavior. It could, on the other hand, represent extremely disciplined test-driven development. So beyond raw churn, I’d look for code that has a lot of churn and is also complex. Going through the churn files, I wrote some scripts to find the authors of what had changed, and since when, so I could generate an email and ask them: “If you had a group of testers covering your back to look at these changes, what should they be looking for?”
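The actual scripts were specific to that codebase, but the idea is simple enough to sketch. The version below is an approximation rather than the project’s tooling: it counts commits and line changes per file since an assumed date and keeps track of the authors to email.

```python
# A rough stand-in for the churn scripts described above. The date is
# an assumption; run it from inside the repository you care about.
import subprocess
from collections import defaultdict

SINCE = "2019-05-01"   # roughly: the last release / last regression pass
log = subprocess.run(
    ["git", "log", f"--since={SINCE}", "--numstat", "--pretty=format:@%an"],
    capture_output=True, text=True, check=True,
).stdout

commits = defaultdict(int)
lines_changed = defaultdict(int)
authors = defaultdict(set)
current_author = None

for line in log.splitlines():
    if line.startswith("@"):            # our format prefix marks the author line
        current_author = line[1:]
    elif line.strip():                  # numstat line: added <tab> deleted <tab> path
        added, deleted, path = line.split("\t", 2)
        commits[path] += 1
        if added != "-":                # binary files show "-" for line counts
            lines_changed[path] += int(added) + int(deleted)
        authors[path].add(current_author)

for path in sorted(commits, key=commits.get, reverse=True)[:20]:
    print(f"{commits[path]:>3} commits  {lines_changed[path]:>5} lines  "
          f"{path}  ({', '.join(sorted(authors[path]))})")
```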

Complexity. While strictly speaking not one of the C’s, I think this is implied in “Risk.” Basically, code that is complex is likely to have bugs. Once functions become so long that the programmer cannot keep the entire function in their mind at one time, changes start to introduce side effects, which require more changes. SonarQube is a free-to-start tool that can measure complexity and even unit test coverage. Sadly, complexity metrics were not yet available for the hot new mobile language one team was using, but for the second team there was plenty. The most complex code also happened to have the highest churn. This was no surprise to the team, and it can influence what we test, and when, for regression, since invariably we will not have time to do everything.
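If SonarQube is already running, the complexity numbers can be pulled over its web API and lined up against the churn list. A sketch, with the server URL, project key, and token as placeholders:

```python
# A hedged sketch against the SonarQube web API: list the most complex
# files in a project so they can be cross-referenced with the churn list.
import requests

SONAR = "https://sonarqube.example.com"   # hypothetical server
PROJECT = "mobile-app"                    # hypothetical project key

resp = requests.get(
    f"{SONAR}/api/measures/component_tree",
    params={
        "component": PROJECT,
        "metricKeys": "complexity,coverage",
        "qualifiers": "FIL",              # files only
        "ps": 500,
    },
    auth=("your-token", ""),              # SonarQube tokens go in the username field
)
resp.raise_for_status()

def measure(component, key):
    """Return a metric value for a component, or 0.0 if it is missing."""
    for m in component["measures"]:
        if m["metric"] == key:
            return float(m["value"])
    return 0.0

files = resp.json()["components"]
for comp in sorted(files, key=lambda c: measure(c, "complexity"), reverse=True)[:20]:
    print(f"{measure(comp, 'complexity'):>6.0f}  "
          f"{measure(comp, 'coverage'):>5.1f}%  {comp.get('path', comp['name'])}")
```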

Configuration Sensitive. This is incredibly challenging because of the huge number of possible device interactions: the handheld devices interact with other hardware, and that hardware stays in service across a series of releases. There is different physical hardware, different versions within that hardware, and some of it gets no firmware updates. We do have usage statistics for the most popular handheld devices, and we are working on the firmware. Remember, though, “most popular” is a moving target. If handheld devices are replaced every three years and the other hardware every five, then tomorrow’s hottest device is not the most popular today, and this very minute’s most popular will fall out of fashion soon. If the release cadence takes a month or two, looking at last week’s data will show an old picture of a moving target.
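One way to keep from optimizing for the past is to weight the usage statistics by recency. The device names, numbers, and half-life below are entirely invented; they are only there to show the shape of the calculation:

```python
# A toy illustration of the moving-target problem: weight device usage
# by recency so the configurations we pick reflect where the fleet is
# heading, not only where it was. All data here is made up.
from datetime import date
import math

HALF_LIFE_DAYS = 180   # assumption: how fast "popular" goes stale

# hypothetical (device model, sessions, date last seen in the field)
usage = [
    ("scanner-v5", 12000, date(2019, 1, 15)),
    ("scanner-v6",  4000, date(2019, 7, 1)),
    ("scanner-v4",  9000, date(2018, 9, 30)),
]

today = date(2019, 7, 30)

def weighted(sessions, last_seen):
    age = (today - last_seen).days
    return sessions * math.pow(0.5, age / HALF_LIFE_DAYS)

for model, sessions, last_seen in sorted(
        usage, key=lambda row: weighted(row[1], row[2]), reverse=True):
    print(f"{model}: raw={sessions}, recency-weighted={weighted(sessions, last_seen):,.0f}")
```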

Repaired. Between the Jira analysis and the git churn, we get this data almost for free.

Putting It All Together

RCRCRC + Low Tech Testing Dashboard


The challenge here is interleaving the conflicting data and finding commonalities. That is, the core features that are complex, have a great deal of churn, and have a history of bug fixes are probably the ones that will have new bugs.
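One back-of-the-envelope way to do that interleaving is to normalize each signal per feature and add them up. The numbers below are invented; in practice they would come from the analytics, Jira, git, and SonarQube pulls described above:

```python
# A sketch of combining the signals into a single risk ranking per feature.
# Every value here is made up for illustration.
signals = {
    # feature:        (core_use, churn, complexity, bug_fixes)
    "checkout":        (0.9, 40, 120, 7),
    "inventory-sync":  (0.6, 95, 310, 12),
    "login":           (1.0,  5,  45, 1),
}

def normalize(values):
    """Scale a column of measurements to the range 0..1."""
    top = max(values) or 1
    return [v / top for v in values]

# Transpose to columns, normalize each signal, then transpose back to rows.
columns = list(zip(*signals.values()))
normalized = list(zip(*(normalize(col) for col in columns)))

scores = {feature: sum(row) for feature, row in zip(signals, normalized)}

for feature, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{feature}: risk score {score:.2f}")
```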

For now, we have a low-tech testing dashboard that corresponds to the app features; more on that in another post. In addition to the dashboard, we add a new layer at the top for emergent risk. These risks correspond to the issues found in the RCRCRC analysis. Every issue then gets a RAID analysis (Rotate, Automate, Institutionalize, Drop), which also deserves its own blog post.

For now, yes, you can do an RCRCRC analysis. And if you don’t know where to start, here’s a fistful of ideas for where to go get the data.

There is a loud voice in the testing world that says testing should be no-thinking. Easy. Either automate it or hire a bunch of low-skill workers to run easy, repeatable tests: click, click, click, inspect.

Those people cannot do this level of work.

We can do better.
