## Stating the Obvious

Sep 11 2012   3:34PM GMT

# A/B, Multivariate and Taguchi Testing

Profile: Joseph Carrabis

I had to explain this to three clients this past month. Whenever I have to repeat myself that frequently I figure the gods are telling me what I’m sharing isn’t general knowledge, so I’m sharing it here so others can benefit.

### Better One or Better Two?

The basic concept of any test is that there be some standard the test has to pass. For A/B, Multivariate and Taguchi testing (and very basically), two items are tested side by side according to some scale. The item that passes the test is the item that scores higher on that scale.

You can already see that selecting the correct scale for the test is critically important. Choose the correct metric for what you're testing and you have a well-defined and well-understood race with an equally well-defined and well-understood finish line. Whoever crosses the finish line first, A or B, wins.
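As a concrete illustration, here is a minimal sketch of an A/B comparison that uses conversion rate as the chosen metric. This is one common way such a test is scored, not necessarily how any particular tool does it, and the traffic numbers are hypothetical:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does B's conversion rate differ from A's?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical traffic: A converted 200 of 5000 visitors, B converted 250 of 5000
z = two_proportion_z(200, 5000, 250, 5000)
significant = abs(z) > 1.96  # roughly the 95% confidence threshold
```

If `significant` is true, B crossed the finish line first on this metric; if not, the race is still too close to call and the test needs more visitors.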

I think of well-done A/B, Multivariate and Taguchi testing in terms of that joke about the two campers being chased by a bear. One camper passes the other and the camper falling behind says, “You can’t run faster than a bear,” to which the faster camper replies, “I don’t have to run faster than the bear, I only have to run faster than you!”

Choose the correct metric and you stay ahead of the bear (your visitors). Choose the incorrect metric and either A or B still wins, but the win is meaningless because the bear still eats you, in this case by abandoning your site.

### Testing Tests

Most people lump Multivariate and Taguchi in with A/B testing. Please don’t. A/B is like a Ford, Multivariate is like a BMW and Taguchi is like a Ferrari. Most businesses do A/B testing only and that’s fine provided you understand what you’re testing and recognize what the outcomes should be.

For example, you know enough user psychology to understand things like template bias, habituation, inattentional blindness, experiential memory and the like, and how they affect these tests, yes?

### New Wine in Old Skins

You can’t definitively test A against B when, for example, you’re testing two new versions of a website and part of your audience is your existing visitor population. Yes, you need to make sure some visitors see version A and some see version B, but you also need to make sure previous visitors see completely new designs while new visitors see updated designs.

Do anything else and you’re just adding to the existing visitor population’s frustration. Previous visitors — especially frequent visitors — suddenly encountering an update or modification to a previous design will demonstrate habituation and template bias. They’re so familiar with the old design that they can’t find what they’re looking for and are easily frustrated (especially when they need to get something done). Not good.

Complete rebuilds/redesigns should be tested with previous visitors. The radically new design will signal that their past experience isn’t valid. I can also let you know that the first thing they’ll do — especially if they really want to get something done rather than explore your beautiful new interface — is look for a way back to what they know. Being able to return to what they know makes visitors feel safe rather than abused and victimized. They’re much more willing to explore what’s new when they know it’s their choice.
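The routing rule above can be sketched as a simple bucketing function. The function, variant names, and 50/50 split are all hypothetical; the point is only that returning visitors are tested on the complete redesign while brand-new visitors are tested on the updated design, with each pool keeping the current design as its control:

```python
import random

def assign_variant(visitor_is_returning: bool, rng=random) -> str:
    """Hypothetical bucketing per the rule above:
    returning visitors are only ever tested on the complete redesign,
    brand-new visitors are only ever tested on the updated design.
    Within each pool, half still see the current design as the control."""
    if rng.random() < 0.5:
        return "current_design"  # control arm for either pool
    return "complete_redesign" if visitor_is_returning else "updated_design"
```

Keeping the current design as the control in both pools also gives returning visitors that "way back to what they know," so exploring the redesign stays their choice.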

### Summary

Test modifications of existing interfaces on brand-new visitors. They’ll have no template bias, no habituation, no experiential memory, and so on, to interfere with their using the interface. Good for you and good for them.

Test brand new designs on previous visitors and watch carefully to see if their behavior (“experience”) changes.
