Blog Posts from December, 2005

Investigation vs. Confirmation

Thursday, December 29th, 2005

Over the last little while, I’ve been corresponding fairly frequently on the Agile Testing mailing list. You can find it yourself at

It’s a stimulating, but sometimes frustrating forum. The Agile movement itself (and its most prominent sect, eXtreme Programming, or XP) seems heavily oriented towards developer-centric concerns. That’s fair enough. However, the forum’s mandate confuses me: the charter states that the forum “…is not a group to discuss whether such a thing as agile testing exists, whether agile software development is a good idea, whether XP is a nefarious plot by programmers to gain license to hack, and so forth. We do not require list members to be agile enthusiasts (though the owners are), but we require them to acknowledge that people are testing in projects that call themselves agile, and that our group is about helping those people do the best job they can.” That’s also fair enough. But many on the forum–to be fair, perhaps only its most vocal contributors–seem mired in a specific perspective on testing, which I think carries certain risks. This is the focus on testing as confirmation, rather than as investigation.

In the Rapid Software Testing course, we talk about testing as asking questions about the product. We ask those questions of the product itself by operating it. The program answers by behaving in some way that typically changes its state–or not, whichever is appropriate.

Most kinds of automated tests involve asking questions of the product that begin “Do you still…?” “Do you still produce this output when given this input?” “Do you still finish this process within a certain amount of time?” “Do you still produce the same results on this platform as you did on that one?” That is to say, the tests are confirmatory. These tests are valuable, especially to programmers, because they serve as important detectors of undesired side effects when the program changes.
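To make the idea concrete, here is a minimal sketch of such a confirmatory check in Python. The function under test, `format_price`, is hypothetical and stands in for any unit of the product; the test simply re-asks “Do you still produce this output when given this input?” every time it runs.

```python
def format_price(cents):
    """Hypothetical unit under test: format an integer number of cents."""
    return "${0}.{1:02d}".format(cents // 100, cents % 100)

def test_format_price_still_works():
    # Confirmatory question: "Do you still produce this output for this input?"
    assert format_price(0) == "$0.00"
    assert format_price(1999) == "$19.99"

test_format_price_still_works()
```

Note that the expected answers are fixed in advance; the check can only tell us whether a previously observed behaviour has changed.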

However, it sometimes seems to me as though the Agilistas believe that these are the only questions worth asking. A more thorough approach to testing, in my view, involves questions of the product that begin “What if…?”: “What if I try to overwhelm you with a volume of data that you didn’t expect?” “What if I compare your behaviour to a previous version of this product, or to some other product in its category?” “What if I pretend to be a novice or malicious user?” “What if I observe some aspect of your behaviour to which I haven’t had access, or to which I haven’t been paying attention?”
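One of those “What if…?” questions, comparing the product’s behaviour against another product in its category, can itself be sketched in code. In this illustration (the hand-rolled `my_sort` is hypothetical), Python’s built-in `sorted` plays the role of the comparable product, used as an oracle against many randomly generated inputs rather than one hand-picked example:

```python
import random

def my_sort(items):
    """Hypothetical unit under test: a hand-rolled insertion sort."""
    result = []
    for item in items:
        i = 0
        while i < len(result) and result[i] < item:
            i += 1
        result.insert(i, item)
    return result

# Investigative question: "What if I compare your behaviour to some other
# product in its category?" Here the built-in sorted() is the oracle, and
# the inputs are generated, not predicted, so each run probes fresh ground.
random.seed(42)
for _ in range(200):
    data = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
    assert my_sort(data) == sorted(data)
```

Unlike the confirmatory check, this one doesn’t encode a known answer; it encodes a relationship worth investigating, and the tester still has to decide what other attacks are worth trying.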

When agile developers do test-driven development, I think they do ask some of these questions, and answer them with automated tests. That’s a Good Thing, and very powerful; I think that harsh, critical TDD-level tests will eliminate a lot of bugs. However, I think it’s important to remember Ward Cunningham’s observation that TDD is a design activity, not a testing activity, lest we get into trouble.

The trouble comes in a few different forms. First, TDD tests will reflect the biases and errors of omission that programmers are wont to make. (Pair programming is a partial antidote to this.) Second, if TDD tests are intended to drive design, the quality of the tests as tests is open to question. Third, at some point during the development of a unit of code, there’s a strong (and reasonable) temptation to stop asking questions and move on to the next bit of code to be written. At that point, the TDD tests stop being investigative in any way, and start being confirmatory. I think that’s good design, but it’s not in any way guaranteed to be thorough testing.