Blog Posts from August, 2005

Intermittence

Monday, August 8th, 2005

On August 5, 2005, James Bach posted a really interesting piece on intermittent problems in his blog. It’s thoughtful and well-considered, as usual. You can read it at http://www.satisfice.com/blog/archives/34.

It’s a little agonizing to think that an intermittent problem might depend upon achieving a certain threshold value in some variable which might be difficult to reach. One form of intermittent problem might require hundreds of thousands of preceding transactions. This would be hard to reproduce if it first appeared after months of testing, and if you wanted to reproduce it on a new system. Another form of intermittent problem might happen only with the first use of the system. This would be hard to reproduce after the first test, of course.
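
To make the threshold case concrete, here’s a toy sketch in Python. It’s entirely hypothetical (I’ve invented the TransactionLog class for illustration, and it’s not from James’s article): the transaction ID is packed into sixteen bits, so the bug manifests only on the 65,537th transaction. Months of testing might never get there, and a freshly installed system won’t reproduce it quickly.

    # Hypothetical example: a transaction log whose ID field fits in 16 bits.
    class TransactionLog:
        def __init__(self):
            self.next_id = 0
            self.seen = set()

        def record(self, payload):
            tx_id = self.next_id & 0xFFFF  # bug: the ID silently wraps at 65536
            self.next_id += 1
            if tx_id in self.seen:
                raise RuntimeError("duplicate transaction id %d" % tx_id)
            self.seen.add(tx_id)

    log = TransactionLog()
    for i in range(70000):
        log.record("payload %d" % i)  # blows up only on the 65,537th call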

James talks a lot about input in Possibility 4. The problem might be easier to spot or to model, though, if we also seek patterns in the output. Not often, perhaps; but when you’re looking for a hard-to-find problem, a variety of models is generally preferable to a single one, since we don’t know which model is the right one until we’ve finally found the bug.

I think it’s very important to consider that the heuristics for finding intermittent problems are, in the main, excellent heuristics for finding problems generally. For fun, I tried rereading the article removing the word “intermittent”. The article still made a lot of sense. “The ability and the confidence to investigate an intermittent bug is one of the things that marks an excellent tester”; “Many intermittent problems have not yet been observed at all, perhaps because they haven’t manifested, yet, or perhaps because they have manifested and not yet been noticed”; “Some General Suggestions for Investigating Intermittent Problems”; and so forth.

This underscores the point that all problems can be seen as intermittent; it’s just that, for “regular bugs”, the patterns are simpler on one level. We can see the nature of the patterns more easily in some circumstances than others. That’s why modelling, pattern-spotting, and systems thinking skills are so crucial for testers.

Investigation vs. Confirmation

Sunday, August 7th, 2005

Over the last little while, I’ve been corresponding fairly frequently on the Agile Testing mailing list. You can find it yourself at http://groups.yahoo.com/group/agile-testing.

It’s a stimulating, but sometimes frustrating, forum. The Agile movement itself (and its most prominent sect, eXtreme Programming, or XP) seems heavily oriented towards developer-centric concerns. That’s fair enough. However, the forum’s mandate confuses me: the charter states that the forum “…is not a group to discuss whether such a thing as agile testing exists, whether agile software development is a good idea, whether XP is a nefarious plot by programmers to gain license to hack, and so forth. We do not require list members to be agile enthusiasts (though the owners are), but we require them to acknowledge that people are testing in projects that call themselves agile, and that our group is about helping those people do the best job they can.” That’s also fair enough. But many on the forum (to be fair, perhaps only its most vocal contributors) seem mired in a specific perspective on testing, one which I think carries certain risks: the focus on testing as confirmation, rather than as investigation.

In the Rapid Software Testing course, we talk about testing as asking questions about the product. We ask those questions of the product itself by operating it. The program answers by behaving in some way, typically by changing its state (or by not changing it, whichever is appropriate).

Most kinds of automated tests involve asking questions of the product that begin “Do you still…?” “Do you still produce this output when given this input?” “Do you still finish this process within a certain amount of time?” “Do you still produce the same results on this platform as you did on that one?” That is to say, the tests are confirmatory. These tests are valuable, especially to programmers, because they serve as important detectors of undesired side effects when the program changes.
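
To make the distinction concrete, here’s a minimal sketch in Python’s unittest style. The parse_date function is my own hypothetical example, not anything from the Agile Testing list. Each test asks a “Do you still…?” question and confirms a known answer:

    import unittest

    # A hypothetical function under test: parses "YYYY-MM-DD" into a tuple.
    def parse_date(text):
        year, month, day = text.split("-")
        return (int(year), int(month), int(day))

    class TestParseDate(unittest.TestCase):
        # Each test confirms that a known input still yields a known answer.
        def test_still_parses_a_typical_date(self):
            self.assertEqual(parse_date("2005-08-08"), (2005, 8, 8))

        def test_still_rejects_garbage(self):
            self.assertRaises(ValueError, parse_date, "not a date")

    if __name__ == "__main__":
        unittest.main()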

However, it sometimes seems to me as though the Agilistas believe that these are the only questions worth asking. A more thorough approach to testing, in my view, involves questions of the product that begin “What if…?”: “What if I try to overwhelm you with a volume of data that you didn’t expect?” “What if I compare your behaviour to a previous version of this product, or to some other product in its category?” “What if I pretend to be a novice or malicious user?” “What if I observe some aspect of your behaviour to which I haven’t had access, or to which I haven’t been paying attention?”
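
By contrast, here’s a sketch of a “What if…?” probe against the same hypothetical parse_date. There’s no predicted answer to confirm; the point is to overwhelm the function with input it didn’t expect and to notice anything surprising:

    import random
    import string

    def parse_date(text):  # the same hypothetical parser sketched above
        year, month, day = text.split("-")
        return (int(year), int(month), int(day))

    # "What if I overwhelm you with input you didn't expect?" This probe has
    # no pass/fail oracle; it just records surprises for a human to study.
    def probe(parse, trials=10000):
        surprises = []
        for _ in range(trials):
            length = random.randint(0, 200)
            text = "".join(random.choice(string.printable) for _ in range(length))
            try:
                parse(text)
            except ValueError:
                pass  # a rejection we'd expect for garbage input
            except Exception as exc:
                surprises.append((text[:40], repr(exc)))
        return surprises

    # For this toy parser the list may well be empty; for a real product,
    # anything that shows up here is a question worth investigating.
    for sample, error in probe(parse_date):
        print(error, "from input starting with:", repr(sample))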

When agile developers do test-driven development, I think they do ask some of these questions, and answer them with automated tests. That’s a Good Thing, and very powerful; I think that harsh, critical TDD-level tests will eliminate a lot of bugs. However, I think it’s important to remember Ward Cunningham’s observation that TDD is a design activity, not a testing activity, lest we get into trouble.

The trouble comes in a few different forms. First, TDD tests will reflect the biases and errors of omission that programmers are wont to make. (Pair programming is a partial antidote to this.) Second, if TDD tests are intended to drive design, the quality of the tests as tests is open to question. Third, at some point during the development of a unit of code, there’s a strong (and reasonable) temptation to stop asking questions and move on to the next bit of code to be written. At that point, the TDD tests stop being investigative in any way, and start being confirmatory. I think that’s good design, but it’s not in any way guaranteed to be thorough testing.
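
The first form of trouble, errors of omission, is easy to illustrate with the same hypothetical parser. The TDD-style tests below all pass, and the design they drove is settled; but nobody ever asked whether a month of 13 is nonsense:

    import unittest

    def parse_date(text):  # the hypothetical parser once more
        year, month, day = text.split("-")
        return (int(year), int(month), int(day))

    class TestParseDateDesign(unittest.TestCase):
        # The questions the programmer thought to ask while driving the design:
        def test_parses_a_typical_date(self):
            self.assertEqual(parse_date("2005-08-08"), (2005, 8, 8))

        def test_rejects_garbage(self):
            self.assertRaises(ValueError, parse_date, "not a date")

        # The question nobody asked: parse_date("2005-13-45") quietly returns
        # (2005, 13, 45). The suite is green, but the bug lives in the gap
        # between the questions that were asked and the ones that weren't.

    if __name__ == "__main__":
        unittest.main()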