
Very Short Blog Posts (34): Checking Inside Exploration

Some might believe that checking and exploratory work are antithetical. Not so.

In our definition, checking is “the algorithmic process of operating and observing a product, applying decision rules to those observations, and reporting the outcome of those decision rules”.

We might want to use some routine checks, but not all checks have to be rote. We can harness algorithms and tools to induce variation that can help us find bugs. Both during development of a feature and when we’re studying and evaluating it, we can run checks that use variable input; in randomized sequences; at varying pace; all while attempting to stress out or overwhelm the product.
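For example, here is a minimal sketch in Python of what such a check might look like; the product function (discount_for) and its decision rule are hypothetical, invented purely for illustration:

```python
import random
import time

# Hypothetical product function under evaluation (not from the post).
from product import discount_for

def check_discount(order_total):
    """Decision rule: a discount must never be negative or exceed the order total."""
    discount = discount_for(order_total)
    assert 0 <= discount <= order_total, (
        f"discount {discount} out of range for total {order_total}"
    )

# Variable input, in a randomized sequence, at a varying pace.
totals = [random.uniform(-10, 10_000) for _ in range(500)]
random.shuffle(totals)
for total in totals:
    check_discount(total)
    time.sleep(random.choice([0, 0, 0.01, 0.25]))  # irregular pacing between checks
```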

For instance, consider introducing variation and invalid data into checks embedded in a performance test while turning up the load. When we do that, we can discover how much load it takes for the product to fall to its knees, how the product fails, and what happens next. That in turn affords the opportunity to find out whether the product copes with the overhead associated with error handling, which can otherwise set off feedback loops and cascades of stress and performance problems.
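One way that might look in practice is sketched below, assuming a hypothetical HTTP endpoint and the Python requests library; the URL, payloads, and load levels are invented for illustration:

```python
import concurrent.futures
import random
import time

import requests

URL = "http://localhost:8080/api/orders"  # hypothetical endpoint

VALID = {"item": "widget", "qty": 3}
INVALID = [{"item": "widget", "qty": -1}, {"qty": "three"}, {}]

def one_request(_):
    # Mix mostly valid requests with occasional invalid ones.
    payload = VALID if random.random() < 0.8 else random.choice(INVALID)
    start = time.monotonic()
    try:
        response = requests.post(URL, json=payload, timeout=5)
        return response.status_code, time.monotonic() - start
    except requests.RequestException as exc:
        return type(exc).__name__, time.monotonic() - start

# Turn up the stress: ramp the number of concurrent clients and watch how
# response codes and latencies change as error handling starts to dominate.
for workers in (5, 20, 80, 320):
    with concurrent.futures.ThreadPoolExecutor(workers) as pool:
        results = list(pool.map(one_request, range(workers * 10)))
    codes = sorted({code for code, _ in results})
    worst = max(elapsed for _, elapsed in results)
    print(f"{workers} clients: outcomes {codes}, worst latency {worst:.2f}s")
```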

We can use checks as benchmarks, too. If a function takes significantly and surprisingly more or less time to do its work after a change, we have reason to suspect a problem.
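A benchmark-style check might look something like this sketch; the function under test, the fixture file, and the recorded baseline are hypothetical:

```python
import statistics
import time

# Hypothetical function under test, plus a baseline recorded on an earlier build.
from product import parse_report
BASELINE_SECONDS = 0.040  # median timing observed before the change

def time_once():
    start = time.perf_counter()
    parse_report("fixtures/typical_report.xml")
    return time.perf_counter() - start

median = statistics.median(time_once() for _ in range(50))

# Surprisingly slower *or* faster than the baseline is worth investigating.
if not (BASELINE_SECONDS / 2 < median < BASELINE_SECONDS * 2):
    print(f"Suspicious: median {median:.3f}s vs baseline {BASELINE_SECONDS:.3f}s")
```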

We could run checks in a confirmatory way, which, alas, seems to be the only way most people use them. But we can also design and run checks with a disconfirmatory, exploratory approach, affording the discovery of bugs and problems. Checking is always embedded in testing, which is fundamentally exploratory to some degree. If we want to find problems, it would be a good idea to explore the test space, not just tread the same path over and over.

1 reply to “Very Short Blog Posts (34): Checking Inside Exploration”

  1. This is what I try to get my colleagues to understand. They keep missing bugs because they follow highly specific test steps to the letter. I encourage them to do that first if they need to, then set the test case aside and think of what else could happen.

    That’s a good idea. I’d suggest something a little different, though: glance over the test case; try to discern the task that’s being modeled or the information that’s being sought; then try to fulfill the task without referring to the test case. That puts the tester in command: trying the task, and stumbling over problems on the way to the goal. Since the end user is not going to be following the test case either, those problems are likely to be bugs.

