Very Short Blog Posts (6): Validating Assumptions

Some may say that the purpose of testing is to validate assumptions made by business analysts, designers, or developers. To me, that is at best a potential side effect of testing—but not the goal. If you want testing to reveal important problems in the product, do not focus on validating assumptions (to do so would be more like checking; testing may include some checking). To foster discovery, excellent testing—like excellent science—seeks to invalidate assumptions.

To put it another way, it’s easy to check to show that something can work. As testers, we must probe, challenge and test the assumptions that might cause people to believe mistakenly that it will work. Those assumptions are where the risks live.
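As a hypothetical illustration of the distinction (the function, inputs, and names here are my own, not from the post): a check confirms that a function *can* work for one anticipated input, while testing probes the assumption that it *will* work by feeding it inputs a hopeful implementer might not have considered.

```python
def parse_quantity(text):
    """Hypothetical function under test: parse a quantity field."""
    return int(text.strip())

# A check: demonstrates the function can work for one expected input.
assert parse_quantity("3") == 3

# Probing the assumptions: each input challenges the belief that the
# function will work. A surprising success (e.g. a negative quantity)
# is information too, not just the exceptions.
risky_inputs = ["", "  ", "-1", "3.5", "2,000", None]
for value in risky_inputs:
    try:
        result = parse_quantity(value)
        print(f"{value!r} -> {result}")
    except Exception as exc:
        print(f"{value!r} -> raised {type(exc).__name__}")
```

The passing check tells us little; the probing loop is where we learn whether "quantity" was ever defined well enough to exclude negatives, decimals, or missing values.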

1 reply to “Very Short Blog Posts (6): Validating Assumptions”

  1. Thanks for this, Michael. I’ve been tracking your and James Bach’s thoughts on this for a while, and I’m turning my attention to something in the Heuristic Test Strategy Model. Your own discussion of it in your “Test Patterns” article seems to run contrary to the point you are making here. Specifically, in that article you state the following:

    “Function testing involves identifying each function and then testing it independently. A function causes the system under test to exercise some behavior, either changing or maintaining the system’s state. The object is to ensure that each function does what it should, but also that it doesn’t do what it shouldn’t.”

    What you say here seems intuitive, and most testers I know test this way. In fact I agree with nearly the whole of the statement, but I’m stuck on the part that states we “ensure that each function does what it should.” Since experiment-driven testing is ultimately the practice of taking test outcomes together with test expectations as the premises from which to draw our conclusions, I’m doubtful that this can be accomplished at all, and I believe that we should instead **always** look for demonstrations that the function doesn’t do what it is supposed to do when working as intended/desired. In practice the steps to validate seem to occur anyway as part of the whole test/learn/test cycle, but deliberately taking time to do this kind of checking seems a waste of time. Those, at least, are my assumptions 🙂

    The impetus for my comment is that I’m training a fledgling QA engineer starting next week, and I’m trying to put my head in the right place. I’m verging on telling her to avoid expending effort on checking that the program does what it should do, and instead to ALWAYS test that it doesn’t do what it is supposed to. It’s a strong statement, and I might be wrong, so I’m looking for any challenge to it.

    Michael replies: “Ensure that each function does what it should” could mean: working on a theory that a specific functional requirement must be fulfilled to some degree, seek some kind of indication that the functional requirement has been fulfilled to some degree. That is, check it to see if it can work, since ensuring that it will work is, as you say, impossible. We could also take a less-than-strict reading of “ensure” and interpret that as “test sympathetically”.

    Indeed, times have changed, and my view has sharpened, so now I’d consider “ensure” to be a reasonably poor choice of words. These days I might prefer “observe that each function can do what it should”.

    I also think you’re right to downplay the significance of “doing what it should” since, in trying to find problems, we usually get “doing what it should” as a by-product of a test that doesn’t find a problem. And, of course, you can do both at once. Nonetheless, be careful of “always”. Sympathetic testing can be important when you’re in a high learning mode; looking for benefits in the product that would inform your model of it; looking for desirable behaviour to prepare a search for undesirable behaviour. In addition, we typically have a handful of explicit, closed premises and outcomes in mind for an experiment, but a whole bunch of tacit and open ones too.

    I’m not so worried about your junior tester. If she’s in an environment where her boss reads so closely and so thoughtfully, she’ll do fine.

    I appreciate close readings and critique. Thank you.

