
On Green

A little while ago, I took a look at what happens when a check runs red. Since then, comments and conversations with colleagues have emphasized this point from the post: it’s overwhelmingly common first to doubt the red result, and then to doubt the check. A red check can provoke a kind of panic in some testers, because it takes away a green check’s comforting—even narcotic—confirmation that Everything Is Going Just Fine.

Skepticism about any kind of test result is reasonable, of course. Before delivering painful news, it’s natural and responsible for a tester to examine the evidence for it carefully. All software projects—and all decisions about quality—are to some degree loaded with politics and emotions. This is normal.

When a tester’s technical and social skills are strong, and self-esteem is high, those political and emotional considerations are manageable. When we encounter a red check—a suggestion that there might be a problem in the product—we must be prepared for powerful feelings, potential controversy, and cognitive dissonance all around. When people feel politically or emotionally vulnerable, the cognitive dissonance can start to overwhelm the desire to investigate the problem.

Several colleagues have recalled circumstances in which intermittent red checks were considered sufficiently pesky by someone on the project team—even by testers themselves, on occasion—that the checks were ignored or disabled, as one might do with a cooking detector.

So what happens when checks consistently return “green” results?

As my colleague James Bach puts it, checks are like motion detectors around the boundaries of our attention. When the check runs green, it’s easy to remain relaxed. The alarm doesn’t sound; the emergency lighting doesn’t come on; the dog doesn’t bark. If we’re insufficiently attentive and skeptical, every green check helps to confirm that everything is okay.

Kirk and Miller identified a big problem with confirmation:

Most of the technology of “confirmatory” non-qualitative research in both the social and natural sciences is aimed at preventing discovery. When confirmatory research goes smoothly, everything comes out precisely as expected. Received theory is supported by one more example of its usefulness, and requires no change. As in everyday social life, confirmation is exactly the absence of insight. In science, as in life, dramatic new discoveries must almost by definition be accidental (“serendipitous”). Indeed, they occur only in consequence of some mistake.

Kirk, Jerome, and Miller, Marc L. Reliability and Validity in Qualitative Research (Qualitative Research Methods). Sage Publications, Inc., Thousand Oaks, CA, 1985.

It’s the relationship between our checks and our models of them that matters here. When we place unjustified trust in our checks, we have the opposite problem from the one we have with the cooking detector: we’re unlikely to notice that the alarm doesn’t go off when it should. That is, we don’t pay attention.

The good news is that being inattentive is optional. We can choose to hold on to the possibility that something might be wrong with our checks, and to treat the absence of red checks as meta-information: a suspicious silence, rather than a comforting one. The responsible homeowner checks the batteries in the smoke alarm, and the savvy explorer knows when to say “The forest is quiet tonight… maybe too quiet.”
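To make the smoke-alarm analogy concrete, here is a minimal sketch in Python; the function names and the deliberately wrong expectation are hypothetical, not taken from any particular project. The idea is simply to feed a check an expectation you know to be wrong, and to treat a green result on that known-bad input as an alarm about the check itself.

# A minimal sketch (hypothetical names) of "checking the batteries" on a check:
# deliberately feed the check an expectation that is known to be wrong, and
# treat a green result on that input as an alarm about the check itself.

def total_price(quantity, unit_price):
    # The function under check.
    return quantity * unit_price

def price_check(quantity, unit_price, expected):
    # The check itself: returns True ("green") when the result matches.
    return total_price(quantity, unit_price) == expected

def battery_test():
    # If the check comes back green against a deliberately wrong expectation,
    # the check is broken -- a suspicious silence, not a comforting one.
    if price_check(3, 10, expected=-1):
        raise RuntimeError("Check stayed green on a known-bad expectation; "
                           "investigate the check, not just the product.")

if __name__ == "__main__":
    battery_test()
    print("The check can still go red; the batteries seem fine.")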

By putting variation into our testing, we rescue ourselves from the possibility that our checks are too narrow, too specific, or cover too few kinds of risk. If you’re aware of the possibility that your alarm clock might fail to wake you, you’re more likely to take alternative measures to avoid sleeping too long.
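As one illustration of what that variation might look like, here is a minimal sketch in Python; the normalize_whitespace function and its oracle are hypothetical. Instead of asserting on a single hard-coded example, the varied check draws random inputs and compares each output against a simple rule, so every run exercises slightly different conditions.

import random

def normalize_whitespace(text):
    # Hypothetical function under check: collapse runs of whitespace to single spaces.
    return " ".join(text.split())

def fixed_check():
    # A narrow check: one hard-coded input, one expected output.
    return normalize_whitespace("a  b") == "a b"

def varied_check(runs=100, seed=None):
    # A varied check: random inputs compared against a simple oracle
    # (no double spaces, no leading or trailing whitespace).
    rng = random.Random(seed)
    alphabet = "ab \t\n"
    for _ in range(runs):
        text = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, 20)))
        result = normalize_whitespace(text)
        if "  " in result or result != result.strip():
            return False  # red: the oracle caught something the fixed example missed
    return True  # green -- at least for the inputs we happened to try

if __name__ == "__main__":
    print("fixed check:", fixed_check())
    print("varied check:", varied_check(seed=42))

Even then, a green result only means that the inputs we happened to try didn’t expose a problem.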

Valuable conversations with James Bach and Chris Tranter contributed to this post.

7 replies to “On Green”

  1. I would have to say that I don’t fully agree with Kirk and Miller. Much research is aimed at supporting existing theories, and some discoveries are accidental. But a big part of discovery is research into new theories that disagree with existing ideas. Those ideas might be born from accidents or mistakes, and those ideas need to be confirmed through research. So the real inventors are those who know to say “This theory seems conclusive… maybe too conclusive.”

    I also think that the seemingly large share of confirmatory research is due to the tendency of human nature (or the academic environment) to hide failures. Research that does not go smoothly is not talked about. People will look into what went wrong and maybe learn from it, but unless it spectacularly (and accidentally) proves the current theory wrong, it gets hushed up.

    Regarding the alarm clock, it is not just the possibility of the clock failing that would prompt one to take alternative measures, but also how important it is to wake at a certain time. What is going to happen when one sleeps too long? Missing breakfast, or a flight? Or your wedding anniversary surprise?

  2. “Skepticism about any kind of test result is reasonable”

    Can you please explain why that is reasonable?

    Michael replies: It’s reasonable because certainty about any test result is neither available nor appropriate for a tester. (You may be confused about what “skepticism” means. Skepticism is not the rejection of belief; it’s the rejection of certainty about belief. All testers are professionally skeptical; it’s our job to remain professionally uncertain about the system when everyone else around us is sure.)

    Even if it is reasonable to some people, why would a team be allowed to spend their time on this skepticism? They might think that adding a new check is far more valuable than reviewing checks that are already running green, especially when the goal is to increase test coverage. It’s not that they have no idea about those green checks; they have some level of confidence.

    Michael replies: Good testers are not in the confidence business. It’s not our job to build confidence in the product. It’s our job to question and challenge unwarranted confidence.

    I think excellent review requires extensive knowledge of the product, and to build that extensive knowledge you need to spend extensive time.

    Michael replies: I agree that excellent review requires extensive knowledge. We don’t always have extensive time, so we need to learn how to develop extensive knowledge quickly. One good way to do this is to learn about the product through exploration and experimentation; through interaction with the product.

