Blog Posts for the ‘Emotions’ Category

Confusion as an Oracle

Monday, October 17th, 2011

A couple of weeks back, Sylvia Killinen (@skillinen on Twitter) tweeted:

“Seems to me that much of #testing relies on noticing when one is confused rather than accepting it as Something Computer Programs Do.”

That’s a beautiful observation, near and dear to my heart since 2007 at least. The night I read Sylvia’s tweet, I wanted to blog more on the subject, but sometimes blog posts go in a different direction from where I intend them to go. At the time, I went here. And now I’m back.

Sylvia’s tweet reminded me of a story that Jon Bach tells about learning testing with his brother James. Jon had been working in a number of less-than-prestigious jobs. James suggested that Jon become a tester, and offered to train him to be an excellent one. Jon agreed. Everything went fine for the first couple of weeks, but one day Jon slumped into James’ office looking dejected and demoralized. The conversation went something like this.

“What’s the problem?” asked James.

“I dunno,” said Jon. “I don’t think this whole becoming-a-tester thing is going to work out.”

“Not work out? But you’re doing great!” said James.

“Well, it might look that way to you, but…” Jon paused.

“So what’s the problem?”

“Well, you gave me this program to test,” Jon began. “But I’m just so confused.”

James peered over his glasses. “When you’re confused,” he said, “that’s a sign that there’s something confusing going on. I gave you a confusing product to test. Confusion might not be fun, but it’s a natural consequence when you’re dealing with a confusing product.” James was tacitly suggesting that Jon’s confusion could be used as an oracle—a heuristic principle or mechanism by which we recognize a problem.

This little story suggests a number of serious and important points.

As I mentioned before, here, feelings don’t tell us what they’re about. Confusion doesn’t come with an arrow that points directly back to its source. Jon felt confused, and thought that the confusion was about him. But that confusion wasn’t just about Jon’s internal state; it was also about the state of the product and how Jon felt about it. Feelings—internal, non-specific, and ambiguous—don’t tell us what’s going on; they tell us to pay attention to what’s happening around us. When you’re a novice, you might be inclined to believe that your feelings are telling you about yourself, but that’s unlikely to be the whole story, since emotions don’t happen in isolation from everything else. It’s more probable that your feelings are telling you about the relationship between you and something else, or someone else, or the situation.

Which reminds me of another story. It happened at Jerry Weinberg’s Problem Solving Leadership workshop in 2008. PSL is full of challenging and rich and tricky exercises, and one day, one team had fallen into a couple of traps and had done rather badly. During the debrief, Jerry remarked on it. “You guys handled a much harder problem than this yesterday, you know. What happened this time?”

One of the participants answered, “The problem screwed us up.”

With only the briefest pause, Jerry peered at the fellow and replied in a gently admonishing way, “Your reaction to the complexity of the problem screwed you up.”

Methodologists and process enthusiasts regularly overlook the complex human and emotional aspects of testing, and so fail to take them into account or use them as a resource. Some actively reject feelings as a rich source of information. One colleague reports that she told her boss about a presentation of mine in which I had discussed the role of emotions in software testing.

“There’s no role for emotions in software testing,” he said quietly.

“I’m not sure I agree,” she said. “I think there might be. I think at least it’s worth considering.”

Abruptly he shouted, “THERE’S NO ROLE FOR EMOTIONS IN SOFTWARE TESTING!”

She remarked that he had seemed agitated—a strange reaction, considering the mismatch between what he was saying and what he appeared to be feeling. What more might we learn by noting his feelings and considering possible interpretations? What inferences might we draw about the differences between his reaction and hers?

As we increasingly emphasize in the Rapid Software Testing course, recognizing and dealing with your feelings is a key self-management skill. Indeed, for testers, feelings are a kind of first-order measurement. It’s okay to be confused. The confusion is valuable and even desirable if it leads you to the right control action, which is to investigate what your emotions might be telling you and why. If we’re willing to acknowledge our feelings, we can use them productively as cues to start looking for oracles and problems in the product that trigger the feelings—before those problems lead our customers to distress.

In my article Testing Without a Map, I discuss some of the oracles that we present in the Rapid Software Testing class and methodology.

Thanks to Sylvia for the inspiration.

I’ll be bringing Rapid Testing to the Netherlands (October 31-November 2), London (November 28-30), and Oslo (December 14-16). See the right-hand panel for registration details. Join us! Spread the word! Thank you!

The Cooking Detector

Friday, September 23rd, 2011

A heuristic is a fallible method for solving a problem or making a decision. “Heuristic” as an adjective means “something that helps us to learn”. In testing, an oracle is a heuristic principle or mechanism by which we recognize a problem.

Some years ago, during a lunch break from the Rapid Software Testing class, a tester remarked that he was having a good time, but that he wanted to know how to get over the boredom that he experienced whenever he was testing. I suggested to him that if he found testing boring, something was wrong, and that he could treat the boredom as something we call a trigger heuristic. A trigger heuristic is like an alarm clock for a slumbering mind. Emotions are natural trigger heuristics, nature’s way of stirring us to wake up and pay attention. What was the boredom signalling? Maybe he was covering ground that he had covered before. Maybe the risks that he had in mind weren’t terribly significant, and other, more important risks were looming. Maybe the work he was doing was repetitive and mechanical, better left to a machine.

Somewhat later, I realized that every time I had seen a bug in a piece of software, an emotion had been involved in the discovery. Surprise naturally suggested some kind of unexpected outcome. Amusement followed an observation of something that looked silly and that posed a threat to someone’s image. Frustration typically meant that I had been stymied in something that I wanted to accomplish.

There is a catch with emotions, though: they don’t tell you explicitly what they’re about. In that, they’re like this device we have in our home. It’s mounted in a hallway, and it’s designed to alert us to danger. It does that heuristically: it emits a terrible, piercing noise whenever I’m baking bread or broiling a steak. And that’s why, in our house, we call it the cooking detector. The cooking detector, as you may have guessed, came in a clear plastic package labelled “smoke detector”.

[Image: a smoke alarm]

When the cooking detector goes off, it startles us and definitely gets our attention. When that happens, we make more careful observations (look around; look at the oven; check for a fire; observe the air around us). We determine the meaning of our observations (typically “there’s enough smoke to set off the cooking detector, and it’s because we’re cooking”); and we evaluate the significance of them (typically, “no big deal, but the noise is enough to make us want to do something”). Whereupon we perform some appropriate control action: turn on the fan over the stove, open a door or a window, turn down the oven temperature, mop up any oil that has spilled inside the oven, check to make sure that the steak hasn’t caught fire. Oh, and reset the damned cooking detector.

Notice that the package says “smoke detector”, not “fire detector”. The cooking detector apparently can’t detect fires. Indeed, on the two occasions that we’ve had an actual fire in the kitchen (once in the oven and once in the toaster oven), the cooking detector remained resolutely and ironically silent. We were already in the kitchen, and noticed the fires and put them out before the cooking detector detected the smoke. Had one of the fires got bad enough, I’m reasonably certain the cooking detector would have squawked eventually. That’s a good thing. Even though our wiring is in good shape, we don’t smoke, and the kids are fire-aware, one never knows what could happen. The alarm could give us a chance to extinguish a fire early, to help to reduce damage, or to escape life-threatening danger.

The cooking detector is like a programmer’s unit test—an automated check. It makes a low-level, one-dimensional, one-bit observation: smoke, or no smoke. It’s oblivious to any threat that doesn’t manifest itself as smoke, such as the arrival of a burglar or a structural weakness in the building. The maximum value of the cooking detector is unlikely to be realized. It occasionally raises a ruckus, and when it does, it doesn’t tell us what the ruckus is about. Usually it’s for something that we can understand, explain, and deal with quickly and easily. Smoke doesn’t automatically mean fire. The cooking detector is an oracle, a device that provides a heuristic trigger, and heuristic devices are fallible. The cooking detector doesn’t tell us that there is a problem; only that there might be a problem. We have to figure out whether there’s a problem, and if so, what the problem is.
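To make the analogy concrete, here’s a minimal sketch in Python of what such a one-bit check might look like. (The names, the threshold, and the stubbed-in sensor reading are all invented for illustration; they don’t come from any real detector or test suite.) The check collapses a rich situation into a single true-or-false observation:

    # A one-bit check, analogous to the cooking detector: it observes a
    # single dimension (smoke) and reports only "problem" or "no problem".

    ALARM_THRESHOLD = 0.3  # arbitrary sensitivity, fixed in advance


    def measure_smoke() -> float:
        """Stand-in for a real sensor reading: 0.0 (clear air) to 1.0 (opaque)."""
        return 0.1


    def smoke_detected() -> bool:
        """True if smoke exceeds the threshold; knows nothing about fire."""
        return measure_smoke() > ALARM_THRESHOLD


    def test_kitchen():
        # The check can say only that there MIGHT be a problem. A human still
        # has to investigate: broiling steak, burnt toast, or an actual fire?
        assert not smoke_detected(), "Smoke above threshold: investigate!"

Like the detector on our hallway wall, a check like this will squawk at a harmless broiling steak and stay silent about burglars and weak joists, since those lie outside its one dimension. It can signal that there might be a problem; deciding whether there really is one remains our job.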

Postscript: There’s another thing. As the late Jerry Weinberg put it, “…when managers and developers see that the product works, telling them it works seems to provide no information; that is, “no information” seems to equal “no value”. It’s the same reason why people don’t replace the batteries in their smoke alarms—most of the time a non-functioning smoke alarm is behaviourally indistinguishable from one that works. Sadly, the most common reminder to replace the batteries is a fire.” (Perfect Software and Other Illusions about Testing, p. 71.)

Yet the cooking detector comes at low cost. It didn’t cost much to buy, it takes one battery a year, and it’s easy to reset. More importantly, the problem to which it alerts us is potentially terrible. Although the cooking detector doesn’t tell us what the problem is, it tells us to pay attention so that we can investigate and decide what to do before a problem gets serious without our noticing. Smoke doesn’t automatically mean fire, but it does mean smoke. Where there’s smoke, maybe there’s fire, or maybe there’s something else that’s unpleasant or dangerous. The cooking detector reminds us to check the steak, open the windows, clean the oven every once in a while, evaluate what’s going on. I don’t believe that the cooking detector will ever detect a real, serious problem that we don’t know about already—but I’m not prepared to bet my family’s life on that.