Blog Posts from January, 2014

Very Short Blog Posts (11): Passing Test Cases

Wednesday, January 29th, 2014

Testing is not about making sure that test cases pass. It’s about using any means to find problems that harm or annoy people. Testing involves far more than checking to see that the program returns a functionally correct result from a calculation. Testing means putting something to the test, investigating and learning about it through experimentation, interaction, and challenge. Yes, tools may help in important ways, but the point is to discover how the product serves human purposes, and how it might miss the mark. So a skilled tester does not ask simply “Does this check pass or fail?” Instead, the skilled tester probes the product and asks a much richer and more fundamental question: Is there a problem here?

The Pause

Thursday, January 16th, 2014

I would like to remind people involved in testing that—after an engaged brain—one of our most useful testing tools is… the pause.

A pause is precisely the effect delivered by the application of four little words: Huh? Really? And? So? Each word prompts a pause, a little breathing space in which questions oriented towards critical thinking have time to come to mind.

  • Wait…huh? Did I hear that properly? Does it mean what I think it means?
  • Um…Really? Does that match with my experience and understanding of the world as it is, or how it might be? What are some plausible alternative interpretations for what I’ve just heard or read? How might we be fooled by it?
  • Just a sec… And? What additional information might be missing? What other meanings could I infer?
  • Okay…So? What are some consequences or ramifications of those interpretations? What might follow? What do we do—or say, or ask—next?

Those four words nestle nicely between the four elements of the Satir Interaction Model—Intake (huh?), Meaning (really? and?), Significance (so?), and Response.

We recently added “And” to the earlier set of “Huh? Really? So?” upon which James Bach elaborates here.

Very Short Blog Posts (10): Planning and Preparation

Wednesday, January 15th, 2014

A plan is not a document. A plan is a set of ideas that may be represented by a document or by other kinds of artifacts. In Rapid Testing, we emphasize preparing your mind, your skills, and your tools, and sharpening them all as you go. We don’t reject planning, but we de-emphasize it in favour of preparation. We also recommend that you keep the artifacts that represent your plans as concise and as flexible as they can reasonably be.

The world of technology is complex and constantly changing. If you’re prepared, you have a much better chance of adapting and reacting appropriately to a situation when the plans have gone awry. But all the planning in the world can’t help you if you’re not prepared.

Not-So-Great Expectations

Friday, January 10th, 2014

In my teaching and consulting work, I often ask people how they recognize a problem, and they often offer “inconsistency with expectations” as one way to do it. I agree, but I also think we should be careful to think things through. A product that fulfills our expectations may not be satisfying, and a product that violates our expectations may be terrific.

Several years ago, I bought a new computer for the kids. The discount retailer offered only Windows Vista as the operating system. I had already heard plenty about Vista, and I considered going elsewhere. However, I figured that it might be a good thing to have one computer with Vista in the house for testing purposes—and for the kids, it would Build Character.

In this house, “Daddy” also means “technical support”, so I frequently found myself troubleshooting and configuring the kids’ system. I was a Windows XP user at the time. As usual, Microsoft had run its User Interface Scrambling and Feature Hiding tool just before Vista’s general release, so I couldn’t find the controls and settings I wanted to find. Moreover, every time I tried to do something, Vista would interrupt me, pop up a dialog, and ask if I had really asked to do that something. If Vista had been an employee, I would have fired it on the first day. “Yes, I did want to do that. Why do you keep asking me? Yes, I am who I say I am. Why do you keep asking me that? Don’t you think if I weren’t who I said I was, I would simply lie to you and answer Yes?” I expected Vista to suck. And it sucked in all the ways I had expected, and more. But it was consistent with those expectations.

More recently, I purchased a Nest thermostat. My expectations weren’t met; they were exceeded. I was expecting to have to fetch a screwdriver to install it; nope, the unit comes with a screwdriver (both slot and Phillips head). I was expecting to have to do the fiddly wrap-wires-around-a-terminal thing. When I opened the package, the thermostat didn’t have screw terminals, as I had expected; it had clearly labeled, spring-loaded, easy-to-use wire clips. I was expecting that I’d have to patch and paint where the big old thermostat had been; the Nest is small and round. No problem; the Nest comes with a backing plate that’s slightly larger than my old unit’s backing plate. I was expecting to have to connect to my computer or do some other elaborate setup stuff. Nope; the unit automatically found our wireless router, asked me for the access key, and connected itself to the Web. So the product wasn’t what I expected, but that wasn’t a problem.

Expectations typically refer to the future, but many things that we might call “expectations” in casual speech are not clear in advance. They’re subconscious, tacit, or constructed on the fly. The set of expectations that we could have about something is intractably large, and it would be cognitively difficult and practically expensive to make them explicit and to note them—and for many problems, it wouldn’t be necessary to do that, either. Here’s an example. Look closely, and ask “what’s wrong with this picture?”

Outlook Truncates the Fields

In my version of Microsoft Outlook 2010, I use the search function to find an email message that has a particular string in it. The search returns no results for the current folder, so Outlook offers to search all mail items. In the search result, the fields for the sender, the subject of the message, and the folder all have the last characters truncated.

Neat bug, huh? Now, I could say that I recognized a problem because the behavior was inconsistent with my expectations, and you might say the same. Yet I doubt that we would ever have bothered to express this expectation: “I expect all data fields to display all characters, even the last one”. It might be more accurate to say that, when I spotted the bug, I constructed the expectation in the moment, in response to an observation of a violation of some pattern or consistency heuristic.
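To see how an automated check can stay green while a problem like this persists, here’s a minimal, hypothetical sketch in Python. The function name and the truncation behaviour are invented for illustration; this is not Outlook’s actual code, just a model of a rendering bug that a data-level check would never notice:

```python
# Hypothetical sketch: a check on the underlying data passes,
# while the rendered field silently drops the last character.

def render_field(text, width):
    # Imagined rendering bug: the display area is one character
    # narrower than requested, so the final character never appears.
    return text[:width - 1]

subject = "Quarterly report"

# A typical automated check: does the stored subject match?
assert subject == "Quarterly report"  # passes; the stored data is fine

# What a human actually sees in the search results pane:
displayed = render_field(subject, len(subject))
print(displayed)  # "Quarterly repor" - last character truncated

# The check is green, but there is still a problem here.
```

The check compares stored data against an explicit expectation and passes; only a human looking at the rendered output constructs, in the moment, the expectation that the last character should be visible.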

Our expectations, desires and requirements for a product aren’t static. We develop them as we develop the product, as we become familiar with it, and as we develop our relationship with it. When people say that something is a bug because it’s inconsistent with their expectations, I think they might be confusing expectations with desires. That is, when people say “that’s not what I expected,” they really mean “that’s not what I want”. Inconsistency with expectations isn’t a big deal at all when people are surprised or delighted by the product. In some cases, inconsistency with expectations might cause temporary discomfort or disorientation, but after a bit of experience and familiarity, what we have might turn out to be better than what we expected. Moreover, someone might have a set of expectations fulfilled, and upon observation and reflection they might realize that what they expected was not what they wanted after all. As oracles—means by which we identify a problem during testing—an inconsistency with history or expectations might be superseded by a consistency with purpose.

Every moment that we’re testing, we have the capacity to recognize a new risk, a new problem, or a buried assumption, even without an explicit expectation. Humans can do this at the drop of a hat, but machines cannot. This is why it’s so important to have humans testing, interacting with the product, observing and experimenting at all levels and with all of the product’s interfaces. By all means use tools to help you, but beware The Narcotic Comfort of the Green Bar! Operate and observe the product directly, and re-evaluate your checks and their outputs from time to time. Don’t just ask, “Is this what we expect?” Ask also, “Is this (still) what we want?”