Evolving Understanding About Exploratory Testing

(This post from the past was a stepping stone on the way to our current thinking about exploratory testing, which is that it’s… testing.

This post remains here as a historical artifact.)

One of the highlights of CAST 2008 was Cem Kaner’s talk “The Value of Checklists and the Danger of Scripts: What Legal Training Suggests for Testers.” A big part of the talk was the contrast between exploratory and scripted processes, wherein Cem contrasted scripts—canned, step-by-step instructions that are executed more or less automatically, whether by an automaton or a human acting like one—and checklists—sets of things that we might want to consider, where the decision to consider them is under the control of the person using the list. For me, what was most valuable about the talk was the evolving story of the nature of exploration and exploratory testing. So, as of September 21, 2008, here’s a report on my version (as usual, strongly entwined with Cem’s and with James Bach’s) of the story so far. One goal of this exercise is to be able to point people to this post instead of repeating myself in testing forums.

Testing is questioning the product in order to evaluate it; that’s James’ definition. For Cem, exploratory testing these days is (deep breath, now) “a style of testing that emphasizes the freedom and responsibility of the individual tester to continually optimize the quality of her work by treating test design, test execution, test result interpretation, and learning as mutually supporting activities that continue in parallel throughout the course of the project.” This was the definition that he synthesized around the time of the Workshop on Heuristic and Exploratory Techniques in Palm Bay, FL, May 2006. (James Bach, Jonathan Bach, Scott Barber, Tim Coulter, Rebecca Fiedler, David Gilbert, Marianne Guntow, James Lyndsay, Robert Sabourin, Adam White, Cem, and I were participants in that workshop.) The short version—James’—is “simultaneous test design, test execution, and learning.” The two definitions are intended to mean the same thing. One is more explicit; the other is quicker to say.

The opposite of exploratory testing is scripted testing. Neither of these is a technique; they’re both approaches to testing. Irrespective of any other dimension of it, we call a test more exploratory and less scripted to the extent that

  • elements of design, execution, interpretation, and learning are performed by the same person;
  • the design, execution, interpretation, and learning happen together, rather than being separated in time;
  • the tester is making her own choices about what to test, when to test it, and how to test it—the tester may use any automation or tools in support of her testing, or none at all, as she sees fit;
  • everything that has been learned so far, including the result of the last test, informs the tester’s choices about the next test;
  • the tester is focused on revealing new information, rather than confirming existing knowledge about the product;
  • in general, the tester is varying aspects of her tests rather than repeating them, except where the repeating aspects of the test are intended to support the discovery of new information.

Whether a test is a black-box test (performed with less knowledge of the internals of the code) or a glass-box test (performed with more knowledge of the internals of the code) is orthogonal to the exploratory or scripted dimension. A white-box test can be done either in an exploratory or a scripted way; a black-box test can be done either way too. The key here is the cognitive engagement of the tester, and the extent to which she manages her choices and her time.

Automation (“any use of tools to support testing”) can be used in a scripted way or in an exploratory way. If the tool is being used to reveal new information rather than to confirm what we already know, or if the tester is using the tool such that activities or data are varying instead of repeating, then the test tends to be more exploratory and less scripted.
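
To make that contrast concrete, here is a minimal sketch in Python (mine, not from Cem’s talk) of one tool used in both ways. The parse_price function and its inputs are made up for illustration; the point is only that the first use re-confirms a single known answer on every run, while the second varies its data on every run and surfaces whatever happens for a human to interpret.

    import random
    import string

    def parse_price(text):
        """Toy function under test (hypothetical): parse a price string into cents."""
        cleaned = text.strip().lstrip("$").replace(",", "")
        dollars, _, cents = cleaned.partition(".")
        return int(dollars) * 100 + int(cents or 0)

    # Scripted use of the tool: same input, same expected result, every run.
    assert parse_price("$1,234.56") == 123456

    # A more exploratory use of the same tool: vary the data on every run and
    # report whatever happens for the tester to evaluate, rather than
    # re-confirming one known answer.
    random.seed()  # deliberately different data each run
    for _ in range(20):
        text = "".join(random.choice("$0123456789.," + string.whitespace) for _ in range(10))
        try:
            print(f"{text!r} -> {parse_price(text)}")  # an observation for a human to weigh
        except Exception as error:
            print(f"{text!r} raised {error!r}")  # a possible bug, or merely an uninteresting input?

Whether any given line of that output matters is still the tester’s call; the tool only widens what she gets to observe.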

For the people who will inevitably ask “How is exploratory testing different from ad hoc testing?”, I’ll start by replying that I can’t make the distinction until you’ve provided me with a notion of what “ad hoc” means to you. Some people believe that “ad hoc” is a synonym for “sloppy” or “slapdash”. It isn’t. Exploratory testing done well is neither sloppy nor slapdash, of course.

When I go to the dictionary, I find that “ad hoc” means literally “to this”, and by convention “to this purpose”. The Rogers Commission on the Challenger was an ad hoc commission—that is, it was a commission set up for a purpose, and dissolved after its purpose was fulfilled. In that sense, “ad hoc” and “exploratory” aren’t really in different categories. Almost all testing worth doing, whether exploratory or scripted, is ad hoc; it’s done in service of some purpose, and it stops after that purpose is fulfilled. So I can’t be sure what you mean by “ad hoc” until you tell me. I’m providing my definition of exploratory testing here; you can contrast it with your notions of “ad hoc” if you like.

Is exploratory testing something that you want to do when you want fast results? Exploratory approaches tend to be faster than scripted approaches, since in an exploratory approach there are fewer gaps or lags in passing learning from person to person, and since learning tends to be faster when people are in control of their own processes.

In an exploratory mode, the tester tends to be more cognitively engaged; dynamically manages her focus; makes many observations simultaneously—some consciously and some not; makes rapid evaluations—again, some consciously and some not; and makes instant decisions as to what to do next. Even in a scripted mode, it’s hard to conceive of the notion of a single, atomic test; even a terribly disengaged human will make an insightful, off-script observation every now and again.

At least in part because of its tendency towards repetition and disengagement, human scripted test execution often feels like plodding. Scripted test execution by a machine tends to take less time than scripted test execution performed by a human, but preparing a script (whether it is to be performed by a machine or a human) tends to take longer than not preparing a script—a cost of automation that is sometimes hidden in plain sight.

As Jerry Weinberg points out in Perfect Software and Other Illusions About Testing, many important tests can happen without fingers hitting a keyboard or automation being run. Exploratory approaches can be applied to any aspect of the development process, and not just to test execution (that is, configuring, operating, observing, and evaluating the running product). Review can be done in an exploratory way or in a scripted way; it is more exploratory to the extent that the reviewer controls his own choices about what to observe.

For example, a review may be done freestyle (in which case it might be highly exploratory); under the guidance of a set of ideas, perhaps in the form of a checklist (in which case it may be somewhat more scripted and somewhat less exploratory); or under the control of a strict set of conditions such as those used by static code analyzers (in which case it’s entirely scripted).

For the programmers, testing (questioning the product in order to evaluate it, remember) is a natural element of pair programming, and since the design, execution, interpretation, and learning are highly integrated, pair programming is an exploratory process. TDD has a strong exploratory element for the same reason. Indeed, Elisabeth Hendrickson quotes one programmer’s “aha!” moment about ET: “I get it! It’s test-driven testing!” I had a similar “aha!” moment when I realized that TDD, done well, was a highly exploratory style of development.
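
By way of illustration (my sketch, not Elisabeth’s or that programmer’s), here is the TDD rhythm in miniature, using Python’s built-in unittest. Imagine each test below written first, watched to fail, and then satisfied by the smallest change to the hypothetical word_count function, with each result suggesting the next question to ask; that tight loop of design, execution, interpretation, and learning is where the exploratory character lives.

    import unittest

    def word_count(text):
        """Counts whitespace-separated words; imagined here as grown one failing test at a time."""
        return len(text.split())

    class WordCountTests(unittest.TestCase):
        def test_empty_string_has_no_words(self):
            self.assertEqual(word_count(""), 0)

        def test_simple_sentence(self):
            self.assertEqual(word_count("exploratory testing is testing"), 4)

        def test_irregular_spacing(self):
            # Writing the previous test prompted a new question: what about odd whitespace?
            self.assertEqual(word_count("  spaced   out  "), 2)

    if __name__ == "__main__":
        unittest.main()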

That’s the story so far. Both testing and learning are highly open-ended and exploratory processes, so there will likely be more as we explore, learn, and test these ideas.

Update, May 1, 2013: For several years, I’ve been maintaining a list of structures of exploratory testing. This list is, and will remain, a work in progress. However, if you’ve read this far, the list may be interesting to you.

And, as above, see our current thinking about exploratory testing: it’s testing.

4 replies to “Evolving Understanding About Exploratory Testing”

  1. Great post Michael, outstanding definition of exploratory testing. Posts like these are the guides that all testers, test managers and developers should be consulting when talking about approaches and the pros and cons to them.

    Thanks.

  2. Michael — Excellent post, as always. I especially like your last two bullet points…

    * the tester is focused on revealing new information, rather than confirming existing knowledge about the product;
    * in general, the tester is varying aspects of her tests rather than repeating them, except where the repeating aspects of the test are intended to support the discovery of new information.

    Not a distinction I had specifically considered as of yet, but a really good point. Since you specifically say you wrote this here so you would not have to repeat it on forums all around, I am now going to promptly go and reference it in a few of my favorite forums!! 😉

    David Gilbert

  3. Michael,

    Thank you very much for the tips. As a beginner and self-taught tester I recognize that my knowledge is still very raw and needs great development. This is actually the main reason that I joined the test republic and right now am reading and enjoying the discussions that are posted there, trying to give my humble collaboration.

    Your blog is new to me, but I’ve already spotted some posts that really interested me. I only hope that I can catch up with all the precious information that is available.

    BTW, according to the self-exam that you passed, I can say that I am already using ET, which made me very glad.

    Thanks again

    Rafael Sartor
