Blog Posts for the ‘Exploratory Testing’ Category

The Honest Manual Writer Heuristic

Monday, May 30th, 2016

Want a quick idea for a burst of activity that will reveal both bugs and opportunities for further exploration? Play “Honest Manual Writer”.

Here’s how it works: imagine you’re the world’s most organized, most thorough, and—above all—most honest documentation writer. Your client has assigned you to write a user manual, including both reference and tutorial material, that describes the product or a particular feature of it. The catch is that, unlike other documentation writers, you won’t base your manual on what the product should do, but on what it does do.

You’re also highly skeptical. If other people have helpfully provided you with requirements documents, specifications, process diagrams or the like, you’re grateful for them, but you treat them as rumours to be mistrusted and challenged. Maybe someone has told you some things about the product. You treat those as rumours too. You know that even with the best of intentions, there’s a risk that even the most skillful people will make mistakes from time to time, so the product may not perform exactly as they have intended or declared. If you’ve got use cases in hand, you recognize that they were written by optimists. You know that in real life, there’s a risk that people will inadvertently blunder or actively misuse the product in ways that its designers and builders never imagined. You’ll definitely keep that possibility in mind as you do the research for the manual.

You’re skeptical about your own understanding of the product, too. You realize that when the product appears to be doing something appropriately, it might be fooling you, or it might be doing something inappropriate at the same time. To reduce the risk of being fooled, you model the product and look at it from lots of perspectives (for example, consider its structure, functions, data, interfaces, platform, operations, and its relationship to time; and business risk, and technical risk). You’re also humble enough to realize that you can be fooled in another way: even when you think you see a problem, the product might be working just fine.

Your diligence and your ethics require you to envision multiple kinds of users and to consider their needs and desires for the product (capability, reliability, usability, charisma, security, scalability, performance, installability, supportability…). Your tutorial will be based on plausible stories about how people would use the product in ways that bring value to them.

You aspire to provide a full accounting of how the product works, how it doesn’t work, and how it might not work—warts and all. To do that well, you’ll have to study the product carefully, exploring it and experimenting with it so that your description of it is as complete and as accurate as it can be.

There’s a risk that problems could happen, and if they do, you certainly don’t want either your client or the reader of your manual to be surprised. So you’ll develop a diversified set of ways to recognize problems that might cause loss, harm, annoyance, or diminished value. Armed with those, you’ll try out the product’s functions, using a wide variety of data. You’ll try to stress out the product, doing one thing after another, just like people do in real life. You’ll involve other people and apply lots of tools to assist you as you go.

For the next 90 minutes, your job is to prepare to write this manual (not to write it, but to do the research you would need to write it well) by interacting with the product or feature. To reduce the risk that you’ll lose track of something important, you’ll probably find it a good idea to map out the product, take notes, make sketches, and so forth. At the end of 90 minutes, check in with your client. Present your findings so far and discuss them. If you have reason to believe that there’s still work to be done, identify what it is, and describe it to your client. If you didn’t do as thorough a job as you could have done, report that forthrightly (remember, you’re super-honest). If anything got in the way of your research or made it more difficult, highlight that; tell your client what you need or recommend. Then have a discussion with your client to agree on what you’ll do next.

Did you notice that I’ve just described testing without using the word “testing”?

On Scripting

Saturday, July 4th, 2015

A script, in the general sense, is something that constrains our actions in some way.

In common talk about testing, there’s one fairly specific and narrow sense of the word “script”—a formal sequence of steps that are intended to specify behaviour on the part of some agent—the tester, a program, or a tool. Let’s call that “formal scripting”. In Rapid Software Testing, we also talk about scripts as something more general, in the same kind of way that some psychologists might talk about “behavioural scripts”: things that direct, constrain, or program our behaviour in some way. Scripts of that nature might be formal or informal, explicit or tacit, and we might follow them consciously or unconsciously. Scripts shape the ways in which people behave, influencing what we might expect people to do in a scenario as the action plays out.

As James Bach says in the comments to our blog post Exploratory Testing 3.0, “By ‘script’ we are speaking of any control system or factor that influences your testing and lies outside of your realm of choice (even temporarily). This does not refer only to specific instructions you are given and that you must follow. Your biases script you. Your ignorance scripts you. Your organization’s culture scripts you. The choices you make and never revisit script you.” (my emphasis, there)

When I’m driving to a party out in the country, the list of directions that I got from the host scripts me. Many other things script me too. The starting time of the party—combined with cultural norms that establish whether I should be very prompt or fashionably late—prompts me to leave home at a certain time. The traffic laws and the local driving culture condition my behaviour and my interactions with other people on the road. The marked detour along the route scripts me, as do the weather and the driving conditions. My temperament and my current emotional state script me too. In this more general sense of “scripting”, any activity can become heavily scripted, even if it isn’t written down in a formal way.

Scripts are not universally bad things, of course. They often provide compelling advantages. Scripts can save cognitive effort; the more my behaviour is scripted, the less I have to think, do research, make choices, or get confused. In my driving example, a certain degree of scripting helps me to get where I’m going, to get along with other drivers, and to avoid certain kinds of trouble. Still, if I want to get to the party without harm to myself or other people, I must bring my own agency to the task and stay vigilant, present, and attentive, making conscious and intentional choices. Scripts might influence my choices, and may even help me make better choices, but they should not control me; I must remain in control. Following a script means giving up engagement and responsibility for that part of the action.

From time to time, testing might include formal testing—testing that must be done in a specific way, or to check specific facts. On those occasions, formal scripting—especially the kind of formal script followed by a machine—might be a reasonable approach, enabling certain kinds of tasks to be performed and managed successfully. A highly scripted approach could be helpful for rote activities like operating the product following explicitly declared steps and then checking for specific outputs. A highly scripted approach might also enable or extend certain kinds of variation—randomizing data, for example. But there are many other activities in testing: learning about the product, designing a test strategy, interviewing a domain expert, recognizing a new risk, investigating a bug—and dealing with problems in formally scripted activities. In those cases, variability and adaptation are essential, and an overly formal approach is likely to be damaging, time-consuming, or outright impossible. Here’s something else that is almost never formally scripted: the behaviour of normal people using software.

Notice on the one hand that formal testing is, by its nature, highly scripted; most of the time, scripting constrains or even prevents exploration by constraining variation. On the other hand, if you want to make really good decisions about what to test formally, how to test formally, why to test formally, it helps enormously to learn about the product in unscripted and informal ways: conversation, experimentation, investigation… So excellent scripted testing and excellent checking are rooted in exploratory work. They begin with exploratory work and depend on exploratory work. To use language as Harry Collins might, scripted testing is parasitic on exploration.

We say that any testing worthy of the name is fundamentally exploratory. We say that to test a product means to evaluate it by learning about it through experimentation and exploration. To explore a product means to investigate it, to examine it, to create and travel over maps and models of it. Testing includes studying the product, modeling it, questioning it, making inferences about it, operating it, observing it. Testing includes reporting, which itself includes choosing what to report and how to contextualize it. We believe these activities cannot be encoded in explicit procedural scripting in the narrow sense that I mentioned earlier, even though they are all scripted to some degree in the more general sense. Excellent testing—excellent learning—requires us to think and to make choices, which includes thinking about what might be scripting us, and deciding whether to control those scripts or to be controlled by them. We must remain aware of the factors that are scripting us so that we can manage them, taking advantage of them when they help and resisting them when they interfere with our mission.

Exploratory Testing 3.0

Tuesday, March 17th, 2015

This blog post was co-authored by James Bach and me. In the unlikely event that you don’t already read James’ blog, I recommend you go there now.

The summary is that we are beginning the process of deprecating the term “exploratory testing”, and replacing it with, simply, “testing”. We’re happy to receive replies either here or on James’ site.

Very Short Blog Posts (12): Scripted Testing Depends on Exploratory Testing

Sunday, February 23rd, 2014

People commonly say that exploratory testing “is a luxury” that “we do after we’ve finished our scripted testing”. Yet there is no scripted procedure for developing a script well. To develop a script, we must explore requirements, specifications, or interfaces. This requires us to investigate the product and the information available to us; to interpret them and to seek ambiguity, incompleteness, and inconsistency; to model the scope of the test space, the coverage, and our oracles; to conjecture, experiment, and make discoveries; to perform testing and obtain feedback on how the scripts relate to the actual product, rather than the one imagined or described or modeled in an artifact; to observe, interpret, and report the test results, and to feed them back into the process; and to do all of those things in loops and bursts of testing activity. All of these are exploratory activities. Scripted testing is preceded by and embedded in exploratory processes that are not luxuries, but essential.

Related posts:

http://www.satisfice.com/blog/archives/856
http://www.developsense.com/blog/2011/05/exploratory-testing-is-all-around-you/
http://www.satisfice.com/blog/archives/496

Very Short Blog Posts (11): Passing Test Cases

Wednesday, January 29th, 2014

Testing is not about making sure that test cases pass. It’s about using any means to find problems that harm or annoy people. Testing involves far more than checking to see that the program returns a functionally correct result from a calculation. Testing means putting something to the test, investigating and learning about it through experimentation, interaction, and challenge. Yes, tools may help in important ways, but the point is to discover how the product serves human purposes, and how it might miss the mark. So a skilled tester does not ask simply “Does this check pass or fail?” Instead, the skilled tester probes the product and asks a much more rich and fundamental question: Is there a problem here?

Why Would a User Do THAT?

Monday, March 4th, 2013

If you’ve been in testing for long enough, you’ll eventually report or demonstrate a problem, and you’ll hear this:

“No user would ever do that.”

Translated into English, that means “No user that I’ve thought of, and that I like, would do that on purpose, or in a way that I’ve imagined.” So here are a few ideas that might help to spur imagination.

  • The user made a simple mistake, based on his erroneous understanding of how the program was supposed to work.
  • The user had a simple slip of the fingers or the mind—inadvertently pasting a letter from his mother into the “Withdrawal Amount” field.
  • The user was distracted by something, and happened to omit an important step from a normal process.
  • The user was curious, and was trying to learn about the system.
  • The user was a hacker, and wanted to find specific vulnerabilities in the system.
  • The user was confused by the poor affordances in the product, and at that point was willing to try anything to get his task accomplished.
  • The user was poorly trained in how to use the product.
  • The user didn’t do that. The product did that, such that the user appeared to do that.
  • Users actually do that all the time, but the designer didn’t realize it, so the product’s design is inconsistent with the way the user actually works.
  • The product used to do it that way, but to the user’s surprise now does it this way.
  • The user was looking specifically for vulnerabilities in the product as a part of an evaluation of competing products.
  • The product did something that the user perceived as unusual, and the user is now exploring to get to the bottom of it.
  • The user did that because some other vulnerability—say, a botched installation of the product—led him there.
  • The user was in another country, where they use commas instead of periods, dashes instead of slashes, kilometres instead of miles… Or where dates aren’t rendered the way we render them here.
  • The user was testing the product.
  • The user didn’t realize this product doesn’t work the way that product does, even though the products have important and relevant similarities.
  • The user did that, prompted by an error in the documentation (which in turn was prompted by an error in a designer’s description of her intentions).
  • To the designer’s surprise, the user didn’t enter the data via the keyboard, but used the clipboard or a programming interface to enter a ton of data all at once.
  • The user was working for another company, and was trying to find problems in an active attempt to embarrass the programmer.
  • The user observed that this sequence of actions works in some other part of the product, and figured that the same sequence of actions would be appropriate here too.
  • The product took a long time to respond, the user got impatient, and started doing other stuff before the product responded to his earlier request.

And I’m not even really getting started. I’m sure you can supply lots more examples.

Do you see? The space of things that people can do intentionally or unintentionally, innocently or malevolently, capably or erroneously, is huge. This is why it’s important to test products not only for repeatability (which, for computer software, is relatively easy to demonstrate) but also for adaptability. In order to do this, we must do much more than show that a program can produce an expected, predicted result. We must also expose the product to reasonably foreseeable misuse, to stress, to the unexpected, and to the unpredicted.

What Exploratory Testing Is Not (Part 5): Undocumented Testing

Wednesday, December 21st, 2011

This week I had the great misfortune of reading yet another article which makes the false and ridiculous claim that exploratory testing is “undocumented”. After years and years of plenty of people talking about and writing about and practicing excellent documentation as part of an exploratory testing approach, it’s depressing to see that there are still people shovelling fresh manure onto a pile that should have been carted off years ago.

Like the other approaches to test activities that have been discussed in this series (“touring”, “after-everything-else”, “tool-free”, and “quick testing”), “documented vs. undocumented” is in a category orthogonal to “exploratory vs. scripted”. True: usually scripted activities are performed by some agency following a set of instructions that has been written down somewhere. But we could choose to think of “scripted” in a slightly different and more expansive way, as “prescriptive”, or “mimeomorphic”. A scripted activity, in this sense, is one for which the actions to be performed have been established in advance, and the choices of the actions are not determined by the agency performing them. In that sense, a cook at McDonald’s doesn’t read a script as he prepares your burger, but the preparation of a McDonald’s burger is a highly scripted activity.

Thus any kind of testing can be heavily documented or completely undocumented. A thoroughly documented test might be highly exploratory in nature, or it might be highly scripted.

In the Rapid Software Testing class, James Bach and I point out that when someone says “that should be documented”, what they’re really saying is “that should be documented if and how and when it serves our purposes.” So, let’s start by looking at the “when”.

When we question anything in order to evaluate it, there are moments in the process in which we might choose to record ideas or actions. I’ve broken these down into three basic categories that I hope you find helpful:

  • Before

  • During

  • After

There are “before”, “during”, and “after” moments with respect to any test activity, whether it’s a part of test design, test execution, result interpretation, or learning. Again, a hallmark of exploratory testing is the tester’s freedom and responsibility to optimize the value of the work as it’s happening. That means that when it’s important to record something, the tester is not only welcome but encouraged to

  • pick up a pen
  • take a screen shot
  • launch a session of Rapid Reporter
  • create or update a mind map
  • fire up a screen recorder
  • initiate logging (if it doesn’t start by default on the product you’re testing—and if logging isn’t available, you might consider identifying that as a testability problem and a related product and project risk)
  • sketch out a flowchart diagram
  • type notes into a private or shared repository
  • add to a table of data in Excel
  • fire off a note to a programmer or a product owner
and that’s an incomplete list. But they’re all forms of documentation.

Freedom to document at will should also mean that the tester is free to refrain from documenting something when the documentation doesn’t add value. At the same time, the tester is responsible and accountable for that decision. In Rapid Testing, we recommend writing down (or saving, or illustrating) only the things that are necessary or valuable to the project, and only when the value of doing so exceeds the cost. This doesn’t mean no documentation; it means the most informative yet fastest and least expensive documentation that completely fulfils the testing mission. Integrating that with testing work leads, we hold, to excellent testing—but it takes practice and skill.

For most test activities, it’s possible to relay information to other people orally, or even sometimes by allowing people to observe our behaviour. (At the beginning of the Rapid Testing class, I sometimes silently hold aloft a 5″ x 8″ index card in landscape orientation. I fold it in half along the horizontal axis, and write my first name on one side using a coloured marker. Everyone in the class mimics my actions. Without a single word of instruction being given or questions being asked, either verbally or in writing, the mission has been accomplished: each person now has a tent card in front of him.)

There’s a potential risk associated with an exploratory approach: that the tester might fail to document something important. In that case, we do what skilled people do with risk: we manage it. James Bach talks at length about managing exploratory testing sessions here. Producing appropriate documentation is partly a technical process, but the technical considerations are dominated by business imperatives: cost, value, and risk. There are social considerations, too. The tester, the test lead, the test manager, the programmers, other managers, and the product owner determine collaboratively what’s important to document and what’s not so important with respect to the current testing mission. In an exploratory approach, we’re more likely to be emphasizing the discovery of new information. So we’re less likely to spend time on documenting what we will do, more likely to document what we are doing and what we have done. We could do a good deal of preparatory reading and writing, even in an exploratory approach—but we realize that there’s an ever-increasing risk that new discoveries will undermine the worth of what we write ahead of time.

That leads directly to “our purposes”, the task that we want to accomplish when documenting something. Just as testing itself has many possible missions, so too does test documentation. Here’s a decidedly non-exhaustive list, prepared over a couple of minutes:

  • to express testing strategy and tactics for an entire project, or for projects in general
  • to keep a set of personal notes to help structure a debriefing conversation
  • to outline testing activities for a test cycle
  • to report on activities during testing execution
  • to outline attributes of a particular quality criterion
  • to catalogue ideas about risk
  • to describe test coverage
  • to account for the work that we’ve done
  • to program a machine to perform a given set of actions
  • to alert people to potential problems in the product
  • to guide a tester’s actions over a test session
  • to identify structures in the application or service
  • to provide a description of how to use a particular test tool that we’ve crafted
  • to describe the tester’s role, skills, and qualifications
  • to explain business rules to someone else on the team
  • to outline scenarios in which the product might be used or tested
  • to identify, for a tester, a specific, explicit sequence of actions to perform, input to provide, and observations to make

That last item is the classic form of highly scripted testing, and that kind of documentation is usually absent from exploratory testing. Even so, a tester can take an exploratory approach using a script as a point of departure or as a reference, just as you might use a trail map to help guide an off-trail hike (among other things, you might want to discover shortcuts or avoid the usual pathways). So when someone says that “exploratory testing is undocumented”, I hear them saying something else. I hear them saying, “I only understand one form of test documentation, and I’ve successfully ignored every other approach to it or purpose for it.”

If you look in the appendices for the Rapid Software Testing class (you can find a .PDF at http://www.satisfice.com/rst-appendices.pdf), you’ll see a large number of examples of documentation that are entirely consistent with an exploratory approach. That’s just one source. For each item in my partial list above, here’s a partial list of approaches, examples, and tools.

Testing strategy and tactics for an entire project, or for projects in general.
Look at the Satisfice Heuristic Test Strategy Model and the Context Model for Heuristic Test Planning (these also appear in the RST Appendices).

An outline of testing activities for a test cycle.
Look at the General Functionality and Stability Test Procedure for Certified for Microsoft Windows Logo. See also the OWL Quality Plan (and the Risk and Task Correlation) in the RST Appendices.

Keeping a set of personal notes to help structure a debriefing or other conversation.
See the “Beans ‘R Us Test Report” in the RST Appendices; or see my notes on testing an in-flight entertainment system which I did for fun on a flight from India to Amsterdam.

Recording activities and ideas during test execution
A video camera or a screen recording tool can capture the specific actions of a tester for later playback and review. Well-designed log files may also provide a kind of retrospective record about what was tested. Still, neither of these provides insight into the tester’s mind. Recorded narration or conversation can do that; tools like BB Test Assistant, Camtasia, or Morae can help. The classic approach, of course, is to take notes. Have a look at my presentation, “An Exploratory Tester’s Notebook”, which has examples of freestyle notes taken during an impromptu testing session, and detailed, annotated examples of Session-Based Test Management sessions. Shmuel Gerson’s Rapid Reporter and Jonathan Kohl’s Session Tester are tools oriented towards taking notes (and, in the former case, including screen captures) of testing sessions on the fly.

Outlining many attributes of a particular quality criterion
See “Heuristics of Software Testability” in the RST Appendices for one example.

Cataloguing ideas about risk
There are several examples of this in the RST Appendices, most extensively in the “Deployment Planning and Risk Analysis” example. You’ll also find an “Install Risk Catalog”; “The Risk of Incompatibility”; the Risk vs. Tasks section in the “OWL Quality Plan”; the “Y2K Compliance Report”; and “Round Results Risk A”, which shows a mapping of Risk Areas vs. Test Strategy and Tasks.

Describing or outlining test coverage
A mapping establishes or illustrates relationships between things. In testing, a map might look like a road map, but it might also look like a list, a chart, a table, or a pile of stories; we can use any of these to help us think about test coverage. These can be constructed before, after, or during a given test activity, with the goal of covering the map with tests, or using testing to extend the map. I catalogued several ways of thinking about coverage and reporting on it in three articles: Got You Covered, Cover or Discover, and A Map By Any Other Name. Several examples of lightweight coverage outlines can be found in the RST Appendices (“Putt Putt Saves the Zoo” and “Table Formatting Test Notes”); there are also coverage ideas incorporated into the Apollo mission notes that we’ve titled “Guideword Heuristics for Astronauts”.

Accounting for testing work that we’ve done.
See Session-Based Test Management, and see “An Exploratory Tester’s Notebook“. Darren McMillan provides excellent examples of annotated mind maps; scroll down to the section headed “Session Reports”, and continue through “Simplifying feedback to management” and “Simplifying feedback to groups”. A forthcoming article, written by me, shows how a senior test manager tracks testing sessions at a half-day granularity level.

Programming a machine to help you to explore
See all manner of books on programming, both references and cookbooks, but for testers in particular, have a look at Brian Marick’s Everyday Scripting with Ruby. Check out Pete Houghton’s splendid examples of exploratory test automation that begin here. Cem Kaner (often in collaboration with Doug Hoffman) writes extensively about automation-assisted exploratory testing; an example is here.
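
To make that concrete, here’s a minimal sketch (mine, not taken from Marick, Houghton, or Kaner) of what automation-assisted exploration can look like in Python: generate varied inputs, push them through the behaviour you’re exploring, and keep only the surprises for a human to investigate. The round-trip property and the seed are arbitrary choices for illustration.

```python
# Sketch: automation-assisted exploration. Generate varied inputs, apply a
# naive expectation, and keep the surprises for human investigation.
import random

rng = random.Random(20111221)              # fixed seed so a surprise can be replayed
surprises = []

for _ in range(100_000):
    value = rng.uniform(-1e12, 1e12)
    text = f"{value:.2f}"                  # format as a currency-like string
    round_tripped = float(text)
    if abs(round_tripped - value) > 0.005:   # expectation: formatting loses < half a cent
        surprises.append((value, text, round_tripped))

print(f"{len(surprises)} surprises in 100000 trials")
for surprise in surprises[:10]:            # a human decides what, if anything, they mean
    print(surprise)
```

Whatever the run flags becomes the starting point for the next round of questions.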

Alerting people to potential problems in the product
In general, bug reporting systems provide one way to handle the task of recording and reporting problems in the product. James Bach provides an example of a report that he provided to a client (along with a more informal account of the session).

Guiding a tester’s actions over a test session
Guiding a tester involves skills like chartering and checklisting. Start with the documentation on Session Based Test Management (http://www.satisfice.com/sbtm). Selena Delesie has produced an excellent blog post on chartering exploratory testing sessions. The title of Cem Kaner’s presentation at CAST 2008, The Value of Checklists and the Danger of Scripts: What legal training suggests for testers describes the content perfectly. Michael Hunter’s You Are Not Done Yet lists can be used and adapted to your context as a set of checklists.

To identify structures in the application or service
The “Product Elements” section in the Heuristic Test Strategy Model provides a kind of framework for documenting product structures. In the RST Appendices, the test notes for “Putt Putt Saves the Zoo” and “Diskmapper”, and the “OWL Quality Plan” provide examples of identifying several different structures in the programs under test. Mind mapping provides a means of describing and illustrating structures, too; see Darren McMillan’s examples here and here. Ruud Cox and Ru Cindrea used a mind map of product elements to help win the Best Bug Report award in the Test Lab at EuroSTAR 2011. I’ve created a list of structures that support exploratory testing, and many of these are related to structures in the product.

Providing a description of how to use a particular test tool that we’ve crafted
While working at a bank, I developed (in Excel and VBA) a tool that could be used as an oracle and as a way of recording test results. (Thanks to non-disclosure agreements, I can describe these, but cannot provide examples.) When I left the project, I was obliged to document my work. I didn’t work on the assumption that anyone off the street would be reading the document. Instead, I presumed that anyone assigned to that testing job and to using that tool, would have the rapid learning skill to explore the tool, the product, and the business domain in a mutually supportive way. So I crafted documentation that was intended to tell testers just enough to get them exploring.

Explaining business rules to someone else on the team
I did include documentation for novices of one kind: within the documentation for that testing tool, I included a general description of how foreign exchange transactions worked from the bank’s perspective, and how appropriate accounts got credited and debited. I had learned this by reverse-engineering use cases and consulting with the local business analyst. I summarized it with a two-page document written in simple, direct language, referring directly to the simpler use cases and explaining the more confusing bits in more detail. For those whose learning style was oriented toward code, I also described the tables and array formulas that applied the business rules.

Outlining scenarios in which the product might be used or tested
I discuss some issues about scenarios here—why they’re important, and why it’s important to keep them open-ended and open to interpretation. It’s more important to record than to prescribe, since in a good scenario, you’ll observe and discover much more than you’ve articulated in advance. Cem Kaner gives ideas on how to produce scenarios; Hans Buwalda presents examples of soap opera testing.

Identifying required tester skill
People with skill don’t need prescriptive documentation for every little thing. Responsible managers identify the skills needed to test, and commit to employing people who either have those skills or can develop them quickly. James Bach eliminated 50 pages of otiose documentation with two paragraphs. (Otiose is a marvelous word; it’s fun to look it up in a thesaurus.)

Identifying, for a tester, a particular explicit sequence of actions to perform, input to provide, and observations to make.
Again, a document that attempts to specify exactly what a tester should do is the hallmark of scripted testing. James Bach articulates a paradox that has not yet been noted clearly in our craft: in order to perform a scripted test well, you need significant amounts of skill and tacit knowledge (and you also need to ignore the script on occasion, and you need to know when those occasions are). There’s another interesting issue here: preparing such documents usually depends on exploratory activity. There’s no script to tell you how to write a script. (You might argue there’s one exception. You can follow this script to write a test script: take each line of a requirements document, and add the words “Verify that” to the beginning of each line.)
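
Taken literally, that tongue-in-cheek “script for writing scripts” fits in a few lines of Python; the filename is a placeholder. The ease of cranking out such “test cases” by the thousand is rather the point: no test design happens anywhere in the loop.

```python
# The tongue-in-cheek exception, taken literally. "requirements_document.txt"
# is a placeholder: one requirement statement per line.
with open("requirements_document.txt") as requirements:
    for line in requirements:
        if line.strip():
            print("Verify that " + line.strip())
```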

Now, just as you can perform testing badly using any approach, you can perform exploratory testing and document it inappropriately, either by under-documenting it OR over-documenting it using any of the kinds of documentation above. But, as this document shows, the notion that exploratory testing is by its nature undocumented is not only ignorant, but aggressively ignorant about both testing and documentation. Whenever you see someone claim that exploratory testing is undocumented, I’d ask you to help by setting the record straight. Feel free to refer to this blog post, if you find it helpful; also, please point me to other exemplars of excellent documentation that are consistent with exploratory approaches. If we all work together, we can bury this myth, while providing excellent records and reports for our clients.

What Exploratory Testing Is Not (Part 4): Quick Tests

Sunday, December 18th, 2011

Quick testing is another approach to testing that can be done in a scripted way or an exploratory way. A tester using a highly exploratory approach is likely to perform many quick tests, and quick tests are often key elements in an exploratory approach. Nonetheless, quick testing and exploratory testing aren’t the same.

Quick tests are inexpensive tests that require little time or effort to prepare or perform. They may not even require a great deal of knowledge about the application being tested or its business domain, but they can help to develop new knowledge rapidly. Rather than emphasizing comprehensiveness or integrity, quick tests are designed to reveal information in a hurry at a minimal cost.

A quick test can be a great way to learn about the product, or to identify areas of risk, fragility, or confusion. A tester can almost always sneak a quick test or two into other testing activity. A burst of quick tests can help as the first activities during a smoke or sanity test. Cycles of relatively unplanned, informal quick tests may help you to discover or refine a more comprehensive or formal plan.

James Bach and I provide examples of many kinds of quick tests in the Rapid Software Testing class. You’ll notice that some of them are called tours. Note that not all tours are quick, and not all quick tests are tours. Here’s a catalog.

Happy Path
Perform a task, from start to finish, that an end-user might be expected to do. Use the product in the most simple, expected, straightforward way, just as the most optimistic programmer or designer might imagine users to behave. Look for anything that might confuse, delay, or irritate a reasonable person. Cem Kaner sometimes calls this “sympathetic testing”. Lean towards learning about the product, rather than finding bugs. If you do see obvious problems, it may be bad news for the product.

Variable Tour
Tour a product looking for anything that is variable and vary it. Vary it as far as possible, in every dimension possible. If you’re using quick tests for learning, seek and catalog the important variables. Look for potential relationships between them. Identifying and exploring variations is part of the basic structure of our testing when we first encounter a product.

Sample Data Tour
Employ any sample data you can, and all that you can. For one kind of quick test, prefer simple values whose effects are easy to see or calculate. For a different kind of quick test, choose complex or extreme data sets. Observe the units or formats in which data can be entered, and try changing them. Challenge the assumption that the programmers have thought to reject or massage inappropriate data. Once you’ve got a handle on your ideas about reasonable or mildly challenging data, you might choose to try…

Input Constraint Attack
Discover sources of input and attempt to violate constraints on that input. Try some samples of pathological data: use zeroes where large numbers are expected; use negative numbers where positive numbers are expected; use huge numbers where modestly-sized ones are expected; use letters in every place that’s supposed to handle only numbers, and vice versa. Use a geometrically expanding string in a field. Keep doubling its length until the product crashes. Use characters that are in some way distinct from your idea of “normal” or “expected”. Inject noise of any kind into a system and see what happens. Use Satisfice’s PerlClip utility to create strings of arbitrary length and content; use PerlClip’s counterstring feature to create a string that tells you its own length so that you can see where an application cuts off input.

People tend to talk a lot about input constraint attacks. Perhaps that’s because input constraint attacks are used by hackers to compromise systems; perhaps it’s because input constraint attacks can be performed relatively straightforwardly; perhaps it’s because they can be described relatively easily; perhaps it’s because input constraint attacks can produce dramatic and highly unexpected results. Yet they’re by no means the only kind of quick test, and they’re certainly not the only way to test using an exploratory approach.
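
If you want to generate this kind of data yourself, here’s a minimal Python sketch of two of the ideas above. PerlClip is the real tool for counterstrings; this sketch only illustrates the technique, and the lengths are arbitrary.

```python
# Sketch: pathological input generation for constraint attacks.
# PerlClip (from Satisfice) is the real tool; this only illustrates the idea.

def counterstring(length, marker="*"):
    """Build a counterstring of exactly `length` characters: the digits before
    each marker give that marker's 1-based position, so when a field truncates
    the input you can read off how many characters it actually kept."""
    chunks = []
    position = length
    while position > 0:
        chunk = f"{position}{marker}"
        if len(chunk) > position:          # near the front: trim to fit exactly
            chunk = chunk[-position:]
        chunks.append(chunk)
        position -= len(chunk)
    return "".join(reversed(chunks))

def doubling_strings(seed="A", limit=2 ** 20):
    """Yield a geometrically expanding string, doubling until `limit`."""
    s = seed
    while len(s) <= limit:
        yield s
        s += s

if __name__ == "__main__":
    print(counterstring(35))               # -> 2*4*6*8*11*14*17*20*23*26*29*32*35*
    for s in doubling_strings():
        print(len(s))                      # paste each into the field under test
```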

Documentation Tour
Look in the online help or user manual and find some instructions about how to perform some interesting activity. Do those actions explicitly. Then improvise from them and experiment. If your product has a tutorial, follow it. You may expose a problem in the product or in the documentation; either way, you’ve found an inconsistency that is potentially important. Even if you don’t expose a problem, you’ll still be learning about the product.

File Tour
Have a look at the folder where the program’s .EXE file is found. Check out the directory structure, including subs. Look for READMEs, help files, log files, installation scripts, .cfg, .ini, .rc files. Look at the names of .DLLs, and extrapolate on the functions that they might contain or the ways in which their absence might undermine the application. Use whatever supplemental material you’ve got to guide or focus your actions. Another way to gather information for this kind of test: use tools to monitor the installation, and take the output from the tool as a point of departure.

Complexity Tour
Tour a product looking for the most complex features, the most challenging data sets, and the biggest interdependencies. Look for hidden nooks and crannies, but also look for the program’s high-traffic areas, busy markets, big office buildings, and train stations—places where there are lots of interactions, and where bugs might be blending in with the crowd.

Menu, Window, and Dialog Tour
Tour a product looking for all the menus (main and context menus), menu items, windows, toolbars, icons, and other controls. Walk through them. Try them. Catalog them, or construct a mind map.

Keyboard and Mouse Tour
Tour a product looking for all the things you can do with a keyboard and mouse. Hit all of the keys on the keyboard. Hit all the F-keys; hit Enter, Tab, Escape, Backspace; run through the alphabet in order, and combine each key with Shift, Ctrl, and Alt; with the Windows key; with Cmd or Option on other platforms; with the AltGr key in Europe. Click (right, left, both, double, triple) on everything. Combine clicks with shifted keys.

Interruptions
Start activities and stop them in the middle. Stop them at awkward times. Perform stoppages using cancel buttons, O/S level interrupts (ctrl-alt-delete or task manager). Arrange for other programs to interrupt (such as screensavers or virus checkers). Also, try suspending an activity and returning later. Put your laptop into sleep or hibernation mode.

Undermining
Start using a function when the system is in an appropriate state, then change the state part way through. Delete a file while it is being edited; eject a disk; pull network cables or power cords to get the machine into an inappropriate state. This is similar to interruption, except you are expecting the function to interrupt itself by detecting that it can no longer proceed safely.

Adjustments
Set some parameter to a certain value, then, at any later time, reset that value to something else without resetting or recreating the containing document or data structure. Programmers often expect settings or variables to be adjusted through the GUI. Hackers and tinkerers expect to find other ways.

Dog Piling
Whatever you’re doing, do more of it and do other stuff as well while you’re at it. Get more processes going at once; try to establish more states existing concurrently. Invoke nested dialog boxes and non-modal dialogs. On multi-user systems, get more people using the system or simulate that with tools. If your test seems to trigger odd behaviour, pile on in the same place until the odd becomes crazy.

Continuous Use
While testing, do not reset the system. Leave windows and files open; let disk and memory usage mount. You’re hoping to show that the system loses track of things or ties itself in knots over time.

Feature Interactions
Discover where individual functions interact or share data. Look for any interdependencies. Explore them, exploit them, and stress them out. Look for places where the program repeats itself or allows you to do the same thing in different places. For example, look for data that can be displayed in different ways and in different places, and seek inconsistencies. Or load up all the fields in a form to their maximums and then traverse to the report generator.

Summon Help
Bring up the context-sensitive help feature during some operation or activity. Does the product’s help file explain things in a useful way, or does it offend the user’s intelligence by simply restating what’s already on the screen? Is help even available at all?

Click Frenzy
Ever notice how a cat or a kid can crash a system with ease? Testing is more than “banging on the keyboard”, but that phrase wasn’t coined for nothing. Try banging on the keyboard. Try clicking everywhere. Poke every square centimeter of every screen until you find a secret button.

Shoe Test
Use auto-repeat on the keyboard for a very cheap stress test. Look for dialog boxes constructed such that pressing a key leads to, say, another dialog box (perhaps an error message) that also has a button connected to the same key that returns to the first dialog box. Place a shoe on the keyboard and walk away. Let the test run for an hour. If there’s a resource or memory leak, this kind of test could expose it. Note that some lightweight automation can provide you with a virtual shoe.
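
Here’s a sketch of that virtual shoe, assuming the pyautogui package is installed and that the dialog under test already has keyboard focus:

```python
# Sketch: a "virtual shoe" that presses Enter repeatedly for an hour.
# Watch memory, handle counts, and responsiveness while it runs.
import time
import pyautogui

pyautogui.PAUSE = 0.05                     # small gap between simulated keystrokes
end = time.time() + 60 * 60                # walk away for an hour
while time.time() < end:
    pyautogui.press("enter")               # the key that bounces between the dialogs
```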

Blink Test
Find an aspect of the product that produces huge amounts of data or does some operation very quickly. Look through a long log file or browse database records, deliberately scrolling too quickly to see in detail. Notice trends in line lengths, or the look or shape of the data. Use Excel’s conditional formatting feature to highlight interesting distinctions between cells of data. Soften your focus. If you have a test lab with banks of monitors, scan them or stroll by them; patterns of misbehaviour can be surprisingly prominent and easy to spot.
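
Tools can support a blink test, too. Here’s a small sketch that surfaces log lines whose length is far from typical, so the odd ones pop out; “app.log” is a placeholder for whatever you’re scanning, and the three-sigma threshold is arbitrary.

```python
# Sketch: highlight log lines with unusual lengths for blink-style review.
import statistics

with open("app.log", errors="replace") as log:
    lines = log.readlines()

lengths = [len(line) for line in lines]
mean = statistics.mean(lengths)
spread = statistics.pstdev(lengths) or 1.0

for number, line in enumerate(lines, start=1):
    if abs(len(line) - mean) > 3 * spread:     # unusually long or short lines
        print(f"{number}: {line.rstrip()}")
```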

Error Message Hangover
Programmers are rewarded for implementing features. There’s a psychological problem with errors or exceptions: the label itself suggests that something has gone wrong. People often actively avoid thinking about problems or mistakes, and as a consequence, programmers sometimes handle errors poorly. Make error messages happen and test hard after they are dismissed. Watch for subtle changes in behaviour between error and normal paths. With automation, make the same error conditions appear thousands of times.

Resource Starvation
Progressively lower memory, disk space, display resolution, and other resources. Keep starving the product until it collapses, or gracefully (we hope) degrades.

Multiple Instances
Run a lot of instances of the app at the same time. Open, use, update, and save the same files. Manipulate them from different windows. Watch for competition for resources and settings.

Crazy Configs
Modify the operating system’s configuration in non-standard or non-default ways, either before or after installing the product. Turn on “high contrast” accessibility mode, or change the localization defaults. Change the letter of the system hard drive. Put things in non-default directories. Use RegEdit (for registry entries) or a text editor (for initialization files) to corrupt your program’s settings in a way that should trigger an error message, recovery, or an appropriate default behavior.

Again: quick tests tend to be highly exploratory, but they represent only one kind of exploratory testing. Don’t be fooled into believing that quick testing—or certain kinds of quick testing—is all there is to exploratory testing.

What Exploratory Testing Is Not (Part 3): Tool-Free Testing

Saturday, December 17th, 2011

People often make a distinction between “automated” and “exploratory” testing. This is like the distinction between “red” cars and “family” cars. That is, “red” (colour) and “family” (some notion of purpose) are in orthogonal categories. A car can be one colour or another irrespective of its purpose, and a car can be used for a particular purpose irrespective of its colour. Testing, whether exploratory or not, can make heavy or light use of tools. Testing, whether it entails the use of tools or not, can be highly scripted or highly exploratory.

“Exploratory” testing is not “manual” testing. “Manual” isn’t a useful word for describing software testing in any case. When you’re testing, it’s not the hands that do the testing, any more than when you’re riding a pedal bike it’s the feet that do the bike-riding. The brain does the testing; the hands, at best, provide one means of input and interaction with the thing we’re testing. And not even “manual” testing is manual in the sense of being tool- or machinery-free. You do use a computer when you’re testing, don’t you?

(Well, mostly, but not always. If you’re reviewing requirements, specifications, code, or documentation, you might be looking at paper, but you’re still testing. A thought experiment or a conversation about a product is a kind of a test; you’re questioning something in order to evaluate it, pitting ideas against other ideas in an unscripted way. While you’re reviewing, are you using a pen to annotate the paper you’re reading? A notepad to record your observations? Sticky tabs to mark important places in the text? Then you’re using tools, low-tech as they might be.)

Some people think of test automation in terms of a robot that pounds on virtual keys more quickly, more reliably, and more deterministically than a human could. That’s certainly one potential notion of test automation, but it’s very limiting. That traditional view of test automation focuses on performing checks, but that’s not the only way in which automation can help testing.

In the Rapid Software Testing class, James Bach and I suggest a more expansive view of test automation: any use of (software- or hardware-based) tools to support testing. This helps keep us open to the idea that machines can help us with almost any of the mimeomorphic, non-sapient aspects of testing, so that we can focus on and add power to the polimorphic, sapient aspects. Exploration is polimorphic activity, but it can include and be supported by mimeomorphic actions. Cem Kaner and Doug Hoffman take a similar tack: exploratory test automation is “computer-assisted testing that supports learning of new information about the quality of the software under test.” Learning new information is one of the hallmarks of exploratory testing, which usually points towards emphasizing variation rather than repetition.

That said, there can be a role for mechanized repetition, even when you’re using a highly exploratory approach: when repeating aspects of the test are intended to support discovery of something new or surprising. The key is not whether you’re mechanizing the activity. The key is what happens at the end of the activity. The less the results of one activity are permitted to inform the next, the more scripted the approach. If the repetition is part of a learning loop—a cycle of probing, discovering, investigating, and interpreting—that feeds back on itself immediately, then the approach is exploratory. James has also posted a number of motivations for repeating tests. Each one can (with the possible exception of “avoidance or indifference”) be entirely consistent with and supportive of exploration.

There are some actions that tools can perform better than humans, as long as the action doesn’t require human judgment or wisdom. Humanity can even get in the way of some desirable outcome. For example, when your exploration of some aspect of a product is based on statistical analysis, and randomization is part of the test design, it’s important to remember that people are downright lousy at generating randomized data. Even when people believe that they’re choosing numbers at random, there are underlying (and usually quite unconscious) patterns and biases that inform their choices. If you want random numbers, tools can help.
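
For instance, a few lines of Python will generate genuinely varied data, and recording the seed means a surprising run can be replayed exactly; the value ranges here are placeholders.

```python
# Sketch: let the tool choose the "random" values, and keep the seed.
import random

seed = random.randrange(2 ** 32)
print("seed =", seed)                      # record this in your session notes
rng = random.Random(seed)

amounts = [rng.uniform(-1e6, 1e6) for _ in range(1000)]
dates = [(rng.randint(1990, 2038), rng.randint(1, 12), rng.randint(1, 28))
         for _ in range(1000)]
```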

Tools can support exploration in plenty of other ways: data generation; system configuration; simulation; logging and video capture; probes that examine the internal state of the system; oracles that detect certain kinds of error conditions in a product or generate plausible results for comparison; visualization of data sets, key elements to observe, relationships, or timing; recording and reporting of test activity.

A few years back, I was doing testing of a teller workstation application at a bank (I’ve written about this in How to Reduce the Cost of Software Testing). The other testers, working on domestic transactions, were working from scripts that contained painfully detailed and explicit steps and observations. (Part of the pain came from the fact that the scripts were supplemented with screen shots, and the text and the images didn’t always agree.) My testing assignment involved foreign exchange, and the testing tasks I had been given were unscripted and, to a large degree, self-determined. In order to learn the application quickly, I had to explore, but this in no way meant that I didn’t use tools. On the contrary, in fact. In that context, Excel was the most readily available and powerful tool on hand. I used it (and its embedded Visual Basic for Applications) to:

  • maintain and update (at a key stroke) enormous tables of currencies, rates, and transaction types
  • access appropriate entries from the table via regular expression parsing
  • model the business rules of the application under test
  • display the intended flow of money through a transaction
  • add visual emphasis to the salient outcomes of tests and test scenarios
  • provide, using a comparable algorithm, clear results to which the product’s results could be compared (a sketch of this oracle idea appears after this list)
  • help in performing extremely rapid evaluation of a test idea
  • create tables of customer data so that I could perform a test using a variety of personas
  • accelerate my understanding of the product and the test space
  • enhance my learning about Boolean algebra and how it could be used in algorithms
  • record my work and illustrate outcomes for my clients
  • perform quick calculations when necessary
  • help me find more actual problems than the other four testers combined
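
Since the real workbook is covered by a non-disclosure agreement, here is only a hypothetical sketch, in Python rather than VBA, of the oracle idea mentioned in the list above: an independent, comparable algorithm against which the product’s results can be checked. The rates, rounding rule, and figures are invented for illustration.

```python
# Hypothetical sketch of the oracle idea only; the real tool was built in
# Excel/VBA and cannot be shown. Rates, rounding, and figures are invented.
from decimal import Decimal, ROUND_HALF_UP

RATES = {("CAD", "USD"): Decimal("0.7350"),
         ("USD", "CAD"): Decimal("1.3605")}

def expected_settlement(amount, sell, buy):
    """Independently compute what the product should credit, so its answer
    can be compared with the result of a comparable algorithm."""
    rate = RATES[(sell, buy)]
    return (Decimal(amount) * rate).quantize(Decimal("0.01"), ROUND_HALF_UP)

observed = Decimal("735.01")             # copied from the product's confirmation screen
expected = expected_settlement("1000.00", "CAD", "USD")
if observed != expected:
    print(f"Possible problem: expected {expected}, product shows {observed}")
```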

All of this activity happened in a highly exploratory way; each of the activities interacted with the others. I used very rapid cycles of looking at what I needed to learn next about the application, experimenting with and performing tests, programming, asking questions of subject matter experts and programmers and managers, reporting, reading reference documentation, debugging, and learning. Tight loops of activities happening in parallel are what characterize exploratory processes. Yet this was not tool-free work; tools were absolutely central to my exploration of the product, to my learning about it, and to the mission of finding bugs. Indeed, without the tools, I would have had much more limited ideas about what could be tested, and how it could be tested.

The explorers of old used tools: compasses and astrolabes, maps and charts, ropes and pulleys, ships and wagons. These days, software testers explore applications by using mind-mapping software and text editors; spreadsheets and calculators; data generation tools and search engines; scripting tools and automation frameworks. The concept that characterizes exploratory testing is not the input mechanism, which can be fingers on a keyboard, tables of data pumped into the program via API calls, bits delivered through the network, signals from a variable voltage controller. Exploratory testing is about the way you work, and the extent to which test design, test execution, and learning support and reinforce each other. Tools are often a critical part of that process.

What Exploratory Testing Is Not (Part 2): After-Everything-Else Testing

Friday, December 16th, 2011

Exploratory testing is not “after-everything-else-is-done” testing. Exploratory testing can (and does) take place at any stage of testing or development.

Indeed, TDD (test-driven development) is a form of exploratory development. TDD happens in loops, in which the programmer develops a check, then develops the code to make the check pass (along with all of the previous checks), then fixes any problems that she has discovered, and then loops back to implementing a new bit of behaviour and inventing a new check. The information obtained from each loop feeds into the next; and the activity is guided and structured by the person or people involved in the moment, rather than in advance. The checks themselves are scripted, but the activity required to produce them and to analyze the results is not. Compared to the complex cognitive activity—exploratory, iterative—that’s going on as code is being developed, the checks themselves—scripted, linear—are trivial.
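
Here’s a minimal illustration of one such loop, in pytest style; the rounding function and its intended behaviour are invented for the example.

```python
# One loop of the cycle described above. Step numbers mark the order of work.
from decimal import Decimal, ROUND_HALF_UP

def test_rounds_half_up_to_cents():        # 1. invent a check that fails first...
    assert round_money(2.005) == 2.01

def round_money(amount):                   # 2. ...then write just enough code to pass it
    return float(Decimal(str(amount)).quantize(Decimal("0.01"), ROUND_HALF_UP))

# 3. Run all the checks, investigate anything surprising, and only then choose
#    the next bit of behaviour and the next check.
```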

Requirement review is an exploratory activity too. Review of requirements (or specifications, or user stories, or examples) tends to happen early on in a development cycle, whether it’s a long or a short cycle. While review might be guided by checklists, the people involved in the activity are making decisions on the fly as they go through loops of design, investigation, discovery, and learning. The outcome of each loop feeds back into the next activity, often immediately.

Code review can also be done in a scripted way or an exploratory way. When humans analyze the code, it’s an unscripted, self-directed activity that happens in loops; so it is exploratory. We call it review, but it’s gathering information with the intention of informing a decision; so it is testing. There is a way to review code that involves the application of scripted processes, via tools that people generally call “static testing tools”. When a machine parses code and produces a report, by definition it’s a form of checking, and it’s scripted. Yet using those tools productively requires a great deal of exploratory activity. Reading and interpreting the report and responding to it is polimorphic, human action—unscripted, open-ended, iterative, and therefore exploratory.
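
As a tiny example of the scripted part, a machine can parse code and produce a report in a few lines of Python’s ast module; the six-parameter threshold here is arbitrary, and deciding whether any finding matters is still exploratory work.

```python
# Sketch: a scripted "static testing" check that flags long parameter lists.
import ast
import sys

source_path = sys.argv[1]                  # e.g. python long_signatures.py mymodule.py
with open(source_path) as source:
    tree = ast.parse(source.read(), filename=source_path)

for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef) and len(node.args.args) > 5:
        print(f"{source_path}:{node.lineno}: {node.name}() "
              f"takes {len(node.args.args)} parameters")
```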

Learning about a new product or a new feature is an exploratory activity if you want to do it or foster it well. Some suggest that test scripts provide a useful means of training testers. Research into learning shows that people tend to learn more quickly and more deeply when their learning is based on interaction and feedback; guided, perhaps, but not controlled. If you really want to learn about a product, try creating a mind map, documenting some aspect of the program’s behaviour, or creating plausible scenarios in which people might use—or misuse—the product. All of these activities promote learning, and they’re all exploratory activities. There’s far more information that you can use, apply, and discover than a script can tell you about. Come to think of it… where does the script come from?

Developing a test procedure—even developing a test script, whether for a machine or a human to follow, or developing the kind of “test” that skilled testers would call a demonstration—is an exploratory activity. There is no script that specifies how to write a new script for a particular purpose. Heard about a new feature and pondering how you might test it? You’ve already begun testing; you’re doing test design and you’re probably learning as you go. To the extent that you use the product or interact with it, bounce ideas off other people, or think critically about your design, you’re testing, and you’re doing it in an unscripted way. Some might suggest that certain tools create scripts that can perform automatic checks. Yet reviewing those checks for appropriateness, interpreting the results, and troubleshooting unexpected outcomes are all exploratory activities.

Suppose that a programmer, midway through a sprint, decides that she’d like some feedback on the work that she’s done so far on a new module, and hands you a bit of code to look at. You might interact with the code directly through a test tool that she provided, or (say) via the Ruby interpreter, or you might write some script code to exercise some of the functions in the module. In any event, you find some problems in it. In order to investigate a problem that you’ve discovered, you must explore, whether your recognition of the problem was triggered by your own interaction with the program or by a mechanically executed script. You’re in control of the activity; each new test around the problem feeds back into your choice of the next activity, and into the story that you’re going to tell about the product.

All of the larger activities that I’ve described above are exploratory, and they all happen before you have a completed function or story or sprint. Exploratory testing is not a stage or phase of testing to be performed after you’ve performed your other test techniques. Exploratory testing is not an “other” test technique, because it’s not a technique at all. Exploratory testing is not a thing that you do, but rather a way that you work (and think, and act), the hallmarks being who (or what) is in control, and the extent to which your activity is part of a loop, rather than a straight line. Any test technique can be applied in a scripted way or in an exploratory way. To those who say “we do exploratory testing after our acceptance tests are all running green”, I would suggest looking carefully and observing the extent to which you’re doing exploratory testing all the way along.