An article from a while back on StickyMinds entitled How To Choose Between Scripted and Exploratory Testing refers to a bunch of factors in making choices between scripted testing and exploratory testing. The problems start early: “As a test manager or engineer, you may be considering whether or not to use exploratory testing on your next project.”
If you’re not planning on investigating any problems that you find, I suppose that you could choose not to use exploratory testing (investigating a bug is not a scripted or scriptable process). If you don’t allow the tester to control his or her actions, or to feed information from the last test into the thought process and action behind the next test, then you could get away without taking an exploratory approach. If the testing ideas were all declared and written down at the beginning of the project, you could successfully avoid exploratory testing. And if you used an all-checking strategy, maybe exploratory testing wouldn’t be an approach you’d take. But would you really want all of your testing to be confirmatory? All validation and verification? All checking to make sure that existing beliefs are correct, rather than probing the product to disconfirm those beliefs, with the goal of showing that they’re inaccurate or incomplete?
This isn’t the worst article ever written on exploratory testing. It is, however, typical of a certain kind of bad article. It’s full of assertions that aren’t supported in the view of exploratory testing that a bunch of us (including Jon Bach, who, at the bottom of the article, provides a comment that is unusually stern by his standards) have been developing for years. Let’s have a look at some of the points. Quotes from the original article are in italics.
Tester domain knowledge…If a tester does not understand the system or the business processes, it would be very difficult for him to use, let alone test, the application without the aid of test scripts and cases.
A few years back, a former co-worker contracted me to document a network analysis tool that was being rebranded, fixed, and upgraded by his new company. The only documentation available was not only weak but also a couple of versions out of date. In addition, I didn’t know much about network analysis tools at all. I reviewed some literature on the topic, and I interviewed the company’s lead programmer. But for me, the most rapid, relevant, and powerful learning came from interacting with the product, figuring out how it worked (and, at several points, how it appeared to fail), and building the story of the product in my mind and in the documentation that I created. Without that interaction with the product, my learning would not have been rooted in experience, and would have been much slower. And the documentation that I produced was deemed excellent by the client.
How can I test a product in a domain that I don’t know much about? The answer is that through testing, I learn, and I learn far more rapidly than by other means. Anyone learning a complex cognitive activity learns far more, far more quickly, in the doing of it than in the reading about it. Preschool kids do the same thing. Watch them; then watch how they tend to slow down once they’re in school and made to follow the school’s processes instead of their own.
System complexity…End-to-end testing can be accomplished with exploratory testing; however, the capabilities and skill sets required are typically those of a more experienced test engineer.
People often argue that exploratory testing needs capable, experienced, and skilled testers. I agree. The argument suggests (and often states outright) that scripted testing doesn’t need capable, experienced, and skilled testers. To some degree that’s true; when the scripted action is checking, by definition skill and sapience aren’t required for the moment of the check. But sapience and skill are required for the design and construction of the check, and for the analysis and interpretation of the results, so the argument that scripted testing doesn’t need skilled testers doesn’t hold water if you want to do scripted testing well. Indeed, good testing of any kind requires skill. If you or your testers are genuinely unskilled, training, coaching, mentoring, and a fault-tolerant environment that fosters learning will help to build skills quickly.
I’m also puzzled by the argument in another way: the (claimed) lack of a need for skilled testers is often presented as a virtue of scripted testing. This is like saying that the lack of a requirement for medical training is a virtue of quack medicine.
Level of documentation. Scripted testing generally flows from business requirements documents and functional specifications. When these documents do not exist or are deficient, it is very difficult to conduct scripted testing in a meaningful way. The shortcuts that would be required, such as scripting from the application as it is built, are accomplished as efficiently using exploratory testing.
Actually, there are several ways to do scripted testing in a meaningful way without business requirements documents or functional specifications being present or perfect. A standout example is test-driven development, in which checks are developed prior to developing the application code. Another: unit tests, in which checks are prepared at some point after the application code has been developed. Both forms of checks support refactoring. Another example: many Agile development shops use FitNesse to develop, explore, and discover many of the requirements of the product, and to create scripted checks as development proceeds.
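To make the test-driven development point concrete, here’s a minimal sketch in Python. The is_leap_year function and its calendar rules are my own hypothetical example, not anything from the article; the point is only that the checks are designed and written first, and the application code is then written to make them pass.

```python
import unittest

# Checks written before the application code exists (TDD style).
# "is_leap_year" is a hypothetical example function.
class LeapYearChecks(unittest.TestCase):
    def test_ordinary_leap_year(self):
        self.assertTrue(is_leap_year(2024))     # divisible by 4

    def test_century_is_not_a_leap_year(self):
        self.assertFalse(is_leap_year(1900))    # divisible by 100, not by 400

    def test_quadricentennial_is_a_leap_year(self):
        self.assertTrue(is_leap_year(2000))     # divisible by 400

# The application code comes afterward, written to make the checks pass.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()
```

Notice that the checks here stand in for part of the specification: they were designed from an understanding of the calendar rules, not extracted from a requirements document.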
In fact, no one of whom I am aware has ever seen perfect requirements documents; they’re always deficient in some way for some person’s purpose, and to some degree that’s intentional. It’s infeasible and wasteful to document every assumption; for programmers and testers with appropriate judgment and skill, it’s also somewhat insulting. Thus, whether we’re developing scripted or exploratory test ideas, we should treat imperfect documents as a default assumption.
Timeframes and deadlines. The lead-in time to test execution determines whether you can conduct test design before execution. When there is little or no time, you might at least need to start with exploratory testing while documented tests are prepared after the specifications become available.
Here is the implication, once again, that exploratory tests aren’t documented. Yet exploratory ideas can be documented in advance, in the form of checklists or as charters; a charter might read, for example, “Explore the import feature with malformed data files to discover how the product handles bad input.” Existing data files (a form of documentation) can be available in advance. Exploratory tests can be guided by marked-up diagrams, by user documentation, or even by a functional specification. Exploratory testing can be informed by any kind of documentation you like. One key distinction between exploratory and scripted approaches is not whether documentation is available in advance, but that in an exploratory approach the tester, rather than the document, is the primary agent driving the design and execution of the test. Another key distinction is that in an exploratory approach, detailed documentation of the tester’s actions tends to be de-emphasized prior to test execution, and tends to be produced during or very shortly after testing.
Available resources. The total number of person-days of effort can determine which approach you should take. Formal test documentation has significant overhead that can be prohibitive where resources and budgets are tight.
Well, at least maybe we agree on something. But the number of person-days of effort doesn’t determine which approach you choose; people do that. It’s unclear whether the person-days are a given, or whether the test manager gets to choose. Moreover, I wouldn’t use the word “formal” here, since documentation associated with exploratory approaches can be quite formal (look at session-based test management as an example). The word I’d use instead of “formal” above is “excessive”, “wasteful”, or “overly detailed”. I’d also suggest that it’s fairly rare for testers to feel as though resources and budgets are ample; most testers I speak with maintain that, for them, resources and budgets are always tight.
Skills required. The skill sets of your team members can affect your choices. Good test analysts may not necessarily be effective exploratory testers and vice versa. A nose for finding bugs quickly and efficiently is not a skill that is easily learned…
Perhaps not. James Bach, Jon Bach, James Lyndsay, and I (to name but four) have a lot of experience training testers, and our experience is that the capacity to find bugs can be learned more quickly than many people believe. But habituate testers to working from detailed manual test scripts and I’ll guarantee that they don’t develop skill quickly. In fact, a huge part of learning to find problems quickly lies in the tester being given the freedom and responsibility to explore and to develop his or her own mental models of the product.
Verification. Verification requires something to compare against. Formal test scripts describe an expected outcome that is drawn from the requirements and specifications documents. As such, we can verify compliance. Exploratory testing is compared to the test engineer’s expectations of how the application should work.
Many kinds of testing—not just verification—require something to compare against. That “something” is called an oracle, a principle or mechanism by which we recognize a problem. The claim here is that scripted testing is a good thing because we can verify “compliance” (to what?) based on requirements and specifications documents. Actually, what we’d be verifying here is consistency between some point in a test script and a specific claim made in one of the documents. There are two problems here. One is the problem of inattentional blindness; focusing a tester’s attention on a single observation drives the tester’s attention away from other possible observations that might represent an issue with the product. The second problem relates to the fact that requirements and specification documents (as above) can be presumed to contain errors, misinterpretations, and outdated information.
Exploratory approaches defend against these two problems by their emphasis on the tester’s skill set and mindset. Rather than depending upon a single oracle (that of consistency with a specification), an exploratory approach emphasizes applying several oracles, including consistency with the product’s history, with the image the development organization wants to project, with comparable products, with reasonable user expectations, with the intended purpose of the product, with other elements within the product, and with relevant standards, regulations, or laws. Claims (statements in the requirements documents) are taken seriously by the exploring tester, of course, but the other kinds of oracles provide a rich set of possible comparisons, rather than the relatively impoverished single set provided by a script. By keeping the tester cognitively engaged and under her own control, we lessen the risk of script-induced inattentional blindness. Statements like “Exploratory testing is compared to the test engineer’s expectations of how the application should work” are not only inaccurate, but also trivialize the complex set of heuristics that skilled testers bring to the game.
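To make the idea of multiple oracles concrete, here’s a minimal sketch in Python. The product_sort function is a hypothetical stand-in for some behavior under test; each assertion consults a different oracle, rather than a single expected value baked into a script.

```python
import random

# Hypothetical stand-in for the behavior under test.
def product_sort(items):
    return sorted(items)

data = random.sample(range(1000), 50)
actual = product_sort(data)

# Oracle: consistency with a comparable product
# (here, Python's built-in sorted()).
assert actual == sorted(data), "inconsistent with a comparable product"

# Oracle: consistency with an explicit claim ("the output is ordered").
assert all(a <= b for a, b in zip(actual, actual[1:])), "ordering claim violated"

# Oracle: consistency within the product
# (the output contains exactly the input's elements, no more, no fewer).
assert sorted(actual) == sorted(data), "elements were lost or invented"
```

A script typically encodes only the first kind of comparison. A tester who is free to choose and combine oracles can notice problems that a single scripted comparison would miss.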
At this point, I’m in agreement with Jon Bach. Refuting the rest of the article would take more time than I’ve got at the moment. So, to keep the curious and eager people occupied:
Web Resources
Evolving Understanding of Exploratory Testing (Bolton)
A Tutorial in Exploratory Testing (Kaner)
The Nature of Exploratory Testing (Kaner)
The Value of Checklists and the Danger of Scripts: What Legal Training Suggests for Testers
Exploratory Testing Dynamics (James Bach, Jon Bach, Michael Bolton)
General Functionality and Stability Test Procedure (for Microsoft Windows 2000 Application Certification) (James Bach)
Experiments in Problem Solving (Jerry Weinberg)
Collaborative Discovery in a Scientific Domain (Okada and Simon) – a study of collaborative problem-solving. Notice that the subjects are testing software.
Books
Exploring Science: The Cognition and Development of Discovery Processes (David Klahr)
Plans and Situated Actions (Lucy Suchman)
Play as Exploratory Learning (Mary Reilly)
Exploratory Research in the Behavioral Sciences
Naturalistic Inquiry
How to Solve It (George Polya)
Simple Heuristics That Make Us Smart (Gerd Gigerenzer)
Blink (Malcolm Gladwell)
Gut Feelings (Gerd Gigerenzer)
Sensemaking in Organizations (Karl Weick)
Cognition in the Wild (Edwin Hutchins)
The Social Life of Information (Paul Duguid and John Seely Brown)
A System of Logic, Ratiocinative and Inductive (John Stuart Mill)
The Sciences of the Artificial (Herbert Simon)
Yes, learning about exploratory testing in a non-trivial way might take some practice and study. That goes for anything worth doing expertly, doesn’t it?