Blog: Alternatives to “Manual Testing”: Experiential, Interactive, Exploratory

This is an extension of a long Twitter thread from a while back that made its way to LinkedIn, but not to my blog.

As of May 30, 2022, I’ve updated the post to change “attended” to “interactive”, which James Bach and I agree is a little more evocative. And I updated the post a bit more on September 12, 2022.

Testing is easy! …isn’t it?

Testers who take testing seriously sometimes have problems with getting people to understand testing work.

The trouble often starts with a special case of the insider/outsider problem that surrounds any aspect of human experience: most of the time, those on the outside of a social group—a community; a culture; a group of people with certain expertise; a country; a fan club—don’t understand the insider’s perspective. The insiders often don’t understand the outsiders’ perspective either.

We don’t know what we don’t know. That should be obvious, of course, but when we don’t know something, we have no idea of how little we comprehend it. Our experience and our lack of experience can lead us astray. “Driving is easy! You just put the car in gear and off you go!” That probably works really well in whatever your current context happens to be. Now I invite you to get behind the wheel in Chennai.

How does this relate to testing? Here’s how:

No one ever sits in front of a computer and accidentally compiles a working program, so people know—intuitively and correctly—that programming must be hard.

By contrast, almost anyone can sit in front of a computer and stumble over bugs, so people believe—intuitively and incorrectly—that testing must be easy!

Stumbling into shallow bugs might be very easy. Finding the deep bugs can be considerably harder.

But building carefully means no problems… right?

In our world of software development, there is a kind of fantasy that if everyone is of good will, and if everyone tries really, really hard, then everything will turn out all right. It is true that diligence, discipline, and concern for craft are enormously helpful.  That’s not wrong. It’s the “everything will turn out all right” part that’s the fantasy.

If we believe that fantasy, we don’t need to look for deep, hidden, rare, subtle, intermittent, emergent problems; people’s virtue will magically make them impossible. That is, to put it mildly, a very optimistic approach to risk. It’s okay for products that don’t matter much. But if our products matter, it behooves us to look for problems. And to find deep problems intentionally, it helps a lot to have skilled testers.

Yet the role of the tester is not always welcome. The trouble is that to produce a novel, complex product, you need an enormous amount of optimism; a can-do attitude. But as my friend Fiona Charles once said to me—paraphrasing Tom DeMarco and Tim Lister—“in a can-do environment, risk management is criminalized.” I’d go further: in a can-do environment, even risk acknowledgement is criminalized.

In Waltzing With Bears, DeMarco and Lister say “The direct result of can-do is to put a damper on any kind of analysis that suggests ‘can’t-do’…When you put a structure of risk management in place, you authorize people to think negatively, at least part of the time. Companies that do this understand that negative thinking is the only way to avoid being blindsided by risk as the project proceeds.”

Risk denial plays out in a terrific documentary, General Magic, about a development shop of the same name. In the early 1990s(!!), General Magic was working on a device that — in terms of capability, design, and ambition — was virtually indistinguishable from the iPhone that was released about 15 years later.

The documentary is well worth watching. In one segment, Marc Porat, the project’s leader, talks in retrospect about why General Magic flamed out without ever getting anywhere near the launchpad. He says, “There was a fearlessness and a sense of correctness; no questioning of ‘Could I be wrong?’. None. … that’s what you need to break out of Earth’s gravity. You need an enormous amount of momentum … that comes from suppressing introspection about the possibility of failure.”

That line of thinking persists all over software development, to this day. As a craft, the software development business systematically resists thinking critically about problems and risk. For testers, that’s the domain we inhabit. It’s our job to immerse ourselves in that stuff.

Trouble is our business.

Developers have great skill, expertise, and tacit knowledge in linking the world of people and the world of machines. What they tend not to have—and almost everyone is like this, not just programmers—is an inclination to find problems. The developer is interested in making people’s troubles go away. Testers have the socially challenging job of finding and reporting on trouble wherever they look. Unlike anyone else on the project, testers focus on revealing problems that are unsolved, or problems introduced by our proposed solution. That’s a focus which the builders, by nature, tend to resist.

Resistance to thinking about problems plays out in many unhelpful and false ideas. Some people believe that the only kind of bug is a coding error. Some think that the only thing that matters is meeting the builders’ intentions for the product. Some are sure that we can find all the important problems in a product by writing mechanistic checks of the build. Those ideas reflect the natural biases of the builder—the optimist. Those ideas make it possible to imagine that testing can be automated.

The false and unhelpful idea that testing can be automated prompts the division of testing into “manual testing” and “automated testing”.

Testing is neither manual nor automated.

Listen: no other aspect of software development (or indeed of any human social, cognitive, intellectual, critical, analytical, or investigative work) is divided into “manual” and “automated” in this unhelpful way. There are no “manual programmers”. There is no “automated research”. Managers don’t manage projects manually, and there is no “automated management”. Doctors may use very powerful and sophisticated tools, but there are no “automated doctors”, nor are there “manual doctors”, and no doctor would accept for one minute being categorized that way.

Testing cannot be automated. Period. Certain tasks within and around testing can benefit a lot from tools, but having machinery punch virtual keys and compare product output to specified output is no more “automated testing” than spell-checking is “automated editing”. Enough of all that, please.
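To make the point concrete, here is a minimal, hypothetical sketch (the names and the product function are mine, not from any real project) of what such an “automated test” actually consists of: a machine applies inputs and compares the output to a value a human specified. Only the comparison is automated; choosing what to check, and judging what a mismatch means, are not.

```python
def total_price(quantity: int, unit_price: float) -> float:
    """A stand-in for some product code under check."""
    return quantity * unit_price


def check_total_price() -> bool:
    """An automated check: punch in values, compare output to a specification."""
    expected = 25.0               # the specified output, chosen by a human
    actual = total_price(5, 5.0)  # the mechanically applied input
    return actual == expected     # the only part the machine decides


# The machine reports True or False; a person still designed the check
# beforehand, and a person still has to interpret the result afterwards.
print(check_total_price())
```

Everything interesting happens outside the fenced lines: why 5 and 5.0? why only one case? what does a False here actually tell us? That surrounding work is testing; the comparison is just a check.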

It’s unhelpful to lump all non-mechanistic tasks in testing together under “manual testing”. Doing so is like referring to craft, social, cultural, aesthetic, chemical, nutritional, or economic aspects of cooking as “manual” cooking. Expert cooks use tools:  food processors and microwave ovens and blenders and timers and sous-vide machinery. But no one who provides food with care and concern for human beings—or even for animals—would suggest that all that matters in cooking is the tools. Please.

If you care about understanding the status of your product, you’ll probably care about testing it. You’ll want testing to find out if the product you’ve got is the product you want. If you care about that, you need to understand some important things about testing.

It might be a good idea to unpack “manual testing”.

If you want to understand important things about testing, you’ll want to consider some things that commonly get swept under a carpet with the words “manual testing” repeatedly printed on it. Considering those things might require naming some aspects of testing that you haven’t named or even considered before.

Experiential Testing (vs. Instrumented Testing)

Think about experiential testing, in which the tester’s encounter with the product, and the actions that the tester performs, are indistinguishable from those of the contemplated user. After all, a product is not just its code, and not just virtual objects on a screen. A software product is the experience that we provide for people, as those people try to accomplish a task, fulfill a desire, enjoy a game, make money, converse with people, obtain a mortgage, learn new things, get out of prison…

Contrast experiential testing with instrumented testing. Instrumented testing is testing wherein some medium (some tool, technology, or mechanism) gets in between the tester and the naturalistic encounter with and experience of the product. Instrumentation alters, or accelerates, or reframes, or distorts; in some ways helpfully, in other ways less so. We must remain aware of the effects, both desirable and undesirable, that instrumentation brings to our testing.

Interactive Testing (vs. Unattended Testing)

Are you saying “manual testing”? You might be referring to the interactive or engaged aspects of testing, wherein the tester is directly and immediately observing and analyzing aspects of the product and its behaviour in the moment that the behaviour happens. And you might want to contrast that with the algorithmic, unattended things that machines do—things that some people label “automated testing”—except that testing cannot be automated. To make something a test requires design before the automated behaviour, and interpretation afterwards. Those parts of the test, which depend upon human social competence to make a judgement, cannot be automated.

Transformative Testing (vs. Transactional Testing)

Are you saying “manual”? You might be referring to testing activity that’s transformative, wherein something about performing the test changes the tester in some sense, inducing epiphanies or learning or design ideas. Contrast that with procedures that are transactional: rote, routine, box-checking. Transactional things can be done mechanically. Machines aren’t really affected by what happens, and they don’t learn in any meaningful sense. Humans do.

Exploratory Testing (vs. Scripted Testing)

Did you say “manual”? You might be referring to exploratory work, which is interestingly distinct from experiential work as described above. Exploratory—in the Rapid Software Testing namespace at least—refers to agency; who or what is in charge of making choices about the testing, from moment to moment. There’s much more to read about that.

Wait… how are experiential and exploratory testing not the same?

When you’re testing, you could be exploring—making unscripted choices and performing unscripted actions—in a way that is entirely unlike the user’s normal encounter with the product. You could be probing the product without a predefined procedure guiding you. You could be deciding spontaneously to use tools to generate mounds of pathological data and then interact with the product to stress it out. You could be exploring while identifying resources the product needs, and attempting to starve it of them. You could be performing an action and then analyzing the data produced by the product to find problems, at each moment remaining in charge of your choices, without control by a formal, procedural script.

That is, you could be doing testing that is exploratory but not experiential; exploring while encountering the product to investigate it. That’s a great thing, but it’s encountering the product like a tester, rather than like a user. It might be a really good idea to be aware of the differences between those two encounters, and to take advantage of the benefits of each approach, without mixing them up.

You could be doing experiential testing in a highly scripted, much-less-exploratory kind of way; for instance, following a user-targeted tutorial and walking through each of its steps to observe inconsistencies between the tutorial and the product’s behaviour. To an outsider, your encounter would look pretty much like a user’s encounter; the outsider would see you interacting with the product in a naturalistic way, for the most part—except for the moments where you’re recording observations, bugs, issues, risks, and test ideas. But most observers outside of testing’s form of life won’t notice those moments. In this case, you’re doing testing that is experiential but not exploratory.

Of course, there’s overlap between those two kinds of encounters. A key difference is that the tester, upon encountering a problem, will investigate and report it. A user is much less likely to do so. (I noticed this phenomenon while trying to enter a link from LinkedIn’s Articles editor: the “apply” button isn’t visible, and hides off the right-hand side of the popup. I found this while interacting with LinkedIn experientially. I’d like to hope that I would have found that problem when testing intentionally, in an exploratory way, too.)

Other dimensions might be lumped into “manual testing”.

There are other dimensions of “manual testing”. For a while, we considered “speculative testing”—testing that asks “what if?”—as something that people might mean when they spoke of “manual testing”. We contrasted that with “demonstrative” testing—but then we reckoned that demonstration is not really a test at all. Not intended to be, at least. For an action to be testing, we would hold that it must be mostly speculative by nature.

Good testing can be extended and accelerated with tools, but tools can make bad testing worse.

And here’s the main thing: part of the bullshit that testers are being fed is that “automated” testing is somehow “better” than “manual” testing because the latter is “slow and error prone”—as though people never make mistakes when they apply automation to checks. They do, and the automation enables those errors at a much larger and faster scale.

Sure, automated checks run quickly; they have low execution cost. But they can have enormous development cost; enormous maintenance cost; very high interpretation cost (figuring out what went wrong can take a lot of work); high transfer cost (explaining them to non-authors).

There’s another cost, related to these others. It’s very well hidden and not reckoned: we might call it interpretation cost or analysis cost. A sufficiently large suite of automated checks is impenetrable; it can’t be comprehended without very costly review. Do those checks that are always running green even do anything? Who knows?

Checks that run red get frequent attention, but a lot of them are, you know, “flaky”; they run red when they should be running green. And of the thousands that are running green, how many should actually be running red? It’s cognitively costly to find that out—so people routinely ignore the question.
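Here’s a hypothetical sketch (the names and the bug are invented for illustration) of how a check can run green forever while verifying nothing: the setup silently produces no cases, so the assertion over the results is vacuously true, and the real bug sails through.

```python
def fetch_discount(user_type: str) -> float:
    """Stand-in product code with a real bug: VIPs should get 20%, but don't."""
    return 0.0


def check_vip_discount() -> bool:
    """A check that was supposed to verify the VIP discount."""
    vip_users = []  # oops: the fixture data was never loaded
    results = [fetch_discount(u) == 0.20 for u in vip_users]
    return all(results)  # all([]) is True: the check passes vacuously


# Prints True -- green on every run, yet nothing was ever verified.
print(check_vip_discount())
```

No dashboard distinguishes this green from a meaningful one; only a person reading the check, or deliberately breaking the product to see whether the check notices, can tell the difference.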

And all of these costs represent another hidden cost: opportunity cost; the cost of doing something such that it prevents us from doing other equally or more valuable things. That cost is immense, because it takes so much time and effort to automate GUIs when we could be interacting with the damned product.

And something even weirder is going on: instead of teaching non-technical testers to code and get naturalistic experience with APIs, we put such testers in front of GUIish front-ends to APIs. So we have skilled coders trying to automate GUIs, and at the same time, we have non-programming testers, using Cypress to de-experientialize API use! The tester’s experience of an API through Cypress is enormously different from the programmer’s experience of trying to use the API.

And none of these testers are encouraged to analyse the cost and value of the approaches they’re taking. Technochauvinism (great word; read Meredith Broussard’s book Artificial Unintelligence) enforces the illusion that testing software is a routine, factory-like, mechanistic task, just waiting to be programmed away. This is a falsehood. Testing can benefit from tools, but testing cannot be mechanized.

Testing must be seen as a social (and socially challenging), cognitive, risk-focused, critical (in several senses), analytical, investigative, skilled, technical, exploratory, experiential, experimental, scientific, revelatory, honourable craft. Not “manual” or “automated”. Let us urge that misleading distinction to take a long vacation on a deserted island until it dies of neglect.

Testing has to be focused on finding problems that hurt people or make them unhappy. Why? Because optimists who are building a product tend to be unaware of problems, and those problems can lurk in the product. When the builders are aware of those problems, the builders can address them. In doing so, they make themselves look good, make money, and help people have better lives.

All that requires varying degrees of experiential, interactive, transformational, and exploratory testing.

Okay, but what’s the big deal? What’s so upsetting about saying “manual testing”?

Here’s what: as my friend Wayne Roseberry puts it:

When you see a job posting for “manual tester”, it almost always means “low-skilled person to run pre-scripted test cases by rote”, which also means “someone we are going to pay less than if they were going to write automation” (or, really, automated checks. —MB)

I wrote a post about this disgusting problem, too.

Further Reading and Viewing

“Manual” and “Automated” Testing

The End of Manual Testing

Want to know more? Learn about upcoming Rapid Software Testing classes here.

3 responses to “Alternatives to “Manual Testing”: Experiential, Interactive, Exploratory”

  1. […] Alternatives to “Manual Testing”: Experiential, Attended, Exploratory Written by: Michael Bolton […]

  2. Jake Turner says:

This is a very interesting post, and it’s generated a lot of ideas for my next testing retrospective with my team, so thank you.

    Michael replies: You’re welcome.

    I was a bit surprised to read “[…] how are experiential and exploratory testing not the same?” and that exploratory testing and experiential testing simply “overlap”.

    As you’ve written before, all testing *is* exploratory testing. Testing means exploration.

That’s true; testing is fundamentally exploratory. But as I’ve also written before, all testing is to some degree scripted by your overall mission, your specific charter, your experiences, your biases, and so forth. Testing can be experiential (such that the tester is performing naturalistic actions) but those actions might be strongly influenced or guided by explicit and specific test cases.

Even the user conducting experiential testing is exploring the product—whether it’s their first time encountering it, and thus wading through the swamp of the unknown, or whether they’re performing the same activity on the same machine that they’ve done every day for the last ten years (thus exploring what happens upon doing the same thing ten million times in a row).

Also true from one perspective; and yet doing the same thing over and over again by rote is somewhat exploratory (from the exact perspective you cite), but not very exploratory at all from most other perspectives.

    I wonder, then, if experiential testing activities are just another subset of exploratory testing, and a more accurate differential may be ‘Conscious’ and ‘Unconscious’ testing.

    That’s not how we’d describe it, but you’re welcome to work out your own notions of this. “Conscious” and “unconscious” wouldn’t work very well for us, since if it’s actually testing, it had better be conscious.

    The overlap you mention can be framed as a state transition between these two types of testing.

    Any user has the capacity to spot a problem during unconscious testing, and transition into consciously testing that problem by investigating and reporting it.

    The difference is that a user may spend a short time investigating poorly, perhaps simply retrying what they did and seeing if it happens again, and reporting it to their friend or colleague who happens to be standing next to them. “Isn’t this weird?” they say, before the problem vanishes into the ether and is never seen or spoken of again.

    A serious and responsible tester would make a contextual judgement on how much investigation to perform and how best to report their findings.

    Bug investigation is more exploratory (the tester is in control of his or her actions) and almost certainly less experiential and more instrumented (the tester is behaving less like a regular user, and more likely using tools that mediate the experience).

    Very tasty food for thought, I’m looking forward to your next post. Thanks again.

    I could make the same statement about your reply. I’m glad you’re reflecting on this, and sharing your ideas. Thank you.

  3. madhu bhatt says:

    Very Helpful blog it is, you also make it amazing and an easy-to-read blog for the readers by adding proper information.It really helped me a lot in the field of Manual vs. Automation Testing

    Michael replies: I think you might want to look at the links below, and reflect on what’s on the page that you link to in your comment.
