
Exploratory Testing IS Accountable

In this blog post, my colleague James Bach talks about logging and its importance in support of exploratory testing. Logging takes care of one part of the accountability angle, and in an approach like session-based test management (developed by James and his brother Jon), the test notes and the debrief take care of another part of it.

Logging records what happened from the perspective of the test system. Good logging relieves the tester from having to record specific actions in detail; the machine does that. The tester is thereby free to record test notes—a running account of the tester’s ideas, questions, and results as he tested, or what happened from the perspective of the tester. Those notes form the meat of the session sheet (a sketch of one appears after the list below), which also includes

  • coverage data
  • who did the testing
  • when they started
  • how long it took
  • the proportion of time spent on test design and execution, bug investigation and reporting, and setup
  • the proportion of the time spent on on-charter work vs. opportunity work
  • references to log files, data files, and related material such as scenarios, help files, specifications, standards, and so forth
  • and, of course, bugs discovered and issues identified.
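
For concreteness, here is a minimal sketch of what a filled-in session sheet might look like. The headings and tag names are illustrative, loosely modeled on the SBTM materials that James and Jon have published; the charter, tester, numbers, and bugs are invented for the example, and your own protocol might use different fields.

    CHARTER
    -----------------------------------------------
    Explore the Export function for data-loss problems.

    START
    -----------------------------------------------
    8/10/09 1:30pm

    TESTER
    -----------------------------------------------
    Pat Example

    TASK BREAKDOWN
    -----------------------------------------------
    #DURATION
    short

    #TEST DESIGN AND EXECUTION
    60

    #BUG INVESTIGATION AND REPORTING
    25

    #SESSION SETUP
    15

    #CHARTER VS. OPPORTUNITY
    90/10

    TEST NOTES
    -----------------------------------------------
    1:37pm Export to CSV drops trailing zeros in currency fields.
    1:52pm Question: is there a spec for maximum field length?

    BUGS
    -----------------------------------------------
    #BUG
    Exported CSV truncates fields longer than 255 characters.

    ISSUES
    -----------------------------------------------
    #ISSUE
    No spec for maximum field length; who decides?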

After the session or at the end of the day, the tester presents a report—the session sheet combined with an oral account—in the debrief, a conversation between the tester and the test lead or test manager. In the debrief, the test lead reviews—that is, tests—the tester’s experience and his report. The question “What happened?” gets addressed; the oral and written aspects of the report get discussed and evaluated; the session charter is confirmed or revised; holes are discovered and, where needed, plugged with followup testing; bug reports get reviewed; issues get brought up; coaching happens; mentoring happens; learning happens; knowledge gets transferred. The goal here is for the tester and the test lead to be able to say, “We can vouch for what was tested.”

The session sheet is structured in such a way that it can be scanned by a text-parsing tool written in Perl. The measurements (in particular the coverage measurements) are collected and collated automatically into reports in the form of sortable HTML tables. Session sheets are kept for later review, if they’re needed.
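
The original scanning tool is a Perl script; purely to illustrate the mechanism, here is a rough sketch in Python of how such a scanner might pull one measurement out of a directory of session sheets. The tag name and file layout follow the hypothetical sketch above, not the real tool.

    # Sketch only: tally on-charter vs. opportunity percentages across
    # session sheets. Tag names follow the illustrative sheet above,
    # not necessarily the real SBTM scan tool.
    import glob
    import re

    def read_tag(text, tag):
        """Return the line that follows a '#TAG' marker, or None."""
        m = re.search(r"^#" + re.escape(tag) + r"\s*\n(.+)$", text, re.MULTILINE)
        return m.group(1).strip() if m else None

    totals = {"charter": 0, "opportunity": 0}
    for path in glob.glob("sessions/*.ses"):
        with open(path, encoding="utf-8") as f:
            text = f.read()
        split = read_tag(text, "CHARTER VS. OPPORTUNITY")  # e.g. "90/10"
        if split:
            charter, opportunity = (int(n) for n in split.split("/"))
            totals["charter"] += charter
            totals["opportunity"] += opportunity

    print(totals)  # a real tool would collate this into sortable HTML tables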

If logging in the program isn’t available right away, screen recording tools (like BB Test Assistant, Camtasia, Spector, …) can provide a retrospective account of what happened. (An over-the-shoulder video camera works too.) Note that these tools simply record video (and, optionally, sound—which is good for narration). Programmatic repetition of the session isn’t the point. Nor is the point to have a supervisor review the screen capture obsessively; that wastes time, and besides, nobody likes working for Big Brother. The idea is to use the video only when necessary—to aid in recollection where it’s needed, and to help in troubleshooting hard-to-reproduce bugs.

We suggest, where it doesn’t get in the way, taking the test notes on the same machine as the application under test, popping up a text editor window as a way to link the execution of the application with bugs, test ideas, or questions. For bugs that don’t appear to be state-critical, you can also take very brief notes for later followup. Include a time stamp, where the time stamp is an index into the recording; then revisit the recording later if more detail is called for. (In Notepad, you can press F5; in TextPad, use Edit/Insert/Time, which is macroable; other text editors almost certainly have a similar feature.)
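
If your editor lacks such a feature, a few lines of script can stand in for it. Here is a minimal sketch in Python, assuming an invented notes file name: each note you type gets prefixed with the clock time, so the notes double as an index into the screen recording.

    # Sketch only: prefix each typed note with a clock time, so the note
    # serves as an index into the screen recording.
    from datetime import datetime

    with open("session-notes.txt", "a", encoding="utf-8") as notes:
        while True:
            try:
                line = input("note> ")
            except EOFError:  # Ctrl-D (or Ctrl-Z on Windows) ends the session
                break
            stamp = datetime.now().strftime("%H:%M:%S")
            notes.write(stamp + "  " + line + "\n")
            notes.flush()  # keep notes safe even if the machine hangs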

Between a charter, the session sheet, the oral report, data files, and the logs and the debrief, it’s hard for me to imagine a more accountable way of working. Each aspect of the reporting structure reinforces the others. This is why I get confused when test managers talk about exploratory testing being “unaccountable” or “unmanageable” or “unstructured”: when I ask them what accountability and management mean to them, they point lamely to a pile of scripts or spreadsheets full of overspecified actions that were written weeks or months before the software was built, or they mumble something about not knowing what goes on in a tester’s head.

Any testing approach is manageable when you choose to manage it. If you want structure, think about what you mean (maybe this guide to the structures of exploratory testing will help), identify the structures that are important to you, and develop those structures in your testers, in your team, and in your approaches. If you want accountability, provide structures for it (like session-based test management), and then require accountability. If you find that your testers aren’t sufficiently skilled, train them and mentor them. (And if you don’t know how to do that rapidly and effectively, we can help you.)

If there’s something you don’t like about the results you’re getting, manage: observe what’s going on in your system of testing, and put in a control action where you want to change something. If you want to know what’s going on in a tester’s head, observe her directly and interview her as she’s testing; have her pair with another tester or a test lead; critique her notes; debrief her and coach her, until you get the results that you seek. If you want to supercharge the efficiency of your testers, work with the programmers and their managers to focus on testability, with special attention paid to scriptable interfaces, logging, and at least some programmer testing. (It might help to identify the information-hiding and feedback-loop-lengthening costs of the absence of testability.) If you find individual debriefs taking too long, or if you want to share information more broadly within the test team, try group debriefs at the end of one day or the beginning of the next. If you want to add features to the reporting protocol, add them; if you want to drop them, drop them. Experiment, re-evaluate, and tune your testing as you see fit.

And if you have a more manageable and accountable approach than this for fostering the discovery of important problems in the product, please let us know (me, or James, or Jon). We’d really like to hear about it.

7 replies to “Exploratory Testing IS Accountable”

  1. Great post, and something that I was struggling with.
    In your section about notes you listed, among other things:
    -who did the testing
    -when they started
    -how long it took
    -the proportion of the time spent on on-charter work vs. opportunity work

    I'd like to note that there are tools out there that let you record these items as well, so the number of things the tester has to document manually shrinks, which to me is a good thing.

    You wrote: "Between a charter, the session sheet, the oral report, data files, and the logs and the debrief, it's hard for me to imagine a more accountable way of working."
    I agree. You are listing six different items here that make sure ET is accountable. You could argue that with test scripts you only have two: the test script and the test log. To me it's a given that your approach adds more value, as you get more information. But it's also harder to manage; it's a lot easier to throw test scripts and logs over the wall, tick the box, and say: done, and here's what we did in writing. It's not managing, but if no one complains…

    The crux, as far as I can see, is that it's harder to actively manage, provide useful information, and account for your actions than it is to put something on paper and say it's done.

  2. I think the direction you are taking when it comes to managing exploratory testing, or managing in general for that matter, is people working with people, and not managers working with spreadsheets.

    In a sense, this approach might require a lot more from managers; they are required to be more in touch with the field, to talk to their people on a daily basis, and to conduct daily team meetings. Some managers will prefer sitting in the comfort of their desks and reviewing detailed spreadsheets, but your approach is surely more beneficial to the entire process.

    I like the idea of brief ET sessions, and it's easy to implement.

  3. I think there are a lot of things that speak against using ET and RST when talking about accountability:
    * Scripted testing has had many years to package its concepts and products, while you are at the start of that effort. As I see it, scripted testing comes from scientific management, which has over 150 years of brainwashing behind it.
    * There are few tools, and especially few really, really expensive tools (considering that there can sometimes be an idea that the more expensive the tool, the better). Using a script that one person has created can be seen as…
    * Don't forget the lure of visualizations such as bar charts and pie charts expressing progress mapped against planned progress.
    * Session-based test management is more chaotic and therefore better adapted to the real world and real projects; still, it does not look as clean and orderly as scripted test management.

    I wrote this article:
    http://thetesteye.com/blog/2009/07/scripted-vs-exploratory-testing-from-a-managerial-perspective/
    as a way to hammer on the ET concept; perhaps a bit naive, but I tried to understand why it is hard to sell ET into an organisation.

  4. @Thomas & Gali: thanks for the comments.

    I'd like to note that there are tools out there that let you record these items as well, so the number of things the tester has to document manually shrinks, which to me is a good thing.

    That can be helpful, but be careful. A tool can't tell how much time a tester has spent on design and execution vs. bug investigation and reporting vs. setup. Those aren't intended to be precise categorizations anyway; we're just trying to capture the general level of interruption. Far more important, for this purpose, is information that comes from conversation: information delivered immediately, rather than mediately, as a tool would provide it.

    But it's also harder to manage; it's a lot easier to throw test scripts and logs over the wall, tick the box, and say: done, and here's what we did in writing. (Thomas)

    and

    …they are required to be more in touch with the field, to talk to their people on a daily basis, and to conduct daily team meetings. Some managers will prefer sitting in the comfort of their desks and reviewing detailed spreadsheets. (Gali)

    Yes. But it's time to be clear: You can observe the work and help to guide people. You can use your authority and responsibility to provide them what they need and to remove obstacles that limit them or slow them down. If you do that, you're a manager. But if you sit at the comfort of your desk and review detailed spreadsheets, you're not a manager; you're a clerk. Now, Dear Manager: you're not a clerk, are you?

    @Martin…

    Thanks for the comments.

    Scripted testing comes from scientific management, which has over 150 years of brainwashing behind it.

    Yup. And no matter how much people try to whitewash it, the stains are still visible if you're looking. But actually, good science and good natural history are much older than that.

    I'm sure I'll comment on your blog post at some point fairly soon.

    —Michael B.

  5. I just read an article from you or James this week that made the point that unstructured “banging around” in the software is not the same as Exploratory Testing.

    I am currently in a new position where the current testing approach is to “bang around” in the software before shipping it to clients. As I implement SBTM, I am receiving great acceptance and feedback such as “it’s great to have a name for what we already do”. The catch is that they haven’t been doing ET, because there was no tracking or significant accountability.

    As I am coming in and learning about the team, I desperately need to be able to track my testers so that I know what they are testing and where training is needed. I recognize the value of ET, and this week I will be implementing SBTM to facilitate that happening instead of the unmanaged “banging around” that has been happening thus far.

    I have enjoyed reading your tweets about #PMI and managers over the last month. Every time I read them I am grateful I work with intelligent critical thinkers. Keep ’em comin’.

