Blog: Exploratory Testing and Interviews

I’m going to be interviewed on April 19, 2010 by Gil Broza, an expert Agile coach who is a colleague and friend here in Toronto.

Gil’s request for an interview reminded me of an experience I had a few weeks ago. I received an email from a couple of researchers in Sweden who are studying exploratory testing.  I was honoured to be asked for my point of view on the subject. However, I was a little startled when the gentlemen provided me with a list of 27 questions about the problems of exploratory testing.  And 23 about the benefits of exploratory testing. And 16 about the problems of test-case-based testing. And 17 about the benefits of test-case-based testing.

That seemed like a lot of questions to answer. To answer some of them sufficiently would have required a straight Yes or No.  To answer others well would have involved sprawling issues of ontology and epistemology.  Some questions asked for more than one answer (“Please list down at least five problems related to ET.”).  Some questions were about experiences, and asked for stories. The only sure thing was that a thorough reply would have taken hours of writing (and even though it’s about a subject I love, fairly tedious writing).  I was traveling, and figured I could only give them an hour, so I got them to phone me instead, one morning while I was in Trondheim, Norway.

It was a great experience. We had a grand chat (we went a few minutes over the hour, we were having so much fun) and what’s more, it provided a wonderful set of metaphors for testing.

  • Excellent exploratory testing is like interviewing a program. Imagine that you work at a placement agency, linking candidate workers to your clients.  One of your key tasks is to qualify your candidates before you send them out for jobs or for interviews.  To make sure they’ll be ready for whatever your clients might throw at them, you test them through an interview of your own.  You can plan for that interview by all means, but what happens during the interview is to some degree unpredictable, because for each question, the answer that you get informs decisions about the next question. One way to test (a great way, and an important way, I believe) is to treat your program like a prospective employee for your customers.  You’re not merely going to test that the candidate can answer some questions correctly (that is, is the candidate capable?); you’re going to look at the whole package.  Does the candidate deal appropriately with surprising or malformed situations (that is, is the candidate reliable)?  When he gets stuck, does the candidate know how to ask for help politely and attempt to move forward, or does he just sit there stupidly (in the software world, we’d ask questions about user-friendliness, usability, and error messages)?  Can the candidate deal with being stressed out or overwhelmed, or does the candidate just collapse in a heap (performance?  scalability?  reliability?)?

    Scripted testing is like sending someone a list of 83 written questions and expecting 83 written answers. The answer to one question will not inform the next question, unless you design a mechanism for feedback and course correction, such as only submitting a few questions or answers at a time.  Notice also that like “83 test cases”, the quantity “83 questions” doesn’t really mean very much to you.  Until you’ve seen the questions, you can’t really know anything about them.  You can only know what I’ve told you about them.  Were they good questions?  Bad?  Multiple choice?  Worthy of a quick, a deep, or a practical answer?  Did they reflect what the personnel agency’s clients really wanted to know about the candidate?  What they really needed to know?

    Exploratory testing takes into account the possibility that you might get a different answer to your question from one day to the next, and that that might result in you asking new and different questions. Thus exploratory testing focuses on adaptability. Scripted testing emphasizes the answers that you want to hear from a program, the same way every time you ask, without variation. Therefore, scripted testing focuses on invariance. Note that invariance can be checked, but adaptability must be tested. It makes sense to delegate the most extreme kind of scripted testing—checking—to a machine.  Humans are, as Jerry Weinberg put it, at best variable when it comes to showing that the machine can do the same thing with different numbers.  Automation is great at that stuff.  If humans are indeed unpredictable and variable, it makes sense to treat those tendencies as assets when it comes to testing, and exploit the natural human capacity for variation and adaptation to test the system’s capacity to adapt.

  • When I received the questions, I felt overwhelmed and, frankly, irritated.  During the interview I felt comfortable and relaxed. I noted that the dialogue, the conversation, felt very natural and immediate. The list of questions, although thoughtfully conceived, had felt stilted and disjointed.  Rather than interacting with a static piece of paper, I could hear human voices at the other end of the line. The different feeling offered by the two modes of communication is natural. We enter the world as listeners and speakers, not as readers and writers. Our minds and our sensory systems are biased by our heritage to prefer immediacy, dealing with people as directly as possible. Computers and documents are media, technologies. They extend our capabilities and our senses in some ways and diminish them in others. A paper document turns a conversation from a set of loops into a linear sequence. A computer allows us to converse in real time over great distances, sending video, audio, text, and documents, but it still lacks the immediacy of actual presence and the rich sensory environment in which people evolved.
  • Since we only had an hour, the researchers and I realized that we had to begin by focusing on the questions that were most important to them. Except we started the interview with different ideas about what might be important. Thus we discovered what was really important along the way. At the end of the session, we agreed that we had covered almost all of the initial questions anyway.  It turned out that a little over an hour of conversation was enough to give them plenty of material to consider. That’s like testing too. We often find that with the right approach, discovering what we want to know might take a lot less time and a lot less effort than we think. Rapid feedback loops—like those available in an exploratory approach—can help us to work out quickly what is important to test and what might not be so important.  (That’s why, when a test team is under time pressure, there’s a powerful and natural desire to explore rather than to stick with an overly scripted process.  The trick is to have the skill to do that exploratory testing expertly.)
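To make the checking-versus-testing distinction concrete: a check is a machine-decidable question about invariance, asked the same way and expecting the same answer every time. Here’s a minimal sketch in Python — the `add` function and its cases are hypothetical stand-ins for illustration, not anything from the interview:

```python
# A "check" in the sense above: the same questions, asked the same way,
# expecting the same answers every time. A machine runs this tirelessly
# and without variation -- which is exactly why it should be delegated.

def add(a, b):
    """A trivial stand-in for the program under test."""
    return a + b

def check_add_invariance():
    # Each case pairs fixed inputs with the one answer we want to hear.
    cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
    return all(add(a, b) == expected for (a, b), expected in cases)

print(check_add_invariance())
```

A human explorer, by contrast, would vary the questions from one run to the next — probing surprising inputs, stress, and malformed situations — which is the adaptability that checks like this one cannot exercise.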

So, interviews are like exploratory testing.  They’re fun, and that’s why I’m looking forward to Monday, April 19th, 2010 when Gil Broza will be interviewing me on the subject, “Is There a Problem Here?”.  Join us by signing up here.

Want to know more? Learn about Rapid Software Testing classes here.

