Blog Posts from March, 2010

Looping and Branching in Exploratory Testing

Sunday, March 28th, 2010

In the interview with the Coding QA guys that was the subject of my last post, James Bach refers to exploratory testing as parallel learning, test design, and test execution, and says that exploratory approaches are epitomized by loops.

Where do loops happen in exploratory testing? In fact, exploratory testing includes both looping and branching. When we’re testing in an exploratory way, we may branch away from the current path of our activity at any time. That might happen when we observe a particular test result (output; a single observable value) or other test outcome (anything else that might be happening on the system), or anywhere at all during design, execution, or learning. We may get a new idea out of the blue. A distraction or an interruption might prompt a change of tack. We may get some new information about the product or the project’s context. Some emotional reactions or feelings—for example, surprise, confusion, curiosity, frustration, or boredom—might act as a trigger for branching, while others might cause us to continue along the same path. Our charters, our specifications, and the like prompt us to focus; the idea that we might be getting into a rut prompts us to defocus; new information prompts us to refocus. These are alternating heuristics, and they’re important aspects of exploratory skills and dynamics.

As Louis Pasteur pointed out, “In the fields of observation, chance favors only the prepared mind.” In any exploratory process, there’s an element of happenstance, since we never know for sure what we’re going to find. New ideas and epiphanies don’t exactly follow schedules, so exploratory testers are aware of patterns of alternation and use them: doing vs. thinking; doing vs. describing; gathering data vs. analyzing data; testing quickly vs. testing carefully; generating ideas vs. elaborating ideas; overproducing ideas vs. abandoning ideas vs. recovering ideas; and so forth. We branch when something diverts us or when we divert ourselves from our current line of investigation; the branch turns into a loop when we learn, return, and iterate.

The point of all this is that in exploratory testing, it’s the tester—not someone or something else—that is in control of the process of interacting with the product, with the testing mission, and with time. Consequently, self-management is an important skill of exploratory testing.

Coding QA Podcast on Exploratory Testing

Sunday, March 28th, 2010

Several months back, James Bach did an interview with the CodingQA guys, Matthew Osborn and Federico Silva Armas. In the interview, James talks about the skills of exploratory testing, sex education (now do I have your attention?) and how to use session-based test management with minimal overhead and maximum credibility.

I’m surprised at how few people have heard about the podcast, so I’m drawing attention to it here. It runs around an hour. There’s a lot of content. Inspired by Adam Goucher, I’ve written up summary notes for those who would prefer to listen later. This first post is about exploratory testing generally. In a later post, I’ll summarize the discussion of session-based test management.

On Exploratory Testing Generally

  • Think of testing as a martial art. Seek to be a master; study the arts and weapons; share the passion.
  • It’s important to field-test our processes before we claim “this is the way things should be done”. Even if you have experience with your process and you think you’ve described it well, you’re not likely to be able to impart the process successfully to other people without revising and refining it, and without training them in it.
  • Exploratory testing is not just a fancy term for “fooling around with the computer”.
  • Exploratory testing is not a technique. It is an approach.
  • A technique is a way of doing something. Approaches are broader notions; they’re like additives that are applied to techniques.
  • The opposite of exploratory testing is scripted testing, but…
  • Even though scripted and exploratory are opposites, they’re not mutually exclusive. “Hot” and “cold” are opposites, but we can mix hot and cold water together to get warm water. Similarly we can mix exploratory and scripted approaches together to get testing that is partially scripted and partially exploratory.
  • An exploratory approach can be applied to any test technique (or any other approach). For example, you can do boundary testing in a scripted way or in an exploratory way. Automation is another approach that you can apply to your testing, so you can do scripted automated testing or exploratory automated testing.
  • Exploratory testing is three activities that are done in parallel in a mutually supporting way: learning, designing your tests, and executing your tests. You’re doing exploratory testing to the degree that those activities are not separated. The thing that distinguishes exploratory testing from scripted testing is the interaction between learning, design, and execution. As soon as you start to separate them, you’re starting to take a scripted approach, and the more you separate them, the more scripted your approach.
  • People think that exploratory testing means undocumented testing, but exploratory testing can be extensively documented. It doesn’t mean unrigorous testing; you might be quite rigorous in your exploration. People think that exploratory testing means unstructured testing, but exploratory testing is structured, and it might be very explicitly structured. (The linked paper is an evolving list of the constituent skills of excellent (exploratory) testing.)
  • In exploratory testing, you always have loops. As soon as you put a loop into a scripted test, you’ve just gone exploratory. If you learn something in the course of a scripted test and you go back and investigate it, that has now become an exploratory test.
  • Exploratory testing is like sex: it went on for a long time before people started talking about it and started to provide education about it. There would still be lots of sex going on even if we didn’t talk about it. The purpose of sex education is not the continuation of the human species; that’s going to happen anyway. We provide sex education because we want people to be able to make better, more informed choices about sex.
  • People do exploratory testing and don’t realize that they’re doing it, or don’t admit that they’re doing it, or pretend that they’re not doing it. If you’ve ever run into a problem with a script and done something about it rather than just sitting there, you’ve done exploratory testing; if you’ve ever investigated a bug, that’s exploratory testing; if you’ve ever worked with the product and learned about it just prior to writing a script, that’s exploratory testing.
  • What we’re talking about is learning to do exploratory testing like a pro.
  • Exploratory testing is like chess: learning how to play takes very little time. Learning how to play well is a much more significant proposition.
  • When people say that exploratory testing is like ad hoc testing, ask: “So what are the skills of ad hoc testing?” They won’t have an answer, because they’ve never thought about it.
  • Many testers can’t explain how they recognize a bug; it’s “sort of abstract”. But skilled exploratory testers who have studied the craft can describe how they recognize a bug, such that the listener can very quickly understand it, learn how to do it, and explain it to others. When we’re specific about our patterns of observing and reporting problems, we don’t have to invoke unhelpful, vague, and personal terms like “intuition” or “magic”; we can actually explain how to do our work in a skillful way.
  • As an example, consider the HICCUPPS(F) heuristic (History, Image, Comparable Products, Claims, User Expectations, Product, Purpose, and Standards; the F stands for Familiar problems). That set of consistency heuristics was discovered by observing and interviewing testers over time.
  • The consistency heuristics can be used in a generative way, to help find bugs; or they can be used in a retrospective way, to help frame the explanation after a bug has been found. (A toy sketch of the generative use follows this list.)
  • Unless your work is under scrutiny by someone skilled (e.g., a manager or a test lead), you won’t have the feedback necessary to become better at it and to sharpen it.
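
To make the “generative” idea concrete, here’s a toy sketch that treats the consistency heuristics as prompts for test ideas about a feature. It’s in Python purely for illustration, and the wording of the prompts is my own paraphrase, not from the podcast or from any published list.

    # Toy sketch: using the HICCUPPS(F) consistency heuristics generatively,
    # as prompts for test ideas about a feature. The prompt wording is my
    # own paraphrase; nothing here is a canonical formulation.
    HEURISTICS = {
        "History": "Is it consistent with past versions of the product?",
        "Image": "Is it consistent with the image the company wants to project?",
        "Comparable Products": "Is it consistent with comparable products?",
        "Claims": "Is it consistent with what important people claim it does?",
        "User Expectations": "Is it consistent with what reasonable users want?",
        "Product": "Is it consistent with the rest of the product itself?",
        "Purpose": "Is it consistent with its apparent purpose?",
        "Standards": "Is it consistent with applicable standards and statutes?",
        "Familiar problems": "Is it free of problems we've seen before?",
    }

    def test_idea_prompts(feature):
        """Yield one test-idea prompt per consistency heuristic."""
        for name, question in HEURISTICS.items():
            yield f"[{feature}] {name}: {question}"

    for prompt in test_idea_prompts("Save dialog"):
        print(prompt)

Used retrospectively, the same table works in the other direction: after you’ve found a bug, you scan the list for the heuristic that best frames why the behaviour is a problem.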

That covers the first twenty minutes or so of the conversation. The second part is summarized here. You can, of course, also listen to the podcast itself.

Management Mistakes (Part 1)

Saturday, March 27th, 2010

Phil came into my office, and flopped down into the comfortable chair across from my desk. He looked depressed and worried. “Hey, Phil,” I asked him tentatively. “You look like something’s bothering you. What’s up?”

His brow furrowed. “I don’t know. I just don’t know. Sometimes I feel like people think of me as nothing more than a literary device.”

I usually don’t like fiction writing in the form of dialogs. I’m not good at the form, which leads to me not practising it, which of course leads to me not being good at it. The problem that I see in many dialogs is that the characters are at best one-dimensional. They’re trapped in an artificial story. I don’t connect with them if I don’t care about them, and I don’t care about them if I don’t connect with them.

And yet I do hear about real-life dialogs from people that I connect with and care about very deeply. Those people are testers that I meet all over the world. Everywhere I go, they tell me about conversations they have with their managers. What follows is a compendium of things that I hear at least once at every conference. It’s an exaggeration, but based on what I’ve heard consistently from testers worldwide, it’s not a major one. In my defense, it’s a dialog, but it’s not exactly fiction.

Magnus the Project Manager: “Hey, Tim. Listen… I’m sorry to give you only two days’ notice, but we’ll be needing you to come in on Saturday again this week.”

Tim the Tester: “Really? Again?”

Magnus: “Yes. The programmers upstairs sent me an email just now. They said that at the end of the day tomorrow, they’re going to give us another build to replace the one they gave us on Tuesday. They say they’ve fixed another six showstoppers, eight priority one bugs, and five severity twos since then, and they say that there’ll be another eight fixes by tomorrow. That’s pretty encouraging—27 fixes in three days. That’s nine per day, you know. They haven’t had that kind of fix rate for weeks now. All three of them must have been working pretty hard.”

Tim: “They must have. Have they done any testing on those fixes themselves?”

Magnus: “Of course not. Well, at least, I don’t know. The build process is really unstable. It’s crashing all the time. Between that and all the bugs they’ve had to fix, I don’t imagine they have time for testing. Besides, that’s what we pay you for. You’re quality assurance, aren’t you? It’s your responsibility to make sure that they deliver a quality product.”

Tim: “Well, I can test the product, but I don’t know how to assure the quality of their code.”

Magnus: “Of course you do. You’re the expert on this stuff, aren’t you?”

Tim: “Maybe we could arrange to have some of the testing group go upstairs to work more closely with the programmers. You know, set up test environments, generate data, set up some automated scripts—smoke tests to check that the installation…”

Magnus: “We can’t do that. You have high-level testing to do, and they have to get their fixes done. I don’t want you to bother them; it’s better to leave them alone. You can test the new build down here on Saturday.”

Tim: (pauses) “I’m not sure I’m available on Sa…”

Magnus: “Why not? Listen, with only two weeks to go, the entire project depends on you getting the testing finished. You know as well as I do that every code drop we’ve got from them so far has had lots of problems. I mean, you’re the one who found them, aren’t you? So we’re going to need a full regression suite done on every build from now until the 13th. That’s only two weeks. There’s no time to waste. And we don’t want a high defect escape ratio like we had on the last project, so I want you to make sure that you run all the test cases and make sure that each one is passing before we ship.”

Tim: “Actually, that’s something I’ve been meaning to bring up. I’ve been a little concerned that the test cases aren’t covering some important things that might represent risk to the project.”

Magnus: “That might be true, but like I said, we don’t have time. We’re already way over the time we estimated for the test phase. If we stop now to write a bunch of new test scripts, we’ll be even more behind schedule. We’re just going to have to go with the ones we’ve got.”

Tim: “I was thinking that maybe we should set aside a few sessions where we didn’t follow the scripts to the letter, so we can look for unexpected problems.”

Magnus: “Are you kidding? Without scripts, how are we going to maintain requirements traceability? Plus, we decided at the beginning of the project that the test cases we’ve got would be our acceptance test suite, and if we add new ones now, the programmers will just get upset. I’ve told them to do that Agile stuff, and that means they should be self-organizing. It would be unfair to them if we sprang new test cases on them, and if we find new problems, they won’t have time to fix them. (pause) You’re on about that exploratory stuff again, aren’t you? Well, that’s a luxury that we can’t afford right now.”

Tim: (pauses) “I’m not sure I’m available on Sa…”

Magnus: “You keep saying that. You’ve said that every week for the last eight weeks, and yet you’ve still managed to come in. It’s not like this should be a surprise. The CFO said we had to ship by the end of the quarter, Sales wanted all these features for the fall, Andy wanted that API put in for that thing he’s working on, and Support wanted everything fixed from the last version—now that one was a disaster; bad luck, mostly. Anyway. You’ve known right from the beginning that the schedule was really tight; that’s what we’ve been saying since day one. Everybody agreed that failure wasn’t an option, so we’d need maximum commitment from everyone all the way. Listen, Tim, you’re basically a good guy, but quite frankly, I’m a little dismayed by your negative attitude. That’s exactly the sort of stuff that brings everybody down. This is supposed to be a can-do organization.”

Tim: “Okay. I’ll come in.”

How many management mistakes can you spot in this conversation? In your opinion, what’s the biggest one? Here are my nominees:

Tim needs to manage his responses. He needs to learn to say No, quickly and directly.

Magnus needs to recognize that testing’s job is to gather information that will help him make decisions about the product, and that he already has more than enough information to start making decisions. He’s making a lot of mistakes here, but his biggest one is that he has closed his eyes and ears to the information all around him, and he’s not managing the project. More testing won’t help him, and Tim is already done, for now. Magnus needs to start managing the project. He needs to give the programmers time to fix their problems and test those fixes. Since the overhead for investigating and reporting bugs is so high and so damaging to test coverage, he needs to require better builds from the programmers—and he needs to provide them with the time and the resources to do that. He needs to co-ordinate the services that his testers can offer with the services that the programmers need.

Two questions for you: What do you think? And does this conversation feel familiar to you?

I Update My Blog and Discover Testing Tools

Sunday, March 21st, 2010

For the last few weeks, I’ve been updating my blog and my Web site. This was inspired largely by Blogger’s decision to drop support for blog publishing via FTP. That would mean moving the blog to a .blogspot.com site, or to a custom domain that wouldn’t be developsense.com or a subdomain of it (later: not a subdomain, but a subfolder of http://www.developsense.com). Ugh. Many of my colleagues have taken to using WordPress, and I’ve been admiring the look and feel and features of their blogs, so off I went. Making the conversion has been a little arduous, but that’s largely because I’ve done a few things in addition to the conversion: I’ve made the blog look much more like the rest of my site, I’ve fixed a number of problems, and I’ve added a number of new features.

Along the way, I got quite a bit of help from a number of online resources and tools that I feel are worth mentioning, especially for testers who seek to learn about some of the underlying technologies.

W3Schools (http://www.w3schools.com). This site offers tutorials and references for most of the important Web technologies, including HTML, XHTML, CSS, PHP, and plenty more. One of the coolest things about the W3Schools site is its ability to provide you with interactive examples via the TryIt editor: in one pane, you type text; in the other, you see the effects immediately. For me, short feedback loops are a great way of learning.

Rubular (http://www.rubular.com). This handy online tool focuses on regular expressions.  Like TryIt, it allows you to experiment with regular expressions and see their effects immediately.  When you make a mistake, it’s easy to do experiments that explain it.
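
If you’d rather experiment offline, the same kind of short feedback loop is easy to reproduce in code. Here’s a minimal sketch using Python’s re module (Rubular itself is built on Ruby’s regex engine, but the flavour is very close for simple patterns); the pattern and the sample strings are invented for illustration.

    import re

    # An invented pattern for phone numbers like 416-555-2368.
    pattern = re.compile(r"\b(\d{3})-(\d{3})-(\d{4})\b")

    samples = ["Call 416-555-2368 today.", "No number here.", "Fax: 905-555-0199"]
    for text in samples:
        match = pattern.search(text)
        if match:
            print(text, "->", match.groups())  # e.g. ('416', '555', '2368')
        else:
            print(text, "-> no match")

Tweak the pattern, re-run, and observe: that’s the same experiment-and-observe loop that Rubular gives you in the browser.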

Web Developer Toolbar for Firefox (https://addons.mozilla.org/en-US/firefox/addon/60). I found this browser tool indispensable for figuring out gnarly (and largely self-imposed) CSS problems. Among many other things, it allows you to trace the trail of styles that apply to a particular element in the browser window. Once you’ve done that, you can review the style sheets that are being applied to a page, and edit the styles on the fly. This is to CSS what a really good debugger is to other kinds of code. I now find it really easy to figure out problems not only on my own site, but also on other people’s sites, and I can perform experiments that test out possible solutions.

CSS Based Design (http://adactio.com/articles/1109/).  This article by Jeremy Keith (who happens to be the fellow behind The Session http://www.thesession.org, a wonderful Irish traditional music resource) is so old, by Web standards, that it might as well have been written on stone tablets.  But it’s also as direct, clear, and authoritative as other stuff written on stone tablets. It also provides the clearest and simplest explanation of margins, borders, and padding that I’ve been able to find.
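
As a rough illustration of the arithmetic that the article explains, here’s a toy sketch in Python. It assumes the traditional CSS content-box model, in which padding, border, and margin are added outside the content width on both sides; the numbers are invented.

    # Toy sketch of traditional CSS content-box arithmetic: padding,
    # border, and margin are each added on the left and the right,
    # outside the element's content width. Numbers are invented.
    def occupied_width(content, padding, border, margin):
        return content + 2 * (padding + border + margin)

    # A 200px-wide element with 10px padding, a 1px border, and a
    # 5px margin takes up 200 + 2 * (10 + 1 + 5) = 232 pixels.
    print(occupied_width(200, 10, 1, 5))  # 232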

BrowserShots.org (http://www.browsershots.org). Want to know what your page looks like on another browser? Want to know what your page looks like on another 46 browsers? Pass the address to BrowserShots, wait a while, and you’ll get to see the page on (as of this writing) up to 47 different browsers. (Admittedly, this raises the question of why there are 47 different browsers, but I digress.) It’s a free and popular service, so there is a queue. Submit your page, then go off and do something else.

The WordPress Codex (http://codex.wordpress.org). This one is of less general utility, but if you’re setting up or troubleshooting a WordPress blog, it’s indispensable.

As for offline tools, the hands-down award winner is TextPad. I registered my first copy of it in 1997. It probably has the highest value-to-cost ratio of any software product that I’ve ever purchased.

Rapid Software Testing Public Events in Europe

Monday, March 1st, 2010

It’s a busy spring for Rapid Software Testing in Europe.

I’m going to be at the Norwegian Computer Society’s FreeTest, a conference on free testing tools, in Trondheim, Norway, where I’ll be giving a keynote talk on testing vs. checking on March 26. That’s preceded by a three-day public session of Rapid Software Testing, from March 23-25. Register here.

After that I’m off to Germany for a three-day public offering of Rapid Software Testing in Berlin, sponsored by Testing Experience.  That class happens March 29-31.  Can’t make it yourself?  Please spread the word!

Stephen Allott at Electromind is setting up a three-day Rapid Software Testing class that I’ll teach in London, May 11-13.  There’s also a testers’ gathering to be held in some accommodating pub on Wednesday the 12th.  If you’re in the area (or can get there), I’d love the opportunity to meet and chat.  Drop a line to me for details.

While all that’s going on, my colleague James Bach will be in Sweden—delivering a public RST class for AddQ Consulting in Kista, near Stockholm, March 16-18; a session of Rapid Software Testing in Gothenburg, March 22-24; a tutorial on Self-Education for Testers on March 25; and an appearance at the SAST conference on March 26. That’s interspersed with a bunch of corporate consulting, after which he’ll be at the ACCU Conference in Oxford, UK, April 14-17.