Blog Posts from April, 2009

Exploratory Testing: Recording and Reporting

Wednesday, April 29th, 2009

At the QUEST conference in Chicago, April 22 2009, I gave a presentation on recording and reporting for exploratory testers. You can find the presentation notes here. You can also read a more formal paper on the subject, prepared for the 2007 Pacific Northwest Software Quality Conference, here. Both documents include material on notebooks and on Session-Based Test Management, and a bunch of other stuff besides.

Test Coaching and Collaboration Sessions & The Value of Experiential Learning

Sunday, April 26th, 2009

I’ll be at STAR East Monday, May 4 through May 7 2009. Lots of other colleagues will be there too, including James Bach, Jonathan Kohl, Rob Sabourin, Karen Johnson, and James Lyndsay. I’ll be presenting a keynote talk, “What Haven’t You Noticed Lately: Building Awareness in Testers” (the title there was cheerfully lifted by me from Mark Federman, who cheerfully lifted it from Terence Gordon, who either lifted or channeled it from Marshall McLuhan, whom Federman explains cogently); an experiential tutorial called “Tester’s Clinic: Dealing with Tough Questions and Testing Myths“; and an experiential session called “Insource or Outsource Testing: Understanding Your Context“.

At every conference, big or small, I’m now offering coaching and collaboration sessions, based on an idea by (and often in cahoots with) James Bach. They’re free, of course; the whole purpose of a conference is conferring (about which more in the next post). Here’s how they work: I (or if James is around, we) announce that we’re going to be in the hotel lobby bar from 7:30 in the evening, after the regular daytime conference sessions, or at any other time you’d like to arrange; we bring along a bunch of testing toys, games, and puzzles, some on laptops and some not; and we work on them, engaging in exploratory play, conversation, coaching, critique of the exercises, and maybe exploratory testing of the bar’s beer list, for as long as people care to stay. In the fall, at STAR West, some of these sessions went on until 10:00pm before the toys were put away.

At these sessions, everyone learns something. The games and experiential exercises either come from the Rapid Software Testing course or are candidates for it. We already have a set of ideas as to how the puzzles might be relevant, but invariably the people that we’re working with discover new angles and give us new epiphanies as to what people can get out of the exercises.

Last week at the QUEST conference in Chicago was a great example of this kind of experience. Xavier Bignon, who attended my one-day exploratory testing workshop, is one of those testers who cannot resist talking about testing, thinking about testing, and solving testing problems. We arranged to meet in the hotel bar on Tuesday evening, after the official end of the conference day. This wasn’t an exercise I expected to do; on a whim, I pulled out a deck of cards, showed him a magic trick, and asked him to reverse-engineer it. I am not a very good magician, so handing him this problem was the equivalent of giving him a program with lots of interesting bugs and discoverable problems. I watched and coached him as he wrestled with each stage of the exercise. At one point, Xavier posited an explanation as to how I was getting something done; he reckoned that I was memorizing something about the cards, when actually I was performing a quick calculation—a much simpler approach. And that led me to a pair of general systems laws.

The Eye-Brain Law (identified, as far as I know, by Jerry Weinberg in Quality Software Management, Vol. 2: First-Order Measurement) says that, to a certain extent, mental power can compensate for observational weakness. The Brain-Eye Law (ibid.) says that, to a certain extent, observational power can compensate for mental weakness. I’d known about those two, and they provided support for my epiphany. What Xavier’s analysis made clear to me were these two:

The Memory-Brain Law says that, to a certain extent, memorization can provide a cheap and effective substitute for calculation. Memorizing your times table allows you to get the answer 56 far faster than adding eight plus eight plus eight plus eight plus eight plus eight plus eight.

The Brain-Memory Law says that, to a certain extent, calculation can provide a cheap and effective substitute for memorization. Calculating 50 x 50 as five times five with two zeroes appended is a lot faster than memorizing your times table up to 50.

If, in a computer program, you can look up a value in a table, that might be faster than calculating it (in fact, a problem with such a table was the basis for the Pentium bug). Similarly, if you can compute a value quickly, that saves immense time and space over looking it up. The skill in any kind of optimization is to figure out what things to trade, and how to trade them.
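Programmers make this trade every day. Here’s a minimal sketch of the two laws in code; the function names and the size of the table are mine, purely for illustration:

```python
# "Memory-Brain Law": precompute a times table once, then answer by lookup.
# Space is spent up front so that each answer is a constant-time fetch.
TIMES_TABLE = {(a, b): a * b for a in range(13) for b in range(13)}

def times_by_lookup(a, b):
    """Answer from 'memory': an O(1) lookup, paid for in stored entries."""
    return TIMES_TABLE[(a, b)]

# "Brain-Memory Law": compute on demand, with no table at all.
def times_by_addition(a, b):
    """Answer by repeated addition: no storage, but more work per call."""
    total = 0
    for _ in range(b):
        total += a
    return total

print(times_by_lookup(7, 8))    # 56, straight from the table
print(times_by_addition(7, 8))  # 56, computed the long way
```

Which one wins depends on the cost of the computation, the size of the table, and how often the answer is needed: exactly the judgment that any optimizer has to make.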

Now, it’s not like I discovered any of this stuff for the first time, for all humanity; programmers have known about these complementary principles forever. They’re two of the founding principles behind many forms of optimization. Matter of fact, people knew about the Memory-Brain Law long before there were computers; it’s the idea behind logarithm tables.

But then it struck me that it’s also one of the principles behind Rapid Testing itself: learning (ideally memorizing) a handful of lists of guideword heuristics, combined with skill developed through practice, allows testers to respond instantly and expertly to any testing situation. I’d known that intellectually, but chatting about it with Xavier helped me to realize on a deeper level that James and I are in the business of trying to optimize testing and the work that testers do.

Those are the sorts of lessons that experiential learning helps to teach, and that’s why we rely on it so heavily in the Rapid Testing course.

For his part, Xavier wrote to me: “I’m really excited about what I’ve learned with you. If there is a possibility to join a work group or something like that to be involved with your research and/or teaching, I’d love to know more about it.” You’re welcome any time, Xavier, and I will keep in touch. And everyone else is welcome too.

So… ping me (stareast@developsense.com) any time, and we’ll have fun together at STAR East!

A Message from the WAQB

Wednesday, April 8th, 2009

“Nice.. so Michael want us to buy his book .. maybe that why he have his web adress in his comments 🙂 Michael we did talk to the Ladies, and if you did the same you would know it’s fixed. Yes there was a mistake, but it’s fixed. If you want adverts for you book pls go to the papers or google adwords. There will come names, faces ect. We have decided that until the end of the Pilot which goes until September we will keep low profile, so we get this structured. We just believe that it will be usefull to work more agile, so we are working on that.. then Michael can do his Rapid Software testing – anybody believes in that ? Steen waqb”

This was posted on LinkedIn, along with a link to the International Agile Testing Qualifications Board (a link that spins forever in my browser, as of tonight), by a fellow named Steen Lerche-Jensen, who apparently signed the Agile Manifesto in the week of February 16-22, 2009. Steen’s reply landed in my inbox. I’d reply to it on LinkedIn, but the thread has been deleted, so I’ll reply here.

1) Sorry to blow your theory, Steen, but I don’t have a book. In fact, everything I’ve written so far is available for free (except for three things: the current issue of Better Software Magazine; the book The Gift of Time, to which I contributed one chapter and for which I receive no royalties; and the Agile Testing book, to which I contributed a sidebar and for which I receive no royalties).

2) That was some kind of “mistake”, publishing the table of contents for Crispin and Gregory’s book as the course syllabus without attribution and without their consent. But I’m glad you’ve corrected it.

3) The comments above on keeping a low profile, and the fact of the WAQB approach as it’s currently implemented on the Web site, seem to me to be incongruent with the claim on the Web site, “WAQB will use the techniques from Open Source to ensure that the quality of the syllabus is of high quality”, and with the nature of the Agile movement itself.

Guest Reply: Rob Bach on Pilots

Friday, April 3rd, 2009

A few blog posts back, I tried to emphasize the relative importance of skilled people over documentation by remarking that commercial airlines “tend to have a captain and a first officer in the cockpit, rather than a pilot and a book on how to fly an aircraft”. “Tend to” was intended to understate the case; as Rob remarks below, you’ll see single pilots only on very small planes (like the seaplane that I took once from Nanaimo to Vancouver—one pilot and six passengers).

gmcrews commented “I don’t think you picked a very good analogy. Even though the pilot may get pretty busy, all commercial aircraft can be safely flown by a single person. The most important function that the copilot serves is quality assurance.” gmcrews also said, “And regardless of the number of pilots, you will always find checklists actively used in all aircraft.” That’s true, but my point was that the skilled humans, rather than the checklist, are at the centre of the operation.

I asked Rob Bach, brother of James and Jon and a pilot for a major airline, to respond to that, and he did, although some enthusiastic spam filter appears to have stopped the first attempt. Rob says:

ALL airlines have more than one pilot if there are more than 10 or so passenger seats on the plane. The reason is not for quality assurance, I assure you.

As a pilot for 33 years, a commercial pilot for 22 years, an airline check pilot for a few of all of those years, I can tell you exactly why there are two to FOUR pilots on any given commercial flight.

The Captains are there to keep the First Officers from killing themselves. The First Officers are there to keep the Captains from killing EVERYBODY.

No, seriously:

People make mistakes. Two people make TWICE as many mistakes as a single person, but the likelihood that those mistakes are identical in nature and time are reduced by the way we coordinate our skill sets.

The FO is not there for assurance, but to command the flight if need be, countermand the Captain if need be, learn from the Captain if possible, fly the plane every other trip leg (run the radio gear the other legs), share pre and postflight duties. FLY the plane during emergencies (unless the Cap elects to do so… but it is rare the Cap doesn’t run the checklist in an emergency), and on and on.

It physically takes two people…like hanging sheetrock.

The cockpit, the plane, the atmosphere, and the air traffic environment are amazingly complicated places where the room for error is quite small. It takes TWO brains working all sides of a flight from minute to minute to make all the magic happen.

Having flown single-pilot in heavy weather into a busy airport, I can state that I was in over my head and don’t relish the thought of going back to that space/time. There’s just too much data coming and going through your brain. Like Tetris in some insane hyper-mode where DEATH is the cost of losing the game.

OK…that was a little dramatic.

Two (or three to Europe and four to India) pilots are used to help ease mental and physical fatigue. Imagine performing at peak mental level at a reduced cabin pressure/oxygen level, in a very dry environment, being irradiated by the instrument panel AND the sun, in an uncomfortable chair where you can’t stretch your legs easily, where you can’t use the bathroom ‘CAUSE HEY, WHO’S FLYING THE PLANE! for a 15 hour day back and forth between timezones, sleeping in unfamiliar surroundings, away from family (but still dealing with all the family issues one needs to deal with) , missing graduations, births, weddings, ball games with your kids, for YEARS:

Don’t you think the reason we have at least two highly-trained professionals is something more than just quality assurance?

Thanks, Rob!

WAQB: Okay, now it’s getting creepy.

Thursday, April 2nd, 2009

This post is here only as a matter of historical record. Eventually, the bad guys go away.

Related to my post about the World Agile Testing Qualifications Board, on March 31, I posted the following discussion on the WAQB LinkedIn list:

Linkedin Groups March 31, 2009
World Agile Qualifications Board – WAQB

Today’s Activity: 1 discussion

Discussions (1)

Does anyone /know/ anything about the World Agile Qualifications Board? 1 comment »

Started by Michael Bolton, Participant in the Workshops on Teaching Software Testing

Don’t want to receive email notifications? Adjust your message setting.

LinkedIn values your privacy. At no time has LinkedIn made your email address available to any other LinkedIn user without your permission. © 2009, LinkedIn Corporation.

Today, when I visit the group (or click on the link above), I see that the discussion is no longer available—evidently removed by a moderator. Why?

A different discussion has started, though, started by Steen Lerche-Jensen, Program Test Manager at StatoilHydro, saying that more people are needed for the review board, and requesting that they apply via the WAQB Web site. There are no replies, as of this writing.

The plot thickens. Nick Malden points out that he has found another Web site, the design of which he finds strikingly similar to the WAQB’s: http://www.test4pro.com/home. Some of it is in English, some isn’t. Nonetheless, there’s lots of interesting information to be obtained. Try comparing it to the WAQB site (http://www.waqb.org/, now defunct, apparently). Try scrolling down.

Hmmmm.

Of Testing Tours and Dashboards

Thursday, April 2nd, 2009

Back in the 1980s and 1990s, Quarterdeck Office Systems (later Quarterdeck Corporation)—a company for whom I worked—was in the business of creating multitasking and memory management products to extend and enhance Microsoft’s DOS operating system. The ideas that our programmers developed were so good and so useful that similar ideas were typically adopted by Microsoft and folded into DOS a year or so later. After each new version of the operating system, people would ask us “are you concerned about Microsoft putting more memory management stuff into DOS?” Quarterdeck’s reply was always that, as long as Microsoft supported DOS, we would find ways to improve on memory management—and that we were delighted that Microsoft had legitimized the category.

I wasn’t lucky enough to attend Dr. James Whittaker’s presentation at EuroSTAR 2008, in which he described the concept of touring the software as a way of modeling and approaching exploratory testing. Fortunately, Dr. Whittaker has presented a number of these ideas as part of his recent Webinar “Five Ways to Revolutionize Your QA” on the UTest.com site, which came to my attention on April 1, 2009.

The touring metaphor in testing has been around for a while. I learned about it through James Bach’s Rapid Software Testing course, which I started teaching in 2004, and of which I’ve been a co-author since 2006. In 2004—that’s the first version for which I have my own copies of the course notes—Rapid Software Testing included several ideas for tours:

  • Documentation Tour: Look in the online help or user manual and find some instructions about how to perform some interesting activity. Do those actions. Improvise from them.
  • Sample Data Tour: Employ any sample data you can, and all that you can. The more complex the better.
  • Variability Tour: Tour a product looking for anything that is variable and vary it. Vary it as far as possible, in every dimension possible. Exploring variations is part of the basic structure of my testing when I first encounter a product.
  • Complexity Tour: Tour a product looking for the most complex features and data. Look for nooks and crannies where bugs can hide.
  • Continuous Use: While testing, do not reset the system. Leave windows and files open. Let disk and memory usage mount. You’re hoping that the system ties itself in knots over time.

But the idea had been around before that, too. Tours were also mentioned in the Black Box Software Testing course, co-authored by James and Cem Kaner, which I attended in 2003. They were part of a larger list of test ideas called “Quick Tests”, which included other things like interruptions (starting activities and stopping them in the middle; stopping them at awkward times; performing stoppages using cancel buttons, O/S level interrupts, ctrl-alt-delete or task manager, arranging for other programs to interrupt, such as screensavers or virus checkers; suspending an activity and returning later) and continuous use (while testing, avoiding the resetting of the system; leaving windows and files open; letting disk and memory usage mount, hoping that the system ties itself in knots over time).

Note that the concept of touring wasn’t terribly new in the BBST course notes and appendices either; skilled testers had been using it for a long while before that. In 1995, Cem Kaner noted that the user manual is a test planning document; as he said in Liability for Defective Documentation, “It takes you on a tour of the entire program.” Elisabeth Hendrickson gave a presentation at STAR East in 2001 called “Bug Hunting: Going on a Software Safari”, which gave an overall list of test ideas using the metaphor of a tour. The idea of describing tours of a specific aspect or attribute of the product (namely the menu) appeared in an article by James Bach in the Test Practitioner in 2002.

Much more serious work based on the concept of tours happened in 2005. Mike Kelly did some work with James Bach, and blogged some ideas about what they had discussed in an August 2005 blog post. Mike amplified upon that in his more complete list of tours (using the mnemonic FCC CUTS VIDS) in September 2005.

  • Feature tour: Move through the application and get familiar with all the controls and features you come across.
  • Complexity tour: Find the five most complex things about the application.
  • Claims tour: Find all the information in the product that tells you what the product does.
  • Configuration tour: Attempt to find all the ways you can change settings in the product in a way that the application retains those settings.
  • User tour: Imagine five users for the product and the information they would want from the product or the major features they would be interested in.
  • Testability tour: Find all the features you can use as testability features and/or identify tools you have available that you can use to help in your testing.
  • Scenario tour: Imagine five realistic scenarios for how the users identified in the user tour would use this product.
  • Variability tour: Look for things you can change in the application – and then you try to change them.
  • Interoperability tour: What does this application interact with?
  • Data tour: Identify the major data elements of the application.
  • Structure tour: Find everything you can about what comprises the physical product (code, interfaces, hardware, files, etc…).

Dr. Whittaker does suggest some interesting notions of his own for tours in the UTest talk:

  • Money tour: Test the features that users purchase the app for (which is rather like Mike’s “user tour” above, I guess)
  • Rained-out tour: Start and stop tasks, hit cancel, etc. (rather like Kaner’s notion of “interruptions” above)
  • Obsessive compulsive tour: Perform tasks multiple times, perform tasks multiple times, perform tasks multiple times
  • Back alley tour: Test the least-used features
  • All-nighter tour: Keep the app open overnight (like Kaner’s notion of “continuous use” above)

In his related blog post, “The Touring Test”, Dr. Whittaker says, “At Microsoft a group of us test folk from around the division and around the company are experimenting with tour-guided testing.” Cool. He also says, at the top of the post, “I couldn’t resist the play on Alan Turing’s famous test when naming this testing metaphor.” The idea that he named tours independently is a little surprising, but when we think about the practice of skilled exploratory testing, the “touring” metaphor might be obvious enough to have been arrived at independently. These things happen.

For example, in 2005, James Bach showed me a bunch of test techniques that he called “grokking”. (Grokking is a word invented by Robert Heinlein that describes deep, meditative contemplation and comprehension.) I thought “grokking” wasn’t the right name for what James was describing, because the techniques depended on extremely rapid cognition and on removing information, the very opposite of reflective contemplation. I was reading Malcolm Gladwell’s book Blink at the time, and I suggested that we label the techniques blink testing. Only later, when I was researching the history of similar observational approaches for an article I was writing on blink testing, did I find a reference to an astronomers’ tool from the 1920s: it was called a Blink Comparator. It was fun to note that discovery, a little sheepishly, in the article. So I can understand how it’s easy for people to use the same label for an idea.

But then something else came up.

In the Webinar for uTest.com, Dr. Whittaker also presents the concept of a “low-tech testing dashboard”, in which he suggests using a whiteboard and coloured markers to report on project status. This suggestion isn’t just a variation on the Big Visible Charts that are recommended in the Agile literature; it’s strikingly similar to an idea presented by James Bach at STAR East in 1999 in a talk called “A Low-Tech Testing Dashboard“, posted on his Web site since around that time, and also part of the Rapid Software Testing course (pages 136-146).

I am delighted that authors as well-respected as Dr. Whittaker and that companies as prominent as uTest and Microsoft are endorsing and helping to spread ideas on tours and dashboards. I think they’re worthwhile approaches, and I believe that such endorsement helps in the wider effort to get the ideas accepted. Yet I also believe that it would be a friendly and respectful gesture if Dr. Whittaker’s presentation included acknowledgement of prior work in the field that it covers. It would be similarly helpful if books like Dr. Whittaker’s How To Break Software or Page, Johnson, and Rollison’s How We Test Software at Microsoft contained bibliographies so that we could more easily find references to some of the ideas presented.

What does the community think? How important is it to acknowledge earlier work?