Blog Posts from May, 2009

James Lyndsay Mea Culpa

Wednesday, May 27th, 2009

In a recent posting, I made a mistake: I stated that James Lyndsay, the genial host of the London Exploratory Workshop on Testing (LEWT), had not attended a LAWST conference before setting up LEWT. In fact, he had. Shame on me for not checking.

If you’re not aware of James’ work, you would do well to know about it. He’s the author of a rich set of exploratory testing puzzles that take the form of enigmatic black box machines. James Bach and I have used an early version of one of these machines in the Black Box Software Testing course for several years. I can’t, and won’t, tell you much about them here, since the whole point is to encounter and explore them for yourself. But I can tell you that they’re intriguing and stimulating, and they help to sharpen the questioning processes involved in excellent testing.

James also took the Best Paper honours at STAR East this year for The Irrational Tester: Avoiding the Pitfalls, in which he presented “his view of bias—why we so often labor under the illusion of control, how we lock onto the behaviors we’re looking for, and why two people can use the same evidence to support opposing positions.” These are important and under-explored issues in the testing business.

James is presenting a class on Exploratory Testing in Berlin on June 4-5, and in London, July 2-3. That happens to overlap the latter two days of a course I’m teaching. Nonetheless, I’m delighted to recommend his.

Automation and Coverage Part II

Monday, May 25th, 2009

Last week I posted a blog entry on automation and coverage, in which I questioned the usefulness of trying to cover “everything” with automated tests, comparing them to the CCTV cameras that are in use all over the place, but especially in Britain. Despite the limitations of such schemes, there might also be some useful aspects. What might they be?

  • For certain areas that we decide to cover with a camera, like public streets or open squares, we won’t necessarily need to monitor all of the images all the time, but if the data were stored, we could review it—especially after trouble had occurred—to try to find out what went wrong and who the instigators were. This is like a continuous logging system for a program under test.
  • For places that are hidden or potentially dangerous, like subway underpasses, we might want to set up a motion-activated camera. This is like an event-based logging system, in which we record some aspect of the system state based on some occurrence that we anticipate. (A minimal sketch of this idea appears after the list.)
  • For high-traffic areas like urban highways, where we want to pay lots of attention to the flow of vehicles, it might be a good idea to set up a number of cameras and monitors at various points on the roadway, and cycle through the images every few seconds. Even if we don’t see an incident as it happens, CCTVs allow us to spot trouble fairly shortly afterwards, since traffic will tend to change its behaviour, typically bunching up behind a blockage. Sophisticated cameras would allow us to pan and zoom, inspecting the specific nature of the blockage, and will help us determine how to respond. This is another form of logging, more like polling. We can perhaps combine it with probes or identifiers on the data to make it easy for us to follow a record or data packet that attracts our interest.
  • The same kind of cameras can follow and monitor the (mis)behaviour of drivers, such as those who are blatantly speeding or breaking the rules. This is like a monitor that allows us to track the progress of a particular data set through an application.
  • We can set up a particular camera at a particular interface that allows us to stop traffic and alter it or its behaviour in some way before we send it on. I think of tools like Burp Proxy or Fiddler in this way.
  • In the last few years, more and more people have obtained portable cameras and camera phones; most of those devices do video, too. See something interesting as you’re testing? Use a literal camera, an inexpensive point-and-shoot model, to record some aspect of your activities as you test, or to capture a whiteboard or a set of post-it notes as you fill them up during a meeting.
  • Tourists and amateur newshounds, on the spot as an event occurs, can often take pictures that later turn out to be valuable. A tester can take advantage of the data capture tools that the operating system provides. Windows Vista comes with the Snipping Tool, and previous versions of Windows include Print-Screen (to print the screen to the printer), Shift-Print-Screen (to place a copy of the entire screen on the Windows clipboard), and Alt-Print-Screen (to place the topmost window of the screen on the clipboard). Since these tools are readily available, it’s as though everyone had a camera of his or her own.
  • A real video camera, or a screen recording tool like BBST Assistant, can allow you to record the actions of an end-user as he or she works through the application.
  • Many applications allow you to go forwards and backwards. See if you can get your application into a tangled state by performing activities, backtracking with Ctrl-Z, and then performing the activities again, the same way or with variation. Then backtrack again. Then do it again, perhaps branching to a different point. Set up a monitor of some kind to help you record and observe what happens. The idea is to get the application confused, to tie itself in knots. (A sketch of a simple exerciser for this appears after the list.)
  • Ask the programmers about the built-in error checking in the program, and ask whether the error-checking routine logs its actions, perhaps controlled via a configuration switch.
  • A program can be instrumented to provide interfaces to coverage tools, or to requirement tools such as FitNesse. Not only can you get a snapshot of results from the running program, but you can do it again and again.
  • Psychologists often put the patient in a room with a one-way mirror with a camera behind it, maybe hooking some electrodes up to his skull, and interview him or merely watch his behaviour. A debugger is a kind of equivalent of this approach.
  • When there’s an important event happening, news stations send television crews to record and report. Sometimes that footage is helpful in determining what’s going on, but note that the film crew is constantly making choices about where to place the cameras and what to observe. That is, every observation is a subjective observation at some level.
  • Note that the television show or movie that you see is typically the product of extensive preparation, rehearsal, gobs of technical equipment, and sophisticated editing. If you choose to automate and record everything you will generate a lot of stuff that will end up on the cutting room floor. Who gets to do the editing?
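
To make the event-based logging idea from the list above a little more concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration: the watched decorator, the event names, and the notion of a “suspicious” return value are not taken from any particular tool. The point is simply a log that stays quiet until an occurrence that we anticipate actually happens.

```python
import logging
from functools import wraps

logging.basicConfig(filename="events.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def watched(event_name):
    """Hypothetical 'motion-activated camera': log only when the wrapped
    call raises, or returns something we have decided is suspicious."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            try:
                result = fn(*args, **kwargs)
            except Exception:
                logging.exception("%s failed with args=%r kwargs=%r",
                                  event_name, args, kwargs)
                raise
            if result is None:  # the anticipated 'occurrence' worth recording
                logging.warning("%s returned None for args=%r", event_name, args)
            return result
        return wrapper
    return decorator

@watched("lookup_customer")
def lookup_customer(customer_id, db):
    return db.get(customer_id)   # returns None when the id is missing

if __name__ == "__main__":
    db = {1: "Alice", 2: "Bob"}
    lookup_customer(1, db)   # nothing logged; no event of interest
    lookup_customer(99, db)  # logged: the 'camera' was triggered
```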
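
And here is a rough sketch of the backtrack-and-branch idea: a tiny random walk that applies actions, undoes some of them, and repeats, while recording every step so that a tangled state can be traced back to the sequence that produced it. The TextBuffer model and its actions are made up for the example; in practice you would drive the real application (or its API) and check its own invariants instead.

```python
import random

class TextBuffer:
    """A toy, made-up application model with undo; a stand-in for a real app."""
    def __init__(self):
        self.text = ""
        self.history = []

    def do(self, action, payload):
        self.history.append(self.text)          # save state for undo
        if action == "append":
            self.text += payload
        elif action == "delete_last":
            self.text = self.text[:-1]

    def undo(self):
        if self.history:
            self.text = self.history.pop()

def exercise(steps=200, seed=1):
    random.seed(seed)
    app, journal = TextBuffer(), []
    for _ in range(steps):
        if app.history and random.random() < 0.4:
            journal.append(("undo", None))
            app.undo()                           # backtrack: the Ctrl-Z analogue
        else:
            action = random.choice(["append", "delete_last"])
            payload = random.choice("abc")
            journal.append((action, payload))
            app.do(action, payload)
        # A placeholder invariant check; a real harness would assert
        # application-specific rules here.
        assert len(app.text) >= 0
    return app.text, journal                     # journal = the 'camera footage'

if __name__ == "__main__":
    final_text, journal = exercise()
    print("final state:", repr(final_text))
    print("steps recorded:", len(journal))
```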

These are just a few examples; I’m sure you’ll come up with many more (and they’re welcome here). Note that the kind of automation I’ve been talking about here involves little to no direct, continuous control over or monitoring of people, and note that most of the approaches can help to reduce dependence on elaborate prescriptive or retrospective documentation. Not all of these approaches will fit your context, but there are several here that might. The idea is not to adopt them all, but to consider one or more possibilities that might be helpful in your context.

Bangalore Workshop on Software Testing

Wednesday, May 20th, 2009

In 1999, Cem Kaner and Brian Lawrence came up with the idea of having testers and test managers meet to talk about some of the problems that seemed to bedevil all of them. This was, for its time, a radical idea for the testing community. Here’s what they said, after the second LAWST but before the third:

This is a process developed by Cem Kaner and Brian Lawrence for technical information-sharing across different groups. It’s not very new. We adapted it from some academic models and models from other industries. Our goal is to facilitate sharing of information in depth. We don’t see enough of that happening today.

You can read the rest of the report here, at Cem Kaner’s site.

These conferences were to be different. They were to be small, fewer than 25 people, aggressively non-commercial, based on experience reports and open, facilitated discussion. The ideal was to inspire further research and to publish papers.

Several LAWSTs were held, and then others began to happen, including (a very partial list here) the Software Test Managers’ Roundtables, the Workshops on Performance and Reliability, the Workshops on Teaching Software Testing, the Workshops on Heuristic and Exploratory Techniques, Software Testing in Financial Services… Having attended a LAWST, James Lyndsay started a LAWST-inspired meeting called the London Exploratory Workshop on Testing (LEWT) that has a less formal structure (as far as I know, it remains the only LAWST-inspired meeting to be held in the back of a pub, although WHET IV moved to the upstairs of one for its last day in 2007). Fiona Charles and I started the Toronto Workshops on Software Testing in 2005, more formal than LEWT but less formal than LAWST.

Cem and Brian’s gift to the community continues to expand all over the place. Back in 2007, Pradeep Soundararajan took me to a meeting of a group of testers that he had arranged in Koramangala, Bangalore. I blogged about that here, encouraging Pradeep and the rest of the group to build a community of inspired testers.

My observation is that when you’re waiting for something good to happen (all over the world, but especially in India), nothing at all seems to happen for a while… and then all of a sudden, everything happens.

In this case, the Bangalore Workshop on Software Testing happened, with the help of the Edista Testing Institute and Test Republic. And a number of skilled Indian testers prepared presentations, delivered them, and questioned each other. And Pradeep wrote about it all here.

By Pradeep’s account, a ton of ideas emerged in the discussion. You can read about those in the report. The overall theme that I see when I read the report is a group of testers using ideas that they’ve learned from elsewhere, and then (as James Bach describes it) inventing testing for themselves. That is, using the ideas that they’ve read or heard about, trying them in context, seeing if they work, tuning the stuff that does, adapting the stuff that might, and rejecting the stuff that doesn’t. And above all, questioning all the time. All in all, a terrific report on what must have been a terrific meeting. I’m now deeply envious of everyone who was there. Bravo, each and every one of you. Bravo, Edista and Test Republic. Bravo, Pradeep—and Bravo, India!

Do you want to attend a gathering like this, scaled up to a more formal conference setting, but one that preserves the focus on experience reports, discussion, and learning? Do you want to go to a testing conference and confer? Do you want to explore and investigate what other testers are doing, rather than hearing a canned talk with questions allowed only if there’s time? If the answer to any one of those questions is Yes, then I give you my highest recommendation: go to CAST, the Conference for the Association for Software Testing, in Colorado Springs, CO, July 13-16.

Posted: Presentation Notes from STAR East

Wednesday, May 20th, 2009

At the STAR East conference, produced by Software Quality Engineering in Orlando, FL, I gave a keynote address on Testing and Noticing. I also gave a half-day experiential workshop on Difficult Testing Questions and How to Answer Them, and a track session called Insource or Outsource Testing: Understanding Your Context.

A number of people have asked about the source for the video that I showed. It can now be revealed here: http://www.youtube.com/watch?v=ubNF9QNEQLA For reasons that I hope are obvious to those who were in attendance, I declined to provide this before the talk. :)

All of my talks since March 2005 are listed at http://www.developsense.com/past.html, and I’ve provided the notes for a fair number of them. If something is missing that you’d like to see, drop me a line and let me know.

How Far Back Does This Go?

Tuesday, May 19th, 2009

For almost as long as I’ve been a tester, with occasional lapses into process enthusiasm, I’ve been questioning the value of test automation as a presumed good, especially when the automation is deployed against the highest levels of the application. Automation is a tool, and there is great value in tools. But with that value comes risk.

The Agile Manifesto, properly in my view, emphasizes individuals and interactions over processes and tools, and it emphasizes working software over comprehensive documentation. The Manifesto notes that “while there is value in the items on the right, we value the items on the left more.”

McLuhan had some remarkable observations on tools, which he considered a subset of media. I wrote about the value of McLuhan thinking for testers here. McLuhan famously identified writing—in particular the phonetic alphabet—as a technology. In his Laws of Media, he points out that one of the effects of a medium is to extend or enhance or enable or accelerate or intensify some human capability in some way. Another effect occurs when the medium is stretched or “overheated” beyond its original or intended capacity; it reverses into the opposite of its enabling or extending effect.

I was introduced to McLuhan’s work largely through a CBC Radio program called Ideas. In 1988, David Cayley covered a conference on Orality and Literacy, co-sponsored by the McLuhan Program at the University of Toronto and the Toronto Semiotics Circle (for a description of the program and of the conference, go here, and then search for “literacy”). The conference was set up to question the idea of literacy as the centre of education—not to reject it, but to question it.

The motivation behind the questioning was to understand better the role of literacy in education and in the world in general. Some scholars pointed out that literacy has to be seen in its human context, as an extension of oral discourse, because it is as listeners and speakers that we evolved, not as readers and writers. As David Pattanayak, the director of the Central Institute of Indian Languages at the University of Mysore put it, “What I am worried about is that there are 800 million illiterates in the world, and for those 800 million illiterates, there is nobody to speak. We are speaking as though literacy is responsible for everything—for family welfare, GNP increase, for modernization, for all kinds of things, but I don’t think that is correct…The whole question is that there are illiterates and there are literates, and we should be looking for interaction among the illiterates and the literates, rather than trying to prove the superiority of one over the other.” Why? Because literacy is one means to an end; it helps with many things, but it certainly does not guarantee the accomplishment of our human goals. As he later goes on to say, “Literacy without social concern is meaningless.”

To me, this has a direct connection to our business and to our fascination with written and/or automated tests. For many years, we’ve tried to improve testing by trying as hard as possible to remove the messy human bits from it. James Bach describes this in his essay in The Gift of Time. We’ll call it the Chapel Hill approach to software testing; papers on it are collected in Hetzel’s Program Test Methods (something much closer to being the first book on software testing than Myers’ The Art of Software Testing, by the way). The Chapel Hill approach to reducing testing’s problems with those unreliable humans, says Hetzel, is to lean hard on media, improving “written specification methods” and using “unambiguous testable specification languages”, rather than treating testing as an open-ended investigative process.

A bunch of us, led by the work of Jerry Weinberg, Cem Kaner, and James, have been questioning the value of putting media at the centre of testing for quite some time now. Questioning the value of written artifacts isn’t exactly new; it goes back still farther than that. How far? How about the Greek philosophers—Plato, and Socrates?

There’s a version of Plato’s dialogue Phaedrus online. The part that most interests me starts with the paragraph that you can find by bringing up the link and using your browser’s search function to look for “Theuth”. Read that paragraph and a little further, and you come across this, when Socrates talks of written-down speeches:

You would imagine that they had intelligence, but if you want to know anything and put a question to one of them, the speaker always gives one unvarying answer. And when they have been once written down they are tumbled about anywhere among those who may or may not understand them, and know not to whom they should reply, to whom not…

Wait, it gets better. I recommend reading this bit slowly, savouring it:

Until a man knows the truth of the several particulars of which he is writing or speaking, and is able to define them as they are, and having defined them again to divide them until they can be no longer divided, and until in like manner he is able to discern the nature of the soul, and discover the different modes of discourse which are adapted to different natures, and to arrange and dispose them in such a way that the simple form of speech may be addressed to the simpler nature, and the complex and composite to the more complex nature-until he has accomplished all this, he will be unable to handle arguments according to rules of art, as far as their nature allows them to be subjected to art, either for the purpose of teaching or persuading;-such is the view which is implied in the whole preceding argument.

My emphasis—context-driven thinking! Wait; it gets better yet. Really slowly, now:

But he who thinks that in the written word there is necessarily much which is not serious, and that neither poetry nor prose, spoken or written, is of any great value, if, like the compositions of the rhapsodes, they are only recited in order to be believed, and not with any view to criticism or instruction; and who thinks that even the best of writings are but a reminiscence of what we know, and that only in principles of justice and goodness and nobility taught and communicated orally for the sake of instruction and graven in the soul, which is the true way of writing, is there clearness and perfection and seriousness, and that such principles are a man’s own and his legitimate offspring;-being, in the first place, the word which he finds in his own bosom; secondly, the brethren and descendants and relations of his others;-and who cares for them and no others-this is the right sort of man; and you and I, Phaedrus, would pray that we may become like him. (My emphasis added all over the place, there.)

What happens when we apply all this to testing? To me, this says

  • that conversation, rather than documentation, is central to the work that we do;
  • that notions of correctness are pointless unless they’re based on value;
  • that we need to study and practice testing, or anything else that we wish to understand;
  • that we must be cautious and think critically;
  • that the focus on nomenclature and unquestioned bodies of knowledge proffered by the ISTQB and other certifiers is foolish; and
  • that we should aspire to the values that Socrates proposes.

To me it also says that if we want to be great testers, it would be a good idea to study philosophy, focusing (as James suggests; James Bach, not William James, although he’d probably agree) on ontology (how we conceive of the world, the nature of being) and epistemology (how we know what we know).

And, yes, there is irony too. Here’s Plato, griping through Socrates about how dangerous writing is, and he’s written down this dialogue. What does that tell us? To me, it suggests that everything that we have to say about what we think and what we do is based, not on absolute principles of truth and certainty, but on heuristics and skepticism.

What does this say to you?

Automation and Coverage

Monday, May 18th, 2009

If you don’t read the forums on the Software Testing Club, I’d recommend that you consider it. In my view, the STC is one of the more thoughtful venues for conversation about testing. (I’d recommend subscribing to the Software Testing mailing list, too.)

A correspondent recently posted a request for help in recommending an automation approach. I answered something like what follows:

Need to get a code coverage of at least 70% and as it stands, Selenium is probably only covering about 20% as it just tests basic user flows.

I’m going to try to help in a different way, by encouraging you to question some assumptions.

What does “100% code coverage” mean to you? 100% of the code your programmers write? Or 100% of the code that is invoked by the application? Remember, much of the code that your customers will use is in the platform—the code on which your application depends, and that in turn depends on the browser(s) and the operating system(s) and the application framework(s) and the plugin(s) and the version(s) of each and the…

But more importantly, what does it mean to cover the code? To execute each line? Each branch? Each branch, with each condition that might drive code down that branch? Have we covered the code if we run it and it doesn’t crash? If it doesn’t cause a problem that we observe right away? If it doesn’t cause a problem that some automaton can observe? If there’s a problem that our test automation hasn’t been programmed to detect, how are we going to find out about it?
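
As a tiny illustration of why these distinctions matter, consider the hypothetical function and test below (both invented for this example). The single test executes every statement in the function, so a statement-coverage tool would report 100%, yet it never exercises the false side of the branch or the individual conditions, so it cannot reveal that the rule was coded with the wrong operator.

```python
def shipping_cost(weight_kg, express=False):
    """Hypothetical: the (made-up) spec says a surcharge applies to express
    orders OR to parcels over 20 kg, but the code says AND."""
    cost = 5.0 + 2.0 * weight_kg
    if express and weight_kg > 20:   # bug: should be "or" per the spec
        cost += 15.0
    return cost

def test_heavy_express_parcel():
    # This one test executes every statement above (100% statement coverage)
    # and passes. It never tries express with a light parcel, or a heavy
    # non-express parcel, so the wrong operator goes unnoticed.
    assert shipping_cost(25, express=True) == 5.0 + 2.0 * 25 + 15.0
```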

Let’s use an analogy. Suppose that you worked for a police force, and you wanted to monitor what was going on in the city by setting up closed-circuit TV cameras. (This is being done in many cities in the United Kingdom these days.) What would it mean to cover everything with CCTV cameras? Would you cover the entire city with cameras? Every street? If you did, would you be able to see inside buildings? Down every alleyway? Inside every rubbish bin? How expensive would this become? How long would it take to install the equipment?

What sorts of problems would you use the cameras to detect? Murder? Assault? Drunk and disorderly conduct? Embezzlement? Pickpockets? Littering? Parking violations?

So suppose you sorted out these problems and you went to the effort of setting up a bunch of cameras. How would the cameras know the difference between bad behaviour and behaviour that wasn’t so bad? They wouldn’t; they’d need human monitors. How many? Would the humans get bored or inattentive watching a bunch of non-events? Would it be more helpful to have some of those humans outside, on the street, actively helping to keep the peace and stopping trouble before it happens?

Now, these questions may sound ridiculous, and to some degree they are. But here’s an important question: at what point would they stop being ridiculous? Clearly we couldn’t obtain 100% monitoring of what was going on in the city, so maybe we’d settle for 70%. But 70% of what?

How do we make sure that crime isn’t bursting out all over the place? What kinds of problems aren’t observable by cameras? Are there senses other than vision that humans might bring to bear in identifying problems? Might we find out about certain kinds of problems earlier if we could smell smoke or sewage? Might we be able to identify criminals if we could listen in on their conversations?

But maybe all this monitoring isn’t as helpful as it might be. A city with little crime isn’t the result of total vigilance by the police; it’s the result of various things—many of them little things, many of them social conventions—such that there’s little need to monitor or investigate. When people are well-fed and well-housed and well-educated, when they all have valuable jobs to do, when they’re interdependent and feel responsible to one another, the crime rate goes down. When issues are debated and decided in public, based on consensus, people recognize the value in the rules and conventions, and generally feel less inclined to try to get around them. People tend not to cheat when they feel they’re getting a fair deal. Security comes with freedom, responsibility, visibility, and trust. That’s not infallibly true, but recognizing it can save a lot of effort.

Similarly, in a software development project, some practices are going to make it possible to reduce the emphasis on automation coverage at the GUI level. If your programmers are unit testing their code thoroughly, certain kinds of high-level code coverage aren’t going to be so necessary; in fact, they’ll likely be better tested than if they’re tested via the GUI, since risks and questions about the code can be targeted specifically. If testers are performing tests (interactively or getting help from automation) at some level below the GUI, there’s code coverage happening there too. If testers are operating the product as regular users do, the testers are obtaining some of the kind of coverage that users need. If testers are operating the product according to extreme, unusual, harsh, complex, challenging scenarios, then they’re getting even more of the coverage that users will need.
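
To sketch what targeting risks specifically can look like below the GUI, here is a hypothetical unit-level check; the function, the values, and the use of pytest are assumptions for the example, not anyone’s real product. A question about rounding at awkward boundaries can be put to the function directly, in milliseconds, rather than being driven through login screens and menus.

```python
import pytest  # assuming pytest is the team's unit-test runner

def apply_discount(price_cents, percent):
    """Hypothetical production function whose rounding we want to question."""
    return round(price_cents * (100 - percent) / 100)

@pytest.mark.parametrize("price_cents,percent,expected", [
    (1000, 10, 900),   # straightforward case
    (1, 50, 0),        # Python's round() goes to the even neighbour here
    (999, 33, 669),    # awkward remainder
    (0, 100, 0),       # boundary: free item, full discount
])
def test_apply_discount_rounding(price_cents, percent, expected):
    assert apply_discount(price_cents, percent) == expected
```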

The thing that usually gets left out of these discussions is that coverage is multi-dimensional. For testers, the issue isn’t just code coverage (statement coverage, or branch coverage, or branch-condition coverage, or…). The issue is test coverage, testing for all kinds of dimensions of value, and code coverage is a manifestation of that. Neither test automation nor code coverage tools will tell you anything about whether a product follows the user’s workflow, or whether an error message was informative and clear, or whether the product has a useful logging system. Neither test automation nor code coverage tools will report on missing functionality, nor will they point out that your product doesn’t support some relevant standard.

It’s not that these tools are unhelpful; on the contrary, they can be very helpful. It’s just that they’re not the be-all and end-all of testing. Test automation (which we define as any use of tools to support testing) makes many impossible things possible, and makes many hard things easy. One of the most useful things that code coverage does do: it points us to places where we haven’t run the code at all, and therefore haven’t tested. That might be very interesting.
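
If you want to see that last point for yourself, the coverage.py package is one tool (among several) that will report the lines a run never touched. Here is a minimal, self-contained sketch using its Python API; the toy module it writes to disk is, of course, made up for the example.

```python
# A minimal sketch using the coverage.py API (pip install coverage).
# The toy module written below is hypothetical; substitute your own
# code and whatever tests or scenarios you actually run.
import os
import sys
import textwrap
import coverage

with open("toy.py", "w") as f:
    f.write(textwrap.dedent("""
        def classify(n):
            if n < 0:
                return "negative"
            if n == 0:
                return "zero"
            return "positive"
    """))
sys.path.insert(0, os.getcwd())     # make the toy module importable

cov = coverage.Coverage()
cov.start()
import toy                          # imported under measurement
toy.classify(5)                     # we only ever exercise the "positive" path
cov.stop()
cov.save()

# The report lists, per measured file, the line numbers that were never
# executed -- for toy.py, the "negative" and "zero" branches we haven't
# run at all.
cov.report(show_missing=True)
```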

I’ve written a number of articles on the distinction between code coverage and test coverage. Two of them are here:

http://www.developsense.com/articles/2008-10-GotYouCovered.pdf
http://www.developsense.com/articles/2008-11-CoverOrDiscover.pdf

In addition, Brian Marick (a long time ago, in tester years) wrote an article on how to misuse code coverage tools. That’s here: http://www.exampler.com/testing-com/writings/coverage.pdf.

To London, to London to visit… some testers

Sunday, May 17th, 2009

I’ll be in London (the U.K., not London Ontario), June 17 2009, to present a keynote, “Two Futures of Software Testing” to the British Computer Society (BCS) Specialist Group in Software Testing (SIGIST; they must have bought a vowel). In the talk, I project a dark future for testing, in which the goal is Making Sure That Tests Pass, and in which processes and tools rule the roost—chillingly reminiscent of what many testers already see every day. I also project a bright future for testing, in which the goal is learning about the product or service we’re testing, and providing valuable information to management, and in which the central figure is the mindset and the skill set of the individual tester, where everything else is support for that.

The night before, June 16, I’ll be chatting about testing, sharing stories, and sampling real ale in some pub (location as yet to be determined) with Rob Lambert and a crew of other testers. You’re most welcome to join us; contact me and I’ll send you the details as they become available.

I’ll be back in London again July 1-3 2009 to present a relatively rare three-day public session of Rapid Software Testing. The class will be held at the Crowne Plaza Hotel St. James, and it’s presented under the kind auspices of ElectroMind. Please contact Stephen Allott (stephen.allott@electromind.com) for information on pricing (package deals available) and registration. (Note that the June 8-10 event in Southampton, sponsored by iMeta, has been rescheduled.)

Not long after that, July 13-16, I’ll be at CAST 2009 in Colorado Springs, Colorado. There are a lot of very good conferences around the world, but this one is special; it’s by testers, for testers, and the focus is on conferring and learning from one another. Jerry Weinberg, Cem Kaner, James Bach, Scott Barber, Fiona Charles, and Jonathan Koomey (the author of Turning Numbers Into Knowledge) will be presenting. Not to be missed. If you’re going, please spread the word among colleagues. If you’re not going, I’m indeed sorry… but please spread the word anyway, would you?

An Experience Report from India

Thursday, May 14th, 2009

I don’t know how this slipped in under my radar, but it did until a couple of days ago.

Sharath Byregowda is a software tester in Bangalore, and he provides a marvelous experience report here. As I read the report, I’m delighted on a number of levels.

First, it’s India! India tends to be a very conservative place when it comes to testing, with many test organizations preferring scripted, document-heavy, bureaucratic and clerical approaches. Not that it would have been their idea, necessarily. A lot of Indian testers are smarter than that, but many test organizations there find themselves obliged to follow the testing missions set by companies here in the West.

If finding important problems quickly is the goal, those approaches don’t work very well. They focus on repetition, confirmation, validation, and verification. Those things are important, to be sure, but one would think that an organization that was aware of potential problems would do everything in its power to thwart those problems before the code left the shop. Why lengthen the feedback loop? If I were running a development group, I would try to make sure that my outsource lab would be in a position to tell me only things that would surprise me.

Second, Sharath is a devotee of the Rapid Testing approach. Sharath took a Rapid Software Testing course through the Edista Testing Institute in Bangalore. The course was presented by Pradeep Soundararajan, who is in turn a student of mine and of James Bach’s.

Third, Sharath is a graduate of the Black Box Software Testing Foundations course. That course was co-authored by Cem Kaner and James Bach. It’s under continuous development, and each session is strongly influenced by the interaction between participants themselves. That’s remarkable since the course is delivered entirely online. (It’s available free to members of the Association for Software Testing.)

Fourth, I’m delighted that Sharath is blogging about his actual experiences working with actual clients. That’s important. Our clients (or the testers who work for them) are sometimes reluctant to have people go public because… well, because Rapid Testing finds a lot of problems quickly, and no one really likes talking about problems.

Michelle Smith, another Rapid Testing student, has also provided a great experience report on how she trains testers. You can read it here, and you can read James Bach’s response here.

Well done, Sharath! Well done, Michelle!

Active Learning at Conferences

Saturday, May 9th, 2009

I was at STAR East this past week, giving a tutorial, a track session, and a keynote. I dropped in on a few of the other sessions, but at breaks I kept finding myself engaged in conversation with individuals and small groups, such that I often didn’t make it to the next session.

At STAR, like many conferences, the track presentations tend to be focused on someone’s proposed solution to some problem. Sometimes that solution is highly specific to a given problem that isn’t entirely relevant to the audience; sometimes it’s focused on a particular tool or process idea. The standard format for a track presentation is for a speaker to speak for an hour with at most a couple of minutes for questions at the very end. Typically someone is speaking because he has energy for a particular topic. So, with the best of intentions, he puts a lot of material into the talk such that there’s a morning’s worth of stuff to cover in an hour. Trust me: I know all about this, and alas my victi…I mean, my audiences do too.

So over the last several years, I’ve been trying to learn things to change that, and two annual conferences have helped to show me the way. The first, starting in 2002, was the annual AYE Conference, at which PowerPoint is banned and experiential workshops rule. The second is the annual Conference for the Association for Software Testing, which I attended in 2007 and chaired in 2008. For me, the key idea from which everything else follows is to transform the audience into the participants.

There are two basic types of sessions at CAST. One is the experiential workshop, which typically begins with an exercise, puzzle, or game that is intended to model some aspect of some problem that we all face. At the end of the exercise, the participants discuss what happened and what they’ve learned. Sometimes there’s another iteration or stage of the exercise; sometimes the discussion continues until time or energy is up. This is almost always far more memorable, more sticky, than someone’s story. The lessons learned are direct and personal. Instead of receiving a lesson or hearing about an experience, we’ve lived through one.

The other kind of session at CAST is the experience report. A speaker is given a specifically limited time to tell her story. Participants may ask clarifying questions (“What does CRPX stand for?” “I’m sorry, when you said ‘we finished in two’, did you mean two days or two weeks or two iterations?”). Other than that, participants stay quiet so that the speaker can tell her story uninterrupted. Then at the end of the talk, there’s a discussion in which all of the participants have the chance to question, contextualize, and respond to the presentation. Conversation is moderated by a trained facilitator whose job it is to direct traffic, ensure that everyone gets a chance to be heard, and to make sure that the conversation isn’t dominated by a handful of people. Being an AST facilitator can be a challenging job, keeping order while co-ordinating the threads of the discussion and the queues of questions or comments, often with energetic people in the room.

And the energy is contagious. Participants and speakers alike are mandated to challenge tropes with their own experience, to identify dimensions of context that frame their experience, and to teach and learn from each other. When a session’s time is up, if there’s energy for a particular topic, the conversation continues and we change the break time, move to another room reserved for the purpose, or break out into groups for lunches or hallway conversations. People get engaged in the conversations; they discover new colleagues.

This presentation-and-discussion format is a scaled-up version of the LAWST-style workshops, a set of peer conferences which were started by Cem Kaner and Brian Lawrence in 1999 for the purpose of getting skilled testers in conversation with one another to address a specific question about software testing. At LAWST-style workshops, the typical attendance is 20 people or so. When the Association for Software Testing held its first conference in 2006, many people wondered whether the format would scale up to rooms of 100 people or more. Thanks in part to the lessons learned in the peer conferences, and also thanks to the skill of the facilitators, there have been many vigorous discussions—yet everyone who wants to be heard can be heard, even for the keynote presentations.

This year CAST will happen in Colorado Springs, Colorado, July 13-16. There are some very impressive speakers and tutorial leaders again this year, including Cem Kaner, Jerry Weinberg, James Bach, and Jonathan Koomey. It’s a conference by testers, for testers. I’ll have more to say about some of the speakers in coming weeks, but for now, follow the link and check it out.