Blog Posts for the ‘Conferences’ Category

Rising Against the Rent-Seekers

Monday, August 25th, 2014

At CAST 2014, a quiet, modest, thoughtful, and very experienced man named James Christie gave a talk called “Standards: Promoting Quality or Restricting Competition?”. The talk followed on from his tutorial at EuroSTAR 2013 on working with auditors—James is a former auditor himself—and from his blogs on software standards over the years.

James’ talk introduced to our community the term rent-seeking. Rent-seeking is the act of using political means—the exercise of power—to obtain wealth without creating wealth; see http://www.econlib.org/library/Enc/RentSeeking.html and http://en.wikipedia.org/wiki/Rent-seeking. One form of rent-seeking is using regulations or standards in order to create or manipulate a market for consulting, training, and certification.

James’ CAST presentation galvanized several people in attendance to respond to ISO Standard 29119, the most recent rent-seeking scheme by a very persistent group of certificationists and standards promoters. Since the ISO standard on standards requires—at least in theory—consensus from industry experts, some people proposed a petition to demonstrate opposition and the absence of consensus amongst skilled testers. I have signed this petition, and I urge you to read it, and, if you agree, to sign it too.

Subsequently, a publication named Professional Tester published—under an anonymous byline—a post about the petition, with the provocative title “Book burners threaten (old) new testing standard”. Presumably such (literally) inflammatory language was meant as clickbait. Ordinarily such things would do little to foster thoughtful discussion about the issues, but this one prompted some quite thoughtful reactions. Here’s one example; here’s another. Meanwhile, if the author wishes to characterize me as a book burner, here are (selected) contents of my library relevant to software testing. Even the lamest testing books (and some are mighty lame) have yet to be incinerated.

In the body text, the anonymous author mischaracterises the petition and its proponents, of which I am one. “Their objection,” (s)he says, “is that not everyone will agree with what the standard says: on that criterion nothing would ever be published.” I might not agree with what the standard says, but that’s mostly a side issue for the purposes of this post. I disagree with what the authors of the standard attempt to do with it.

1) To prescribe expensive, time-consuming, and wasteful focus on bloated process models and excessive documentation. My concern here is that organizations and institutions will engage in goal displacement: expending money, time, and resources on demonstrating compliance with the standard, rather than on actually testing their products and services. Any kind of work presents an opportunity cost: most of the time, doing one thing prevents you from doing something else. Every minute that a tester spends on wasteful documentation is a minute that the tester cannot spend on the overarching mission of testing: learning about the product, with an emphasis on discovering important problems that threaten value or safety, so that our clients can make informed decisions about problems and risks.

I am not objecting here to documentation, as the calumny from Professional Tester suggests. I am objecting to excessive and wasteful documentation. Ironically, the standard itself provides an example: the current version of ISO 29119-1 runs to 64 pages; 29119-2 has 68 pages; and 29119-3 has 138 pages. If those pages follow the pattern of earlier drafts, or of most other ISO documents, you have a long, pointless, and sleep-inducing read ahead of you. Want a summary model of the testing process? Try this example of what the rent-seekers propose as their model of testing work. Note the model’s similarity to that of an (overly complex and poorly architected) computer program.

2) To set up an unnecessary market for training, certification, and consultancy in interpreting and applying the standard. The primary tactic here is to instill the fear of being de-certified. We’ve been here before, as shown in this post from Tom DeMarco (date uncertain, but it seems to have been written prior to 2000).

Rent-seeking is of the essence, and we’ve been here before in another sense: this was one of the key goals of the promulgators of the ISEB and ISTQB. In the image, they’ve saved the best for last.

The well-informed reader will note that the list of organizations behind those schemes and the members of the ISO 29119 international working group look strikingly similar.

If the working group happens to produce a massive and opaque set of documents, and you’re in an environment that claims conformance to the 29119 standards, and you want to get some actual testing work done, you’ll probably find it helpful to hire a consultant to help you understand them, or to help defend you from charges that you were not following the standard. Maybe you’ll want training and certification in interpreting the standard—services that the authors’ consultancies are primed to offer, with extra credibility because they wrote the standards! Good thing there are no ethical dilemmas around all of this.

3) To use the ISO’s standards development process to help suppress dissent. If you want to be on the international working group, it’s a commitment to six days of non-revenue work, somewhere in the world, twice a year. The ISO/IEC does not pay for travel expenses. Where have international working group meetings been held? According to the http://softwaretestingstandard.org/ Web site, meetings seem to have been held in Seoul, South Korea (2008); Hyderabad, India (2009); Niigata, Japan (2010); Mumbai, India (2011); Seoul, South Korea (2012); Wellington, New Zealand (2013). Ask yourself these questions:

  • How many independent testers or testing consultants from Europe or North America have that kind of travel budget?

  • What kinds of consultants might be more likely to obtain funding for this kind of travel?

  • Who benefits from the creation of a standard whose opacity demands a consultant to interpret or to certify?

Meanwhile, if you join one of the local working groups, there are two ways that the group arrives at consensus.

  • By reaching broad agreement on the content. (Consensus, by the way, does not mean unanimity—that everyone agrees with the content. It would be closer to say that in a consensus-based decision-making process, everyone agrees that they can live with the content.) But, if you can’t get to that, there’s another strategy.

  • By attrition. If your interest is in promulgating an unwieldy and opaque standard, there will probably be objectors. When there are, wait them out until they get frustrated enough to leave the decision-making process. Alan Richardson describes his experience with ISEB in this way.

In light of that, ask yourself these questions:

  • How many independent consultants have the time and energy to attend local working groups, often during otherwise billable hours?

  • What kinds of consultants might be more likely to support attendance at local working groups?

  • Who benefits from the creation of a standard that needs a consultant to interpret or to certify?

4) To undermine the role of skill in testing, and the reputations of people who discuss and promote it. “The real reason the book burners want to suppress it is that they don’t want there to be any standards at all,” says the polemicist from Professional Tester. I do want there to be standards for widgets and for communication protocols, but not for complex, cognitive, context-sensitive intellectual work. There should be standards for designed things that are intended to work together, but I’m not at all sure there should be mandated standards for how to do design. S/he goes on: “Effective, generic, documented systematic testing processes and methods impact their ability to depict testing as a mystic art and themselves as its gurus.” Far from treating testing as a mystic art, appealing to things like “intuition” and “experience-based techniques”, my community has been trying to get to the heart of testing skills, flexible and responsive coverage reporting, tacit and explicit knowledge, and the premises of the way we do testing. I’ve seen no such effort to dig deeper into these subjects—and to demystify them—from the rent-seekers.

Unlike the anonymous author at Professional Tester, I am willing to stand behind my work, my opinions, and my reputation by signing my name and encouraging comments. Feel free.

—Michael B.

EuroSTAR Trip Report, Part 2

Thursday, December 9th, 2010

In this post, I’ll highlight a few more of the people that I met at EuroSTAR 2010. Please note that because there were so many people that I’d like to mention, there’s still more to come in subsequent posts. Also, I’ve included tons of links to these people and their work. Please use those links!

Shmuel Gershon (@sgershon on Twitter) was in the Test Lab a lot, only one of the reasons that he won the Test Lab Rats’ informal yet prestigious “Most Enthusiastic Tester” award and T-shirt. Rapid Reporter, Shmuel’s tool for taking notes in exploratory testing sessions, was prominent too. Shmuel used a number of strategies for meeting other people at the conference; he announced a pizza-and-drinks session for old and new members of the Vanguard community (also known as the Rebel Alliance or, for this occasion, Danish Alliance), and he used a cute strategy for introducing himself. On a personal note, Shmuel also helped me enormously by agreeing on the spur of the moment to act as my interviewer for an upcoming EuroSTAR “Take 10” video spot. After the conference, Shmuel and I spent a pleasant afternoon at Copenhagen’s Experimentarium, browsing the exhibits, chatting, discovering bugs in the displays, and exploring patterns of exploratory testing. I was also pleased to beat him in a virtual bicycle race. Fortunately for me, my bike was the one with the seat. And the doctors say I’m recovering well from the experience.

Teemu Vesela (@teemuvesela on Twitter) received the second award from the Lab Rats: “Most Evil Tester”. He established his reputation by asking for—and getting—the Lab’s server and router passwords from the Lab Rats. His claim was that he needed that information to see if he could exploit the applications that were installed in the Test Lab. But maybe, just maybe, he was testing to see if he could obtain the trust of the network administrators, just as one would try to do in a real security penetration test. Teemu exuberantly investigated several potential vulnerabilities, found some cool bugs, and enthusiastically told concise little stories about weaknesses in system defenses. And now I’ve got someone new to talk to when I want to learn quickly about potential security risks.

Henrik Andersson (Twitter: henkeandresson) is a long-time student and advocate of the practice of exploratory approaches to testing, especially within the Swedish testing community. His success has been all the more remarkable considering that, for many years, he worked for an organization that advocates strongly scripted approaches. Henrik gave an excellent talk on his experience introducing exploratory testing extremely rapidly at a large corporation that was, in general, resistant to the idea. His focus was on the role of champions—passionate people who will support and sustain excellent work, philosophically much like those in the Vanguard. Henrik described his approach: little experiments followed by intensive debriefs; granting people the freedom and responsibility to design and evaluate their work; emphasizing the roles of discovery, learning, and feelings. Within the constraints, he was quite successful, but once again incomprehending middle management provided only tepid support.  Thanks to the ubiquitous Markus Gärtner (of whom more quite soon), here’s a detailed account of Henrik’s presentation.

Fredrik Rydberg—someone whom I didn’t know before and (alas!) did not meet in person—gave a superb experience report titled “Can Exploratory Testing Save Lives?” on using exploratory approaches in a regulated, medical context.  His conclusion was an emphatic Yes.  There’s a lot of nonsense in our craft that suggests that you can’t or shouldn’t do exploratory testing in a mission- or safety-critical environment. In fact, as Fredrik made clear, it’s exactly the opposite: if you want to reduce risk and save lives, you must take an exploratory approach to develop tests, to incorporate new information, to continuously re-evaluate your work, and to reveal previously unrecognized risks. Fredrik aptly pointed out that curiosity, patient communication, and networking skill are crucial to a successful exploratory approach; indeed, they’re important to collaborative work of any kind. I hope to meet Fredrik and chat with him more in the future. We need more stories from him, and more stories like his.

Carsten Feilberg (@carsten_f on Twitter) blew me away at the CAST 2008 conference, where he provided a mischievous foreign element during a simulation in Jerry Weinberg’s Tester’s Communication Clinic. His impishness appealed to me, but there’s far more to Carsten than that. At EuroSTAR 2010, he gave a fabulous talk on Session Based Test Management (SBTM). One of the biggest takeaways was the simplest, yet psychologically the most powerful: he took the subtle step of renaming the practice to “Managing Testing Based on Sessions” (MTBS), in order to emphasize to managers the significance of the management aspect. This allowed him to obtain rapid buy-in from skeptical managers at his organization. That simple trick reminded me of Thomas Huxley’s wonderful observation on Charles Darwin’s On the Origin of Species: “How stupid of us all not to have thought of that.” He also provided an elegant visual metaphor for the development process. He started by showing a picture of a cartoon elephant (“the requirements”)—smooth, uniform, clear lines.  Then, over a part of the cartoon elephant, he superimposed the kind of view we’d see after testing: a photograph of the same part of a real elephant—wrinkled, lumpy, hairy. It was a great image, and a great visual explanation. He gradually revealed the bits and pieces of the elephant—and noted that the real elephant had tusks, where the cartoon elephant had none. Exploring the actual product allows us to see things that we wouldn’t see otherwise.

Carsten’s experience report underscored the fact that SBTM/MTBS makes exploratory testing more legible—more readable, more understandable—for managers who might otherwise see it as undisciplined, unstructured, or incomprehensible. I’ve written a few blog posts on some approaches that might help clear things up here, here, and here particularly.  Yet if you want advice on how to persuade management to recognize and adopt exploratory testing in your organization, it would also be a really good idea to contact Carsten.  Alas, his presentation slides are not yet online, but Markus Gärtner’s report on Carsten’s talk is.

Ah, Markus Gärtner, another of those fellows who was everywhere, all the time, and he has the blog posts to prove it. In the Test Lab, he was a vigorous participant, asking questions, probing for ideas, and sharing insights. At the conference presentations, Markus was like an old-fashioned on-the-scene radio reporter, “blogcasting” live and typically posting his transcription of the presentation a few seconds into the question-and-answer period. He also gave a presentation on self-education for testers, which for the Vanguard means not only study, but actively practicing testing. Apropos of that, Markus was one of the founders of the European chapter of Weekend Testing. And apropos of that…

Weekend Testing was started in Bangalore, India, by Parimala Shankaraiah (@curioustester on Twitter), Manoj Nair (@manoj_mv), Sharath Byregowda (@sharathb on Twitter), and Ajay Balamurugadas (@ajay184f on Twitter). Ajay was at EuroSTAR to tell the story of how the movement began and how it has developed since then. Inspired by Pradeep Soundararajan (@testertested on Twitter), and soon assisted by Santosh Tuppad (@santhoshst on Twitter), the founders decided to take responsibility for their own education and training. On August 15, 2009, they began meeting online on Saturday afternoons to practice testing, to challenge each other, and to help each other develop skills. Sessions were structured as an hour of testing (typically in pairs) and an hour of group discussion and sharing afterwards. Side effects quickly followed: their reputations blossomed; several open source projects benefitted from their testing; and the larger community became engaged. Weekend Testing quickly sprouted chapters in Mumbai, Chennai, Europe, Australia/New Zealand, and (finally!) North America. Ambitious and eager testers have come out of the woodwork, and more senior colleagues have facilitated sessions. The great conversation of skilled testing goes on, and the Vanguard is growing! I’ll mention more of its people in the next post.

EuroSTAR Trip Report, Part 1

Tuesday, December 7th, 2010

Way way back in 2003, Bret Pettichord first published a paper on schools of software testing. The paper was controversial. Some people found it helpful to identify different schools of thought, for the purpose of understanding ways in which reasonable people might disagree reasonably.  Others found even the mention of disagreements within the field to be distasteful and divisive.  Some people identified with particular schools. Others, sometimes indignantly, refused to be pigeonholed. Yet it’s clear that in any field of endeavour, including testing, there are always communities of thought and practice. Sometimes those communities are isolated; sometimes there are trading zones between them.

No matter how one might label the communities, two broad categories were apparent to me at this year’s EuroSTAR conference. One group seems to focus on testing in terms of confirmation, verification, validation, quality assurance; getting the right answers to prescribed questions; checking. This group’s approach includes a strong focus on artifacts—requirement documents, detailed test plans, and scripted test cases. This group (let’s call it the Traditionalists) also seems to focus on processes and tools, on negotiated contracts, and on following plans—items on the right side of the Agile Manifesto. I don’t claim membership in the Agile School. Although I greatly admire the principles in the Manifesto, for me, the first thing to look at is the project’s context, and to proceed accordingly. The Traditionalistas, as I see it, emphasize the Agile Manifesto’s “things on the right”. Probably they do so with the desire to dispel variability, subjectivity, and unpredictability from testing.  I try to be empathetic towards those who advocate the things on the right, since those aren’t unreasonable things to want; it’s just unreasonable, in my view, to believe they’re the more important things in the complex, messy, human, and constantly changing world of software development.

The other, significantly smaller—and, in general, younger—group that I observed at EuroSTAR sees testing as questioning, exploration, discovery, investigation, and learning—and quality assistance. Let’s call that group the Vanguard. The Vanguard realizes that getting the right answers is important, but asking the right questions is more important—and recognizing that today’s “right questions” are probably different from yesterday’s “right questions” is more important still. In broad strokes, the Vanguard prefers

experience reports over “best practice” talks
conversation over lectures
hands-on exercises over PowerPoint presentations
tools for investigation over tools for confirmation
dialogue over monologue
sitting in a circle over classroom format
finding things out over hearing the answer

And, as in the Agile Manifesto, they recognize value in the things on the right, but they value the things on the left more.

The Vanguardistas are eager to participate in testing exercises, and to exchange testing skills by example and by dialogue. The Vanguard raises some difficulties for traditional trainers and presenters, because the Vanguard tends to want to ask questions and challenge authority—and as a trainer and presenter, I think that’s great. Many of the Vanguardistas participate in or organize Weekend Testing sessions. Almost all of them are on Twitter. They want to revive and reinvent testing as a sophisticated art that requires vigorous critical thinking. They’re indefatigably curious and engaged, and they’re becoming recognized as leaders in their community and in the testing craft.

One hallmark of the Vanguard at EuroSTAR was that they gravitated towards doing testing in the Test Lab, once again run by James Lyndsay (@workroomprds on Twitter) and Bart Knaack (@Btknaack on Twitter) after their impressive success at EuroSTAR last year. This year, 180 people visited the Test Lab. Though probably a minority of the overall attendees, that’s a significant percentage, and it’s all the more remarkable because, for space reasons, the Test Lab was quite a distance away from most of the presentations. This year there were more applications to test, more sharply focused vendor presentations, specific guidance for those who needed it, and lots of pairing and sharing. For me, one of the more memorable events was a relatively impromptu exploratory testing management roundtable, facilitated by James, with more than 20 people attending—remarkable because the event wasn’t noted specifically as a scheduled part of the conference programme; it was set up in the Test Lab, advertised by word of mouth, and fundamentally collaborative. The roundtable was one of those things that put the confer back in conference.

Of many high points of the roundtable conversation, the big one for me was the group’s recognition that testers don’t need to be domain experts from the outset of a testing assignment. Instead, testers can partner with domain experts in review and hands-on testing sessions, and in that collaboration get some excellent testing work done immediately. An exploratory testing cycle—test design, test execution, test result interpretation, learning, debriefing—drives rapid and highly effective learning about the domain. As Rob Sabourin (more on Rob later) articulated it: “Here’s a beautiful charter for a test session: Sit with a customer/user and ask ‘What gets in the way of you doing your work?'”

James and Bart were assisted this year by the Test Lab apprentices, Henrik Emilsson (@henrikemilsson on Twitter) and Martin Jansson (@martin_jansson on Twitter). At EuroSTAR 2011, management of the Test Lab will pass to Henrik and Martin. It’s in good hands. Henrik and Martin are members of a blogging cabal called thoughts from the test eye, which has been producing incisive, thoughtful reflections on testing since February 2008. An outstanding example is a blog post announcing their own list of software quality characteristics, in which they build on one of the pillars of James Bach’s Heuristic Test Strategy Model. But that’s just one example. Read the back issues and put the new ones in your feed reader.

Another member of the test eye collaborative is Rikard Edgren. Rikard was one of the conference chairs of EuroSTAR this year. He seems to have found a way to violate some fundamental law of physics by being everywhere at the same time; whenever I turned around, he was there with an expression on his face that reflected his keen observational skill and his sly humour. I’ve been lucky to have many interesting chats with him, not only this year but in years previous.

More on EuroSTAR 2010 tomorrow.

Rapid Software Testing Public Events in Europe

Monday, March 1st, 2010

It’s a busy spring for Rapid Testing in Europe.

I’m going to be at the Norwegian Computer Society’s FreeTest, a conference on free testing tools in Trondheim, Norway, where I’ll be giving a keynote talk on testing vs. checking on March 26.  That’s preceded by a three-day public session of Rapid Software Testing, from March 23-25.  Register here.

After that I’m off to Germany for a three-day public offering of Rapid Software Testing in Berlin, sponsored by Testing Experience.  That class happens March 29-31.  Can’t make it yourself?  Please spread the word!

Stephen Allott at Electromind is setting up a three-day Rapid Software Testing class that I’ll teach in London, May 11-13.  There’s also a testers’ gathering to be held in some accommodating pub on Wednesday the 12th.  If you’re in the area (or can get there), I’d love the opportunity to meet and chat.  Drop a line to me for details.

While all that’s going on, my colleague James Bach will be in Sweden—delivering a public RST class for AddQ Consulting in Kista, near Stockholm, March 16-18; a session of Rapid Software Testing in Gothenburg, March 22-24; a tutorial on Self-Education for Testers on March 25; and an appearance at the SAST conference on March 26.  That’s interspersed with a bunch of corporate consulting, after which he’ll be at the ACCU Conference in Oxford, UK April 14-17.

EuroSTAR’s Test Lab: Bravo!

Wednesday, December 9th, 2009

One of the coolest things about EuroSTAR 2009 was the test lab set up by James Lyndsay and Bart Knaack.

James and Bart (who self-identified as Test Lab Rats) provided testers with the opportunity to have a go at two applications, FreeMind (an open-source mind-mapping program) and OpenEMR (an open-source product for tracking medical records). The Lab Rats did a splendid job of setting things up and providing the services and information that participants needed to get up and running quickly.

Sponsorship in the form of five laptop computers was provided through the good graces of Steve Green at Test Partners, Stuart Noakes at Transition Consulting Ltd., and Bart Knaack at Logica. James Lyndsay also lent a server and a router to the event.

Sponsorship was also provided by tool vendors (here in alphabetical order) Andagon, MicroFocus, Microsoft, Neotys, and Testing Technologies. These sponsors had their tools installed on the laptops, and presented their demos by applying them to OpenEMR and FreeMind as they were installed in the Test Lab. On a loose schedule, some of the presenters did talks and demonstrations of how they tested.

The aforementioned Stuart Noakes and Mieke Gievers gave advice and assistance to the Lab Rats.

Well, that’s all very nice, but what was it like?

As someone who spent a couple of hours in the lab, exploring the applications and listening in on the presentations, I’d say it was terrific (although the prospect that OpenEMR is being used in actual medical practices seemed faintly alarming). Both applications were sophisticated enough for some reasonably serious testing, and had interesting problems to discover and report.

Interestingly, none of the certificationists or the standardization folks sat in the lab and tested, to my knowledge.

Bravo to James and Bart, to the sponsors, to the conference organizers and to the program committee for putting this together.  Let’s see more actual testing at testing conferences!

Upcoming Events: KWSQA and STAR West

Wednesday, September 16th, 2009

I’m delighted to have been asked to present a lunchtime talk at the Kitchener-Waterloo Software Quality Association, Wednesday September 30. I’ll be giving a reprise of my STAR East keynote talk, What Haven’t You Noticed Lately? Building Awareness in Testers. (The title has been pinched from Mark Federman, who got it from Terence McKenna, who may have got it from Marshall McLuhan, but maybe not.)

The following week, it’s STAR West in Anaheim, California. I’ll be giving a half-day workshop, Tester’s Clinic: Dealing with Tough Questions and Testing Myths and a track session, The Skill of Factoring: Identifying What to Test.

I’ll also be giving a bonus session, Using the Secrets of Improv to Improve Your Testing. I’ve done this one at Agile 2008 in Toronto and at the AYE Conference in 2006. It’s fun, and because so much of the learning comes from the participants in the moment, it’s also been remarkably insightful both times. Improv is about being aware of your actions, the actions of others, and how they relate to each other—immediately. Even dipping one’s toe in it is very exciting. Adam White talks compellingly about his experience of a couple of rounds of classes with Second City, and he did a well-regarded improv session at CAST 2008.

There’s an official panel discussion hosted by Ross Collard on Wednesday at 6:30, and there’s an official Meet-The-Presenter session Thursday morning. The rest of the time, James Bach and I will be holding unofficial versions of both of those things. We’ll be bringing testing toys and testing games, and workshopping old and new exercises with whoever wants to come. He’ll likely be talking about his new book, Secrets of a Buccaneer Scholar, a terrific memoir and guide to self-education.

I’d like to meet you at the conference, but I’m not sure who you are. If you’d like to do some hands-on testing puzzles, have a chat about testing vs. checking, or to discuss anything you like, drop me a line—michael at developsense.com.

Active Learning at Conferences

Saturday, May 9th, 2009

I was at STAR East this past week, giving a tutorial, a track session, and a keynote. I dropped in on a few of the other sessions, but at breaks I kept finding myself engaged in conversation with individuals and small groups, such that I often didn’t make it to the next session.

At STAR, like many conferences, the track presentations tend to be focused on someone’s proposed solution to some problem. Sometimes that solution is highly specific to a given problem that isn’t entirely relevant to the audience; sometimes it’s focused on a particular tool or process idea. The standard format for a track presentation is for a speaker to speak for an hour with at most a couple of minutes for questions at the very end. Typically someone is speaking because he has energy for a particular topic. So, with the best of intentions, he puts a lot of material into the talk such that there’s a morning’s worth of stuff to cover in an hour. Trust me: I know all about this, and alas my victi…I mean, my audiences do too.

So over the last several years, I’ve been trying to learn things to change that, and two annual conferences have helped to show me the way. The first, starting in 2002, was the annual AYE Conference, at which PowerPoint is banned and experiential workshops rule. The second is the annual Conference for the Association for Software Testing, which I attended in 2007 and chaired in 2008. For me, the key idea from which everything else follows is to transform the audience into the participants.

There are two basic types of sessions at CAST. One is the experiential workshop, which typically begins with an exercise, puzzle, or game that is intended to model some aspect of some problem that we all face. At the end of the exercise, the participants discuss what happened and what they’ve learned. Sometimes there’s another iteration or stage of the exercise; sometimes the discussion continues until time or energy is up. This is almost always far more memorable, more sticky, than someone’s story. The lessons learned are direct and personal. Instead of receiving a lesson or hearing about an experience, we’ve lived through one.

The other kind of session at CAST is the experience report. A speaker is given a specifically limited time to tell her story. Participants may ask clarifying questions (“What does CRPX stand for?” “I’m sorry, when you said ‘we finished in two’, did you mean two days or two weeks or two iterations?”). Other than that, participants stay quiet so that the speaker can tell her story uninterrupted. Then at the end of the talk, there’s a discussion in which all of the participants have the chance to question, contextualize, and respond to the presentation. Conversation is moderated by a trained facilitator whose job it is to direct traffic, to ensure that everyone gets a chance to be heard, and to make sure that the conversation isn’t dominated by a handful of people. Being an AST facilitator can be a challenging job, keeping order while co-ordinating the threads of the discussion and the queues of questions or comments, often with energetic people in the room.

And the energy is contagious. Participants and speakers alike are mandated to challenge tropes with their own experience, to identify dimensions of context that frame their experience, and to teach and learn from each other. When a session’s time is up, if there’s energy for a particular topic, the conversation continues and we change the break time, move to another room reserved for the purpose, or break out into groups for lunches or hallway conversations. People get engaged in the conversations; they discover new colleagues.

This presentation-and-discussion format is a scaled-up version of the LAWST-style workshops, a set of peer conferences which were started by Cem Kaner and Brian Lawrence in 1999 for the purpose of getting skilled testers in conversation with one another to address a specific question about software testing. At LAWST-style workshops, the typical attendance is 20 people or so. When the Association for Software Testing held its first conference in 2006, many people wondered whether the format would scale up to rooms of 100 people or more. Thanks in part to the lessons learned in the peer conferences, and also thanks to the skill of the facilitators, there have been many vigorous discussions—yet everyone who wants to be heard can be heard, even for the keynote presentations.

This year CAST will happen in Colorado Springs, Colorado, July 13-16. There are some very impressive speakers and tutorial leaders again this year, including Cem Kaner, Jerry Weinberg, James Bach, and Jonathan Koomey. It’s a conference by testers, for testers. I’ll have more to say about some of the speakers in coming weeks, but for now, follow the link and check it out.

Why I Am Not Yet Certified — EuroSTAR Presentation

Wednesday, December 5th, 2007

Today, December 4, 2007, I gave a presentation at EuroSTAR on “Why I Am Not (Yet) Certified”. James Bach was originally slated to give a different presentation with the same title, but I got the nod due to the untimely illness of James’ wife Lenore, which caused him to cancel his fall schedule (she’s much better now).

Stuart Reid, the chair of the conference, strongly supports the notion of certifications in their current forms. I disagree with that, but I have considerable respect for people who are willing to provide a platform for opposing views, and I therefore thank him for providing the opportunity to speak. I think the controversy opens up the discussion, and thereby strengthens the conference and the craft of testing.

As I said as I finished the presentation, I felt a little like Martin Luther nailing 42 PowerPoint slides to the screen. The talk was generally well received, but there were several conversations that I found rather sobering.

At least two people to whom I spoke–one a former ISEB instructor–told me that they had wanted to effect change in the multiple-choice Foundation exams, but in their experience that couldn’t happen unless the ISEB/ISTQB Syllabus were to change–and changing that proved an insurmountable obstacle for them.

Almost everyone who approached me afterwards said that they were glad that I had said the things that they had been thinking privately for several years. They tended to be enthusiastic but they also tended to check to see whether they were among friends before they spoke freely. The latter is a tendency we need to break. As it was, it felt like revolution and insurrection were in the air–but nobody was quite brave enough to speak up. I encourage people to talk about this stuff, out loud and in public. Open criticism of things that are damaging to the craft is a form of self-certification in my community.

The complacence and chill were disturbing, but once a group of people were together, the complaints started to flow. Many had taken the ISEB/ISTQB certifications. All but one found little to no value in them. They complained about the triviality and the one-and-only-one-answer nature of the Foundation Level exam. Saddest of all, they noted that in Britain and in several countries on the continent, almost all businesses that are hiring testers require applicants for entry-level jobs to have the ISEB/ISTQB certification. I’m pretty certain that this will have several nasty effects. First, it is likely to discourage people from entering the testing field the way many of our best testers have done–by accident and opportunity. In turn, this will make the profession more insular and less diverse. In turn, this will prevent new ideas from reaching the craft. This is very bad.

We’re already learning this business slowly enough. If you attend conferences–especially the major commercial ones–you’ll hear near-endless repetition of the same themes: heavyweight planning and estimation for a task that should be nimble, rapid, and responsive; bloated approaches to test documentation and artifacts; relentless focus on confirmation, verification, and validation, and very little talk of investigation, exploration, and discovery. It’s narcotic–the conferences seem addicted to these talks, and they make the craft sleepy. If we’re going to repeat anything, let’s repeat Einstein’s notion that we can’t solve problems by using the same level of thinking that we used when we created them.