Michael Bolton

Past Presentations

You can find an extensive list of presentations and courses that I've taught, including the slides and speaker notes for many of them, here.

Coming up—let's meet!

Check out my schedule, and drop me a line if you'd like to get together when I'm in your part of the world. If you'd like me to work with you and it doesn't look like I'm available, remember that my clients' schedules are subject to change, so mine is too. Finally, note that some conferences offer discounts—and even if they don't advertise it, maybe we can work something out.

April 7-8, 2014

Toronto, Canada

A full-day tutorial, a keynote address, and a track session at STAR Canada. The conference continues through April 9. Contact me for a special registration discount (an ultra-special discount before February 7).

April 11-17, 2014

Beijing, China

A keynote talk at an in-house testing conference, followed by a Rapid Software Testing class and a day of in-house consulting for a corporate client.

The Rapid Software Testing class originally to be held in Vienna has been postponed to September; see below.

April 28-May 2, 2014

London, UK

A return visit for four days (!) of the three-day Rapid Software Testing class at a client near Heathrow.

May 5-8, 2014

Orlando, Florida, USA

A full-day tutorial and a track session at STAR East.

May 19-23, 2014

Copenhagen, Denmark

PrettyGoodTesting is planning sessions of Rapid Software Testing for Managers and Rapid Software Testing the week of May 19. Register using the links on this page.

May 26-28, 2014

Åkersberga, Sweden

The Let's Test conference is in its third year as the leading conference for the context-driven community. This year I'll be presenting a workshop with my partner in crime, Paul Holland, on testing games and what we can learn from them.

June 23-26, 2014

Dublin, Ireland

A three-day Rapid Software Testing class, in-house for a corporate client.

August 11-13, 2014

New York City, USA

The Conference of the Association for Software Testing (CAST), where I'll be giving a workshop with Laurent Bossavit called "Thinking Critically About Numbers: Defence Against the Dark Arts".

September 15-24, 2014

New York City, USA

An intensive session of Rapid Software Testing with participants from Per Scholas. I'm excited about all of my classes, but this one's special.

September 29-October 3, 2014

Vienna, Austria

A three-day Rapid Software Testing class (with an extra day of consulting work) for a corporate client.

October 13-16, 2014

Anaheim, California, USA

The STAR West conference, where I'll be presenting a one-day Rapid Introduction to Rapid Software Testing and a one-day class on Critical Thinking for Testers.

October 27-November 7, 2014

Chengdu, China

A return visit to China for two more weeks of training and consulting in Rapid Software Testing with a corporate client. Watch this space for other events, too.

November 24-27, 2014

Dublin, Ireland

I'll be attending (at least) the EuroSTAR Conference.

Resources on Exploratory Testing, Metrics, and Other Stuff

Here are some resources on the Web that I've written, found very useful, or both. I'm constantly referring people to the writings and resources on this list.

Evolving Understanding of Exploratory Testing

My community defines exploratory testing as

a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to optimize the quality of his or her work by treating test design, test execution, test interpretation, and test-related learning as mutually supportive activities that continue in parallel throughout the project.

Yes, that's quite a mouthful. It was synthesized by Cem Kaner in 2006, based upon discussions at the Exploratory Testing Research Summit and the Workshop on Heuristic and Exploratory Techniques.

Sometimes, when we want to save time, we refer to exploratory testing more concisely: "parallel test design, test execution, and learning". These definitions are not contradictory. The former is more explicit; the latter can be uttered more quickly, and is intended to incorporate the former. (We used to say "simultaneous" rather than "parallel", but people apparently had trouble with the idea that simultaneous activities don't have to be happening at equal intensity at every moment, so "parallel" seems like a better word these days.)

Exploratory approaches to testing live at one end of a continuum. Scripted testing—writing out a list of step-by-step actions to perform, each step paired with a specific condition to observe—is at the other end of this continuum. Scripted approaches are common in software testing. Typically tests are designed by a senior tester, and given to someone else—typically a more junior tester—to execute.

A number of colleagues and I have serious objections to the scripted approach. It is expensive and time-consuming, and seems likely to lead to inattentional blindness. It also separates test design from test execution and result interpretation, and thereby lengthens and weakens the learning and feedback loops that would otherwise support and strengthen them.

Irrespective of any other dimension of it, we call a test more exploratory and less scripted to the extent that

James Bach and I recorded a conversation on this subject, which you can listen to here. At the 2008 Conference of the Association for Software Testing, Cem Kaner presented a talk called The Value of Checklists and the Danger of Scripts: What Legal Training Suggests for Testers. You can read that here.

Some claim that exploratory testing is "unstructured", equating it with "ad hoc testing" or "fooling around with the computer". In our definition of exploratory testing, such claims are false and unsupportable, and we reject them. Some may say that they are doing "exploratory testing" when they are behaving in an unskilled, unprofessional manner, but we reject this characterization as damaging not only to exploratory testing, but to the reputation of testers and testing generally. If you are not using the learning garnered from test design and test execution in a continuous and rapid loop to optimize the quality of the work, you are not doing exploratory testing. If exploratory testing is "fooling around with the computer", then forensic investigation is "poking around inside a dead body".

"Ad hoc" presents an interesting problem, because those who equate "ad hoc" with "exploratory" not only misunderstand the latter, but misrepresent the former as meaning "sloppy", "slapdash", "unplanned", or "undocumented". "Ad hoc" means literally "to this", or "to the purpose". The Rogers Commission on the Challenger explosion was an ad hoc commission, but it wasn't the Rogers Sloppy Commission or the Rogers Slapdash Commission. The Commission planned its work and its report, and was thoroughly documented. The Rogers Commission was formed for a specific purpose, did its work, and was dissolved when its work was done. In that sense, all testing should be "ad hoc". But alas "ad hoc" and its original meaning parted company several years ago. Exploratory testing is certainly not ad hoc in its revised sense.

Structures of Exploratory Testing

Exploratory testing is not structured in the sense of following a prescribed, step-by-step list of instructions, but that's not what structure means. Structure, per the Oxford English Dictionary, means "the arrangement of and relations between the parts or elements of something complex". In this definition, there is no reference to sequencing or to lists of instructions to follow. So, just as education, nutrition, driving an automobile, juggling, child-rearing, and scientific revolutions are structured and have structure, exploratory testing is also structured. In fact, there are many structures associated with exploratory testing. What follows is an evolving list of lists of those structures:

This is a blog posting that I wrote in September, 2008, summarizing some important points about exploratory testing.

James Bach was interviewed by Matthew Osborn and Federico Silva Armas for a CodingQA podcast in November of 2009. The recording is here, and text summaries can be found here and here.

Meaningful Measurement

The software development and testing business seems to have a very poor understanding of measurement theory and measurement-related pitfalls, so conversations about measurement are often frustrating for me. People assume that I don't like measurement of any kind. Not true; the issue is that I don't like bogus measurement, and there's an overwhelming amount of it out there.

I've written three articles that explain my position on the subject:

I agree with Jerry Weinberg's definition of measurement: the art and science of making reliable and significant observations. I'll suggest that anyone who wants to have a reasonable discussion with me on measurement should read and reflect deeply upon

Software Engineering Metrics: What Do They Measure and How Do We Know (Kaner and Bond)

This paper provides an excellent description of quantitative measurement, identifies dangerous measurement pitfalls, and suggests some helpful questions to avoid them. One key insight that this paper triggered for me: a metric is a measurement function, the model- and theory-driven operation by which we attach a number to an observation. Metrics are not measurement. A metric is a tool used in the practice of measurement, and to me, using "measurement" and "metric" interchangeably signals confusion. When someone from some organization says "we're establishing a metrics program", it's like a cabinet-maker saying "I'm establishing a hammering program."
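
To make that distinction concrete, here is a minimal sketch in Python (my own hypothetical illustration, not an example from the Kaner and Bond paper): the metric is only the function that attaches a number to observations; everything around it is the measurement work.

    # Hypothetical sketch: the metric is only the function that attaches
    # a number to observations. The surrounding measurement work --
    # deciding what to observe, gathering observations reliably, and
    # interpreting the resulting number against a model of what it is
    # supposed to indicate -- is not captured by the function itself.

    def defect_density(defect_count: int, lines_of_code: int) -> float:
        """A metric: a function that attaches a number to observations."""
        return defect_count / (lines_of_code / 1000)  # defects per KLOC

    # The observations (how were defects counted? what counted as a line
    # of code?) and the interpretation (what would 4.2 defects per KLOC
    # mean about quality or risk?) are where the measurement work lives.
    print(defect_density(defect_count=21, lines_of_code=5000))  # 4.2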

Here are some more important references on measurement, both quantitative and qualitative, and on the risks of invalid measurement, distortion, and dysfunction:

Show me measurements that have been thoughtfully conceived, reliably obtained, carefully and critically reviewed, and that avoid the problems identified in these works, and I'll buy into them. Otherwise I'll point out the risks, or recommend that they be trashed. As James Bach says, "Helping to mislead our clients is not a service that we offer."

Investigating Hard-To-Reproduce Bugs

Finding it hard to reproduce the circumstances in which you noticed a bug?

The Heuristic Test Strategy Model

This document by James Bach describes the test strategy model that is central to the practice of Rapid Testing.

Context-Driven Testing Explained

Cem Kaner and James Bach collaborated on a detailed description of context-driven testing, explaining it and contrasting it with other approaches.

Unpublished Articles


Test Matrices

A test matrix is a handy approach to organizing test ideas, tracking results, and visualizing test coverage. Read more here.
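
As a rough illustration of the idea (a hypothetical Python sketch of my own; the feature and configuration names are made up, and the article goes into much more depth): rows hold test ideas or coverage areas, columns hold configurations, and each cell records a status, so coverage gaps are visible at a glance.

    # Hypothetical sketch of a test matrix: rows are test ideas, columns
    # are configurations, and each cell records an observed status.
    # A "?" marks a combination that hasn't been covered yet.

    test_ideas = ["Login", "Search", "Checkout"]
    configurations = ["Chrome", "Firefox", "Safari"]

    results = {
        ("Login", "Chrome"): "pass",
        ("Login", "Firefox"): "pass",
        ("Search", "Chrome"): "FAIL",
        ("Checkout", "Safari"): "blocked",
    }

    # Print the matrix so that gaps and failures stand out.
    print(f"{'':12}" + "".join(f"{c:>10}" for c in configurations))
    for idea in test_ideas:
        row = f"{idea:12}"
        for config in configurations:
            row += f"{results.get((idea, config), '?'):>10}"
        print(row)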

Visual SourceSafe Defects

While developing a utility to migrate files from Visual SourceSafe (VSS) to another version control package, I had to test Visual SourceSafe itself. These tests demonstrated to me that VSS's file and database management is so defect-ridden as to present a danger to customers using the product in reasonable scenarios and circumstances. Although it's an older article (circa 2002), it turned out to be an excellent example of rapid and exploratory testing approaches, and an example of the kind of test report that I would issue to a client. Your mileage may vary, but these are my findings.

A Review of Error Messages

Creating a good error message is challenging. On the one hand, it needs to be informative, to assist the user, and to suggest reasonable actions to mitigate the problem. On the other hand, it needs to avoid giving hackers and other disfavoured users the kind of information that they seek to compromise security or robustness. Here are some suggestions.
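
As one way of illustrating that balance (a minimal, hypothetical Python sketch of my own, not an example from the review): keep the revealing detail in an internal log, and give the user a message that explains what happened, suggests an action, and offers a reference that support can look up.

    # Hypothetical sketch: log the revealing detail internally; show the
    # user a message that is informative and actionable without exposing
    # paths, stack traces, or other internals that could help an attacker.
    import logging
    import uuid

    logging.basicConfig(filename="app.log", level=logging.ERROR)

    def load_profile(path: str) -> str:
        try:
            with open(path) as f:
                return f.read()
        except OSError as exc:
            reference = uuid.uuid4().hex[:8]
            # Full detail goes to the internal log, keyed by a reference ID...
            logging.error("profile load failed [%s]: %r", reference, exc)
            # ...while the user sees an explanation, a suggested action,
            # and the reference ID to quote when contacting support.
            raise RuntimeError(
                "Your profile could not be loaded. Please try again; if the "
                f"problem persists, contact support and quote reference {reference}."
            ) from None

    # Example: load_profile("missing-profile.dat") raises the user-facing error.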

Pairwise Testing and Orthogonal Arrays

Pairwise and orthogonal array test techniques may allow us to obtain better test coverage—or maybe not. Over the years, I've changed my views on these techniques. I explain all-pairs and orthogonal arrays here, and I then include some tempering of the basic story—and some counter-tempering too.
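
For readers who want to see the basic mechanism, here is a minimal sketch in Python (my own illustration with hypothetical parameters, using a simple greedy selection rather than a true orthogonal array): choose a subset of combinations such that every pair of parameter values appears together in at least one test.

    # Hypothetical sketch of all-pairs ("pairwise") test selection:
    # pick a subset of the full combinatorial set so that every pair of
    # parameter values appears together in at least one selected test.
    # A greedy pick is not guaranteed to be minimal, but it shows the idea.
    from itertools import combinations, product

    parameters = {
        "browser": ["Firefox", "Chrome", "Safari"],
        "os": ["Windows", "macOS", "Linux"],
        "locale": ["en", "fr", "de"],
    }
    names = list(parameters)

    def pairs_covered(test):
        """All (parameter, value) pairs exercised together by one test."""
        return set(combinations(zip(names, test), 2))

    all_tests = list(product(*parameters.values()))   # 27 combinations in all
    uncovered = set().union(*(pairs_covered(t) for t in all_tests))

    # Greedily pick the candidate that covers the most not-yet-covered pairs.
    selected = []
    while uncovered:
        best = max(all_tests, key=lambda t: len(pairs_covered(t) & uncovered))
        selected.append(best)
        uncovered -= pairs_covered(best)

    print(f"Full combinatorial set: {len(all_tests)} tests")
    print(f"Pairwise-covering set:  {len(selected)} tests")
    for test in selected:
        print(dict(zip(names, test)))

With these hypothetical parameters, the greedy pass reduces 27 combinations to a much smaller set that still exercises every pair of values; that trade, along with its limitations, is what the article examines.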
