Michael Bolton

Past Presentations

You can find an extensive list of presentations and courses that I've taught, including the slides and speaker notes for many of them, here.

Coming up—let's meet!

Check out my schedule, and drop me a line if you'd like to get together when I'm in your part of the world. If you'd like me to work with you and it doesn't look like I'm available, remember that my clients' schedules are subject to change, so mine is too. Finally, note that some conferences offer discounts—and even if they don't advertise it, maybe we can work something out.

January 15-18, 2018

Vienna, Austria

The three-day Rapid Software Testing class, one day of consulting, and evening events with a corporate client.

January 29-February 1, 2018

Columbia, Missouri, USA

The three-day Rapid Software Testing class, one day of consulting, and evening events with a corporate client.

February 12-16, 2018

Portland, Maine, USA

Five days of consulting with a corporate client.

February 21-24, 2018

Guadalajara, Mexico

Rapid Software Testing makes its first appearance in Mexico! Details to come.

March 7-9, 2018

Zurich, Switzerland

For the third year in a row, House of Test Switzerland presents Rapid Software Testing, with James Bach and me co-teaching the class. This is very rare indeed, and each year we develop new material for the occasion.

April 10-13, 2018

Reykjavik, Iceland

Another new country for a public session of the three-day Rapid Software Testing class. Details to come.

April 30-May 3, 2018

Orlando, Florida, USA

The STAREAST testing conference. Details to come.

May 23-29, 2018

Lodz, Poland

A public session of the three-day Rapid Software Testing class, followed by a conference. More details coming soon!

Resources on Exploratory Testing, Metrics, and Other Stuff

Here are some resources on the Web that I've written, found very useful, or both. I'm constantly referring people to the writings and resources on this list.

Evolving Understanding of Exploratory Testing

My community defines exploratory testing as

a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to optimize the quality of his or her work by treating test design, test execution, test interpretation, and test-related learning as mutually supportive activities that continue in parallel throughout the project.

Yes, that's quite a mouthful. It was synthesized by Cem Kaner in 2006, based upon discussions at the Exploratory Testing Research Summit and the Workshop on Heuristic and Exploratory Techniques.

Sometimes, when we want to save time, we refer to exploratory testing more concisely: "parallel test design, test execution, and learning". These definitions are not contradictory. The former is more explicit; the latter can be uttered more quickly, and is intended to incorporate the former. (We used to say "simultaneous" instead of "parallel", but people apparently had trouble with the idea that simultaneous activities don't have to be happening at equal intensity at every moment, so "parallel" seems like a better word these days.)

Exploratory approaches to testing live at one end of a continuum. Scripted testing—writing out a list of step-by-step actions to perform, each step paired with a specific condition to observe—is at the other end of this continuum. Scripted approaches are common in software testing. Typically, tests are designed by a senior tester and given to someone else, often a more junior tester, to execute.

A number of colleagues and I have serious objections to the scripted approach. It is expensive and time-consuming, and seems likely to lead to inattentional blindness. It also separates test design from test execution and result interpretation, and thereby lengthens and weakens the learning and feedback loops that would otherwise support and strengthen them.

Irrespective of any other dimension of it, we call a test more exploratory and less scripted to the extent that the tester is in control of the design of the tests as the tests are being performed.

James Bach and I recorded a conversation on this subject, which you can listen to here. At the 2008 Conference for the Association for Software Testing, Cem Kaner presented a talk called The Value of Checklists and the Danger of Scripts: What Legal Training Suggests for Testers. You can read that here.

Some claim that exploratory testing is "unstructured", equating it with "ad hoc testing" or "fooling around with the computer". In our definition of exploratory testing, such claims are false and unsupportable, and we reject them. Some may say that they are doing "exploratory testing" when they are behaving in an unskilled, unprofessional manner, but we reject this characterization as damaging not only to exploratory testing, but to the reputation of testers and testing generally. If you are not using the learning garnered from test design and test execution in a continuous and rapid loop to optimize the quality of the work, you are not doing exploratory testing. If exploratory testing is "fooling around with the computer", then forensic investigation is "poking around inside a dead body".

"Ad hoc" presents an interesting problem, because those who equate "ad hoc" with "exploratory" not only misunderstand the latter, but misrepresent the former as meaning "sloppy", "slapdash", "unplanned", or "undocumented". "Ad hoc" means literally "to this", or "to the purpose". The Rogers Commission on the Challenger explosion was an ad hoc commission, but it wasn't the Rogers Sloppy Commission or the Rogers Slapdash Commission. The Commission planned its work and its report, and was thoroughly documented. The Rogers Commission was formed for a specific purpose, did its work, and was dissolved when its work was done. In that sense, all testing should be "ad hoc". But alas "ad hoc" and its original meaning parted company several years ago. Exploratory testing is certainly not ad hoc in its revised sense.

Structures of Exploratory Testing

Exploratory testing is not structured in the sense of following a prescribed, step-by-step list of instructions, since that's not what structure means. Structure, per the Oxford English Dictionary, means "the arrangement of and relations between the parts or elements of something complex". In this definition, there is no reference to sequencing or to lists of instructions to follow. So, just as education, nutrition, driving an automobile, juggling, child-rearing, and scientific revolutions are structured and have structure, exploratory testing is also structured. In fact, there are many structures associated with exploratory testing. What follows is an evolving list of lists of those structures:

This is a blog posting that I wrote in September 2008, summarizing some important points about exploratory testing.

James Bach was interviewed by Matthew Osborn and Federico Silva Armas for a CodingQA podcast in November of 2009. The recording is here, and text summaries can be found here and here.

Meaningful Measurement

The software development and testing business seems to have a very poor understanding of measurement theory and measurement-related pitfalls, so conversations about measurement are often frustrating for me. People assume that I don't like measurement of any kind. Not true; the issue is that I don't like bogus measurement, and there's an overwhelming amount of it out there.

I've written three articles that explain my position on the subject:

I agree with Jerry Weinberg's definition of measurement: the art and science of making reliable and significant observations. I'll suggest that anyone who wants to have a reasonable discussion with me on measurement should read and reflect deeply upon

Software Engineering Metrics: What Do They Measure and How Do We Know? (Kaner and Bond)

This paper provides an excellent description of quantitative measurement, identifies dangerous measurement pitfalls, and suggests some helpful questions to avoid them. One key insight that this paper triggered for me: a metric is a measurement function, the model- and theory-driven operation by which we attach a number to an observation. Metrics are not measurement. A metric is a tool used in the practice of measurement, and to me, using "measurement" and "metric" interchangeably signals confusion. When someone from some organization says "we're establishing a metrics program", it's like a cabinet-maker saying "I'm establishing a hammering program."
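
To illustrate that distinction, here's a hypothetical example of my own (not Kaner and Bond's): a classic metric rendered as a Python function. The function is the metric; measurement is everything around it, including the model behind the function, the reliability of the observations, and the interpretation of the number.

    def defect_density(defects_found, lines_of_code):
        # The metric: a function that attaches a number to observations.
        return defects_found / (lines_of_code / 1000.0)

    # The number itself says nothing about whether the observations were
    # reliable (were all defects found and counted consistently?) or whether
    # the model is valid (is a line of code a sensible stand-in for "amount
    # of product"?). Those questions belong to measurement, not to the metric.
    print(defect_density(42, 10_000))  # 4.2 "defects per KLOC", for what that's worth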

Here are some more important references on measurement, both quantitative and qualitative, and on the risks of invalid measurement, distortion, and dysfunction:

Show me measurements that have been thoughtfully conceived, reliably obtained, carefully and critically reviewed, and that avoid the problems identified in these works, and I'll buy into them. Otherwise I'll point out the risks, or recommend that they be trashed. As James Bach says, "Helping to mislead our clients is not a service that we offer."

Investigating Hard-To-Reproduce Bugs

Finding it hard to reproduce the circumstances in which you noticed a bug?

The Heuristic Test Strategy Model

This document by James Bach describes the test strategy model that is central to the practice of Rapid Testing.

Context-Driven Testing Explained

Cem Kaner and James Bach collaborated on a detailed description of context-driven testing, explaining it and contrasting it with other approaches.

Unpublished Articles


Test Matrices

A test matrix is a handy approach to organizing test ideas, tracking results, and visualizing test coverage. Read more here.
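
As a rough sketch of the idea (a hypothetical example of my own; the article describes the approach properly), a test matrix can be as simple as a grid of product areas against test ideas, where empty cells make untested territory easy to spot:

    # Rows are areas of the product; columns are test ideas; cells hold the
    # latest result. The area names, ideas, and results here are made up.
    ideas = ["Capability", "Reliability", "Usability", "Performance"]
    matrix = {
        "Login":    {"Capability": "pass", "Reliability": "FAIL", "Usability": "",     "Performance": ""},
        "Search":   {"Capability": "pass", "Reliability": "",     "Usability": "pass", "Performance": "blocked"},
        "Checkout": {"Capability": "",     "Reliability": "",     "Usability": "",     "Performance": ""},
    }

    # Print the grid; a dash marks a cell with no result yet.
    print(f"{'Area':<10}" + "".join(f"{idea:<13}" for idea in ideas))
    for area, results in matrix.items():
        print(f"{area:<10}" + "".join(f"{results[idea] or '-':<13}" for idea in ideas))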

Visual SourceSafe Defects

While developing a utility to migrate files from Visual SourceSafe (VSS) to another version control package, I had to test Visual SourceSafe itself. These tests demonstrated to me that VSS's file and database management is so defect-ridden as to present a danger to customers using the product in reasonable scenarios and circumstances. Although it's an older article (circa 2002), it turned out to be an excellent example of rapid and exploratory testing approaches, and an example of the kind of test report that I would issue to a client. Your mileage may vary, but these are my findings.

A Review of Error Messages

Creating a good error message is challenging. On the one hand, it needs to be informative, to assist the user, and to suggest reasonable actions to mitigate the problem. On the other hand, it needs to avoid giving hackers and other disfavoured users the kind of information they seek in order to compromise security or robustness. Here are some suggestions.
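
One way to strike that balance, sketched here as a hypothetical Python example of my own (not from the article): keep the revealing detail in a private log, and give the user something helpful but non-revealing, linked by a reference code.

    import logging
    import uuid

    log = logging.getLogger("app")

    def handle_db_error(exc):
        # A short reference code ties the user's report to the detailed log entry.
        incident = uuid.uuid4().hex[:8]
        # The full, revealing detail stays in the private log for developers.
        log.error("incident %s: %r", incident, exc)
        # The user learns what happened, what to do next, and how to refer to
        # the problem, with no stack trace, query text, or host names exposed.
        return ("We couldn't save your changes just now. Please try again in a "
                "few minutes, or contact support and mention incident " + incident + ".")

    print(handle_db_error(RuntimeError("connection to db01:5432 refused")))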

Pairwise Testing and Orthogonal Arrays

Pairwise and orthogonal array test techniques may allow us to obtain better test coverage—or maybe not. Over the years, I've changed my views on these techniques. I explain all-pairs and orthogonal arrays here, and I then include some tempering of the basic story—and some counter-tempering too.
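
To give the flavour of the all-pairs idea, here's a minimal greedy sketch in Python (my own illustration with made-up parameters, not the worked example from the article): from the full cartesian product of parameter values, repeatedly pick the test that covers the most not-yet-covered pairs, until every pair is covered.

    from itertools import combinations, product

    def new_pairs(test, covered):
        # Pairs of (parameter-index, value) settings this test would newly cover.
        settings = list(enumerate(test))
        return {pair for pair in combinations(settings, 2) if pair not in covered}

    def allpairs(parameters):
        all_tests = list(product(*parameters))
        # Total number of distinct value pairs that must be covered.
        total = sum(len(a) * len(b) for a, b in combinations(parameters, 2))
        covered, suite = set(), []
        while len(covered) < total:
            best = max(all_tests, key=lambda t: len(new_pairs(t, covered)))
            covered |= new_pairs(best, covered)
            suite.append(best)
        return suite

    browsers  = ["Firefox", "Chrome", "Safari"]
    platforms = ["Windows", "macOS"]
    locales   = ["en", "fr", "de"]

    suite = allpairs([browsers, platforms, locales])
    print(len(suite), "tests cover every pair; the full product has",
          len(browsers) * len(platforms) * len(locales))
    for test in suite:
        print(test)

A greedy pass like this isn't guaranteed to find the smallest possible suite, but it makes the trade-off visible: far fewer tests than the full product, with every pair of values still exercised at least once.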
