Michael Bolton

Past Presentations

You can find an extensive list of presentations and courses that I've taught, including the slides and speaker notes for many of them, here.

Coming up—let's meet!

Check out my schedule, and drop me a line if you'd like to get together when I'm in your part of the world. If you'd like me to work with you and it doesn't look like I'm available, remember that my clients' schedules are subject to change, so mine is too. Finally, note that some conferences offer discounts—and even if they don't advertise it, maybe we can work something out.

April 7-8, 2014

Toronto, Canada

A full-day tutorial, a keynote address, and a track session at STAR Canada. The conference continues through April 9. Contact me for a special registration discount (an ultra-special discount before February 7).

April 11-17, 2014

Beijing, China

A keynote talk at an in-house testing conference, followed by a Rapid Software Testing class and a day of in-house consulting for a corporate client.

The Rapid Software Testing class originally to be held in Vienna has been postponed to September; see below.

April 28-May 2, 2014

London, UK

A return visit for four days (!) of the three-day Rapid Software Testing class at a client near Heathrow.

May 5-8, 2014

Orlando, Florida, USA

A full-day tutorial and a track session at STAR East.

May 19-23, 2014

Copenhagen, Denmark

PrettyGoodTesting is planning sessions of Rapid Software Testing for Managers and Rapid Software Testing the week of May 19. Register using the links on this page.

May 26-28, 2014

Åkersberga, Sweden

The Let's Test conference is in its third year as the leading conference for the context-driven community. This year I'll be presenting a workshop on testing games and what we can learn from them with my partner in crime, Paul Holland.

June 23-26, 2014

Dublin, Ireland

A three-day Rapid Software Testing class, in-house for a corporate client.

August 11-13, 2014

New York City, USA

The Conference of the Association for Software Testing (CAST), where I'll be giving a workshop with Laurent Bossavit called "Thinking Critically About Numbers: Defence Against the Dark Arts".

September 15-24, 2014

New York City, USA

An intensive session of Rapid Software Testing with participants from Per Scholas. I'm excited about all of my classes, but this one's special.

September 29-October 3, 2014

Vienna, Austria

A three-day Rapid Software Testing class (with an extra day of consulting work) for a corporate client.

October 13-16, 2014

Anaheim, California, USA

The STAR West conference, where I'll be presenting a one-day Rapid Introduction to Rapid Software Testing and a one-day class on Critical Thinking for Testers.

October 27-November 7, 2014

Chengdu, China

A return visit to China for two more weeks of training and consulting in Rapid Software Testing with a corporate client. Watch this space for other events, too.

November 24-27, 2014

Dublin, Ireland

I'll be attending (at least) the EuroSTAR Conference.

Books

How to Reduce the Cost of Software Testing

Many of the costs of software development and testing are subtle, and sometimes they're hard to quantify. That doesn't make the costs any less real. In this book, edited by Matt Heusser and Govind Kulkarni, I contributed a chapter in which I tell the story of a testing project that I worked on, and identify several dimensions of cost that often go unnoticed.

The Gift of Time

I contributed a chapter to this collection of essays, edited by Fiona Charles, in honour of the life and work of Jerry Weinberg.

Jerry is a pioneer of software testing, starting with setting up the first independent test group for Project Mercury in 1958. He's famous for having been among the first to pay attention to the human dimensions at the bottom of all technical problems. Jerry has been an enormous influence on my work, and on my community. It was an honour to participate in this project, which includes contributions from Fiona Charles, Bob Glass, James Bach, Jean McLendon, Sherry Heinze, Sue Petersen, Esther Derby, Willem van den Ende, Judah Mogilensky, Naomi Karten, James Bullock, Tim Lister, Johanna Rothman, Jonathan Kohl, Dani Weinberg, and Bent Adsersen.

Agile Testing: A Practical Guide for Testers and Agile Teams

I contributed a sidebar on exploratory testing to this book by Janet Gregory and Lisa Crispin.

Articles

Unless specified otherwise, all articles and columns mentioned here are in Adobe Acrobat (.PDF) format.

Testing Without A Map

Better Software Magazine, Vol. 7, No. 1, January 2005

This is a worked-out example and discussion of how to do one kind of exploratory testing: reconnaissance and fast evaluation of a product that you've never seen before. It's based on using the HICCUPP mnemonic (History, Image, Comparable Products, Claims, User Expectations, Product, and Purpose) consistency heuristics to guide testing; when a product is inconsistent with one of these aspects, we have reason to suspect a problem. (The HICCUPP list now includes "S", for Standards or Statutes, and an inconsistency heuristic: "F", for "familiar problems".)
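As a quick illustration (a sketch of my own, not from the article, with the heuristic descriptions paraphrased from memory), here's one way a tester might keep the expanded checklist close at hand during a session:

```python
# A sketch (not from the column): the expanded consistency heuristics
# as a checklist a tester might review against an observation.
CONSISTENCY_HEURISTICS = {
    "History": "the product is consistent with its own past behaviour",
    "Image": "the product is consistent with the image the organization wants to project",
    "Comparable Products": "the product is consistent with comparable products",
    "Claims": "the product is consistent with what important people say it should be",
    "User Expectations": "the product is consistent with what reasonable users want",
    "Product": "each element is consistent with comparable elements in the same product",
    "Purpose": "the product is consistent with its explicit and implicit purposes",
    "Standards/Statutes": "the product is consistent with applicable standards and laws",
    "Familiar Problems": "the product is INconsistent with patterns of familiar problems",
}

def review(observation: str) -> None:
    """Prompt the tester to weigh an observation against each heuristic."""
    print(f"Observation: {observation}")
    for name, rule in CONSISTENCY_HEURISTICS.items():
        print(f"  [{name}] Is this a problem because {rule}?")

review("Save dialog truncates file names longer than 64 characters")
```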

Comparatively Speaking

Software Testing and Quality Engineering, Vol. 6, No. 7, September, 2004

"Best"—as in "best practice" or "best strategy" or "best tool" is never an absolute; it's always a relationship. That's why, when we hear the word "best", it's a good idea to ask "Best compared to what?"

Are You Ready?

Software Testing and Quality Engineering, Vol. 5, No. 3, May/June, 2003

In my working life as a consultant and contractor, I've often found that organizations aren't ready for me to arrive, which wastes money and time—both mine and theirs. Here are some problems to avoid and some ways to avoid them.

Should Automated Acceptance Tests Use the GUI the Application Provides?

LogiGear's Insider's Guide to Strategic Software Testing Newsletter, November 2007

 

Test Connection Columns from Better Software Magazine

Swan Song

Better Software, Vol. 12, No. 1, January 2010

Black Swans, as described in Nassim Nicholas Taleb's book The Black Swan, are improbable and unexpected events with a severe impact. One of the most important goals of testing is to find problems in the product. In my last regular column for Better Software, I examine what testers can do to help reduce the likelihood that we'll encounter a Black Swan.

Constructing the Quality Story

Better Software, Vol. 11, No. 7, November 2009

Knowledge doesn't just exist; we build it. Sometimes we disagree on what we've got, and sometimes we disagree on how to get it. Hard as it may be to imagine, the experimental approach itself was once controversial. What can we learn from the disputes of the past? How do we manage skepticism and trust and tell the testing story?

Food for Thought

Better Software, Vol. 11, No. 6, September 2009

Ideas about testing can come from many different and unexpected sources, including reductionism, agronomy, cognitive psychology, mycology, and general systems. I feasted on Michael Pollan's "The Omnivore's Dilemma" and found much to whet my appetite for learning about how things work.

Three Kinds of Measurement (And Two Ways to Use Them)

Better Software, Vol. 11, No. 5, July 2009

How do we know what's going on? We measure. Are software development and testing sciences, subject to the same kind of quantitative measurement that we use in physics? If not, what kinds of measurements should we use? How could we think more usefully about measurement to get maximum value with a minimum of fuss? One thing is for sure: we waste time and effort when we try to obtain six-decimal-place answers to whole-number questions. Unquantifiable doesn't mean unmeasurable. We measure constantly without resorting to numbers. Goldilocks did it.

This column was also reprinted in LogiGear's Insider's Guide to Strategic Software Testing Newsletter.

Issues About Metrics About Bugs

Better Software, Vol. 11, No. 4, May 2009

Managers often use metrics to help make decisions about the state of the product or the quality of the work done by the test group. Yet measurements derived from bug counts can be highly misleading because a "bug" isn't a tangible, countable thing; it's a label for some aspect of some relationship between some person and some product, and it's influenced by when and how we count... and by who is doing the counting.

This column was reprinted in LogiGear's Insider's Guide to Strategic Software Testing Newsletter.

Learning from Experience

Better Software, Vol. 11, No. 3, April 2009

People often point to requirements documents and process manuals as ways to guide a new tester. Research into knowledge transfer, as described in The Social Life of Information, suggests that there is much more to the process of learning. In this column, I describe my own experiences on a new project, noting how the documentation helped... and didn't.

Off the Trails

Better Software, Vol. 11, No. 2, March 2009

A focused approach toward testing a product is important, but sometimes we discover information that we didn't anticipate at all. One of the key skills in testing is dynamically managing our focus: sharpening it sometimes, widening it at other times. If we vary our approaches, we might find something surprising and broaden our coverage.

Lucky And Smart

Better Software, Vol. 11, No. 1, January 2009

Charles Darwin was certainly a great scientist, but his career and his discoveries were also strongly influenced by serendipity and luck. What could this great explorer and scientist teach us about testing?

A Map By Any Other Name

Better Software, Vol. 10, No. 10, December 2008

A mapping illustrates a relationship between two things. In testing, a map might look like a road map, but it might also look like a list, a chart, a table, or a pile of stories. We can use any of these to help us think about test coverage.

Cover or Discover

Better Software, Vol. 10, No. 9, November 2008

Excellent testing isn't just about covering the "map"—it's also about exploring the territory, which is the process by which we discover things that the map doesn't cover.

Got You Covered

Better Software, Vol. 10, No. 8, October 2008

Excellent testing starts by questioning the mission. So, the first step when we are seeking to evaluate or enhance the quality of our test coverage is to ask for whom we're assessing coverage, and why.

It's In The Way That You Use It

Better Software, Vol. 10, No. 7, September 2008

Rapid testers don't think of test automation merely as something that controls a program and checks for some expected result. Instead, we think of test automation as any use of tools to support testing. With that definition in mind, it may not be the most obvious automation tool that is the most useful.
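To make that definition concrete, here's a hypothetical sketch (mine, not the column's) of automation that supports testing without checking a single expected result: it generates awkward inputs and records outcomes, leaving the evaluation to a human. The save_record function and the boundary lengths here are invented for the example.

```python
# Hypothetical sketch: a tool that *supports* testing by generating
# varied inputs and logging what happened -- no expected results,
# no pass/fail verdicts; the human tester does the evaluating.
import random
import string

def random_name(rng: random.Random) -> str:
    """Build a name whose length sits at or near plausible boundaries."""
    length = rng.choice([0, 1, 2, 63, 64, 65, 255, 256])
    return "".join(rng.choice(string.printable) for _ in range(length))

def exercise(save_record, trials: int = 20, seed: int = 1) -> None:
    """Feed varied names to a function and log outcomes for review."""
    rng = random.Random(seed)
    for i in range(trials):
        name = random_name(rng)
        try:
            outcome = save_record(name)
        except Exception as exc:      # record the failure; don't judge it
            outcome = f"raised {type(exc).__name__}: {exc}"
        print(f"trial {i}: len={len(name)} -> {outcome!r}")

if __name__ == "__main__":
    def toy_save(name):               # stand-in for the product under test
        if len(name) > 64:
            raise ValueError("name too long")
        return "saved"
    exercise(toy_save)
```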

Two Cheers for Ambiguity

Better Software, Vol. 10, No. 6, July-August 2008

Some people dismiss words such as skill, diversity, problems, and mission as being too ambiguous to be useful. But one tester's ambiguity is another tester's gauge for assessing consensus on a project—and for understanding how to achieve that consensus.

Know Where Your Wheels Are

Better Software, Vol. 10, No. 5, June 2008

Testing is a complex cognitive task—like driving. What role do written rules play in achieving competence? What about experiential learning? How about the advice of mentors? We can learn about testing by remembering how we learned to drive.

Out of the Rut

Better Software, Vol. 10, No. 4, May 2008

Are you bored? Do you feel as though all you do is repeat heavily scripted tests? Do you find that, as a result, you aren't learning, discovering new problems, or finding bugs? These nine heuristics can help you get out of your rut and take back control of your testing process.

Learning the Hardware Lessons

Better Software, Vol. 10, No. 3, April 2008

Systems and software aren't just about correctness; they are also about solving problems for people. According to the context-driven software testing movement, if the problem isn't solved, the product doesn't work. My experience in a hardware store drives that lesson home, and shows the importance of people over computer systems.

How Much Is Enough?

Better Software, Vol. 10, No. 2, March 2008

Exploratory testers design and execute tests in the moment, starting with an open mission and investigating new ideas as they arise. But how do we know when to stop? The first step is to recognize that we can't know for sure when we're done, because any approach to answering the stopping question is necessarily heuristic. But there are at least seven ideas that we might want to consider when we're trying to decide when to stop a test, a test cycle, or a development project. The blog posting When Do We Stop a Test provides an update to this column.

Is There A Problem Here?

Better Software, Vol. 10, No. 1, January/February 2008

Suppose you were testing an application that you had never seen before with no time to prepare, no specification, no documentation, no reference programs, no prepared test cases, no test plan, and no other person to talk to. How do you know that what you are seeing is a bug?

What Counts

Better Software, Vol. 9, No. 12, December 2007

In the testing business, we are infected with counting disease—we count test cases, requirements, lines of code, and bugs. But all this counting is an endemic means of deception. How do we know what numbers are truly meaningful?

This article was republished in the 8th edition of Sogeti Spain's QA:News.

How Testers Think

Better Software, Vol. 9, No. 11, November 2007

People think in models and metaphors, which help us make sense of the world and deal with new things. Jerome Groopman's book How Doctors Think provides us with some interesting comparisons between the ways in which doctors diagnose illness in patients and the ways in which testers find problems in software.

McLuhan for Testers

Better Software, Vol. 9, No. 10, October 2007

If a tester is "somebody who knows that things can be different," then Marshall McLuhan was a tester par excellence. According to McLuhan, the English professor who proposed the Laws of Media, the message of a medium is not its content but rather its effects. Every piece of software is a medium, and every medium can be probed with McLuhan's thinking tools. Find out how.

Users We Don't Like

Better Software, Vol. 9, No. 9, September 2007

Mom always said, "If you can't say something nice, don't say anything at all." But I made an interesting discovery when I asked testers to talk about users they don't like. While nobody likes a complainer, listening to what your users are saying—even if you don't like it—can help you spot problems you may have overlooked.

Go With The Flow

Better Software, Vol. 9, No. 8, August 2007

Simplicity in testing is a worthy goal, but in reality it's a messy, complex world. Find out how to defocus your test strategy and use flow testing to follow a specific path through a system's functions, investigating circumstances in which it might fail.

Test Design with Risk in Mind

Better Software, Vol. 9, No. 7, July 2007

Sometimes in testing we find problems that surprise us. And that's where risk-based testing comes in. Build your tests around "What if . . . ?" statements to help you anticipate problems before they arise.

An Arsenal of Answers

Better Software, Vol. 9, No. 6, June 2007

Be ready with an answer the next time you're asked, "How long will it take to test this product?" Delve beneath the surface of the question to understand what your manager really wants to know.

When in Doubt, Reframe

Better Software, Vol. 9, No. 5, May 2007

One often-overlooked testing skill is understanding what our clients are really saying—in addition to the words that actually come out of their mouths. Sometimes reframing a seemingly irrational response can lead to a higher level of communication and a more productive relationship.

The Magic 8 Ball of Testing

Better Software, Vol. 9, No. 4, April 2007

Have you ever been taken in by a test tool that appeared too good to be true? Dr. Ralf Piolo at the University of Bala in Ontario, Canada, showed me such a tool. He called it oClear. After you've read the article, check the publication date—and see if the name "Ralf Piolo" holds any meaning for you. If not, look him up—but not on Google. Try an anagram server.

The Proof of the Pudding

Better Software, Vol. 9, No. 3, March 2007

Want to test a product effectively? There are all kinds of techniques and approaches that might help. In the end, though, if our customers find bugs, they'll mostly find them by using the product. So, in addition to testing our systems by other means, let's use them, diversifying our models of the users, the tasks they perform, and the sequences in which they perform those tasks. It's great to consult the users, to model them systematically, and to understand their interests. But in addition to that, if at all possible, let's use the product ourselves.

One Step Back, Two Steps Forward

Better Software, Vol. 9, No. 2, February 2007

There are two senses of "regression test" floating around: one is "any repeated test"; another is "any test that makes sure that quality hasn't worsened." These categories are orthogonal; a repeated test might fail to reveal a decline in quality, and a test that reveals a quality lapse may be a new test. If we want to test well, we need to understand the role of repetition, and how to make it as useful and as inexpensive as possible.

Rock, Paper, Scissors

Better Software, Vol. 8, No. 11, December 2006

On any project, there's always more information available than one might think at first glance. The trick is to be able to find and exploit those sources of information quickly and consciously. In Rapid Testing, we think about reference, inference, and conference as heuristic sources of knowledge. They're all useful, they're all incomplete, and each may contradict, reinforce, or refine the other. Ultimately, though, there is one final authoritative source for information: the product owner.

More Stress, Less Distress

Better Software, Vol. 8, No. 10, November 2006

If we overfeed the system with input, or if we starve it by depriving it of something it needs, it will break eventually. Where will it break, and how will it break? Stress testing is a family of techniques that we use to find vulnerabilities in the system—weaknesses that may surprise us. That's important, because when a system is in an unpredicted state, it's in an unpredictable state.
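As a toy sketch of the overfeeding idea (my construction, not from the column; the parser and its limit are invented), we can keep doubling the size of the input until something gives, then report where and how it broke:

```python
# Toy sketch of input-overfeeding stress: double the input size until
# the function under test breaks, then report where and how it broke.
def stress_by_overfeeding(func, make_input, start=1, limit=2**24):
    size = start
    while size <= limit:
        try:
            func(make_input(size))
        except Exception as exc:
            return size, exc          # where it broke, and how
        size *= 2
    return None, None                 # survived everything we fed it

if __name__ == "__main__":
    def parser(text):                 # stand-in for the system under test
        if len(text) > 100_000:
            raise MemoryError("buffer exhausted")
        return text.split(",")

    size, failure = stress_by_overfeeding(parser, lambda n: "x," * n)
    print(f"first failure at size {size}: {failure!r}")
```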

Master of Your Domain

Better Software, Vol. 8, No. 9, October 2006

Most programs have an intractably large set of valid inputs, and an infinitely large set of invalid inputs. To Rapid Testers, "domain testing" is focused on dividing and conquering the data—understanding inputs, outputs, and descriptions of everything around the system. That means classifying and sampling the data—and then exploring to expand your classifications and your models.
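Here's a minimal sketch of the classify-and-sample move (the partitions and the quantity field are invented for illustration): divide the input domain into classes, then sample each class at and around its boundaries rather than trying every value.

```python
# Sketch: classify an integer "quantity" field into partitions and
# sample each partition at and around its boundaries, instead of
# attempting every possible value.  Partition choices are invented.
PARTITIONS = {
    "below minimum (invalid)": [-1, 0],
    "valid range 1..999":      [1, 2, 500, 998, 999],
    "above maximum (invalid)": [1000, 1001, 10**9],
}

def accepts_quantity(q: int) -> bool:
    """Stand-in for the system under test."""
    return 1 <= q <= 999

for label, samples in PARTITIONS.items():
    for q in samples:
        print(f"{label}: quantity={q} -> accepted={accepts_quantity(q)}")
```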

Blink or You'll Miss It

Better Software, Vol. 8., No. 8, September 2006

In his book Blink, Malcolm Gladwell points out that snap judgments and rapid observation are central to our decision-making processes. He also notes that sometimes we can improve the quality of our snap judgments by removing information, rather than adding it. Blink testing is the name that I've given to a style of testing in which we dramatically change some aspect of our observation, and then exploit human pattern matching and rapid cognition to see things that might otherwise be invisible.
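For a tiny example of the removing-information move (my sketch, not the column's; the log lines are invented), consider stripping the parts of a log that always vary, such as timestamps and request ids, so that when you scan quickly the odd line out jumps to the eye:

```python
# Sketch of one blink-testing move: blur away the always-varying parts
# of a log so identical events collapse visually; the anomalous line
# then stands out to the eye (or to a quick diff or `uniq -c`).
import re

LOG = """\
2014-05-01 10:00:01 req=8412 GET /orders 200
2014-05-01 10:00:02 req=8413 GET /orders 200
2014-05-01 10:00:02 req=8414 GET /orders 500
2014-05-01 10:00:03 req=8415 GET /orders 200
"""

def blur(line: str) -> str:
    line = re.sub(r"^\S+ \S+ ", "", line)   # drop the timestamp
    return re.sub(r"req=\d+ ", "", line)    # drop the request id

for line in LOG.splitlines():
    print(blur(line))
```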

The Factors of Function Testing

Better Software, Vol. 8, No. 7, July-August 2006

Function testing is, as Cem Kaner puts it, easy. "All you have to do is to identify every function in the program, and then make sure that each one does what it's supposed to do and doesn't do what it's not supposed to do." Yet anything that happens or changes in a program is due to some function. What can we do to model the program's functions sufficiently for good testing?

Test Patterns

Better Software, Vol. 8, No. 6, June 2006

The Heuristic Test Strategy Model (originally developed by my colleague James Bach) identifies nine different families of test techniques—function testing, domain testing, stress testing, flow testing, scenario testing, claims testing, user testing, risk testing, and automatic testing. No single technique can reveal all of the information that we seek about a system, but a variety of techniques will reveal more bugs—and more varieties of bugs.

Time for New Test Ideas

Better Software, Vol. 8, No. 5, May 2006

Things don't just happen in a product; things happen in sequences, at certain rates, over nanoseconds or over years. If they're not on schedule, they might get interrupted or delayed. On business holidays, they might not happen at all. In this column, you'll find five stories and a list of 150 time-related words that you can use to help in generating test ideas for your product. Thanks to Jonathan Kohl and to James Bach for their contributions.

Where in the World

Better Software, Vol. 8, No. 4, April 2006

Can your product be reconfigured quickly, easily, and automatically to work in another location? Does your product—and your test strategy—account for differences in regulations, currencies, time zones, languages, inventory, or culture? When you're dealing with products that make their way around the world, localizability is a key quality criterion that your product and your organization must satisfy.

Taking Our Act on the Road

Better Software, Vol. 8, No. 3, March 2006

When we ask questions about portability, we're asking "Can we take our act on the road? What might we expect—or not expect—as the result of a deliberate choice to change the product's home base?" Portability helps us to think about platform dependencies and other considerations to help us anticipate change.

Maintaining Your Course

Better Software, Vol. 8, No. 2, February 2006

Maintainability is about the capacity of the program to stay the same or to adapt to change when appropriate. The program isn't the only thing that needs to be maintained, though. We also need to think about maintaining the things around the program—things like the tests and the documentation—and the cost vs. the value of maintenance.

Support For Testing; Testing For Support

Better Software, Vol. 8, No. 1, January 2006

In an ideal world, programs would never have problems and so would never need support. Here on Earth in the 21st century, programs and their users do run into trouble. When that happens, one hallmark of an excellent program is that it can be supported easily, and that some of the things that make that possible add to its testability—the capacity for the program to be tested quickly and easily.

More Than One Answer; More Than One Question

Better Software, Vol. 7, No. 9, November/December 2005

"Is this a good product?" is a question with a number of possible answers. One way to answer this question well is to consider a number of possible interpretations of the question. The Heuristic Test Strategy Model provides guidewords to help think about what might (dis)satisfy potential users of the product.

Elemental Models

Better Software, Vol. 7, No. 8, October, 2005

What are the issues that you might consider in order to obtain good test coverage? The Heuristic Test Strategy Model suggests that you consider product elements and questions about the program's Structure, Function, Data, Platform, and Operations (and an element that has been added since this column was published, Time) to develop test ideas. You can commit them to memory and have them available to you at any time by using the nifty mnemonic "San Francisco Depot"—SFDPO (and, more currently, SFDPOT).

Staying on the Critical Path

Better Software, Vol. 7, No. 7, September, 2005

Asking questions about the project environment and elements of its context allows you to understand constraints and resources, and helps to focus your efforts on the testing mission. In this column, I introduce the project environment dimension of James Bach's Heuristic Test Strategy Model, and I suggest a couple of ways of using it to help discover unnoticed information. I also suggest adding your own ideas to the list to reflect specific issues you face in your own projects.

Mission Critical

Better Software, Vol. 7, No. 6, July/August, 2005

Critical thinking is a core testing skill. It guides us to question our assumptions, to consider alternative interpretations of what we think, to seek evidence, and to recognize both similarities and differences in things. This helps us to recognize not only possible problems in the product, but also possible problems in our testing.

Do You Want Fries With That Test?

Better Software, Vol. 7, No. 5, May/June, 2005

We can take lessons on learning to test from learning to cook. In both domains, techniques are valuable but skills inform the how and the why of which techniques to use, and are the centre of excellent work. Most importantly, we both test and cook to serve and to satisfy other people.

The Pleasure of Finding Things Out

Better Software, Vol. 7, No. 4, April 2005

With the first of my regular columns for Better Software, I invoke a muse: Richard Feynman, an inspiring figure for testers. He was curious about how the world worked, inventive, imaginative, resourceful, and playful. He truly took pleasure in finding things out—and so can we.


Interviews

I was interviewed for the site WhatIsTesting.com in June, 2004. You can read the interview here. Vipul Kocher, who runs the site and asked me the questions, was himself interviewed by Danny Faught here.

Michael Hunter interviewed me for his Five Questions series on Dr. Dobb's online site; you can read that one here.

Georgia Motoc has a blog and produces a series of podcasts. She interviewed me at lunch one day in September, 2008. She's posted two segments, one here and the other here.

Here's a silly one: the CMC Media folks interviewed me and Jonathan Kohl at STAR East 2008, just after our presentation on The Angels and Devils of Software Testing. They caught us in costume and in character, and they did a nice slick production job, too. Enjoy!
