Blog Posts from November, 2008

Interviewing the Program

Sunday, November 30th, 2008

Testing, in the quick definition that James Bach and I use, is questioning a product in order to evaluate it. One way of questioning the product is to ask ordinary questions about it. Another is to operate it—supplying it with input of some kind, and exercising it in some way. The product “answers” us by producing output or otherwise exhibiting behaviour, which we observe and evaluate. Yet another way of questioning a product is to ask questions about some dimension of it. If we want to do excellent testing, it helps to remember that our product is not merely a program consisting of code that our team wrote. The product is a system. That system includes the program, the platforms upon which the program depends, the person using the program, and the task or business process that the program enables. The system also includes everything that we know about it.

In the agile community, there appears to be a strong focus on writing tests, often before the code is written. For programmers, especially those using Test-Driven Development—a design activity and a programming activity—this makes a good deal of sense. The TDD experts suggest writing a test, and then writing the least amount of code possible to make the test pass. If the test doesn’t pass, refine the code until the test passes (along with all the previous tests), and then repeat the process. Add tests and code as you develop an understanding of what needs to be done next. That’s a fundamentally exploratory approach to writing code: learning things and applying that learning as you go.
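
To make that cycle concrete, here’s a minimal sketch in Python. The function, the test, and the ten-percent discount rule are all hypothetical, invented purely to illustrate the rhythm of test first, then code:

    import unittest

    # Step 1: write a test for behaviour that doesn't exist yet.
    # (apply_discount and its discount rule are invented examples.)
    class DiscountTest(unittest.TestCase):
        def test_ten_percent_discount(self):
            self.assertEqual(apply_discount(100.0, 0.10), 90.0)

    # Step 2: write just enough code to make the test pass.
    def apply_discount(price, rate):
        return price * (1 - rate)

    # Step 3: run the tests. When they pass, write the next test,
    # refining the code and the design as understanding grows.
    if __name__ == "__main__":
        unittest.main()

Each pass through the cycle adds one question and just enough code to answer it; the growing suite keeps the earlier answers honest.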

For testing, though, preparing too many questions too far in advance is a risky business. Here are a few reasons why that’s so.

First, the programmers and the rest of the development team will learn things as the program is being developed. This will likely cause some stories or requirements to be reinterpreted, reframed, or rejected. Other requirements will be discovered, as we realize that our initial understanding was incomplete (and I’ll argue that it is almost always incomplete).

Another problem: when we’re preparing specific tests for something in the future, we’re not testing what’s available to us right now, and that testing may be tremendously important. It may not be code that we have to test; it may be requirements or specifications or designs or prototypes. But, in general, questioning—that is, testing—something that we have now tends to be more productive than questioning something that might be, some day.

Maybe the most significant problem is a mindset that I see in discussions of agile development: “we write the acceptance tests first, and when the program passes those acceptance tests, we know we’re done.” This is a dangerous idea, especially for programs with which humans interact. When we say that a program that passes its acceptance tests will be acceptable, we presume implicitly that we know, in advance, all the questions that we will eventually want to ask of the program. We presume implicitly that the program needs only to answer those questions in order to be considered acceptable. Yet our purpose in testing should not be merely to confirm that the program can pass some tests. That a program can work isn’t such a big deal; our clients would like to know that the program will work. Our purpose, therefore, should also include identifying problems that threaten the value of the program. It could also include discovering things about the way that people might use the program, such that we can provide new features, modify existing ones, or propose workarounds for problems that the program cannot yet solve. These are not confirmatory processes, but investigative ones. So in addition to (and maybe instead of) the suite of confirmatory tests, we need to have real humans interacting with and exploring the program.
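
As a tiny illustration (again in Python, with an invented function), consider a confirmatory check that passes while leaving the interesting questions unasked:

    # A hypothetical confirmatory check: it passes, so the bar is green.
    def format_balance(amount):
        # Return an account balance as text for display to a customer.
        return "Balance: " + str(amount)

    assert format_balance(100) == "Balance: 100"  # green!

    # A human exploring the program might immediately ask questions
    # that this check never asks: What about negative amounts?
    # Currency symbols? Thousands separators? Rounding of cents?
    # "Balance: 100" can satisfy the assertion and still confuse a
    # real user.

The check confirms one answer to one question; the problems that threaten value tend to live in all the questions we didn’t think to script.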

Think of it in terms of being a technical recruiter, inside a bank, qualifying a candidate for a programming job. In consultation with the hiring manager, you determine that the ideal candidate has experience with C++ and Java, has worked in the financial industry, and speaks English. The candidate should also have knowledge of advanced data mining techniques, statistical programming experience, and excellent references. You search through your database of applicants, and you find someone who indeed meets all of these requirements. Her résumé shows that she’s got the languages and the experience, and her references check out. That is, the candidate already passes all of your acceptance tests. What happens next? Are you going to hire that person right away?

Or will you interview her? Get a team of people, with different priorities and different points of view, to interview her? Ask to see a portfolio? Put her through an audition process, in which she demonstrates some knowledge and some skill in applying it? Do you ask unscripted follow-up questions, based on the answers you get, the potential problems that you observe, and the concerns that you might have? Do you vary your suite of questions from candidate to candidate? Are your questions designed to confirm what you already know about the current candidate, or do you intend to probe—that is, test—the candidate’s qualifications?

Excellent software testing is like an excellent qualification and hiring process. When we seek to fill an important position, we don’t go by a mere checklist of attributes and hire the first person who can check each box. We don’t typically want someone who can work exclusively on one problem in one context. In most organizations, I would argue, we’re mostly looking for someone who has the skills and the maturity to adapt reasonably and responsibly to new situations and new problems. Our interview processes should reflect that. Similarly, when we test a piece of software, we’re not looking for something that can work exclusively for one person, on one platform, with one data set, in one context. Instead, a big part of our job is to show that the software will work for many people, using many platforms, dealing appropriately with whatever data we throw at it, in a multitude of contexts. Therefore our tests should not reflect merely our notion of what the program was going to be. Our tests should reflect everything that we know about the system so far, up to the result of the last test, up to this moment: what the program has become as we’ve developed it, and as we’ve learned.

So when you’re developing a product, are you going to fall for the narcotic comfort of the green bar—or are you going to test?

Going to Vancouver

Tuesday, November 25th, 2008

I’m off to Vancouver, British Columbia, teaching Rapid Software Testing to a corporate client the week of December 8. Live near there? Want to get together and chat about testing? I’ll be there Monday through Friday nights. Drop a line to me at mb@michaelbolton.net.

We Won An Award!

Monday, November 24th, 2008

I’m ambivalent about honours. Recognition is nice, but I’m skeptical about the notion of winning over other worthy nominees. Nonetheless, at the EuroSTAR conference, I accepted the inaugural EuroSTAR 2008 CapGemini Award for Innovation, recognizing the most innovative track session, for my talk Two Futures of Software Testing. We won!

Who won? Well, I did for the presentation itself, but James Bach, Cem Kaner, and Jerry Weinberg share credit for the themes and key points of the presentation. Thanks to them.

The award was voted on by EuroSTAR’s attendees, so they won too. I’d like to believe that they voted less for the presentation itself and more for what it offered: a bright future of testing, in contrast to the dark future that is so much like today. So thank you to the attendees.

Thank you also to Bob van de Burgt, EuroSTAR 2008’s Conference Chair; to the EuroSTAR staff; to CapGemini; and to all of the people with whom I’ve had such interesting conversations over the years. About the future: we can’t predict it, but we’re all in it together.

Heuristics Art Show, EuroSTAR 2008

Thursday, November 13th, 2008

Galvanized by Jerry Weinberg’s workshop on experiential learning at AYE 2008, I led a tutorial at EuroSTAR 2008 that included an experiential exercise invented by my colleague James Bach. I call it The Heuristics Art Show.

In small groups, people contributed, discussed, and refined headlines and descriptions of some of their heuristics, mostly to do with testing, but also to do with other aspects of life and software development. It was wonderful to tap the collective wisdom and experience in the room, and I think the results were marvelous. Many thanks to all who contributed to the exercise.

The pictures are up there in high-res form. Some of them are a little blurry, but they’re all readable if you download the high-res version. One fine day I hope to transcribe them—or maybe a Kindly Contributor could do it.

This kind of exercise will happen again at future conferences, to be sure!

The Art Show approach reminds me of the Positive Deviance Initiative—a bottom-up, practice-based approach to process improvement. Wanna get better results in a hurry? Don’t bring in the massive, unreadable tomes of “maturity” models; have real people, doing real jobs, share their practices with each other.

Here’s a great example. The problem was that, in the Albert Einstein Medical Center, contaminated hospital gowns were overflowing the trash bins. When people brushed against those gowns, there was a risk of picking up the contamination and spreading it to other patients. A fellow in the patient escort department had a beautiful solution to the problem. That solution is now known as the Jasper Palmer Method.

EuroSTAR 2008 Heuristics Tutorial 1

Schools of Testing and Schools of Music

Monday, November 10th, 2008

There’s been a lot of controversy about the schools of software testing lately, in Paul Gerrard’s blog here and here and here; in James Bach’s blog here and to some extent here; and on the software-testing mailing list. I also had a pleasant chat with Paul Gerrard at coffee break and lunch today at EuroSTAR 2008.

Jonathan Kohl and I did a paper on the parallels between testing and music at CAST 2008; you can find it in the .PDF of the proceedings. Maybe something about music can offer us a way out of the dilemma.

There are lots of ways of approaching music, as a performer, a listener, or a critic. (I use the word “critic” in the sense of someone who tries to understand, describe, and contextualize the work, not in the sense of someone who tries to disparage it, although these are often confused.) These different approaches are sometimes called styles, or forms, or traditions. They may be informed by a certain kind of thinking, certain aspects of practice, certain instrumentation. Specific pieces of work and specific composers are considered by their communities (or by others, or by themselves) as exemplars of these styles. Some play music just for fun. Some play on an amateur basis, but are deeply committed to the pursuit. Some play professionally, but, as with all kinds of working people, some of the pros may be ambivalent about their commitment to the art. Some people talk casually about the styles, the pieces, and the artists. Others—the critics—are more serious, and study the styles, typically focusing on one or another of them. The quality of their criticism is conditioned at least in part by the ways in which they consider the similarities and differences.

Some artists choose to categorize themselves as practitioners of a specific style (“I’m a blues guy”; “I’m a classical musician.”) Some artists, not wanting to be pigeonholed, refuse to categorize themselves. Yet categorization happens anyway, sometimes by admirers and sometimes by detractors.

To me, to reject or even to neglect the differences between one kind of music and another is silly, if you’re trying to become a better student of the field. Classifications can help us to understand the differences and the similarities between one form of music and another—or they can be used to reject some forms. “That’s just a bunch of noise!” When her work is labelled that way, the artist has the option to reject the statement outright, essentially ignoring it, or to engage the criticism by providing counter-arguments as to why someone might value this style or this piece. And so we learn.

Within an established tradition in music, there are three rough groupings. Some artists recognize other styles and incorporate them into their work. These artists tend to be in the avant-garde, which pushes the boundaries of the style to some degree. Other artists tend simply to work within the style, often pretty much ignoring the edges or the roots. Then there are the staunch traditionalists—those who believe that every innovation in a genre after a certain point in history is an accretion and a mischief.

So it is with testing. The notion of schools (call them what you will—styles, camps, religions, bodies of thought, cultural frameworks) is a notion that can help us to frame discussion and to identify different approaches to testing in theory and in practice. People can identify context and choices, with the goal of explaining or understanding their own styles or those of others. There may be controversy between the schools—their adherents, their detractors, and those who have to watch—but the idea that this kind of categorization shouldn’t exist, or should be considered prohibited speech, strikes me as silly.

In my conversation with Paul today, Paul compared the use of schools to a kind of bigotry or racism. That risk is there, but as with differentiating between cultures, it depends on how you intend to use the distinctions. The question is not whether there are differences; there are, they’re real, and it can be handy to identify them. I observed that, far from being inherently antagonistic, the schools concept could allow us to be more polite to one another. As an example, Paul suggested (not too seriously) that if you’re not being context-driven, you’re stupid. I could agree with that, but it might be more productive to say that if you’re not context-driven, you might be driven by analytical-school values. This reminded me of Jerry Weinberg’s advice that when you think someone is being irrational, reframe your position to think of them as being rational from the perspective of a different set of values.

Have you noticed that some people who are staunch advocates of equivalence class partitioning and boundary value analysis seem virulently opposed to the idea of differentiating between schools of thought?

Schools can go away… when we all think alike

Thursday, November 6th, 2008

In a recent blog post, Paul Gerrard wants to reject the idea of schools of software testing as defined by Bret here. To me, this means that he belongs to a school of thought that suggests that there shouldn’t be schools of thought about software testing. That’s different from my school of thought, so I guess we’re in different schools of thought, at least on that issue.

Paul argues that “no one wants to be part of a school”. That’s clearly false, since many people seem to identify themselves as members of the context-driven school. (If it makes Paul feel better to call it the context-driven community, that’s fine with me.) I think, maybe, that Paul means no one wants to be tagged as a member of a school against his or her will, or by someone else. That’s fair enough; it’s okay not to want that. But understand that it’s going to happen anyway, one way or another, any time that anyone wants to be reasonable about disagreements in thinking and approaches. An alternative is to say that people are wrong, or crazy, or idiots. I prefer to suggest that they’re in a different school of thought from mine, and for convenience I might label that school of thought when there are two or more people in it. Boris Beizer has called our community “the touchy-feely school”. Actually, I believe we’re more like the “thinky-valuey” school, but Boris can call us whatever he likes without it really bothering me. He’s not of my school, after all.

Paul’s second objection is that the four (five) schools are stereotypes that don’t align with reality. Well, they are if you choose to take them literally. But Bret’s classification scheme is a model. All models are wrong, but some are useful. And models can be modified. Maybe there are many more schools of thought in testing than Bret describes. Maybe there are only two—context-driven and not. Different people may have different models. On the whole I’ve found Bret’s model useful in describing the approaches some people might choose and the choices some people might make in a given context. If it helps, you could turn them into adjectives without labelling people: “That’s a factory-school approach” or “That’s quality-police thinking.” There are certainly schools of software development: Agile and non-Agile, at least. There are schools of belief. Are there not schools of software testing?

Apparently there are. Paul notes in his post that he got together with a group of “testing friends” who agreed that practices from each of the schools might be appropriate some of the time. Note that it was a group of his testing friends who agreed. Are there people in the world who think differently about testing from Paul and his friends? What would happen if he were to have this chat with a bunch of people who disagreed with him about choosing practices? He says, “Call us the ‘some of the time school’, or maybe the ‘appropriate’ school or maybe, the ‘it depends on the context’ school. Whatever. We choose to adopt appropriate practices depending on our context.” That sounds like he wants to be a member of the context-driven school, and not some other. Yet Paul has also proposed axioms, universal premises about testing that apply in all contexts—which to me is not a very context-driven thing to propose. So what will we do about that? Well, we’ll talk it out like colleagues and he’ll drop the idea, or we’ll agree to divide into different schools of thought that one or the other of us might choose to label. Either conclusion would be okay.

In a different post, Paul says “If I believe ‘high process’ approaches have their place in some projects – does that preclude me being context-driven?” No; that’s of the essence of context-driven thinking. He says “But a continuing theme of these schools discussions is that high-formality approaches are the result of crooked thinking.” That’s not the way I see it; the belief that high-formality approaches are universally appropriate is the result of crooked thinking, just as the belief that they’re universally inappropriate would be too. That’s part of what the context-driven school is all about: rejecting the notion of practices that are universally best, in favour of adopting and adapting ideas from other schools of thought. Want to be a good context-driven thinker and have some fun with this? Consider at least three contexts in which context-driven thinking would be a bad idea.

Back to the more recent post. Paul claims that “NO ONE ACTUALLY behaves the way that the hackneyed stereotypes would have you believe…. The schools don’t work because the proposed stereotypes do not align with the behaviours of people in real projects.” Well, I’ve been to a large number of places, all around the world, and I observe that Paul is incorrect. There are people who claim that you can’t test, ever, without complete, consistent, unambiguous requirements and a detailed written test plan, and that you’re morally deficient if you try it. There are people who believe that completely automated unit and acceptance tests are the be-all and end-all of good testing—”the Holy Grail”, as appeared on one mailing list lately. There are some people who claim that testers should be the gatekeepers of quality. There are some people—academics, mostly—who believe that programs can be proven correct, and for whom questions about value are off the table. There are people who believe that “maturity” means “doing the same things the same way every time” instead of “being able to adapt your process and your practices reasonably and responsibly to changing context.”

You’ve seen some of these people speaking at conferences, Paul, and on occasion I’ve seen you challenge them. You don’t belong to their schools, nor they to yours.

I’m Published!

Tuesday, November 4th, 2008

I’m delighted to announce that my first contribution to a book debuted today.

The book is called The Gift of Time. It’s a collection of essays honouring the life and work of Jerry Weinberg on the occasion of his 75th birthday and his 50th year in the computing business. The book was edited by Fiona Charles, and features contributions by many of Jerry’s colleagues and students: Robert L. Glass, James Bach, Sherry Heinze, Sue Petersen, Esther Derby, Willem van den Ende, Judah Mogilensky, Naomi Karten, James Bullock, Tim Lister, Johanna Rothman, Jonathan Kohl, Bent Adsersen, Jerry’s wife Dani Weinberg… and me. That’s a rare list, and I’m honoured to be among the people on it.

The essays that I have read so far are wonderful—James’ “The Prince of Testers”; Jon Kohl’s “Generational Systems Thinking”; and Tim Lister’s “The Consultant’s Consultant”. I’m eager to get to the rest of them.

The Gift of Time is published by Dorset House, publisher of Jerry’s own work. The book is not yet available unless you’re at the AYE Conference; it’s so new that it doesn’t yet appear on Amazon (please don’t confuse it with works of a similar name), nor even on Dorset House’s own Web site—which explains why there’s no link to the book here. That’ll come.

Many thanks to Fiona for putting the book together, and for including me in the project. Many thanks to Wendy Eakin who got it published in time for Jerry’s birthday and AYE.

Happy birthday, Jerry! Arigato, and namaste!

Fair enough

Tuesday, November 4th, 2008

George Dinwiddie told me a wonderful story at the AYE Conference last night. He was working with a group of developers at a company with several development groups. He coached them in implementing test-driven development and unit testing, and he emphasized to the programmers the importance of delivering well-tested code to the system testers. The results were impressive. The testers found dramatically fewer problems than usual—only one bug that was classified as high severity by the project owners, and other than that the product was given a clean bill of health.

You might think that the other programmers in the company would have been impressed—but you’d be sadly mistaken. The other programmers said,

“Hey, no fair! They tested ahead of time!”

Well, uh… yeah.