Blog Posts from October, 2008

Adding Value, Revisited

Tuesday, October 28th, 2008

A while back, I wrote a post on breaking code, in which I suggested that testers don’t add value to a project; they merely defend the value that’s there. That isn’t a general systems law, but any general utterance deserves a couple of specific exceptions or alternative views. So, with the passage of time, here are a couple.

First, a while ago, I was chatting with James Bach, and we realized that testers can sometimes add value by recognizing new ways of using the product, or by proposing refinements to what’s already there. Two wonderful examples were Quarterdeck’s products Manifest (which allowed you to inspect the state of the system) and Optimize (which arranged programs in memory so that they consumed as little as possible of the memory available to the operating system. Yes, that used to be important.) Testing these products in an exploratory way revealed plenty of circumstances in which they could be improved, or could be used in novel ways. That information triggered decisions and development work that added value.

In another case, I was on a documentation project for a network analysis program that had been purchased by a software publisher. Since the existing documentation was shoddy and out of sync with the version that I had been asked to document, and since the third party’s developers were no longer available, my principal source of information was the product itself. I had to test it in an exploratory way, and in so doing, I found (and documented) ways of using the product that neither the developers nor the marketers had recognized. That information pointed to marketing opportunities that added value.

In these two examples, the testers didn’t add value directly, but they shone light on places where value might be added.

Another way of thinking about testers adding value is that the product isn’t just the running code. Indeed, as Cem Kaner points out, a computer program is not merely “a set of instructions that can be run on a computer.” That’s too limited. Instead, he says, a computer program is “a communication among several people and computers, distributed over distance and time, that contains instructions that can be run on a computer.” I love this definition, because if we honour it, it takes us way beyond questions of mere correctness, and straight to questions about value.

Does the program help the user to produce or accomplish something of value? Does the program include anything that would threaten the production or accomplishment of that value?

The system we’re testing is not merely the code or the binaries; the system is the sum of that stuff plus everything we know about it: how it can be used, how it can work, and how it might not work. When testers add to the store of knowledge about the system, then testers indeed add value. That might not be easy to quantify, but there are lots of things in the world that we value in qualitative ways.

Don’t believe me? Think of a manager you’ve liked working for, a teacher who inspired you, a friend you’ve treasured, and consider how little mere numbers would tell you about how and why they were valuable.

Artists on Software Development

Tuesday, October 28th, 2008

I heard two wonderful things on the CBC today, both of which relate to this business of software development.

One was on the radio, on an arts magazine called Q, hosted by the urbane Jian Ghomeshi. He was interviewing the winner of the 2008 Pulitzer Prize for Fiction, Junot Diaz. At one point, Diaz said something close to this:

Appearances are not what matters…what’s the reality? As an artist, I’m not a corporate shill. What matters to me as an artist is to look into the things that the culture doesn’t want to talk about.

Does the quote remind you of the role of the tester?

The whole show is available as a podcast; the interview starts at about 25 minutes in.

The other wonderful quote was from The Hour with George Stroumboulopoulos. Charlie Kaufman, writer of Adaptation and Eternal Sunshine of the Spotless Mind, and writer/director of Synecdoche, New York, said something close to this:

Failure isn’t a negative thing; it’s just a possible result of trying to do something that you don’t yet know how to do.

Does that remind you of art in general? Of the whole business of developing software, especially in agile contexts?

Once again: the CBC is one of the things that makes living in Canada a wonderful thing.

Adam Smith on Scripted Testing

Thursday, October 23rd, 2008

While reading Tim Harford’s excellent book on economics, The Logic of Life, I found this quote from Adam Smith:

“The man whose whole life is spent in performing a few simple operations … has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur. He … generally becomes as stupid and ignorant as it is possible for a human creature to become.”

This is why it’s important, I think, to drop the idea that it’s wise to guide (or just as bad, to train) a tester with a thick book of specific steps to follow. We don’t teach people to drive that way. People don’t learn much when they’re disengaged from the decisions about their processes and their learning.

Instead, guide testers with coaching, mentoring, collaboration, and concise documentation of coverage, oracles, risks, and test ideas. Give testers authentic problems to solve. Encourage them to explore, to invent, to adapt, and to act on their learning. Be flexible as they learn to report on their discoveries, and encourage multiple modes of communication. Evaluate them by direct observation, personal supervision, and rapid feedback. Challenge them to explain, defend, and justify their decisions. When you believe those decisions are something less than optimal, suggest better alternatives rather than dwelling on failures. Reward testers with trust and responsibility for increasingly challenging work. It’s a virtuous cycle.

If you need something to perform simple operations that don’t foster understanding or invention, by all means hand those operations over to a machine, but only if you’re sure that they’re worth doing at all.

Questioning questioning questioning

Sunday, October 19th, 2008

Shrini Kulkarni reports on a conversation that he had with Rex Black. Shrini apparently offered a definition of testing, developed by James Bach, that we use in our Rapid Software Testing course: testing is questioning a product in order to evaluate it. Rex didn’t agree with this definition. “Questioning a lifeless thing like software is bizarre. I cannot question my dog,” said Rex.

Despite the fact that a statement is a lifeless thing, let’s go through the something-less-than-bizarre process of questioning it.

Is Rex’s dog lifeless? I don’t know. Maybe it is lifeless; maybe it’s dead. Maybe it’s a stuffed toy. Or perhaps Rex means that asking a question of something that isn’t sapient is bizarre.

Is it bizarre to talk to something that isn’t sapient? Maybe. But some people do talk to their dogs, and some people swear at their computers. So maybe Rex means “not helpful”, rather than “bizarre”. Maybe he means that it’s bizarre to talk to something that can’t hear you, or that won’t answer you back in English.

Is it bizarre to ask questions of something that can’t hear you? Maybe. But is it bizarre to ask a deaf person a question, using sign language?

Is it bizarre to ask a question of something that doesn’t have a tangible existence or that doesn’t answer back in a way that you can hear? Maybe. But is it bizarre to pray? Is it bizarre to question your beliefs?

Is it bizarre to ignore a common use of the verb “to question”? Well, now I think we might be on to something. The Concise Oxford English Dictionary says this:

Question: v. tr. 1 ask questions of; interrogate. 2 subject (a person) to examination. 3 throw doubt upon; raise objections to. 4 seek information from the study of (phenomena, facts).

If you want to go hyper-literal, “questioning the product in order to evaluate it” is an example of the transitive meaning exactly expressed by 4 above. And 3, and 2, I suppose. But in the Rapid Software Testing course, James and I also suggest that 1 is feasible with software, in a sense. In oratory and in the course notes, we say that we ask questions of the product. “Of”, in this sense, can mean about; that is, we can ask questions about the product. That’s consistent with Oxford too:

Question: n. 1 a sentence worded or expressed so as to seek information. 2 a doubt about or objection to a thing’s truth, credibility, advisability, etc. (allowed it without question; is there any question as to its validity?). b the raising of such doubt, etc. 3 a matter to be discussed or decided or voted on. 4 a problem requiring an answer or a solution. 5 (foll. by of) a matter or concern depending on conditions (it’s a question of money).

But we can also ask the product questions directly, in a less than literal sense. In the course material, we say:

The “questions” consist of ordinary questions about the idea or design of the product, or else questions implicit in the various ways of configuring and operating the product. (Asking questions about the product without any intent to operate it is usually called review rather than testing.) The product “answers” by exhibiting behavior, which the tester observes and evaluates. To evaluate a product is to infer from its observed behavior how it will behave in the field, and to identify important problems in the product.

(The quotation marks appear in the original text; the emphasis is mine.)

Rex and I have had a number of chats and, on at least one occasion in Canberra, a very pleasant dinner. We disagree on some of our approaches to testing. That’s okay; reasonable people can disagree reasonably. He deals respectfully with these differences in his most recent book, and kindly acknowledges me and my work, for which I’m grateful.

One reason that bugs in software exist is that people have multiple interpretations of our world and of the words that we use to describe it to one another. In particular when we’re dealing with idea-stuff—things that don’t have a physical existence—alternative interpretations abound. We can use the same words to mean different things, or we can use different words to mean the same thing. For example, as we say in the course:

Cem Kaner prefers this definition (of testing): “Software testing is a process of empirical, technical investigation of the product under test conducted to provide stakeholders with quality-related information.” We mean no disagreement here; Kaner’s definition means the same thing as our definition in every material respect. However, we sometimes prefer more concise wording.

Successful testing depends on embracing expansive definitions, generating expansive models, considering open possibilities, reflecting on alternative assumptions—and then questioning what we think we know. That’s right: we can question assumptions, even though assumptions are lifeless things.

Here’s the thing: I don’t expect Rex to accept anyone’s definition of software testing; the choice is up to him. That’s cool. But I can’t understand how Rex, an apparently smart guy and the past president of the International Software Testing Qualifications Board, can see the world so narrowly, failing to recognize such a common usage of such a common word. What’s up with that?