Blog Posts from February, 2009

How Can A Trainee Improve His (Her) Skills

Thursday, February 12th, 2009

A blogger on TestRepublic asks “How can a trainee improve his/her skill sets in testing?”

This is what I do. I recommend it to all trainees (or “freshers”, as they say in India).

Find something that interests you, or something that would be useful to you or to a client, or something that you must do, or a problem that you need to solve, or something that you think might be fun. Listen, talk, ask questions, read, write, watch, learn, do, practice, teach, study. Solicit feedback. Practice.

Think critically. Monitor your mental and emotional state. Hang around with people who inspire you on some level. Offer help to them, and ask them for help; more often than not, they’ll provide it. Practice.

Think systematically. Seek the avant-garde. Defocus; look elsewhere or do something else for a while.

Practice. Observe the things in your environment; direct your focus to something to which you hadn’t paid attention before. Seek connections with stuff you already know. Look to the traditional. Refocus.

Learn, by practice, to do all of the above at the same time. Tell people about what you’ve discovered, and listen to what they tell you in return. Recognize and embrace how much more you still have to learn. Get used to that; learn to love it. Repeat the cycle continuously.


It’s the same with any skill set. For me, it has worked for testing; it has worked for playing mandolin; it has worked for being a parent—even though there’s a universe of stuff that I still have to learn about all of those things. When I use the approach above, I make progress rapidly. When I don’t, I stall pretty quickly.

My friend and colleague James Bach has a similar approach for living and learning, and he’s written a book about it. It’s called Secrets of a Buccaneer Scholar: How Self-Education and the Pursuit of Passion Can Lead to a Lifetime of Success.

These approaches are at the heart of the Rapid Software Testing mindset. They’re also a big part of what we try to teach people by example and by experience in the Rapid Software Testing course. It may sound as though there are lots of bits and pieces to cover—and there are—but they all fit together, and we give you exercises and practice in them to get you started. These approaches seem to help people and to inspire them.

At conferences or association meetings, we present some of what we’ve learned in a formal way, but we also get up early in the morning and/or hang out in the pub in the evening, chatting with people, playing games, exchanging puzzles, trading testing stories. When we’re on the road, we try to contact other people in our network, and hang out with them. We blog, and we read blogs. We read in the forums, we write in the forums. We seek out passionate people from whom we can learn and whom we can teach. We point people to books and resources that we think would assist them in their quests to develop skill, and ask them to do the same for us. As a novice, you can do almost all of this stuff right away, and make goals of whatever is left.

In addition to Rapid Software Testing, one of the places that we regularly point new testers is the Black Box Software Testing course, available free for self-study, or in an instructor-led version from the Association for Software Testing. That course, co-authored by Cem Kaner and James Bach, and increasingly refined by collaboration between authors, instructors, and students, will give you lots of excellent knowledge and techniques and exercises.

The skill part—that comes with practice, and that’s up to you.

Getting Them To Do The Work

Saturday, February 7th, 2009

In the Agile-Testing list, Kevin Lawrence says “I share in the fantasy that my business people will write tests and am jealous of those who have turned fantasy into reality but, alas, I have not shared that experience.”

Wanting business people to write tests, to me, feels like a cook wanting the restaurant’s patrons to sauté their own mushrooms.

Dear Madam Business Person, I don’t want to stop you writing your own tests, if that’s what you really want to do, but I’m in the service business; tell me what you want and I’ll be happy to whip it up for you. It’s my speciality, and it’s my job to save you time and effort. I’m happy to collaborate closely, or for you to give me some fairly specific directions, if you like. However, giving you what you want while surprising and delighting and impressing you with information you and your programmers couldn’t quite have conceived on your own—that’s a good day for me.

Quality: Not Merely The Absence Of Bugs

Monday, February 2nd, 2009

“Quality is value to some person.” —Jerry Weinberg

In the agile-testing mailing list, Steven Gordon says “The reality is that meeting the actual needs of the market beats quality (which is why we observe so many low quality systems surviving in the wild). Get over it. Just focus on how to attain the most quality we can while still delivering fast enough to discover, evolve and implement the right requirements faster than our competitors.” Ron Jeffries disagrees strongly, and responds in this blog post.

I think Steven is incorrect, because meeting the needs of the market doesn’t “beat quality”. A product that doesn’t meet the needs of the market (or at least of its own customers) is by definition not a quality product. Steven errs, in my view, by suggesting that “low quality systems” survive. Systems survive when, as buggy as they might be, they supply some value to someone. Otherwise, those systems would die. What Steven means, I think, is that these products fail in some dimensions of quality, for some people, while supplying at least some value for some other people.

Yet I think Ron is incorrect too when he claims that there isn’t a trade-off between speed and quality, for the same reason; speed is also a dimension of quality, value to some person. So if I’ve offended either Steven or Ron, I hasten to point out that I’m probably offending the other equally.

I think it’s a mistake to suggest that quality is merely the absence of bugs, as both appear to suggest. Here’s why:

  • “Some person” is a variable; there are many “some persons” in every project community. The client and the end user are examples of “some person”, but so are the programmers, the managers, the testers, the documenters, the support people, etc., etc.
  • Value to a given person is multivariate (that is, for each person there is a collection of variables, several things that that person might value in varying degrees).
  • Capability and functionality are important dimensions of value to some person(s).
  • Rapid iteration and time to delivery are dimensions of value to some person(s).
  • Security, reliability, usability, scalability, performance, compatibility, maintainability and many other -ilities are also dimensions of quality, some of which may be of paramount importance and some of which are of lower importance to some person(s).
  • The absence of bugs is one (and only one) dimension of value to some person(s), if it’s even that. It’s more accurate, I contend, to think of bugs as things that threaten or limit value. For this reason…
  • We get severely mixed up when we describe “quality” solely in terms of the absence of bugs.
  • The absence of bugs might matter less, much less, than the presence of other things that are valuable. Despite the protestations of some “quality assurance” people who are neither managers nor business people, it might not be insane to value features over fixes. Questionable, I would argue, but among other things, wouldn’t the judgment depend on the severity of the problem and the risk of the fix?
  • The absence of bugs is completely irrelevant if the software doesn’t provide value to some person(s). A bug-free program that nobody cares about is a lousy program.

These differing views of value mean that there will be differing views on the notion of working software (also known as valuable software). How do we handle these different views when they compete? By responding to change with customer collaboration, which happens through interactions between individuals. That’s what “agile” is supposed to mean. It’s not just about shorter morning meetings with no chairs; it’s about human approaches to solving human problems all day long. Well… it used to be, maybe, for a while. Maybe not any more.

So I disagree with Ron when he suggests that there isn’t a trade-off between time to delivery and quality. That’s because time to delivery isn’t distinct from quality either; it’s another dimension of quality. And there’s always a trade-off between all of the dimensions of quality, depending on what people value, who and what informs the ultimate decisions, and who has the power to make those decisions.

I do agree strongly with Ron, though, when he suggests that the presence of problems in the code is a serious threat to many of the other dimensions of quality, and that reducing those problems as early as possible tends to be a good investment of time. In his blog post, he has articulated numerous ways in which those problems threaten the ability of the programmers to do valuable work. Test-driven development and unit tests are tremendously powerful ways of avoiding these problems. Collaboration and technical review—pair programming, walkthroughs, inspections, knowledge crunching sessions (as Eric Evans calls them in Domain-Driven Design)—not only help to prevent problems, but also afford opportunities for people to learn and exchange knowledge about the product.

I’m not in the business of telling programmers how to do their work, but as a tester I can say that problems in the code threaten the quality of our work too. Specifically, they constrain the testers’ ability to provide value in the form of knowledge about the product. Testers are often asked, “Why didn’t you find that bug?” One plausible answer is “because we were so busy finding other bugs.” Well-programmed code, already tested to some degree by the programmers themselves, is enormously important for the testers. Bugs block our ability to observe certain parts of the program, they add uncertainty and noise to our observations of the system, and they cause us to spend time in investigation and reporting of the problems. This represents opportunity cost; bug investigation and reporting compromises our capacity to investigate and cover the rest of the test space, which in turn gives bugs more time and more places in which to hide out.
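The opportunity cost described above can be sketched with a back-of-envelope model. All of the numbers here are assumptions chosen for illustration, not measurements from any real project:

```python
# Back-of-envelope model of bug investigation as opportunity cost.
# All figures below are illustrative assumptions, not measured data.

SESSION_MINUTES = 90           # length of one test session (assumed)
MINUTES_PER_BUG = 20           # investigation + reporting per bug (assumed)

def coverage_minutes(bugs_found: int) -> int:
    """Minutes left for covering new test space after bug handling."""
    return max(0, SESSION_MINUTES - bugs_found * MINUTES_PER_BUG)

for bugs in (0, 2, 4):
    print(f"{bugs} bugs found -> {coverage_minutes(bugs)} minutes of coverage")
```

Under these assumed numbers, a session that turns up four bugs leaves barely a tenth of the time for exploring new territory that a clean session would—each bug found is time not spent looking for the next one.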

So a well-tested program can be explored more quickly. Having a hard time persuading your manager or your (cough, cough) Scrum team? This presentation on finding bugs vs. coverage sets out the problem from the point of view of the testers. As usual, the business decisions are for those who manage the project. It’s up to us—the programmers, the testers, and the other developers on the project—to present the technical risks in the context of the business risks. It’s up to all of us to collaborate on balancing them.