Blog Posts from April, 2007

Do You Need More Testers? A Context-Driven Exercise

Sunday, April 29th, 2007

A discussion started recently in comp.software.testing about industry best practice:

When creating a complicated web based application from scratch, how many testers per developer would be considered a best practice? I have heard 1.5 testers for every developer. What are your thoughts on this?

My (lightly-edited for this blog) response was…

  • If you want to find all the bugs, 100 testers per programmer would be much better than 1.5 testers per programmer. Your customers might consider this impressive (if they were willing to pay), but your CFO would freak out. And there’s still no guarantee that you’ll find all the bugs.
  • If you want to keep costs low, 0 testers would be much better than 1.5 testers per programmer. It might even work for you, but do you have sufficient confidence that important questions about your product have been asked and answered?
  • If you want to keep costs really low, 0 testers per 0 programmers would be better yet.

We haven’t yet talked about the skills of the testers or the programmers involved. We haven’t talked about the business domain and the attendant levels of risk. We haven’t talked about whether you account for test managers and admins as testers, nor whether your programmers should be counted as testers while they’re testing. (We also haven’t talked about the ugliness that would ensue if you took this 1.5 testers per programmer “best practice” literally for a team of three programmers—which half of the fifth tester would you want to keep? Hint: pick the end with the head.)

One of my points is that this is an unanswerable question without more information about the context: experience with the company and the development team in the business and technical domains? budget? schedule? co-location of programmers and testers? the mission of testing? Another point is that, irrespective of the answers to those questions, “best practice” is a meaningless marketing term, except for the meaning “something that our very large and expensive consulting company is promoting because it worked for us (or we heard it worked for someone) zero or more times”. “Industry best practices” is even worse. What industry? If you’re developing Web-based billing systems for a medical imaging company, are you in the “Web” industry, the “software” industry, the “medical services” industry, or the “financial services” industry? I wrote about “best practices” for Better Software Magazine; you can find the article archived at http://www.developsense.com/articles/2004-09-ComparativelySpeaking.pdf.

Skilled testers help to mitigate the risk of not knowing something about the system that we would prefer to know. So: instead of looking at the number of testers as a function of the number of programmers, try asking “What do we want to know that we might not find out otherwise? What tasks might be involved in finding that stuff out? Who would we like to assign to those tasks?” And if you’re still stuck, get a tester (just one skilled tester) to help you to ask and answer those questions.

One reply on the thread went like this:

…in the end what counts is the defect rate. If the shop is using testing to improve reliability and the released defect rate is unacceptable, you need more testers. If the shop is using testing to monitor the development process and the testing defects found in your fault categories are too small to be statistically significant, then you need more testers.

That might be reasonable so far as it goes. However, here’s something that the context-driven crowd does to sharpen our mental chops. We take statements like these and apply to them the Rule of (at Least) Three (which comes from Jerry Weinberg’s writing and consulting): “If you can’t think of at least three alternatives, you probably haven’t thought about it enough.” When we want a moderate workout, we take (at Least) Three to (at Least) Ten.

So above we see one possible approach to reducing the released defect rate. Can we think of at least nine others?

  1. You don’t need more testers; you need better-skilled testers (fewer, even) who are capable of identifying broader coverage and more efficient oracles.
  2. You don’t need more testers; you need product managers who aren’t so quick to downplay the significance of bugs and defer them.
  3. You don’t need more testers; you need better programmers.
  4. You don’t need more testers, and you don’t need better programmers; you need to use a test-driven development strategy.
  5. You don’t need more testers; you need to stop wasting time on writing scripts, express risks and test ideas more concisely, and trust your testers to perform their own tests that address the risks (and document them on the fly).
  6. You don’t need more testers; you need your testers to spend less time in meetings.
  7. You don’t need more testers; you need closer relationships between tester, programmer, customer, and project management.
  8. You don’t need more testers; you need your program to be more testable, with scriptable interfaces, logging, and on-the-fly configurability (see the sketch after this list).
  9. You don’t need more testers; you need more frequent development and test cycles to reduce the lag time between coding, finding, and fixing bugs.
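
To make item 8 a little more concrete, here’s a minimal sketch, in Python, of what scriptable interfaces, logging, and on-the-fly configurability might look like inside a product. The names and the billing example are hypothetical; the testability hooks are the point, not the arithmetic.

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("billing")

    class BillingEngine:
        """A stand-in product component with testability hooks built in."""

        def __init__(self, config=None):
            # On-the-fly configurability: behaviour is driven by data supplied
            # at runtime, so a tester can vary tax rules without a rebuild.
            self.config = config or {"tax_rate": 0.13}

        def calculate_invoice(self, line_items):
            # Scriptable interface: plain data in, plain data out, so a test
            # harness can drive the engine without going through a GUI.
            subtotal = sum(item["price"] * item["qty"] for item in line_items)
            total = round(subtotal * (1 + self.config["tax_rate"]), 2)
            # Logging: leave a trail, so a suspicious total can be traced.
            log.info("subtotal=%.2f tax_rate=%.2f total=%.2f",
                     subtotal, self.config["tax_rate"], total)
            return total

    # A test script can now probe the engine directly:
    engine = BillingEngine(config={"tax_rate": 0.05})
    assert engine.calculate_invoice([{"price": 10.00, "qty": 3}]) == 31.50

None of this requires more testers; it requires a program that co-operates with the testers you have.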

And here are a few more for free. Some of these might be good ideas, and some might be pathological, but all of them are approaches to fixing “defect escape” that I’ve seen before.

  1. You don’t need more testers; you need to make it more difficult for your customers to report defects, so as to avoid embarrassment to those responsible.
  2. You don’t need more testers; you need to get out of the software development business altogether.
  3. You don’t need more testers; you need to remove system features that were rushed into development.
  4. You don’t need more testers; you need to run on fewer platforms.
  5. You don’t need more testers; you need to start testing at a lower level of the product.
  6. You don’t need more testers; you need to reduce your emphasis on developing automated activities that don’t really test the product.
  7. You don’t need more testers; you need to increase your emphasis on developing automation that can perform high-volume, randomized, long-sequence performance and stress tests (a minimal sketch follows this list).
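
In case item 7 is unfamiliar, here’s a minimal sketch of a high-volume, randomized, long-sequence test that checks the product against a trivially simple model at every step. The object under test here is just a stand-in; the shape of the loop is the point.

    import random

    def high_volume_random_test(iterations=100_000, seed=42):
        # A recorded seed makes any failure reproducible.
        random.seed(seed)
        under_test = []   # stand-in for the real object under test
        model = []        # a trivially-correct model to compare against
        for step in range(iterations):
            if model and random.random() < 0.5:
                assert under_test.pop() == model.pop(), f"mismatch at step {step}"
            else:
                value = random.randint(0, 1_000_000)
                under_test.append(value)
                model.append(value)
            # Oracle: the object under test always mirrors the model.
            assert len(under_test) == len(model), f"diverged at step {step}"

    high_volume_random_test()

A long random sequence like this can expose state-related bugs that short, hand-scripted sequences rarely reach.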

The defect escape ratio is just a number. No number tells you what you need. Numbers might provoke questions, but the world is a complicated place. If you haven’t thought of at least three (or ten, or seventeen) possibilities, there might be some important possibilities that you’ve missed.

The Big Questions of Testing

Tuesday, April 24th, 2007

There’s a perception (mine) that one of the biggest questions in testing is “Did this test pass or fail?” In my view, though, that big question pales in significance next to a much more important one:

Is there a problem here?

And that is what this lovely little conversation between James Bach and Mike Kelly is all about.

So how do we solve the scripting problem?

Tuesday, April 24th, 2007

Again, this is in the unlikely event that you read my blog before you read James’ blog.

One of James’ correspondents, who sometimes goes by the name “Ben Simo”, is a very sharp fellow, as evinced by some of his posts on the software-testing mailing list. In response to our conversation about scripted test procedures, Ben asked a question that I think is important.

How do we teach script writers to lock down those things that need to be locked down? When I ask questions to try to nail down some of the ambiguity, I get the impression that the script writers think I’m trying to make simple things difficult.

Here’s how I think we could go about it. I’ve ranked these: the most urgent first and the most important last.

The first step is to acknowledge, from the beginning and repeatedly after that, that you’re trying to help, and that you’re not trying to be pedantic or a pain; you want to avoid the risk of misinterpreting an idea that might be important.

The second is to remind them that, as a tester, it’s your job to consider questions and possibilities that other people don’t consider. You might like, on occasion, to acknowledge with the other person that this is a peculiar and potentially funny occupational hazard; indeed, I’d suggest joking about it every now and then.

The third is to try to solve the problem in ways other than locking people down. (Hands up, everyone who likes to be locked down.) Consider things like culture, common language, shared idioms, mentoring, training, observation, participation, collaboration… These might suggest alternative ways of understanding one another.

Fourth, and most importantly: when you write a test script, whether for automation or (ugh!) for human testers, it’s to further some purpose, to answer some question about the product. Maybe, if you understand the question that’s being asked, to whom it matters, and why it matters, the clarity of each and every little individual step matters much less. What matters is the fundamental question of testing, “Is there a problem here?”, closely followed by “If I’m automating a test, what problem am I hoping to identify? How will this script help me to identify that problem? What steps would the script need to perform to get to that identification? What problems might I miss?” At that point, you-the-toolsmith can write the script without painful interrogation of some poor soul who might not know how to break the task down in a way that’s entirely helpful to you. Making that translation from risk and test ideas to scripts expertly, without the need for hand-holding, is the real centre of skilled toolsmithing to me.
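
As a sketch of that translation (with hypothetical names throughout, and a fake application object only so the example runs): the risk and the question live in the script right next to the steps, so the purpose survives even when the steps change.

    class FakeApp:
        """A stand-in for a product's scriptable interface; real hooks
        would talk to the actual product."""
        def __init__(self):
            self._records, self._user = {}, None
        def login(self, user):
            self._user = user
            self._records.setdefault(user, [])
        def logout(self):
            self._user = None
        def create_record(self, text):
            self._records[self._user].append(text)
        def list_records(self):
            return list(self._records.get(self._user, []))

    # Risk: session state might leak between users on a fast logout/login cycle.
    # Question behind the script: does user B ever see user A's data?
    def test_no_session_leak_on_fast_relogin(app):
        app.login("user_a")
        app.create_record("private note from A")
        app.logout()
        app.login("user_b")
        # Oracle: B's view should not contain A's data. If this assertion
        # fails, there's a problem here; that is the point of the script.
        assert "private note from A" not in app.list_records(), \
            "possible session leak: user B can see user A's record"

    test_no_session_leak_on_fast_relogin(FakeApp())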

A Conversation About Scripted Test Procedures

Tuesday, April 24th, 2007

James scooped me!

In the unlikely event that you read my blog before you read his, I’m proud to present this conversation (http://www.developsense.com/audio/whatdoscriptstellus.mp3) between him and me, which is about the nature of scripted test procedures and some of the dangerous assumptions that people make about them. The chat is about one hour long, and it’s only slightly marred by a phone line with a little echo.

As James suggests, before you listen, consider that you see a test script in front of you with this step:

  1. Reboot the test system.

Now, as an exercise, ponder this question: what might that line mean to you? Reflect on that for a while, then listen to us talk about it.