Blog Posts from January, 2007

Test Project Estimation, The Rapid Way

Thursday, January 25th, 2007

Erik Petersen (with whom I’ve shared one of the more memorable meals in my life) says, in the Software Testing Yahoo! group,

I know when I train testers, nearly all of them complain about not enough time to test, or things being hard to test. The lack of time typically means being forced into a completely unrealistic time frame to test against.

I used to have that problem. I don’t have that problem any more, because I’ve reframed it (thanks to Cem Kaner, Jerry Weinberg, and particularly James Bach for helping me to get this). It’s not my lack of time, because the time I’ve got is a given. Here’s a little sketch for you.

I’m sitting in my office. Someone, a Pointy-haired Boss (Ph.B.), barges in and says…

Ph.B.: “We’re releasing on March 15th. How long do you need to test this product?”

Me: (pause) Um… Let’s see. June 22.

Ph.B.: WHAT?! That can’t be!

Me: You had some other date in mind?

Ph.B.: Well, something a little earlier than that.

Me: Okay… How about February 19?

Ph.B.: WHAT!?! We want to release it on March 15th! Are you just going to sit on your hands for four weeks?

Me: Oh. So… how about I test until, say, March 14?

Ph.B.: Well that’s… better…

Me: (pause) …but I won’t tell you that it’s ready to ship.

Ph.B.: How do you know already that it won’t be ready to ship?

Me: I don’t know that. That’s not what I mean; I’m sorry, I didn’t make myself clear. I mean that I won’t tell you whether it’s ready to ship.

Ph.B.: What? You won’t? Why not?!

Me: It’s not my decision to ship or not to ship. The product has to be good enough for you, not for me. I don’t have the business knowledge you have. I don’t know if the stock price depends on quarterly results, and I definitely don’t know if there are bonuses tied to this release. There are bunches of factors that determine the business decision. I can’t tell you about most of those. But I can tell you things that I think are important about the product. In particular, I can tell you about important problems.

Ph.B.: But when will you know when I can ship?

Me: Only you can know that. I can’t make your decision, but I can give you information that helps you to make it. Every day, I’ll learn more and more about the product and our understanding of it, and I’ll pass that on to you. I’ll focus on finding important problems quickly. If you want to know something specific about the product, I’ll run tests to find it out, and I’ll tell you about what I find. Any time you want to ask me to report my status, I’ll do that. If at any time you decide to change the ship date, I’ll abide by that; you can release before or after or on the 15th—whenever you decide that you don’t have any more important questions about the product, and that you’re happy with the answers you’ve got.

Ph.B.: So when will you have run all the tests?

Me: All the tests that I can think of? I can always think of more questions that I could ask and answer about the product—and I’ll let you know what those are. At some point, you’ll decide that you don’t need those questions answered—the questions or answers aren’t interesting enough to prevent you from releasing the product. So I’ll keep testing until I’m done.

Ph.B.: When will you be done?

Me: You’re my client; I’ll test as long as you want me to. I’ll be done when you ask me to stop testing—or when you ship.


Rapid testers are a service to the project, not an obstacle. We keep providing service until the client is satisfied. That means, for me, that there’s never “not enough time to test”; any amount of time is enough for me. The question isn’t whether the tester has enough time; the question is whether the client has enough information—and the client gets to decide that.

Okay, AppLabs, I take it all back!

Tuesday, January 23rd, 2007

I did a half-day tutorial, a Rapid Introduction to Rapid Software Testing, at the STeP-IN Summit 2007, on the morning of January 18 in Bengalooru (formerly Bangalore), India. After lunch, the choice was between a presentation on test automation strategies and one on building test management skills. I chose to sit in on the latter.

The presenter was Jaya Raghuram (Raghu) from AppLabs Technologies. Apparently his colleagues were unable to attend the scheduled talk, so he was a solo act. I was blown away. It was a wonderful talk, with lots of interaction between Raghu and the participants.

On several occasions, he asked for examples of the ways that people dealt with various problems: staffing, metrics gathering, estimation, release decisions, and so forth. In each case, he provided plenty of opportunities for people to suggest “best practices”, and in each case, he also allowed other participants to point out contexts in which those practices wouldn’t work. This wasn’t done for the purpose of making anyone look stupid; it was done to demonstrate that something that makes sense in one circumstance might be disastrous in another; any practice is, at best, a solution that sometimes works. He also launched an aggressive (and, in my opinion, appropriate) attack on most of the metrics that we use in software development, noting how easily they could be incomplete, distorted, or gamed.

Raghu led the discussion with grace, tact, and great energy, showing an eagerness both to teach and to learn. I left the presentation feeling energized and enthusiastic, and later had a very pleasant chat with him over tea. Great stuff, Raghu. Keep in touch!

One insight triggered by the talk concerned our development models. Whether we adopt Agile or Waterfall or any other process, a general systems view requires us to consider that our development model is interrelated, at some level, with everyone else’s model. No matter what decisions we choose to make about our project and our product, those decisions are on some level vulnerable to the decisions that our customers or our service providers are making. That speaks to Pliant approaches to running our organizations; no matter how much we’d like to drive the context, we have to be at least somewhat context-driven.