Blog Posts from November, 2007

A Rapid Testing Success Story

Sunday, November 25th, 2007

One of the problems in our business is that people are usually reluctant to talk about testing, even when it’s successful. In order to discuss testing, they may have to cop to problems in their product, or in their development work. Even if they’re very happy with the things that they’ve learned in the course and put into practice, they may have to acknowledge that earlier forms of testing were less effective. No matter what, there’s usually some dirty laundry somewhere, so people are understandably leery about airing it on publicly visible clotheslines.

Here’s some feedback that I got recently from a customer, who was kind enough to allow me to make these comments public. It’s verbatim, except for the name change. A little context, without stepping on non-disclosure agreements: they’re a quite successful global company that makes commercial utility products. As in most commercial software companies, the work is done under severe time pressure, and product requirements change rapidly in response to market conditions. The groups that I’ve spoken to at this company are quite capable and engaged in their work, but the Rapid Testing course seemed to stir up the smoldering fire that was already there. My client wrote:

We are already starting to put into practice what you taught us here is a mini case study:

Four of the group sat down last Friday and tested another product. Dan (name changed for confidentiality –MB) guided and made suggestions. None of the 4 knew the product under test. The product test lead spent half the day being a live oracle. Results: Another 50 defects. Several were crashes (buffer overrun- thanks Perlclip!). Many UI and usability defects. By the afternoon the team was starting to find more specific defects in what the product should do, but wasnt doing. However, by this time they were getting very baked. This sort of testing is really hard work. However, the product lead was amazed by what was found, and the defects found per hour invested was once again orders of magnitude more effective than the testing that was currently going on with the product. We are going to cycle this much more frequently, and the same four are going to dig deeper on the same product later this week as well.
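Perlclip, mentioned in the quote, is a little tool from James Bach; its best-known trick is generating "counterstrings": strings that report their own length at every asterisk, so that when an input field truncates or overruns, the visible tail tells you exactly how many characters survived. Here is a rough Python sketch of the idea (my own reconstruction, not Perlclip's actual code):

```python
def counterstring(n, marker="*"):
    """Build an n-character counterstring in the style popularized by
    Perlclip: each run of digits gives the 1-based position of the
    marker that follows it, e.g. counterstring(10) -> '*3*5*7*10*'.
    (A sketch of the concept, not Perlclip's implementation.)"""
    chunks = []   # assembled back-to-front; each chunk stored reversed
    pos = n       # position of the next marker, counting from the end
    while pos > 0:
        chunk = f"{pos}{marker}"
        if len(chunk) > pos:
            chunks.append(marker * pos)  # no room left for digits; pad and stop
            break
        chunks.append(chunk[::-1])
        pos -= len(chunk)
    return "".join(chunks)[::-1]

print(counterstring(10))   # -> *3*5*7*10*
```

If a field silently chops such a string, the last complete number still visible tells you right away where the cut happened, with no counting required.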

I was especially intrigued by the notion that this kind of testing is hard work. That sounded like a good sign. (Once a friend of mine engaged a yawning four-year-old kid in an elevator. “Tired out?” asked my friend. “That’s how you know when you’ve had a good day.”)

Rapid testing seems very natural and easy to me, but I’ve been doing it for a while. I’m pretty convinced that it’s the kind of approach that gets easier with practice, so I asked the customer on a recent visit if he agreed. He nodded, and said that he thought that it was indeed getting easier for the team members who were doing it regularly. However, the approach is something of a paradigm shift, and people can easily slip back into the familiar. I’m glad that this company seems to have champions that will sustain the work.

By the way, these days, the three-day Rapid Software Testing course includes, at the client’s option, a fourth day of hands-on testing or consulting with the team or with individual members. I encourage clients to accept the offer because it’s useful to have a whole day to deal with the work in context. It’s fun for me, too, especially when I get to test something that’s new to me.

Many thanks to my anonymous client and his team. You folks know who you are.

Pairwise Testing

Sunday, November 25th, 2007

I wrote a paper on pairwise testing in 2004 (and earlier), and now, in 2007, it’s time for an update. This post is an edited version of an appendix that I’ve recently added to that paper.

First, there appears to be great confusion in the world between orthogonal arrays and pairwise testing. People use the terms interchangeably, but there is a clear and significant difference. I’m fairly proud of the fact that I note that difference in my article, albeit in some painful and not-very-interesting-to-most-people detail, and I think I get it right. If we’re going to talk about these things we might as well get them right, so if I’m wrong, I urge you to disabuse me.
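For readers who haven’t met the technique: pairwise testing selects a set of tests such that every pair of values of any two parameters appears together in at least one test, which usually takes far fewer tests than the full cartesian product. A minimal greedy sketch in Python, with parameters invented purely for illustration (commercial pairwise tools use more sophisticated algorithms than this):

```python
from itertools import combinations, product

# Hypothetical parameters, invented for illustration.
params = {
    "browser": ["IE", "Firefox"],
    "lang":    ["en", "fr"],
    "os":      ["Windows", "Mac", "Linux"],
}

def all_pairs(params):
    """Every (parameter, value) pair-of-pairs that must be covered."""
    names = sorted(params)
    return {((a, va), (b, vb))
            for a, b in combinations(names, 2)
            for va, vb in product(params[a], params[b])}

def pairs_in(test, names):
    """The value pairs covered by a single test (a dict of parameter: value)."""
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

def greedy_pairwise(params):
    """Repeatedly pick the candidate covering the most uncovered pairs."""
    names = sorted(params)
    candidates = [dict(zip(names, c))
                  for c in product(*(params[n] for n in names))]
    uncovered = all_pairs(params)
    tests = []
    while uncovered:
        best = max(candidates, key=lambda t: len(pairs_in(t, names) & uncovered))
        tests.append(best)
        uncovered -= pairs_in(best, names)
    return tests

tests = greedy_pairwise(params)
print(f"{len(tests)} tests cover all pairs; "
      f"exhaustive testing needs {len(list(product(*params.values())))}")
```

Here six tests cover every pair, versus twelve for the full product. That kind of reduction is exactly what makes the technique so attractive, which is part of the problem I describe below.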

Second, I’m no longer convinced of the virtues of either orthogonal arrays or pairwise testing, at least not in the pat and surfacey way that I talked about them in the original version of the article.

An on-the-job experience provided a tremor. The project was already a year behind schedule (for an 18-month project), and in dire shape. Pretty much everyone knew it, so the goal became plausible deniability, or, less formally, ass-covering. One of the senior project managers looked over my carefully constructed pairwise table, and said “Hey… this looks really good; this will impress the auditors.” He didn’t have other questions, and he seemed not to be concerned about the state of the project. Impressing the auditors was all that mattered.

This gave me pause, because it suddenly felt as though my work was being used to help fool people. I wondered if I was fooling myself too. Until that moment, I had taken some pride in the work that I had been doing. Figuring out the variables to be tested had taken a long time, and preparing the tables had taken quite a while too. Was the value of my work matching the cost? I suddenly realized that I hadn’t interacted with the product at all. When I finally got around to it, I discovered that the number, scope, and severity of problems in the product were such that the pairwise tests were, for that time, not only unnecessary but a serious waste of time. The product simply wasn’t stable enough to use them. Perhaps much later, after those problems had been fixed, and after I had learned a lot more about the product, I could have done a far better job of creating pairwise tables, but by then I might have found that pairwise tables wouldn’t have shed light on the things that mattered to the project. At that point I should have been operating and observing the product, rather than planning to test a product that desperately needed testing right away.

My test manager, for whom I have great respect, disappeared from that project due to differences with the project managers, and I was encouraged to disappear a week or two later. The project had been scheduled to deploy about six weeks after that. It didn’t. It eventually got released four months later, was pulled from production, and then re-released about six months after that.

A year or so later, there was an earthquake in the form of a paper by James Bach and Pat Schroeder, “Pairwise Testing: A Best Practice That Isn’t.” If you want to understand a much more nuanced and coherent story about pairwise testing than the one that I prepared in 2004, look there.

Pairwise testing is very seductive. It provides us with a plausible story to tell about one form of test coverage, it’s dressed up in fancy mathematical clothing, and it looks like it might reduce our workload. Does it provide the kind of coverage that’s most important to the project? Is reducing the number of tests we run a goal, or is it a form of goal displacement? Might we be fooling ourselves? Maybe, or maybe not, but I think we should ask. I should have.


Saturday, November 24th, 2007

In an ongoing effort to spend even more time on the Web, I am adding a Technorati profile.