Some time in December 2005, I asked Scott Ambler if he would give a presentation at TASSQ, the Toronto Association of System and Software Quality (I'm the program chair for that organization). I was delighted that he agreed. I was aware that his remarks would probably be controversial, and that I would agree with some of them and disagree strongly with others.
Scott gave his talk to TASSQ on January 31, 2006. I wasn’t able to attend; I was at an Exploratory Testing summit meeting and the Workshop on Teaching Software Testing, both in Melbourne, Florida (about which more in a later blog entry). However, I heard reports back from a couple of people, one of whom pointed me to a blog post, here:
I reviewed the speaker notes, and I've seen a similar presentation from Scott. He also participated in the Toronto Workshop on Software Testing in June of 2005, which I convened with Fiona Charles. So although I didn't attend this presentation, I have a fairly good picture of what must have gone on. When I say "Scott said," I'm merely assuming and alleging that he did, for convenience in addressing his arguments.
Scott, like other Agilistas, proposes that XP and other agile processes will result in higher-quality software. I hope they're right; I'm all for that. I agree with Scott when he says that test-driven development (TDD) is a technique that is likely to result in better-designed and more reliable code. If a developer writes a failing test, and then writes code to pass it, we have some assurance that the code at least fulfills that requirement. Moreover, after that code is written, we have a unit test that helps the developer to modify the code with confidence; since all tests are run after each modification, the developer gets instant feedback if the code happens to break in some way.
TDD is a strong design technique. According to Ward Cunningham, who is generally credited by Beck and others with inventing the approach, TDD is really all about design. The TDD tests provide an excellent regression suite too, confirming that things are still working OK.
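The red/green cycle described above can be sketched in a few lines. This is a toy illustration, not anything from Scott's talk; the requirement and the function name are hypothetical.

```python
import unittest

# Step 1: write the tests first, stating a (hypothetical) requirement:
# orders over $100 get a 10% discount; smaller orders get none.
# Before apply_discount exists, these tests fail -- that's the "red" step.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_discount_over_100(self):
        self.assertEqual(apply_discount(200.0), 180.0)

    def test_no_discount_at_or_under_100(self):
        self.assertEqual(apply_discount(100.0), 100.0)

# Step 2: write just enough code to make the tests pass -- the "green" step.
# The tests now double as a regression suite for future changes.
def apply_discount(total):
    return total * 0.9 if total > 100.0 else total
```

Running this file with `python -m unittest` reruns both checks after every modification, which is where the instant feedback comes from.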
But I think Scott misses the mark if he believes that TDD is an effective overall testing technique. Each of a product's functional units can work perfectly while the interactions between them do not, and TDD deliberately isn't focussed on testing those interactions. To be effective, the unit test suites need to be compact, and they need to run quickly; if the tests cover too much or take too long to run, developers won't run them. TDD tests are therefore intentionally lightweight and low-level. That's fine, but no sane person should mistake a suite of unit tests for good testing coverage.
More complex, user-oriented tests in agile environments are often handled by requirements-oriented tools such as Fit or FitNesse. With these tools, requirements writers describe the product in Word documents or on a wiki, and provide examples of how certain functions should behave by inserting tables that contain examples of inputs and expected outputs. Developers provide fixtures: interfaces between these tables and the product's actual code. At the push of a button, Fit/FitNesse runs the code associated with the documents, and turns the cells in the tables green where the tests achieve the predicted result, and red where there have been failures. This is pretty powerful, since it's easy to see where some of the product's code is running successfully from moment to moment.
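The green/red mechanism is easy to picture with a toy sketch. This is not the actual Fit or FitNesse implementation (those tools work against HTML and wiki tables through fixture classes); it just mimics the idea of a fixture running each table row through the real code and colouring the cell.

```python
# Stand-in for the production code under test (hypothetical).
def discounted_total(amount):
    return amount * 0.9 if amount > 100.0 else amount

def run_table(rows):
    """rows: (input, expected) pairs, like the cells of a Fit table.
    Each row is run through the code under test; a match is 'green',
    a mismatch is 'red'."""
    return ["green" if discounted_total(given) == expected else "red"
            for given, expected in rows]

# Three rows a requirements writer might supply; the last one fails.
table = [(200.0, 180.0), (100.0, 100.0), (50.0, 40.0)]
print(run_table(table))  # → ['green', 'green', 'red']
```

The payoff is the same as with the real tools: anyone can glance at the table and see which examples the code currently satisfies.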
For me, as a tester, the real limitation with TDD and Fit/FitNesse is that they're focussed on confirmation, and not on investigation. Confirmation is a necessary and useful part of testing, but it's not the whole story. Functionality and capability are not the only quality-related aspects of the product. Other classes of tests are important too: system testing, end-to-end testing, stress testing, flow testing (in which complete transactions are run, one after another, without the system being reset), performance testing, and high-volume random testing. Fit/FitNesse generally has dreadful support for the user interface of the product under test, although other tools, such as Watir (for Web applications), can step in to some degree.
The most pernicious (and often unstated) aspect of the arguments about testing in agile environments is the suggestion that automated tests are an unalloyed good, and that manual tests are unquestionably bad. Some in the agile community decree that all tests must be automated, or that a manual test is something to be despised. This is a farcical notion, and bespeaks a fundamental ignorance of the purposes and practices of the kind of testing that I perform and promote. I'm not omniscient (not even close), but I do have the power to operate and observe a product in ways that a machine cannot. The machine has speed and precision going for it; the human has cognitive capabilities. Automated tests are very poor at evaluating certain kinds of things that humans can assess with ease. A tester with a trained brain, turned on, can use exploratory skills to find new problems in usability, security, compatibility, supportability, and localization, prompted by nothing more than a thoughtful question that she asks herself.
Automated tests can't suspect a problem and then act on that suspicion. An automated test typically operates using a single oracle (an oracle is a principle or mechanism by which we recognize a problem). At any given instant, it's possible for a human to work through dozens of oracles; that's what we do. Some of us are better at that than others, but we can all improve our skills.
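The single-oracle point can be made concrete with a sketch. The checks below are hypothetical examples of oracles, each a distinct principle for recognizing a problem with the same output; a typical automated check applies only the first one.

```python
# A typical automated check uses exactly one oracle:
# comparison against a precomputed expected value.
def automated_check(result):
    return result == 180.0

# A human looking at the same output can apply several oracles at once
# (consistency with expectations, with common sense, with the domain).
def human_style_oracles(result, original):
    return {
        "matches the expected value": result == 180.0,
        "never exceeds the original amount": result <= original,
        "is not negative": result >= 0,
        "is a sensible currency amount": round(result, 2) == result,
    }

print(automated_check(180.0))                 # → True
print(human_style_oracles(180.0, 200.0))      # every oracle satisfied here
```

If the expected-value oracle happens to be wrong or incomplete, the automated check learns nothing; the human, working through the other oracles, might still notice that something is off.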
I agree with Scott when he argues that we testers are going to have to improve our skills in order to bring value. I’ve maintained that for years; that’s why I’m a trainer and a consultant, and that’s why, for the last few years, I’ve spent six to eight weeks a year getting training and attending conferences. That’s not the only way to improve skills; there are books, online resources, and testing communities to draw upon. I agree when he says that jobs are at risk for those who don’t sharpen the saw. But I don’t agree when Scott suggests that testers are going to be slain by agile development’s silver bullet. We need intelligent, thoughtful people to ask challenging questions about software products; we need versatile, skilled people to ask those questions, in the form of manual and automated tests; and we’ll need such people forever. The question is: what kind of tester do you want to be? I’ll have much more to say in the near future.