A couple of months ago, a correspondent on the Agile Testing list asked a bunch of questions, some of which I answered in an earlier blog post. Here are the answers to the other questions.
4) How do we estimate Test Efforts for agile Testing? Can we use normal estimation models?
I’m going to differentiate here between agile testing and Rapid Testing–the kind of testing defined by James Bach, the kind of testing that I practice and teach. I’d contend that Rapid Testing is agile–in the dictionary sense, and in the sense of the principles declared by the Agile Manifesto–but James is reluctant to slap a capital-A Agile label on it. Rightly so, I think. I’m not sure if there’s broad consensus on what Agile Testing is, and if there is, I’m not going to claim to be the right person to speak about it.
In Rapid Testing, we use Session-Based Test Management (SBTM) both to account for our time and as a basis for estimating how we’re going to allocate the time we’ve got. You can read about SBTM at James Bach’s Web site, http://www.satisfice.com/sbtm. I’ve worked with it, and I like it; and there are increasing numbers of people who report good experiences–Bliss’ presentation at STAR East 2006 being a recent highlight for me.
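To make the accounting idea concrete, here's a minimal sketch (not part of any official SBTM tooling; the charter areas, durations, and fractions are invented for illustration) of how session records can be tallied into a per-area coverage summary:

```python
from collections import defaultdict

# Hypothetical session records: (charter area, session length in minutes,
# fraction of the session spent on actual test design and execution,
# as opposed to setup and bug investigation).
sessions = [
    ("login",    90, 0.75),
    ("login",    60, 0.50),
    ("reports", 120, 0.75),
    ("search",   45, 0.50),
]

def coverage_summary(sessions):
    """Tally total minutes and on-charter testing minutes per charter area."""
    totals = defaultdict(lambda: [0.0, 0.0])  # area -> [total, testing]
    for area, minutes, test_fraction in sessions:
        totals[area][0] += minutes
        totals[area][1] += minutes * test_fraction
    return {area: (total, testing) for area, (total, testing) in totals.items()}

for area, (total, testing) in coverage_summary(sessions).items():
    print(f"{area:8s} total {total:5.0f} min, testing {testing:5.0f} min")
```

A summary like this is the kind of thing that lets a test lead answer "where has our time gone, and where has it not?" at any point in the project.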
But with respect to estimation, let’s look at what happens in many organizations.
Management has an idea for a product it wants to ship. The managers might solicit estimates from the marketing people, the developers, and the testers; there might be some haggling about the scope of the project; some resource allocation might be planned, or performed in response to more haggling.
What’s the result of the estimation effort? The result is that management will stipulate that something will ship on such-and-such a date–which, by no small coincidence, is a release date that management had in mind from the start. Then development takes more time than the developers expected, and the testers are asked to complete testing such that the product ships on the date management originally had in mind, or a little later, often with reduced scope from the original charter. Many of us have been there, right?
So Rapid Testers view things a little differently. We know that we don’t set the schedule; we don’t drive the bus, as Cem Kaner says. Our job is to provide information to management, such that management can make informed decisions about the state of the product and the project. Our commitment is to do our best to provide the project community with the fastest, highest-quality information that we can at any time. Are there particular tests that important people want us to perform? We’ll get them done as quickly as we can. Do they want to see how we’ve covered the product? SBTM is set up to provide exactly that information. Do the managers want to feel as though there are no more serious bugs in the product? We’ll focus on finding the serious bugs fast–and remind management that we can promise only a good-faith effort, not perfection.
Until there’s something to test, we’ll research the business system within which the product is expected to work; we’ll model the product, set up test environments, or work with customers and developers on framing requirements, if those things are part of our assigned mission. To the greatest extent possible, we’ll anticipate and identify risk. When we’re performing advance tasks, we’ll focus on things that are unlikely to be undermined by changes to the product as it’s being developed.
As soon as there’s anything to test, we’ll test it by operating it, observing it, and evaluating it. We’ll be prepared to report at any time. When management decides that it has enough information to ship, they ship–and then we typically stop testing. So our estimate is that we’ll provide as much information as we can within the time that management allows for development. This has the benefit of being 100% achievable.
If a “normal” estimation model is to try to figure out, in advance, all the test cases that we’re going to produce, how much effort it will take for someone to write them down, and how much effort it will take someone (else) to run them, how many bugs we expect to find, how long it will take the developers to fix some proportion of those bugs, and how long it will take us to run the resulting regression cycles… no, we tend not to do that. We contend that any attempt to predict all that stuff will be risky, and often inaccurate–except to the extent that the organization, when confronted with some unwelcome truths, will decide to drop the predictions and do whatever testing it can in the time available. Which ends up looking a lot like our model, but with all the extra overhead associated with trying to make an ultimately unsuccessful prediction.
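To see why such predictions tend to be fragile, consider a toy calculation (every number here is invented; only the arithmetic matters). When an estimate is the product of several uncertain factors, the uncertainty compounds:

```python
def range_estimate(factors):
    """Multiply (low, high) ranges for several uncertain factors."""
    low = high = 1.0
    for lo, hi in factors:
        low *= lo
        high *= hi
    return low, high

# Hypothetical factors in a "normal" test-effort estimate:
factors = [
    (400, 600),   # number of test cases we think we'll need
    (0.5, 1.5),   # hours to write and run each one
    (1.5, 3.0),   # multiplier for bug fixes and regression cycles
]

low, high = range_estimate(factors)
print(f"estimated effort: {low:.0f} to {high:.0f} hours")
```

Even with each individual factor estimated to within modest bounds, the high end of the combined range is nine times the low end–which goes some way toward explaining why such plans so often collapse into "do whatever testing you can in the time available."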
5) What is the typical % of testing effort in Software development Life cycle by using agile methodology?
I don’t know of a way to separate testing and development in a measurable and meaningful way, and even if I could, the measurements wouldn’t apply in the same ways to all organizations.
Part of the problem is associated with the question of measuring development or testing effort as a scalar. Do you measure effort as the number of weeks applied to the project? As the number of person-hours? As lines of code or test scripts or other documentation? If a developer writes a unit test, do we put that on the testing or development side of the ledger? If a tester writes an automated script to exercise the product in some way, she’s writing software; is that testing or development? When you measure test effort in different organizations, do you account for differences in skill or experience or value, or do you simply count the number of keystrokes?
Agile processes tend to advocate test-driven development (TDD). TDD tends to increase the speed at which developers get feedback about bugs, and that tends to reduce lots of coding errors. Agile processes advocate lots of participation from testers and customers (or customer representatives) throughout the project; sensible Agilistas propose that we automate tests that lend themselves well to automation. To the extent that these practices are followed skillfully, they would seem to have a high probability of being helpful. Yet there’s nothing to guarantee that the practices are being followed, or that they’re being followed with skill. Moreover, skillful people, working together, have been producing valuable software forever using a huge variety of process models. Lots of successful teams wouldn’t be able to name their model, but release worthwhile software anyway.
I like agile principles. They’re intended to be humanist and pragmatic. That’s generally a very good thing. However, no principles or processes can survive a context in which they won’t work. I offer no guarantees that agile approaches will solve your process problems, and I don’t think anyone else should either.
6) Is there any difference in testing effort for Normal testing process and Agile testing process?
Again, I don’t know how to answer the question in a way that’s helpful. There are too many dimensions that you might care about.
Rapid Testing is designed to be the fastest, most effective testing that fulfills the mission. Whether that might translate to a reduction of effort to provide the same amount of information; or to exactly the same amount of effort but higher quality of information; or getting better coverage using the same resources–I couldn’t tell you without knowing about your context.
7) If the documentation is less in Agile testing, will it not impact the quality? I hope this will be overcome by having daily meetings to update the status/issues.
The authors of the Agile Manifesto value working software over comprehensive documentation, and make it explicit that they believe that, while there is value in the latter, there is more value in the former.
Jerry Weinberg defines quality as “value to some person”. There is nothing inherently valuable in documentation if the person in question is only interested in working software. Correspondingly, documentation is not intrinsically without value, as long as there’s someone who values it, and as long as that person matters. People err when they view quality as an attribute of a product, rather than a relationship between that product and some person. The question is: is the documentation a product or a tool that serves the development of some other product? If it’s a tool, might other tools fulfill the same function?
Again, speaking for Rapid Testing: we’ve observed lots of documentation that’s “write-only”–it gets written, stored, and never looked at again. This takes time, and that’s time that might be more valuably spent on test execution. So our goal is to produce all the documentation that’s required to fulfill the mission of the product, and no more. We also allocate time to any given piece of documentation in accordance with the audience and purpose that the document is intended to serve. If the documentation is a product, something that we must present as a project deliverable, then we’re more likely to spend time on its presentation. If it’s a tool, something that we use only for ourselves or inside the test team, we’re less likely to spend a lot of time on making it look good. If a sketch serves our purposes, a full-blown oil painting is probably a waste of time and effort.
Daily standups can be great if they’re focused and productive, but we’ve seen lots of daily meetings in which people go through the motions instead of exchanging useful information. Agile advocates prescribe principles and approaches for making meetings valuable, but can’t guarantee that those principles and approaches are followed. An organization that has the people and the talent to use those principles appropriately will probably do well. Every project has its context and culture for exchanging information, including (but not limited to) documents, meetings, phone calls, hallway conversations, and email. Pragmatic members of the team will use whatever means of communication fit within their skills and their temperaments and the local culture.
I hope these answers help you out.