Blog: Acceptance Tests: Let’s Change the Title, Too

Gojko Adzic recently wrote a blog post called Let’s Change the Tune on some of our approaches in agile development. In changing the tune, some of the current words won’t fit so well, so he proposes (for example), “Specifying Collaboratively instead of test first or writing acceptance tests”. I have one more: I think we should change the label “acceptance tests”.

Acceptance tests are given a central role in agile development. They are typically used to express a requirement in terms of an atomic example, and they’re typically automated. That is, they’re expressed in the form of program code for a binary computer, code that helps us to determine whether some aspect of the product is functionally correct. When those tests pass, say certain proponents of the lore, we know we’re done. Yet acceptability of a product is multi-dimensional. In the end, the product is always being used by people to solve some problem. The code may perform certain functions exquisitely as part of a product that is an incomplete solution to the problem, that is hard to use, or that we hate. The expression of requirements and the determination of acceptability in terms of simplistic, binary decisions delegated to a computer seems to me like a bias towards processes and tools, rather than individuals and interactions.
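To make the point concrete, here is a minimal sketch (with hypothetical names and a made-up requirement) of the kind of atomic, binary check that automated acceptance tests reduce a requirement to:

```python
# Hypothetical requirement: "orders over $100 get a 10% discount".
# The function under test and the check are both invented for illustration.

def order_total(subtotal):
    """Apply a 10% discount to subtotals over $100 (hypothetical behaviour)."""
    return subtotal * 0.9 if subtotal > 100 else subtotal

def check_discount_applies_over_100():
    # The check is binary: it passes or it fails. It says nothing about
    # usability, completeness, or whether anyone actually wants this behaviour.
    assert order_total(200) == 180.0
    assert order_total(50) == 50

check_discount_applies_over_100()
```

A passing run tells us only that this one example behaved as specified; every other dimension of acceptability remains untested.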

Done as they usually are, acceptance tests are set very close to the beginning of a cycle of development. Yet during that development cycle, we tend to learn a significant amount about the scope of the problem to be solved, about technology, about risk, about trade-offs. If the acceptance tests remain static, the learning isn’t reflected in the acceptance tests. That seems to me like a bias towards following a plan, rather than responding to change.

Acceptance tests are examples of how the product should behave. Those tests are typically performed in very constrained, staged, artificial environments that are shadows of the environments in which the product will be used. Acceptance tests are not really tests, in the sense of testing the mettle of the product, subjecting it to the challenges and stresses of real-world use. Yet acceptance tests are often treated more as authoritative, definitive specifications for the product, instead of representative examples. That sounds to me like a bias towards comprehensive documentation, rather than working software.

Acceptance tests are often discussed as though they determined the completion of development. While the acceptance tests aren’t passing, we know we’re not done; when the acceptance tests pass, we’re done and, implicitly, the customer is obliged to accept the product as it is. That sounds to me like a bias towards negotiated contracts, rather than customer collaboration.

The idea that we’re done when the acceptance tests pass is a myth. As a tester, I can assure you that a suite of passing acceptance tests doesn’t mean that the product is acceptable to the customer, nor does it mean that the customer should accept it. It means that the product is ready for serious exploration, discovery, investigation, and learning—that is, for testing—so that we can find problems that we didn’t anticipate with those tests but that would nonetheless destroy value in the product.

When the acceptance tests pass, the product might be acceptable. When the acceptance tests fail, we know for sure that the product isn’t acceptable. Thus I’d argue that instead of calling them acceptance tests, we should be calling them rejection tests.

Post-script: Yes, I’d call them rejection checks. But you don’t have to.

Want to know more? Learn about Rapid Software Testing classes here.

6 responses to “Acceptance Tests: Let’s Change the Title, Too”

  1. Very good thoughts and well put!
    Firstly, I have always been skeptical of acceptance tests written by those who aren’t accepting the product.
    Secondly, many automated acceptance tests I have seen are designed not to break, simply because of a need to avoid problems when the software changes a lot. That is, they are made less powerful and less complex in order to suit the needs of automation.

  2. Mathew Pattara says:

    Perhaps the term should be changed to ‘Acceptance to Test’ checks?

    I mean that once these ‘pass’, testing can then begin.

    Sort of like entry criteria to be met before more in-depth testing?


  4. nilanjan says:

    I think there is a deeper issue, which is not about tests. If you have ‘requirements’ or ‘specifications’, then you *can* have ‘acceptance tests’.

    On the other hand, if you view stories as an alternative to ‘requirements’, and stories are ‘a reminder to have a conversation’ (Mike Cohn), you cannot have acceptance tests.

    Unfortunately, and surprisingly, many in agile keep using the words ‘requirements’ and ‘specifications’.

    See my related post:
    https://swtestmanager.wordpress.com/2013/01/01/there-are-no-requirements-in-agile/

    The point: it’s not about tests, it’s about ‘requirements’.

  5. […] and Specification by Example, they are all example based models for specifications. Also, there are problems calling them acceptance tests, as the notion of rejection checks is a better term in general. There is also already some […]

  6. […] of them as “rejection criteria”. Michael Bolton wrote an excellent blog on this, Acceptance Tests: Let’s Change the Title, Too and he says (the bold emphasis is […]
