
Who Needs the Testers?

This post is a lightly edited transcript of a LinkedIn article, which was itself adapted and extended from a recent thread on Twitter.

Another day, another story that goes like this. A colleague tells me that he’s working with an organization, training its developers in how to do testing. That sounds like a pretty good idea at first, although most developers are already pretty good at the kind of testing that developers need to do, especially when they collaborate and review each other’s work.

But this is not the kind of testing that the training is focused on. It turns out that developers are being trained in higher-level integration and system testing—the kind of deep testing that requires significant domain knowledge and a substantial shift from the developer mindset. Why are the developers getting this training? Because the organization got rid of the testers who were formerly doing this work, and is hoping that the developers will do it.

Why were the testers canned? The testers were unsuccessful at a mandate handed down from management: “We have to do more automated testing! Automate all the testing! Automate everything!” This means that the testers were being asked to do programming. And, unsurprisingly, most of the testers weren’t great at that.

Part of the problem was that, at best, the testers got shallow training in programming. Another part is that nobody really knew (or knows) what “automate everything” means. (The managers were probably thinking about the visible aspects of testing: pressing keys, moving mice, and comparing output to specified, desired results. That’s something we call checking. Testing can’t be automated, although checking can.) And another part is that programming turns out to be time-consuming and tricky. Who knew?!

The result was a bunch of automated checking that was shallow, brittle, didn’t find many bugs, and that didn’t keep pace with development. Testers had lots of questions for the programmers, and investigating the problems detected by automated checks took time and effort. Many of those problems turned out to be non-problems because of errors in the check code. Programmers and managers perceived, not unreasonably, that all this was interrupting the programmers’ work.

The short story motivating the decision to get rid of the testers was: testing was slowing down development. The solution: get rid of the testers. Get the programmers to do the “testing” (that is, programmed checking), since the programmers are already good at programming.

It’s usually a very good idea for programmers to do checking, especially at the unit level. Low-level checking provides the programmers with very fast feedback, alerting them to coding errors and problems that might otherwise get buried. It helps with the discipline of building the product cleanly and simply, such that it can be built and changed safely. But checking is not all there is to testing.
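
To make the distinction concrete, here is a minimal sketch of what a unit-level check might look like, using Python and pytest; the function, names, and values are hypothetical, invented purely for illustration rather than drawn from the story above. Each check does nothing more than compare the product’s output to a specified, desired result.

```python
# Hypothetical unit-level checks (illustration only; not from the article).
# Each check compares the product's output to a specified, desired result.
# That mechanical comparison is what can be automated; deciding what is
# worth checking, and investigating any surprises, remains human work.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical product code: apply a percentage discount to a price."""
    return round(price * (1 - percent / 100), 2)


def test_discount_matches_specified_result():
    assert apply_discount(100.00, 15) == 85.00


def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(100.00, 0) == 100.00
```

Checks like these give programmers fast, inexpensive feedback that the code still does what was specified; on their own, they say nothing about the deeper questions a tester would investigate.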

In the Rapid Software Testing namespace, testing is evaluating a product by learning about it through exploration and experimentation. So let’s expand the short story above: evaluating the product by learning about it through exploration and experimentation was slowing down development.

Now, let’s make that a little more concise: learning about what they were building was slowing down development. Another way to put that is that development was going too fast for people to learn about what they were building. As a former program manager, I find that ominous.

One big error, apparently, was in believing that programmed checking is all there is to testing, which led to this interpretation: getting non-programmers to write programs as a means of learning about the product was slowing down development.

It seems to me that while writing programs might be helpful for some learning purposes, it’s a highly imperfect and incomplete way to learn about a product. Why was this not recognized?

It’s often the case, alas, that testers don’t have the skills to compose, edit, narrate, and justify the story of their testing work. It’s also common for people—even testers—to believe that testing is all about developing and following scripted procedures and checking output, rather than investigating risk.

When testing is reduced to demonstrating, by rote, that the product can work, it’s easy to believe that testing is simply a programming task. Create programs to do really fast typing and really fast comparison of desired and actual results. Simple! No testers required!

Mind you, by that line of reasoning, all programmers do is type instructions from business people into a computer, right? So, here’s a modest proposal for those managers: teach business people to program, and then we won’t need programmers OR testers. Simple! No technologists required! Yet managers rarely suggest this.

Why not? Perhaps it’s because managers are aware that programming is hard. Yet those same managers often seem unaware that investigating complex systems for deeply hidden, subtle, rare, emergent problems is also hard. Testers must take responsibility for making that clear.

That requires two things: doing excellent testing, and describing excellent testing. That won’t happen when we reduce testing to test cases; when we talk of “manual” testing and “automated” testing; when we forget that we’re here to find problems that matter to people.

Doing excellent testing requires testers to understand the context of the product, where and how it will be used, and the project and processes being used to develop it. Excellent testing requires testers to have some degree of technical skill, but also to have the social skills, the communication skills, and the domain knowledge to test effectively.

Doing excellent testing requires testers to be rapid learners. It requires testers to develop rich, detailed models of risk and of the product so that they can cover it with testing. It requires good oracles—ways to recognize problems when we encounter them—far more than comparing the product to a specified, desired, “expected” result or to some line in a requirements document. It requires testers to be aware that a product is not simply units of code; the product and its users represent a system of people and things in meaningful relationship. Such systems have interactions and properties that are emergent, not obvious from simply checking the parts.

Describing excellent testing requires testers to tell the testing story, including what isn’t being tested—and what’s slowing testing down, or making testing harder, or reducing the value of the work. It requires testers to be able to articulate all of the dimensions of excellent testing: context, risk, coverage, oracles, systems thinking—and why testing a product is so much more than writing code to check its output.

2 replies to “Who Needs the Testers?”

  1. I started reading this article quite optimistic, and as the reading progressed, things got dimmer and dimmer.
    The title asks a great question that is not asked often enough – who, really, needs testers?
    Frankly? Most products I’m familiar with – including the product I work on – can probably do well without highly specialized testers.

    Michael replies: That may be so. Many people don’t need a chartered accountant; not many equine veterinarians are needed in the downtown core of major American cities; and not many communities need an air traffic controller. Lots of products don’t entail significant risk.

    All products need to have a significant amount of testing, so people should wear the tester hat every now and then, but for most products it’s a simple matter of learning what’s important for your organisation and then setting out to look for it.

    That might be so. On the other hand, it might be a good idea to question just how simple it is to think critically, to develop good models of risk, and to question the product in important ways.

    This task is getting easier by the day as monitoring improves and feedback cycles get shorter. Sure, I can go and deeply investigate the product to uncover some very odd behaviours, but 90% of the valuable information is gathered before shallow testing is done (in fact, much of it is uncovered in a much earlier testing phase by speaking with the relevant developers and stakeholders before touching the product).

    I’d be pretty happy about that, myself, if there’s something important on the line. That 90% can be a serious distraction from the important 10%.

    As much as I would like to be able to say that I bring a lot of business value by being a good tester (which I aspire to be), I can’t. Most of the value currently needed from me isn’t insights gained by testing; it’s educating my teammates on basic, shallow testing. Stuff such as mapping out the error use-cases, placing simple checks as safeguards, and considering the context of a feature.

    Is that not valuable? To ask the Dear Abby question: are they better off with you or without you?

    Probably, once we get there, there will be more value in deep testing. At the moment, most companies do OK without having that. Improving the performance of the simple testing seems quite reasonable to me.

    That’s a good start—and that will make the deep testing faster and easier when the time comes.

