Blog: “Manual Testing”: What’s the Problem?

I used to speak at conferences. For the HUSTEF 2020 conference, I had intended to present a talk called “What’s Wrong with Manual Testing?” In the age of COVID, we’ve all had to turn into movie makers, so instead of delivering a speech, I delivered a video.

After I had proposed the talk, and it was accepted, I went through a lot of reflection on what the big deal really was. People have been talking about “manual testing” and “automated testing” for years. What’s the problem? What’s the point? I mulled this over, and the video contains some explanations of why I think it’s an important issue. I got some people — a talented musician, an important sociologist, a perceptive journalist and systems thinker, a respected editor and poet, and some testers — to help me out.

In the video, I offer some positive alternatives to “manual testing” that are much less ambiguous, more precise, and more descriptive of what people might be talking about: experiential testing (which we could contrast with “instrumented testing”); exploratory testing (which we have already contrasted with “scripted testing”); attended testing (which we could contrast with “unattended testing”); and there are some others. More about all that in a future post.

I also propose how it came to be that important parts of testing — the rich cognitive, intellectual, and social process of evaluating a product by learning about it through experiencing, exploring, and experimenting — came to be diminished and pushed aside by an obsessive, compulsive fascination with automated checking.

But there’s a much bigger problem that I didn’t discuss in the video.

You see, a few days before I had to deliver the video, I was visiting an online testing forum. I read a question from a test manager who wanted to interview and qualify “manual testers”. I wanted to provide a helpful reply, and as part of that, I asked him what he meant by “manual testing”. (As I do. A lot of people take this as being fussy.)

His reply was that he wanted to identify candidates who don’t use “automated testing” as part of their tool set, but who were to be given the job of creating and executing manually scripted, human-language tests, and applying all the critical thinking skills that both approaches require.

(Never mind the fact that testing can’t be automated. Never mind that scripting a test is not what testing is all about. Never mind that no one even considers the idea of scripting programmers, or management. Never mind all that. Wait for what comes next.)

Then he said that “the position does not pay as much as the positions that primarily target automated test creation and execution, but it does require deeper engagement with product owners”. He went on to say that he didn’t want to get into the debate about “manual and automated testing”; he said that he didn’t like “holy wars”.

And there we have it, ladies and gentlemen; that’s the problem. Money talks. And here, the money—the fact that these testers are going to be paid less—is implicitly suggesting that talking to machines is more valuable, more important, than deeper engagement with people.

The money is further suggesting that skills stereotypically associated with men (who are over-represented in the ranks of programmers) are worth more than skills stereotypically associated with women (who are not only under-represented but also underpaid and also pushed out of the ranks of programmers by chauvinism and technochauvinism). (Notice, by the way, that I said “stereotypically” and not “justifiably”; there’s no justification available for this.)

Of course, money doesn’t really talk. It’s not the money that’s doing the talking. It’s our society, and people within it, who are saying these things. As so often happens, people are using money to say things they dare not speak out loud.

This isn’t a “holy war” about some abstract, obscure point of religious dogma. This is a class struggle that affects very real people and their very real salaries. It’s a struggle about what we value. It’s a humanist struggle. And the test manager’s statement shows that the struggle is very, very real.

Want to know more? Learn about upcoming Rapid Software Testing classes here.

4 responses to ““Manual Testing”: What’s the Problem?”

  1. Robert Day says:

    So that manager wanted to get a more skilled candidate but the company he represents wanted to pay less for that candidate.

    Well, good luck on that. If I’d ever come across that sort of attitude during a job interview, I’d be terminating that interview. If my employer admitted that this was their policy, I think I’d be drafting my letter of resignation. This goes beyond debates about what sort of testing is better. It’s a matter of what is acceptable management and corporate behaviour. And discriminating against specific skill sets – whilst at the same time actively seeking out those skill sets and claiming that they are of value to the company – is just plain unacceptable.

    Michael replies: Thank you for the comment. It prompted me to have a look at your blog. I’m pleased to link to it here:

  2. Marek Langhans says:

    Michael replies: Thank you for your comment.

    Why wouldn’t roles where you need more knowledge and skills (be it programming languages, be it certain tools or processes etc.) be paid better than roles where you don’t need the same?

    That’s a good question, worthy of consideration. Here are some others like it, also worthy of consideration, I think.

    For roles in testing, why would we pay people better for skills that are already easily available on the team?

    The ability to connect the world of humans to the world of machines is a valuable skill. Yet so is the ability to connect one human with another human; so is the skill of connecting humans with the world of doubt. Is that less valuable than the programmer’s role?

    The capacity to engage with the product and look for trouble, and to analyze that trouble in ways that are helpful to programmers is useful. The ability to contextualize that analysis for business people is important too. Is that less valuable than the programmer’s role?

The capacity to write automated checks for the product may be a valuable skill on a given product. The capacity to imagine and design other ways to apply tools would be valuable too — or so it seems to me. Is a person who can imagine and apply such tools automatically less valuable than the person who can code them but not imagine them?

    If someone is really, really good at finding bugs that matter, should they by default be paid less than someone who can write code that never reveals bugs that matter?

    All of these questions refer to hypothetical situations, without context. But it’s worthwhile to discuss them, I believe, when they could become real situations.

    Why wouldn’t we pay people more for many kinds of specialist skills?

    I absolutely agree with the rhetorical question; I don’t object at all to the conclusion you’re hinting at. My objection is to the apparently widespread notion that “many kinds of specialist skills” is reduced to a list of coding languages, plus Selenium.

If for two test roles you need deeper engagement with people and the machine(s), but for one of those roles you also need special knowledge and skills to talk to the machine(s), wouldn’t it make sense that the role where you need more gets paid better?

My objection isn’t to the “also” part. It’s to the suggestion, apparent in this fellow’s words, that when it’s one or the other, the social skills are the less valuable of the two.

It takes some time and resources to acquire the knowledge and skills, and then to maintain them, so that should be reflected accordingly. If such a person then applies for a test role, it should be paid better than a similar role filled by a person who doesn’t have such knowledge and skills. I don’t see it having anything to do with genders or the labeling of the roles. Or am I missing something here?

I don’t know. You might be missing something; maybe not. Look around your office, and in your social network. Ask your colleagues. Which genders seem to populate each role? Do you see parallel numbers?

Something else would be if a person who has such knowledge and skills applies for a role that is paid less because the responsibilities associated with the role are fewer.

    Sure. But I can’t see where I was advocating for equal pay for unequal responsibility.

In my experience, test roles with automation involved are basically test roles without automation involved, except that you need experience with certain programming languages and tools. If you have that, you should be paid better, because you spent time, money, and resources to get that knowledge and those skills.

    The blog post referred to a specific case that doesn’t seem to be consistent with your experience.

    How many people developing automated checks have spent time, money, and resources to develop skill in testing?

    Now: I’ll offer you a way to make a really persuasive argument: some people called “tester” have studied programming or programming languages and have not studied testing at all. I think we can agree that that’s bad — and worse, that it’s common. And some people called “tester” have not studied testing at all and have not studied programming or programming languages either. If you want to say that that’s a real problem, you’ll get no disagreement from me. It is a real problem.

  3. […] Blog: “Manual Testing”: What’s the Problem? Written by: Michael Bolton […]

  4. Guillermo Chussir says:

    I totally agree that the expression ‘automated testing’ is… strange.
    I prefer using the expression ‘test execution assisted by automation tools’.
