A recent discussion on LinkedIn, entitled “Why Is Automated Software Testing A Better Choice?”, drew plenty of comments. (As usual, the question “better than what?” was begged.) Then came this one from my friend Jon Hagar: “There are … environments where the team must automate, e.g. a fast embedded control system where computation cycle is 10ms.”
You might be inclined to agree with Jon, and I might have been too. But then I remembered this incident.
At one point a few years back, I was attending a testing conference. When I arrived at the registration area, the first person I saw was Jerry Weinberg. I hadn’t seen him for several months, and was delighted to see him again.
“Hi, Jerry,” I said, offering and accepting a hug. “Great to see you! And I’d really like to hang out with you as much as I can for all four days of the conference, but I have to go home.”
“No you don’t,” said Jerry.
I was taken aback. “Huh?”
“You don’t have to go home.”
“No, Jerry, I really do. My family’s there, and I’ve been away a lot lately.”
“You don’t have to go home. You don’t have to do anything.”
“You don’t have to go home. You want to go home, and you’ve chosen to go home. And that’s fine.”
I paused for a moment, and on reflection, I realized that it was true. Jerry, in his Zen-master way, was reminding me to speak and think precisely. If we do The Right Thing reflexively and unconsciously, without recognizing that it’s optional, there’s a risk that we could do The Wrong Thing just as reflexively and just as unconsciously. Since I wanted to see my family, and since I wanted to be a good husband and a good father, I had chosen to go home. I was not compelled to go home. Going home was a choice—one that was important to me, and that I was happy to make, and that would serve me well, but a choice nonetheless.
So… you don’t have to automate, not even for a fast embedded control system whose computation cycle time is 10ms. You could decide that your trust in the designers, your theory about how the system is constructed, and your confidence in how all its parts work together are good enough for you. You could subject the code to peer review. You might try simulating the system, or interacting with the real system in some super-slow mode. That is, you could proceed based on theory and on trust, even on that kind of system. You don’t have to use appropriate tools—people sometimes select tools thoughtlessly, or don’t use them at all—because you don’t even have to test.
But if it’s important for you to learn things about that system, things that might cause intolerable problems for real customers in the real world; if it’s important to you to perform experiments that enable you to vary the conditions you set up, the inputs you provide, and the behaviours you observe; and if it’s important for you to do it quickly, in a way that’s beyond the bounds of your unaided perception, then you’ll probably want to select appropriate tools to help you test, and to develop the skills to use them expertly. In that case, using tools is a choice—one that’s probably important to you, and that you’ll be happy to make, and that will serve you well, but a choice nonetheless.
You don’t have to use tools, but you want to use tools, and you can choose to use tools. And that’s fine.