Blog Posts from February, 2014

Very Short Blog Posts (12): Scripted Testing Depends on Exploratory Testing

Sunday, February 23rd, 2014

People commonly say that exploratory testing “is a luxury” that “we do after we’ve finished our scripted testing”. Yet there is no scripted procedure for developing a script well. To develop a script, we must explore requirements, specifications, or interfaces. This requires us to investigate the product and the information available to us; to interpret them and to seek ambiguity, incompleteness, and inconsistency; to model the scope of the test space, the coverage, and our oracles; to conjecture, experiment, and make discoveries; to perform testing and obtain feedback on how the scripts relate to the actual product, rather than the one imagined or described or modeled in an artifact; to observe, interpret, and report the test results, and to feed them back into the process; and to do all of those things in loops and bursts of testing activity. All of these are exploratory activities. Scripted testing is preceded by and embedded in exploratory processes that are not luxuries, but essential.

Related posts:

http://www.satisfice.com/blog/archives/856
http://www.developsense.com/blog/2011/05/exploratory-testing-is-all-around-you/
http://www.satisfice.com/blog/archives/496

“We are unable to reply directly”

Monday, February 10th, 2014

Apropos of my recent post responding to the sentiment “We have to automate”, I got a splendid example of the suppressed choice again today. If you haven’t read that post, you might find it helpful to read it now to set the context for my main point here.

It started when I was sitting at home this morning, using my laptop. The dialog below popped up on my screen.

An unhelpful message from Google Calendar Sync

Clicking on the link in the dialog brought forth the Web page that you see below it. Actually, Google Sync isn’t syncing my (Outlook) calendar with a mobile device (leastwise, not to my knowledge—which would be another issue). Google Sync is syncing my calendar with my wife’s calendar (on another laptop) and with a colleague’s calendar somewhere far away. Now: arguably a laptop is a mobile device, but that doesn’t seem to be what the Web page refers to. There are links associated with mobile browsers, Android, or iOS devices. I can’t fathom how the apparent purpose of the dialog relates to the page that gets delivered. So it seems to me that no human at Google has evaluated this dialog and this link to see that they match up—or, if the human evaluation was done, no one has seen fit to address the mismatch.

So, somewhat later in the day, I got an email from the Google Accounts Help Center. The message may have been related to the fact that, on Friday, my frequent-traveler movements seem to have triggered an alert that temporarily blocked access to my mail account. The email contained an invitation to complete a short survey: “Take one minute to answer a few short questions to help make the Google Accounts Help Center better. If you visited the Google Accounts Help Center in the past 3 days, please click the button below to complete the survey.”

Well, I visited the Help Center on Friday (I guess… did I?), so that’s within the last three days. On the other hand, the survey seems to be asking about something different. One question is:

How satisfied or dissatisfied are you with your most recent Google Accounts Help visit today?

Another is:

Why did you visit Google Accounts Help today?

Note “today”. So maybe the survey refers to today’s calendar incident; maybe to Friday’s account block. I can’t tell. And in neither case did I ask a question per se. Oh well. I griped about the 2016 error message. If they can’t sort it out, they can always figure it out by getting in touch with me.

Yet it’s unlikely that they’ll do that. The survey finished with

While we’re unable to answer your question directly, we’ll use this information to improve our online help resources.

And here’s how this ties to Friday’s post: Google is able to answer my question directly—or they certainly could be. They don’t want to answer my question directly. They choose not to answer. That’s different.

We Have to Automate

Friday, February 7th, 2014

A recent discussion on LinkedIn was entitled “Why Is Automated Software Testing A Better Choice?” (As usual, the question “better than what?” went unasked.) Then came this comment from my friend Jon Hagar: “There are … environments where the team must automate, e.g. a fast embedded control system where computation cycle is 10ms.”

You might be inclined to agree with Jon, and I might have been too. But then I remembered this incident.

At one point a few years back, I was attending a testing conference. When I arrived at the registration area, the first person I saw was Jerry Weinberg. I hadn’t seen him for several months, and was delighted to see him again.

“Hi, Jerry,” I said, offering and accepting a hug. “Great to see you! And I’d really like to hang out with you as much as I can for all four days of the conference, but I have to go home.”

“No you don’t,” said Jerry.

I was taken aback. “Huh?”

“You don’t have to go home.”

“No, Jerry, I really do. My family’s there, and I’ve been away a lot lately.”

“You don’t have to go home. You don’t have to do anything.”

“Well…”

“You don’t have to go home. You want to go home, and you’ve chosen to go home. And that’s fine.”

I paused for a moment, and on reflection, I realized that it was true. Jerry, in his Zen-master way, was reminding me to speak and think precisely. If we do The Right Thing reflexively and unconsciously, without recognizing that it’s optional, there’s a risk that we could do The Wrong Thing just as reflexively and just as unconsciously. Since I wanted to see my family, and since I wanted to be a good husband and a good father, I had chosen to go home. I was not compelled to go home. In that case, going home was a choice—one that was important to me, and that I was happy to make, and that would serve me well, but a choice nonetheless.

So… you don’t have to automate, not even for a fast embedded control system where the computation cycle time is 10ms. You could decide that your faith in the designers, your theory about how the system is constructed, and your confidence in how all its parts work together are good enough for you. You might decide that you can trust the designers, the theory, and the implementation. You could subject the code to peer review. You might try simulating the system, or interacting with the real system in some super-slow mode. That is, you could proceed based on theory and on trust, even on that kind of system. Some people do exactly that: they select tools thoughtlessly, or don’t use them at all. You don’t have to use appropriate tools, because you don’t even have to test.

But if it’s important for you to learn things about that system, things that might cause intolerable problems for real customers in the real world; if it’s important to you to perform experiments in which you vary the conditions you set up, the inputs you provide, and the behaviours you observe; and if it’s important for you to do all of that quickly, in ways that are beyond the bounds of your unaided perception, then you’ll probably choose to select and use appropriate tools to help you test, and to develop the skills to use them expertly. In that case, using tools is a choice: one that’s probably important to you, and that you’ll be happy to make, and that will serve you well, but a choice nonetheless.

You don’t have to use tools, but you want to use tools, and you can choose to use tools. And that’s fine.