Blog: Automation Bias, Documentation Bias, and the Power of Humans

A few weeks ago I went down to the U.S. Consulate in Toronto to register Ariel, my daughter, as an American citizen born abroad. (She’s a potential dualie, because she was born in Canada to an American parent: me. I am a dualie, born in the U.S. to Canadian parents. Being born a dual citizen is a wonderful example of a best practice. You should follow it. But I digress.)

The application process is, naturally, fraught with complication and bureaucracy. There’s also a chilling and intimidating level of security; one isn’t allowed to bring anything electronic into the Consulate at all. No cell phones, no PDAs, and certainly no laptop computers. That means no electronic records, and no hope of looking anything up. So one has to prepare.

There’s a Web site for the Consular services. One of the first items that one sees on the site is a link for telephone inquiries. The telephone services are for visa information, not for general information; and that visa information costs USD$0.90 per minute for a recorded system with no operator. (Oddly, that’s the price for calls from the U.S.; calls from Canada are cheaper, at CAD$0.69 per minute.) I didn’t test that.

With only a little digging, I was able to find information related to registering a birth abroad. I gathered the information and documents that I figured I needed, and took it all down to the Consulate. I was getting ready to travel the next day, and so in typical fashion, I pushed things out to the noon deadline for receiving applications. I watched the clock on the car anxiously, parking at 11:53 and getting to the Consulate at 11:55. “Wow, that’s pushing it,” said the security guard. “Last one today.”

When I spoke to the friendly, helpful lady behind the counter (I mean that; she was genuinely friendly and helpful), she told me some things that the Web site didn’t.

  • The application form itself is online, and these days it’s one of those PDFs that has input fields, so everything can be nice and tidy. Again, though, there are some fields in the form that have several possible answers. There is some helpful information available, but I still had questions.
  • The consular officers want to see original documents, but accept and keep only photocopies of them. You need to come with your own photocopies. If you don’t, it costs you $1.00 per document—and there are lots of documents. This isn’t noted anywhere on the Web site that I could see.
  • On one of the Web pages listing documentation requirements, it says “In certain cases, it may be necessary to submit additional documents, including affidavits of paternity and support, divorce decrees from prior marriages, or medical reports of blood compatibility.” Well, what cases? The page doesn’t tell me, and getting it wrong means an extra trip. The lady behind the counter reviewed what I had brought, answered a number of questions, and told me exactly what to bring next time.

As I travel around, I sometimes see an implicit assumption that documents tell us all we need to know. Yet documents are always a stand-in for some person, an incomplete representation of what they know or what they want. They’re time-bound, in that they represent someone’s ideas frozen at some point in the past. They can’t, and don’t, answer follow-up questions. As Northrop Frye once said, “A book always says the same thing.” Yet if we look more closely, not even ideas that are carefully and thoroughly debated can be expressed unambiguously. That’s why we have judges. And lawyers.

The next thing that happened emphasized this. After I left the Consulate, I returned to my car. At the collection booth, the posted time was 12:20. I’d been less than half an hour, which is good because parking at that garage costs $3.00 per half-hour. I handed the attendant my ticket. The charge was $6.00.
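The post doesn’t spell out the garage’s billing rule beyond “$3.00 per half-hour,” but assuming the common scheme of charging for each half-hour or part thereof, a minimal sketch shows why a clock that ran roughly a quarter-hour fast doubled the fee:

```python
import math

RATE = 3.00  # dollars per half-hour, from the posted sign

def parking_fee(minutes_parked: int) -> float:
    """Charge for each half-hour or part thereof (an assumed rounding rule)."""
    periods = math.ceil(minutes_parked / 30)
    return periods * RATE

# Ticket's claimed check-in: 11:40 -> 12:20 is 40 minutes, i.e. two periods
print(parking_fee(40))  # 6.0
# Actual check-in: ~11:53 -> 12:20 is under 30 minutes, i.e. one period
print(parking_fee(27))  # 3.0
```

A 13-minute error in the entry timestamp pushes the stay across the 30-minute boundary, so the disputed amount is exactly one billing period.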

“What?! I’ve only been gone for 25 minutes.”

She looked at the ticket. “Sorry, sir. You checked in at 11:40.”

“No way,” I said. “I know what time I checked in; I was running late. It was at least 12 minutes later than 11:40. I got to the entrance to the Consulate, just over there, at 11:55. No way I could have taken 15 minutes to walk 75 metres!” She showed me the ticket. It said 11:40. “That’s impossible. I want to check the clock.”

The difference was only $3.00, but I was furious. I exited the garage, drove around to the entrance, and checked the display. It read 12:24, the correct time. I pushed the button and pulled out a ticket; it too read 12:24. To her credit, the attendant appeared, checked the clock, and asked to see the ticket I had just printed. “12:24. I’m sorry, sir, there’s nothing I can do.” Quite true, no doubt. I gritted my teeth and paid the extra three bucks.

In this case, the (clearly fallible) machinery and the (clearly fallible) documentation were more credible than I. I didn’t check the ticket on the way in. And yet I know when I arrived, and I know that there must have been some kind of failure with the machinery. A one-off? A consistent pattern? Happens only at a certain time of the day? A mechanical problem? A software problem?

All the way home, I pondered over how the failure had occurred, and how one might test for it. But what impressed me most about my experience with the Consulate’s Web site, and the consular officer, and the parking ticket machine, and the parking attendant, was the way in which we invest trust, to varying degrees and at various times, in machines and in documents and in people. When is that trust warranted, and when is it not?

Postscript: Just now, as I attempted to publish this post, the net connection at this hotel was suddenly unavailable. Again.

Want to know more? Learn about upcoming Rapid Software Testing classes here.

6 responses to “Automation Bias, Documentation Bias, and the Power of Humans”

  1. says:

    You say "I didn't check the ticket on the way in", that would suggest you yourself trusted the machinery?

    Actually, more telling than trust in machines is the *lack* of trust humans put in other humans, in this case not trusting the parking attendant to make changes to the ticket ("there's nothing I can do").

    One of the most infuriating aspects of living in a world full of technology, to me, is the apparent blindness of the designers of these technologies to the possibility of mistakes, errors, goofs and so on. But I don't see it as an issue with technology per se; as you noted at the Consulate, the same issues also crop up with *procedures*, irrespective of the sophistication of the technology bound up in them.

    I think the source of this blindness is that to allow for errors would require the designers (of machines or procedures) to assume that some human at some point is going to be trusted with *correcting* the mistake: for instance, in the attendant's case, gauging your sincerity (or maybe recalling your car and judging that you came in less than a half hour before) and taking the smaller amount. That is, to delegate some power to the human in charge.

    The really, really infuriating question is this: if the attendant isn't there to allow for mistakes and correcting them, in other words to behave *as* a human, what the heck is she there for?

  2. Arjan Kranenburg says:

    Have you also noticed that people get angry when this trust is violated? That wasn't just you; it happens to me and, I guess, all of us. Shouldn't we be angry at ourselves for trusting the person, machine, or software? Because while that woman totally trusted the machine, you trusted the garage as a system (machines and personnel) when entering (I assume you did, otherwise you would have checked the ticket or wouldn't have gone in at all).

    But how would the world be without trust? We have to trust every now and then. And we have to learn what to trust and what not. Sometimes we make mistakes in that, but I guess you will check the time on every next ticket you get from a parking garage.
    I'd say in real life it is a risk assessment on what to trust and what not. Isn't it the same in software testing?

  3. Anne-Marie says:

    "When is that trust warranted, and when is it not?"

    Is that the right question to ask though? We trust machinery because in many cases we perceive it makes our lives simpler and easier. In effect we choose to trust. Often our trust is misplaced and we get frustrated.

    In testing, I think this is apparent when asked to provide evidence that a test passed or failed. It always seems a little odd to me that a snapshot of a result is deemed acceptable proof that a test passed. Really it's just a graphic; anyone could have doctored the result. However, people choose to trust that but not a verbal "ok" from the tester.

    Perhaps this is a case of stating the bleedin' obvious, but I think along with humanity comes failure, and so perhaps nothing warrants our trust. It's just really tough (and I suspect lonely) to live like this, so I guess people choose to trust instead.

  4. Pradeep Soundararajan says:

    Excellent narration

  5. Zach Fisher says:

    As frustrating as it was for you, I enjoyed reading this; if only as a spectator watching the unsuspecting protagonist caught up in a Rube Goldberg moment.

    In addition to trust, I think a question worthy of pontificating is: how do we arrive at our beliefs? Specifically: You had an experience upon arriving at the parking lot that led you to believe something to be true, even though evidence observed later pointed to the contrary.

    Some nagging questions for me are:

    – How did your experience come to be so easily discredited? Human failure? Machine failure? Both? Neither?

    – How does showing that one could be wrong affect the perception of your experience? Do you still believe your experience was true? Do you still believe you were right? Why?

    – Could it be inferred from this episode that some people believe human failure is more likely to occur than machine failure?

    – How do we define human failure?

    – Assuming that some believe machine failure IS human failure, what factors reinforce that delineation?

    Idealist questions, for sure. Maybe these are just pointless, but I am interested in your insights. Nonetheless, I hope the story ends well for you soon.


    P.S. I see a lot of irony in your response to the attendant. How many developers have said, "That's impossible!" when we bring them face-to-face with something that challenged their coded assumptions. Perhaps a developer somewhere is reading your story and exclaiming, "Sweet revenge!".

  6. Portuga says:

    Well, as much as we can't trust automation and documentation, there's much to be said about trusting yourself and other people. The truth is that people are known to lie, to forget and to get things confused. And being people we, me and you, and you too over there who are reading this comment, we all forget stuff, get things confused. Some of us even lie. So this issue of warranting trust is universal and timeless. When should you trust your own memory that sometimes tricks you? When should you trust your friend? When should you trust someone you don't know? When should you trust a python with your 2-year-old daughter? When should you trust a machine, a document, a rule, a law? We can ask all these questions but the only possible answer is: there is no rule, it's a case-by-case thing. Well, except on the python example; that was definitely irresponsible to a criminal extent.
