
Interview and Interrogation

In response to my post from a couple of days ago, Gus kindly provides a comment, and I think a discussion of it is worth a blog post on its own.

Michael, I appreciate what you are trying to say, but the simile doesn’t really work 100% for me; let me try to explain.

The simile has prompted you to think and to question, so in that sense, it works 100% for me. Triggering thought is, after all, why people use similes. (See also Surfaces and Essences: Analogy as the Fuel and Fire of Thinking.)

I would apply lean principles and cut some waste from your interview process. I will fail the candidate as soon as she gives me the first wrong answer.

I have 5 questions and all have to be answered correctly to hire the person for a junior position (release 1).

Interview candidate A:
Ask question 1 OK
Ask question 2 FAIL
Send candidate A home

Second interview with candidate A:
Ask question 1 OK
Ask question 2 OK
Ask question 3 FAIL
Send candidate A home

Third interview with candidate A:
Ask question 1 OK
Ask question 2 OK
Ask question 3 OK
Ask question 4 OK
Ask question 5 OK

Hire candidate A

All right. You seem to have left out something important in your process here, which I would apply after each step—indeed, after each question and answer: make a conscious decision about your next step. To me, that requires continuous review of your list of questions for relevance, significance, sufficiency, and information value. Interviewing is an exploratory process. A skilled interviewer will respond, in the moment, to what the candidate has said. A skilled interviewer will think less in terms of “pass or fail”, and more in terms of “What am I learning about this candidate? What does the last answer suggest I should ask next? What other information, exclusive of the answer, might I apply to my decision-making process? What else should I be looking for?” When the candidate gets the answer wrong, the skilled interviewer will ask “Was it really wrong? Maybe there are multiple right answers to the same question. Maybe she didn’t understand the question because I asked it in an ambiguous way, and she gave a right answer to an alternative interpretation. Maybe her answer was a question for me, intended to clarify my question.”

I can’t emphasize this enough: like interviewing, testing is about far more than pass or fail. Testing is about exploration, discovery, investigation, and learning, with the goal of imparting what we’ve learned to people who matter. Testing is about trying to understand the product that we’ve got, with the goal of revealing information that helps our clients decide if it’s the product they want. Testing is usually (but not always) focused on finding evident problems, apparent problems, and potential problems, not only in our products, but in our ideas about our products. Testing is also about finding problems in our testing, and every one of the “fail” moments above is a point at which I would want to consider a problem with the test. (The “pass” moments are like that too, if I really want to do a great job.)
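To make the contrast vivid, here is what the simile looks like if we take it literally and render the fail-fast interview as code. This is a deliberately simplistic, hypothetical Python sketch; the questions, the candidate, and the function names are all invented for illustration:

```python
# A hypothetical rendering of the fail-fast interview: each "question" is a
# check that yields only a single bit of information, and the first failure
# ends the whole session.

def interview(candidate, questions):
    """Ask questions in order; send the candidate home at the first failure."""
    for number, question in enumerate(questions, start=1):
        if question(candidate):
            print(f"Ask question {number}: OK")
        else:
            print(f"Ask question {number}: FAIL")
            print("Send candidate home")
            return False
    print("Hire candidate")
    return True

# Invented stand-ins for the five questions for a junior position.
questions = [
    lambda c: c["knows_basics"],
    lambda c: c["can_debug"],
    lambda c: c["writes_tests"],
    lambda c: c["communicates_well"],
    lambda c: c["learns_quickly"],
]

candidate_a = {
    "knows_basics": True,
    "can_debug": False,   # fails question 2 on the first interview
    "writes_tests": True,
    "communicates_well": True,
    "learns_quickly": True,
}

interview(candidate_a, questions)
```

Notice what the sketch records: one bit per question, and nothing else. There is no step in the loop at which the interviewer reflects on whether the question was ambiguous, whether there might be multiple right answers, or what the last answer suggests should be asked next; all of that information is simply discarded.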

At the point when candidate A wants to be promoted to a senior position (translate: the next release of the software), I will prepare 5 more questions probing the new skills and responsibilities. As I have automated the first 5 questions, I can send her a link to a web site where she will have to prove that she hasn’t forgotten the first 5 before she can even be considered for the new position.

I’d do things slightly differently.

First I would ask “What would prompt me to ask the same questions again? Are those still the most important questions I could ask as she’s heading for her new role? What reason do I have to believe that she might have lost some capability she previously had? Are there other questions related to her old role—not necessarily to her new one—that I should ask that might be more revealing or more significant?” Note that there might be entirely legitimate reasons to believe that she might have backslid somehow—but at that point, I’d also want to ask “What are the conditions that would have allowed her to backslide without me noticing it—and what could I do to minimize those kinds of conditions?”

Then there would be another question I’d ask: “What if she has learned to answer a specific question properly, but is not adaptable to the general case? Should I be asking the same question in a different way, to see if she gives the same answer? Should I be asking a similar question that has a different answer, and see if she notices and handles the difference?”

Now: it might be costly to vary my questions, so I might simply shrug and decide just to go with the ones I’d asked before. But the point of evaluating my process is to ask, “How might I be fooling myself in the belief that I still know this person well?”
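Again taking the simile literally: re-sending the same five automated questions is like replaying one fixed, canned check forever, while asking “the same question in a different way” is like varying the inputs and checking a relation that must hold. Here is a hypothetical Python sketch of the difference; the add function stands in for whatever capability we are probing, and everything in it is invented for illustration:

```python
import random

def add(a, b):
    """Stand-in for the capability being probed."""
    return a + b

# The canned question: one fixed case. A candidate (or a product) can learn
# to answer this specific question without being adaptable to the general case.
assert add(2, 2) == 4

# The same question asked in different ways: vary the inputs and check
# relations that must hold, so a memorized answer to one case is not enough.
for _ in range(100):
    a = random.randint(-1000, 1000)
    b = random.randint(-1000, 1000)
    assert add(a, b) == add(b, a), f"add({a}, {b}) is not commutative"
    assert add(a, b) - b == a, f"add({a}, {b}) disagrees with subtraction"

# A similar question with a different answer, to see whether the difference
# is noticed and handled.
assert add(2, -2) == 0

print("All varied questions answered consistently.")
```

The variation costs more than replaying the canned case, but what it buys is some protection against fooling myself into believing that a memorized answer means general capability.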

Assuming she answers the 5 automated questions correctly, at this point I will do the interview for the senior role.

Interview candidate A for senior role:
Ask question 6 OK
Ask question 7 FAIL
Send candidate A home

and so on.

I don’t see a problem with this process as long as I am allowed to use everything I learn from the feedback from the candidate up to question “N” to adapt and change all the questions greater than “N”.

Up until this point, you haven’t mentioned that, and your description of your process doesn’t make that at all clear. You’ve only mentioned the “pass” and “fail” parts of “everything I learn from the feedback”. Now, you might be taking that into account in your head, but notice how your description, your process model, doesn’t reflect that—so it becomes easy to misinterpret what you actually do. In addition, you’ve focused on adapting and changing all the questions greater than N—but I’d be interested in the possibility of adapting and changing all the questions less than or equal to N, too.

More importantly: qualifying someone for an important job is not about making sure that they can get the right answers on a canned test, just as testing a product is not about making sure that the functions produce expected results for some number of test cases. The specific answers might have some significance, but if I’m serious about hiring the right people for the job, I don’t want to make my decisions solely by putting them in front of a terminal, having them fill out an online form, and checking their answers. I want to evaluate them on a number of criteria: do they respond quickly, in a polite and friendly way? Do they work well with others? Are they appropriately discreet? Are they adaptable? Can they deal with heavy workloads? Do they learn quickly? In order to learn those things, I need to do more than ask pass-or-fail questions. I need to have unscripted, spontaneous, and free-flowing conversation with them; interview and interaction, and not just interrogation. You see?

1 reply to “Interview and Interrogation”

  1. As usual, Michael, great read.

    Michael replies: Thank you.

    Let me tell you what happened here. In your first post, you made a remarkable simile; I expanded on it a bit, adding some variations; and you did the same again. But now I don’t know whether we are talking about an interview or software delivery; reality and imagination got slightly mixed up 🙂

    What if we were talking about an interview? What if we were talking about software delivery? What if we were talking about testing?

    The point of metaphor and simile is not only to reach conclusions, but also to question premises and to propose new premises and conclusions. In that sense, getting reality and imagination mixed up is a feature.

    I take all your points, and appreciate them. As you seem to like this kind of discussion, now I want to challenge you to another extension of your own simile.

    How about if, instead of asking the candidate questions and expecting answers, you have a discussion with the candidate and coach her until she reaches the correct answer?

    There’s nothing intrinsically wrong with doing that. If your job is to develop the candidate (as a trainer or a manager would), that’s one thing. If your job is to be an interviewer or investigator, that’s another. You see?

    What kind of test am I doing? Or better, what software development approach am I using?

    So far as I’m aware, all software development does something like what you propose. One way to distinguish different approaches is to look at how, how often, and how many times they iterate through the kind of cycle you’re describing.

    P.S. Let me know if you create a separate blog post in reply to my comment; I found this almost by mistake, following another discussion on this discussion 🙂

    Not this time. πŸ™‚ And it’s not a problem if you read everything.

