In Part One of this series, we learned what cognitive biases are and who they affect (everyone). We also suggested that for a fast-growing team trying to make good hiring decisions with inexperienced interviewers (as at SeatGeek), cognitive biases present a considerable hidden threat.
Here we’ll enumerate four particularly insidious interview-related biases in hopes of better understanding how to avoid them.
Halo effect

Let’s say a friend of ours recently started dating someone named Amy. We’ve met her a few times, she’s been friendly, and we’ve enjoyed her company. Our friend is lucky to have found her. Later we hear that Amy has applied for a job at our company, and the hiring manager asks us whether she’d be a good fit for the role:
- Is Amy smart?
- Is Amy hard-working?
- Is Amy good with numbers?
Even if we don’t know concrete facts that would inform an educated answer to those questions (e.g. Amy’s test scores, work habits, or math skills), we’re more likely than not to answer “yes” to all three. Our overall good vibe about Amy will bleed into our evaluation of her specific traits, even those about which we objectively know nothing.
This is called the halo effect: a favorable impression of one trait casts a glow over everything else, inclining us to judge other people as all-good (or, when the halo is reversed, all-bad).
In interviews, the risk is obvious. A candidate flatters us or makes a few good jokes, and we leave the room feeling good. Afterward, the interview’s other small details coalesce around that warm feeling, and we end up with an oversimplified memory of an all-around likable candidate.
The halo effect is especially problematic because it can take root even when we’ve hardly met a person. Anyone who knows an active Tinderer can attest to the fickleness of first impressions. In face-to-face interactions, all it takes is a friendly greeting, a firm handshake, or even just the sight of a face for a halo to begin to form.
Worse, first impressions are often self-fulfilling. Once an impression is formed, we come to expect certain reactions. Even when our impression is wrong and the reaction is different, it’s cognitively easier to excuse a conflicting observation and to continue perceiving our impression as true than to change our position. Here’s how that might go in our head:
This guy is great!… Wait, what did he just say?… Eh, that’s an easy mistake. Maybe I misheard. I’m sure what he really meant was this…
In experiments replicated around the world, Alex Todorov and others have demonstrated the power of first impressions: research subjects can predict the outcomes of political elections with roughly 70% accuracy just by judging the competence of candidates from their campaign portraits… viewed for as little as a tenth of a second.
If large public elections can be detectably influenced by the glimpse of a face, then it’s alarmingly easy to imagine how that or something equally trivial – a person’s height, a good hair day, or nice perfume – might contaminate a single interviewer’s decision.
Peak-end rule

Whereas our judgment of other people can be heavily biased by first impressions, when we remember events, beginnings have relatively little influence. Our overall impression of any experience – how much pleasure or pain we feel during a brief incident, a vacation, or even an entire lifetime – depends on our recollection of a small subset of snapshots drawn from the experience.
The peak-end rule posits that we’re very lazy rememberers. By default we’ll most heavily – or, really, only – consider the two easiest-to-recall snapshots. We most easily recall:
- the high- or low-light of the experience (the “peak”)
- the last part of the experience (the “end”)
Psychologists have demonstrated the peak-end rule by inflicting mild pain on subjects using the cold pressor test. Each subject takes the test three times, and every trial starts with a 60-second wrist-deep soak in icy water. In one version (Trial A), the trial ends as soon as those 60 seconds have elapsed. In the other (Trial B), the hand stays in for an extra 30 seconds while the experimenter slowly mixes in warmer water, raising the temperature by ~1º and ever-so-slightly relieving the subject’s pain. In both versions, when time is up, the subject removes her hand and receives a warm towel. After her second soak, the subject is asked which version of the test she’d prefer for her third trial.
Subjects should unambiguously prefer the shorter Trial A. In math-y terms, the integral (i.e. the sum) of the pain they endure is strictly less than in the longer trial (60 painful seconds vs. 90). But the majority of subjects choose the longer Trial B, because their memory of the experience is biased by the slight relief they feel as the warmer water is mixed in – a more pleasant end.
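To make the arithmetic concrete, here’s a toy model of the two trials in Python. The pain values (8 on an arbitrary scale for the icy water, 7 once the warmer water is mixed in) are invented for illustration, and averaging the worst moment with the last moment is just one simple way to formalize the peak-end estimate:

```python
# Toy model of the cold pressor trials. Pain values are on an
# arbitrary scale and are assumptions for illustration only.
trial_a = [8.0] * 60                # Trial A: 60 s in icy water
trial_b = [8.0] * 60 + [7.0] * 30   # Trial B: same, plus 30 s slightly warmer

def total_pain(trial):
    """Experienced pain: the integral (sum) of momentary pain."""
    return sum(trial)

def remembered_pain(trial):
    """Peak-end estimate: average of the worst moment and the last moment."""
    return (max(trial) + trial[-1]) / 2

for name, trial in [("Trial A", trial_a), ("Trial B", trial_b)]:
    print(f"{name}: total={total_pain(trial):.0f}, "
          f"remembered={remembered_pain(trial):.1f}")

# Trial A: total=480, remembered=8.0
# Trial B: total=690, remembered=7.5
```

Trial B inflicts strictly more total pain, yet its peak-end score is lower – which is exactly why most subjects volunteer to repeat it.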
According to the peak-end rule, our recap of an interview is predisposed to be dominated by one or two unforgettable moments, with little consideration for the events in between. For example, maybe a candidate knocks a particularly tough, arcane question out of the park in the middle of the interview, then finishes with some strong closing arguments. We’re more likely to construct a positive memory of her performance, even if she also gave several subpar, unremarkable answers.
Illusion of validity
Whereas the halo effect arises from making decisions on too little real evidence, a surplus of evidence can be problematic too. The illusion of validity predisposes us to believe that more observations always equate to a surer decision.
In a fairly benign case, maybe we have duplicative evidence. We review a resume for an entry-level candidate who earned a high SAT score and also attended a prestigious university. Both are (arguably) positive signals, but in combination neither fact provides much more usable, independent information than the other. (A high SAT score is one of the surest ways to get into a good university.) Still, we feel like we have twice as many important signals.
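A quick simulation shows how little that second résumé line adds. This sketch is purely hypothetical – the latent “ability” variable and the noise levels are invented – but it illustrates the point: when one signal largely determines the other, combining them barely improves our prediction of the thing we actually care about.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical model: a latent "ability" drives everything.
# All noise scales below are made-up assumptions.
ability = rng.normal(size=n)
sat = ability + rng.normal(scale=0.5, size=n)     # SAT tracks ability
prestige = sat + rng.normal(scale=0.3, size=n)    # admissions mostly track SAT
job_perf = ability + rng.normal(size=n)           # what we want to predict

def corr(x, y):
    """Pearson correlation between two signals."""
    return np.corrcoef(x, y)[0, 1]

print(f"SAT alone:      r = {corr(sat, job_perf):.2f}")
print(f"SAT + prestige: r = {corr((sat + prestige) / 2, job_perf):.2f}")
# Both come out around 0.63 -- the "extra" signal adds essentially nothing.
```

The two résumé lines feel like independent evidence, but statistically they’re close to one line counted twice.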
In the worst case, the illusion of validity tricks us into making confident decisions even when the signals are totally meaningless. Daniel Kahneman, the godfather of cognitive bias research, cites his experience performing Israeli army officer evaluations as an example of what can go wrong.
Kahneman placed candidates in groups and subjected them to an elaborate physical obstacle course. His team of interviewers (some of whom were also psychologists) observed closely and took diligent notes to distinguish the leaders in the group from the followers. Their rigorous methods and the realistic setting led them to the natural conclusion that a leader identified in the test was sure to be a leader of tomorrow, and thus a strong candidate for army officer.
The observers were so confident in their method that for years, they didn’t bother to validate their predictions by analyzing later performance reviews of the candidates whom they’d promoted. When they did, their predictions turned out to be hardly more useful than a coin flip.
Most strikingly, even after Kahneman’s team learned these results, the obstacle course remained a feature of the screening process longer still: the method felt so productive that the team sought excuses to dismiss the evidence of its futility.
If an elaborate experiment conceived by trained psychologists can fail this spectacularly at producing meaningful signals, then we should be very skeptical of our own interview questions – even if they feel targeted and productive.
Overconfidence and the bias blind spot
Cognitive biases lead us to make poor decisions. We discard perfectly good information that would help us. We invent sometimes baseless narratives to fill in the gaps. We go to great lengths to gather meaningless data. In spite of our intuitive flaws, we plow ahead with confident decisions. When we do this – fail to recognize the holes in our logic and the low reliability of our decisions – it’s symptomatic of yet another bias, which psychologists call the overconfidence effect.
Cruelest of all, even when we think we might have mastered this, we’re wrong. Emily Pronin, coiner of the phrase “bias blind spot,” has demonstrated that learning about cognitive biases only helps us identify their role in other people’s behavior. Subjects who’ve been instructed about a bias – even immediately before being put to a test – are still no less likely to succumb to the bias and overrate the objectivity of their decision-making.
This would be a pretty hopeless place to end this series of posts: concluding that we have no control over the systematically bad decisions we make. Fortunately, though eliminating the influence of bias on our decisions in certain situations might be next-to-impossible, re-engineering – or, better, completely avoiding – those situations is not. In Part Three (coming soon), we’ll see how this idea applies to interview decisions.
Want to learn more about biases? Here are some things to read: