
Hiring & Bias, Part 2: Revealing the Pitfalls

Part 1 of this series defined heuristics and cognitive biases and established that everyone is subject to them, whether they know it or not. In Part 2, we’ll diagnose four specific biases that commonly plague interviewers.

Halo effect

A friend of yours recently started dating someone named Amy. You’ve met her a few times, she’s been friendly, and you’ve enjoyed her company. Later you hear that Amy has applied for a job at your company. The hiring manager asks you a few questions to gauge whether she’d be a good fit for the role:

  • Is she smart?
  • Is she hard-working?
  • Is she good with numbers?

You probably don’t know any concrete facts that would inform an educated answer to those questions (e.g. Amy’s test scores, work habits, or math skills). Nevertheless, you’re more likely than not to answer “yes” to each question. Your general positive attitude toward Amy bleeds into your evaluation of her specific traits, even those about which you objectively know nothing.

This is called the halo effect, because it inclines us to judge other people as all-good (or all-bad, in which case it is sometimes called the “devil” or “horns” effect).

This bias was named by Edward Thorndike. In 1920, he published a study demonstrating that a flight commander, when asked to rate his cadets across many different dimensions, tended to give ratings that correlated unduly across fundamentally independent attributes. For example, cadets rated as having an exceptional physique were also disproportionately likely to earn high ratings for intelligence.

In interviews, this poses an obvious risk: A candidate impresses us in one particular regard. Perhaps they are an outstanding culture fit. In reflecting on the interview, we might consequently overestimate their qualification for other, unrelated requirements of the role, like their aptitude with spreadsheets.

First impressions

The halo effect can also arise from baseless, ephemeral judgments. All it takes is a friendly greeting, a firm handshake, or the sight of a face to form a first impression.

Like the halo effect, first impressions are self-fulfilling. Even when we encounter evidence that contradicts our impression, it’s cognitively easier to excuse a conflicting observation and to continue believing in our impression than to change our position. Here’s how that might go in our head:

This guy is great!… Wait, what did he just say?… Eh, that’s an easy mistake. Maybe I misheard. I’m sure what he really meant was this…

In experiments repeated around the globe, Alex Todorov and others have demonstrated the influence of first impressions: research subjects can predict the outcome of political elections with 70% accuracy by evaluating the competence of candidates after viewing only their campaign portraits… for as little as a tenth of a second.

If large public elections can be detectably influenced by the glimpse of a face, then it’s alarmingly easy to imagine how that, or something equally trivial – a person’s height, a good hair day, or nice perfume – might contaminate a single interviewer’s decision.

Peak-end rule

Whereas our judgments of other people may be heavily biased by first impressions, beginnings have relatively little influence on how we remember events. Our overall impression of any experience – how much pleasure or pain we feel during a brief incident, a vacation, or even an entire lifetime – depends on our recollection of a subset of snapshots drawn from the experience.

The peak-end rule posits that we’re very lazy rememberers. By default we’ll most heavily – or, really, only – consider the two easiest-to-recall snapshots. We most easily recall:

  1. the high- (or low-) light of the experience (the “peak”)
  2. the last part of the experience (the “end”)

Psychologists have demonstrated the peak-end rule by inflicting mild pain on subjects using the cold pressor test. They administer the test three times, and each trial starts with a 60-second wrist-deep soak in icy water. In one version (Trial A), the trial ends right after those 60 seconds have elapsed. In the other (Trial B), the hand remains in for an extra 30 seconds while the experimenter slowly mixes in warmer water, ever-so-slightly relieving the subject’s pain. In both versions, when time is up, the subject removes her hand and receives a warm towel. After her second soak, the subject is asked which version of the test she would prefer to receive for her third trial.

Subjects should unambiguously prefer the shorter trial (Trial A). The total amount of pain they’ve endured (i.e. the area under the pain-over-time curve) is absolutely, unequivocally less in the shorter, 60-second trial. But the majority of subjects choose the longer trial (Trial B), because their memory of the experience is biased by the relief they feel as the warmer water is mixed in – a more pleasant end.
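To make the comparison concrete, here’s a minimal sketch in Python. The pain numbers are invented for illustration, and scoring memory as the average of the peak and the end is just one common way to formalize the rule, not the study’s actual model:

```python
def remembered_pain(samples):
    # One common formalization of the peak-end rule: the remembered
    # intensity of an experience is roughly the average of its peak and its end.
    return (max(samples) + samples[-1]) / 2

# Pain intensity sampled once per second (0 = no pain, 10 = severe).
trial_a = [8.0] * 60                                          # 60 s in icy water, then stop
trial_b = [8.0] * 60 + [8.0 - 0.1 * t for t in range(1, 31)]  # 30 extra seconds, slowly warming

for name, trial in [("Trial A", trial_a), ("Trial B", trial_b)]:
    total = sum(trial)               # "area under the curve": all pain actually endured
    memory = remembered_pain(trial)  # what the peak-end rule says we keep
    print(f"{name}: total pain = {total:6.1f}, remembered pain = {memory:.1f}")

# Trial B involves strictly more total pain, yet its gentler ending gives it
# the lower remembered score - and it is the trial most subjects ask to repeat.
```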

According to the peak-end rule, our recap of an interview is predisposed to be dominated by one or two memorable moments, with less consideration for the events in between. For example, maybe a candidate knocks a particularly tough, arcane question out of the park in the middle of the interview and finishes with some strong closing arguments. We’d be likely to construct a positive memory of her performance, even if she also gave several subpar, less remarkable answers.

Illusion of validity

Whereas the danger of the previous biases arises from making decisions based on too little real information, surplus evidence can be problematic too. The illusion of validity predisposes us to believe that having more observations always equates to a surer decision.

In a fairly benign case, maybe we have duplicative evidence. We review a resume for an entry-level candidate who earned a high SAT score and also attended a prestigious university. Both are (arguably) positive signals, but in combination neither fact provides much more usable, independent information than the other. (A high SAT score is a prerequisite to get into a good university.) Still, we feel like we have twice as many important signals, and therefore have more confidence in our impression.
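To see why duplicative evidence shouldn’t double our confidence, here’s a small simulation. It’s a toy model with made-up numbers – a hypothetical “aptitude” drives both signals, and university prestige is assumed to track SAT scores closely – not real admissions data:

```python
import random

random.seed(0)

def corr(xs, ys):
    # Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

sats, prestiges, performance = [], [], []
for _ in range(10_000):
    aptitude = random.gauss(0, 1)               # the thing we actually care about
    sat = aptitude + random.gauss(0, 0.5)       # noisy measurement of aptitude
    prestige = sat + random.gauss(0, 0.3)       # admissions largely track the SAT
    on_the_job = aptitude + random.gauss(0, 1)  # eventual job performance
    sats.append(sat)
    prestiges.append(prestige)
    performance.append(on_the_job)

both = [(s + p) / 2 for s, p in zip(sats, prestiges)]

print(f"SAT alone vs. performance:        {corr(sats, performance):.2f}")
print(f"SAT + prestige vs. performance:   {corr(both, performance):.2f}")

# Because prestige is almost entirely determined by the SAT in this model,
# folding it in barely changes the correlation with later performance:
# two signals, but hardly any more information than one.
```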

In the worst case, the illusion of validity tricks us into making confident decisions even when the signals are totally meaningless. Daniel Kahneman, the godfather of cognitive bias research, cites his experience performing Israeli army officer evaluations as an example of what can go wrong.

Kahneman placed candidates in groups and subjected them to an elaborate physical obstacle course. His team of interviewers (some of whom were also psychologists) observed closely and took diligent notes to distinguish the leaders in the group from the followers. Their rigorous methods and the realistic setting led them to the natural conclusion that a leader identified in the test was sure to be a leader of tomorrow, and thus a strong candidate for army officer.

The observers were so confident in their method that for years, they didn’t bother to validate their predictions by analyzing later performance reviews of the candidates whom they’d promoted. When they did, their predictions turned out to be hardly more useful than a coin flip.

Most strikingly, even after Kahneman’s team learned these results, the obstacle course remained a feature of the screening process longer still, because the method seemed so productive that the team made excuses and tried to discredit the evidence of its futility.

If an elaborate experiment conceived of by professional psychologists can fail this spectacularly at producing meaningful signals, then we should be very skeptical of our own interview questions – even if they feel targeted and productive.

Overconfidence and the bias blind spot

Cognitive biases lead us to make poor decisions. We go to great lengths to gather meaningless data. We discard perfectly good information. We invent narratives, sometimes baseless ones, to fill in the gaps. In spite of these flaws in our intuition, we plow ahead confidently. This failure – to recognize the holes in our logic and the low reliability of our decisions – is a symptom of yet another bias: the overconfidence effect.

Cruelest of all, even when we think we might have mastered this, we’re wrong. Emily Pronin, coiner of the phrase “bias blind spot,” has demonstrated that learning about cognitive biases only helps us diagnose them in other people’s behavior. Subjects who’ve been instructed about a bias – even immediately before being put to the test – are still no less likely to succumb to the bias and overrate the objectivity of their thought process.

This would be a pretty hopeless place to end this series: concluding that we have no control over the systematically bad decisions we make. Though eliminating the influence of bias on our decisions in certain situations might be next to impossible, re-engineering – or, better, completely avoiding – those situations is not. In Part 3 (coming soon), we’ll see how this idea applies to interview decisions.
