The Linda Problem
In their most famous experiment, Kahneman and his distinguished collaborator Amos Tversky asked subjects about Linda. Here’s how they described her:
Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Then they asked this simple question:
Which alternative is more probable:
1. Linda is a bank teller; or
2. Linda is a bank teller and is active in the feminist movement?
About 85–90% of undergraduates at several major US universities chose the second option. But logically, two things being true together cannot be more likely than one of them being true alone: every feminist bank teller is also a bank teller, so there must be at least as many bank tellers as feminist bank tellers, and the first option is the right answer.
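The logic at work here is the conjunction rule of probability: for any two events A and B, P(A and B) can never exceed P(A). A minimal sketch, using entirely made-up base rates for illustration, shows why the conjunction can never win:

```python
import random

random.seed(0)

# Hypothetical illustration with invented numbers: assume 5% of a
# simulated population are bank tellers and 30% are feminists.
population = [
    {"teller": random.random() < 0.05,
     "feminist": random.random() < 0.30}
    for _ in range(100_000)
]

# Count the single category versus the conjunction.
tellers = sum(p["teller"] for p in population)
feminist_tellers = sum(p["teller"] and p["feminist"] for p in population)

# Every feminist bank teller is also a bank teller, so the
# conjunction count can never exceed the single-category count.
assert feminist_tellers <= tellers
print(f"bank tellers: {tellers}, feminist bank tellers: {feminist_tellers}")
```

Whatever base rates you plug in, the assertion holds: the set of feminist bank tellers is a subset of the set of bank tellers, so choosing the conjunction is always the wrong bet.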
The naturalist Stephen Jay Gould got the answer correct but still struggled with it, saying “a little homunculus in my head continues to jump up and down, shouting at me, ‘But she can’t just be a bank teller; read the description.’” Kahneman responded that the “little homunculus” is Gould’s System 1 (fast, intuitive) mode of human reasoning speaking to him in insistent tones, adding that as in the Müller-Lyer illusion (see opposite), “the fallacy remains attractive even when you recognise it for what it is”.
The horizontal lines appear to be different lengths, even after we have measured them and proven to ourselves that they are identical. San foragers of the Kalahari are all but immune to the illusion, while Americans are unusually susceptible.
The biases that drive us
To be a good diagnostician, a doctor needs to learn the language of medicine. Likewise, to be a good thinker, with a deeper understanding of the judgments and choices we and others make, requires a richer vocabulary. There are distinct patterns, says Daniel Kahneman, in the errors people make, rooted in predictable biases in human thinking.
“When the handsome and confident speaker bounds onto the stage, for example, you can anticipate that the audience will judge his comments more favourably than he deserves. The availability of a diagnostic label for this bias – the halo effect – makes it easier to anticipate, recognise and understand.”
Here are some of the most common biases:
The halo effect: We put too much weight on first impressions.
The anchoring effect: We adjust our estimates to accommodate arbitrary numbers.
The availability heuristic: We base our judgments on readily available memories.
The affect heuristic: We put too much weight on judgments that are emotionally laden.
Base-rate neglect: We accept what is causally possible over what is statistically probable.
Competition neglect: We expect outcomes to be determined by our efforts alone, not the influence of competitors.
Framing effects: We vary our judgments depending on how identical information is presented or framed.
Hindsight bias: We overestimate the accuracy of our past predictions, believing that we knew it all along.
The illusion of skill: We attribute success to talent rather than luck.
The illusion of validity: We hold on to our beliefs in the face of contradictory evidence.
The planning fallacy: We plan around best-case scenarios rather than what is statistically likely.
Loss aversion: We are more averse to losses than we are attracted to equivalent gains.
Narrative fallacy: We create coherent causal stories to make sense of haphazard events.
Priming effects: We overemphasise a concept if we are “primed” with a related concept.
Representativeness bias: We lean heavily on stereotypes to compensate for partial information.
Substitution: We tackle a difficult question by answering a much simpler related question.
The sunk-cost fallacy: We continue investing in an established project because of what we have already put in, rather than judging it on its future prospects.
What-you-see-is-all-there-is (or WYSIATI): We draw strong conclusions from incomplete information.