How our minds play tricks on us
Kahneman and Tversky - Pt.1
No. 38 — read time 7 minutes
Welcome to The Soloist, a weekly newsletter where I share timeless ideas and insights about life, business, and creativity.
Today at a glance
How our minds play tricks on us
In the spring of 1969, Daniel Kahneman, a psychology professor at the Hebrew University in Jerusalem, invited another professor from his department, whom he barely knew, to give a lecture to his class of undergraduates.
That man was Amos Tversky.
At the time, Danny had no way of knowing that over the next 30 years the pair would become inseparable, or that their work on understanding how the human mind behaves would change the course of psychology forever.
The Law of Small Numbers
One of the pair's biggest a-ha moments was realizing that the models of human behavior used by economists were basically nonsense.
Economics asserts that humans always behave rationally and in their own best interest.
Given that premise, you'd expect humans to be what economists call "intuitive statisticians": people who understand how probability works on an intuitive level and use that knowledge to make decisions in their day-to-day lives.
But this couldn't be further from the truth. Here's an example:
A study of new diagnoses of kidney cancer in the 3,141 counties of the United States reveals a remarkable pattern. The counties in which the incidence of kidney cancer is lowest are mostly rural, sparsely populated, and located in traditionally Republican states in the Midwest, the South, and the West. What do you make of this?
Our minds start to look for causation. We might guess that a rural life offers cleaner air, cleaner water, fresh food with no additives, less pollution, and less stress.
What if the prompt said those counties had the highest incidence?
We might start hypothesizing that the reason has to do with poverty, lack of access to good medical care, high-fat diets and too much alcohol and tobacco use.
What we fail to see, at least at first, is that the phrase "sparsely populated" tells us these counties contain only a small sample of people.
With so few residents, the observed incidence rate could have swung to an extreme in either direction.
We understand that "large samples are more precise than small samples" but we don't intuitively understand that "small samples yield extreme results more often than large samples".
To further illustrate this point, imagine an urn with 50% red and 50% white marbles. Jack draws 4 marbles at a time and Jill draws 7 marbles at a time. Jack is 8 times more likely to draw an all-red or all-white handful than Jill is.
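That 8x figure is just arithmetic; here's a quick sketch of it (Python used purely for illustration):

```python
# Each marble is red or white with probability 1/2, independently,
# so a draw of n marbles comes out all one color with probability
# 2 * (1/2)**n  (the all-red case plus the all-white case).
def p_all_same(n: int) -> float:
    return 2 * (0.5 ** n)

jack = p_all_same(4)   # 0.125
jill = p_all_same(7)   # 0.015625
print(jack / jill)     # -> 8.0
```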
Small numbers lead to sampling error. And yet our minds look for causation, for stories, to explain anomalies all the time.
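To see how sampling error alone produces the kidney-cancer pattern, here's a toy simulation. The county sizes and the 1% incidence rate are invented numbers for illustration, not the study's data:

```python
import random

random.seed(0)

def frac_extreme(population: int, trials: int = 2000, rate: float = 0.01) -> float:
    """Fraction of simulated counties whose observed incidence is at
    least double the true rate, i.e. an 'extreme' result."""
    extreme = 0
    for _ in range(trials):
        # Each resident independently develops the condition with the
        # same underlying probability everywhere.
        cases = sum(random.random() < rate for _ in range(population))
        if cases / population >= 2 * rate:
            extreme += 1
    return extreme / trials

small = frac_extreme(population=100)    # a sparsely populated county
large = frac_extreme(population=2000)   # a populous county
print(f"small county: {small:.1%}, large county: {large:.1%}")
```

Even though the true rate is identical in both, the small county shows an extreme rate far more often, which is exactly the "small samples yield extreme results" point.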
Judgement vs. Prediction
The difference between a judgement and a prediction isn't obvious at first glance. To Danny and Amos, the distinction was clear.
A prediction is a judgement that involves uncertainty.
When dealing with uncertainty, especially under pressure, we tend to operate with the part of the brain that is fast, intuitive, and often called "the gut". This part of the brain relies on rules of thumb, or, as scientists call them, heuristics.
To paint this picture, let's try one of their tests.
When asked to predict which of the following fields of study a generic graduate student might be enrolled in, people's guesses tracked, on average, the size of the programs.
Business: 15%
Computer Science: 7%
Engineering: 9%
Humanities and Education: 20%
Law: 9%
Library Science: 3%
Medicine: 8%
Physical and Life Sciences: 12%
Social Science and Social Work: 17%
It's not a stretch to agree that Humanities and Education, a very large department, is more likely to contain a generic student than Library Science or even Computer Science.
These are the base rates, as they're called in statistics.
Then they sought to dramatize what happens in the brain when a bit more information about this generic student is provided.
Tom W is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to have little feel and sympathy for other people and does not enjoy interacting with others; he nonetheless has a deep moral sense.
With this information, they asked one group to go back to the fields of study and determine how "similar" Tom was to grad students in each of the nine fields. They then asked a second group to predict which department Tom is a student in.
Most people answering the question jump from similarity ("that guy sounds like a computer scientist!") to some prediction ("that guy must be a computer scientist!") and ignore the base rate (only 7%).
When presented with even a little information, however useless, we default to our gut instinct and allow ourselves to use mental stereotypes to inform our prediction, often with great certainty.
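What taking the base rate seriously looks like can be sketched with a toy Bayes calculation. The likelihood ratio below is an invented number, not anything Kahneman and Tversky measured:

```python
# Toy Bayes update: suppose Tom's description is 4x more likely to fit
# a computer science student than a non-CS student (invented number).
# Even then, the 7% base rate keeps the posterior modest.
base_rate = 0.07          # share of grad students in CS, per the list above
likelihood_ratio = 4.0    # assumed strength of the "sounds like CS" evidence

prior_odds = base_rate / (1 - base_rate)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(f"P(CS | description) = {posterior:.0%}")  # roughly 23%
```

Far from the near-certainty the similarity judgement suggests: the description moves the odds, but the small base rate still dominates.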
The Linda Problem
To highlight this idea even further, consider The Linda Problem:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.
Which is more probable?
1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.
If you chose Option 2 you're not alone.
Over 85% of participants in numerous surveys made the same mistake.
Since Option 2 is a subset of Option 1, it cannot be more probable than Option 1: every feminist bank teller is also a bank teller.
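The conjunction rule behind this is simple arithmetic; the two probabilities in this sketch are made up for illustration:

```python
# For any events A and B, P(A and B) = P(A) * P(B | A) <= P(A),
# because P(B | A) can be at most 1. Numbers below are illustrative.
p_teller = 0.05                  # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.95   # even if she is almost surely a feminist...
p_both = p_teller * p_feminist_given_teller
print(p_both <= p_teller)        # -> True, as it must be
```

No matter how confident we are about the feminist part, the conjunction can only shrink the probability, never raise it.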
So what's going on here? Why does the mind instinctively choose Option 2?
The reason may shock you.
It's because the way the question is presented, we disregard the prompt asking which option is more probable and instead answer a different question — which one is more plausible. Option 2 matches the stereotype we have in our minds based on the description of Linda.
This is called the representativeness heuristic.
This heuristic, or rule-of-thumb, helps us look for coherent stories that match patterns we're familiar with.
But as the Linda Problem shows, it can also lead us to make logical errors during periods of uncertainty.
Knowing that our judgement is biased helps us make better decisions and helps us understand why friends, family, and colleagues behave the way they do.
The Anchoring Effect
Danny and Amos uncovered another phenomenon we've all probably heard about by now: the power of anchoring.
Using a rigged wheel of fortune that only ever landed on 10 or 65, they asked groups of students to spin the wheel, write down the result, and then answer two completely unrelated questions:
Is the percentage of African nations among UN members larger or smaller than the number you just wrote?
What is your best guess of the percentage of African nations in the UN?
Rather than ignoring the obviously meaningless spin, participants anchored on it: the average estimates for those who saw 10 and 65 were 25% and 45%, respectively.
We see this happen all the time.
A house listed for sale at price X will influence how much we think we should pay, even if we are determined to resist its influence.
A sign at a supermarket offering a limited-time reduced price on soup cans with the added language "limit of 12 per person" leads, on average, to higher purchases than the same sign reading "no limit per person".
These effects are all around us, and yet many of us are still blind to them.
Daniel Kahneman and Amos Tversky had a complicated relationship, fraught with jealousy, envy, and a constant push-pull between two very different people who became absorbed by their collaboration. Their wives compared the pair’s relationship to a marriage that became toxic toward its end.
But during the time they collaborated, they uncovered so much of how the mind works that their research still guides and informs new researchers today.
This post got a little long so I've decided to split it into 2 parts. In writing it I realized there were so many wonderful ideas to share that trying to fit it all into a single post would be a heroic feat I'm not sure your inbox would appreciate.
We'll be back next week to go over some of the other biases, heuristics, and fallacies that plague even the sharpest minds.
Till then,
Tom
P.S. Whenever you're ready, there are 3 ways I can help you:
1. If you save a lot of bookmarks on Twitter (like me), try dewey, the easiest way to organize Twitter bookmarks (I'm one of the makers).
2. If you're looking for 1:1 coaching on audience or business growth, book a slot here.
3. I'm putting together a podcast where I'll be able to dive deeper into the lives and stories of outlier individuals. If that interests you, sign up here to get updates when it goes live.
This week’s newsletter is brought to you by Beehiiv.
Beehiiv is the only email service provider with a built-in referral program. Explode the growth of your newsletter by using the most powerful persuasion tool out there, word of mouth.
Start growing your newsletter faster here.
If you enjoyed today's newsletter, consider sharing it with friends and family. If they don’t hate you, they might thank you.
If this email was forwarded to you, consider subscribing to receive future issues.