In recent years I have read many non-fiction books, but one truly stands out: Thinking, Fast and Slow by Daniel Kahneman.

In the book the Nobel Prize-winning author delves into the intersection of economics and psychology, debunking the concept of the homo economicus and revealing how human decisions are shaped by psychological biases and distortions.

Photo by Monica Sauro on Unsplash

The book is brimming with insights, and a thorough exploration would take more than twenty minutes to read. To keep each post concise and engaging, I’ve opted to divide the material into three parts.

In this initial segment, my goal is to introduce the model Daniel Kahneman employs to describe our thinking process and provide a condensed summary of the cognitive biases and heuristics outlined in “Thinking, Fast and Slow.”

The Two Systems: Fast & Slow

Daniel Kahneman distinguishes between System 1 and System 2 - two modes of thinking that are used in different circumstances.

  • System 1 is the fast system (intuition, automated actions, …).
  • System 2 is the slow system (causal thinking, method applications, arithmetic, …).

We need both systems. While System 1 is fast, it is not very accurate and leads us to make cognitive errors. Daniel Kahneman mentions one example to showcase this effect. If you want to try it yourself, try not to think too much and just answer intuitively:

A tennis racket and a ball together cost 110. The tennis racket costs 100 more than the ball. How much does the ball cost?

If you are like most people, your intuition suggested that the ball costs 10. The correct answer, however, is 5. If the ball cost 10, the tennis racket (costing 100 more) would cost 110, and together they would cost 120 - which contradicts the first sentence. With a ball at 5 and a racket at 105, the total is indeed 110. To discover this flaw, you have to think actively - this is you using your System 2.
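If you want to let code do the System 2 work, here is a minimal sketch (plain Python, using the same unit-less prices as in the example) that brute-forces the answer:

```python
# Brute-force check of the racket-and-ball puzzle.
for ball in range(0, 111):
    racket = ball + 100        # the racket costs 100 more than the ball
    if ball + racket == 110:   # together they cost 110
        print(f"ball = {ball}, racket = {racket}")  # prints: ball = 5, racket = 105
```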

Thinking actively with System 2 requires attention. In studies, the activation of System 2 can actually be measured: one sign of mental effort (activation of System 2) is that our pupils dilate.

Attention also requires energy.

Similar to physical exercise, cognitive effort and attention cause fatigue. Exerting self-control (enduring physical pain, consciously suppressing thoughts, etc.) also requires energy; the scientific term for exhausting this capacity is “ego depletion”. Studies show that we have a kind of energy budget for “pulling ourselves together”. In one study, participants had to eat healthy vegetables while there were sweets on the same table (which they weren’t allowed to eat). These participants performed worse in subsequent tests of cognitive ability: they had depleted their mental energy by suppressing thoughts of eating the sweets.

Because System 2 requires energy to function, it is generally lazy: you must actively engage it. System 1 is different. It works continuously and generates suggestions all the time, whether you want it to or not. You can, however, actively decide whether to pick up those suggestions with System 2.

A state of effortless attention, in which one nevertheless solves System 2 problems without much energy depletion, is referred to as “flow” or “mind like water”.

11 Cognitive Biases and Heuristics

1. Cognitive Ease

Cognitive ease describes effortless thinking. It is performed by System 1, and since System 2 is generally lazy, we often indulge in cognitive ease.

Cognitive ease occurs when situations, people or facts seem familiar to us.

2. Jumping to Conclusions

System 1 builds on the knowledge that is available at the moment - this is the WYSIATI rule (What you see is all there is). System 1 therefore uses all the knowledge that is immediately available (without a detailed search in the “archive”).

This creates cognitive ease and generates stories that are coherent.

Our System 2 very often trusts these stories (because they sound plausible and in many cases reflect reality well) and does not invest the energy in questioning the assumptions critically. This is what happened in the tennis racket and ball example above.

3. The Law of Small Numbers

The problem with very small samples in statistical tests is that extreme results are more likely to occur. This is a statistical fact, well described in books such as Calling Bullshit. Yet even experienced scientists often choose sample sizes that are too small.
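A quick simulation makes this tangible. The sketch below (illustrative Python; the sample sizes and the 70% cut-off for “extreme” are arbitrary choices of mine) counts how often a fair coin produces an extreme share of heads:

```python
import random

def extreme_share(sample_size, trials=10_000, cutoff=0.7):
    """Fraction of samples whose heads rate is at least `cutoff` or at most 1 - `cutoff`."""
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        rate = heads / sample_size
        if rate >= cutoff or rate <= 1 - cutoff:
            extreme += 1
    return extreme / trials

for n in (5, 20, 100):
    print(n, extreme_share(n))  # extreme results become much rarer as the sample grows
```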

How does this lead to cognitive biases?

Rather than estimating probabilities, we tend to seek examples and gauge probability based on the ease with which we find instances. If our System 1 encounters a handful of examples, we may consider it sufficient for making an estimated guess of probability. However, we then fall for the law of small numbers. Hence, humans are ineffective intuitive statisticians.

4. The Wonders of Priming

The priming effect describes how words can impact our behavior, often on a subconscious level.

When exposed to specific words linked to particular characteristics, our actions tend to align with those associations.

Kahneman references a study in which students were primed with words associated with old age. The researchers measured the time it took these students to walk from one room to another. Those primed with age-related words took significantly longer than their counterparts in the control group.

This is our subconscious falling for priming.

5. Anchor Effect

The anchor effect, also known as anchoring bias, is a cognitive bias that describes the human tendency to rely too heavily on the first piece of information encountered when making decisions.

It is triggered in both System 1 and System 2.

This initial information, known as the “anchor,” influences subsequent judgments or decisions, even if its value is absurdly high or low. The effect can be substantial, with measured anchoring effects of up to 55%. The anchor effect is observed even when the “primed” number is evidently random.

People frequently modify their decisions based on the initial anchor, yet more often than not, the adjustment tends to be too small. This reluctance to deviate significantly from the initial value often stems from a subconscious desire to avoid offending others by proposing a drastically different figure.

6. Availability Heuristic & Errors in Representativeness

System 1 often replaces difficult questions (How happy am I?) with heuristic questions that we can answer more easily (How happy am I right now?).

Critically, these substitutions occur without us realizing it. This is one of the reasons why we (our System 1) are so bad at statistical questions. Instead of estimating a probability, we look for examples and estimate the probability by assessing how difficult it is for us to find examples. If it is easy to find examples, we think that the probability must be high.

The question of similarity is a much simpler question than the question of probability.

Since our System 2 is lazy in principle and similarity can also be evaluated in System 1, we replace the question of probability with the question of similarity.

The availability of information in our minds also strongly depends on the emotions that accompany it.

If the emotions are extreme, then the availability is also higher (emotional topics remain longer in our memory). With higher availability, our estimated probability of related events increases. This tendency leads us to overestimate the likelihood of extreme yet infrequent events, such as plane crashes and acts of terror.

7. Causes vs. Statistics

The problem is that we infer the general from the particular (individual events). Imagine the following situation, described in Thinking, Fast and Slow.

A cab was involved in a hit-and-run accident at night. Two cab companies, the Green and the Blue, operate in the city. 85% of the cabs in the city are Green and 15% are Blue.

A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80% of the time and failed 20% of the time.

What is the probability that the cab involved in the accident was Blue rather than Green knowing that this witness identified it as Blue?

This example was used in a study. A majority of participants estimated probabilities exceeding 50%, with some offering estimates surpassing 80%.

However, the correct answer, determined through Bayes’ theorem, is much lower than these estimates:

  • The probability that the cab is Blue and the witness identifies it as Blue is 12% (0.12 = 0.15 x 0.80).
  • The probability that the cab is Green and the witness incorrectly identifies it as Blue is 17% (0.17 = 0.85 x 0.20).

If you now combine the two probabilities, you come to the following statistical conclusions:

  • There is a 29% probability (0.29 = 0.12 + 0.17) that the witness identifies the cab as Blue.
  • The probability that a cab identified as Blue was actually Blue is only 41% (0.41 ≈ 0.12 / 0.29).

The pure statistical analysis yields a probability that falls below 50%, contradicting the estimates provided by the participants in the study.
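For readers who prefer code to formulas, the same Bayes calculation can be written as a short sketch (plain Python, using only the numbers given in the problem):

```python
# Bayes' theorem applied to the cab problem.
p_blue, p_green = 0.15, 0.85   # base rates of Blue and Green cabs
p_correct = 0.80               # the witness identifies a colour correctly 80% of the time

p_blue_and_says_blue = p_blue * p_correct          # 0.12: cab is Blue and witness says Blue
p_green_and_says_blue = p_green * (1 - p_correct)  # 0.17: cab is Green but witness says Blue

p_says_blue = p_blue_and_says_blue + p_green_and_says_blue    # 0.29
p_blue_given_says_blue = p_blue_and_says_blue / p_says_blue   # ~0.41

print(round(p_says_blue, 2), round(p_blue_given_says_blue, 2))  # 0.29 0.41
```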

The lesson? You have to distinguish between causal base rates (base rates that also make sense from a causal perspective) and purely statistical, non-causal base rates. Intuition and System 1 find it difficult to deal with the latter.

8. Regression to the Mean

Let’s start with an example of military flight instructors: when Daniel Kahneman tried to teach them that praise works better than criticism, they disagreed.

One officer said that whenever he praised his students, they performed significantly worse afterwards. If, on the other hand, he criticized them, they usually performed better afterwards.

The reason, however, is a statistical effect. If we assume that flight performance is roughly normally distributed, then an exceptionally good flight is likely to be followed by a worse one - regardless of whether the instructor praises the student. The situation is similar with a bad flight: regardless of whether the instructor criticizes the student, an exceptionally bad flight is likely to be followed by a better one.

The flight instructors simply overestimate the effect of their judgement on the students’ performance.
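The effect is easy to reproduce. In the sketch below (illustrative Python; flight quality is modelled as independent draws from a normal distribution, and praise or criticism is not modelled at all), an unusually good flight tends to be followed by a worse one, and an unusually bad flight by a better one:

```python
import random

random.seed(0)
flights = [random.gauss(0, 1) for _ in range(100_000)]  # flight quality, mean 0

# Performance on the flight right after an unusually good / unusually bad one:
after_good = [flights[i + 1] for i in range(len(flights) - 1) if flights[i] > 1.0]
after_bad  = [flights[i + 1] for i in range(len(flights) - 1) if flights[i] < -1.0]

print(sum(after_good) / len(after_good))  # close to 0, i.e. worse than the good flight before
print(sum(after_bad) / len(after_bad))    # close to 0, i.e. better than the bad flight before
```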

This applies to other areas as well: humans are very good at identifying patterns and causal explanations for patterns. We just underestimate the role of chance.

To make accurate estimations, it’s essential to consider the concept of regression to the mean and integrate statistical base rates in our decision-making.

9. Hindsight Effect & Result Effect

The “Hindsight Effect” describes how limited our ability to remember past states of knowledge or attitudes is. As a result, we often misjudge past decisions in retrospect (we simply cannot imagine the state of knowledge, or our attitude, at the time the decision was made).

The “Result Effect” is of a similar nature - we greatly overestimate the influence of factors that, from our point of view, look causal, and in retrospect it seems partly obvious why things turned out the way they did (e.g., that the economic crisis of 2007/2008 was bound to happen).

10. Planning Fallacy

People are overly optimistic about their own plans - the outside view is sometimes deliberately ignored.

We are all prone to this fallacy. Even Daniel Kahneman fell for it.

Daniel Kahneman and a group of researchers embarked on creating a curriculum. Early in the project, they asked an expert to evaluate their progress and to estimate the time needed to complete the curriculum. Although the team felt it was making good progress and believed the project would be finished quickly, the expert projected a significantly longer timeframe of around 7 years in total. The team believed it could do much better and carried on. Reality unfolded differently: the curriculum ultimately took 8 years to complete. This discrepancy illustrates the planning fallacy that stems from an internal perspective.

11. The Endowment Effect

The endowment effect is a cognitive bias that describes the phenomenon where people tend to ascribe higher value to the things they own merely because they own them. In other words, individuals often perceive the items in their possession as more valuable than equivalent items they do not own.

This bias can influence various aspects of decision-making, including buying, selling, trading, or even simply evaluating the worth of an object.

The endowment effect suggests that ownership alone can create a subjective increase in the perceived value of an item, leading individuals to overvalue their possessions compared to what they might be willing to pay for the same items if they did not own them.

This phenomenon is closely tied to Prospect Theory, a concept I’ll explore in depth in my upcoming article, scheduled for this Sunday.


Thank you for reading!

This was the first part of my in-depth series exploring “Thinking, Fast and Slow” by Daniel Kahneman. Check out the entire series here: