“Debiasing” by Richard Larrick (2004)

September 2012


Blackwell Handbook of Judgment and Decision Making

This article contains my summary notes for “Debiasing” by Richard Larrick (2004), a chapter from the Blackwell Handbook of Judgment and Decision Making. ((Larrick, R. P. (2004). Debiasing. In D. J. Koehler & N. Harvey (Eds.), Blackwell Handbook of Judgment and Decision Making (pp. 316–337). Blackwell Publishing.))

What You Need to Know (Summary of My Notes)

Human thinking is flawed: there is a gap between how well we think and how well we want to think. How do we bridge this gap? One option is through the act of debiasing.

The very “shape” of our brains causes us to be biased, that is, to make systematic thinking mistakes. A central and neglected question runs through the current literature on biases and debiasing: “How do you get people to adopt better decision strategies?” Debiasing is the art of reducing biases in human thinking: finding useful bias-reducing techniques and getting people to actually use them. Larrick groups these debiasing techniques into three categories: motivational, cognitive, and technological strategies.

What has been tried, and how well did it work? Note that the focus here is on personal strategies that we as individuals can implement ourselves; not strategies that, for example, rely on external manipulations of our environment.

Motivational strategies: Incentives aren’t very useful; they often create a “lost pilot” effect: “I don’t know where I’m going, but I’m making good time!” That is, incentives make you want to work hard, but this is useless without having the right tools or direction. Social accountability is a more promising form of motivation, but has its own problems. If we know what others expect us to decide, we may be biased towards that outcome. If we know what process others expect us to use, we may be biased towards that process.

Cognitive strategies: Teaching people to “consider the opposite” has been very successful at improving decision making. Training in specific thinking rules is also promising, though needs further study.

Technological strategies: Group decision making can work, but you really have to do it right. Decision-assisting software might help, but it needs more research.

There are a lot of challenges to debiasing. We naturally resist change: “You mean I’ve been doing it wrong for the last 50 years?” The benefits of debiasing are often unclear and difficult to measure, and the techniques themselves can be challenging to implement. The techniques that work best are usually simple, easy, domain-specific (applicable to very specific problems), and bottom-up (“grown” in the individual, not mere instructions we are commanded to obey).

The research is promising, but little is known with confidence in the scientific literature. There are many things yet to do, many areas to explore, and many debiasing techniques to study further.

My Notes

There is a big gap between how we actually think and how we would ideally think. ((A “normative-descriptive” gap.))

The obvious question then follows: How can we close this gap? What can we do to improve our thinking? 

Scientific research has historically focused on identifying how we go wrong, but not on equipping us to do better. Better decision-making strategies exist. What is known about them, and how do we get people to use them? ((“The identification and dissemination of better strategies is known as prescriptive decision making.” Larrick (2004) p. 317.))

“Debiasing” reviews the scientific literature on techniques and methods that have been studied in an attempt to reduce biases in individual decision making. The focus is on research involving personal changes, not changes to the environment or using one bias to “counteract” another.

But first, where do biases come from? “Debiasing” gives a quick crash course on the inner division of the mind: our System 1 (automatic, fast, intuitive thinking) and our System 2 (conscious, deliberative thinking), and how flaws in the different parts can lead to biases. Biases can generally be traced to one (or more) of three kinds of fault. ((The exact breakdown—psychophysically-based error (System 1), association-based error (System 1), and strategy-based error (System 2)—is not essential to the overall discussion, and I found it quite confusing. I much prefer Keith Stanovich’s taxonomy of biases.))

Approaches and Methods

There are three general approaches to debiasing individuals: motivational, cognitive, and technological strategies. All of these strategies share a common implication: debiasing requires intervention.

Motivational strategies use incentives for good performance, or social accountability, to change thinking behavior. There is little evidence that incentives work: there are no replicated studies in which behavior that was irrational at low stakes became rational when the stakes were raised (and hence the individual had an incentive to think more rationally).

Why don’t incentives work? Often a decision-maker simply lacks the ability to make a good decision, no matter how much they want to make one. As a result, people may get stuck “trying really hard” on difficult problems, forcefully applying bad strategies without ever changing their approach. Larrick dubs this the “lost pilot” effect: “I don’t know where I’m going, but I’m making good time!” ((Larrick (2004) p. 321.)) Incentives can cause people to try harder and put in more time, but without the right tools they’ll “try harder” in the wrong direction. However, incentives can be used to motivate action on boring or repetitive tasks, and to motivate short-term practice of useful skills.

Social accountability, the second motivational strategy, comes from the expectation that we will have to explain our decisions to others, and it produces a strong desire to look consistent to them. Unfortunately, like incentives, it has flaws. Having to justify our decisions to others may make us more susceptible to justification-based biases. We may “give people what they want” and rationalize our decision-making after the fact. We may be overly self-critical if we don’t know what our audience wants. And if we know the process our audience wants us to use, we may use it without considering better alternatives.

Overall, motivational strategies aren’t very promising.

Cognitive strategies are context-specific rules for addressing specific biases. The most popular cognitive strategy is to “consider the opposite,” which has been shown to decrease overconfidence, hindsight bias, and anchoring effects.

“Consider the opposite” works because it directs attention to contrary evidence you wouldn’t otherwise have considered. It’s not without its flaws, however: if you spend too much time considering the opposite, it may backfire when the tenth strained “con” convinces you that you were right all along.

Another option is to train yourself to use specific, better thinking rules in specific situations. This is relatively unstudied, though we can be confident that the best techniques will be simple, effective, learnable in a short period of time, and taught using both abstract and concrete examples. Bayes’ Theorem is a poor candidate for such training, since it is abstract, complex, unfamiliar, and counterintuitive.
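To see why it feels counterintuitive, consider the standard base-rate illustration (my own example with made-up numbers; it is not from the chapter). Suppose a condition affects 1% of people, a test detects it 90% of the time, and it falsely flags 9% of healthy people. Given a positive test, Bayes’ Theorem gives:

P(D | +) = P(+ | D) · P(D) / [ P(+ | D) · P(D) + P(+ | ¬D) · P(¬D) ]
         = (0.9 × 0.01) / (0.9 × 0.01 + 0.09 × 0.99)
         ≈ 0.09

Most people intuit an answer near 90%, not 9%, which is exactly the kind of result that makes the abstract rule hard to learn and trust.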

A third cognitive strategy is to learn to convert probabilities into frequencies; research suggests we reason better about uncertainty when it is framed as frequencies.
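For example, the hypothetical test above, restated as frequencies (again my numbers, not the chapter’s):

Out of 1,000 people, 10 have the condition and 990 do not.
True positives: 0.9 × 10 = 9. False positives: 0.09 × 990 ≈ 89.
So of the roughly 98 people who test positive, only about 9 (≈ 9%) actually have the condition.

The answer is the same as the Bayes’ Theorem calculation, but counting people makes the base rate much harder to ignore.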

A final cognitive strategy is direct training in behavioral decision theory (BDT) and cognitive biases. This has potential, though it is unclear whether mere awareness of biases is sufficient to reduce their occurrence.

Overall, it is debatable whether cognitive strategies can improve decision-making skills, though they do look promising. Note that for cognitive strategies to be useful we must be able to recognize when to use them.

Technological strategies are those that make use of tools external to the decision maker, such as groups of people or decision-making software.

Group decision making has serious flaws. We are easily and unknowingly influenced by the public decisions of others, especially when we are uncertain ourselves. Put a group of people in a room to solve a tough problem and, surprisingly, the harder the problem, the faster they will converge on a (not necessarily good) solution. Despite this, group decision making has potential benefits:

  • Participants can error-check each other.
  • Combining complementary skills can create beneficial “synergies”.
  • Biggest of all, diverse perspectives increase the sample size of possible solutions (though this statistical benefit is fragile; a quick worked example follows this list).
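A back-of-the-envelope way to see both the benefit and its fragility (my numbers, not the chapter’s): if each member independently has a 20% chance of generating a workable solution, a five-person group produces at least one with probability

1 − (1 − 0.2)^5 = 1 − 0.8^5 ≈ 0.67,

roughly 67% versus 20% for a single individual. The calculation assumes independence, though; if members share the same training and blind spots, the effective sample size shrinks back toward one and most of the benefit disappears.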

The best way to get a group to work well is to include a diverse set of skills and views and to have participants constantly “consider the opposite.”

The second technological strategy is to use decision models. This method faces challenges; more tools for verifying the effectiveness of these techniques are needed.

The last technological strategy is to use decision support software. ((Larrick calls these Decision Support Systems (DSS).)) This is a growing area of research that shows promise for debiasing.

Overall, technological strategies are understudied and often difficult to implement.

Using Debiasing Techniques

Debiasing must involve more than merely identifying better thinking strategies; it must also figure out how to get people to use them. Unfortunately, debiasing has serious implementation challenges that we have to keep in mind:

  1. Feedback is often delayed. There is a lot of ambiguity in decision making and thinking. It is often difficult to identify the existence and source of our errors, whether we have improved or not, and whether we made the best decision.
  2. The benefits aren’t always clear. It’s difficult to see the benefits of reducing, for example, overconfidence. The benefits may be delayed, vague, or difficult to measure.
  3. We mix up process and outcome. Lottery winners feel completely justified in buying lottery tickets, yet buying lottery tickets is still irrational. If you make a terrible decision but end up with a good result, you may conclude that your decision process was good, when really you just got lucky.
  4. Our view of ourselves is distorted. When there’s a good outcome, we tend to attribute it to our skills. When there’s a bad outcome, we tend to blame it on the circumstances. This makes it difficult to detect true mistakes in our reasoning.
  5. We resist changing thinking habits. Who wants to admit they’ve been “doing it wrong” their entire life? It’s hard for us to “give up control” of our thinking processes to alien and complex techniques.
  6. Compliance does not equal internalization. Compliance means going along in response to prompting; internalization means having a deep understanding of a technique and a strong intrinsic motivation to use it. Just because a test subject is compliant during a study doesn’t mean they’ve internalized the skill or are at all likely to use it in the future. Just because you make the right decision once doesn’t mean you will the next time.

Given the above, what can we possibly do? The literature shows that techniques are much more likely to be adopted when they fall on a certain side of several dimensions:

  • Simple rather than complex. We forget or ignore what is complex. We remember and use things that are simple.
  • Domain-specific rather than domain-general. Learning to use a rule in a very specific situation is much easier than learning a vague, general rule to use “all the time.”
  • Social rather than individual. Learning in groups is often more effective than learning by yourself.
  • Bottom-up rather than top-down. This is especially noticeable in organizations that try (unsuccessfully) to “force” decision-making methods and techniques from managers and supervisors down to employees.

There are tradeoffs with these dimensions. For example, making a technique simple enough to be used may result in an oversimplified, imprecise, and inflexible technique that is hard to apply correctly to various situations. Despite this, a simple technique is likely much better than a complex technique that is misunderstood, distorted, and eventually abandoned.

How do the previously mentioned techniques fit into these dimensions? (Remember, all of these techniques are under-studied!)

Technological strategies—e.g. statistical models and decision analysis software—are the worst. They are usually complex, the benefits are difficult to demonstrate, and they are often forced “top-down” by managers. In fact, these characteristics are common in most of the academic literature on debiasing methods!

In the literature, internal/individual strategies are often domain-general and top-down: you get told what to do and are given general, vague examples of why it will be useful. Having a broad range of specific examples is key to internalizing cognitive strategies and moving knowledge from your System 2 mind into your automatic System 1 mind. ((I.e. moving declarative knowledge (System 2) to tacit knowledge (System 1).))

Socially administered practices are good for getting people started (they guide us to think more deeply than we normally would on our own), but they are likely not sufficient by themselves.

From Heath et al. (1998):

The most successful repairs will be simple, domain-specific, socially administered, and evolved from bottom-up rather than developed from top-down. ((Heath, C., Larrick, R. P., & Klayman, J. (1998) Cognitive repairs: How organizations compensate for the shortcomings of individual learners, Research in Organizational Behavior, 20, 1–37.))

Closing

In closing, demonstrating the existence of biases is much more common than exploring debiasing techniques. “Debiasing” by Larrick (2004) covers a range of debiasing techniques that have been studied, but much more research is needed. Future directions could focus on the influence of self-esteem, motivation, and affect (emotion), as well as the use of intuitive strategies.

Probably most important: There is a central and neglected question in current biases and debiasing literature: “How do you get people to adopt better decision strategies?”

///