One bias to rule them all?
Maybe we are not quite so riddled with dozens and dozens of biases as we have been led to believe. Could just one bias, and six beliefs, explain most of our behaviour?
[Prologue: When the paper that is the subject of this essay was first published, in March 2023, I expected it to cause a significant stir in behavioural science circles. To my surprise, that did not happen, and in October of last year I wrote the present text, intending to pitch it for publication rather than publish it as a blogpost. For various reasons, that never happened and it ended up gathering dust. In the spring of this year, Steve Stewart-Williams wrote a piece on the same paper, even with the same title as mine (minus the question mark). This motivated me to publish my own take, though I deliberately did not read his piece, lest I be tempted into rewriting my original text. More delay happened, but eventually – encouraged by Steve re-upping his own essay – here it is: my original, October 2023 post. Enjoy!]
When I recently looked at Wikipedia’s list of cognitive biases, I lost count at something like 240. The previous time I checked, about four years ago, I remember there were around 160. That is a compound growth rate of around 10% per annum. Of course, some of those biases are very specific cases of broader ones, but even accounting for this, the claim that the number of defined biases is becoming unmanageable has merit. There have been valiant attempts to cluster them in a structured framework (most notably the Cognitive Bias Codex by Buster Benson and John Manoogian III, which organizes them in four categories). But even this falls short of providing an underlying mechanism to understand how and why we exhibit these countless biases. It is as if we are trying to learn to understand and use a language, but all we have is a dictionary explaining the different words, and no grammar with which to make sentences.
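(Taking my two rough counts of 160 and 240 at face value – they are recollections, not exact figures – the arithmetic behind that growth figure is simply $\left(\tfrac{240}{160}\right)^{1/4} \approx 1.107$, i.e. roughly 10.7% per year.)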
The abundance of biases is not just an impediment to understanding human reasoning and behaviour. It also sits uncomfortably with the principles of evolutionary psychology: it is hard to see how and why such a plethora of biases would evolve separately and independently, without a common, underlying mechanism. A more parsimonious account would be more effective in analysing and explaining behaviour, and would be more in line with evolutionary models, and hence carry more credibility.
Unfortunately, most research into biases perpetuates the fragmentation, zooming in on a particular, and often very specific, phenomenon, with little attention to similarities with other biases. Identifying a new bias is generally a more appealing prospect for researchers than searching for a coherent, parsimonious model that explains and connects our biases.
However, a recent paper by two German psychologists, Aileen Oeberst and Roland Imhoff, may well put us out of our misery. Their proposition is tantalizingly simple: biases are the result of certain prior beliefs and the processing of information in a way that is consistent with those beliefs.
This belief-consistent processing manifests itself broadly as a generic confirmation bias. We are less concerned with verifying the accuracy of our existing beliefs, and more with maintaining and strengthening them – regardless of what those beliefs are. That, however, still leaves the field open for endless different beliefs, each one corresponding with a particular bias. But no: they argue that a small set of beliefs is enough to support many biases. To demonstrate the power of their model, they set out six beliefs:
My experience is a reasonable reference
I make correct assessments of the world
I am good
My group is a reasonable reference
My group (members) is (are) good
People’s attributes (not context) shape outcomes
They link each of these beliefs with several cognitive biases from the literature, seventeen in total.
For example, they argue that the belief that our own experiences form a reasonable reference point from which to generalize other people’s behaviour explains the spotlight effect (overestimating the extent to which we, or aspects of our person, are noticed and judged by others, based on our own self-conscious awareness), the false consensus effect (overestimating the extent to which others share our views and opinions), and social projection bias (the tendency to judge others as similar to ourselves). If we do indeed believe that what we experience is an accurate reflection of what anyone else would experience, then the fact that we realize that we forgot to polish our shoes, that we draw one particular conclusion from a discussion, or that we like our steak medium rare requires only the most insignificant of hints in others’ behaviour to confirm that others are judging us, agree with us, or share our preference.
Another example is the default belief that our own judgements are correct (unlike those of others). This would explain hostile media bias (which describes how partisans tend to see media reports that do not unequivocally support and share their group’s views or mission as biased against their side), and the bias blind spot (we easily notice the biases others fall prey to, while believing that we are not prone to them ourselves). If – as we believe – our judgements are correct, they can obviously not be biased, and when we find that others disagree with us, the only explanation compatible with our belief is that they are biased. Even the weakest criticism in the media will thus be seen as a manifestation of ‘anti-us’ bias.
Might things actually be simpler still than this model? Maybe biases can arise without the existence of any prior beliefs. The authors accept that “innocent” causes – drawing conclusions from a small, skewed sample, or the incorrect inference of causality between variables that often co-occur – could, in principle, leave perfectly open-minded people with biased judgements. But they point out that beliefs are indispensable for human cognition and almost impossible to avoid – to the contrary, we are “extremely ready to form beliefs about the world”. The beliefs are there, and almost always form the basis for a bias. The possibility of bias without prior belief does not detract from the parsimony of the model.
A more significant challenge to the model is the opposite: it is too simple, and ignores the role of motivated cognition. Someone might be motivated to express their superiority, and thus exhibit the bias blind spot. However, Oeberst and Imhoff argue, while this motive may reinforce the bias, it relies on the belief, and the belief on its own is sufficient to explain it. Furthermore, we may well be motivated to make correct judgements of the world, but we need no such motive to arrive at the belief that we do. The simple fact that we do make numerous correct judgements every day makes this belief, if not necessarily accurate, then certainly understandable.
Similarly, motivated cognition may play a role in the processing of information: our motive may be to defend, preserve or strengthen our prior beliefs, especially if we want them to be true. But here too, the authors say, motivation is not a necessary precondition for belief-consistent information processing: it takes place whether or not we are motivated to confirm our beliefs, and even when we are motivated to be, or appear, unbiased. The present model is therefore more parsimonious than one that requires motivation.
Oeberst and Imhoff also consider the use of conscious deliberation, and of Bayesian belief updating, as strategies to combat bias. Deliberately considering the possibility that a fundamental belief – say, “my experience is a reasonable reference” – might be wrong, and seeking out evidence that contradicts it, might reduce the corresponding biases (the spotlight effect and the false consensus effect). Insofar as this is true, it supports the core of the model: if a fundamental underlying belief leads to a bias, weakening that belief will reduce it.
In Bayesian belief updating, firmly held beliefs require strong, or a lot of, contradictory evidence before they are revised. This looks much like the proposed model. However, Bayesian reasoning assumes (a) rational, unbiased processing of (b) information that is in itself unbiased. In stark contrast, in this model both the acquisition of information (we select supporting evidence and dismiss contradictory information) and its subsequent belief-consistent processing are biased by the fundamental belief in question. We do not really engage in Bayesian belief updating unless the belief itself is first questioned, and so the present model is a better fit with reality.
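As a minimal sketch of that first point – my own illustration, not something from the paper – consider plain Bayesian updating with unbiased processing: a firmly held belief only gives way after a run of contradictory observations. The prior of 0.95 and the likelihoods below are arbitrary illustrative choices.

```python
# A minimal sketch (my own illustration, not from Oeberst and Imhoff): under plain
# Bayes' rule with unbiased processing, a firmly held belief only erodes after a
# lot of contradictory evidence. The prior and likelihoods are arbitrary choices.

def update(prior: float, p_obs_if_true: float, p_obs_if_false: float) -> float:
    """Bayes' rule: posterior probability of the belief after one observation."""
    numerator = p_obs_if_true * prior
    return numerator / (numerator + p_obs_if_false * (1 - prior))

belief = 0.95  # a firmly held belief, e.g. "my experience is a reasonable reference"
for n in range(1, 6):
    # Each observation is mildly contradictory: twice as likely if the belief is false.
    belief = update(belief, p_obs_if_true=0.3, p_obs_if_false=0.6)
    print(f"after {n} contradictory observations: {belief:.2f}")
```

Under the authors’ model, of course, the contradictory observations rarely make it this far: we tend not to select them in the first place, and discount them when we do – which is precisely why real cognition falls short of the Bayesian ideal.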
Even biases less obviously linked to fundamental beliefs fit this model well. Hindsight bias is one example – both when we consider others and when it concerns ourselves. In the first case, we believe that we would have made the correct choice, because we make correct assessments of the world. In the second case, we can (and do) use our current experience of an event as a reference for the moment the decision that led to it was taken, and conclude that we should have known – because we are convinced that we have always held our current beliefs (this steadfastness is itself arguably a fundamental belief).
This proposed mechanism does not just explain how biases are manifestations of our tendency to confirm fundamental beliefs. The confirmation of one belief can itself confirm another one. For example, if you believe that your group is good, a biased conclusion that confirms this belief will also confirm the belief that you make correct assessments of the world. The better-than-average effect (also known as the Lake Wobegon effect) and outcome bias would act similarly.
It will be interesting to see whether this model gains traction among researchers and practitioners. My assessment is that it has considerable potential, but then again, I make correct assessments of the world, of course.