Why the familiar feels safer (even when it isn't)
Our double standard in judging the risks of the novel against those of the familiar can be an obstacle to genuine progress

A few years ago, a young man was killed by a car as he crossed my street. Not long after, the town responded by installing a pedestrian-controlled crossing – traffic safety often entails adding more clutter to our imperfect traffic system. What if, instead, the root cause were removed? Autonomous vehicles (AVs) could eliminate the human fallibility – inattentiveness, recklessness, even intoxication – that claims so many lives. Yet one death caused by a self-driving car seems to weigh more heavily than thousands caused by human drivers. Isn’t that odd?
More than framing
Our sceptical attitude to autonomous vehicles fits a pattern of wariness of the novel that shows up everywhere. Consider GMO foods: we scrutinize them for the slightest risk, while accepting conventional agriculture's known environmental damage. Or take remote work during the pandemic: companies fretted about possible productivity losses after having ignored, seemingly forever, the well-documented inefficiencies of standard office life, from lengthy commutes to wasteful meetings. In both cases, familiar problems are neglected while the risks of the new command our attention – the familiar poison is less threatening than the unfamiliar cure. This isn't just about food or work – it's about how our brains process uncertainty.

In a tweet, writer Tim Urban suggests that our excessively cautious reaction to self-driving cars mirrors a classic psychological experiment. Amos Tversky and Daniel Kahneman's Asian Disease scenario gave people the choice between two programmes to combat an outbreak threatening 600 lives. When told Programme A “will save 200 lives” while Programme B offers “a 1/3 chance of saving all 600 lives, and a 2/3 chance none are saved”, 72% selected the risk-averse option A. But when the same options were framed as “400 will die” versus “a 1/3 chance nobody dies, and 2/3 chance 600 will die”, 78% opted for Programme B. Even though the two descriptions are statistically identical, simple framing changes everything.
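The equivalence of the two framings is easy to verify with a little arithmetic. The sketch below (my own illustration, not part of the original study) computes the expected number of lives saved under each programme in each framing, using exact fractions to avoid rounding:

```python
from fractions import Fraction

TOTAL_AT_RISK = 600

def expected_saved(outcomes):
    """Expected number of lives saved, given (probability, lives_saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

# "Gain" framing: stated directly as lives saved
programme_a = [(Fraction(1), 200)]                       # 200 saved for certain
programme_b = [(Fraction(1, 3), 600),                    # 1/3 chance all 600 saved
               (Fraction(2, 3), 0)]                      # 2/3 chance none saved

# "Loss" framing: stated as deaths, converted back to lives saved
programme_a_loss = [(Fraction(1), TOTAL_AT_RISK - 400)]  # "400 will die"
programme_b_loss = [(Fraction(1, 3), TOTAL_AT_RISK - 0), # "1/3 chance nobody dies"
                    (Fraction(2, 3), TOTAL_AT_RISK - 600)]  # "2/3 chance 600 die"

print(expected_saved(programme_a))       # → 200
print(expected_saved(programme_b))       # → 200
print(expected_saved(programme_a_loss))  # → 200
print(expected_saved(programme_b_loss))  # → 200
```

All four expected values come out to 200 lives saved: the choices differ only in how the identical outcomes are worded, which is precisely what makes the reversal in preferences so striking.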
But autonomous vehicles present a different puzzle. We are not choosing between two clearly specified options framed in different ways. Instead, we are weighing a familiar evil against an unknown alternative. Our resistance runs deeper than mere framing.
This opposition stems from ancient survival instincts that manifest as cognitive and behavioural tendencies. Our brains are wired to favour the present situation that hasn't killed us yet (status quo bias), to mistrust probabilistic outcomes and prefer certainty (risk aversion), and to spotlight potential threats (negativity bias).
Most tellingly, we habituate to familiar dangers while hyperfocusing on novel ones. This isn't just about risk – it's how our entire sensory system works. We don't notice our clothes touching our skin or the hum of air conditioning, but instantly detect a fly landing on our neck or the sudden silence when the fan stops. Our senses evolved to report changes, not constant levels, because change signals potential threat. The status quo fades into the wings while novelty is centre-stage in our attention.
A simple home experiment demonstrates the deep roots of this mechanism: place one hand in cold water and the other in warm. Within seconds, both hands will have adapted to their temperature. Then put them together in lukewarm water, and they will report contradictory sensations – the cold-adapted hand feels warmth, the warm-adapted hand feels cold – despite experiencing the identical temperature.
Redressing the asymmetry
This hardwired bias helps explain our reaction to AVs. Traffic deaths have become background noise – a regrettable but normal consequence of human nature. Meanwhile, a single AV fatality triggers loud alarm bells, precisely because it's novel. We fixate on rare potential machine errors while accepting thousands of human ones. Our instincts may have served our ancestors well, but in an age of transformative technology, they can prevent us from supporting progress that could save thousands of lives.
Yet, the challenge is not to override these instincts, but to work with them: to build a path to adoption that acknowledges our natural caution while not letting it paralyse us. Thankfully, our powerful brain evolved another trick: it can challenge its own impulsive reactions. When we look at optical illusions, we can understand that straight lines only appear curved. We can learn that the moon isn't actually larger near the horizon. And, if we choose to, we can recognize when our ancestral tendency to subject the new to detailed examination, while downplaying what we have become accustomed to, is distorting how we assess risk.
This self-awareness suggests a practical approach to self-driving cars. Instead of fighting our instinct to pore over what is new, we can harness it. Let's apply that same rigorous attention to the traffic status quo and redress our asymmetric judgment. Count not just the rare AV accident, but every human-caused fatality. Notice not just machine errors, but every time a driver is caught drunk or drugged. Our instinct to focus on the novel can be repurposed to illuminate the flaws in the current system that we have learned to ignore.
Progress doesn't require perfect solutions – just better ones than we have now. And sometimes all we need to do is look at the familiar with the same critical eyes that we tend to reserve for the new.