“Scientists are trained to be cautious. They know that no matter how tempting it is to anoint a pet hypothesis as The Truth, alternative explanations must get a hearing. And they must seriously consider the possibility that their initial hunch is wrong.” – Philip E Tetlock and Dan Gardner
Here’s a little exercise: I’ll give you a sequence of three numbers, and I want you to figure out the rules that govern that sequence: 2, 4, 6.
You’ve probably already figured out something that works.
Now what if I gave you the option to test your solution by providing me with sequences of three numbers, and I’d tell you if they were or were not valid?
Think for a minute about some sequences you’d provide.
Now consider that even if I allowed you unlimited test triplets, fewer than one in ten people would correctly determine the rules that govern the original sequence I provided.
Why? Because of a powerful, destructive, and widespread cognitive bias.
In this post, I’ll expand upon the example above, and I’ll explain why understanding confirmation bias (and knowing how to defend against it) is so damned important in many walks of life.
In my book report on Matthew Syed’s Black Box Thinking, I mentioned that, “p111 has a neat example, the important lesson of which is that to avoid confirmation bias one needs to consciously try to falsify a hypothesis, rather than just test for validity”.
That example has stuck with me, for reasons I’ll explain below.
Also, several other books I’ve read since completing Black Box Thinking have explored – as some important aspect of a broader thesis – the severe ramifications of confirmation bias. It’s just one of those things that pops up everywhere.
Here’s the full example from Black Box Thinking:
Consider the following sequence of numbers: 2, 4, 6. Suppose that you have to discover the underlying pattern in this sequence. Suppose, further, that you are given an opportunity to propose alternative sets of three numbers to explore the possibilities.
Most people playing this game come up with a hypothesis pretty quickly. They guess, for example, that the underlying pattern is ‘even numbers ascending sequentially’. There are other possibilities, of course. The pattern might just be: ‘even numbers’. Or ‘the third number is the sum of the first two’. And so on.
The key question is: how do you establish whether your initial hunch is right? Most people simply try to confirm their hypothesis. So, if they think the pattern is ‘even numbers ascending sequentially’, they will propose ‘10, 12, 14’ and when this is confirmed, they will propose ‘100, 102, 104’. After three such tests most people are pretty certain that they have found the answer.
And yet they may be wrong. If the pattern is actually ‘any ascending numbers’, their guesses will not help them. Had they used a different strategy, on the other hand, attempting to falsify their hypothesis rather than confirm it, they would have discovered this far quicker. If they had, say, proposed 4, 6, 11 (fits the pattern), they would have found that their initial hunch was wrong. If they had followed up with, say, 5, 2, 1 (which doesn’t fit), they would now be getting pretty warm.
As you can see, the key point is that we must be willing to test things in a genuine attempt to violate our beliefs and to falsify our hypotheses.
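The two strategies Syed contrasts can be made concrete in a few lines of code. Here's a minimal sketch, assuming the hidden rule is ‘any ascending numbers’ and the first hunch is ‘even numbers ascending by two’ (the function names and specific test triplets are mine, chosen to mirror the example above):

```python
def hidden_rule(triple):
    """The true rule: any strictly ascending numbers."""
    a, b, c = triple
    return a < b < c

def my_hunch(triple):
    """A tempting first hypothesis: even numbers ascending by two."""
    a, b, c = triple
    return a % 2 == 0 and b == a + 2 and c == b + 2

# A 'confirming' tester only proposes triples that fit the hunch.
confirming_tests = [(10, 12, 14), (100, 102, 104), (20, 22, 24)]

# A 'falsifying' tester deliberately proposes triples that violate it.
falsifying_tests = [(4, 6, 11), (5, 2, 1), (1, 3, 9)]

def disagreements(tests):
    """Triples where the hunch and the hidden rule give different answers."""
    return [t for t in tests if my_hunch(t) != hidden_rule(t)]

print(disagreements(confirming_tests))  # [] -- every test 'confirms' the hunch
print(disagreements(falsifying_tests))  # [(4, 6, 11), (1, 3, 9)] -- hunch exposed
```

The confirming tests all pass and all mislead: they sit in the overlap between the hunch and the true rule, so they can never tell the two apart. Only the triples chosen to break the hunch reveal that it was too narrow.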
This example really bugged me when I read it, and has stuck with me since: it’s bugged me because I fear I’d fall prey to it, and it’s stuck with me because it’s so deceptively simple. I’ve always been a fan of brain teasers, and in my day I’ve taken probably more than my fair share of aptitude tests, but those types of tests are always about quickly getting a right answer (often in a multiple choice format), rather than about attempting falsification. So, in a way, my mind – certainly when it comes to numerical patterns – is programmed to deconstruct, solve, and extend, rather than to consider alternative possibilities.
I’d like to think, if presented with the experiment Syed describes, that I’d take the time to test my own hypotheses, but I’m troubled to admit that I expect it’s more likely I’d confidently ‘spot’ the pattern and move on. I’m just not in the habit of falsification.
At least I wouldn’t be alone in my failure – Syed quotes Paul Schoemaker, research director of the Mack Institute for Innovation Management at the Wharton School of the University of Pennsylvania: “College students presented with this experiment were allowed to test as many sets of three numbers as they wished. Fewer than ten per cent discovered the pattern.”
Why does this happen?
As Philip E Tetlock and Dan Gardner explain in Superforecasting (p38-39): “Our natural inclination is to grab on to the first plausible explanation and happily gather supportive evidence without checking its reliability. That is what psychologists call confirmation bias. We rarely seek out evidence that undercuts our first explanation, and when that evidence is shoved under our noses we become motivated skeptics – finding reasons, however tenuous, to belittle or throw it out entirely.”
That last sentence reminded me of a few sections of Philip Mudd’s The HEAD Game, specifically parts where he addresses how to properly and efficiently incorporate anomalies into complex decision analysis and how to hunt for assumptions (for instance, by using the charity proposition).
And here we start to see the potential breadth of negative impact that results from confirmation bias:
- In Black Box Thinking, Syed explores subjects as important and varied as criminal justice, the healthcare system, and aviation – hardly throw-away subjects
- In The Lean Startup, Eric Ries champions a build-measure-learn approach to startups in which making and testing hypotheses are a cornerstone of successful innovation
- In Superforecasting, in addition to quickly recapping the long and sordid history of medicine, Tetlock and Gardner point out that most prognosticators and pundits have dismal track records, in terms of accuracy, with confirmation bias playing a large role in the public’s acceptance of (or, more likely, ignorance of) this unfortunate reality; they go on to show how confirmation bias has played a large role in justifying military intervention, when alternative theories were never even considered despite suggestive evidence
- We also see countless other examples where folks dismiss evidence that runs counter to their ideologies or beliefs, or – in the case of conspiracy theorists – where folks use the existence of evidence that runs counter to their ideas as proof that their ideas are somehow valid (see this BBC article on chemtrail theorists for a very recent example – hat tip to Cassio for passing that along for a giggle)
With so much on the line, it’s crucial that we learn to be vigilant against falling prey to confirmation bias.
Lucky for us, we have a proven and powerful tool: the scientific method, and its core concept of putting hypotheses to falsification tests.
As Tetlock and Gardner say:
“Scientists are trained to be cautious. They know that no matter how tempting it is to anoint a pet hypothesis as The Truth, alternative explanations must get a hearing. And they must seriously consider the possibility that their initial hunch is wrong. In fact, in science, the best evidence that a hypothesis is true is often an experiment designed to prove the hypothesis is false, but which fails to do so. Scientists must be able to answer the question ‘What would convince me I am wrong?’ If they can’t, it’s a sign they have grown too attached to their beliefs.”
Every day, the world suffers because the general population – and our elected or appointed representatives – are woefully ignorant of what the scientific method is, and what it means. A more widespread understanding and appreciation would truly change the course of humanity…but that’s not going to happen overnight.
However, even on a local scale, we can ‘be the change’, improving our decisions and effectiveness (in whatever we’re doing), by challenging ourselves once in a while – or better yet, regularly – to make sure we’re not falling prey to our own confirmation bias.