Overconfidence and the illusion of knowledge
From "The Trouble With Elections: Everything We Thought We Knew About Democracy is Wrong," Chapter 9.10
Another psychological tendency that can harm good decision making is known as the illusion of knowledge. We all believe we understand and know more about the world around us than we actually do. Sometimes this lack of understanding is hidden by superstition, but often it is simply glossed over. Our brains automatically “fill in gaps” in our knowledge with assumptions derived from past experience.[1] This cognitive bias is interconnected with the illusion of information adequacy, discussed at the end of chapter eight. Politicians are exceptionally likely to believe they know more about policy matters than they do. Such self-confidence is associated with a willingness to run for office, and projecting it is beneficial for winning elections. However, what psychologists call the Dunning–Kruger effect suggests these feelings of self-confidence are suspect. The Dunning–Kruger effect is a cognitive bias in which less competent individuals feel superior because they lack the very skills needed to evaluate their own level of competence; colloquially, they are too stupid to recognize how stupid they are.
Further, this feeling of being well-informed (whether valid or mistaken), typical of elected politicians, may exacerbate the resistance to accepting new facts that contradict current beliefs, found in Nyhan's research mentioned above. Random citizens selected for a policy jury are far less likely to assume they already know the answers, and are thus more open to learning facts from impartial expert presentations. Psychological scientist Philip Fernbach of the Leeds School of Business at the University of Colorado, Boulder and his co-authors were interested in exploring factors that could contribute to political polarization and the illusion of knowledge. They found that having to explain a policy can make people with extreme positions moderate their views, as they come to appreciate that they had only an illusion of understanding the issue. Even if this effect occurs among elected officials as it does among average citizens, the electoral imperative that impels politicians to project self-confidence and certainty on policy matters appears to trigger defensiveness, short-circuiting any beneficial moderation that could come from appreciating their true level of ignorance.
But even very bright individuals, with a solid ability to interpret scientific information and an eagerness to use Kahneman's System 2 rational analysis, often fail to use these skills appropriately when steeped in a competitive partisan (tribal) environment. Dan Kahan, a professor of psychology and law at Yale University, summarized the research in a Scientific American article entitled “Why Smart People Are Vulnerable to Putting Tribe before Truth.” He writes that in various studies
“proficient reasoners are revealed to be using their analytical skills to ferret out evidence that supports their group’s position, while rationalizing dismissal of such evidence when it undermines their side’s beliefs.”
Kahan argues that, at least for an everyday US citizen,
“given what positions on climate change [for example] have now come to signify about one’s group allegiances, adopting the “wrong” position in interactions with her peers could rupture bonds on which she depends heavily for emotional and material well-being. Under these pathological conditions, she will predictably use her reasoning not to discern the truth but to form and persist in beliefs characteristic of her group, a tendency known as ‘identity-protective cognition.’”
Kahan's research suggests, however, that a particular scientific disposition can alleviate polarization: scientific curiosity, an enjoyable feeling of awe at gaining a new perspective and understanding. Subjects who ranked high on scientific curiosity were more able to amend their prior opinions and beliefs.
Another psychological trait that researchers have found beneficial to absorbing new information and good decision-making is what Mark Leary, a professor of psychology and neuroscience at Duke University, calls intellectual humility: an awareness that one's beliefs may be wrong. Leary's studies have found no significant difference between liberals and conservatives in this regard, but it seems likely that the subset of individuals from across the political spectrum who are willing to run for office tilts towards intellectual arrogance rather than humility. Leary noted:
“If you think about what's been wrong in Washington for a long time, it's a whole lot of people who are very intellectually arrogant about the positions they have, on both sides of the aisle."
Recognized good decision-making procedures, which acknowledge uncertainty both about the world and about the impact of policy proposals, are taboo for those who fight to win elections.
It would be possible, for example, to use the scientific method, with treatment and control groups, to determine which of several policies was in fact more effective at tackling some societal problem. The policies tested could deal with the impact of various prison programs on recidivism, or of education schemes on competency at graduation, and so on. We would need to randomly assign some prisoners or school children to experimental groups and some to a control group. At the end of the experiment we would have useful information about which policy was actually better for achieving an agreed-upon goal. However, politicians seeking election are rewarded for asserting certainty about policy proposals without any scientific evaluation, often on ideological grounds.
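To make the logic concrete, here is a minimal sketch in Python of such a randomized trial. Everything in it (the function name run_policy_trial, the recidivism rates, the sample size) is hypothetical, chosen only to illustrate why random assignment supports causal conclusions:

```python
import random
import statistics

def run_policy_trial(subjects, outcome_new_policy, outcome_status_quo, seed=42):
    """Randomly split subjects into treatment and control groups and
    compare average outcomes. A bare-bones illustration: a real trial
    would also need power calculations and significance testing."""
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)                      # random assignment is the crucial step
    half = len(pool) // 2
    treatment, control = pool[:half], pool[half:]

    treated = [outcome_new_policy(s) for s in treatment]
    untreated = [outcome_status_quo(s) for s in control]

    # Because assignment was random, the groups differ (on average) only in
    # which policy was applied, so the gap in mean outcomes estimates the
    # policy's causal effect rather than pre-existing differences.
    return statistics.mean(treated) - statistics.mean(untreated)

# Hypothetical example: does a prison education program reduce reoffending?
rng = random.Random(7)
effect = run_policy_trial(
    range(10_000),
    outcome_new_policy=lambda s: rng.random() < 0.30,   # 30% reoffend with program
    outcome_status_quo=lambda s: rng.random() < 0.40,   # 40% reoffend without it
)
print(f"Estimated change in recidivism rate: {effect:+.3f}")  # roughly -0.10
```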
Scholars often seek out “natural experiments” in which different populations have been subjected to different policies (such as in different states), but without random assignment of subjects to experimental and control groups, the results need to be reported with a caveat.[2] As all scientists know, “correlation does not prove causation.” Because so many other variables differ between states, these “natural experiments” are tricky to interpret. Naturally, politicians seize on studies that seem to support their predetermined policy preferences, while finding flaws in, and dismissing, studies that contradict them. This is because they are generally more concerned with looking correct in the eyes of the voters than with determining the truth about a policy's effectiveness.
Average citizens, with appropriate technical support and no such distorting electoral imperatives, are far more capable of honestly evaluating policy experiments than politicians are. A recurring problem with elected policy-makers, especially those serving multiple terms, is their unwillingness to acknowledge when a pet policy they adopted has clearly failed. Rotation, along with sortition, brings in people with fresh perspectives. Lacking pride of authorship, such policy-makers are more able to recognize failure in current policies, rather than doubling down as politicians are wont to do.
The reason certain individuals rise to leadership roles in partisan competitive legislatures often has more to do with (frequently problematic) personality traits or connections than with policy ability. Only certain people seek leadership roles, often reflecting inflated self-confidence. Unlike elected chambers with rigorous internal hierarchies, lottery-selected bodies have an inherent equality. Since this equality can erode over time, regular rotation through sortition helps maintain it.
A study by Swiss and German researchers found that, for an assigned task, a team whose leader was chosen for competence in previous performance performed no better overall than one whose leader was chosen at random. However, when the leader was chosen based on confidence, performance was worse than with a randomly chosen leader. An overly confident leader tends to overweight their own opinion and to be less willing to incorporate input from others. These researchers concluded:
“Hence, when designing a procedure for leader selection in a situation in which social learning is important, declared random selection is a viable option and overconfidence should be avoided.”
Studies by University of Pennsylvania researchers found that
"In egalitarian networks, where everyone has equal influence, we find a strong social-learning effect, which improves the quality of everyone's judgements. When people exchange ideas, everyone gets smarter. But this can all go haywire if there are opinion leaders in the group."
An influential opinion leader can hijack the process, leading the entire group off course. While opinion leaders may be knowledgeable on some topics, these researchers found that when the conversation moved away from their area of expertise, they remained just as influential. As a result, they ruined the group's judgment.
"On average, opinion leaders were more likely to lead the group astray than to improve it."
Overconfident people who believe (whether correctly or incorrectly) that they already know a lot about a policy matter at the start of a process are actually less competent at absorbing new information and making optimal decisions. Randomly selected people generally recognize that they must learn about the issue before drawing conclusions, and as a result are better able to absorb information and make better decisions.
Chapter Summation
The distorting cognitive biases discussed in this chapter are standard equipment; all humans may fall victim to them. However, it is possible to ameliorate their effects when forming a deliberative body. First, the selection process should not favor people who are especially prone to these biases, nor generate or exaggerate the biases themselves. In addition to optimizing the selection process, systems can be put in place that are designed to help decision makers avoid various cognitive biases. But elected officials have unusual traits, incentives, and competitive environments that make overcoming these cognitive shortcomings nearly impossible. The electoral imperative to project certainty and “leadership” reinforces the psychological biases.
Some people contend that elected officials are frequently sinister, pursuing goals in their personal or class interests, or serving some other behind-the-scenes powerful elite. The point of this chapter is that even if nearly all elected officials started out as honest, bright, and well-intentioned, the process of competing in elections and then being immersed in a competitive legislative environment inevitably creates a body that, as a whole (not every member), is unfit for public policy-making. The evidence from many hundreds of citizens’ assemblies in recent years suggests that democracy using sortition can do better.
[1] You can do a quick visual demonstration of this automatic filling-in by the brain. Hold your hands next to each other in front of you at arm's length, sticking up your two index fingers. Close your left eye and focus with your right eye on your left index finger. Very slowly move your right finger to the right, while keeping your focus on the left finger. You will be able to see the moving right finger in your peripheral vision. At some point (when the two fingers are perhaps six to eight inches apart) the light from your right finger will fall on the blind spot of your retina, where the optic nerve attaches, and you won't be able to see your fingertip any more; it will reappear if you keep moving the finger to the right. The blind spot is always there when you close one eye, but you don't normally notice it, because your brain fills in that unseen area with information from the area surrounding it.
[2] In some cases the “natural experiment” does indeed achieve random assignment due to unique circumstances.