The Argumentative Theory of Reasoning
From "The Trouble With Elections: Everything We Thought We Knew About Democracy is Wrong," Chapter 11.3
In order to design good deliberative and decision-making processes, it is important to understand cognitive biases and other characteristics of human reasoning. In 2011 Hugo Mercier and Dan Sperber published a paper with a startling but highly influential new theory of how reasoning evolved in humans and how deliberation plays out in the real world. I mentioned their argumentative theory of reasoning earlier in this book, but it deserves deeper explication. Their approach is based on evolutionary psychology. They showed how real-world evidence undercuts the “common sense” notion that reasoning evolved to allow the individual to find the truth or think through the best course of action in a given circumstance. They noted that the well-documented confirmation bias, in which individuals have a powerful tendency to seek out and believe evidence that supports their existing beliefs and to dismiss evidence that would tend to disprove those beliefs, suggests that reasoning did not evolve for that solo task. If it had, confirmation bias wouldn’t exist, and we would each quickly abandon old beliefs as soon as good evidence showed them to be in error. Countless experiments have shown how strongly humans resist doing so. As I pointed out previously, we can notice this tendency in others, but are generally blind to it in ourselves. Confirmation bias works potently against finding the truth or the optimal solution.
In addition, once we have expressed an opinion, we develop a psychological investment in it and feel a prideful compulsion to defend it, even when the weight of mounting evidence is against it. One implication is that group members without prior opinions on a matter may be better suited to assessing the validity of conflicting arguments than politicians or their partisan supporters, who come pre-positioned. This does not mean people never change their minds, but changing a deep belief once adopted is more difficult than we admit to ourselves.
Mercier and Sperber suggest that human reasoning evolved as a social tool for ferreting out flaws in arguments presented by others, and also as a tool for building convincing arguments in favor of our own existing beliefs to present to others (who can do a better job of evaluating them than we can ourselves). In other words, reasoning works best as a social process in which people evaluate other people’s conflicting arguments, rather than merely thinking things through within a single mind. We are much better at detecting factual or logical flaws in others’ arguments than in our own. Indeed, we are terrible at recognizing our own cognitive biases. In this social application, reasoning can help find the truth or the optimal solution to a problem. Decision-making is generally better at the group level than at the individual level.
This understanding of the dynamics of actual human reasoning and cognitive biases requires an update of the philosophy-based theory of deliberative democracy. We should abandon the insistence that interlocutors be willing to fundamentally change their minds in response to arguments superior to their own. That is a fool’s errand. Instead, we need other open-minded observers to listen to and assess the arguments. They are able to exercise “epistemic vigilance,” a forensic function that is essentially the opposite of confirmation bias.
This principle was enshrined in the reconstituted Athenian democracy after 403 BCE, when the authority to adopt new laws was transferred from the mass Assembly (ecclesia) to randomly selected legislative panels (nomothetai). Around ten presenters, an equal number of advocates and opponents of the draft law, were selected to make their case in front of a panel of as many as a thousand. The panel members listened, weighed the arguments, and then voted; they did not engage in active debate among themselves.
This deliberative process is comparable to the scientific method, in which the key is to resist the natural instinct to seek only evidence that confirms a belief and instead to devise tests that could disprove the hypothesis. The scientific method ranks among the greatest inventions in the history of the human race precisely because it counters our natural tendency to seek confirmation of our beliefs. A very informative and entertaining demonstration of this natural proclivity is presented in Derek Muller’s video “The Most Common Cognitive Bias,” which can be found on his Veritasium website or YouTube.1 [Note that a description of this video is here relegated to a footnote below, but could appear in the body text in the book itself. I urge readers and listeners of this Substack post to click on the link to the YouTube video.]
Good deliberative process anticipates various cognitive biases, such as confirmation bias, and uses design features that overcome them. Taiwanese researcher Kevin Chien-Chang Wu argues that with specific, appropriate design details,
“deliberative democracy is one of the best designs that could facilitate good public policy decision making and bring about epistemic good based on Mercier and Sperber’s theory of reasoning.”
This is particularly important at the final yes/no decision-making step.
In the open-ended “deliberative” process of crafting a proposal, having some presenters who are invested in a particular view, and thus do exhibit confirmation bias, can actually benefit the group. Mercier and Landemore, in their article “Reasoning is for arguing: Understanding the successes and failures of deliberation,” suggest that, according to the argumentative theory of reasoning, this ensures that different sides will marshal the strongest possible arguments, which others can then weigh.
As I mentioned earlier in this book, I had an eye-opening experience while serving on the Commerce Committee in the Vermont House of Representatives. Rather than following the usual procedure of taking testimony from one witness at a time, on one occasion we had a panel of opposing lobbyists essentially conduct an organized argument in front of the committee on a bill about which none of us had prior stances. The lobbyists would call each other out, since they knew when an opponent was intentionally cherry-picking data or skirting an important issue, which none of us less-informed committee members could detect. When a question of fact was in contention, the committee could have our non-partisan staff from the Joint Fiscal Office check the research. We learned and comprehended more through that one brief deliberative session than we normally learned about any bill.
However, it is also essential to have presenters who are not under the influence of confirmation bias, to bring forth a wider array of information that neither side’s advocates deemed useful to their cause. This requirement is rarely met in partisan political debate. In electoral politics, the tendency is for one side to over-promise the benefits of a particular policy while the other side warns of its devastating consequences, with neither side mentioning information that is irrelevant to its goal of winning the argument but may be vital to making a wise decision.
My contention is that both active discussion by one group and listening and weighing by a separate group are vital to good public policy decision-making. Both should be employed, each in the appropriate phase of the overall process. The first phase, with active give and take, can aggregate diffuse knowledge and seek out win-win possibilities. But this process doesn’t facilitate forensic evaluation, is prone to groupthink, and can go off track through deference to high-status or highly persuasive individuals who are locked into confirmation bias. Once a proposal is crafted, a separate jury needs to listen to and weigh the arguments before voting on final adoption.
1. In the video, Derek Muller tells a variety of people that he has a rule in mind for sequences of numbers, and gives the example “2, 4, 8.” Their task is to figure out what his rule is by proposing sequences of their own. He will tell them whether each proposed sequence fits his rule or not, and they can then guess what the rule is. As you might expect, people propose sequences like “3, 6, 12” or “16, 32, 64.” In both cases he tells them that their sequence meets his rule, and they then guess that his rule is that the numbers double each time. He tells them, “No, that is not my rule.” He then reminds them that they can keep proposing any sequence they wish, and he will tell them whether it fits his rule.
It is at this point that confirmation bias becomes overwhelming. Nearly everybody proposes another sequence of doubling numbers, which fits their belief about the rule. Again, he says that the sequence meets his rule. They again guess something like the rule being that each number is multiplied by 2. Again, he says, “No, that is not my rule.”
The scientific method would dictate that they instead propose a sequence that violates their hypothesis in order to test it. But because of confirmation bias, this is not a natural thing to do. With prompting by Muller, somebody eventually proposes 7, 8, 9 and is shocked to learn that this sequence also follows his rule. Eventually he reveals that his rule is simply that the numbers must be in ascending order, with no regard to doubling. Because his first example happened to involve doubling, everybody leaps to a belief (a hypothesis) and then persists in giving sequences that fit that belief, rather than doing the logical scientific thing of testing it by seeing whether they can disprove (falsify, in scientific jargon) their hypothesis.
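For readers who like to see the logic spelled out, the following is a minimal sketch in Python, purely my own illustration rather than anything that appears in the video: it encodes Muller’s actual rule (ascending numbers) alongside the doubling hypothesis most people adopt, and shows why confirming tests cannot distinguish the two while a single falsifying test can.

```python
# Illustrative sketch only (not from the video or the book).
# hidden_rule is Muller's stated rule; doubling_hypothesis is the guess
# most participants jump to after seeing the example 2, 4, 8.

def hidden_rule(seq):
    """Muller's actual rule: the numbers simply have to be in ascending order."""
    return all(a < b for a, b in zip(seq, seq[1:]))

def doubling_hypothesis(seq):
    """The guess most people form: each number is double the one before it."""
    return all(b == 2 * a for a, b in zip(seq, seq[1:]))

# Confirmation-seeking tests: sequences chosen because they FIT the doubling guess.
# Both rules answer "yes," so these trials cannot tell the two rules apart.
for seq in [(3, 6, 12), (16, 32, 64)]:
    print(seq, "fits rule:", hidden_rule(seq),
          "| fits doubling guess:", doubling_hypothesis(seq))

# A falsifying test: a sequence chosen to VIOLATE the doubling guess.
# The hidden rule still answers "yes," which immediately disproves the doubling hypothesis.
seq = (7, 8, 9)
print(seq, "fits rule:", hidden_rule(seq),
      "| fits doubling guess:", doubling_hypothesis(seq))
```

The two confirming sequences get a “yes” from both rules, so they teach the guesser nothing; only the sequence chosen to break the doubling guess reveals which rule is actually in play.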