Confirmation Bias In Our Lives
“The human organism tends to seek, embellish, and emphasize experiences that support rather than challenge already held beliefs” (Mahoney, 1977). Unfortunately, confirmation bias is part of everyone’s cognitive process to some extent. It impedes objective reasoning, influences our decisions and whom we spend our time with, and can exacerbate mental illnesses such as anxiety and depression. Baron (1995) suggests that this phenomenon can be seen in the arguments individuals generate in support of a given side. In his experiment on thinking, Baron discovered that some people believe one-sided thinking is superior to two-sided thinking, thus sustaining biased judgments.
Stanovich & West (2008) improved on Baron’s (1995) study by removing some instructions from the experiment to improve internal validity. In their between-subjects design with 449 participants, they examined the relationship between intelligence and problem-solving ability by testing for myside bias in individuals who scored high on the SAT. No significant correlation was found between SAT scores and the degree of myside bias. They concluded that although rational thinking is often associated with intelligence, intelligent people still displayed myside bias. Stanovich discusses how biases are present in research, and how intelligence testing misses this kind of cognitive process. He goes on to explain that individual differences such as cognitive ability and open-mindedness are poor predictors of the degree of myside bias, implying that everyone demonstrates it to some extent.
Toplak & Stanovich (2003) conducted a similar experiment with 112 undergraduate students, who were instructed to produce arguments both for and against a particular issue. Participants produced more arguments confirming their own position than opposing it. However, myside bias varied across issues, suggesting that it is stronger for issues relevant to participants’ beliefs.
Marks and Fraley (2006) demonstrate the prevalence of confirmation bias by tackling a well-known belief: the sexual double standard. This belief holds that while women are condemned for sexual promiscuity, men are praised for identical behaviors. Marks and Fraley point out that although there is a preponderance of anecdotal evidence for the existence of a double standard, studies that evaluate this phenomenon offer limited empirical support for it. They posit that confirmation bias is responsible for this disparity: it may simply be easier to recall instances of women being derogated for promiscuity, and of men being praised for it. Multiple studies (Harold, 1999; Marks, 2002; Gentry, 1998) have shown that most people, when asked, do believe the double standard exists. Marks and Fraley (2006) conducted their own study to identify confirmation bias in the belief in a double standard. Their results suggest that people’s beliefs lead them to be more receptive to information consistent with those beliefs. They go on to point out that when this occurs, the belief is reinforced, making it more likely that people will persist in this cycle of granting more weight to confirmatory information.
This phenomenon can be observed in a variety of real-world situations and in every person. On social media, for example, confirmation bias interacts with search and recommendation algorithms, limiting exposure to opposing information. It may lead therapists and doctors to misdiagnose patients and perform needless procedures. It is also observed in political discussions, as well as within research itself.
Causes
One explanation of confirmation bias is self-verification theory. Developed by William Swann (1981), self-verification theory posits that individuals gravitate toward others who verify their self-views. More specifically, individuals with positive self-views are motivated to seek out evidence that others share those positive evaluations, while those with negative self-views tend to seek confirming evidence from others, depending on whether they want to improve themselves. Swann & Predmore (1984) assert that self-verification is unfavorable for depressed individuals because of their tendency to gravitate toward romantic partners who can be abusive and counterproductive to the repair of their self-worth. Within the workplace, this negative self-verification cycle can undermine an individual’s assertiveness. Self-verification contributes to confirmation bias in that the individual unconsciously wants to avoid cognitive dissonance, the psychological discomfort felt when holding conflicting beliefs. Festinger (1957) explained how the strong desire to avoid dissonance propagates confirmation bias.
Evans (1989) describes confirmation bias as a preference for positive over negative information. He terms this “positive bias” and asserts that it is a cognitive rather than motivational phenomenon: individuals struggle to seek out disconfirming evidence not because of a lack of willingness, but because of difficulty processing logically negative material. Negative material takes longer to process than positive material. Evans (1989) proposes this as the reason people find it easier to think of logically affirmative evidence than of logically negative, or disconfirming, evidence.
Marks & Fraley (2006) propose two social-cognitive mechanisms that may be responsible. Either humans encode more belief-consistent than belief-inconsistent information, or they encode both equally but can recall the consistent information more easily.
Dhir & Markman (1984) propose social judgment theory as the driving force behind confirmation bias and, consequently, interpersonal conflict. Social judgment theory, originally developed by Muzafer Sherif and Carl Hovland, describes the cognitive process of evaluating a new idea by filtering it through the individual’s “cognitive image” of the environment, which is cultivated from past experiences and interactions. Dhir and Markman (1984) assert that “human judgments are based then upon one’s biased interpretation of available information” (p. 600). Because of this bias, it is rare for individuals to accurately describe or explain their own judgment process. In their article, Dhir and Markman attempt to demonstrate their theory using a case study they constructed that assessed the degree to which this cognitive process influences disagreements within a marriage. They proposed that couples are more likely to have disputes if they are unaware of their own biases and of how their partner evaluates information. In this case, confirmation bias fuels the cycle of marital disputes, feeding into myside bias. Marriages suffer when one or both partners form a negative opinion or belief about the relationship and continue feeding that belief with confirmatory evidence: while the husband may be trying his best to change things, the wife only sees the mistakes. Dhir and Markman (1984) ultimately suggest that “Social judgment theory contends that disagreements may flow from the mere exercise of human judgment. Any attempt at amelioration of a human conflict situation must try to analyze the human judgment process and its limitations” (p. 601). Identifying one’s own cognitive biases thus also improves rational decision-making and interpretation.
Amos Tversky & Daniel Kahneman (1973) proposed the availability heuristic, a mental shortcut in which people recall recent information in support of a specific topic. Under this model, individuals place greater weight on recent information and on the frequency with which corresponding instances come to mind, and they may make false assertions based on this limited cognitive process.
Assessments
The term “confirmation bias” was coined by Peter Wason (1960) after his well-known experiment demonstrating the human tendency to confirm rather than disconfirm held beliefs. He offered participants a triad of numbers (2, 4, 6) that he asserted conformed to a simple rule. Participants were tasked with discovering the rule by offering as many different sequences as necessary, after which they attempted to state the rule and Wason confirmed whether they were correct. Wason constructed the experiment so that an unlimited number of sequences could conform to his rule; participants therefore had to attempt to falsify their hypotheses, rather than confirm them, in order to discover when a hypothesis was incorrect. Wason posits that confirming instances do not verify a rule or theory, but disconfirming instances can invalidate or refute it (Wason, 1960).
Snyder and Swann (1978) tested confirmation bias in social settings. Participants were instructed to test the hypothesis that a target individual was either an introvert or an extrovert. Participants in both conditions chose 12 questions out of a list of 26 to ask the target. Consistent with other studies, participants showed a consistent tendency to search for evidence confirming the assigned hypothesis, and less so for disconfirming evidence. Swann & Giuliano (1987) discussed these findings by identifying possible implications: “even the best-intentioned therapist, teacher, or layperson may unwittingly adopt a confirmatory search strategy and thereby may constrain the response options of targets in ways that cause targets to behaviorally confirm erroneous expectancies” (p. 513). In other words, a counselor may misdiagnose a client by seeking only to confirm a diagnosis rather than to disconfirm it, especially since so many disorders share symptoms. Confirmation bias can also influence people’s impressions of others and how they interact with them. When Swann and Giuliano (1987) replicated Snyder and Swann’s (1978) study across three experiments, they controlled for some methodological concerns, such as poor generalizability, yet still found results consistent with the original. They support the notion that once an individual forms a belief about a given person, they will be inclined to search for evidence in support of that belief (p. 521). Interestingly, participants were more likely to search for confirmatory evidence when they were certain of their belief; those who were uncertain explored a blend of confirmatory and disconfirming evidence.
Christopher Wolfe and Anne Britt (2008) conducted an experiment similar to that of Lewicka (1997), investigating how myside bias influences what participants think makes a “good argument.” The study observed differences in how people develop an argument in essay form. Participants were instructed to write persuasive essays either for or against an assigned side of an argument. Although most participants opposed the presented proposition, they tended to visit more sites supporting the side they were assigned. Consistent with Evans (1989), Wolfe & Britt proposed that myside bias is not characterized by an unwillingness to read other-side information. Finally, participants were asked what they thought made a good argument and whether including information from the other side was important. Those who did not think including other-side information was important tended not to include it in their own essays.
Lewicka (1997) conducted six experiments involving choice, in which participants were asked either to choose or to reject one of several alternatives representing prospective candidates. Lewicka found that this framing of the instructions influenced which information participants searched. Participants in the “rejecting” condition generated less biased information searches than those in the “accepting” condition, spreading their attention more evenly across the candidates; this allowed them to gather the evidence needed to eliminate the objectively worst candidate. Participants in the “accepting” condition devoted more attention to their first choice once they had found a sufficiently good candidate. Lewicka concluded that when scrutinizing both merits and drawbacks, participants developed a more balanced evaluation of the candidates.
Lewicka (1998) distinguishes two types of confirmation bias. The first is the layman’s understanding: favoring evidence supporting an idea while ignoring, distorting, or reinterpreting contrary evidence. She uses the example of an individual who believes fate is unfair to him because the bus he waits for is always late, when in reality he ignores or dismisses the evidence every time the bus is on time because it does not fit his belief. The second version of confirmation bias is seen more commonly in research. Like Wason (1960), Lewicka describes it as a focus on seeking confirmatory instances of a hypothesis while ignoring alternative and contrary explanations of the predicted phenomena (p. 236).
Intervention
Baron (2000) suggests that open-mindedness is the best strategy for approaching a personal bias. Baron found through his studies that individuals who rated two-sided arguments as more important also tended to show less bias in their own arguments by considering both sides. Keith & David (1998) propose self-enhancement motivation as a way to improve self-esteem. Since Swann (1981) suggested that those with negative self-views are more prone to confirmation bias, self-enhancement and the improvement of self-views can support more rational, two-sided thinking (Swann, Chang-Schneider & McClarty, 2007).
Gaps
Among the myriad articles testing for confirmation bias, the most common gap they shared was the lack of psychological theory explaining why the phenomenon occurs. Lewicka (1998) recognizes this in her own work as well as in studies by other researchers (Wolfe & Britt, 2008; Wason, 1960; Stanovich & West, 2008). Wason’s (1960) study received much criticism for weak generalizability and weak internal reliability. Some researchers (Wetherick, 1962; Nickerson; Baron) claim that Wason’s study did not measure what it claimed to measure due to methodological concerns, lacked a large enough sample size, and did not actually find a significant correlation. Many researchers have attempted to replicate and improve on Wason’s and others’ studies but ultimately produced similar results, confirming the existence of a confirmation bias. Additionally, studies examining myside bias (Wolfe et al., 2008; Baron, 1995; Stanovich & West, 2008) showed threats to internal validity in that the measurements used also assessed obedience: individuals were assigned to conditions and given instructions on what kind of argument to generate. Although a bias in researching assigned sides was observed, participants were not entirely free to demonstrate bias since they were following instructions. Because of this, their results support a myside bias in argumentation rather than in personal opinion.
Hypothesis
I hypothesize that the prevalence of confirmation bias is greater than the results of Wason’s (1960) study suggest. Since Wason saw just over half of his participants demonstrate confirmatory hypothesis testing, I believe that with a larger sample size and an enhanced design, a greater number of individuals will exhibit this form of confirmation bias. Given the amount of research behind this phenomenon and the theoretical models explaining it, significant effects should be less elusive.