
Conspiracy Theorists Think They’re in the Majority—Even as They Sit on the Fringe, New Study Finds

Conspiracy beliefs are often explained through ideas about motivated reasoning or psychological needs. Yet new research suggests there is a deeper, less obvious driver at work: people who subscribe to conspiracies tend to be overly confident in their own thinking and dramatically misjudge how widely their views are shared. In other words, the more they believe in a conspiracy, the more they assume they are part of a majority, even when the evidence suggests the opposite. This combination of overconfidence and miscalibration creates what researchers describe as a powerful false consensus effect, one that may help explain why conspiratorial thinking persists even in the face of contradictory information. The implications stretch beyond individual beliefs to shape how communities form, how evidence is evaluated, and how public discourse unfolds in the digital age. The findings come from a series of eight studies conducted with thousands of U.S. adults, spanning experiments that measure perception, belief, and confidence as participants encounter various conspiracy claims. Taken together, the line of inquiry casts conspiracy belief not merely as a function of personality or motivation, but as a systematic misalignment between what people think they know and what others actually think.

Understanding the core phenomenon: overconfidence, miscalibration, and the false consensus effect

This body of work places the false consensus effect at the center of conspiracy thinking. The false consensus effect is the psychological tendency to overestimate how much others share our beliefs, values, and preferences. In the context of conspiracy theories, researchers found that even when the actual prevalence of belief in a given conspiracy is relatively small, adherents often believe that a large majority supports their view. This miscalibration is stark: belief holders consistently overestimate the degree of agreement around their views, sometimes by dramatic margins. The researchers highlight that the overconfidence at work is not simply a matter of thinking one is right; it is a systematic misperception of social consensus. For conspiracy believers, the distance between their private conviction and the perceived public support can be wide, and the consequences for how they engage with information can be profound. In practical terms, overconfidence functions as a buffer that protects their beliefs from being eroded by contradictory data. It reinforces a closed loop: strong personal conviction feeds the sense of belonging to a unique in-group, which in turn sustains the belief even when presented with opposing viewpoints. This loop is especially potent in online spaces, where information flows rapidly and social validation can be highly consequential.

The researchers describe overconfidence as a core underlying component that helps explain why conspiratorial claims feel so compelling to some individuals. When people are convinced that their position is true, they become less likely to scrutinize alternative explanations or to accept counterevidence. The effect is amplified when individuals believe that their views reflect a majority stance, a perception that strengthens the impulse to defend the belief vigorously. In striking contrast, any given conspiracy is typically endorsed by only a minority of the general population, yet believers assume that most people agree with them; in reality, their view remains a minority position. The gap between perceived and actual public opinion is what constitutes a powerful miscalibration: a skewed sense of social consensus that can sustain and magnify conspiratorial thinking over time. This misperception is a crucial piece of the puzzle because it reshapes how people interpret disagreement and evidence, encouraging them to resist counterarguments more strongly than the actual level of public support would warrant.
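
To make that miscalibration concrete, it can be thought of as the difference between the level of agreement believers perceive and the level that actually exists. The short sketch below uses invented numbers, not figures from the studies, to show how such a false-consensus gap might be computed.

```python
# A minimal sketch with invented numbers (not the study's data): quantify the
# false-consensus gap as perceived agreement minus actual agreement.

believer_estimates = [0.80, 0.75, 0.90, 0.70]  # believers' guesses of how many people agree with them
actual_agreement = 0.15                         # observed share of respondents who actually hold the belief

perceived = sum(believer_estimates) / len(believer_estimates)
false_consensus_gap = perceived - actual_agreement  # positive = support is overestimated

print(f"Perceived agreement: {perceived:.0%}")
print(f"Actual agreement:    {actual_agreement:.0%}")
print(f"False-consensus gap: {false_consensus_gap:+.0%}")
```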

To contextualize, the researchers emphasize that overconfidence does not merely describe a lack of humility; it describes a cognitive stance in which confidence operates as a behavioral compass. If you believe you are part of a majority, you are less inclined to concede error, less open to revision, and more likely to interpret new information as confirming your view rather than challenging it. The dynamic helps explain why highly confident conspiracy theorists can persist in their beliefs even when credible information contradicts them. It also illuminates how social contexts—such as online communities, comment sections, and echo chambers—can magnify the sense of being misunderstood by outsiders while simultaneously amplifying the perception of support among like-minded individuals. The practical upshot is that overconfidence, paired with a skewed read on social consensus, can create a robust barrier to revising beliefs, regardless of the quality of new evidence.

In examining the data, the researchers also note a nuanced distinction: overconfidence in this context is not simply about a person’s skill at a particular task. Rather, it comprises a broader metacognitive tendency—how people monitor and judge their own cognitive processes. They devised tasks designed to decouple actual performance from perceived performance, ensuring that participants’ confidence was not simply a byproduct of competence. This methodological choice allowed them to observe whether belief in conspiracies co-occurs with false beliefs about social consensus. In the findings, belief in conspiracies correlated with greater overconfidence in one’s own thinking—and with a consistent miscalibration of how widely those beliefs were shared. The combination suggests a cognitive profile in which individuals not only hold certain claims with strong certainty but also misread the social temperature around those claims. The result is a potent mix that can sustain fringe opinions well past the point at which an accurate read of social agreement would prompt reconsideration.
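
One plausible way to read the task design described above, sketched below with invented trial data rather than the researchers’ exact scoring, is to compute accuracy and stated confidence separately and treat the difference as an overconfidence score.

```python
# A minimal sketch with invented data: each trial records whether a response was
# correct and how confident the participant reported being. Overconfidence is
# measured here as mean confidence minus mean accuracy.

trials = [
    {"correct": False, "confidence": 0.90},
    {"correct": True,  "confidence": 0.80},
    {"correct": False, "confidence": 0.70},
    {"correct": False, "confidence": 0.85},
]

accuracy = sum(t["correct"] for t in trials) / len(trials)
mean_confidence = sum(t["confidence"] for t in trials) / len(trials)
overconfidence = mean_confidence - accuracy  # > 0: confidence outruns performance

print(f"Accuracy:        {accuracy:.0%}")
print(f"Mean confidence: {mean_confidence:.0%}")
print(f"Overconfidence:  {overconfidence:+.0%}")
```

Scoring confidence and accuracy separately in this way is what allows a general overconfidence tendency to show up even when task performance itself is unremarkable.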

The broader significance is that the phenomenon appears not to be restricted to a single conspiracy or a single domain. Across multiple studies, the same pattern emerges: overconfidence and miscalibrated perceptions of consensus persist regardless of the specific conspiracy claim under consideration. This consistency strengthens the case that the mechanism is robust, not merely an artifact of a single dataset or a particular topic. It implies that interventions aimed at reducing conspiratorial belief must take into account how people judge social consensus and how their internal confidence interacts with those judgments. Without addressing this metacognitive dimension, debunking efforts may fall short, as individuals may simply double down in response to perceived disagreement or challenge to their identity. The research signals a need for strategies that can recalibrate perceived consensus and promote more accurate self-assessment of one’s own beliefs.

The logic of the earlier work: pseudo-profound nonsense, belief, and cognitive skepticism

Long before exploring overconfidence and false consensus in conspiracy thinking, researchers examined how people react to statements that sound profound but lack substance. In a 2015 study, Pennycook and colleagues investigated how some individuals interpret so-called pseudo-profound statements as deep or meaningful. The researchers presented participants with statements containing sophisticated-sounding buzzwords that were, in fact, semantically vacuous. They found that participants who were less skeptical and more prone to accept superficial linguistic cues tended to rate these meaningless statements as genuinely profound. This line of inquiry helped illuminate how certain cognitive styles—such as a willingness to be impressed by grandiose language—could predispose individuals to accept superficially persuasive content, including conspiratorial narratives. The study established a link between cognitive vulnerability to pseudo-profound content and a broader susceptibility to accepting unlikely or unfounded claims. This connection is important because conspiracy theories are often framed in language that sounds authoritative, coherent, and consequential—even when the underlying logic is flimsy or inconsistent.

The initial findings drew some controversy, partly due to concerns about tone and methodology. Critics argued that labeling certain content as pseudo-profound could come across as condescending or paternalistic. Nevertheless, the study won recognition in science culture and even earned a lighthearted Ig Nobel Prize for its provocative look at how people judge profundity. The central idea that resonated through later work was that a particular cognitive style—characterized by a readiness to infer depth from surface-level coherence—could predispose individuals to embrace complex or controversial narratives without critical scrutiny. While not the only factor behind conspiratorial thinking, the pseudo-profound illusion offered a useful lens for examining how people evaluate information when sophisticated language masks logical gaps. It also suggested that the human tendency to find meaning and order can be exploited by content that sounds profound but lacks substantive grounding.

As researchers extended their inquiry, they observed how this same cognitive style interacts with other determinants of belief. If individuals tend to interpret language as meaningful or profound based on superficial cues rather than robust evidence, they may be less likely to detect logical fallacies in sophisticated narratives. The research thus creates a continuum: from the interpretation of language quality to the endorsement of complex sociopolitical claims, including conspiracies. The insights imply that educational and communicative strategies should emphasize not just factual correction but also critical processing of language and argument structure. Teaching people to interrogate not only what is being claimed, but how it is being framed, could help reduce susceptibility to pseudo-profound content and, by extension, conspiratorial narratives that rely on ornate but baseless rhetoric. This thread laid the groundwork for subsequent studies that connected cognitive vulnerability to overconfidence and miscalibration of social consensus.

The AI debunking approach and the question of lasting impact

In another strand of Pennycook and colleagues’ research, the investigators explored how technology can be mobilized to counter conspiracy beliefs. They conducted experiments in which an AI chatbot engaged people who believed at least one conspiracy theory in conversation. The chatbot’s strength lay in its access to a wide corpus of information and its ability to tailor counterarguments to the individual’s specific beliefs. The results indicated that the AI-driven debunking approach could markedly weaken the strength of conspiracy beliefs, and in some cases the effect persisted for several weeks. The secret, according to the researchers, lay in precise, targeted responses that addressed the individual’s line of reasoning rather than offering generic, one-size-fits-all refutations. The AI system could adapt its messaging to align with the person’s knowledge gaps, thereby making counterarguments more relevant and harder to dismiss. This finding presented a potentially transformative implication for how debunking and public information campaigns could be designed in the digital age.

Yet the studies also highlighted limitations. Notably, substantial movement away from conspiratorial beliefs occurred primarily among participants who were less overconfident or more open to engaging with counterarguments. When engagement was shallow or individuals were unwilling to consider alternative viewpoints, the AI’s impact waned. Even with eight minutes of dialogue, a majority of participants retained their beliefs, albeit sometimes in a diminished form. This nuance underscores a critical point: the effectiveness of AI debunking depends on a willingness to participate in the conversation. If a person is not prepared to engage, the AI cannot deliver meaningful, adaptive discourse. The takeaway is not a universal cure for conspiratorial thinking but a potential tool that can be effective under the right conditions—conditions that include voluntary engagement and a context that supports reflective thinking. The broader implication is that technology can augment traditional educational and rhetorical strategies, but it cannot replace the fundamental cognitive and social processes that drive belief formation.

In the interviews surrounding these studies, Pennycook emphasized that the AI approach unsettles conventional assumptions about why people believe conspiracies. Rather than attributing belief to simple deficits in intelligence or a lack of concern for truth, these findings point to sophisticated cognitive dynamics. Overconfidence interacts with social perceptions of consensus, and technology can be used to counteract misinformed narratives in ways that are tailored to individuals. The research suggests that debunking is not merely about presenting facts; it is about engaging with the cognitive architecture that supports belief. The challenge, then, is to design interventions that respect individual autonomy while offering precise, targeted information that can recalibrate beliefs in the long term. These studies illuminate a frontier where cognitive psychology, social psychology, and artificial intelligence converge to shape how society responds to misinformation.

A rigorous, broad program: eight studies with thousands of participants

The breadth of the research program behind these insights is notable. Across eight distinct studies, the researchers examined how people’s beliefs, performance, and confidence interact in the context of conspiracy theories. The design of these studies paid careful attention to disentangling actual ability from perceived ability, ensuring that participants’ sense of competence did not simply ride on their success in a given task. For example, in one experiment, participants were asked to guess the subject of an image that was largely obscured, a task designed to reveal metacognitive accuracy independent of actual knowledge. The next steps involved probing participants’ beliefs about various conspiracy claims—such as the Apollo Moon landings being faked or Princess Diana’s death not being an accident—and assessing how confident they were in those beliefs, as well as how confident they thought others were in holding them. A subset of studies focused on testing participants’ perceptions of others’ beliefs, revealing persistent miscalibration that aligned with the overarching theme of false consensus.

The data demonstrated a clear pattern: a substantial association between individuals’ tendencies toward overconfidence and their propensity to endorse conspiracy theories. Interestingly, although any given conspiracy was typically endorsed by only a minority of survey respondents, believers consistently overestimated the prevalence of that belief among the broader population. In one illustrative set of findings, a specific conspiracy claim—such as a false-flag interpretation of a historic event—was believed by a portion of the population, yet the proportion of people who actually shared that belief was far smaller than believers perceived. The researchers interpret this miscalibration as a robust predictor of ongoing belief in conspiracies because it reinforces the sense that one’s views are not only true but also widely shared. The methodological rigor across eight studies, including large sample sizes and diverse task formats, reinforces the robustness of the conclusion that miscalibrated social beliefs interact with overconfidence to sustain conspiratorial thinking.

Moreover, the studies explored how subjective assessments of one’s own knowledge diverge from objective performance. In several experiments, participants’ self-rated understanding of a topic did not correspond with their actual accuracy on related tasks. This divergence is central to the overconfidence picture: people who are overconfident tend to trust their own judgments even when those judgments are poorly grounded in evidence. The researchers’ approach—carefully separating task performance from confidence judgments—allowed them to isolate the metacognitive components of belief formation. The outcomes converge on a consistent message: overconfidence and misperceived consensus create a cognitive environment in which conspiracy beliefs can take hold and endure. By mapping these dynamics across a broad array of experimental conditions, the research team provided a compelling, replicable framework for understanding why conspiratorial thinking resists correction and why simplistic counterarguments often fail to dissolve entrenched beliefs.
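
As a concrete illustration of that divergence, and assuming per-participant scores of the kind described, one simple check is to correlate self-rated understanding with measured accuracy; the numbers below are invented for the example.

```python
# A minimal sketch with invented data: does self-rated understanding track
# objective accuracy across participants? A weak or negative correlation would
# reflect the divergence described above.
from statistics import correlation  # Python 3.10+

self_rated = [0.90, 0.80, 0.85, 0.60, 0.70]  # participants' self-rated understanding (0-1)
accuracy   = [0.40, 0.70, 0.30, 0.60, 0.50]  # their measured accuracy on related tasks

r = correlation(self_rated, accuracy)
print(f"Correlation between self-rating and accuracy: r = {r:.2f}")
```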

In looking to practical implications, the eight-study program offers multiple lines of intervention. First, debunking efforts benefit from specificity and context. Generic refutations may fail to penetrate the confidence that believers place in their own reasoning; targeted debunking that addresses the specific lines of argument a person uses can be more effective. Second, recognizing the role of social perception is critical. If individuals think that their views reflect a broad consensus, debunking must address not only the facts but also the perceived social dynamics that sustain belief. Third, engagement quality matters. The AI-based approach showed promise, but it relies on meaningful, interactive dialogue—something that requires participants’ willingness to engage. All told, the program suggests that improving metacognitive awareness, encouraging open-minded conversation, and delivering precise, person-tailored information could collectively reduce overconfidence and miscalibrated beliefs, thereby weakening the footholds of conspiracy thinking. While no single strategy guarantees rapid transformation, a combination of targeted information, social-cognitive insight, and respectful dialogue can move the needle in meaningful ways.

The Dunning-Kruger connection and the nature of overconfidence

A central thread in this research is its relationship to the well-known Dunning-Kruger effect, which describes how people with lower ability in a domain often misjudge their own competence, while those with higher ability may underestimate it. The researchers acknowledge that the Dunning-Kruger framework helps explain why some people misperceive their own expertise in relation to conspiracies. However, Pennycook explains that the phenomenon observed in these conspiracy studies is not a simple reflection of domain-specific incompetence. Instead, the overconfidence evident among conspiracy believers emerges from a broader metacognitive stance—a general tendency to overestimate one’s own cognitive abilities across tasks and domains. This means that even when individuals are not demonstrably skilled at evaluating factual claims, their confidence in their own reasoning remains disproportionately high. Notably, the same trait does not imply universal cognitive weakness; it can coexist with strong performance in other domains, illustrating the complexity of how confidence and competence interact.

To understand this nuance, the researchers developed methods to measure confidence independent of task performance, revealing that beliefs in conspiracies tend to be accompanied by a disproportionate sense that one’s own views are widely shared and that one’s own reasoning is sound. This pattern aligns with a broader conception of overconfidence as a trait that travels across contexts, rather than a deficit confined to a particular skill set. The implication is that interventions focusing solely on improving factual accuracy may be insufficient if they do not also address the metacognitive layer—that is, how people monitor and validate their own thoughts and the social signals that reinforce their beliefs. By disentangling task-specific abilities from confidence judgments, the research illuminates why individuals who are not obviously irrational can nonetheless hold strongly false beliefs and insist on their veracity despite contrary evidence. In turn, this understanding encourages more nuanced therapeutic and educational approaches that consider both cognitive processing and the social-identity environment in which beliefs are formed and sustained.

The Dunning-Kruger connection also helps explain why some discussions around conspiracy theories become emotionally charged and resistant to change. When overconfidence is intertwined with a conviction that one’s beliefs reflect a majority view, conversations can devolve into defensive posturing rather than fact-seeking dialogue. The resulting dynamic can trap both sides in a friction-filled exchange that fails to produce understanding or resolution. Recognizing this interplay provides a pathway for more productive engagement strategies: acknowledge the social and identity dimensions of belief, invite reflective inquiry, and design interventions that reduce perceived consensus pressure while increasing exposure to credible, accessible counterarguments. In short, addressing overconfidence and miscalibrated social perception requires a comprehensive approach that respects cognitive and emotional factors alike, rather than a purely logical or informational remedy.

Combatting overconfidence: prospects and limits of debunking, dialogue, and media design

The research assembled a nuanced picture of what might work to reduce conspiratorial belief without sacrificing respect or engagement. Debunking that is tailored to an individual’s specific beliefs and presented in a precise, well-structured manner can be more effective than broad, generic corrections. The key is to provide directly relevant counterarguments and to explain why those counterarguments matter in a way that resonates with the person’s existing knowledge gaps. In practical terms, this means designing debunking messages that are highly specific, well-supported by evidence, and framed in a manner that aligns with the listener’s cognitive style. The logic behind this approach is that precise, person-centered information can circumvent some of the cognitive barriers that allow conspiratorial claims to persist, especially when the disconfirming data is connected to concrete, observable implications.

However, the studies also underscored limits. Even after extended, interactive dialogue with a debunking agent, many individuals retain their conspiratorial beliefs, albeit with reduced intensity. This persistence highlights the reality that belief systems are not easily dismantled by information alone. A primary constraint is the willingness to engage. For participants who are reluctant to participate in the conversation or who distrust the debunking source, the likelihood of perceptual recalibration drops sharply. This finding points to the broader societal challenge: how to foster environments that encourage open, nonjudgmental dialogue about controversial topics. It also suggests that credible, respectful communication is essential in all forms of media education and correctional campaigns. In addition, the research suggests that longer-term, multi-faceted strategies—combining metacognitive training, critical thinking education, and social-norms interventions—may yield more durable changes than short-term conversational debunking alone.

From a policy and practice perspective, these results encourage investment in educational tools that strengthen people’s ability to assess the reliability of information and to recognize when their confidence outpaces their actual knowledge. This includes training in metacognition, reasoning grounded in argumentation theory, and the evaluation of evidence quality. Media literacy programs could be expanded to emphasize not only how to identify misinformation but also how to understand the social dynamics that underlie belief formation, including the role of perceived consensus and identity signaling. The potential of AI-based debunking tools should be explored with caution, ensuring that such systems are transparent, ethically designed, and capable of engaging users in a way that respects autonomy and fosters genuine inquiry. Collectively, these strategies point toward a comprehensive framework for reducing the grip of conspiracy theories by addressing both cognitive overconfidence and the social misperceptions that sustain them.

Social dynamics, belonging, and the psychology of being part of the in-group

Beyond cognitive mechanisms, the research emphasizes the social psychology of conspiracy belief. For many individuals, adopting a conspiratorial view can serve a social purpose: it signals membership in a group, provides a sense of belonging, and offers a shared narrative that helps explain a confusing world. The appeal of conspiracy theories can be linked to a need for uniqueness—the desire to feel special or distinctive in a world that seems uncertain or chaotic. This sense of belonging to a community can stabilize beliefs because social validation reinforces confidence, even in the absence of strong empirical support. In addition, the interplay between belief and practice—how people participate in communities around these ideas, attend events, or share content—can further entrench the belief. Even when personal conviction wanes, relational bonds within the group may keep individuals connected, creating a social gravity that is hard to resist.

A related dynamic concerns how individuals reconcile inconsistent information with the desire to maintain coherence in their worldview. People often construct a narrative in which counterevidence can be dismissed as misinformation or manipulation by outgroups, further entrenching in-group solidarity. In such environments, the fear of social ostracism or loss of identity can make belief revision even more challenging. The researchers highlight that the social rewards attached to conspiracy thinking—being part of a like-minded community, gaining status within a network, or simply avoiding the social costs of leaving a belief system—can overpower rational scrutiny. The result is a robust social-psychological system in which cognitive biases and social motives reinforce one another, producing durable conviction in even highly implausible claims.

Understanding these social dynamics is essential for designing interventions that are not only cognitively sound but also socially acceptable. Educational programs that focus solely on correcting factual errors may fail to address the identity-related incentives that sustain conspiratorial communities. Instead, successful strategies might combine critical thinking training with opportunities for constructive social engagement that allow individuals to explore concerns without feeling attacked or alienated. For example, programs could emphasize shared goals, such as civic literacy, media responsibility, and evidence-based reasoning, while offering safe spaces for dialogue where people can examine their beliefs without fear of social reprisal. In practice, this likely requires collaboration among educators, community leaders, media organizations, and platforms that host opinion discourse. The overarching aim is to cultivate a culture that values thoughtful disagreement and evidence-based reasoning as a shared social norm, reducing the social benefits of embracing fringe views while preserving individuals’ sense of agency and belonging.

Practical guidance for individuals, educators, and media platforms

For individuals seeking to navigate information landscapes more effectively, several practical steps emerge from this line of research. First, cultivating metacognitive awareness—becoming more conscious of how confidence relates to actual knowledge—can help people pause before asserting certainty about contested claims. Second, seeking out diverse information sources and actively testing one’s own beliefs against a broad cross-section of perspectives can reduce the risk of miscalibrated beliefs. Third, engaging in reflective conversations that emphasize understanding the other person’s reasoning and asking clarifying questions can promote more productive exchanges, especially when both parties approach the discussion with curiosity rather than defensiveness. These practices align with a broader objective: to reduce the personal and social costs of overconfidence and to create opportunities for legitimate doubt and revision where warranted.

Educators play a critical role in translating these insights into teaching practices that build students’ analytical thinking skills and their ability to evaluate evidence. Curricula that emphasize argument structure, evidence appraisal, and the distinction between correlation and causation can equip learners with a more resilient cognitive toolkit. Teachers can also introduce metacognitive prompts that invite students to reflect on their confidence levels relative to their actual performance, thereby normalizing the process of assessing one’s own knowledge. For media platforms and information designers, the challenge is to present counterarguments in ways that are accessible, relevant, and non-confrontational. Algorithms that prioritize engagement should be rethought to avoid amplifying sensational but unfounded claims; instead, platforms could emphasize quality sources, transparent reasoning, and context that helps users understand why a claim holds or fails under scrutiny. In all these domains, the aim is to create environments that reward careful thinking, humility in the face of uncertainty, and willingness to adjust beliefs in light of credible evidence.

Overall, the implications extend to how society communicates about uncertainty and risk. The research underscores that simply presenting facts is often insufficient to shift beliefs that are tied to identity and social belonging. Instead, successful communication strategies should consider the cognitive biases at play, the social incentives that shape beliefs, and the conditions under which people are most receptive to corrective information. The combination of precise, personalized debunking, opportunities for constructive dialogue, and educational reinforcement of critical thinking can contribute to a more resilient information ecosystem. While there is no silver bullet that guarantees rapid change in beliefs, the assembled evidence suggests a multi-pronged approach that respects individuals’ autonomy while encouraging more accurate perceptions of social consensus and better self-awareness of one’s own cognitive processes.

Conclusion

The convergence of overconfidence, miscalibrated perceptions of consensus, and social belonging provides a cohesive framework for understanding why conspiracy theories persist. Across multiple studies, conspiracy believers exhibit a pronounced overconfidence in their own reasoning and a systematic misjudgment of how widely their views are shared. This false consensus effect helps explain why counterarguments and factual corrections often struggle to gain traction, especially in online environments where social validation and identity signaling are powerful forces. It is not merely a matter of individual intelligence or skepticism; it is a complex interplay of metacognition, social perception, and community dynamics that sustains fringe beliefs. The research also points toward potential intervention pathways, including targeted, personalized debunking, metacognitive training, and educational initiatives that cultivate critical thinking and resilient reasoning. Importantly, these strategies must be designed with sensitivity to the social needs that conspiracy communities fulfill—belonging, meaning, and identity—so that corrective information can be integrated without triggering defensive backlash. By integrating cognitive insights with social and educational strategies, there is a path toward reducing the influence of conspiracy theories while preserving open, thoughtful dialogue about contested topics. The ultimate goal is a healthier information ecosystem in which individuals feel empowered to question claims rigorously, recognize when their confidence exceeds their knowledge, and participate in a public conversation grounded in evidence, empathy, and shared inquiry.