Conspiracy beliefs often seem stubbornly resistant to evidence, nurtured by a mix of psychological needs and cognitive quirks. A new, large-scale examination suggests a striking twist: people who endorse conspiracies are not only overconfident in their views, they also misjudge how widely those views are shared, overestimating agreement with their position by several times the actual figure. The research points toward a powerful, counterintuitive driver: overconfidence itself acts as a central force keeping conspiratorial beliefs afloat, even when most other people dispute them. In recent years, scientists have begun to unpack this phenomenon by connecting long-standing theories about motivated reasoning and social dynamics to fresh experimental findings. The result is a clearer, more nuanced picture of why conspiracy theories endure and how they might be challenged, including through targeted uses of artificial intelligence designed to debunk misinformation in personalized, timely ways.
The core finding: overconfidence and false consensus
A comprehensive study conducted across eight discrete experiments with more than four thousand U.S. adults examined the interplay between belief in conspiracies, overconfidence, and perceptions of how many others share those beliefs. The researchers designed tasks where participants’ actual performance could be measured independently from how confident they felt about their answers. For example, in one set of tests, people were asked to identify the subject of an image that was heavily obscured, requiring judgment under uncertainty. After completing such tasks, participants were asked direct questions about their beliefs in several well-known conspiracy claims—ranging from the idea that the Apollo Moon landings were fabricated to the assertion that Princess Diana’s death was not an accident. In parallel, other tasks probed how participants perceived the beliefs of others about these conspiracies.
The findings revealed a robust pattern: individuals who tended to be overconfident were also more likely to believe in conspiracy theories. More strikingly, the data showed a dramatic miscalibration: although only a minority of participants actually endorsed the conspiracy claims, those who did consistently believed that their views were shared by the vast majority of the population. Quantitatively, about 12 percent of participants endorsed the conspiracy claims, yet those believers estimated that roughly 93 percent of people agreed with them. In other words, believers not only overrate their own certainty, they massively overestimate how widespread their views are. This combination points to what the researchers described as a powerful false consensus effect: a misperception that one's own beliefs are mainstream when, in reality, they are far from it.
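To make that gap concrete, here is a minimal sketch in Python that restates the article's summary figures as a miscalibration in percentage points. The helper function and the variable names are illustrative assumptions; only the 12 percent and 93 percent figures come from the article's summary.

```python
# Illustrative only: the figures restate the article's summary (12% actual
# endorsement, 93% perceived agreement among believers); nothing here is study data.
def false_consensus_gap(actual_share: float, perceived_share: float) -> float:
    """Return how far believers' perceived agreement exceeds actual agreement,
    in percentage points."""
    return (perceived_share - actual_share) * 100

actual_endorsement = 0.12    # share of all participants endorsing a claim
perceived_agreement = 0.93   # believers' average estimate of public agreement

gap = false_consensus_gap(actual_endorsement, perceived_agreement)
print(f"Perceived agreement exceeds actual endorsement by about {gap:.0f} points")
```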
Gordon Pennycook, a psychologist at Cornell University and one of the study’s co-authors, described the result as perhaps “one of the biggest false consensus effects that’s been observed.” He explained that the overconfidence pattern would not surprise anyone who regularly encounters conspiratorial claims, yet the scale of the miscalibration, with believers inflating the perceived commonality of their views nearly fourfold, surprised even him. The central message is clear: some conspiracy thinking is not rooted solely in misinterpretation or a lack of information; it also hinges on a cognitive bias that makes believers think they stand on broad, mainstream ground when they do not.
This line of inquiry builds on a broader research agenda that Pennycook has pursued for years, exploring how people evaluate information that seems profound but is logically nonsensical, and how these tendencies relate to broader patterns of belief. Past work has shown that some individuals accept “pseudo-profound” statements, buzzword-laden sentences that sound deep but are practically vacuous, more readily when they exhibit lower skepticism and weaker analytical thinking. Placed alongside the conspiracy-belief work, those findings help explain how confident individuals can project their own convictions onto the surrounding society, reinforcing a sense of being part of a larger movement even when the counterevidence is robust.
What emerges from these experiments is a more precise account of why conspiracists may resist correction. It’s not simply a failure to process information correctly; it is a deeper, more stubborn confidence that tends to shield their beliefs from disconfirming evidence. The researchers emphasize that the relationship between overconfidence and perceived consensus appears to be central to maintaining conspiracy beliefs over time, shaping how individuals interpret new information and how they respond to counterarguments.
Beyond the numbers, the findings reveal a social-psychological dynamic: people who hold conspiratorial views often feel a sense of belonging or identity from those beliefs, which complicates efforts to persuade them away from these views. The interplay between personal conviction, a desire for uniqueness, and a sense of community can produce a powerful feedback loop. The study suggests that the overconfidence and miscalibrated sense of widely shared belief may function together to stabilize conspiracist ideation, making it harder for standard fact-checking or generic debunking to shift opinions.
The research lineage: from pseudo-profound BS to AI debunking
This investigation sits within a broader arc of research that has repeatedly challenged simple explanations for conspiracy thinking. A notable early thread in Pennycook’s work involved how people interpret pseudo-profound statements that mimic depth without meaningful content. In a 2015 investigation, participants were exposed to statements containing superficially impressive vocabulary; those who were less skeptical or showed lower analytical thinking tended to treat these nonsensical lines as genuinely profound. The study spurred debate about its tone and methodology but nonetheless contributed to a broader recognition that belief formation often intertwines analytical capacities, cognitive biases, and motivation.
The same line of inquiry extended to real-world conspiracy claims, where the researchers questioned common assumptions about why people adopt such beliefs. In 2016, the team earned an Ig Nobel Prize for the pseudo-profound BS work, a nod to the quirky but serious implications of how language and cognition interact in belief formation. More recently, Pennycook and colleagues explored how digital tools such as AI chatbots could influence conspiratorial beliefs in real time. In that research, an AI agent held prolonged conversations with individuals who held at least one conspiratorial belief. The results showed a meaningful reduction in belief strength, a decline that persisted for at least two months after the interaction. The key to the chatbot’s effectiveness lay in its ability to draw on vast troves of information and tailor counterarguments to the specific beliefs and concerns of each participant, illustrating how personalized debunking can be far more persuasive than generic corrections.
Taken together, these studies map a progression from abstract questions about why people accept deep-sounding but hollow statements to concrete demonstrations that tailored, data-driven debunking can meaningfully affect beliefs. Yet the AI-based debunking also highlighted limitations: significant portions of participants did not change their beliefs, even after extended counterarguments, underscoring the challenge posed by deeply held and socially reinforced convictions. The AI work signaled a potential pathway for intervention, rather than a guaranteed solution, by showing that persistent engagement and precise, context-specific updates can change minds for a subset of individuals. In the contemporary landscape of misinformation, such findings offer a promising but nuanced approach to reducing the grip of conspiracy theories on public discourse.
Methodology and core measurements: eight studies, thousands of participants
The core overconfidence study comprised eight distinct experiments with a large, diverse sample of U.S. adults, totaling more than four thousand participants. The design was deliberately constructed to separate actual performance from perceived performance. Participants completed tasks that required perceptual or cognitive judgments under uncertainty, which made it possible to measure self-assessed performance independently of how well they actually did. In parallel, participants reported their belief in several high-profile conspiracy claims and, crucially, their estimates of how many others in the population shared those beliefs.
A central finding across the eight studies was a direct association between overconfidence and conspiracist beliefs. Yet the drivers were more complex than a single cognitive bias. The crucial miscalibration was not just about underestimating one’s own errors but about misjudging the social consensus around those beliefs. Although only a small fraction of participants endorsed the conspiracy claims, those same individuals consistently overestimated the degree of social agreement with their views. The discrepancy, believing that a majority supports one’s conclusions when, in fact, the majority may disagree, emerges as a robust predictor of conspiracy endorsement.
In examining how people viewed the beliefs of others, the studies highlighted a pattern in which believers often perceived a stronger social consensus than was warranted by evidence. This misperception was not uniform across all conspiracy claims but showed a consistent tendency: believers who felt more confident in their own judgments also tended to project broad social acceptance of their views. The data indicated that this kind of social miscalibration, rather than a simple lack of information, plays a significant role in the persistence of conspiratorial beliefs.
The eight studies also included probing questions about specific conspiracy claims, such as whether the Apollo Moon landings were faked or whether Princess Diana’s death was an accident. Across these items, the researchers observed that participants who tended to overestimate consensus also tended to rate their own knowledge and certainty as high, even when the objective basis for such certainty was weak. The results emphasize that overconfidence serves as a key driver that not only sustains belief in conspiracies but also shapes how people interpret counter-evidence and whether they accept alternative viewpoints.
Moreover, the researchers acknowledged that the relationship between confidence and belief is not a simple reflection of general intelligence or cognitive ability. Rather, their interpretation points to an overconfidence that travels with individuals into varied situations and domains. This suggests a relatively stable trait-like aspect of cognition that interacts with social perception, feeding into a broader ecosystem of belief formation where personal conviction, perceived belonging, and information processing converge to amplify conspiracy thinking.
Interpreting overconfidence, consensus, and the Dunning-Kruger link
A central theme of the interviews with Pennycook and colleagues was the idea that overconfidence cannot be easily disentangled from the broader spectrum of cognitive biases that shape belief. In particular, the researchers discussed the relationship between overconfidence and the well-known Dunning-Kruger effect. They explained that misjudging one’s own competence is not necessarily a simple function of skill level in a given domain. Instead, the same cognitive processes that impair task performance can also impair awareness of those impairments. A key methodological development in their work involved de-coupling task performance from self-assessed ability. This allowed the team to isolate the tendency to overestimate performance and to assess how that tendency interacts with belief in conspiracy theories.
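To illustrate what decoupling task performance from self-assessment can look like in practice, here is a minimal sketch in Python. The scoring rule (mean confidence minus mean accuracy) and the example data are assumptions for illustration, not the authors' actual measure.

```python
# A minimal, assumed scoring sketch: overconfidence as the gap between
# self-rated confidence and objective accuracy on the same trials.
from statistics import mean

def overconfidence_index(accuracy: list[int], confidence: list[float]) -> float:
    """Mean self-rated confidence minus mean objective accuracy.

    accuracy:   list of 0/1 outcomes on the perceptual task
    confidence: list of self-ratings on the same 0-1 scale
    Positive values mean the person thinks they did better than they did.
    """
    return mean(confidence) - mean(accuracy)

# Hypothetical participant: got 4 of 10 obscured images right,
# but rated themselves highly confident on most trials.
accuracy   = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
confidence = [0.9, 0.8, 0.85, 0.9, 0.7, 0.95, 0.8, 0.9, 0.85, 0.75]

print(f"Overconfidence index: {overconfidence_index(accuracy, confidence):+.2f}")
```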
Pennycook described a nuanced view of overconfidence as a broad, transferable trait. He explained that the same psychological mechanisms that enable people to navigate complex environments—such as robust confidence in one’s own perceptions—can become maladaptive when beliefs are untestable by straightforward evidence. The crucial point is that overconfidence can operate as a protective layer, reducing openness to new information. When a belief feels certain and morally unassailable, the cognitive effort of reevaluating it may seem unnecessary or even aversive. In such conditions, people may double down on what they think they know, regardless of contrary data, and they may insist on the broad social acceptability of their viewpoint even when evidence suggests otherwise.
The research also addressed how overconfidence relates to the drive for uniqueness. People who hold conspiratorial beliefs often report a sense of being part of a distinct in-group that others do not understand, which can paradoxically reinforce their sense of certainty. The scientists noted that this counterintuitive dynamic, wanting to feel unique while simultaneously believing that one’s views are widely shared, can produce a disconnect between personal conviction and the actual social landscape. A practical example cited in the research concerns high-profile conspiracist claims such as the assertion that the Sandy Hook shooting was a false flag. In a given sample, only a small subset of participants believed the false flag theory, yet those believers assumed that a far larger share of the population agreed with them. The researchers stressed that while individuals may feel special in their beliefs, their perception of consensus is often grossly inflated, illustrating a profound miscalibration.
The team also scrutinized the implications of overconfidence for attempts to correct misinformation. They observed that the miscalibration is not simply a matter of presenting more facts; the social and cognitive architecture that supports these beliefs often resists straightforward correction. In other words, even the best counterarguments may fail to produce durable change if they do not address the underlying confidence and perceived social dynamics that sustain conspiracist ideation. This nuance helps explain why traditional fact-checking and one-off debunks frequently have limited impact on entrenched beliefs.
Combating conspiracist overconfidence: AI debunking and the limits of conversation
Among the most provocative findings discussed by Pennycook and colleagues is the potential of tailored AI debunking to reduce belief in conspiracies. In one line of research, an AI chatbot engaged participants in extended conversations designed to challenge their conspiratorial views. The AI was equipped to draw on a vast breadth of information across many topics, enabling it to tailor counterarguments to the individual’s specific beliefs and concerns. The results showed a meaningful attenuation of belief strength, with effects persisting for at least two months after the interaction. The takeaway: targeted, personalized debunking can alter beliefs more effectively than one-size-fits-all corrections.
The mechanism behind the AI’s effectiveness appears to be twofold. First, the personalized nature of the counterarguments makes the rebuttals more relevant and harder to dismiss. Second, the AI can adapt its approach to reflect the user’s unique cognitive style and information gaps, potentially reducing the cognitive dissonance that arises when presented with generic corrections. Pennycook framed the outcome as a meaningful challenge to longstanding assumptions about conspiracy thinking—it suggested that the remedial path might lie, in part, in leveraging scalable, data-driven tools to provide precise, context-aware debunking.
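As a rough illustration of what such a personalized, dialogue-based debunker might look like, here is a minimal sketch assuming the OpenAI Python SDK. The model name, prompts, and loop structure are placeholder assumptions and do not reproduce the researchers' actual system.

```python
# A minimal sketch of a personalized debunking loop, not the researchers' system.
# Assumes the OpenAI Python SDK (v1) and an OPENAI_API_KEY in the environment;
# the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a careful, respectful fact-checker. The user will describe a belief. "
    "Ask what evidence convinced them, then respond to that specific evidence with "
    "concrete, sourced counterarguments rather than generic corrections."
)

def debunking_session() -> None:
    # Keep the full conversation so each reply can build on the user's
    # specific claims rather than issuing a one-size-fits-all correction.
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    print("Describe the claim you believe and why (blank line to stop).")
    while True:
        user_turn = input("> ").strip()
        if not user_turn:
            break
        messages.append({"role": "user", "content": user_turn})
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(answer)

if __name__ == "__main__":
    debunking_session()
```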
However, the researchers are careful to acknowledge the limits. The AI debunking effect did not universally convert believers to skeptics, and the degree of impact varied across individuals. A central obstacle is whether a person is willing to engage in the conversation in the first place. The researchers highlighted that even when participants were compensated for engaging in the dialogue, a sizable share remained resistant to change. This suggests that, while AI-powered debunking can be a powerful instrument, it is not a universal antidote to conspiratorial thinking. The larger takeaway is that engagement quality and willingness to reconsider are critical determinants of the success of any debunking intervention.
Looking ahead, the researchers emphasized that the challenge of overcoming overconfidence is not simply one of delivering more persuasive evidence. It requires strategies that address both the cognitive biases and the social identities that reinforce these beliefs. The AI approach demonstrates a promising avenue, but it must be integrated with broader communication strategies that respect autonomy while encouraging critical reflection. In the long run, the work invites policymakers, educators, technologists, and journalists to rethink how to design conversations about misinformation in ways that are more dialogic, tailored, and constructive.
Implications, applications, and avenues for future study
The convergence of findings on overconfidence, miscalibrated consensus, and the partial success of AI-driven debunking has important implications for how societies educate and engage with misinformation. Recognizing that belief in conspiracies is linked to a robust confidence that often misreads social norms helps explain why simple corrections may fall flat. This understanding points toward approaches that combine personalized counterarguments with strategies designed to reduce defensiveness and promote reflective thinking. Education and media literacy programs might benefit from incorporating exercises that illuminate how people misestimate consensus and how confidence can become a barrier to updating beliefs even in light of new information.
Practically, these findings encourage the design of debunking interventions that go beyond the mere transmission of facts. Effective approaches may involve engaging with emotional and social dimensions, offering clear, plausible alternative explanations, and creating opportunities for constructive dialogue where individuals feel respected and heard. The AI debunking studies hint at what a scalable, adaptive intervention could look like: a system capable of diagnosing a person’s specific misconceptions and delivering targeted, highly relevant counterarguments in a non-confrontational manner. Yet to realize this potential at scale, researchers and developers must address ethical considerations, ensure transparency about AI capabilities, and rigorously test for unintended consequences, such as the reinforcement of bias or the creation of new echo chambers.
The research also invites further inquiry into the nature of overconfidence itself. Is overconfidence a fixed trait that travels across contexts, or is it malleable through training, feedback, and social experience? The interplay between overconfidence and social consensus bias—how individuals infer the beliefs of others—appears to be a central mechanism sustaining conspiratorial ideation. Understanding how these processes operate across different populations, age groups, education levels, and cultural contexts could help tailor more effective interventions. Future work might also explore longitudinal designs that track how shifts in confidence and perceived consensus relate to changes in belief over months or years, including how exposure to credible information sources, social networks, and community norms modulate these dynamics.
Researchers may also examine how these insights apply to other types of misinformation beyond conspiracy theories. Do similar patterns of miscalibrated consensus and overconfidence emerge with health myths, political disinformation, or science denial? The degree to which the observed effects generalize remains an important question for future work. Likewise, it will be valuable to study how increasingly sophisticated AI tools interact with human cognition in real-world settings, such as classrooms, workplaces, and online communities. The balance between promoting critical thinking and preserving open dialogue will be crucial as these technologies become more integrated into everyday information ecosystems.
In sum, the body of work surrounding overconfidence, false consensus effects, and conspiracy beliefs advances a more nuanced understanding of why these beliefs persist and how they can be challenged. The central insight—that people who believe in conspiracies may be both overconfident and miscalibrated about how many others share their views—offers a concrete target for intervention. The evolving exploration of AI-enabled debunking demonstrates a potentially scalable method to reduce belief strength, even if it is not a universal remedy. As researchers continue to refine methodologies and explore new applications, the goal remains clear: promote informed, reflective thinking while preserving the integrity of open dialogue in a democracy increasingly defined by information abundance and digital persuasion.
Conclusion
The latest findings illuminate a critical nexus at the heart of conspiratorial thinking: overconfidence coupled with a misperceived social consensus. Across eight carefully designed studies involving thousands of participants, researchers demonstrate that conspiracy believers not only maintain strong confidence in their views but also consistently misread how widely those views are shared. The miscalibration is substantial, with believers tending to assume broad agreement where little exists. This substantial false consensus effect appears to be a central driver of conspiracy beliefs, reinforcing a cognitive shield that resists corrective information.
At the same time, the research points toward actionable avenues for intervention. Tailored AI debunking, as shown in prior work, can meaningfully reduce belief strength for some individuals, especially when the counterarguments are precise, contextualized, and delivered in a way that respects the participant’s perspective. Yet success hinges on the willingness of people to engage with the conversation. The takeaway is not a single silver bullet but a nuanced strategy: address cognitive biases and social identity dynamics, deliver personalized debunks, and foster constructive dialogue that invites reconsideration rather than defensiveness.
Ultimately, this body of work underscores a broader, practical implication for media literacy, education, and public communication. If interventions can be designed to illuminate the gap between perceived and actual consensus without triggering resistance, they may help reduce the grip of conspiracy thinking on public discourse. The path forward will require careful balancing of evidence, empathy, and technological tools—an approach that can advance informed citizenry in an era defined by rapid information flows and complex psychological dynamics.