A recent Duke University study reveals a troubling paradox: using AI at work can boost productivity while simultaneously inviting social judgments that undermine a worker’s perceived competence and commitment. Across several controlled experiments, employees who received AI-assisted help were consistently rated as lazier, less diligent, and less independent than their peers who received the same help from non-AI means or who worked without AI assistance. The findings point to a pervasive social cost of AI adoption that operates independently of actual task performance, and the researchers argue this stigma could act as a hidden barrier to broader workplace AI integration.
The social cost of adopting AI at work
Artificial intelligence is increasingly positioned as a productivity enhancer in modern organizations, capable of accelerating routine tasks, generating insights, and supporting decision-making. Yet the Duke study suggests that the social calculus around AI use may negate some of these gains by shaping how others perceive and evaluate AI-enabled workers. In a series of four experiments involving more than 4,400 participants, researchers showed a consistent pattern: AI-assisted workers faced negative judgments about their competence, motivation, and reliability from colleagues and managers, regardless of the task or setting.
The core dilemma identified by Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke’s Fuqua School of Business is that AI can enhance productivity on the one hand while carrying social costs on the other. This duality forces workers to weigh whether to adopt AI tools, especially in environments with strong reputational concerns or ambiguous performance signals. The fundamental question emerging from the research is not whether AI improves output, but how its use alters the social perception of the worker who employs it, and whether those perceptions translate into concrete career consequences such as hiring decisions, promotion prospects, or performance evaluations.
The study’s broader message is that the social ecosystem within organizations can dampen or distort the measurable benefits of AI. When colleagues and supervisors infer laziness or diminished diligence from AI-assisted performance, even accurate assessments of an employee’s actual contribution can become biased. The researchers emphasize that the stigma surrounding AI use is not a fringe phenomenon affecting only a subset of workers, but a robust pattern that crosses typical demographic divides. The implications extend to how teams collaborate, how managers allocate blame or credit for outcomes, and how transparent or discreet workers choose to be about their use of AI tools.
In addition to the immediate implications for individual workers, the study raises questions about organizational norms and culture. If teams prize conspicuous displays of effort and manual toil over tool-supported efficiency, AI adoption may be perceived as a signal of dependency or weakness, even when the tools deliver demonstrable improvements. This not only shapes daily work dynamics but also influences talent management strategies, leadership expectations, and the design of performance review criteria that may inadvertently penalize technology-assisted productivity. The social evaluation penalty identified in the Duke study is, therefore, a multifaceted phenomenon that touches employee behavior, managerial judgment, and the broader workplace climate.
To summarize, the Duke research presents a nuanced portrait of AI in the workplace: a powerful instrument for productivity that also carries a social burden, potentially offsetting its practical advantages through biased perceptions and reputational risk. The central finding is clear—AI use triggers a consistent social penalty that can affect how workers are evaluated, how they are treated in the workplace, and how decisions about hiring, advancement, and collaboration are made.
Methodology: four experiments, thousands of participants, across scenarios
The Duke team undertook a rigorous program of four experiments designed to illuminate both anticipated and actual evaluations of AI users in a work context. Each experiment explored different dimensions of AI use, from hypothetical scenarios to more realistic assessments of job performance, with a shared aim of determining whether the social penalties associated with AI use were contingent on specific roles, demographics, or task types.
In the opening experiment, participants imagined themselves in a work environment where they had the option to use either an AI tool or a traditional dashboard creation tool to complete a project. The results consistently showed that those who opted for the AI option anticipated harsher judgments: they believed peers and managers would view them as lazier, less competent, less diligent, and more replaceable than would be the case for someone using conventional technology. Crucially, this expectation extended to a lower willingness to disclose AI use to colleagues and supervisors, suggesting a reluctance to be transparent about the tools underpinning one’s work.
The second experiment built on these anticipated judgments by evaluating descriptions of actual employees who received different kinds of help. Participants were presented with scenarios in which employees received assistance from AI, from non-AI sources, or no help at all. Across the board, AI-assisted employees were rated as lazier, less competent, less diligent, less independent, and less self-assured than their counterparts who received the same help from non-AI sources or who did not receive help. This pattern reinforced the idea that the social penalties are tied not just to the act of using AI but to the perception of AI-derived assistance itself.
A particularly important thread in the paper is the observation that these judgments were not confined to specific demographic groups. The researchers explicitly tested for variations based on the target’s age, gender, or occupation to determine whether certain cohorts were more susceptible to AI-related bias. The findings demonstrated that none of these demographic attributes materially altered the effect. In other words, the social penalty for AI assistance appeared to be a general phenomenon—a broad-based stigma that did not rely on the characteristics of the person receiving help.
Experiment three shifted from perceptions to real business decisions. In a hiring simulation, managers who did not themselves use AI were less likely to hire candidates who regularly used AI tools. Conversely, managers who frequently used AI showed a preference for AI-using candidates. This reversal highlights how personal experience with AI modulates bias, and it suggests that exposure to AI within a leadership group or organization can pivot the hiring lens, potentially creating a divide between AI-using and non-AI-using managers in terms of talent selection.
The final experiment deepened the analysis by examining whether the perceived laziness penalty could be mitigated by context. The researchers found that the stigma did lessen when AI use was clearly advantageous for the assigned task. If the AI tool made sense for the job at hand, the negative perceptions diminished substantially. This result indicates that the social penalties are, at least in part, contingent on task relevance and the value AI contributes to achieving specific objectives. In other words, alignment between AI capabilities and job requirements can attenuate, though not necessarily erase, the social costs of AI adoption.
An additional layer of nuance emerged from the data: the evaluators’ own experience with AI mattered. Those who used AI frequently were less likely to perceive an AI-using candidate as lazy, suggesting that familiarity with the technology and firsthand experience with its outputs can soften the stigma. This finding points to a path for organizations seeking to reduce bias: increasing employee familiarity with AI through hands-on exposure and practical demonstrations of value can alter perception and reduce unwarranted negative judgments.
In aggregate, the four experiments provide a cohesive narrative about how AI use intersects with social evaluation in the workplace. The results indicate a robust bias against AI-assisted workers that transcends simple task performance and becomes a factor in judgments about laziness, diligence, independence, and self-assurance. Yet they also reveal important conditions under which the stigma can be dampened—namely, when AI is demonstrably useful for the task and when evaluators have substantial experience with AI themselves.
The social evaluation penalty: what critics and supporters should know
A central takeaway from the Duke study is that social penalties associated with AI use are not isolated incidents limited to particular job types or sectors. The researchers characterize the penalty as a form of social evaluation: how others assess and respond to the knowledge that a worker is leveraging AI to perform or expedite work. The penalties manifest as perceived laziness, questions about diligence and independence, and concerns about the worker’s overall self-assurance and reliability. These perceptions can cascade into tangible consequences, influencing hiring decisions, performance reviews, team dynamics, and opportunities for advancement.
The breadth of the stigma is particularly notable. The research demonstrated that demographic attributes such as age, gender, or occupation did not straightforwardly moderate the effect. This suggests that AI-related stigma operates as a general social mechanism rather than a bias restricted to a subset of workers. The universality of the effect presents a broader challenge for organizations seeking to adopt AI tools: without addressing the underlying social narratives, AI implementation could experience friction that undermines productivity gains and employee morale.
The findings also underscore the complexity of integrating AI into workplaces. While AI promises efficiency, accuracy, and speed, the social environment can subvert these benefits if colleagues interpret AI use as a signal of dependency, decreased effort, or diminished capability. In practical terms, this means that teams and leaders must consider not only the technical integration of AI tools, but also the social ecosystems—values, norms, and communication patterns—that shape how AI use is perceived and evaluated.
From a policy and management perspective, the study provides actionable insight into how to design safer, more inclusive AI adoption strategies. Transparency about the role of AI in completing tasks, along with clear documentation of its contributions to outcomes, can help align expectations and reduce misperceptions. Equally important is fostering an organizational culture that rewards intelligent tooling and emphasizes the quality and impact of outcomes rather than the visible effort saved by automation. Training programs and knowledge-sharing sessions that increase AI literacy among all employees, including managers, can also help reduce stigma by making AI outputs more legible and trustworthy.
In short, the “social evaluation penalty” identified in the Duke study is a multi-dimensional challenge that requires a holistic response. It is not enough to simply deploy AI tools; organizations must also cultivate cultures of transparency, education, and supportive leadership that recognize and reward AI-enabled productivity without penalizing the humans who leverage these tools to achieve better results.
Demographic breadth: a stigma that crosses demographic lines, not one confined to a few groups
A striking element of the Duke findings is the apparent universality of AI-related stigma across demographics. The researchers explicitly tested a broad spectrum of stimuli to explore whether factors such as the target’s age, gender, or occupation would alter the effect of receiving AI assistance. The results showed no meaningful influence from these demographic attributes on the core evaluations—perceptions of laziness, diligence, competence, independence, or self-assuredness remained consistently affected by AI use, irrespective of who received the help.
This cross-demographic robustness signals a general social bias toward AI-enabled work that is unlikely to be easily explained away by targeted stereotypes. Instead, it points to a broader cultural script about how humans interpret tool-assisted performance. The implication is that concerns about AI adoption are not simply about certain groups of workers or particular job roles; they reflect a wider anxiety about the implications of delegating cognitive or creative tasks to machines. This universality intensifies the urgency for organizations to address the stigma at multiple levels—from leadership messaging to practical demonstration projects that illustrate the reliability and value of AI outputs.
The cross-demographic nature of the stigma also raises questions about how performance metrics are designed and interpreted in AI-augmented environments. If evaluators rely on cues such as visible effort or manual processes to gauge dedication and capability, AI involvement can distort judgments, even when task outcomes meet or exceed objectives. The study’s findings encourage a careful re-examination of what counts as evidence of success in AI-enabled work—whether it is the final result, the speed of delivery, the quality of decisions, or the consistency of outputs across complex tasks.
From the perspective of human resources and organizational development, this implies a need for standardized, objective performance criteria that isolate the contribution of AI from subjective impressions. It also means investing in calibration exercises for managers and teams to align expectations about AI-generated results and to reduce snap judgments based on visible signals of effort. The ultimate goal is to create a workplace where AI-supported performance is recognized for its value and contributions, rather than being misread through a lens of unwarranted skepticism or bias.
Real-world decision dynamics: how stigma shapes hiring and evaluation
The Duke study does not merely describe abstract attitudes; it also demonstrates how these attitudes translate into concrete workplace decisions. The hiring simulation is a particularly telling component, showing that managers who did not use AI themselves tended to deprioritize or overlook candidates who consistently relied on AI tools. In contrast, managers who frequently used AI demonstrated a bias in the opposite direction, favoring candidates who used AI. This dichotomy reveals a potential segmentation effect within organizations: leaders who are comfortable with AI may actively prioritize AI-proficient talent, while those who distrust or avoid AI may penalize AI users during the selection process.
Such dynamics have multiple implications for talent acquisition, team composition, and leadership development. If a subset of hiring managers undervalues AI-enabled candidates, AI-savvy job seekers and current employees may need to build stronger evidence of the impact of their AI-supported work to overcome biases. Conversely, organizations with leaders who rely on AI may naturally gravitate toward teams that leverage AI tools, potentially accelerating AI-driven performance but also creating inequality in the evaluation landscape if not managed carefully.
The study’s final experimental results add an important nuance: the observed penalty for AI use could be offset when the tool’s value to the task was clear. When the AI approach was demonstrably advantageous for the job at hand, the social stigma diminished significantly. This finding suggests that the practical alignment between AI capabilities and job requirements can modulate social judgments, reinforcing the idea that AI adoption should be guided by demonstrable fit and measurable outcomes rather than by technology adoption for its own sake.
Another notable aspect is the role of personal experience with AI. Evaluators who used AI more frequently were less likely to label AI-using candidates as lazy, indicating that familiarity with AI diminished some of the negative biases. This underscores the potential benefits of hands-on AI exposure across managerial and staff levels as part of a broader change-management strategy. If leadership and staff develop a shared vocabulary and experience with AI outputs, the gap between belief and reality can narrow, reducing the social penalties attached to AI use.
Taken together, these results illustrate a feedback loop in which AI adoption affects both perception and decision-making. Perceived laziness and diminished competence can influence hiring, promotions, and team assignments, while actual task performance and task-specific usefulness can mitigate or amplify these effects. The practical takeaway for organizations is that AI-driven productivity gains must be complemented by transparent communication, evidence-based evaluation, and inclusive practices that minimize social penalties and maximize the alignment between capability and outcome.
The task-dependence of stigma: usefulness as a buffer
A key insight from the Duke work is that the social penalties attached to AI use are not immutable constants. They can change depending on how useful AI is for the job at hand. The researchers found that the negative perceptions of AI-enabled workers were significantly reduced or even eliminated when AI was clearly beneficial to the assigned task. In such cases, evaluators adjusted their judgments, recognizing value in AI-assisted performance and becoming less inclined to attribute laziness or incompetence to the worker who used AI.
This finding highlights an important mechanism by which organizations can mitigate social costs: ensuring that AI use is not only present but purposeful and clearly tied to critical job requirements. When AI contributes to essential outcomes, stakeholders are more likely to interpret AI-assisted performance as a rational extension of an employee’s capabilities, rather than a sign of laziness or decreased diligence. The implications for job design, performance management, and project assignment are substantial. Leaders can design roles and processes that explicitly map AI capabilities to value-added tasks, thereby creating a narrative of AI as a strategic partner rather than a stigma-laden shortcut.
In addition, the results suggest that the integration of AI should be accompanied by education and evidence-sharing about the tangible benefits of AI outputs. When teams can see verifiable improvements—whether in speed, accuracy, or decision quality—the social narratives around AI use shift toward acknowledging competence and efficiency rather than suspicion. This implies that performance dashboards, case studies, and internal reviews that highlight AI-driven successes can help normalize AI usage and reduce the tendency to interpret AI-assisted work as a sign of reduced effort or capability.
Another dimension of task-dependence concerns the type of work being performed. Routine, highly structured tasks may lend themselves more readily to AI augmentation with clearer measurable benefits, while more exploratory or creative tasks might provoke greater scrutiny of AI involvement. Understanding these dynamics can help organizations tailor AI deployment to contexts where the value addition is most apparent, reducing the risk of stigmatizing workers for using AI in more ambiguous or open-ended tasks.
Ultimately, the evidence points to a pragmatic approach to AI adoption: emphasize task-utility alignment, demonstrate outcomes, and cultivate a culture that recognizes the strategic value of AI-assisted work. When these conditions are in place, the social penalties associated with AI use can be substantially reduced, laying a stronger foundation for durable AI integration across teams and functions.
Historical parallels: stigma around new tools and the long arc of adoption
The authors of the Duke study situate AI-related social stigma within a long historical arc of technological skepticism. They draw parallels to debates about writing in ancient times, where figures such as Plato worried that writing would undermine wisdom by diminishing human memory or reasoning. Similar concerns have echoed through centuries as new tools—calculators, spreadsheets, automation—rearranged the terrain of work and learning. Across epochs, labor-saving tools have triggered anxiety about de-skilling, dependency, and the erosion of skills that societies and organizations truly value.
This historical lens helps explain why social judgments about AI use persist even when productivity gains are clear. People need to interpret and rationalize new tools within a familiar framework of competence, effort, and independence. When a tool appears to replace or diminish an element of human labor, it can provoke a defensive response that manifests as stigma or skepticism. The Duke study’s emphasis on the universality of AI-related stigma—transcending demographic lines—fits into this broader pattern of social adaptation to transformative technologies.
The authors also point to contemporary conversations about AI, including observations on “secret AI use” in workplaces where formal policies restrict AI outputs. Some workers reportedly adopt covert practices to avoid stigma or policy penalties, a phenomenon that reflects the tension between organizational rules and the practical value of AI-enabled work. This behavior illustrates how social norms can lag behind technological capabilities and how individuals may navigate this gap through covert practices. Such dynamics underscore the need for clear organizational policies that balance risk management with recognition of AI-enabled productivity.
From a policy and organizational design perspective, the historical analogy reinforces the idea that social adaptation to AI is a process that unfolds over time. Early adoption, visible success stories, leadership endorsement, and a consistent narrative about the value of AI can gradually erode stigma. Conversely, if social narratives are left unaddressed, stigma can persist and hinder the potential benefits of AI across teams and departments. Understanding this historical context emphasizes the importance of proactive change management, transparent communication, and evidence-based demonstrations of AI impact in shaping a healthier, more accepting work culture.
Economic realities: time savings, new tasks, and net effects
Beyond social judgments, the research landscape surrounding AI adoption highlights a nuanced economic picture. A separate line of inquiry, discussed in related coverage, shows that while a large majority of workers report significant time savings from AI tools, with twofold to threefold efficiency gains in some cases, these benefits can be accompanied by the creation of new tasks and additional responsibilities. Economists from the University of Chicago and the University of Copenhagen found that while a substantial share of workers experienced productivity gains, AI often generated additional work for a subset of employees, including non-users tasked with verifying AI outputs or policing AI usage in tasks such as student assignments.
This insight speaks to the broader productivity paradox sometimes associated with transformative technologies: efficiency gains in one area can produce a ripple effect of new requirements, checks, and governance tasks elsewhere. In practical terms, AI adoption may shift the workload rather than simply compressing it, and this shift can influence where time savings actually land within an organization. The resulting distribution of tasks could generate new forms of workload and stress, potentially impacting burnout, job satisfaction, and long-term adoption trajectories.
The broader labor market implications of AI, as projected by the World Economic Forum’s Future of Jobs Report 2025, are complex. The report suggests that AI could catalyze the creation of around 170 million new roles globally while eliminating approximately 92 million jobs, resulting in a net gain of about 78 million roles by 2030. That net figure implies substantial opportunity for growth, but it also signals a major reallocation of skills and roles across the economy. In this context, the social penalties associated with AI use at the individual level are not merely a workplace concern; they intersect with broader questions about how workers transition to new roles, how training and upskilling are conducted, and how employers design reskilling programs to align with evolving job landscapes.
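For readers tracking the arithmetic behind the headline projection, the net figure cited above follows directly from the report’s gross creation and displacement estimates:

$$
\underbrace{170\ \text{million}}_{\text{roles created}} \;-\; \underbrace{92\ \text{million}}_{\text{roles displaced}} \;=\; \underbrace{78\ \text{million}}_{\text{net new roles by 2030}}
$$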
Taken together, the economic dimension of AI adoption underscores the need for comprehensive workforce strategies that address both productivity and workforce transitions. Organizations may need to invest in training to help workers understand AI outputs, in governance frameworks to manage quality and accountability, and in culture-building efforts to ensure that AI-enabled performance is recognized and rewarded. By coupling measurable outcomes with transparent communication and skill development, companies can maximize the positive economic impact of AI while mitigating social penalties that could undermine morale and adoption.
Practical implications: guidance for workers, managers, and organizations
The Duke study and the broader literature on AI in the workplace point to several practical strategies for reducing stigma and creating a more productive AI-enabled environment. Key recommendations that emerge from the research include:
- Promote transparency about AI use: Encourage teams to document when and how AI tools contribute to deliverables. Clear attribution of AI contributions can reduce ambiguity and help evaluators separate the tool from the worker’s initiative and skill.
- Align AI usage with clear job value: Prioritize AI applications where the tool’s contributions are demonstrably significant to outcomes. When AI decisions are essential to the task, social penalties tend to lessen, suggesting that task design and project selection should emphasize AI-supported workflows where value is obvious.
- Increase AI literacy across the organization: Provide training and hands-on experiences with AI tools for both managers and staff. Familiarity with AI outputs and processes can reduce skepticism and bias, helping evaluators interpret AI-assisted performance more accurately.
- Reframe performance metrics: Develop evaluation criteria that capture the quality and impact of AI-enabled work beyond the apparent effort saved by automation. Emphasize accuracy, speed, decision quality, and outcomes to reduce the likelihood that AI use is misread as laziness or incompetence.
- Foster an inclusive culture around tool use: Normalize AI-assisted collaboration by recognizing team outcomes that rely on AI, rather than praising those who do not use AI or who rely on manual processes. Leadership should model acceptance of AI-enabled productivity and articulate why AI is valued within the organization.
- Create policy and governance frameworks: Establish clear policies that balance risk management with the encouragement of innovation. Policies should address transparency, accountability, and the ethical use of AI while avoiding punitive measures for legitimate AI use.
- Monitor and mitigate bias in evaluation: Design calibration exercises for managers that help ensure consistent judgments across teams and discourage biased assessments based on the mere presence of AI tools in a workflow.
- Support workers navigating AI transitions: Offer reskilling opportunities and career development plans that reflect the realities of an AI-augmented workplace, ensuring that workers can adapt to evolving roles without fear of losing standing or opportunities.
By integrating these strategies, organizations can reduce the social penalties associated with AI use while enhancing the measurable benefits of AI-enabled productivity. The goal is to build a workplace in which AI acts as a facilitator of performance, not a trigger for evaluative bias, and where workers understand how AI contributes meaningfully to outcomes.
Future outlook: balancing innovation and culture in a changing labor landscape
As AI technologies continue to mature and proliferate across industries, the tension between productivity gains and social acceptance will persist. The Duke study highlights a crucial dynamic: the technical capability of AI is only part of the equation; the social interpretation of AI-enabled work will shape adoption trajectories, influence hiring and promotion decisions, and determine how quickly organizations can scale AI-driven improvements. The paradox is that while AI can accelerate decisions, automate routine tasks, and enhance analysis, it can also complicate how workers are perceived, potentially stalling progress if social costs are not addressed.
Looking ahead, several trends will likely shape how workplaces navigate AI adoption:
- Increasing AI literacy among managers and teams will become a core competency, reducing biases and enabling more accurate assessments of AI-enabled performance.
- Transparent demonstration of AI value in real projects will be essential to legitimizing AI use and preventing stigma from eroding trust and collaboration.
- Organizations will need to design roles and workflows that maximize the complementary strengths of humans and machines, minimizing cognitive load while promoting efficient, high-quality outcomes.
- Policy and governance frameworks will evolve to address concerns about privacy, security, and accountability in AI-enabled workflows, balancing risk with the imperative of innovation.
- The talent management ecosystem will increasingly emphasize continuous learning and adaptive career pathways that reflect a world where AI and human labor collaborate more closely.
In this evolving context, the social face of AI, meaning how people perceive and respond to its use, will matter as much as the technical performance of AI systems. Companies that proactively address stigma, invest in AI literacy, and align AI deployment with tangible business value are more likely to realize the full potential of AI across their organizations.
Conclusion
The Duke study presents a nuanced and timely portrait of AI in the workplace. It demonstrates that AI use, while capable of boosting productivity, can trigger a universal social penalty—negative judgments about laziness, competence, diligence, independence, and self-assurance—that can influence hiring, promotion, and everyday collaboration. Importantly, the penalties are not immutable. They can be buffered when AI usage is clearly aligned with task requirements and when evaluators bring substantial familiarity with AI to their judgments. The research underscores the need for thoughtful change management—cultivating transparency, AI literacy, and fair performance metrics—to ensure that the social costs of AI do not outweigh its practical benefits.
As organizations navigate the AI revolution, balancing technological progress with a culture that values AI-enabled performance will be key. By recognizing and addressing the social dimensions of AI adoption, leaders can unlock productivity gains while preserving trust, collaboration, and morale across the workforce. The path forward involves not only deploying more capable tools but also shaping norms, expectations, and evaluation practices that reflect the true value of human-AI collaboration in modern work environments.