Only About a Third of Americans Have Used AI for Work, AP-NORC Poll Finds

A new AP-NORC poll reveals a nuanced picture of how Americans are engaging with artificial intelligence in daily life. While more than half of adults turn to AI for information, actual work-related use remains comparatively modest, and younger generations are driving most adoption across a range of tasks. The poll also highlights a cautious approach to AI, with skepticism about its limits and a notable interest in AI companionship among younger people. These dynamics occur alongside evolving conversations about the social and psychological implications of AI tools, including concerns about energy consumption, the potential erosion of writing skills, and the risk of overdependence on AI systems.

AI usage in the United States: broad patterns and key takeaways

The AP-NORC poll illuminates a landscape where AI has become a common touchpoint for information gathering, yet its integration into work life remains uneven and limited for many Americans. In a broad sample of adults, 60 percent reported having used AI to search for information, underscoring AI’s status as a contemporary information tool rather than a niche technology. By contrast, only 37 percent said they have used AI to perform work-related tasks. This juxtaposition suggests that many individuals view AI primarily as a search and information utility rather than a productivity assistant integrated into daily workflows. The data also imply a gradual, uneven onboarding curve for AI in the workplace, with more robust uptake lagging behind the public’s interest in AI for personal use.

A deeper dive into the demographic breakdown reveals a pronounced generational split in AI adoption. Among adults under 30, AI usage for information searches is especially high, with 74 percent reporting at least occasional use. This contrasts with the overall 60 percent figure, illustrating that younger users are more likely to experiment with AI as a routine information source. The same youthful cohort also leads in brainstorming activities, with 62 percent of those under 30 employing AI to generate ideas or plan projects. In stark contrast, only 20 percent of adults aged 60 and older engage in brainstorming with AI tools, signaling a substantial generation gap in the adoption of AI for creative or planning tasks. These numbers reflect a broader pattern in which younger individuals, who are typically more tech-savvy and immersed in digital ecosystems, are driving AI experimentation across multiple domains.

Beyond information searches and brainstorming, the poll identifies several other practical applications where AI is being used, albeit at varying levels of adoption. Roughly one-third of respondents report using AI for tasks such as writing emails, creating or editing images, or providing entertainment. This finding suggests that while AI has penetrated some everyday work and creative tasks, many people have not yet integrated AI into every aspect of professional or personal life. The figure for shopping, at 26 percent, indicates a more cautious or selective use of AI in consumer decisions, implying that while AI is valued as a tool for information and ideation, its direct influence on purchasing behaviors remains less pervasive among the general public.

A notable nuance in interpreting these findings is that the poll may undercount AI usage in certain contexts. Search remains AI’s most common application, and major search engines now place AI-generated summaries at the top of their results pages. In such scenarios, users may not recognize that they are interacting with AI-powered features, potentially leading to an underestimation of actual AI involvement in everyday searches. This underscores both how deeply AI is embedded in mainstream tools and how difficult it is to measure usage when AI operates behind the scenes in familiar interfaces.

The poll also points to a broader social dynamic: people are navigating AI with a mix of curiosity and caution. While a substantial portion of the population has experimented with AI across different tasks, there remains a skepticism about AI’s limitations and a concern about unintended consequences. This balanced stance—embracing AI for certain tasks while maintaining vigilance about its capabilities and risks—frames how AI technologies are likely to evolve within homes, classrooms, and workplaces in the coming years.

Applications, age profiles, and the spectrum of use

The demographic distribution of AI usage discussed in the poll reveals a spectrum of adoption that maps closely to age, occupation, and digital familiarity. Information searches dominate as AI’s most familiar and widely used application, but there is meaningful variation across age groups. The under-30 cohort demonstrates not only higher participation in information searches but also a greater propensity to employ AI as a generative or ideation partner. In practical terms, this means that younger users are more likely to turn to AI for brainstorming, drafting preliminary outlines, or exploring alternative approaches to problems and tasks. For older adults—particularly those above 60—the same uses are far less prevalent, though not absent, indicating an incremental adoption curve that may accelerate with increased exposure, improved user experiences, and more accessible interfaces.

When considering writing, editing, or image creation, the poll’s data reveal that roughly one in three Americans engages AI for these kinds of tasks. This suggests that for many people, AI serves as a supplementary co-creator or efficiency tool rather than a full-scale replacement for manual processes. The entertainment dimension also falls into this category: AI is being used to generate or curate content intended for leisure, but it does not yet appear to be a dominant or universal activity across the population. Shopping, while still a meaningful use case for some, shows a comparatively lower adoption rate, pointing to a conservative approach to AI-driven consumer decisions among a broad audience.

Within this landscape, AI companionship stands out as a distinct category with a different trajectory. Across the general adult population, AI companionship remains the least popular application, with only 16 percent reporting any engagement in this area. The under-30 group, however, exhibits a higher uptake, with 25 percent exploring AI companionship. This concentration among younger users aligns with broader trends in digital socialization, where individuals who grew up with online interactions may be more open to anthropomorphized or empathic AI experiences. Nevertheless, the overall footprint of AI companionship remains modest compared with other AI-enabled activities, underscoring ongoing concerns about the social and psychological implications of sustained AI-driven interactions.

The poll explicitly notes certain potential drawbacks associated with AI companionship that extend beyond simple popularity metrics. Some concerns revolve around excessive agreeability or “sycophancy”—situations in which AI systems consistently align with user preferences, potentially undermining critical thinking or realistic appraisals of information. Another category of risk highlighted is mental health concerns, including the possibility of encouraging delusional thinking or unrealistic self-perceptions. While these issues were not exhaustively measured by the poll, they are frequently discussed in broader debates about AI’s social impact, and they shape how policymakers, educators, and employers think about responsible AI deployment.

In addition to companionship considerations, several practical and ethical questions arise from the broader pattern of AI use. For example, as people increasingly rely on AI to draft emails, plan meals, or debug code, there are concerns about the long-term effects on individual skill development, attention to detail, and professional competencies. Some users report benefits such as time savings, increased productivity, and access to diverse perspectives, while others worry about dependence, reduced memory for core tasks, and potential overreliance on machine-generated outputs. The balance between leveraging AI as a supportive tool versus allowing it to erode foundational skills remains a central tension in conversations about AI’s role in daily life.

Real-life experiences: voices from users and the nuances of human-AI interaction

To ground these broad statistics in lived experience, the poll featured conversations with individuals who use AI tools in practical, everyday contexts. One interviewee, a 34-year-old audiologist working in Des Moines, described using AI to help plan weekly meals. This kind of use demonstrates how AI can fit into routine, everyday tasks that previously required more time and mental energy. AI’s role here is not about replacing human judgment or expertise but about streamlining planning, organizing, and decision-making in day-to-day life.

Another participant, a 28-year-old data scientist based in the Los Angeles area, relies on AI for debugging code. This example highlights AI’s potential utility in technical professions where complex problem-solving and iterative testing are common. The data scientist’s experience reflects a practical approach to AI: employ the tool to accelerate problem-solving while maintaining critical oversight and domain knowledge. The interview also sheds light on evolving attitudes toward AI within technical communities, where the initial enthusiasm may give way to more measured evaluations after experiencing both benefits and limitations firsthand.

The interviews also reveal evolving attitudes toward the energy and cognitive costs of interacting with AI. The data scientist notes concerns about the energy consumption associated with running AI queries, which can be substantial when scaled across frequent use. Additionally, there is concern about the potential atrophy of one’s own writing or coding skills, raising questions about long-term skill retention when relying on AI to draft text or generate code. These reflections illustrate a broader awareness among early adopters that AI can be a powerful ally, but not a substitute for ongoing practice, learning, and skill refinement.

The human-AI relationship described by these interviewees also touches on broader social and psychological dimensions. For example, one participant remarks that companionship with AI tools may be influenced by the social isolation many experienced during the COVID-19 pandemic. While not expressing a desire for AI companions as a replacement for human interaction, the respondent acknowledges that a degree of AI-assisted companionship can fulfill certain social needs in settings where human contact is limited. Conversely, another interviewee emphasizes a pragmatic stance, treating chatbots with politeness and courtesy in interactions, even observing a social etiquette around AI prompts.

This courtesy behavior, including expressions of politeness such as saying please and thank you, is framed against a broader philosophical backdrop. Some observers point to thought experiments like “Roko’s basilisk,” which posits future AI models that might reward or punish people based on past conduct toward AI development. While the interviewees do not claim to fundamentally believe in such outcomes, they acknowledge that culturally infused narratives about AI influence how people choose to interact with these technologies. The takeaway is that everyday users are cultivating social norms around AI that blend practical utility with ethical and psychological considerations, shaping how people communicate with machines and how they perceive the credibility and reliability of AI-generated outputs.

Taken together, these user narratives illustrate several core themes. First, AI appears to be most valuable when it acts as a time-saving collaborator for routine tasks, enabling individuals to reallocate their attention to higher-level thinking and more complex activities. Second, as AI tools become more embedded in professional workflows, the need for critical oversight and domain expertise remains essential to ensure outputs are accurate, relevant, and aligned with ethical standards. Third, personal attitudes toward AI—ranging from cautious acceptance to modest enthusiasm—are influenced by experiences of reliability, transparency, and the perceived costs of using AI, including energy use and the potential erosion of core skills. Finally, social norms around AI behavior—such as showing politeness in prompts or adopting ethical guidelines for AI interactions—reflect a broader cultural shift in how people relate to intelligent machines.

The workplace, productivity, and the evolving value proposition of AI

A central thread in the poll’s narrative concerns the perceived promise of AI as a productivity catalyst and the actual level of adoption in work settings. The data suggest that, despite years of enthusiasm from tech leaders and industry marketers about AI as a transformative driver of efficiency, the day-to-day work life of most Americans has not been deeply transformed by AI assistants. The proportion of respondents using AI for work tasks trails behind those using AI for information searches or personal tasks, indicating a gap between the aspirational messaging surrounding AI productivity and the practical uptake among a broad cross-section of workers.

Nevertheless, there is clear evidence that AI is making inroads into professional activities. About one-third of respondents report applying AI to tasks such as composing emails, generating or refining visuals, and providing entertainment content in a work-related or personal context. These uses reflect a spectrum of applications where AI can facilitate routine communications, creative outputs, and knowledge work. The relatively modest share of users in the workforce may be influenced by several factors, including concerns about accuracy, data privacy, potential bias, and the learning curve associated with integrating AI tools into established workflows. Organizations may also face resistance to changing established processes, a lack of formal training on AI tools, and concerns about job displacement, all of which can slow the rate at which AI becomes embedded in everyday professional routines.

Another dimension of the workplace question concerns how AI is perceived relative to traditional search engines and information resources. The poll notes that search remains AI’s most common application, even as AI solutions are increasingly offered as integrated features within search results. However, this observation raises methodological questions: if AI-generated responses appear prominently in search interfaces, users may not recognize when they are engaging with AI, leading to an undercount of actual AI usage in work and study settings. This complexity underscores the need for greater transparency around AI-assisted results and for clearer indicators that help users distinguish between human-created content, algorithmic recommendations, and machine-generated outputs.

The poll’s generational splits also have implications for the future of work and education. Younger workers—who already show higher engagement with AI for information searches and brainstorming—could drive a shift in how tasks are structured, how projects are managed, and how teams collaborate. This suggests that the next wave of workplace AI adoption may emerge more quickly in environments that attract younger talent or in sectors that emphasize rapid ideation, data analysis, and digital content creation. Employers may respond by offering targeted training programs, integrating AI-assisted tooling into standard operating procedures, and revisiting policies related to data governance, privacy, and ethical use of AI in the workplace. As these elements evolve, the relationship between human expertise and machine-assisted productivity will become more nuanced, with AI serving as a facilitator of complex problem-solving rather than a wholesale replacement for human judgment.

Social norms, safety considerations, and the ethics of everyday AI use

An important facet of the poll’s findings is how people approach AI with both enthusiasm and caution, balancing the desire for convenience with concerns about reliability, energy costs, and long-term skill impacts. The participants’ anecdotes illustrate a broader pattern: AI is often used thoughtfully and selectively, rather than as a universal substitute for all cognitive tasks. This careful approach aligns with responsible AI principles that emphasize transparency, accountability, and the recognition that AI outputs require human verification in many contexts.

Energy consumption, a recurring theme in discussions of AI, is reflected in the experiences shared by users who monitor the resource demands of AI queries. This concern intersects with broader discussions about data centers, electricity use, and the environmental footprint of large-scale AI systems. While the poll does not quantify energy usage directly, the emphasis on this issue among respondents indicates a growing awareness of the environmental implications of AI technology. It also highlights the need for energy-efficient AI architectures, sustainable data-center practices, and ongoing research into optimization techniques that reduce computational overhead without sacrificing accuracy or usability.

The skill retention perspective is another noteworthy dimension of the discourse. Some users worry that heavy reliance on AI for fundamental tasks—such as drafting emails or writing code—could erode essential competencies over time. This concern is not merely about personal proficiency; it also touches on broader educational and professional development paradigms. If AI systems consistently fill gaps in routine tasks, there is a legitimate question about whether individuals, particularly students and early-career professionals, may miss opportunities to practice and internalize core skills. This potential risk underscores the importance of deliberate design choices in AI tools, including features that encourage users to engage with underlying concepts, review AI outputs critically, and maintain hands-on practice in essential domains.

The social etiquette observed by some users—treating AI with politeness, saying please and thank you, or requesting modifications politely—offers a window into how human-AI interactions are shaped by cultural norms and social expectations. While these behaviors may seem trivial, they reflect a broader trend in which humans apply established communication rituals to their interactions with non-human agents. This phenomenon raises interesting questions about how such etiquette might influence user satisfaction, perceived AI responsiveness, and the overall quality of human-computer collaboration. It may also serve as a subtle reminder that AI is an artifact created by people, embedded in social ecosystems, and subject to human biases and expectations.

The topic of companionship with AI, though not dominant in usage, continues to attract attention for its potential implications on social dynamics and mental well-being. While some individuals view AI companions as convenient tools or outlets for conversation, others worry about the risk of reinforcing social isolation or substituting meaningful human connections. The nuanced findings—16 percent overall uptake with higher rates among younger users—suggest that AI companionship could become a more prominent area of exploration in the future, particularly if AI agents become more capable of simulating nuance, empathy, and companionship. Policymakers, educators, and researchers may monitor this trend to understand its impact on social behavior, mental health, and interpersonal relationships.

Methodology, limitations, and the interpretation of results

The AP-NORC poll surveyed a cross-section of 1,437 adults, capturing a snapshot of AI engagement across the United States. The study design aims to reflect broad demographic patterns and to reveal variations in AI usage by age, information needs, and daily routines. While the sample size is substantial for public opinion polling, it remains important to acknowledge limitations inherent in such surveys. Self-reported data can be influenced by recall bias, social desirability, and respondents’ interpretations of what constitutes “using AI.” Additionally, the rapid evolution of AI tools means that the findings represent a particular moment in time, with potential shifts as technologies become more integrated into consumer devices, software platforms, and enterprise systems.
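
For a sense of the statistical precision such a sample implies, the sketch below estimates the worst-case sampling margin of error under a simple random sampling assumption. This is an illustrative back-of-envelope calculation only: AP-NORC uses a probability-based panel with weighting, so the poll’s published margin of error would be somewhat wider than this naive figure.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case sampling margin of error for a proportion, assuming
    a simple random sample of size n and a 95% confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: n = 1,437 adults, as reported for the AP-NORC poll.
moe = margin_of_error(1437)
print(f"approx. ±{moe * 100:.1f} percentage points")  # about ±2.6 points
```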

A notable methodological nuance relates to how AI is embedded in search and information services. Because AI-generated results can appear directly within search interfaces or be integrated into other features, users may experience AI without consciously recognizing that an AI component is involved. This phenomenon can complicate attempts to quantify AI usage accurately and may contribute to undercounting certain interactions. The poll’s authors acknowledge this possibility and emphasize the importance of considering how AI is embedded in everyday digital experiences when interpreting usage statistics.

The qualitative insights gathered from interviews with individual users add depth to the quantitative findings. These narratives illuminate how people think about AI, the trade-offs they consider, and the social and ethical questions that accompany AI adoption. They also reveal that people’s attitudes toward AI are shaped by personal experiences—such as the perceived energy demands of AI queries or concerns about skill atrophy—and by broader cultural conversations about the role of machines in daily life. Taken together, the quantitative results and qualitative perspectives offer a more holistic view of how AI is currently integrated into American life and how it might evolve in the near future.

As AI technologies continue to mature and become more ubiquitous, the AP-NORC poll’s findings provide a baseline for tracking change over time. Policymakers, educators, and industry leaders can use these insights to guide initiatives that promote responsible AI use, digital literacy, and equitable access to AI-enabled tools. They can also inform discussions about workplace innovation, education reform, and the ethical design of AI systems, underscoring the need for ongoing assessment of benefits, risks, and unintended consequences as AI becomes an increasingly ordinary part of daily life.

Implications for policy, industry, and society

The evolving patterns of AI adoption highlight the importance of strategic policy and practical industry considerations. For policymakers, the data underscore the need to foster digital literacy programs that equip people with critical thinking skills, a solid understanding of AI capabilities and limitations, and strategies to verify AI outputs across contexts. Education systems may need to adapt curricula to emphasize data literacy, information vetting, and responsible AI usage, particularly for younger learners who show higher engagement with AI in information searches and ideation. Equally important is the development of ethical frameworks for AI in professional settings, incorporating guidelines for privacy, bias mitigation, transparency, and accountability.

For industry players, the findings point to opportunities to tailor AI products and services to meet users where they are. The relatively higher use of AI for information searches and brainstorming among younger users suggests a demand for user-friendly interfaces, clear explanations of AI-generated results, and adaptable tools that can be integrated into existing workflows without overwhelming users. Product designers might focus on features that support collaboration, allow easy human oversight, and facilitate gradual adoption in work contexts where AI can meaningfully augment productivity without increasing cognitive load.

In social terms, the poll’s results invite reflection on the psychological and cultural dimensions of AI use. As AI becomes more pervasive, it will be essential to maintain a balance between leveraging AI to enhance daily life and protecting against overreliance that could erode core competencies, critical judgment, and genuine human connection. The observed etiquette around AI interactions and the discussion of companionship point to broader questions about how modern societies integrate intelligent machines into social norms, how communities cultivate responsible digital behavior, and how institutions can support balanced engagement with technology.

The conversations about energy efficiency and environmental impact also merit ongoing attention. The awareness of energy costs associated with AI queries signals a demand for more sustainable AI infrastructures and computationally efficient models. Businesses and researchers should continue exploring optimization strategies, renewable energy integrations, and hardware innovations that reduce the environmental footprint of AI deployment without compromising performance or accessibility.

Ultimately, the AP-NORC poll paints a portrait of AI as a powerful, pervasive, yet carefully managed part of everyday life. It shows a country where a majority of adults use AI to search for information, where younger cohorts are testing AI in creative and brainstorming roles, and where many remain cautious about broader applications in work and more intimate spheres. This landscape suggests that AI will continue to expand its reach in stages, with careful attention to user experience, education, privacy, and ethical considerations shaping how these technologies help, rather than hinder, personal and professional development.

Conclusion

The AP-NORC poll provides a comprehensive snapshot of how Americans are currently engaging with AI across search, work, brainstorming, and personal use, including companionship. The data reveal both broad curiosity and selective adoption, with significant generational differences that point to a future in which AI becomes more deeply woven into daily life, especially for younger users. While AI promises productivity gains and novel capabilities, many people approach these tools with measured optimism, mindful of energy costs, potential skill erosion, and the importance of maintaining critical thinking and human-centered approaches in a world increasingly influenced by intelligent machines. The lived experiences shared by everyday users illustrate both practical benefits and thoughtful concerns, highlighting the ongoing need for transparent design, responsible use, and equitable access as AI technologies mature and proliferate across society.