
Anxiety around AI-enabled tools is increasing, and Americans are particularly worried about regulation and data privacy.
AI chatbots like ChatGPT and Microsoft Copilot have exploded in popularity in recent years, setting off a tech arms race to add AI capabilities to just about every app on your phone and computer. But as people become more accustomed to using AI, they are simultaneously becoming more skeptical about the increasing role it plays in society. Several recent studies show that consumers’ enthusiasm for AI tools doesn’t match the hype. Convene’s latest Meetings Market Survey, for example, finds planners have developed a more tepid response to AI, with only a modest bump in adoption of AI tools in their work: 65 percent of respondents this year said they use AI, compared to 62 percent in 2024.
The Vibes Are Off
Last month, a new Cybernews study “AI-related anxiety is rising in the U.S. – regulation and privacy top the list” concluded that the tide is turning against AI, with anxiety outpacing excitement: “The increase in AI anxiety means that people aren’t just blindly using AI and hoping for the best — they’re actually starting to understand the risks associated with AI tools and want to know more about it,” the study’s authors wrote in the analysis that accompanied their data. “We hope that most people who are searching for the terms included in the study don’t just worry about the negative effects of AI, but use that information to make more informed, responsible decisions about how they engage with these technologies.”
Cybernews, an independent media outlet that brings together journalists and security experts, collaborated with staff at nexos.ai, a Series A startup that bills itself as an all-in-one, enterprise-grade AI platform, to analyze search interest in AI-related keywords. The team crunched through U.S.-based Google Trends search data from Jan. 1 to Oct. 31, 2025, to evaluate AI-related queries across five categories: Control & Regulation, Data & Privacy, Bias & Ethics, Misinformation & Trust, and Job Displacement & Workforce Impact. (Examples include “Is AI legal” for Control & Regulation and “Is AI private” for Data & Privacy.) The research team stressed that because Google Trends data is automatically scaled, it “represent[s] average relative search interest rather than raw volume, offering a clear picture of which AI-related issues evoke the most public anxiety over time.” (The keywords and exact calculations that Cybernews/nexos.ai used can be found here.)
The analysis showed that anxiety increased across the board, with Americans particularly worried about AI regulation and data privacy. The researchers not only demonstrated growing unease but tied several spikes in searches throughout the year to specific events. “Control & Regulation increased by 256 percent,” they wrote, “and Data & Privacy by 325 percent, between the weeks of May 25 and June 22.” The researchers note that several events may have caused this surge: In June, 260 lawmakers called on Congress to remove a moratorium on state-level AI regulations, the Texas Responsible AI Governance Act was signed into law around that time, and California released a policy that highlighted the potential harms of AI, especially relating to privacy. Control & Regulation drew the highest interest until about August, the data revealed, at which point Data & Privacy began to lead the way. Around that time, major AI companies released reports about AI security threats.
‘Absolutely Dangerous in the Hands of Criminals or Other Dishonest People’
In another recent survey, Pew Research Center polled 5,000 American adults about AI, and their responses align with the anonymized Google Trends search data analyzed in the Cybernews study. “How Americans View AI and Its Impact on People and Society” explored both how aware people are of AI and how they feel about its structural impact on American life. About half of respondents said they were “more concerned than excited about the increased use of AI in daily life,” a marked increase from the 37 percent who felt this way in a 2021 Pew survey that asked the same question. An even greater proportion of respondents were concerned about AI’s overall impact on society.
“More than half of Americans (57 percent) rate the societal risks of AI as high, compared with 25 percent who say the benefits of AI are high,” according to the Pew report. “When asked to describe in their own words why they rated the risks as high, the most common concern mentioned was about AI weakening human skills and connections.”
The Pew Research Center included anonymized quotes from survey respondents alongside the data. Common themes that emerged were risks associated with widespread adoption of AI technology and a lack of appropriate regulation in the U.S.:
“Misinformation is already a huge problem and AI can create misinformation a lot faster than people can.” – Man, age 30-39
“Society will be too slow to regulate and control AI. The technology will advance rapidly and outpace our ability to anticipate outcomes (both positive and negative). It will therefore be extremely difficult to implement and deploy risk management strategies, plans, policies, and legislation to mitigate the upheaval that AI has the real potential to unleash on every member of our society.” – Man, age 60-69
“AI can very easily be used to fake people’s likeness and voice. This is absolutely dangerous in the hands of criminals or other dishonest people. Identities can be stolen; innocent people could be framed for doing or saying things they didn’t do/say.” – Man, age 40-49
The (Missing) Human Element
There was also marked concern about the use of AI tools dampening human creativity, with just over half of respondents (53 percent) agreeing that “AI will worsen people’s ability to think creatively.” Respondents also want to better control how they use AI, echoing the trend Cybernews identified in Google Trends data: “In their own lives, about six-in-ten say they’d like more control over how AI is used, compared with 17 percent who are comfortable with their amount of control and 21 percent who are unsure.”
Americans’ concerns about AI aren’t limited to casual use at home but spill over into how they use AI tools in their careers. Harvard Business Review recently shared the results of an August 2025 survey conducted by Columbia Business School’s AI in Business Initiative and Boston Consulting Group’s Henderson Institute. The study, “Employee Centricity in an AI World,” is based on the responses of around 1,400 employees and business leaders to questions about how they feel about AI. The authors were struck by the sizable gap in enthusiasm for AI between the C-suite and rank-and-file workers: “Seventy-six percent of executives reported that their employees feel enthusiastic about AI adoption in their organization. But the view from the bottom up is less sunny: Just 31 percent of individual contributors expressed enthusiasm about adopting AI. That means leaders are more than two times off the mark.”
The gulf between how business leaders and individual contributors see AI fitting into their daily grind seems to reflect the general U.S. population’s concerns about data privacy and a lack of regulation. The common denominator is a mismatch between how the tools are marketed and how people actually use them, and what they expect from them, in streamlining everyday tasks — a point that isn’t lost on the authors of the Columbia/Boston Consulting Group study: “In short: The more senior you are, the more you overestimate how informed and excited employees really are about AI.”
Kate Mulcrone is Convene’s digital managing editor.

