
The rapid rise in artificial intelligence use across the United States is not being matched by growing public trust, according to new consumer research that challenges how organisations measure confidence in emerging technologies.
Recent findings from SurveyMonkey show that while one in three Americans now uses AI tools daily or weekly, scepticism around how the technology operates, particularly in sensitive or high-impact decisions, is deepening. Researchers warn that equating frequent usage with approval risks overlooking a critical signal shaping consumer behaviour: uncertainty.
Brandspur Brand News reports that the data highlights widespread ambivalence rather than outright rejection or acceptance. Nearly all Americans expect AI to significantly shape the future, with many viewing it as one of the most important societal issues by 2030. However, a majority believe its influence will be both beneficial and harmful, reflecting growing caution rather than enthusiasm.
The research indicates that consumers are increasingly willing to use AI, but only on certain conditions. Many rely on it for speed and convenience yet continue to verify outputs, demand human oversight and expect transparency around how decisions are made. This conditional acceptance suggests that trust is fragile and can erode quickly when expectations are not met.
Privacy emerges as the most decisive fault line. The study shows that non-consensual use or sharing of personal data is the fastest way for companies to lose credibility with users. Other major trust breakers include the inability to reach a human representative, unclear disclosure when users are interacting with AI, and generic or scripted responses that reduce perceived authenticity.
Shifts in user behaviour also show that when confidence breaks, loyalty disappears. High-profile privacy controversies have led many users to abandon established AI tools such as ChatGPT in favour of alternatives like Claude, demonstrating how quickly trust can be reassigned.
Resistance becomes strongest in high-stakes situations. In recruitment, for example, overwhelming numbers of Americans insist on human involvement, with only a small minority willing to trust AI-led decisions. While many individuals are comfortable using AI to assist themselves, far fewer accept being evaluated or judged by automated systems.
According to Wendy Smith, the findings suggest that adoption metrics and satisfaction scores alone no longer capture how people truly feel about AI. As technology becomes more embedded in daily life, consumers are paying closer attention to accountability, fairness and control.
For researchers and businesses alike, the message is clear: AI may be spreading fast, but trust must be earned deliberately. Measuring hesitation, conditions and limits, rather than simple usage, is increasingly essential to understanding consumer confidence in an AI-driven economy.