Building trust in artificial intelligence

Study finds framing can boost employee confidence, but one big error can destroy it

Release Date: February 4, 2025


BUFFALO, N.Y. — Trust is crucial for the successful integration of artificial intelligence into organizations, and new research from the University at Buffalo School of Management reveals that how an AI system's competence is framed can significantly influence user perception.

But while framing AI as highly competent boosts employee trust and willingness to rely on AI-driven insights, the study found that once AI makes a major error, that trust declines sharply — regardless of its prior reputation.

“If AI makes a small mistake, users tend to be more forgiving — especially when it has been framed as competent,” says Sanjukta Das Smith, PhD, chair and associate professor of management science and systems in the UB School of Management. “But when AI makes a major mistake, trust plummets, and no amount of positive framing can recover it.”

To examine how framing and error levels affect AI trust and tolerance, the researchers surveyed more than 300 working adults, presenting them with workplace scenarios where they relied on AI for decision-making. Participants were randomly assigned to AI described as either highly competent or neutral and then informed that the AI made no errors, minor errors or major errors. The researchers then used regression models to analyze the results.

The study’s findings underscore the need for organizations to carefully manage how they present AI systems to their staff.

“Employees won’t automatically trust AI, even if it’s widely used — but emphasizing the reliability and accuracy of AI can encourage the adoption of the technology,” says Laura Amo, PhD, assistant professor of management science and systems in the UB School of Management. “Effective communication about AI capabilities should be coupled with genuine competence to ensure long-term trust and user error tolerance.”

The researchers say that firms should take a balanced approach when integrating AI into their operations.

“It’s important to not overhype the capabilities of what AI can do, because if employees feel misled about its capabilities, trust will be difficult to rebuild,” says Victoria Gonzalez, PhD candidate in the UB School of Management. “Training employees on how AI works and why occasional mistakes happen can help prevent trust from collapsing after errors.”

The study’s three co-authors, Smith, Amo and Gonzalez, recently presented their findings at the 2025 Hawaii International Conference on System Sciences.

The UB School of Management is recognized for its emphasis on real-world learning, community and impact, and the global perspective of its faculty, students and alumni. The school also has been ranked by Bloomberg Businessweek, Forbes and U.S. News & World Report for the quality of its programs and the return on investment it provides its graduates. For more information about the UB School of Management, visit management.buffalo.edu.

Media Contact Information

Kevin Manne
Associate Director of Communications
School of Management
716-645-5238
kjmanne@buffalo.edu