The Trust Paradox: Why Your Employees Install AI Tools But Never Use Them

49% of employees never use AI at work. The problem isn't a training deficit. It's a psychological pattern that leaders must recognize.


An employee installs an AI-powered coding tool to help with a statistical analysis. She has access. The software is running. And then: nothing happens.

She doesn't open the tool. Or she opens it, types a question, gets an answer, and closes the window again. In the end, she does the statistics manually. Three hours instead of thirty minutes.

Why?

Because she doesn't trust the result. Because she doesn't know if the answer is correct. Because she hasn't learned to use AI as a sparring partner that pre-evaluates, tests, and improves results. And because she has an uncomfortable feeling she doesn't want to name.

This employee is not an isolated case. Gallup data from the fourth quarter of 2025 show that 49% of U.S. employees never use AI at work. Among leaders, regular usage stands at 69%; among individual contributors, at 40% (1). BCG surveyed thousands of employees worldwide in the same year and found that only 51% of frontline employees use AI regularly, even though more than 75% of executives and managers report using AI multiple times per week (2).

The tools are there. The licenses are paid for. The training has been conducted. Usage remains absent.

This is the trust paradox. And it's not a technology problem.

Why People Punish Algorithms

In 2015, Berkeley Dietvorst, Joseph Simmons, and Cade Massey of the Wharton School published a study that gave a name to a phenomenon that has occupied leaders ever since: Algorithm Aversion. In five experiments, they showed that people relied on algorithmic forecasts less after seeing them err, even when the algorithm outperformed the human forecaster. The reason: people lose trust in an algorithm faster than in a human when both make the same mistake (3).

This is not a rational consideration. This is an emotional pattern.

The algorithm cannot make a single mistake. The human is allowed to err. Dietvorst and colleagues found in a follow-up study in 2018 that aversion can be reduced if people are given the opportunity to slightly modify algorithmic results. Even minimal adjustment options were sufficient to increase willingness to use and satisfaction with the process (4).

What does this say about your employee who installed the AI tool but doesn't use it?

She saw the tool in action. She got an answer she was uncertain about. A single questionable output was enough to destroy her trust. And she saw no way to modify, test, or improve the result. She faced a binary choice: accept the AI output or do it herself.

She chose to do it herself. Not because she's anti-technology, but because her brain forgives an algorithm fewer mistakes than it forgives her.

The Deeper Problem: Identity Under Threat

Algorithm Aversion explains why people distrust AI results. It doesn't explain why the employee doesn't open the tool in the first place.

For that, we need a different concept. Mirbabaie and colleagues published a study on "AI Identity Threat" in the journal Electronic Markets in 2022. Their core finding: the introduction of AI threatens employees' professional identity. Three factors drive this threat: changes to the work itself, the perceived loss of status and competence, and how employees perceive themselves in relation to AI (5).

This is the point that most AI training completely ignores.

Your employee has spent years creating statistical analyses. That's her competence. That's what she's respected for. When an AI produces the same analysis in thirty minutes, a question arises that she doesn't voice: "Then what am I worth?"

Selenko and colleagues described in 2022 in Current Directions in Psychological Science how AI changes employees' functional identity. Work fulfills identity functions: it provides self-worth, belonging, and a sense of competence. When AI takes over the tasks through which a person defines themselves, a vacuum emerges. The researchers found that employees cope better with the change when they have a say in its implementation and when there are protected spaces where they can build new competencies without being evaluated (6).

"Protected spaces" sounds soft. The consequence is hard. Without these spaces, employees quietly refuse to use the tools. They install them. They show up at training sessions. And then they keep working as before.

Self-Efficacy: The Forgotten Factor

Albert Bandura formulated the concept of self-efficacy in 1977: Whether a person performs an action depends less on their actual abilities than on their belief that they can successfully perform the action (7).

It's not enough that your employee is technically capable of operating the tool. She must believe that she can use AI successfully.

A 2024 study in Humanities and Social Sciences Communications examined 416 working professionals in South Korea across three survey periods. The result: AI adoption increases work stress, and self-efficacy in AI learning weakens this relationship. Employees with high self-efficacy experience less stress during the introduction of AI tools than employees with low self-efficacy (8).

This means: If you give your employees AI tools without strengthening their self-efficacy, you increase their stress level. You're not solving a problem. You're creating one.

Bandura identified four mechanisms that build self-efficacy (7): personal mastery experiences, observing others succeed, verbal encouragement, and managing one's emotional states. None of these four mechanisms is activated by a webinar on Friday morning.

What Leaders Get Wrong

Most companies address AI adoption with three standard measures: tool training, usage policies, and KPIs for AI usage.

All three miss the problem.

Tool training teaches operation. It doesn't build self-efficacy. It shows which buttons to press, but it doesn't address the question "Do I trust myself to evaluate the result?" And it doesn't create success experiences, because after the training participants sit alone in front of the tool and give up at the first questionable output.

Usage policies regulate behavior. They don't change internal states. If someone doesn't trust AI results, a policy won't change that feeling. Research on Algorithm Aversion shows exactly that: transparency alone is not enough. Leichtmann and colleagues found in a 2023 experiment with 410 participants that visual explanations improve trust calibration, but conveying knowledge about how AI works did not, by itself, change usage. Control over the output works better than explanations of the algorithm (9).

KPIs for AI usage measure the wrong variable. They measure whether someone opens the tool, not whether someone uses it productively. Gallup reports that only about 10% of U.S. employees use AI daily, although nearly half use it at least occasionally (1). In other words, many people open AI tools without using them meaningfully.

The Writer study from 2025, conducted in partnership with Workplace Intelligence, revealed an additional dimension: 31% of employees, including 41% of Gen Z, actively sabotage their company's AI strategy. They refuse to use AI tools or undermine their introduction (10). This is not passive refusal. This is active resistance.

What Works Instead

If Algorithm Aversion is reduced through control and self-efficacy is built through success experiences, four concrete measures emerge.

Give employees control over AI output. Train them to evaluate results, not just operate the tool. The central question is not "How do I use the tool?" but "How do I recognize whether the result is usable?" Dietvorst's research shows that even the possibility of making minimal adjustments reduces aversion (4).

Create protected spaces for experiments. Selenko and colleagues emphasize the importance of "liminal spaces" for identity adaptation (6). One hour per week in which employees experiment with AI with no obligation to produce results. The only question: "What did you try?" This lowers perceived stress and builds self-efficacy through experience.

Normalize uncertainty. The BCG study showed that the proportion of employees who rate AI positively rises from 15% to 55% when they experience strong leadership support (2). This means: Leaders who speak openly about their own uncertainty in dealing with AI. Who show that they themselves question AI output.

Address the identity question directly. Don't say: "AI makes your job easier." Say: "Your expertise becomes more important. AI delivers raw material. You deliver judgment." This addresses the status loss that Mirbabaie and colleagues identified as a central driver of AI Identity Threat (5).

The Uncomfortable Truth

Not every employee who doesn't use AI has a psychological problem. Some tools are bad. Some tasks aren't suitable for AI. Some training is so miserable that the logical response is refusal.

But if you see that your company pays for licenses, offers training, provides tools, and usage rates still stagnate below 50%, then don't look at the technology. Look at the people.

Your employee installed the AI tool. She overcame the technical hurdle. What holds her back is not a missing tutorial. It's the fear that a tool can accomplish in thirty minutes what took her ten years of expertise to build. It's the missing belief that she is capable of evaluating AI output. It's the feeling of losing control over her own work.

You don't solve these three things with a policy. You solve them with leadership.

The question to ask yourself today: Do you know why your employees don't use the tools? Or do you assume it's about training?

If you assume, the answer lies in training. If you ask, the answer lies in the person.

And that's where it belongs.

Sources:

  1. Gallup (2025). Frequent Use of AI in the Workplace Continued to Rise in Q4. Gallup Workplace Report, Q4 2025. https://www.gallup.com/workplace/701195/frequent-workplace-continued-rise.aspx

  2. Boston Consulting Group (2025). AI at Work 2025: Momentum Builds, but Gaps Remain. BCG Global AI at Work Survey. https://www.bcg.com/publications/2025/ai-at-work-momentum-builds-but-gaps-remain

  3. Dietvorst, B. J., Simmons, J. P. & Massey, C. (2015). Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err. Journal of Experimental Psychology: General, 144(1), 114-126. https://pubmed.ncbi.nlm.nih.gov/25401381/

  4. Dietvorst, B. J., Simmons, J. P. & Massey, C. (2018). Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them. Management Science, 64(3), 1155-1170. https://marketing.wharton.upenn.edu/wp-content/uploads/2020/07/Dietvorst-Overcoming-Algorithm-Aversion.pdf

  5. Mirbabaie, M., Brünker, F., Möllmann, N. R. J., Frick, N. R. J. & Stieglitz, S. (2022). The Rise of Artificial Intelligence – Understanding the AI Identity Threat at the Workplace. Electronic Markets, 32(1), 73-99. https://link.springer.com/article/10.1007/s12525-021-00496-x

  6. Selenko, E., Bankins, S., Shoss, M., Warburton, J. & Restubog, S. L. D. (2022). Artificial Intelligence and the Future of Work: A Functional-Identity Perspective. Current Directions in Psychological Science, 31(3), 272-279. https://journals.sagepub.com/doi/full/10.1177/09637214221091823

  7. Bandura, A. (1977). Self-Efficacy: Toward a Unifying Theory of Behavioral Change. Psychological Review, 84(2), 191-215. https://doi.org/10.1037/0033-295X.84.2.191

  8. Baig, S. A., Iqbal, S., Abrar, M., Baig, I. A., Amjad, F. & Zia-ur-Rehman, M. (2024). The Mental Health Implications of Artificial Intelligence Adoption: The Crucial Role of Self-Efficacy. Humanities and Social Sciences Communications, 11, Article 1522. https://www.nature.com/articles/s41599-024-04018-w

  9. Leichtmann, B., Humer, C., Hinterreiter, A., Streit, M. & Mara, M. (2023). Effects of Explainable Artificial Intelligence on Trust and Human Behavior in a High-Risk Decision Task. Computers in Human Behavior, 139, 107539. https://www.sciencedirect.com/science/article/pii/S0747563222003594

  10. Writer & Workplace Intelligence (2025). 2025 AI Survey: Generative AI Adoption in the Enterprise. https://workplaceintelligence.com/ai-adoption-study/
