AI Readiness: Why Your AI Strategy Fails Because of You, Not the Technology

74% of companies fail to scale value from AI. The cause isn't technology. It's the psychology of decision-makers.

74% of companies fail to generate measurable value from their AI projects. That's what a BCG study of 1,000 C-level executives across 59 countries found (1). 95% of generative AI pilots deliver no measurable revenue impact, according to an MIT report based on 150 interviews, 350 employee surveys, and 300 public deployments (2).

Two numbers. One question: If the technology works and the budgets exist, what's going wrong?

The standard answer: data quality, missing infrastructure, inadequate training. The BCG study offers a different answer. 70% of problems in AI implementations are people- and process-related. 20% involve technology. 10% involve algorithms (1). Companies that fail at AI fail because of people. And the person they most often fail because of sits at the very top.

The Psychology of Doing Nothing

Christopher Anderson published a review in Psychological Bulletin in 2003 that identified four forms of decision avoidance: choice deferral, status quo bias, omission bias, and inaction inertia. His central finding: These behaviors aren't rational strategies. They arise from a combination of cost-benefit calculations, anticipated regret, and choice uncertainty (3).

Translated to the corporate context: The CEO who has been thinking about AI strategy for eight months without making a binding decision isn't suffering from a lack of information. He's suffering from anticipated regret. The fear of making the wrong decision outweighs the immediate costs of making no decision at all.

This pattern works like an algorithm: The brain prioritizes immediate emotional relief over long-term consequences. The mechanism is identical in eating behavior, procrastination, and strategic decision avoidance: The non-decision feels better right now. The costs come later. And "later" is a timeframe the emotional system doesn't evaluate.

You know this from everyday life: the postponed dentist appointment, the deferred employee conversation, the tax return that's been waiting for weeks. In every case, you rationally know what needs to be done. In every case, immediate relief wins over future consequences.

With an AI strategy, the same process runs at higher stakes: "We're still analyzing" feels better today than the uncertainty of a binding directional decision. The fact that the competitive disadvantage grows with every passing month doesn't register with the emotional system. Because the emotional system decides faster than the rational one.

Anderson described four specific mechanisms that produce this (3):

Choice Deferral. "We need a comprehensive analysis first." Another white paper. Another benchmark. Another pilot project. The decision isn't made — it's postponed. And every postponement feels like progress.

Status Quo Bias. "Our processes work." The current situation is overvalued, the potential change undervalued. Not because the status quo is better. But because it's familiar. And the familiar feels safe, even when the numbers say otherwise.

Omission Bias. "Better to do nothing than the wrong thing." CEOs systematically judge errors of commission more harshly than errors of omission. If you implement an AI strategy and it fails, you bear the responsibility. If you do nothing and the market turns, it was the market.

Inaction Inertia. "The window has closed." A CEO decided twelve months ago to start the AI transformation. Then operational issues intervened. Now getting started feels harder than it did back then, even though conditions are better. The missed opportunity becomes the argument against the current one.

Delegation as Avoidance Strategy

This is where it gets personal for executives.

The most common response to the AI question at the C-level is: "That's IT's job." Or: "That's what we have a CDO for." Or: "We've set up an innovation lab."

At first glance, this sounds like professional delegation. On closer inspection, it's one of Anderson's four forms of decision avoidance: delegation as choice deferral. The strategic directional decision isn't made. It's displaced. And it's displaced to a level that isn't authorized to answer the critical question.

A misconception needs to be cleared up immediately: The IT department isn't the problem. IT, compliance, and security are the functions that make AI adoption technically safe, legally sound, and operationally stable. Without them, every AI initiative becomes a compliance catastrophe and a security risk. What's actually happening in many companies is the exact opposite of delegating to IT: Business units and individuals are building AI solutions around IT. They're using uncontrolled tools, uploading confidential data to external systems, creating shadow IT. That's not liberation. That's a security and compliance nightmare.

The problem is something else entirely. The question that must be answered before any technical implementation is: What does AI change about our business model, our culture, our value proposition?

No IT director answers this question. No external consultant answers this question. The C-suite answers this question. Only then do you hand off to IT, compliance, and security to safeguard the implementation. The sequence matters: direction first, safeguards second. And it's precisely the directional decision that gets avoided.

In my article on Intent Engineering I described how Klarna's AI strategy failed because nobody clarified the core question: What does this company stand for when cost and quality conflict? Klarna delegated the directional decision to technology. Technology optimized for measurable variables. The result: worse customer service, reversal of the AI-first strategy. The problem wasn't the technical implementation. The problem was that nobody had defined what the technology should optimize for.

Directional decisions without IT create security risks. IT without directional decisions creates systems that optimize for the wrong things. Most companies end up in one of these two traps.

What the CEO Avoids

Steven Hayes and colleagues described Experiential Avoidance in 1996 as the attempt to control or eliminate unpleasant inner experiences, even when this control causes long-term damage (4). A 2006 follow-up review showed: higher avoidance scores correlated with lower psychological flexibility and worse outcomes across numerous contexts (5).

Wang, Tian, and Yang confirmed the mechanism in detail in a 2024 review: Experiential Avoidance occurs when individuals actively avoid contact with certain inner experiences, including feelings, thoughts, and memories. The short-term effect is relief. The long-term effect is an increasing restriction of behavioral repertoire (6).

In the AI context, CEOs avoid specific inner experiences:

Loss of control. AI changes power structures. When employees use AI agents to accomplish tasks that previously required three levels of hierarchy, decision-making power shifts. That feels threatening to executives whose identity is built on control.

Irrelevance. A question no CEO asks out loud: "Does my company still need me in the same way if AI makes some of my decisions better than I do?" The answer is: Your company needs you more than ever. For judgment, values clarification, and directional decisions. For tasks AI won't perform. But the feeling of threat arises regardless.

Incompetence. The BCG study revealed a massive perception gap: 76% of executives believed their employees were enthusiastic about AI. Among employees themselves, enthusiasm stood at 31% (7). This means: Executives don't understand what's happening in their teams. And admitting that you don't understand something fundamental is an uncomfortable feeling for a successful CEO.

These three feelings — loss of control, irrelevance, and incompetence — are uncomfortable. And because they're uncomfortable, they get avoided. The avoidance looks professional: delegation to IT, pilot projects without scaling decisions, strategy papers without implementation commitments.

Why "More Data" Is the Wrong Answer

Lovich and Meier showed the scale of the disconnect in the Harvard Business Review in 2025: Executives overestimated their employees' enthusiasm by a factor of 2.5 (7). Croft, Vaid, Cheng, and Whillans confirmed in 2026, also in the Harvard Business Review, that senior leaders struggle with continuous disruption, contested value definitions, and emotionally divided reactions to change (8).

The response to this uncertainty is predictable: more data. More reports. More analyses. Anderson described exactly this pattern: The search for additional information is one of the most common forms of choice deferral (3). It feels like diligence. It's avoidance.

The difference between diligence and avoidance is easy to spot: Diligence leads to a decision. Avoidance leads to another analysis.

If your company has been working on an "AI strategy" for six months and still has no binding commitment, it's not because of missing data. It's because of a feeling that's being avoided.

The BCG Formula Nobody Reads

The BCG study contains a finding that gets buried in most summaries: Companies that lead in AI invest 10% of their resources in algorithms, 20% in technology and data, and 70% in people and processes (1).

70% in people and processes. Not 70% in technology. Not 70% in data. 70% in the question of how people work, how they change, how they handle uncertainty.
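To make the weighting tangible, here is a minimal sketch of what the 10/20/70 rule implies for a hypothetical AI budget. The 2,000,000 total and the category labels are illustrative assumptions; only the percentages come from the BCG study (1).

```python
# Minimal sketch: applying BCG's 10/20/70 rule to a hypothetical AI budget.
# Only the weights come from the study (1); the total and labels are illustrative.
BCG_WEIGHTS = {
    "algorithms": 0.10,
    "technology_and_data": 0.20,
    "people_and_processes": 0.70,
}

def allocate(total_budget: float) -> dict[str, float]:
    """Split a total AI budget according to the 10/20/70 rule."""
    return {area: total_budget * weight for area, weight in BCG_WEIGHTS.items()}

for area, amount in allocate(2_000_000).items():
    print(f"{area}: {amount:,.0f}")
# algorithms: 200,000
# technology_and_data: 400,000
# people_and_processes: 1,400,000
```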

And that's precisely where the problem lies. Because the question "How do my employees handle uncertainty?" first requires a different question: "How do I handle uncertainty myself?"

Very few CEOs ask themselves that question. And that's why 74% fail at scaling.

The Pattern Behind the Pattern

In my work with executives, I see the same pattern in AI transformations as in every other strategic decision: The CEO knows what needs to be done. He doesn't do it. And he looks for an explanation that has nothing to do with himself.

I've described how Work Slop emerges as a stress signal, not a technology problem. How the Trust Paradox leads employees to install AI tools but never use them because their identity is threatened. How missing values clarification causes AI systems to optimize for the measurable rather than the right thing.

All of these phenomena share the same origin: Experiential Avoidance. The avoidance of uncomfortable feelings produces decisions that provide short-term relief and long-term harm.

With Work Slop, the employee avoids the feeling of being overwhelmed. With the Trust Paradox, the employee avoids the feeling of irrelevance. With missing values clarification, the C-suite avoids the conflict over its own identity.

And with AI readiness? The CEO avoids the question that changes everything: What am I willing to risk to do the right thing?

What Changes When You Set the Direction

AI readiness isn't an IT question. AI readiness is an organization's ability to change. And an organization's ability to change begins with the CEO's ability to make a directional decision and then implement it together with IT, compliance, and security.

"Stop delegating" doesn't mean bypassing IT. It means answering the strategic question yourself before commissioning the technical implementation. It means maintaining the sequence: direction, then safeguards, then implementation.

Concretely, this means:

Clarify your own position on AI before commissioning a strategy paper. What do you feel when you consider the possibility that AI does part of your job better than you do? If the answer is discomfort: good. That discomfort contains the information your strategy paper won't deliver.

Put values clarification before technology assessment. Not: "Which AI tools should we deploy?" But: "What do we stand for when efficiency and quality conflict?" If you don't answer this question, your AI optimizes for the measurable. And the measurable is rarely the right thing.

Bring IT, compliance, and security to the table early. Not as a brake, but as a prerequisite. The BCG study showed: Companies that successfully scale AI focus on governance, data quality, and change management (1). All of this requires close collaboration between executive leadership and technical infrastructure. Building AI around IT creates uncontrolled risks. Fully delegating AI to IT creates systems without strategic direction. Both fail.

Invest 70% of your AI resources in people. Not in tool training. In your executives' ability to handle uncertainty. In your employees' ability to evaluate AI output rather than blindly accepting it. In your C-suite's ability to set the direction rather than delegating the directional question.

Recognize the difference between diligence and avoidance. If your third strategy paper in twelve months hasn't produced a decision, the fourth paper isn't the solution. The solution is the question: What feeling are you avoiding?

Anderson was right. The psychology of doing nothing explains why individuals defer decisions, prefer the status quo, choose omission, and persist in inaction inertia (3). He described these patterns at the individual level. In organizations, they become culture.

The good news: Culture starts at the top. When the CEO stops delegating the directional question and starts treating it as a personal decision — and then brings in IT and compliance to safeguard it — the entire organization changes.

The uncomfortable news: That requires sitting with a feeling you'd rather avoid.

Sources

  1. Boston Consulting Group (2024). Where's the Value in AI? AI Adoption Survey with 1,000 CxOs across 59 countries and 20 sectors. https://www.bcg.com/press/24october2024-ai-adoption-in-2024-74-of-companies-struggle-to-achieve-and-scale-value

  2. MIT NANDA Initiative (2025). The GenAI Divide: State of AI in Business 2025. Based on 150 interviews, 350 employee surveys, and 300 public AI deployments. Cited in Fortune, August 2025. https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/

  3. Anderson, C. J. (2003). The Psychology of Doing Nothing: Forms of Decision Avoidance Result from Reason and Emotion. Psychological Bulletin, 129(1), 139-167. https://pubmed.ncbi.nlm.nih.gov/12555797/

  4. Hayes, S. C., Wilson, K. G., Gifford, E. V., Follette, V. M., & Strosahl, K. (1996). Experiential avoidance and behavioral disorders: A functional dimensional approach to diagnosis and treatment. Journal of Consulting and Clinical Psychology, 64(6), 1152-1168. https://pubmed.ncbi.nlm.nih.gov/8991302/

  5. Hayes, S. C., Luoma, J. B., Bond, F. W., Masuda, A., & Lillis, J. (2006). Acceptance and Commitment Therapy: Model, processes and outcomes. Behaviour Research and Therapy, 44(1), 1-25. https://www.sciencedirect.com/science/article/abs/pii/S0005796705002147

  6. Wang, Y., Tian, J., & Yang, Q. (2024). Experiential Avoidance Process Model: A Review of the Mechanism for the Generation and Maintenance of Avoidance Behavior. Psychiatry and Clinical Psychopharmacology, 34(2), 179-190. https://pmc.ncbi.nlm.nih.gov/articles/PMC11332439/

  7. Lovich, D. & Meier, S. (2025). Leaders Assume Employees Are Excited About AI. They're Wrong. Harvard Business Review, November 2025. https://hbr.org/2025/11/leaders-assume-employees-are-excited-about-ai-theyre-wrong

  8. Croft, J., Vaid, S., Cheng, L., & Whillans, A. (2026). Where Senior Leaders Are Struggling with AI Adoption, According to Research. Harvard Business Review, February 2026. https://hbr.org/2026/02/where-senior-leaders-are-struggling-with-ai-adoption-according-to-research
