Insights

The Adoption Paradox: Why Spending More on AI Tools Delivers Less

Written by Matthew Simons | 27 January 2026

AI Catalyst Partner Matthew Simons explains why higher spending on AI tools often leads to poor outcomes, and why building strong foundations leads to more sustainable AI adoption.

Here's an uncomfortable truth most AI tool vendors would prefer you not consider: what you spend on AI licences has remarkably little correlation with the value you receive.

A Salesforce survey of 3,350 SMB leaders found that 83% of growing businesses are using AI, compared to just 55% of declining businesses. But the critical insight is not that growing businesses use more AI. It is that they use AI differently.

Growing businesses have clearer strategies for AI adoption. They invest in training. They establish governance. They start with specific use cases rather than hoping that making tools generally available will generate value on its own.

We call this the Adoption Paradox: the counterintuitive pattern where increased spending on AI tools often correlates with worse outcomes, not better ones. Understanding why this happens is the first step toward ensuring your business does not fall into the same trap.

Why The Paradox Exists

The most common approach to AI adoption in mid-market businesses follows a predictable pattern. Leadership recognises that AI is important. Budget is allocated for tools. Licences are purchased. Staff are given access and encouraged to experiment. Then everyone waits for value to emerge.

This approach feels sensible. It appears low-risk. It allows individual employees to find their own use cases. And it almost always disappoints.

The problem is not the tools. Modern AI platforms are genuinely capable. The problem is that this approach creates three compounding failures:

  • Fragmented experimentation: Without clear use cases, employees experiment in isolation. One person uses AI for email drafting, another for research, another ignores it entirely. There is no shared learning, no accumulated expertise, and no way to build on what works.
  • Missing context: AI tools produce generic outputs when they lack specific information about your organisation, your processes, and your customers. Without systematic investment in providing that context, each employee must reinvent the wheel—or accept mediocre results.
  • Inflated expectations: Vendor demos show AI performing brilliantly on carefully selected examples. When staff encounter the reality—that AI requires thoughtful setup to deliver value—they often conclude that the technology does not work and stop trying.

The result is that bigger AI budgets buy more licences but not better outcomes. The 74% of companies that BCG found trapped in cycles of failed AI experiments are not under-investing. Many are over-investing in tools while under-investing in the foundations that make those tools effective.

The Evidence Is Clear

MIT's research on AI deployment found that internal AI builds succeed only about 33% of the time, while partnerships with specialised vendors succeed around 67% of the time. The difference is not primarily about technology. Vendors bring structured approaches, documented processes, and accumulated experience—the foundations that internal teams often lack.

The regional data tells a similar story. Research shows that 82% of London-based firms see AI as strategically important, compared to 44% in Northern England. But seeing AI as important and successfully implementing it are different things. Without the right foundations, strategic recognition often leads to frustrated investment rather than meaningful returns.

BCG's research across frontline employees found that only 30% of managers and 28% of frontline staff have been trained in how AI will change their jobs. When employees lack understanding of what AI can do and how to work with it effectively, even excellent tools produce disappointing results.

What Foundations Actually Look Like

When we talk about AI foundations, we mean the prerequisites that determine whether technology investment will succeed or fail:

  • Leadership commitment: Do your senior team visibly champion AI adoption? Are they allocating time and resources, not just budget? Are they modelling the behaviours they want to see, using AI themselves rather than delegating it entirely?
  • Culture and psychological safety: Do your people feel safe to experiment, make mistakes, and learn? A growth mindset matters here. Organisations where failure is punished see AI adoption stall because staff are afraid to try. Those that treat early experiments as learning opportunities build capability faster.
  • Use case clarity: What specific business problems are you trying to solve? Vague intentions to 'improve productivity' or 'explore AI capabilities' are not use cases. Reducing quote turnaround time by 50%, or automating invoice processing for a specific workflow, are.
  • Data readiness: Is the information AI needs to perform your use cases available, accessible, and of sufficient quality? Many AI initiatives fail because the data they depend on is scattered, inconsistent, or locked in systems that cannot easily share it.
  • Skills and understanding: Do the people who will work with AI understand what it can and cannot do? This is not about programming skills. It is about realistic expectations and effective interaction.
  • Governance structures: Who is accountable for AI performance? How are risks being managed? What policies govern acceptable use? Without these structures, AI deployment tends to fragment into inconsistent individual experiments that never scale.

The Foundation-First Alternative

The alternative to the tool-first approach is straightforward in principle, though it requires discipline in practice: invest in foundations before investing in tools.

This means starting with an honest assessment of your current readiness. Where are you strong? Where are the gaps? What specific use cases would deliver meaningful value if successfully implemented?

It means building the knowledge and skills your organisation needs—not through generic AI awareness training, but through practical education focused on your actual use cases and context.

It means establishing governance that enables rather than constrains, giving your people clear guidance on how to use AI effectively and responsibly.

Only then, with foundations in place, does tool selection become a meaningful decision. At that point, you know what you need the tools to do, you have the organisational capability to use them effectively, and you can evaluate options against clear criteria.

Escaping the Paradox

The Adoption Paradox is not inevitable. Organisations that take a foundation-first approach consistently achieve better results than those that lead with tools.

The challenge is that foundations are less exciting than new technology. They require internal work rather than purchasing decisions. They do not come with flashy demos or marketing promises.

But they work. And for mid-market organisations that cannot afford to waste investment on AI initiatives that fail to deliver, that is what matters.

If you recognise the pattern we have described, or suspect your organisation may be caught in the paradox, our Foundations Accelerator is designed to help you build the readiness that makes AI investment pay off. In the following articles in this series, we will explore what foundation-building looks like in practice, starting with context engineering.


Sources:

  1. Growing vs declining SMB AI usage: Salesforce Small & Medium Business Trends Report, 6th Edition, December 2024 (n=3,350 SMB leaders).