
Three Capabilities Separating Leaders from Laggards

AI Catalyst Partner Matthew Simons highlights the three essential capabilities that separate AI leaders from laggards in effective AI adoption.


Research from MIT published in mid-2025 found that 95% of enterprise AI pilot projects failed to deliver measurable business value. Only 5% progress beyond early stages to achieve meaningful returns. This finding emerged from analysis of over 300 AI deployments, 150 executive interviews, and surveys of 350 employees.

Yet companies continue to increase their AI spending. The disconnect between investment and results has become one of the defining challenges of 2026.

We call this the Adoption Paradox: the counterintuitive reality that spending more on AI tools often delivers less value. The organisations that are achieving the strongest results are not necessarily those with the largest technology budgets. They are the ones who have understood that the foundations for AI success must be built before any tool is selected.

There is strong evidence that supports this position. A BCG study of over 1,000 companies worldwide found that only 26% have developed the capabilities to move beyond proof of concept and generate substantial value from AI. The remaining 74% are trapped in cycles of failed experiments. The differentiator is not which tools they bought, but how they prepared their organisations to use them.

This matters particularly for mid-market businesses: those with revenues of £10 million or more and teams of 30 or more employees. Such businesses do not have the luxury of enterprise-scale experimentation budgets. Every investment needs to count. The question is not whether to adopt AI, but how to do so in a way that actually delivers.

As we move through 2026, three interconnected capabilities are reshaping what successful AI adoption looks like. Understanding these capabilities, and building them strategically, will increasingly separate the leaders from the laggards.

Capability One: Managing AI Agents

In January 2026, McKinsey CEO Bob Sternfels revealed that the consulting giant now counts its workforce as 65,000: 40,000 human employees and 25,000 AI agents. Speaking at the Consumer Electronics Show in Las Vegas and on the Harvard Business Review IdeaCast, Sternfels stated that his goal is to reach parity by the end of the year: equal numbers of human employees and AI agents.

This is not a distant future scenario. It is happening now, at one of the world's most influential professional services firms. McKinsey reportedly saved 1.5 million hours in 2025 through AI automation of search and synthesis tasks that junior consultants previously performed.

AI agents are different from the AI tools most businesses have encountered so far. Where a chatbot responds to queries or an automation follows a fixed script, an agent can break down complex problems, make decisions within defined parameters, and execute multi-step tasks with limited human supervision. They are, in effect, digital workers.
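The pattern described above can be sketched in a few lines. This is a toy illustration, not any vendor's agent framework: the tool functions, approved-tool list, and step limit are all assumptions introduced for the example, but they show the essential shape of an agent that executes a multi-step plan within defined parameters.

```python
# Toy sketch of an agent: it executes a multi-step plan, but only via
# tools it has been granted, and only up to a hard step limit.

def lookup_price(item: str) -> float:
    """Stand-in for a real data source (hypothetical)."""
    prices = {"widget": 4.0, "gadget": 9.5}
    return prices[item]

def total_order(items: list[str]) -> float:
    """Sum prices for a list of items."""
    return sum(lookup_price(i) for i in items)

# Guardrails: the agent only sees approved tools, and cannot run unbounded.
APPROVED_TOOLS = {"lookup_price": lookup_price, "total_order": total_order}
MAX_STEPS = 10

def run_agent(plan: list[tuple[str, tuple]]) -> list:
    """Execute each (tool_name, args) step of the plan within the limits."""
    results = []
    for step, (tool_name, args) in enumerate(plan):
        if step >= MAX_STEPS:
            break
        if tool_name not in APPROVED_TOOLS:
            raise PermissionError(f"tool {tool_name!r} not approved")
        results.append(APPROVED_TOOLS[tool_name](*args))
    return results

plan = [("lookup_price", ("widget",)),
        ("total_order", (["widget", "gadget"],))]
print(run_agent(plan))  # [4.0, 13.5]
```

In a real deployment the plan would come from a language model rather than being hard-coded, but the accountability questions raised above live in exactly these guardrails: who decides what goes in the approved-tool list, and who reviews what the agent did with it.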

This shift raises questions that few mid-market businesses have begun to address. Who will build and configure these agents? Who will manage them day-to-day? Who will be accountable when they make mistakes? The technical infrastructure is only part of the challenge. The larger question is organisational: what does it mean to manage a workforce that includes AI?

New roles are emerging to answer these questions. Agent Operations Managers oversee performance and reliability. Agent Experience Specialists design how agents interact with humans. These are not IT roles in the traditional sense. They require a blend of technical understanding, process design, and people management.

The businesses that begin thinking about these questions now will be better positioned when agent adoption accelerates. Those that wait until they are forced to respond will find themselves playing catch-up.

Capability Two: Visual Intelligence

Until recently, AI that could genuinely understand images, documents, and visual information was largely the preserve of large enterprises with substantial R&D budgets. The complexity of training visual models, the cost of infrastructure, and the expertise required put it beyond reach for most mid-market businesses.

That barrier is falling rapidly. Multimodal AI models that can process text, images, and documents together have become much more accessible. The availability of pre-trained models through APIs, combined with reduced requirements for custom training, means that what required dedicated machine learning projects two years ago can now often be achieved with carefully configured solutions.
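To make the accessibility point concrete, here is a minimal sketch of how a pre-trained multimodal model is typically called through an API: the image is base64-encoded and sent alongside a text instruction in a single message. The message shape follows the widely used OpenAI-style chat format; the model name is an assumption, and no network call is made here, only the request payload is built.

```python
import base64

def build_vision_request(image_bytes: bytes, instruction: str) -> dict:
    """Build an OpenAI-style chat request pairing an image with a text task."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4o",  # assumption: any vision-capable model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": instruction},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{encoded}"}},
            ],
        }],
    }

req = build_vision_request(b"\x89PNG...",  # placeholder bytes, not a real drawing
                           "List the part numbers visible on this drawing.")
print(req["messages"][0]["content"][0]["text"])
```

That this fits in a dozen lines is the point: what once required a dedicated machine learning project is now a configuration exercise around a hosted model.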

The business applications are significant. We have worked with organisations that have transformed their quotation processes by using visual AI to analyse technical drawings, identify relevant specifications, and calculate requirements. Results have included reducing quote turnaround from weeks to days, achieving model accuracy above 90%, and freeing skilled staff to focus on complex cases and client relationships rather than routine analysis.

This is not an isolated example. Advanced document processing, quality inspections, invoice handling, archive digitisation: across industries, visual intelligence is creating opportunities to automate work that previously required human eyes, multiple rounds of analysis, and human judgement.

The question for mid-market businesses is whether they are recognising the visual data opportunities in their own operations. Most organisations have drawings, documents, images, or forms that are currently processed manually. Many do not realise that these represent viable AI use cases. The first step is simply identifying where visual intelligence could apply.

Capability Three: Context Engineering

The most common approach to AI adoption in mid-market businesses looks something like this: purchase licences for an AI platform, distribute them to staff, and hope that value emerges through use. It is a pragmatic approach. It feels low-risk and allows individuals to experiment.

It is also, in most cases, a recipe for disappointment. The MIT research found that internal AI builds succeed only about 33% of the time, compared to 67% for vendor partnerships and purchased solutions. A significant factor is whether organisations have prepared the context—the information and structure that AI needs to perform effectively in their specific business.

Context engineering is the discipline of designing and maintaining the inputs that AI systems need: domain knowledge, process documentation, brand guidelines, customer information, and the constraints within which the AI should operate. It is not the same as prompt engineering, which focuses on individual interactions. Context engineering is systematic and organisational.
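A simple sketch makes the distinction tangible. Here, organisational knowledge is maintained as named sections in a shared library and assembled into a system prompt every AI interaction receives, rather than each employee pasting context ad hoc. The section names and contents are illustrative assumptions, not a prescribed schema.

```python
# A shared, maintained context library: the organisational asset
# context engineering builds. Contents here are illustrative only.
CONTEXT_LIBRARY = {
    "domain": "We manufacture bespoke steel fabrications for construction.",
    "brand_voice": "Write in plain British English; avoid jargon.",
    "constraints": "Never quote prices; route pricing questions to sales.",
}

def build_system_prompt(sections: list[str]) -> str:
    """Assemble the named context sections an AI tool should always see."""
    parts = []
    for name in sections:
        if name not in CONTEXT_LIBRARY:
            raise KeyError(f"unknown context section: {name}")
        parts.append(f"## {name}\n{CONTEXT_LIBRARY[name]}")
    return "\n\n".join(parts)

prompt = build_system_prompt(["domain", "brand_voice", "constraints"])
print(prompt.splitlines()[0])  # ## domain
```

Because the sections live in one place, they can be versioned, reviewed, and improved as output quality is measured, which is what makes the discipline organisational rather than individual.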

This is why two companies can purchase identical AI tools and achieve entirely different results. One has invested in building a shared knowledge foundation that makes AI outputs consistent and relevant. The other has left each employee to figure it out for themselves.

The companies we see achieving genuine value from AI are those that treat context as a strategic asset. They have documented what their AI needs to know. They have created shared prompt libraries and templates. They measure output quality and refine their approach over time.

This is the foundation-first approach: investing in readiness before investing in tools. It is counterintuitive in a market that constantly pushes new platforms and capabilities. But the evidence is clear. The businesses that build strong foundations consistently outperform those that chase the latest technology.


The Path Forward

These capabilities are interconnected. Agents require context to perform effectively. Visual intelligence often feeds into agentic workflows. Context engineering underpins success across all AI applications.

For mid-market businesses, the opportunity is significant. You can move faster than enterprises, make decisions more quickly, and adapt your approach as you learn. But capturing that opportunity requires a different mindset from the tool-first approach that dominates the market.

Start by understanding where you stand. Assess your readiness across the foundational pillars: leadership commitment, data quality, skills and culture, governance. Identify the specific use cases where AI could address genuine business problems. Build the context and knowledge foundations that will make any tool more effective.

The gap between AI leaders and laggards is widening. But for businesses willing to take a foundation-first approach, there is still time to be on the right side of that divide.

In the articles that follow, we will explore each of these shifts in detail, with practical guidance for mid-market businesses ready to act.

 

Sources:

  1. AI pilot failure rates: MIT NANDA Initiative, 'The GenAI Divide: State of AI in Business 2025', reported in Fortune, August 2025
  2. AI capability development: BCG 'Build for the Future' Global Study, October 2024 (n=1,000 companies)
  3. McKinsey workforce figures: Bob Sternfels, CES Las Vegas and Harvard Business Review IdeaCast, January 2026; confirmed by McKinsey spokesperson to Business Insider
  4. Internal vs vendor AI success rates: MIT NANDA Initiative, 2025