Fix the sales velocity problem with agentic AI execution
AI Catalyst Partner Joseph Taylor shares practical advice on how to reduce sales cycle times and enhance margins with risk-managed agentic AI workflows and human oversight.

The paradox of the "value gap"
UK sales organisations have embraced the promise of AI, but there is a widening disconnect between adoption and measurable commercial performance. Warwick Business School reported that 81% of sales professionals acknowledge AI as a key productivity driver, yet only 7% say their organisations track and measure AI's impact using specific KPIs. In other words, 93% of UK sales teams are using AI without validating its true impact.
We are effectively witnessing an AI "value gap", where increased activity at the top of the funnel does not translate to bottom-line efficiency. This imbalance is further evidenced by the concentration of AI usage. While 54% of sales teams employ AI for content generation, adoption for high-value execution tasks such as forecasting or sales coaching remains well below 8%. Consequently, while sellers are writing more emails, they are not necessarily closing more deals or improving the quality of their sales processes.
The structural cost of the "wait state"
In competitive SaaS, outbound deals rarely fail because of product deficiencies alone. They fail because of a loss of momentum between a client's needs discovery and a confusing commercial proposal. That collapse is rarely dramatic. It's a sequence of "wait states" - handoffs with information gaps, unclear success criteria, approval bottlenecks, solution engineering delays and pricing validation loops - which results in poor client experiences and quietly erodes your velocity and margins.
These commercial stakes are not theoretical. Harvard Business Review’s analysis of 1.25 million leads found that firms attempting contact within an hour were nearly 7x more likely to qualify the lead than slower responders. When you combine that decay curve with the reality that sellers spend 64% of their time on non-selling activities (Salesforce) and that UK office workers lose 15 hours each week to administrative friction (Ricoh UK), proposal latency stops being an “operational nuisance” and becomes a structural failure - it slows pipeline conversion, increases rework, and invites discounting under time pressure.
Moving beyond the "AI enablement" trap
To move beyond the "AI enablement trap," leaders must distinguish between AI as an individual assistant and AI as an organisational agent. Most current strategies are limited to content creation activities, producing better-written emails that feed into the same broken, slow workflows. That's the "AI assistant" value. True acceleration requires moving along the Automation Spectrum.
In a manual state, the seller is the sole engine of research and synthesis, often resulting in "sketchy notes" that miss critical buyer needs, pains or expected benefits. As we move toward an agentic model, the AI shifts from a passive drafter to an active operative. AI agents act as workflow orchestrators, executing autonomous target account research ahead of the discovery call and conducting real-time gap analysis on sales meeting transcripts, ensuring that no "new reality" goals are missed before they are saved to the CRM and shared with other team members.
This is what we call the "agentic value": the shift from AI as a "copilot for writing" to AI as an "operator of workflow steps" (with defined boundaries and oversight controls), supervised by talented human sales professionals. In practice, this means redesigning sales processes so AI executes repeatable work in parallel (and flags gaps early), while humans handle judgement, exceptions, and approvals.
Re-engineering the discovery-to-proposal process
To produce measurable ROI, the discovery-to-proposal process must be redesigned from a sequence of heroic individual efforts to a combined workflow of agentic execution and human supervision.
In a traditional "before" state, the process is fragile and sequential: business development runs discovery, captures handwritten notes, briefs pre-sales, and then the rework begins - missing impact metrics, unclear technical constraints, and partial client stakeholder context. The proposal arrives late, often under a "rush discount" narrative, and the CRM records don't contain the full story. Clients are confused, frustrated that they are not being heard, and lose confidence in these types of vendors.
In a future "after" state, where the process is a blend of human and AI agency, discovery becomes a supervised digital workflow. An agent can run target account and market research, produce high-quality summaries for the business development team, analyse the discovery meeting transcripts, map them against your methodology, and flag missing "commercial impact" data before any handoffs or further decisions are made. It can update systems of record such as your sales CRM, assemble a structured draft proposal (scope, success criteria, dependencies, assumptions) from your templates, apply pricing logic, and prepare a near-complete commercial proposal for human validation. The business development team becomes the commercial pilot of the sale: reviewing outputs, handling exceptions and shaping the deal narrative rather than doing manual synthesis and admin.
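To make the design choice tangible, here is a minimal sketch of what the supervised gap check at the heart of that workflow might look like. It is illustrative only: the required "commercial impact" fields, function names and outputs are assumptions for the example, not a specific vendor platform or our production implementation.

```python
# Minimal, illustrative sketch of a supervised discovery-to-proposal gap check.
# Field names, functions and messages are hypothetical placeholders; a real
# workflow would call your own research, transcript-analysis, CRM and pricing
# services, and end with a human (HITL) approval step.

from dataclasses import dataclass, field

# The "commercial impact" data the methodology expects from discovery (assumed).
REQUIRED_FIELDS = {"impact_metrics", "success_criteria", "dependencies", "stakeholders"}


@dataclass
class DealContext:
    account: str
    findings: dict = field(default_factory=dict)

    @property
    def gaps(self) -> set:
        # Anything the discovery notes or transcript did not capture.
        return REQUIRED_FIELDS - self.findings.keys()


def analyse_transcript(ctx: DealContext, extracted: dict) -> DealContext:
    """Merge fields extracted from a sales-meeting transcript (e.g. by an LLM step)."""
    ctx.findings.update(extracted)
    return ctx


def run_discovery_to_proposal(ctx: DealContext, extracted: dict) -> str:
    ctx = analyse_transcript(ctx, extracted)
    if ctx.gaps:
        # Flag gaps to the seller *before* any handoff or CRM write-back.
        return f"GAPS for {ctx.account}: ask about {', '.join(sorted(ctx.gaps))}"
    # In a fuller workflow: schema-validated CRM update, templated proposal draft,
    # pricing logic, then human review before anything is sent to the client.
    return f"Draft proposal for {ctx.account} queued for human review."


if __name__ == "__main__":
    ctx = DealContext(account="Example Ltd")
    print(run_discovery_to_proposal(ctx, {"impact_metrics": "reduce cycle time by 20%"}))
```

The point is not the code itself but the sequencing: the agent surfaces missing information before the handoff, and nothing reaches the client without a human approval step.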
The outcome is not simply "faster writing." It is reduced proposal latency, fewer rework loops, cleaner handoffs, and better margin discipline, because the execution steps are instrumented and repeatable.
The new unit economics: cost per validated proposal
To secure CFO-level confidence, the focus must shift from "AI usage" to cost per outcome. We propose cost per validated proposal as a new metric that accounts for the loaded cost of human time, the direct compute and AI token costs, and the secondary operational costs of oversight and evaluation.
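As a back-of-the-envelope illustration, the calculation can be as simple as the sketch below. The cost categories mirror the definition above; the figures in the example are invented.

```python
# Illustrative sketch of the proposed KPI; the example figures are invented.

def cost_per_validated_proposal(
    human_hours: float,          # loaded seller / pre-sales time spent on proposals
    loaded_hourly_rate: float,   # fully loaded cost of that time
    ai_compute_cost: float,      # direct model, token and infrastructure spend
    oversight_cost: float,       # HITL review, evaluation and governance effort
    validated_proposals: int,    # proposals that passed human validation
) -> float:
    total_cost = human_hours * loaded_hourly_rate + ai_compute_cost + oversight_cost
    return total_cost / validated_proposals


# Example only: 120 hours at £85/h, £400 of compute, £900 of oversight,
# 30 validated proposals -> roughly £383 per validated proposal.
print(round(cost_per_validated_proposal(120, 85, 400, 900, 30), 2))
```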
This KPI makes the commercial logic explicit: if agentic execution reduces rework, shortens cycle time, and lowers discount variance, it directly protects margin. It also creates a capacity uplift, which can be reinvested in producing more high-quality proposals or spending more time nurturing target accounts, because the routine synthesis, formatting, and routing are now moved to supervised digital labour.
De-risking through commercial risk controls
The primary barrier to agentic adoption is a rational lack of trust. Gartner predicts that over 40% of agentic AI projects will be cancelled by 2027 due to inadequate risk controls and unclear business value. The path forward is not "less autonomy" but agents with bounded autonomy that operate within explicit commercial guardrails. To prevent "autopilot discounting" and forecast distortion, the agentic solution must be wrapped in a robust governance framework.
Below is a practical risk-control map for sales workflows:
| Failure mode | RAI risk category | Commercial risk | Mitigation controls |
|---|---|---|---|
| Incorrect pricing or unauthorised discounts | Reliability | Margin erosion; contract disputes | Grounding in source-of-truth pricing data, HITL approval, alerts for above-threshold discounts |
| Incorrect CRM data write-back | Accountability | Forecast distortion; pipeline mistrust | Least privilege write permissions, schema validation, audit logging, HITL approval |
| Leaking PII / prospect data into drafts | Privacy and Security | Breach of GDPR compliance; loss of trust | Automatic PII masking and output sanitisation, HITL reviews |
| Runaway autonomous tool usage | Reliability and Robustness | AI budget overruns | Real-time observability dashboards, budget alerts and circuit breakers when usage thresholds are exceeded |
| Overconfident or "hallucinated" claims | Transparency and Explainability | Mis-selling and reputational damage | Grounded in internal knowledge base / document libraries, "show your sources" requirement, HITL reviews |
These proposed controls are not blockers. They are the "seatbelts" that allow the sales engine to run at a higher velocity without crashing margins.
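To show how lightweight these seatbelts can be, the sketch below illustrates two of the controls from the table: an above-threshold discount gate and a simple spend circuit breaker. The thresholds, function names and currency are assumptions for the example, not a reference to any specific product.

```python
# Illustrative sketch of two commercial guardrails from the table above.
# Thresholds and names are assumptions for the example.

DISCOUNT_APPROVAL_THRESHOLD = 0.15   # discounts above 15% require human approval
DAILY_AI_BUDGET_GBP = 50.0           # hard stop for autonomous tool usage


def discount_requires_approval(list_price: float, proposed_price: float) -> bool:
    """Route above-threshold discounts to a human (HITL) approver."""
    discount = 1 - proposed_price / list_price
    return discount > DISCOUNT_APPROVAL_THRESHOLD


def within_budget(spend_today_gbp: float, next_call_cost_gbp: float) -> bool:
    """Circuit breaker: block further autonomous tool calls once the budget is hit."""
    return spend_today_gbp + next_call_cost_gbp <= DAILY_AI_BUDGET_GBP


if __name__ == "__main__":
    print(discount_requires_approval(10_000, 8_000))   # True -> escalate to a human
    print(within_budget(48.0, 5.0))                    # False -> pause the agent
```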
Turn agentic AI experiments into P&L impact
The generative AI honeymoon is over. The winners in 2026 will be the firms that stop asking “What can AI write?” and start asking: "What can AI do—reliably, inside our workflows, with measured outcomes?"
If your team is already using AI but you cannot point to measurable reductions in cycle time, rework loops, or cost-per-outcome, you have a value engineering problem—not a tooling problem.
Our AI Value Audit is designed to identify your highest-cost wait states and prioritise the agentic use cases that are most feasible for your stack, data quality, and risk appetite. If you are seeing AI activity without velocity improvements, let's talk.