When a Board Rejects a £20M AI Investment — and What the Successful Version Looked Like
TL;DR
- A CTO's £20M AI transformation proposal was rejected by the board; a revised £8M version was approved two months later.
- The original failed because it was capability-first: it started with a technology vision and worked backward to find problems to solve.
- Three board questions exposed the flaw: What specific problem justifies this? What do we get for £5M? Who among existing leadership is accountable?
- The winning version was problem-first: three focused use cases, each sponsored by a business unit leader, with phased investment and clear decision gates.
The Original Proposal
The CTO's presentation included:
- A modern data lakehouse architecture (£4M)
- MLOps platform and governance infrastructure (£6M)
- Five use cases identified across the business
- A hiring plan for 40 ML engineers and data scientists
- Three-year roadmap to "AI transformation"
- Promise of competitive advantage through advanced analytics
Three Board Questions
Question 1: "What Specific Problem Justifies This?"
- The proposal listed five use cases, but none was sufficiently specific to justify £20M
- Each use case was framed around capability ("advanced analytics") not around a measurable business outcome
- Key implication: You cannot budget for a capability. You budget for a problem and its solution.
Question 2: "What Do We Get for £5M?"
- The CTO had no answer beyond "platform investment"
- The board needed to see a clear first return checkpoint, not just a multi-year roadmap
- Key implication: Phased investment requires clear milestones. Each phase must deliver business value.
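To make that implication concrete, here is a minimal sketch of a phase-gate check in Python. The tranche logic, threshold, and pivot band are hypothetical, invented for illustration; the article does not describe the board's actual gate criteria.

```python
# Minimal phase-gate sketch. The threshold and the 0.5 pivot band are
# hypothetical illustrations, not figures from this case.
def gate_decision(delivered_value_m: float, threshold_m: float) -> str:
    """Decide at a checkpoint whether to release the next tranche."""
    if delivered_value_m >= threshold_m:
        return "continue: release the next tranche"
    if delivered_value_m >= 0.5 * threshold_m:
        return "pivot: rescope before any further spend"
    return "stop: cap losses at spend to date"

# The board's "what do we get for £5M?" question is exactly this
# checkpoint: by the time £5M is spent, measurable value should exist.
print(gate_decision(delivered_value_m=1.2, threshold_m=2.0))  # pivot
```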
Question 3: "Who Among Existing Leadership Is Accountable?"
- The proposal implied a new structure with a new VP of AI reporting directly to the CTO
- No existing business unit leader was named as accountable for outcomes
- Key implication: Accountability cannot be delegated to a new role. It must sit with someone who owns a business metric.
Capability-First vs Problem-First
Capability-First Approach (what failed):
- Starts with a technology vision
- Works backward to identify problems that justify the vision
- Asks: "What platform do we need to be competitive?"
- Measures success by platform adoption and ML model metrics
Problem-First Approach (what succeeded):
- Identifies the specific business problem
- Defines what "solved" looks like in business terms
- Asks: "What is the minimum viable capability to solve this problem?"
- Measures success by business outcomes (revenue, cost, risk reduction)
The Revised Proposal
Two months later, the CTO returned with a different framing:
Use Case 1: Customer Churn Prediction (Sponsored by Head of Customer Success)
- Budget: £2.5M (build and operate for 18 months)
- Outcome: Reduce churn in the high-value segment by 5% = £3.2M revenue retained
- Accountability: Head of Customer Success owns the metric
Use Case 2: Fraud Detection in Transactions (Sponsored by Chief Risk Officer)
- Budget: £2M (build and operate for 18 months)
- Outcome: Reduce fraud loss by £1.8M annually
- Accountability: Chief Risk Officer owns the metric
Use Case 3: Demand Forecasting in Supply Chain (Sponsored by VP of Supply Chain)
- Budget: £1.5M (build and operate for 18 months)
- Outcome: Reduce excess inventory by 8% = £2.1M cost savings
- Accountability: VP of Supply Chain owns the metric
Total investment: £8M (not £20M)
- Decision gates at 6 and 12 months: Continue, pivot, or stop
- Each use case is a discrete project with its own P&L
- Technology decisions (platform, infrastructure) are driven by what each use case needs, not the reverse
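Read as a portfolio, the three use cases reduce to simple payback arithmetic. The sketch below uses the budgets and benefit figures stated above, but treating each benefit as an annual run-rate that accrues linearly once the system is live is a simplifying assumption of mine, not a claim from the proposal.

```python
# Payback sketch for the revised portfolio. Budgets and benefits are
# the article's figures in GBP millions; reading each benefit as an
# annual run-rate is an assumption made for illustration.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    sponsor: str             # existing leader who owns the metric
    budget_m: float          # 18-month build-and-operate budget
    annual_benefit_m: float  # stated benefit, read as annual run-rate

    def payback_months(self) -> float:
        """Months of run-rate benefit needed to recover the budget."""
        return 12 * self.budget_m / self.annual_benefit_m

portfolio = [
    UseCase("Churn prediction", "Head of Customer Success", 2.5, 3.2),
    UseCase("Fraud detection", "Chief Risk Officer", 2.0, 1.8),
    UseCase("Demand forecasting", "VP of Supply Chain", 1.5, 2.1),
]

for uc in portfolio:
    print(f"{uc.name} ({uc.sponsor}): "
          f"~{uc.payback_months():.0f}-month payback at run-rate")
```

On that simplified reading, every use case recovers its budget inside its 18-month funded window (roughly 9, 13, and 9 months respectively), which is what gives the 6- and 12-month gates teeth: a use case visibly off that trajectory can be pivoted or stopped before its budget is fully committed.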
The Board Approved It
What changed in their minds:
- Accountability was no longer abstract ("AI transformation") but concrete (three named leaders)
- Returns were measurable and phase-gated, not dependent on long-term vision
- Investment size was justified by specific problems, not by ambition
- The CTO wasn't asking for a blank cheque; they were asking for funding to solve three specific problems
- Risk was distributed: if one use case failed, the others could still deliver value
Five Principles for AI Investment Boards Will Approve
- Frame around problems, not capabilities.
- Start modest and phase the investment.
- Assign accountability to an existing business leader.
- Define success in business metrics, not technical metrics.
- Build decision gates — don't fund a multi-year roadmap.
The Decision Scientist explores how organisations build, scale, and govern machine intelligence capabilities. Through research and case studies, we examine the architecture, incentives, and human systems that distinguish AI-native organisations from those still experimenting with pilots.