Rebuild on AI Principles: From Tools to Transformation
Why boards must shift focus from 'which tools' to 'what if we rebuilt this function entirely on AI principles?'

BCG's December 2025 research makes an argument that is obvious in retrospect and still almost universally ignored: the question most boards ask about AI is the wrong one. Boards ask which AI tools the company is using, how many seats have been deployed, and what the adoption rate is. BCG's answer is that none of those is the right question.
"Rather than asking which AI tools the company is currently using, the board should ask: What if we rebuilt this function entirely on AI principles? How much faster, leaner, and more predictive could it be?"
— BCG, Targets Over Tools: The Mandate for AI Transformation, December 2025
The distinction sounds semantic. It is not. It determines whether you get incremental efficiency or a fundamentally different operating model, and the gap between those two outcomes is where most enterprise AI investment is currently disappearing.
Why the tool question produces the wrong answer
When you start with tools, you map AI capabilities onto existing processes. Those processes were designed for human execution: they have the shape, the handoffs, the review stages, and the approval chains that humans need. Layering AI onto that structure speeds up individual steps but leaves the structure intact. The result is what McKinsey calls the "gen AI paradox": nearly eight in ten companies report using gen AI, and just as many report no significant bottom-line impact. The tools are deployed. The process is the same. The ROI is marginal.
The outcome-first question breaks that pattern. If you start by asking what a function optimised entirely for AI capabilities would look like, with zero legacy constraints and designed from scratch, you get a different answer. The function may have half the steps. The data flow may be inverted. The human touchpoints may be three instead of thirty. That is the transformation BCG is pointing to, and it is not a technology decision. It is a design decision that happens before technology is chosen.
The board's role is to force the question
Most management teams will not ask this question voluntarily. The people who own a function have spent years optimising it, and asking "what if we threw it out and started again?" requires a level of structural challenge that operating teams rarely apply to themselves. This is the board's job: not tool oversight, not AI governance checklists, but the harder question of whether the outcomes being targeted are ambitious enough and whether AI-first design gets there faster and at lower cost than optimising what already exists.
BCG's data shows that the companies doing this, still a small minority, are pulling ahead on the metrics that matter: productivity per employee, cost per outcome, speed from idea to market. The gap is not closing. Every quarter spent optimising the wrong function is a quarter of compounding advantage handed to the competitor that asked the right question first.
The sequence that works
BCG's research identifies the sequence that produces outcomes rather than pilots. Start by defining the target: not "reduce costs by 15%" but "what does this function look like if it runs at ten times current throughput with the same team?" Work backward from that to the capabilities required. Identify what data, what architecture, and what redesigned roles are needed. Then select the tools. Most AI transformation programmes run this in reverse: tool first, target adjusted to match what the tool does, outcome measured against the adjusted target. That is why 90% of enterprise AI pilots never reach production (MIT, 2025). The pilots prove the tool works. They do not prove the target was worth reaching.
For PE operating partners working across portfolio companies and for principals managing operational AI deployments, the question to put to management teams is not "how is the AI rollout going?" It is "what did this function look like before you chose the tool, and is that still the right function to have?"