Why your AI strategy is actually a people strategy.
Organizations that deploy AI tools without building human capability are betting on a technology fix for a leadership problem. Here is why it always fails, and what to do instead.
A bank we spoke with last quarter spent $1.2 million on Copilot licences for 5,400 staff. Six months in, internal usage analytics showed that 73 percent of seats had been opened fewer than three times. The CEO did not call this a failure. He called it a slow start. He blamed change management.
He was almost right. He had a leadership problem. He just thought it was a software problem.
01 The illusion of access
Buying access to AI tools and assuming people will use them is the most expensive mistake in enterprise IT today. The line of reasoning is seductive: AI is a productivity multiplier, the tools are cheap relative to headcount, ergo deploying tools to everyone equals productivity gain at scale. Each step in that logic is wrong.
AI is a productivity multiplier conditional on the user knowing how to use it. The tools are cheap individually but expensive at scale because the licence fee is the smallest line in the total cost of adoption. And deploying tools to everyone produces a productivity gain only if everyone has the capability to extract that gain.
What capability looks like in practice: a senior analyst who can break a question into three prompts and synthesise the answers in twenty minutes instead of two hours. A team lead who can scaffold a discovery deck with a model and edit it down rather than start from a blank slide. An engineer who can describe a function in natural language and ship the unit test first.
What lack of capability looks like: 73 percent of seats opened fewer than three times.
An AI tool deployment without capability building is not an investment. It is a recurring expense with a probabilistic upside. Most organisations are not investors. They are budget approvers.
02 What "people strategy" actually means in AI
When we say AI is a people strategy, we are not saying "remember the human element" in a soft, HR-adjacent way. We are saying something operational and uncomfortable.
An AI people strategy has three layers, and you cannot skip any of them.
Layer 1 · Skills
The hands-on competence to use a model in a real workflow. This is not a one-hour lunch-and-learn. It is repeated practice on the team's actual work, with feedback, until the skill becomes default. The benchmark is simple: can the person ship the same output 30 percent faster, six months from now, without asking?
Layer 2 · Mindset
The frame the team uses when deciding what to delegate to a model and what to keep in human hands. Without this, you get either over-trust (people copy hallucinations into client decks) or under-trust (people ignore the model and stay slow). The benchmark: can the team articulate, in a sentence, what their model is good at and what it is bad at on their work?
Layer 3 · Governance
The rules of the road. Who can use what model on what data, where the audit log lives, what gets reviewed by whom before it goes external. Without governance, capability is a liability. The benchmark: does the team know the answer to "what happens if this output is wrong" before they generate it?
An AI strategy that addresses Layer 1 alone produces enthusiastic individuals and zero institutional change. Layer 1 plus Layer 2 produces a high-performing team with no organisational defence. Only the three layers together compound.
03 The three audiences problem
Most AI adoption programmes are built for one audience and silently expect the other two to follow. They do not.
The three audiences inside any large organisation are builders, leadership, and sponsors. Builders are the engineers, analysts, and ML practitioners who ship the use cases. Leadership is the heads of AI, IT, data, and architecture who clear the path and approve the workstreams. Sponsors are the business unit owners and executives whose KPIs the AI work is supposed to improve.
Build a programme for builders only and the work happens in a corner. Leadership sees AI as "that data team thing", sponsors do not see their numbers change, the work dies at the next budget cycle.
Build a programme for leadership only and you get a beautiful operating model document and zero shipped use cases. The builders are confused, the sponsors are sceptical, the conversation stays at the strategy table.
Build a programme for sponsors only and you get a sequence of pilots that look impressive in steering committees and never reach production because the build layer was never funded.
Capability transfers when the three audiences move at the same time, in the same engagement. That is the entire premise of a field-and-forum acceleration journey: builders ship, leadership clears blockers, sponsors realign incentives, and the three loops are connected by structured workshop checkpoints rather than left to chance.
04 Three signals you have an AI people problem
Test yourself against these three. If two or more apply, the budget you are about to spend on a new tool will not move the needle.
Signal 1 · Your usage analytics show a long tail
Open the Copilot, ChatGPT Enterprise, or Gemini admin dashboard. If your top 10 percent of users account for more than 60 percent of the consumption, nine-tenths of your organisation is getting almost nothing from the tool while one-tenth is getting most of it. Buying more licences will not change the distribution.
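The concentration check is a few lines of arithmetic once you can export per-seat usage counts from the admin dashboard. A minimal sketch, assuming a simple list of interaction counts per seat (the function name and data shape here are illustrative, not any vendor's API):

```python
def top_decile_share(usage_counts):
    """Fraction of total usage attributable to the top 10% of seats.

    usage_counts: per-seat interaction counts, e.g. from a dashboard
    export (hypothetical format -- adapt to your vendor's report).
    """
    ranked = sorted(usage_counts, reverse=True)
    top_n = max(1, len(ranked) // 10)  # top decile, at least one seat
    total = sum(ranked)
    if total == 0:
        return 0.0  # nobody is using the tool at all
    return sum(ranked[:top_n]) / total

# Example: ten seats where one power user does most of the work.
counts = [300, 20, 15, 10, 5, 3, 2, 1, 0, 0]
share = top_decile_share(counts)
# A share above 0.6 is the concentration signal described above.
flagged = share > 0.6
```

Run against a full seat export, this turns the signal into a single number you can track quarter over quarter: if the top-decile share stays flat while licence count grows, you are buying distribution, not adoption.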
Signal 2 · Your AI use cases are stuck at pilot
You can name three to five impressive AI pilots from the last 18 months. None of them have crossed into production. The same three names come up every time. The reason for the stall is rarely technical. It is the absence of a team capable of operating the use case after the consultancy left.
Signal 3 · Your AI strategy lives in IT, not on the operating model agenda
Open your last quarterly business review deck. Count the AI mentions in the IT section versus the AI mentions in the operating model and workforce section. If the ratio is above 5:1, AI is being treated as infrastructure, not as a transformation lever. The infrastructure framing is exactly what produces the 73-percent unused-licence problem.
05 What to do this quarter
If you recognise yourself in the signals above, here is what we recommend before the next budget approval cycle.
One. Pick a single team that already has live AI work in flight. Not the ML platform team, not the central AI office. A real business team where the operational outcome matters and where the work has been stuck. A team of eight to twenty people is the right size.
Two. Run an eight-week embedded engagement on that team. Field-and-forum: presence in the room while they ship, plus three or four workshop checkpoints where the team's leadership and the engagement sponsor sit down together and decide what changes. The output is not a deck. It is shipped production work plus a documented operating practice the team keeps.
Three. Measure the journey by capability built, not hours billed. The metric we use: at week 12, can the team take a new use case from idea to production-ready in half the time it took at week 0? If the answer is yes, you have proof that capability is a learnable, transferable, observable thing. If the answer is no, the engagement was wrong and you have learned something more valuable than the cost.
Four. Use the team you just accelerated as a pattern. Run the same protocol, sequentially, on three more teams over six months. By month nine, you have four teams shipping. The licences you bought start to pay back.
This is not a transformation programme. It is a capability acquisition. The vocabulary matters. Transformation programmes have steering committees and progress reports. Capability acquisitions have working sessions and shipped artefacts. The first kind is a budget item. The second kind is an asset on the team's balance sheet.
06 The leadership question
Almost every conversation we have with a CEO or COO about AI ends in the same place. They tell us the technology is moving fast and they are worried about being left behind. We tell them the technology is the easy part. The hard part is building an organisation that compounds rather than depreciates the AI tools they buy.
Your AI strategy is your people strategy. The teams who treat it that way will spend the next decade owning their market. The teams who treat AI as a procurement decision will spend the next decade explaining to their board why the productivity gains never materialised.
Pick the right one before your next budget cycle.