Organizations have quietly replaced the question “how do we improve productivity?” with “how do we deploy AI?” – and the costs of that substitution are starting to show.
If AI is advancing this quickly, why does value still feel so hard to realize?
Kubrick’s research into the AI labor market explored a universal capability challenge across the US and UK: demand for mid-senior AI roles is outstripping entry-level demand by up to 5x. This squeezes out cost-effective junior talent, raises wages, and compounds the ROI crisis – but the deeper risk is existential. Who will work with AI tomorrow if we don’t create that talent today?
We brought these findings to decision-makers across banking, insurance, wealth and payments. What we found was a critical reframe: the blockers are operational, not technical. Together, we identified three challenges that impede transformation – and how an AI-native workforce can help organizations move faster.
Challenge 1: Framing AI as a technical solution, not a business transformation
Under pressure, many organizations have reframed a fundamentally human and operational challenge:

How do we sustainably improve productivity and ROI under constraint?

into a technical one:

How do we build, deploy, and scale AI?
Previous digital transformations followed a cycle that reinforced that perception: investment in platforms and tooling came first, data and infrastructure challenges followed, and capability-building was deferred to the adoption stage.
However, AI adoption is unfolding under compressed timelines, flat budgets, and rising accountability. Capability creation, governance, and delivery are forced to happen simultaneously.
Under these conditions, mis‑sequencing investment in capability is actively destabilizing.
Challenge 2: Demanding speed while designing for control
Business units are under the same pressure to do more with less as technology teams – leading to the spread of ‘shadow AI’: tooling used outside central platforms and without formal approval.
Often labeled a governance problem, or even bad behavior, shadow AI actually reflects an asymmetry in incentives. Business teams are accountable for outcomes: revenue, delivery, customer success. Technology teams are measured on capability: stability, security, compliance, long‑term scalability.
When timelines compress, these incentives collide. Speed wins, not because standards fall, but because business impact is what gets rewarded.
Shadow AI thrives where three responsibilities fall between the cracks:
- Context: understanding how work happens and where judgement matters
- Translation: turning problems into use cases, and outputs into operational change
- Judgement: deciding when speed outweighs certainty, and what level of risk is acceptable
The gap between technical fluency and business context builds structural debt. Shadow AI recedes when context, translation, and judgement are embedded closer to the work, so AI can reshape processes rather than merely bypass them.
Teams supported by junior AI-ready talent have an advantage when it comes to redesigning processes, which requires time, coordination, and context.
Challenge 3: Designing teams around execution instead of judgement
Concerns persist that junior staff lack the industry experience to be productive. So, many organizations have concentrated decisions, review, and control with their most experienced people, equating experience with certainty.
In the short term, this feels prudent.
In practice, it creates friction.
Experienced leaders become overloaded with work that does not require their judgement. Review queues lengthen, and mid‑senior leaders become execution bottlenecks. Delivery depends on a small number of individuals holding disproportionate context, and when they are unavailable or leave, risk escalates.

AI changes the economics of this model: value shifts from doing the work to interrogating it. Outputs must still be tested, contextualized, and defended, particularly in regulated environments. This has prompted a reappraisal of early‑career talent.
Rather than acting as delivery capacity, AI‑literate juniors increasingly operate as the interface between AI systems and the business.
In many teams, agents execute while juniors supervise them. Senior leaders apply judgement at defined inflection points, focusing on tradeoffs, escalation, and accountability rather than routine validation.
In this model, juniors function as:
- Translators between business intent and technical execution
- First‑line challengers of AI outputs
- Multipliers of judgement, enabling experience to be applied where it matters most
Reverse mentoring emerges as a benefit: AI fluency flows upward from those closest to the tools, while domain expertise and risk instinct flow downward.
AI reshapes how knowledge, experience, and judgement are applied to processes. Organizations that recognize this shift will stop asking how to remove humans from the loop, and start designing teams which blend experience, capability, and technology.
In this structure, organizations benefit from a cost-effective distribution of junior and senior wages and keep a healthy talent pipeline open, developing their future leaders.


