
AI adoption succeeds or fails on skills, not software

When organisations talk about “deploying AI”, they often mean very different things. For some, it is hosting a model on-premises. For others, it is enabling tools like Claude or Gemini across the workforce. In reality, “deployment” has become a catch-all term for using AI in some way, somewhere within the business.

But the more useful question is not how to deploy AI. It is whether the organisation has the skills to use it well.

Too many AI initiatives are framed as technical projects. The assumption is that if you select the right platform and configure it correctly, value will follow. In practice, the differentiator is rarely the model. It is the capability around it.

Context matters

Successful AI adoption demands more than technical expertise. It requires critical thinking to validate outputs, problem decomposition to break complex challenges into manageable prompts, and change management to bring people along the journey. Large language models are improving rapidly, but they still operate within the limits of context. If users do not understand the problem they are trying to solve, no model will fix that.
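To make problem decomposition concrete, here is a minimal Python sketch. The `ask` helper is a hypothetical stand-in for whatever model client an organisation uses, and the contract-review task is invented for illustration; the value is in the shape, where each step produces an output a person can check before the next step depends on it.

```python
# A minimal sketch of problem decomposition: instead of one opaque
# mega-prompt, a complex task is split into small, checkable steps.
# `ask` is a hypothetical stand-in for any model client.

def ask(prompt: str) -> str:
    """Placeholder for a call to whatever LLM the organisation uses."""
    raise NotImplementedError("wire up your model client here")

def summarise_contract_risks(contract_text: str) -> str:
    # Step 1: extract the raw clauses, a task that is easy to spot-check.
    clauses = ask(f"List each obligation clause in this contract:\n{contract_text}")

    # Step 2: classify risk per clause, so a reviewer can validate each one.
    risks = ask("For each clause below, rate the commercial risk as "
                f"low/medium/high and explain why:\n{clauses}")

    # Step 3: only then synthesise, from outputs a human has already seen.
    return ask(f"Write a one-paragraph risk summary from this analysis:\n{risks}")
```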

On the technical side, fundamentals still matter. Data fluency is essential. If your team does not understand how data is structured, stored, and governed, AI will amplify confusion rather than insight. Integration and automation skills are equally important. AI becomes valuable only when it is embedded into real workflows, not when it sits in a separate tab. Security awareness must also be part of the equation. Granting an AI broad access without clear boundaries introduces a risk that many organisations underestimate.
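As an illustration of what clear boundaries can look like, here is a minimal sketch in Python. The names (`ALLOWED_TABLES`, `run_readonly_query`) are invented for the example; the point is structural, in that the AI layer is handed one narrow, read-only capability rather than broad access to everything.

```python
# A minimal sketch of clear boundaries for an AI integration. Names here
# (ALLOWED_TABLES, run_readonly_query) are illustrative, not a real API.
# The model layer never sees a raw database handle, only this one narrow,
# read-only capability.
import sqlite3

ALLOWED_TABLES = {"orders", "invoices"}  # an explicit scope, not "everything"

def run_readonly_query(db_path: str, table: str, limit: int = 50) -> list[tuple]:
    """The only data access the AI tooling is allowed to invoke."""
    if table not in ALLOWED_TABLES:
        raise PermissionError(f"table '{table}' is outside the AI's scope")
    # Open the database read-only so even a bad prompt cannot mutate data.
    # Interpolating the table name is safe here because it is allow-listed.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(f"SELECT * FROM {table} LIMIT ?", (limit,)).fetchall()
    finally:
        conn.close()
```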

Bridging the gap

The organisations doing this well tend to have people who can bridge technology and business. These are individuals who understand enough of the underlying systems to evaluate what is feasible, and enough of the commercial context to ensure AI is applied to real outcomes rather than theoretical capability.

One concern I have is the growing belief that AI allows organisations to hire only senior people because the technology can do the work that juniors do. That thinking is short-sighted. If we stop developing junior talent, we eventually run out of senior expertise. AI is a tool. It can accelerate learning and improve productivity, but it cannot replace the developmental pipeline required to sustain technical depth over time.

In many cases, the blockers to AI adoption are not technical at all. Fragmented and immature data landscapes limit what AI can do. Middle management may lack AI literacy and therefore hesitate to support adoption. Security and legal teams sometimes respond with blanket bans rather than nuanced policy. There is often a shortage of internal champions willing to navigate the messy implementation phase. Perhaps the most common blocker is poor output evaluation, where teams do not know when to trust AI and when to question it; a sketch of what a basic check can look like follows below.

These are organisational challenges, not technical ones.
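On the output-evaluation point specifically, the fix does not need to be sophisticated. Here is a minimal sketch, assuming the model has been asked to return JSON for a refund workflow; the field names are invented for illustration, and the discipline (validate mechanically, then trust) is the point.

```python
# A minimal sketch of output evaluation: never let a model's answer into a
# downstream process without a mechanical check first. The schema here is
# illustrative; real fields would come from the actual workflow.
import json

REQUIRED_FIELDS = {"customer_id": int, "refund_amount": (int, float), "reason": str}

def validate_model_output(raw: str) -> dict:
    """Reject anything malformed before it reaches a person or a process."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model returned non-JSON output: {exc}") from exc
    if not isinstance(data, dict):
        raise ValueError("model returned JSON, but not an object")
    for field, expected in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected):
            raise ValueError(f"field '{field}' is missing or the wrong type")
    if data["refund_amount"] < 0:
        raise ValueError("negative refund; escalate to a human reviewer")
    return data
```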

Skills remain critical

When it comes to building capability, my view is to upskill existing teams first. Your people already understand your customers, your processes, and your systems. AI tools amplify that context. External hires can bring valuable skills, but without that business context they take time to become effective. Recruit externally only where a specific capability cannot be developed quickly enough internally, or where an outside perspective is needed to shift a resistant culture.

Adoption also tends to follow a familiar pattern. Early individual adopters experiment independently. A central enablement function then emerges to set shared standards and policies without becoming a bottleneck. As adoption matures, AI becomes embedded into daily processes rather than treated as an optional extra. This trajectory demands the same discipline as any major technology change.

Leadership plays a critical role. Purely technical leadership can produce elegant solutions that struggle to gain traction. Purely business-led initiatives often stall at implementation. What works is a hybrid leader, technically credible enough to interrogate vendor claims and commercially grounded enough to connect AI initiatives to measurable outcomes, with sufficient organisational authority to align stakeholders.

Moving out of the shadows

Some risks must be managed deliberately. Shadow AI, where staff use consumer tools with sensitive data outside organisational oversight, is already a reality. Over-reliance on AI can lead to skill erosion. Confident but incorrect outputs can create legal and reputational exposure. Vendor lock-in can occur without a deliberate strategy. Concentrating AI capability in one or two individuals creates dependency risk. Most commonly, organisations invest in initiatives that never produce lasting value because the data, process, and governance foundations were not in place.
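On lock-in in particular, the mitigation is an old software idea: depend on an interface, not a vendor. Here is a minimal Python sketch, with class and method names invented for illustration; a sketch of the pattern, not any vendor's actual SDK.

```python
# A minimal sketch of avoiding vendor lock-in: the rest of the codebase
# talks to this Protocol, never to a specific vendor SDK, so swapping
# providers is a one-class change. All names here are illustrative.
from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorAModel:
    """Adapter for one provider; a VendorBModel would share the same shape."""
    def __init__(self, api_key: str):
        self._api_key = api_key

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("call vendor A's SDK here")

def draft_reply(model: ChatModel, ticket: str) -> str:
    # Business logic depends only on the interface, not on any one vendor.
    return model.complete(f"Draft a polite reply to this support ticket:\n{ticket}")
```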

Ultimately, AI becomes most powerful when it helps organisations extract insight from their own data and processes, making connections that would otherwise be missed. But that only works when the inputs are trusted and the surrounding capabilities are mature.

AI will continue to evolve. The question is whether our skills evolve with it. The organisations that succeed will not be those with the most sophisticated models, but those with the human capability to apply them responsibly, critically, and in service of clear outcomes.

What do you think?


Written by Grace Ashiru

