
Expert insights for practical AI implementation. Learn how to start small, shape culture, choose the right use case and build a clear AI adoption strategy.
Most teams reach a moment where AI feels both exciting and overwhelming. The possibilities expand daily, new tools appear without warning and the pressure to keep up grows quietly in the background.
Our MD, Matt Sherwen, opened his Smart Retail Tech Expo talk with three simple observations.
A recent MIT study suggests that as many as 90% of AI projects fail to meet expectations. That figure can be discouraging at first glance, but Matt’s message to the room was clear.
Failure at this stage is normal because most businesses are starting in the wrong place.
Continue reading to learn how to approach AI implementation with clarity and confidence by grounding decisions in real business needs rather than pressure or hype.
The early excitement around AI often leads businesses straight into tools.
A new automation platform appears. A chatbot looks promising. A content generator impresses a few people internally.
The temptation is to pick a tool and hope it solves a problem.
Matt warned against this habit. Many of the impressive tools businesses adopt sit on top of generic models and often do not address the specific issue at hand. By the time a team commits to procurement and rollout, the goalposts have already moved.
The first real challenge is not the tool. It is clarity. Businesses must identify the problem before they consider technology. Without this step, even well-funded AI projects struggle.
For more on how integration and tooling choices influence performance, read our analysis on how retailers reduce operational strain through stronger system foundations.
One of the most common questions businesses ask is how to get started when the topic feels too big.
Matt spoke openly about the pressure leaders feel to use AI simply because the market expects it. He urged leaders to step away from the idea that adopting AI is a requirement or a badge of progress.
The better approach is to start small.
A small project reduces risk and creates space for learning. It allows a team to test assumptions, adjust course and build momentum through small, meaningful wins. This approach also protects the team from burnout and creates a clearer understanding of what works and what does not.
Small projects scale more effectively because they grow from practical experience rather than theoretical ambition.
Many searches begin with questions like “what is the best AI tool?” or “which AI model should I use?”. These assume the technology is the starting point. Matt’s message was direct. Forget the tools and focus on the problem.
The right use case emerges when a business looks at its own structure. Matt encouraged teams to examine challenges through a simple lens of industry, role and task. Each layer helps identify what is slowing the business down or preventing teams from operating at their best.
Here are examples of where use cases can emerge.
Each situation contains a problem that can be analysed for how well it suits automation, and the question is whether AI genuinely contributes to the solution. Sometimes the answer is yes. Sometimes the answer is better data or improved workflow design.
Strong AI adoption begins with use cases that matter.
For more on how teams identify high value opportunities, see our guidance on the technologies shaping the next generation of customer experience.
Culture is often overlooked when leaders think about AI implementation. Matt highlighted that the biggest shift businesses face is not technical but behavioural. People need room to experiment, question and learn without fear of failure.
Internal workshops, hack sessions and experimentation weeks help teams build familiarity with AI and confidence in their own judgement.
Matt described how Sherwen ran weekly AI strategy workshops that encouraged the entire team to explore new ideas freely. The results were unexpected. Many tasks saw time spent reduced by 30%, 40%, 50% or even 60%, not because the company enforced a process but because people explored independently.
A culture-first approach accelerates adoption and reduces friction across departments.
Many teams search for guidance on why AI outputs can feel unreliable or inconsistent.
Matt addressed this point directly. Poor data leads to poor outcomes. If product descriptions are unclear, pricing records are inaccurate or customer attributes are incomplete, AI will struggle to return meaningful results.
Businesses often expect machine-level precision from systems that do not yet have access to reliable information. Data quality issues are magnified under AI because the machine amplifies whatever it receives.
A first step in AI readiness is to examine where data is incomplete, outdated or inconsistent and begin improving those areas. AI cannot replace missing foundations.
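To make that first step concrete, here is a minimal sketch of what a data-readiness check might look like, written in Python with hypothetical field names. It simply counts how often required product fields are missing or blank; it is not a prescribed tool, just one way a team could size the gap before pointing AI at the data.

```python
# A minimal sketch of a data-readiness check.
# Assumes product records arrive as a list of dictionaries;
# the field names below are hypothetical, not a real schema.
from collections import Counter

REQUIRED_FIELDS = ["sku", "description", "price", "category"]

def audit_records(records):
    """Count how often each required field is missing or blank."""
    gaps = Counter()
    for record in records:
        for field in REQUIRED_FIELDS:
            value = record.get(field)
            if value is None or str(value).strip() == "":
                gaps[field] += 1
    return gaps

# Example data illustrating a complete record and an incomplete one.
products = [
    {"sku": "A100", "description": "Oak dining table", "price": 499.0, "category": "furniture"},
    {"sku": "A101", "description": "", "price": None, "category": "furniture"},
]

for field, count in audit_records(products).items():
    print(f"{field}: missing or blank in {count} of {len(products)} records")
```

Even a rough audit like this gives a team a shared, measurable picture of where the foundations need work before any model is introduced.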
Another common question is whether existing technology stacks can support AI long term. Matt explained that as businesses adopt AI more deeply, the demand for structured, connected and accessible data will grow.
Teams that already use composable architecture, orchestration layers or modular approaches to their tech stack are in a stronger position. These structures help AI see more of the business and provide outputs that reflect real operational context.
Businesses with tightly coupled or legacy architectures may find the early stages of AI implementation harder because the machine lacks visibility.
Infrastructure is not the first step but becomes increasingly important as AI adoption grows.
Leaders often search for advice on whether they should manage AI adoption internally or bring in external partners.
Matt referenced research showing that external support can roughly double, or even triple, the success rate of AI projects. Teams already have full workloads, and learning a new technology while maintaining performance can be challenging.
External partners bring experience, pattern recognition and an understanding of what questions to ask during planning. They also help avoid pitfalls that only appear after working through dozens of implementation cases.
Our own AI Readiness Assessment gives businesses a structured way to understand their current position, their opportunities and their risks before committing to a full project.
For a structured approach to evaluating your preparedness, take our AI Readiness Assessment.
AI implementation becomes more predictable when businesses stay grounded in their own needs.
The path forward begins with small steps, supported by cultural alignment, meaningful use cases and sound infrastructure.
The process works best when teams stay curious and ask clear questions that match their goals.
This mindset helps businesses move through uncertainty with structure and confidence.
Before reviewing the takeaways, it helps to understand why the question of where to start with AI appears so frequently in search.
Many leaders want a direction that is practical, realistic and avoids unnecessary risk.
These points summarise the core guidance.
- Identify the problem before choosing a tool.
- Start small, with one focused project and a small team.
- Build a culture where people can experiment without fear of failure.
- Improve the quality of the data AI will rely on.
- Strengthen infrastructure as adoption deepens.
- Bring in external support where experience is thin.
These steps form a sustainable path into AI implementation at any scale.
Many organisations begin with similar questions when exploring AI adoption. These answers reflect the most frequent themes seen across search engines, forums and workplace discussions.
Begin with one small problem and a small team. Clarity and confidence grow faster through focused experiments than through large commitments.
Tools come after the use case. Identify the problem before exploring technology. Let the challenge guide the choice.
Run internal workshops, encourage experimentation and let people test ideas freely. Small demonstrations of value shift attitudes faster than broad directives.
AI reflects the data it receives. When data is inconsistent or unclear, the output follows the same pattern. Improving data quality improves outcomes.
External partners help businesses avoid common pitfalls and gain a broader perspective. They also provide structure and experience during planning and rollout.