West Midlands Police AI Error - A Watershed Moment for AI in Policing?

Oliver Waring, Senior Consultant

January 2026

What does the recent news about West Midlands Police's use of AI, which led to Maccabi Tel Aviv fans being banned from a football match, tell us about the future of AI in policing? Is this a key moment? Oliver Waring, Senior Consultant at Robiquity and an ex-police sergeant with more than 15 years' service, dives into the story.

The recent events with West Midlands Police have reignited a critical conversation about the use of artificial intelligence in policing. Much of the public discussion has understandably focused on the actions taken and decisions made, with scrutiny from the Home Secretary, HMIC, the Home Office and, of course, the media.

The situation is multifaceted, and I am not a political commentator… but one critical issue underpins much of it: how AI is understood, governed and relied upon in high-stakes Public Sector environments.

The Principles Are Clear, but Implementation Is Hard

The National Police Chiefs' Council helpfully sets out principles for AI for individual forces to consider and reference in their own implementations. They are pretty unambiguous. Of the eight principles outlined, accountability, transparency and explainability are directly applicable in this situation and are essential safeguards for the ethical use of AI in policing.

At first glance, the West Midlands Police scandal suggests a gap not in intention but in execution. The use of AI appears to have outpaced the governance, education and guardrails required to use it safely, something that is unacceptable in the Public Sector.

When that happens, accountability does not disappear. It concentrates. This leaves the senior leaders, who hold ultimate responsibility, exposed and struggling to confidently explain how a tool was used, what it was relied upon for and where its limitations lay.

Overreliance: A Known Risk, Poorly Understood

What we are seeing is a well-documented phenomenon in AI adoption: overreliance.

Generative AI systems are designed to produce fluent, confident outputs. But confidence is not accuracy. Without education on fact-checking, source validation and the probabilistic nature of these tools, outputs can be mistaken for truth rather than suggestion.

In this case, AI reportedly hallucinated a football match that never took place. That fabricated detail then found its way into the intelligence cycle and public-facing material, and influenced operational decision making. It was not the only detail under consideration, of course, but when intense scrutiny is applied in hindsight, explanations can be as fragile as a house of cards.

Governance Gaps Can Create Harm

This was not a minor technical error; it was a governance failure with tangible impact. When AI is introduced without:

· Clear boundaries on acceptable use
· Mandatory human verification and source checking
· Defined ownership and accountability
· Education on known risks such as hallucinations and bias

… the damage is not theoretical. It can affect and harm organisations, leaders, customers and, in this case, the wider public.
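To make the "mandatory human verification" point concrete, here is a minimal sketch in Python of what such a gate might look like. This is illustrative only: the class names, fields and checks are hypothetical assumptions, not any force's or vendor's actual tooling. The idea is simply that an AI-generated claim fails loudly, rather than quietly entering a report, unless a named, accountable human has traced it to a source.

```python
# A minimal sketch (hypothetical, for illustration only) of a
# "human verification before use" guardrail: an AI-generated claim
# cannot enter a report unless a named reviewer has checked it
# against an independent source.

from dataclasses import dataclass, field


@dataclass
class AIClaim:
    text: str                       # the AI-generated statement
    source_url: str | None = None   # evidence located by a human reviewer
    verified_by: str | None = None  # named, accountable reviewer


@dataclass
class Report:
    claims: list[AIClaim] = field(default_factory=list)

    def add_claim(self, claim: AIClaim) -> None:
        # Defined ownership: a claim with no named reviewer or no
        # independent source is rejected, not silently accepted.
        if claim.verified_by is None or claim.source_url is None:
            raise ValueError(f"Unverified AI output blocked: {claim.text!r}")
        self.claims.append(claim)


report = Report()

# A hallucinated "fact" with no source fails loudly instead of
# quietly entering the intelligence cycle.
try:
    report.add_claim(AIClaim(text="A high-risk fixture took place last month."))
except ValueError as err:
    print(err)

# The same claim passes only once a human has traced it to a source.
report.add_claim(AIClaim(
    text="A high-risk fixture took place last month.",
    source_url="https://example.org/fixture-record",  # hypothetical source
    verified_by="Sgt. A. Example",                    # hypothetical reviewer
))
```

The design choice worth noting is that the control is a hard stop, not a warning: unverified output cannot proceed, which is what makes accountability explainable after the fact.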

The Known Incident and the Unknown Ones

Perhaps the most uncomfortable question is not about this incident itself, but about the others we may never hear about.

If this hallucination was identified, debated, and surfaced publicly, how many AI-generated inaccuracies were never challenged? How many were quietly accepted? How many subtly influenced decisions without triggering review?

This uncertainty is precisely why responsible and trustworthy AI frameworks exist. Not to slow adoption, but to make it safe, defensible and sustainable.

Opportunity and Responsibility Must Move Together

At Robiquity, we understand both the opportunity and the responsibility that comes with ethical AI adoption.

AI is a fantastic tool with enormous potential across Policing and the Public Sector: from reducing administrative burden, to supporting better and faster insight and decision-making, to helping save lives. But that value is only realised when capability is matched with governance.

Responsible AI adoption requires:

· Clear use-case definition
· Strong governance and assurance
· Human-in-the-loop controls
· Education that focuses on risks and limitations as much as capability

Responsible AI is not a blocker to innovation. It is the mechanism that enables trust.

Learn From It or Fear It?

I think the West Midlands Police story is a moment that will be referenced for years as an example of what can go wrong with AI when understanding and governance lag behind adoption. The choice now is simple: we either learn from it, or we fear it.

If Policing, and the wider Public Sector, treats this as a lesson learned, it can become a catalyst for better education, stronger governance and more confident leadership. If not, it risks becoming the case study that stalls much-needed progress entirely.

If your organisation is already using AI, or considering it, now is the time to ask whether governance, accountability and understanding are keeping pace with your ambition and speed of change. If you’d like to explore how to adopt AI responsibly and ethically, we’re always happy to share our experience - get in touch today.
