
The imaginary monster under the bed

May 5, 2024 by Alistair Enser

I suppose a scary story is pretty much expected on Halloween, so allow me to address some industry developments from the last few weeks which many warmly welcome, but which a few others find frightening.

You perhaps won’t be surprised where I sit on the subject of technologies such as facial recognition and AI. I don’t believe in scaremongering, but I do believe in educated debate, reasoned data-driven analysis and a fair and balanced view on risk. Any emerging technology will improve over time and should absolutely only be employed for ‘reasonable and proportionate’ use.

Now, I accept that AI and facial recognition have many shortcomings, but the technology cannot be un-invented, any more than electricity or the industrial revolution could be. Simply banning a technology leaves no space for improvement, whereas fair and proportionate regulation of its use will help protect privacy and ethics, as well as ensuring that the technology becomes better and more useful. For me it is about balance.

Should a hypothetical algorithm that predicts illness with 80% accuracy, compared with a doctor's 70%, be considered because it could improve outcomes, or should it be outlawed because it is wrong 20% of the time? The same should be asked of facial recognition and other systems.


Many of you will have noticed the news last week of a government initiative, Pegasus, which aims to reduce retail crime. Formulated in response to what retail chiefs have called a “licence to shoplift”, the initiative is based around information sharing and a commitment from the police to prioritise urgently attending the scene of a shop theft where there has been violence against a shop worker, where security guards have detained someone, or where attendance is needed to secure evidence.

The plan sets out advice for retailers on how to provide the best possible evidence for police to pursue in any case, advising them to send CCTV footage of the whole incident as soon as possible after an offence has been committed. Where CCTV or other digital images are secured, the police will run these through the Police National Database using facial recognition technology to identify and prosecute offenders.

All very timely and sensible, if you ask me, and I would expect this combination of technology, process and commitment to yield results. That’s important, because the British Retail Consortium estimates there were 850 assaults or attacks on retail staff every day in the UK last year, well up on the 450 a day in 2019/2020.

I would absolutely add one very big caveat: any potential match should be reviewed and validated by a human, to help minimise ‘machine-learned’ biases. The technology should be used to augment and support human investigative endeavours, not to replace them.

The news wasn’t welcomed by everyone, however. A letter to the UK’s largest retailers from Liberty, Amnesty International and Big Brother Watch, among others, argued that the scheme relies heavily on facial recognition technology to combat shoplifting and that retailers’ participation will therefore “amplify existing inequalities in the criminal justice system”.


As you will know, I’m a firm supporter of anything that makes our lives better, safer and healthier, and it is my strongly held opinion that technology has a firm role to play in improving quality of life and keeping us safe. I wrote recently that I am ‘of course’ not blind to some of the issues that our reliance on technology has created: it’s becoming apparent that our use of technology is outpacing our ability to adapt to the social, ethical and indeed practical questions it raises, and that leaves us with important questions to answer.

There need to be adequate systems in place to support the ethical and reasonable use of facial recognition technology, for example. But simply banning it outright out of fear of misuse is not only impractical, but counterintuitive. Where do the rights of shop workers such as Charlene Corbin, who was assaulted in her Co-op store, feature in the letter writers’ arguments? Those behind the letter claim that underlying social pressures such as the cost-of-living crisis need to be addressed to reduce retail crime. But if retailers have to close stores due to persistent theft, as has been suggested, how will that help those living in deprived areas in particular, where poor or expensive transport means they rely on their local supermarkets?


In some ways, the debate around facial recognition mirrors a larger one underway around AI. The UK hosts an international summit on AI at Bletchley Park later this week, where representatives of the largest creators and users of AI, alongside politicians, regulators and academics, will discuss how the technology can be used safely. At the time of writing, it is understood that Elon Musk will attend the event.

The EU is already quite far advanced in legislation around the use of AI, but differences exist between its “regulate the technology” approach and those that seek instead to regulate the use of the technology. Even so, one of the EU’s most senior officials has warned against being “paranoid” or too restrictive when regulating generative AI, because fear of the technology could very easily stifle innovation.

As UK prime minister Rishi Sunak greets world leaders and big tech bigwigs this week to debate the future of this promising technology, I trust that AI’s potential to save lives, reduce crime and manage disease remains uppermost in their minds, alongside the need to ensure it is used ethically and without unreasonable prejudice.

As the film ‘Monsters, Inc.’ and its character Sulley remind us, not all the monsters under the bed are necessarily bad.