
A ban on facial recognition is not the answer

April 20, 2024 by Alistair Enser

On the back of a highly successful Technology and Innovation Day in London last week, where we discussed the role of technology in a changing world, I was interested to see a call in the news for live facial recognition (LFR) to be banned.

The call follows a study undertaken by Matthew Ryder QC on behalf of the Ada Lovelace Institute, which campaigns “to ensure that data and AI work for people and society.” The report claims “There is an urgent need for new legislation specifically addressing the use of biometric technologies, as existing frameworks are inadequate and failing to keep pace.” Until a new framework is put in place to govern the use of LFR and a code of practice is agreed, the report’s author suggests that “The use of LFR in public should be suspended.”

I have talked before about the pitfalls of technology being used indiscriminately, and I agree broadly with the Institute’s calls for the following:

  1. ‘Legislation’ governing the use of biometric technologies.
  2. Oversight and enforcement by a national, independent and properly resourced regulatory body or function.
  3. Standards of accuracy, reliability and validity, and an assessment of proportionality that considers human rights impact, before biometric technologies are used in high-stakes contexts.

What I don’t agree with, however, is “an immediate moratorium on ‘one-to-many’ identification and categorisation in public services, until legislation is passed.”

At our Technology and Innovation Day, the room was full of end-users who rely on technology to keep safe the offices, streets, stations and other properties they are responsible for. The local authorities and commercial customers in attendance all face similar challenges when it comes to identifying, reviewing, packaging and managing evidence. Like everyone else’s, their physical and monetary resources are already stretched, so any technology that helps them filter the data at hand more efficiently, and package it so that qualified personnel can make the final adjudication as part of an investigation, is understandably a very popular concept.

Digital forensic management systems, object verification, and other video analytics solutions built around AI can help them quickly locate persons of interest or incidents from days of video footage. Looking for a lost child in a crowd by filtering for ‘children’, or searching for people matching a description such as ‘green jumper’, could make the difference between life and death in a potential abduction. Very few of us want to live in a surveillance state, yet we all want to feel safe and secure too. I have grave concerns that banning the technology until it is legislated for will choke innovation and drive genuine use cases away; and since technology is always developing, are we not in danger of ‘tomorrow never comes’ when it comes to legislation keeping pace?

Conscious or unconscious bias?

A police officer draws on years of experience and training to spot likely offenders on the street, making a subjective decision to stop and search someone because, for example, they match the description of a suspect or otherwise meet the officer’s threshold for reasonable suspicion. Does this make the officer more or less susceptible to misinterpretation? When looking for a ‘needle in a haystack’, using LFR to identify individuals in a crowd is, in some respects, more dispassionate and more respectful of civil liberties, especially if that ‘initial filter’ is then finalised by a human expert. Surely it is less invasive, and more practical, than an extra 100,000 officers standing on every street corner?

We already have rules under GDPR which apply to all data, including the use of video to identify individuals. These are supplemented by tools such as redaction which, as our delegates saw the other day, can now be managed at speed, in near real-time, within a software solution, protecting the privacy of anyone not of interest, whatever the use case. The ICO can levy fines against people or organisations that misuse data. Is this not sufficient to allow progress, while the rules are continually reviewed and updated in the background to keep pace with technology?

Legislation will always be prescriptive. It can’t cover every use case. What might be a legitimate interest under the terms of GDPR in one application may not be in another. For example, schools identifying children who are habitually late or show unusual attendance patterns might help teachers spot those facing a safeguarding issue at home or bullying. With so many use cases for video analytics, could you possibly legislate for each one?

There seems to be a groundswell of opinion against AI in the UK at the moment. The European Union has published a policy paper that seeks to boost trust in AI. What is often not mentioned in the same discussion is that the same policy wants to “make the EU the place where AI thrives from the lab to the market.” The EU recognises that too much regulation will stifle innovation in an area where the Chinese, for example, who ‘appear’ to have fewer qualms about the irresponsible use of technology, want to lead the world. This technology will be developed with or without the UK and Europe. It is one reason why an editorial in response to the Lovelace report argued that a ban on LFR is not sensible.

Better to have self-administered systems today, while a formal framework is developed, than no systems at all. For be in no doubt: in practical terms, the audience at our event the other day see an awful lot of benefit in video analytics and, for the most part, feel they can manage the risks, because they believe they are using these tools for the legitimate goal of keeping people safe and secure.