
Is a ban on AI really the answer?

April 25, 2024 by Alistair Enser

At a time when our vaccination programme is going from strength to strength and we are starting to see the green shoots of recovery, does a forthcoming piece of EU regulation confirm the type of thinking that drove many UK citizens to vote leave?

News leaked to the BBC last week that the EU plans a ‘blanket’ ban on the use of Artificial Intelligence (AI) algorithms and facial recognition under proposed new AI regulations.

Under the proposed plans, the EU promises tough new rules for what it deems “high-risk AI.” This includes banning “AI systems used for indiscriminate surveillance applied in a generalised manner.” The use of AI by the military will, apparently, be exempt, as will systems used by “authorities in order to safeguard public security.” However, the use of algorithms by, for example, the police or 999 services to prioritise the dispatch of emergency services, or by recruitment businesses and employers to profile candidates, could be outlawed.

In part, what the EU is trying to achieve is commendable. I believe it is rooted in concerns about our private data: how it is captured by big tech, packaged and used to build a profile that then has commercial value to advertisers, for example. I’m sure most of us don’t support private companies profiling, manipulating and targeting us purely for their own commercial gain.

I am also aware of claims of bias in AI, and of the argument that the algorithms that underpin much of it must be readily understood. Both present challenges, but the latter owes much to machine learning; specifically, how we teach AI to identify patterns, and the data used as part of that learning process. Awareness of the need to get this right is now widespread, and these challenges are eminently manageable. On the complexity point, plenty of initiatives seek to make AI more open and more readily understood, such as in the case of employment profiling.

A fine line

Yet the EU’s plans to include surveillance and facial recognition technology under a blanket ban on “AI” seem, in my opinion, to be overreaching.

I understand why there may be concerns about AI in any form being used for commercial gain – including the social profiling that the article mentions – or for manipulating the behaviour of consumers through targeted subliminal messaging designed to change opinions. Many of us feel this is not right.

On the other hand, the article mentions facial recognition, albeit with a caveat that it is acceptable when used for public safety (but only by the “authorities”). Yet what about a building owner using the technology for the safety of the building’s occupants, not for commercial gain? Would that be considered legitimate use? Would they still be permitted, for example, to use facial recognition in access control systems, or in a visitor management system to validate that users match the registered image held on file? And if not, why not, when the safety and security of people and assets is the fundamental concern?
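To make the access-control scenario concrete, here is a minimal sketch of what such a check might look like. It assumes a face-embedding model is available; embed_face() and the similarity threshold are hypothetical placeholders, not the API of any particular product.

```python
# Sketch of the access-control check described above: compare the face
# captured at the door against the embedding stored for the registered user.
# embed_face() and the 0.6 threshold are hypothetical placeholders.
import numpy as np

def embed_face(image) -> np.ndarray:
    """Placeholder: a real system would run a face-embedding model here."""
    raise NotImplementedError("plug in your embedding model")

def is_same_person(live_embedding: np.ndarray,
                   enrolled_embedding: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Return True if the two embeddings are close enough (cosine similarity)."""
    cosine = np.dot(live_embedding, enrolled_embedding) / (
        np.linalg.norm(live_embedding) * np.linalg.norm(enrolled_embedding)
    )
    return cosine >= threshold

# Example with dummy vectors standing in for real embeddings.
if __name__ == "__main__":
    enrolled = np.array([0.12, 0.80, 0.33, 0.45])   # stored at registration
    live = np.array([0.11, 0.79, 0.35, 0.44])       # captured at the door
    print("Access granted" if is_same_person(live, enrolled) else "Refer to reception")
```

The point is that the system does nothing more than answer “does this face match the record on file?” – a safety check, not commercial profiling.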

For me, this is about walking the fine line between benefit and hindrance. AI allows organisations to harness data so that we are connected more efficiently. It helps them provide services that are better tailored to our needs and, yes, it helps us stay safe. Used this way, it improves our standard of living.

Misusing that data and analysing it without the user’s permission is ethically unsound – as was the case with Cambridge Analytica, for example – and clearly crosses the line. That said, many would feel that existing laws were sufficient to address the Cambridge Analytica issue, so why add more layers? AI takes many forms, and given how widespread its use is these days, it would be almost impossible to put that genie back in the bottle. Is an outright ban the right solution?

My thoughts – and yours?

In essence, my concerns about the EU’s proposed regulations are threefold:

Who decides?

Under the EU’s proposals, who decides what is acceptable and what isn’t? What is “high-risk AI”, and what is “low-risk”? On the one hand, AI would be banned for “indiscriminate surveillance applied in a generalised manner.” Does this mean it would be okay to use it in a discriminate, specific way? (What does this even mean?) In the UK, we already have bodies that govern the use of video surveillance – the Surveillance Camera Commissioner – and our data – the Information Commissioner’s Office. There are existing rules in place to protect data privacy, of course, including the EU’s own GDPR. Ironically, some of the stakeholders involved in the EU study suggested that a binary high/low rating was “sloppy at best and dangerous at worst”, with “too many loopholes and too much vagueness.”

A tool, like any other

Secondly, the proposed plan reveals confusion about the capabilities of the technology. AI is not judge and jury – it’s just a tool. What’s the difference between putting 50 police officers in a room to sift through hours of video to profile potential suspects, and subjecting the same video to an AI-enabled search so that, for example, three or four police officers can check the results it delivers?

Both approaches involve searching for an individual based on physical description, direction of movement and other characteristics provided by a witness, for example. But one approach saves massive amounts of manpower. Why should you be more likely to fall foul of the EU’s rules for that? Using AI in that instance saves time and money and frees up valuable resources. As long as there is a human conscience involved in the process, why would we trust the result any less?
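For illustration, the triage described above amounts to nothing more exotic than filtering candidate sightings against a witness description and handing the shortlist to officers for review. The sketch below makes that explicit; the record fields and example data are hypothetical, not drawn from any real system.

```python
# Rough sketch of AI-assisted triage: analytics produce candidate sightings,
# code narrows them down by the witness description, and human officers
# review whatever is left. All fields and data here are illustrative only.
from dataclasses import dataclass

@dataclass
class Sighting:
    camera: str
    timestamp: str
    clothing_colour: str
    direction: str

def matches_description(s: Sighting, colour: str, direction: str) -> bool:
    """Keep only sightings consistent with what the witness reported."""
    return s.clothing_colour == colour and s.direction == direction

sightings = [
    Sighting("CAM-03", "14:02", "red", "north"),
    Sighting("CAM-07", "14:05", "blue", "north"),
    Sighting("CAM-09", "14:07", "red", "north"),
]

# The algorithm only shortlists; people make the judgement call.
shortlist = [s for s in sightings if matches_description(s, "red", "north")]
for s in shortlist:
    print(f"Review needed: {s.camera} at {s.timestamp}")
```

The decision still rests with the humans reviewing the shortlist; the software simply spares 50 officers the sifting.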

It’s here: deal with it

Finally, it isn’t possible to uninvent AI, in the same way that we can’t uninvent the internal combustion engine, or the firearm. We simply put rules in place to manage or limit their use. If I want to keep my children safe on the internet, do I throw away their computer, or do I help build their knowledge and give them the tools to make their computer use safe?

What do you think? Do you believe it’s necessary, practical and possible to ban AI in the way described in this article? I would be interested in your thoughts. Please let me know by answering a one-question survey here.

Stay safe!