
The Results are in…

March 28, 2024 by Alistair Enser

Last week, I considered the EU’s proposal to address what it deems “high-risk AI”. This includes outlawing “AI systems used for indiscriminate surveillance applied in a generalised manner” and would effectively make most video analytics illegal in most applications. I asked readers whether they agreed with such a ban.

The results are interesting and unequivocal: 84% said such a ban was not necessary, practical, or possible.

You might think that, as I work in the security sector, my connections would naturally agree that AI helps keep people, assets, and reputations safe. Yet two-thirds of my connections are from outside the security sector, representing a wide range of industries internationally.

What’s more, the results build on a trend I spotted during the first lockdown, when I posted a similar survey. It asked whether people would support the use of ANPR (automatic number plate recognition) and mobile phone tracking to alert drivers when they left a tiered area with different travel restrictions. A similar figure – 79% – supported the use of the technology.

This suggests that an increasing number of us are willing to permit the use of technology to enforce lockdown regulations, keep us safe and help us get back to something like normality.

An acceptable trade-off

By necessity, this involves a trade-off between protecting civil liberties and taking the pragmatic position that compromises are needed if the pandemic, terrorism, violent crime and other serious issues are to be addressed. This ‘trade-off’ is reflected, for example, in the official guidance – and the public’s overwhelming acceptance of it – to wear facemasks and maintain a safe social distance. These ‘restrictions’, while sometimes uncomfortable, are leading us slowly back to the greater freedom of normal life.

The balance between civil liberties and pragmatism is shown in the graph above from the Financial Times, which places the UK in the enviable position of the bottom right quadrant, where Covid-19 cases are dropping each week and mobility is increasing as lockdown restrictions are eased and shops, pubs and public spaces open once more.

Of course, the other reason the UK finds itself in such a promising place is the success of its vaccination programme – something that was perhaps not anticipated at the start of the pandemic, but which has proved genuinely world-beating. The impact of vaccination on Covid-19 in the UK is revealed in another graph, below, which shows how cases have tumbled as the programme has moved through the age groups.

Balancing risk

For me, the EU’s concern should not be whether AI must be banned, but how risk is managed. It’s not about the technology – any technology – but about how we balance risk and security. It’s about how we tread the fine line between protecting our civil liberties and being sufficiently safe that we can go about our day-to-day business.

I am a strong advocate for human rights, equality, civil liberties, and the like, and certainly have many concerns over the use of AI in general life (shopping, phones, social media, etc.), as well as the potentially unethical use of security equipment – but we must look at balanced solutions to the challenge.

This is important because, in the debate around civil liberties, we must not lose sight of this need for security. Last week I posted news that UK officials believe a successful chemical, biological or nuclear attack is likely to occur in the next ten years, and that police and government bodies are urging the public to be vigilant as leisure, hospitality and workplaces re-open in June.

For me, putting the genie back in the bottle is simply an impossible pipedream, one that would deny us the huge benefits that AI can bring in many walks of life, not just security.

We should focus on ethical use and implementation. We should listen to public opinion, and then we should supplement existing bodies such as the Information Commissioner’s Office (ICO), the Surveillance Camera Commissioner and others with a focused effort on how to manage these complex needs, keeping a balanced view of risk and benefit.

All of this shows that the security industry’s job is not yet done. We have much to offer this debate, and we must ensure our practices become part of the solution – not the problem.

Times have been tough throughout the economy, and our industry has not been immune. Reliance High Tech has been fortunate to prosper during the pandemic and has remained fully operational. Yet over recent months I have seen several organisations post annual results that are perhaps less healthy, and I have noticed sales and acquisitions going through that may not have been planned before the pandemic.

But given the real and increasing need for expertise, experience, and technology to address the greater threats that society faces, let’s hope that the security industry is finally emerging from what has been a challenging time. Let’s use our voice as a power for good and demonstrate our value. Indeed, businesses and the public alike are – as always – relying on us to keep them safe.