
Could AI be mankind’s best friend?

May 5, 2024 by Alistair Enser

This week, I have been preparing a response to questions from a journalist about AI and, on the back of the UK’s recent AI summit, the subject is timely. The question put to me was whether we should be concerned or excited by the use of AI in the security sector.

While I have opinions on the subject, and regularly air them on this blog, in preparation I switched on the TV. Or, more accurately, I opened iPlayer, where I watched an episode of Hannah Fry's series, The Secret Genius of Modern Life, about the humble passport. Often dismissed as a simple travel document, it actually contains some very sophisticated technology to prevent fraudulent use or tampering while still making us instantly recognisable to border control staff.

The show reveals how the first biometric information was captured by a French police officer in the late nineteenth century: it involved taking measurements of suspects' physical attributes, such as the distance between the eyes, because these tend not to change even when a suspect is disguised. Fascinatingly, the same principles underpin modern facial recognition technology.

Types of AI

Before we address the original question, however, let’s consider the main types of AI, and why it is important to separate them. These are machine learning, deep learning and neural networks.

Machine learning essentially gives a computer access to large datasets and tells it to identify patterns and make predictions. As those datasets grow, the predictions the computer makes should become more accurate.
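For readers who like to see the idea in practice, here is a toy sketch in Python. Everything in it is an assumption chosen for illustration – the scikit-learn library, the synthetic data – but it shows the pattern the paragraph describes: the more examples the model sees, the better its predictions tend to be.

```python
# A toy machine-learning loop: show a model labelled examples, let it
# find patterns, then test its predictions on examples it has not seen.
# The dataset here is synthetic and purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

for n_samples in (100, 1_000, 10_000):  # bigger and bigger datasets
    X, y = make_classification(n_samples=n_samples, n_features=20,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"{n_samples:>6} examples -> "
          f"{model.score(X_test, y_test):.0%} accurate on unseen data")
```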

Deep learning builds on machine learning by applying its algorithms to more than one specific task. It often uses neural networks, which mimic human thinking by calling on interconnected layers of computations. The result is an adaptive system that allows computers to learn from their 'mistakes' and improve over time.
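Again, for the curious, here is a minimal sketch of those interconnected layers – assuming scikit-learn's small MLPClassifier and synthetic data, chosen only to illustrate a network reducing its own error as it trains:

```python
# A toy neural network: two small hidden layers of interconnected
# computations. Its recorded loss curve shows it 'learning from its
# mistakes' - the error falls as training progresses.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32, 32),  # two hidden layers
                    max_iter=300, random_state=0)
net.fit(X, y)
# loss_curve_ records the training error after each pass over the data.
for i in range(0, len(net.loss_curve_), 50):
    print(f"pass {i:>3}: error {net.loss_curve_[i]:.3f}")
```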

Each approach can be placed on a ‘safe/dangerous’ scale, and an ‘ethical/unethical’ scale.

Machine learning is not dangerous in itself, in that it doesn't allow the computer to 'think'. It could, of course, be used unethically. Deep learning, built on neural networks, however, has the potential to be dangerous: fears of a powerful AI that could code new AI, creating systems that adapt and learn, provide cause for concern for those worried about an omnipotent 'Terminator'.

A safe pair of hands

Machine learning doesn't have the same potential for misuse as deep learning, and both currently lack purpose and consciousness: they are simply task-orientated. But machine learning's role in biometric technologies such as facial recognition helps to complete those tasks far faster and far more efficiently.

In The Secret Genius of Modern Life, we learn that humans asked to match similar faces typically get 16% wrong – even border guards get the same proportion incorrect! Hannah Fry herself manages 13%, which is good but still leaves plenty of room for mistakes. Yet the latest facial recognition technology designed for use at borders gets only 1 in a million wrong. Think about that: it makes a mistake in just 0.0001% of cases.
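It is worth doing the arithmetic on those figures. The sketch below scales each error rate to a notional one million identity checks (my illustrative number, not one from the show):

```python
# Rough arithmetic on the error rates quoted above, scaled to a
# notional one million identity checks (an illustrative figure).
checks = 1_000_000
error_rates = {
    "average human":      0.16,           # 16% wrong
    "Hannah Fry":         0.13,           # 13% wrong
    "facial recognition": 1 / 1_000_000,  # 1 in a million wrong
}
for who, rate in error_rates.items():
    print(f"{who:<20} ~{rate * checks:>9,.0f} mistakes per million checks")
```

That is roughly 160,000 human mistakes for every single machine error.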

Everybody has budget constraints, and few in the private or public sectors have spare resources. Users are increasingly trying to spot patterns of behaviour, and to identify and mitigate potential risks, with fewer resources available to them. AI in the form of facial recognition is a business tool offering such undeniable benefits that not using it would be ridiculous.

Yes, of course, bias in datasets must be addressed, but the issue has improved markedly over the last few years as developers have recognised the problem. And no, AI should never act as judge and jury: indeed, one study shows that when AI technology is combined with human intelligence, the accuracy of facial recognition "shot up", producing better results than AI used on its own, or than two human forensic experts working together. While results may vary, choosing a reputable AI partner is likely to give results that are more accurate than relying on human analysis alone.

Why not?

This shows that tech can be a force multiplier. To return to the original question posed to me, AI should not be used blindly, of course, and must be deployed with ethical use in mind. It needs to be used where appropriate and where it adds value, not for its own sake (be prepared for the unscrupulous selling it into applications where it isn't required!). Many organisations have it and don't understand why; many may not need it at all. But be in no doubt: in the right use case, AI can be game-changing. I am very excited about its potential.

AI is here to stay. What we now need is a debate on how it is best used, and on how we avoid the 'rise of the machines'. Technology can be scary – think of the man with the red flag who walked in front of the first motor cars – but it will transform the security sector over the next few years, and because it works best in concert with humans, the day it takes all our jobs is still a long way off.

Organisations that want to put AI to use should talk to experts such as Reliance High-Tech: we understand the technology and know how to balance the possibilities it offers against users' requirements. The future is bright with AI.