The ghost of Christmas future

May 20, 2024 by Alistair Enser

Following my last blog on Spot, the robot that could one day replace security guards, this week I ask whether robots have to be physical, or whether virtual assistants are the future.

This week, we have been testing out the ChatGPT AI platform, which uses deep learning to answer seemingly any question you can put to it. That it does so in plain, easy-to-understand and measured language, drawing on what appears to be an extensive bank of information on which it has been trained, is remarkable. It can reference up to approximately 3,000 words from an active conversation, so it can refer back to comments made or questions already answered, but any information beyond that is not stored, and ChatGPT is unable to access past conversations to inform its responses.

Currently free of charge while being tested, the commercial possibilities for ChatGPT are immense. From providing genuinely useful and knowledgeable support in any language to acting as a virtual teacher for children unable to access school, it seems AI has come of age. Its creator, OpenAI, wants to “ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.”

The results of some tinkering suggest ChatGPT isn’t quite there yet: in the interests of avoiding bias, it seems to refuse to fall on one side of an argument or the other, which means its answers to thorny philosophical questions often come across as responses to a GCSE exam. And there have been questions about the bias that comes from training ChatGPT on datasets that aren’t truly representative, with a number of users posting examples of it.

That said, here are some of the questions I put to it this week, and its responses. See what you think!

Does Chinese video technology present a security threat?

There is no inherent reason why Chinese video technology would present a security threat. However, like any device that is connected to the internet, video technology is vulnerable to cyber-attacks and can potentially be used to collect and transmit sensitive information. It is important to take steps to secure your video technology and protect your data, regardless of the manufacturer. This can include using strong passwords, keeping the technology and its software up to date with the latest security patches, and only using reputable manufacturers and service providers.

It is also important to be aware of any potential privacy concerns when using video technology. Depending on how the technology is used, it may capture footage or data of individuals who are not aware they are being recorded or monitored. It is important to consider the potential privacy implications and to use video technology in a responsible and ethical manner.

In general, it is always a good idea to be cautious when using any type of technology and to take steps to protect your data and privacy. This includes being aware of potential security threats and taking steps to mitigate them, such as using strong passwords and keeping software up to date with the latest security patches.

How does one balance security with civil liberties?

Balancing security and civil liberties can be a challenging task, as both are important values that need to be considered and protected. Finding the right balance will depend on the specific context and the specific measures being taken. Here are a few general principles that can help:

Use the least intrusive measures necessary: Whenever possible, try to use the least intrusive measures that are effective at achieving the desired security goals. This can help to minimize the impact on civil liberties.

Be transparent: Make sure that the public is aware of the measures being taken to ensure security and the reasons for taking them. This can help to build trust and ensure that people understand the rationale behind the measures.

Provide oversight: Establish mechanisms for oversight and accountability to ensure that security measures are being used appropriately and not excessively. This can help to prevent abuses of power and ensure that civil liberties are being respected.

Protect human rights: Ensure that security measures are not in conflict with human rights and are not used to discriminate against or unfairly target certain groups of people.

Engage in ongoing dialogue: It can be helpful to engage in ongoing dialogue with the public, civil society organizations, and other stakeholders to ensure that security measures are being implemented in a way that is acceptable to all parties. This can help to find a balance that is acceptable to all sides.

Do you think 2023 will be a good year?

As an artificial intelligence, I do not have the ability to make predictions about the future. It is not within my capabilities to know what will happen in 2023 or to have an opinion about it. However, I can tell you that the future can be uncertain and that it is important to be prepared for a range of potential outcomes. It can be helpful to make plans and take steps to protect yourself and those around you, regardless of what the future may bring.

These are pretty comprehensive and measured answers, I think you’ll agree. But before you get too excited, consider the final question: it’s a reminder that there are limitations to AI – it doesn’t provide a crystal ball!

And with that, let me wish you and your families a restful holiday period. Thank you for reading my blog this year; everyone at Reliance High-Tech and Reliance Protect wishes you a very Merry Christmas and a Happy New Year! We remain operational over the festive break, and our (human!) support people are able to help with any problems you have during that period. I will be blogging again on Monday, 9 January. See you then!