One in the AI for the prophets of doom
November 29, 2023 by Alistair Enser
If you paid much attention to the news over the last few weeks, you would be forgiven for not wanting to leave your house, such is the fear around AI.
A recent letter co-signed by tech leaders and pioneers in AI warned that AI poses such a risk to humanity that mitigating it should be treated as a global priority alongside nuclear war and pandemics. Signatories included dozens of academics, senior bosses at firms including Google DeepMind, the co-founder of Skype, and even the chief executive of ChatGPT-maker OpenAI. The letter comes on the back of warnings from luminaries such as the so-called Godfather of AI, Geoffrey Hinton, who quit his job to warn about the growing dangers of developments in AI.
Things reached a crescendo this week when Mo Gawdat, AI expert and former chief business officer at Google X, warned people who don’t already have children to hold off on starting a family as the rapid ascent of AI continues. “The risks are so bad, in fact, that when considering all the other threats to humanity, you should hold off from having kids if you are yet to become a parent,” he said.
Given such hysteria, it’s no surprise that we are now scared of AI. A poll this week by YouGov revealed a 10-percentage-point rise in the number of Brits who see artificial intelligence as a top threat to human survival.
As the sun came out this week across the UK and we enjoyed the rare sight of clear blue skies, I wondered to myself: could those skies be replaced by the flame-filled image at the top of this article (created by AI, no less)? Should we really be worried about AI?
When experts like Hinton speak on the subject of AI we should, of course, listen. But I would argue that the fears currently being expressed around AI amount to little more than scaremongering, consistent with an attitude towards technology that has existed for centuries, perhaps longer. Technologies as diverse as the printing press, the steam engine and the computer all arrived accompanied by fears over what they might bring. In each case, those fears proved unfounded.
Indeed, I wonder whether some of the warnings around AI can be reduced to fears and frustrations about the pace of its development, and the fact that generative AI, based on large language models (LLMs), remains firmly in the hands of the usual big tech players. How much of this is sour grapes?
We are surely right to be concerned about the potential of AI to eradicate jobs. But how many more jobs will be created as workers are redeployed to do the things that AI cannot, and will not, be able to do? An ageing global population supported by dwindling numbers of (tax-paying) workers is unsustainable: AI could boost global GDP by 7% over a ten-year period, according to Goldman Sachs – a useful increase given the demographic crisis we face.
Besides which, amid all the doom-mongering around AI, we forget its immediate benefits. Over two years ago, I wrote about the use of AI by the oncology department at Addenbrooke’s Hospital in Cambridge. In diagnosing prostate cancer, scans are taken, and these may involve up to 150 separate images, each of which must be examined manually. Working with Microsoft, the department uses AI as part of the ‘InnerEye’ programme to sift through the images, reducing the time taken to diagnose and treat patients. In doing so, the hospital generates as much data in a day as it once did in an entire year. I don’t imagine there would be many people arguing that the development of AI for applications such as this should stop.
Should there be guardrails in place to govern the development of such a powerful technology as AI? Of course. But we place regulations around loads of technologies – such as the installation of electrical wiring, the roadworthiness of cars, or the use and storage of our data. Does anyone seriously suggest switching off every computer around the globe just because ‘bad actors’ can take control of corporate networks and steal money from bank accounts? Of course not.
Returning to that YouGov survey, consider this. While the number of people who fear AI might cause human extinction has risen by 10 percentage points, it still stands at only 17%, compared to nuclear war at 55%, global warming at 38% and a pandemic at 29%. Given we weathered the last of those quite recently, I would argue we are safe for now…