As I watched my son race at the Manchester Invitational Athletics meeting last week, I was reminded of what insurer Allianz said when challenged over its sponsorship of the Winter Olympics and the human rights record of the host nation, China:
“We are present in over 70 countries worldwide, including those that have a different view of human rights than we do. We firmly believe that our presence in these countries contributes to the prosperity and security of the people there, based on the values we stand for and live.”
Separately, the BBC’s Panorama last week addressed the subject of AI, asking whether it represents a saviour or a threat. It’s well worth watching, not least because one section of the programme examined the use of video surveillance in smart cities in China.
China’s global ambitions
What emerged for me is that China has set out an open and deliberate strategy to become a world leader in technology, and AI in particular, because it sees technology not only as economically advantageous but as the key to global power. President Xi himself oversees a technological development programme that has been compared to the country’s development of nuclear arms under Mao. The USA has lately woken up to this fact and views China’s efforts as a “challenge to be the leader of the world”, according to the Chair of the US Future of Defence Task Force.
From a ‘commercially idealistic and non-political’ perspective, one could argue that there is nothing wrong in China’s aim to be the world leader in AI – but that does presuppose that the motives are economic and ethical.
Fundamentally, is it any different to Boris Johnson’s plans to make the UK a world leader in green energy, or to the US leading the world in social media platforms? Many countries would like to be in China’s position at the forefront of AI development. The country has 1.4 billion people, which provides one of the largest datasets on earth with which to employ machine learning and train AI applications. There are 800 million cameras in the world: Panorama’s producers estimate that more than half of them are in China. On this basis, some of the pushback on China’s vision may be driven as much by commercial rivalry as by humanitarian and political concerns. Let’s be honest: when we look closer to home, Facebook and Cambridge Analytica’s underhand efforts to target voters were far from ethical, so the West is hardly a torch-bearer for the ethical use of tech!
By the way, if you want a good read about the dystopian use of social media and social engineering, I can recommend a book called Fishbowl. Published in 2015, it is scarily close to where we are today.
China’s many critics cite the use of facial recognition technology as a tool to control and “re-educate” the Uyghur people in Xinjiang as part of an Integrated Joint Operations Platform, an approach that has been likened to colonising a nation by technological means. The Panorama programme even revealed what appears to be early research into using AI to identify emotions among Uyghurs at what the authorities have called “re-education camps”. Chinese patents covering the use of AI to differentiate between the majority Han and minorities such as the Uyghurs have previously been identified by IPVM. Confronted with this by Panorama’s producers, a range of Chinese camera manufacturers denied that the technology was being used to detect ethnicity and claimed to adhere to the laws of each country they operate in. Clearly, the use of any technology for social engineering, whether through analytics or social media platforms, is fundamentally wrong and needs to be managed.
A fine line
As I have argued before, there is a fine line between the correct use of technology and its abuse. It is not helped by the fact that the legal context for some uses of technology is mixed. The targeted surveillance of ethnic minorities would appear to trample all over the 1948 Universal Declaration of Human Rights. Yet while racial profiling is illegal in the United States, the position in countries such as the UK and Germany is less clear.
To complicate this further, one of the examples featured in the Panorama programme was the use of AI by the oncology department at Addenbrooke’s Hospital in Cambridge. In diagnosing prostate cancer, scans are taken that may involve up to 150 separate images, each of which must be examined manually. Working with Microsoft on the InnerEye programme, the department now uses AI to sift through the images, reducing the time taken to diagnose and treat patients. In doing so, the hospital now generates as much data in a day as it did in the whole of 1997. I don’t imagine many people would argue that the development of AI for applications such as this should stop.
In contrast to the alleged use of surveillance technology by the Chinese authorities to target ethnic minorities, the use of AI to speed up medical diagnosis could actually benefit ethnic groups that are more likely than others to develop certain diseases. For example, in the UK, men of Black African and Black Caribbean descent are three times more likely to develop prostate cancer than white men of the same age.
Fight or flight?
Tesla CEO Elon Musk has said: “We are very close to the cutting edge in AI, and it scares the hell out of me.” The Panorama programme went on to address the potential for AI-controlled armies of bots that would be impossible to counter. Some have talked of the possibility that AI military technology eventually outgrows the need for human intervention, raising the spectre of robot armies at war with humankind.
This is the stuff of nightmares and may be best left to science fiction. Despite Musk’s fears, I can’t help noticing that his own Teslas employ masses of AI in their Autopilot systems, calling on vast amounts of computing power to process huge datasets and keep each car on the road.
The challenge, therefore, is that legislation is struggling to keep up as datasets grow, AI becomes ever more advanced and computing power continues to increase. In many areas of tech, it is entirely absent. It is only in the last few years that suggestions have been made to develop ground rules around tech, and AI in particular.
As such, let’s be part of an industry that defines what those standards should be, to weed out unethical use, abuse and misuse. We should absolutely take a zero-tolerance approach to ‘abuse of power’ where it can be proven and not support unethical practices in any way.
At the same time, though, we should defend the role of technology where it brings tangible benefits, whether keeping society safe from harm, giving businesses greater efficiency or helping people do their jobs more effectively. AI plays a key part in this. It can’t be put back in the box or ignored. Equally, as Elon Musk warns, it can’t be allowed to get out of control.