Elon Musk, Tesla and SpaceX founder and long-standing critic of artificial intelligence (AI), warned a group of 30 US governors on Saturday that AI poses the biggest existential threat to human civilisation – one capable of manipulating people into war.
Speaking at the National Governors Association meeting in Rhode Island, Musk – who fears humans will lack the ability to control AI – said the threat it poses dwarfs isolated human tragedies such as “car accidents, airplane crashes, faulty drugs, or bad food – they were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole”.
He added: “AI is a fundamental risk to the existence of human civilization. AI is a rare case where I think we need to be proactive in regulation instead of reactive, because I think by the time we are reactive in AI regulation, it’s too late.”
Later, in a Q&A response, Musk added that the most acute risk stems from “a deep intelligence in the network”. “What harm could a deep intelligence in the network do?” he asked rhetorically. “Well, it could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information – the pen is mightier than the sword.”
He continued: “If you had an AI where the AI’s goal was to maximize the value of a portfolio of stocks, one of the ways to maximize value would be to go long on defence, short on consumer, start a war.”
To illustrate his fear, Musk re-imagined how AI could have started a war in circumstances like those which led to Malaysia Airlines flight MH17 being shot down over eastern Ukraine, near the Russian border, on 17 July 2014 by a missile fired from a field controlled by pro-Russian fighters. “How could it do that?” Musk asked rhetorically. “Hack into the Malaysian Airlines aircraft routing server, route it over a warzone, then send an anonymous tip that an enemy aircraft is flying overhead right now.”
Musk said, in his earlier remarks, that he keeps “sounding the alarm bell but, you know, until people see like robots going down the street killing people they don’t know how to react because it seems so ethereal”. He is a long-time critic of AI, notably saying in 2014 that building AI was “like summoning the demon” and that national and international oversight was needed to “make sure we don’t do something foolish”.
In January 2015, Musk was one of dozens of AI specialists and scientists – including theoretical physicist Professor Stephen Hawking – who signed an open letter outlining research priorities to reap the benefits of AI “while avoiding potential pitfalls”. Hawking has warned AI could be the greatest disaster in human history if it is not properly managed.
In December 2015, Musk took up the role of co-chair, alongside Sam Altman, at OpenAI, a non-profit AI research company which “seeks to advance digital intelligence in the way that is most likely to benefit humanity as a whole”. OpenAI is backed with up to $1bn from a group of world-class entrepreneurs, research engineers, scientists and AI practitioners, including Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba.
Corporate greed and fear could lead to grave unintended consequences
Corporate competitiveness, fear of being “crushed” by rivals, and shareholder pressure are spurring an AI race which could lead to new technologies being created without safeguards to protect society, Musk suggested in his keynote last Saturday. “We need government regulation here ensuring the public good is served, because [you have] companies that are racing – that kind of have to race – to build AI.” Companies that do not join the race risk being left uncompetitive and “crushed” by their rivals, Musk added, forcing them into a contest for which there is currently no regulatory oversight to protect against misuse and careless invention.
“That’s where you need to bring the regulators in and say: ‘You all need to really pause and make sure this is safe.’ When regulators are convinced that it’s safe to proceed then you can go, but otherwise slow down. But regulators need to do that for all the teams in the game, otherwise shareholders would be saying: ‘Hey, why aren’t you developing AI faster, because your competitor is?’ I think there’s a role for regulators that’s very important – and I’m against over-regulation for sure – but I think we’ve got to get on that with AI.”
Robots will be better than us at everything
Musk warned that AI-driven job disruption will be massive because “robots will be able to do everything better than us… yeah, not sure exactly what to do about this – this is really like the scariest problem to me. Transport will be one of the first [sectors] to go fully autonomous, but when I say everything, like, the robots will be able to do everything – bar nothing.”
Estimates vary wildly over how many jobs will be lost to robots. According to an article in Quartz, economists forecast that as much as 40% of the Fortune 500 could vanish entirely within a single decade, driven out by algorithms. The same article adds that economists also predict between 25% and 69% of jobs could be lost in China and India over time.
Doug Ducey, the Republican Governor of Arizona, asked Musk what policymakers could do, short of forcing companies to slow down, that would not obstruct innovators. “Well, I think the first order of business would be to gain insight,” Musk answered. “Right now the government does not even have insight. I think the right order of business would be to set up a regulatory agency – the initial goal [would be] to gain insight into the status of AI activity. Make sure the situation is understood. Once it is, then put regulations in place to ensure public safety. Make sure that there is awareness at the government level. I think once there is awareness you will be extremely afraid, as they [sic] should be.”
In January 2016, the Information Technology and Innovation Foundation (ITIF), a Washington DC-based think tank, mocked Musk, Hawking and others, including Bill Gates, as “alarmists touting an artificial intelligence apocalypse” who had “stirred fear and hysteria”, awarding the group its unwelcome 2015 Luddite Award. “It is deeply unfortunate that luminaries such as Elon Musk and Stephen Hawking have contributed to feverish hand-wringing about a looming artificial intelligence apocalypse,” said ITIF president Robert D. Atkinson in a statement at the time. “The obvious irony here is that it is hard to think of anyone who invests as much as Elon Musk himself does to advance AI research, including research to ensure that AI is safe. But when he makes inflammatory comments about ‘summoning the demon,’ it takes us two steps back.”