Elon Musk has warned that artificial intelligence (AI), if programmed by people in the “environmental movement”, may lead to the extinction of humanity.
Appearing on comedian Joe Rogan’s podcast on Tuesday, he said some people would use the tech to protect the planet by ending human life.
He was speaking ahead of the UK’s AI safety summit, now underway, where he is due to meet the UK Prime Minister.
Many experts consider such warnings overblown.
Nick Clegg, the president of global affairs at Meta and former deputy prime minister – who is also attending the summit – said people shouldn’t let “speculative, sometimes somewhat futuristic predictions” crowd out more immediate challenges.
Mr Musk said his comments were born out of fears the environmental movement had “gone too far”.
“If you start thinking that humans are bad, then the natural conclusion is humans should die out,” he said.
“If AI gets programmed by the extinctionists, its utility function will be the extinction of humanity… they won’t even think it’s bad.”
Mr Musk is due to speak to UK Prime Minister Rishi Sunak on his platform X later on Thursday.
Representatives of some of the world’s most powerful countries are giving speeches at the summit, including – unusually – China, which is emerging as a key player in AI.
Relations between China and the West are fraught in many areas of technology – but the country's vice minister of science and technology, Wu Zhaohui, said it was seeking a spirit of openness in AI.
“We call for global collaboration to share knowledge and make AI technologies available to the public,” he told delegates.
“Countries, regardless of their size and strength, have equal rights to develop and use AI. We should increase the representation and voice of developing countries,” he added.
AI is ‘too important’
While few people share Mr Musk’s interpretation of the threat AI poses, many agree it has potential dangers.
Speaking ahead of the summit, Demis Hassabis, the co-founder of Google DeepMind, one of the UK's biggest AI firms, said the “move fast and break things” mantra associated with Silicon Valley should be avoided.
“It has been extremely successful in building massive companies and providing us with lots of great services and applications,” Mr Hassabis said.
“But AI is too important. There’s a lot of work that needs to be done to ensure that we understand [AI systems] and we know how to deploy them in safe and responsible ways.”
He identified potential risks including AI-generated misinformation and deepfakes, and the deliberate misuse of the technology by bad actors.
Safety summit
Over the next two days, around 100 world leaders, tech bosses, academics and AI researchers are gathering at the UK's Bletchley Park, once home to the codebreakers who helped secure victory during World War Two.
They will take part in discussions about how best to maximise the benefits of artificial intelligence – such as discovering new medicines and tackling climate change – while minimising the risks.
The summit will focus on extreme threats posed by so-called frontier AI, the most advanced forms of the tech which Mr Hassabis described as the “tip of the spear”. The summit’s priorities include the threat of bio-terrorism and cyber attacks.
International delegates include US Vice President Kamala Harris and European Commission President Ursula von der Leyen.
There has been some criticism that the guest list is dominated by US giants including ChatGPT creator OpenAI, Anthropic, Microsoft, Google and Amazon, as well as Tesla and X (formerly Twitter) owner Elon Musk, whose livestreamed conversation with Mr Sunak takes place on Thursday evening.
Others have questioned whether announcements earlier this week from both the US and the G7 specifically about AI safety had overshadowed the event – but Mr Hassabis said the UK could still play “an important role” in shaping discussions.
‘Kind of sci-fi’
Aidan Gomez, the co-founder of AI firm Cohere, has come to the UK from Toronto for the summit. His firm was valued at $2bn in May 2023.
He said he believed there were more immediate threats than the “doomsday Terminator scenario” which he described as “kind of sci-fi”.
“In my personal opinion, I wish we would focus more near-term where there’s concrete policy work to be done,” he said.
“The technology is not ready to, for instance, prescribe drugs to patients, where an error could cost a human life.
“We really need to preserve human presence and oversight of these systems… we need regulation to help us steer and guide the acceptable use of this technology.”