(ANTIMEDIA) — The world was atwitter last week as President Trump crudely took North Korea to task over its nuclear program, a threat that has slowly but surely eclipsed the distractions of our imperialist wars in the Middle East. But according to tech mogul Elon Musk, humans face a far graver threat from something they use in their everyday lives and could be holding in their hands as they read these words: algorithmic artificial intelligence. Specifically, algorithmic AI that evolves into malevolent super-intelligent entities and seeks to end their meatbag parent species — us.
Late last week, Musk tweeted that AI is far more dangerous than North Korea, adding that he believes regulation will be necessary to contain the burgeoning technology.
If you're not concerned about AI safety, you should be. Vastly more risk than North Korea. pic.twitter.com/2z0tiid0lc
— Elon Musk (@elonmusk) August 12, 2017
Musk’s alarmist attitude toward AI has, in the past, been mocked by Silicon Valley digerati convinced that strong artificial intelligence will be a benevolent force humans can harness. But in recent years, a consortium of futurists and rock-star AI developers and researchers has finally heeded the warnings of Musk and Stephen Hawking, launching AI safety conferences and committees for the express purpose of containing the threat of runaway artificial intelligence.
Musk, now one of the richest men in the world, has poured his entrepreneurial spirit into projects that are at once massive growth industries and visions of collectivist human evolution. (His first mission statement, written as a teenager, was: “The only thing that makes sense to do is strive for greater collective enlightenment.”) Not only did he open Tesla’s patents because “we’re all in a ship together,” and not only does he want humans to merge with advanced technology to protect the species, he also wants us to colonize Mars so we have a second home in case future AI inhabitants, our “mind children,” kick us off our home planet. It’s a weird, futuristic version of collectivism, but it’s there.
At a recent MIT symposium, Musk echoed recent sentiments from Stephen Hawking by declaring that AI constitutes our “biggest existential threat.”
“With artificial intelligence we’re summoning the demon,” Musk said.
Some have accused Musk of trading in Luddite sentiment and fear-mongering in order to appropriate the AI narrative and insert himself into the conversation. After all, he has had a long-running dispute over the threat posed by AI with his friend Larry Page, who heads Google’s parent company, Alphabet, owner of the DeepMind AI lab. Perhaps he wants to position his brand for what could soon be the most explosively profitable and civilization-altering industry in human history.
Many titans in the field — including Facebook’s Mark Zuckerberg (who last year announced that his annual self-improvement project was to build a personal robot butler), futurist Ray Kurzweil (who authored the seminal book The Singularity Is Near and believes AI will entirely surpass human intelligence and acuity by 2029), and AI researcher Andrew Ng (who until earlier this year led AI research at Baidu, China’s answer to Google, and wears a jacket that says “Trust the Robot”) — believe humans face no existential threat from AI and will, in fact, flourish and grow with its assistance.
Musk thinks we could accidentally create a real-life version of Skynet. But instead of Terminator robots, he imagines centralized superintelligence endowed with self-directed exponential growth.
“If you want a picture of A.I. gone wrong, don’t imagine marching humanoid robots with glowing red eyes,” Musk says. “Imagine tiny invisible synthetic bacteria made of diamond, with tiny onboard computers, hiding inside your bloodstream and everyone else’s. And then, simultaneously, they release one microgram of botulinum toxin. Everyone just falls over dead.
“The thing about A.I. is that it’s not the robot; it’s the computer algorithm in the Net. So the robot would just be an end effector, just a series of sensors and actuators. A.I. is in the Net . . . . The important thing is that if we do get some sort of runaway algorithm, then the human A.I. collective can stop the runaway algorithm. But if there’s large, centralized A.I. that decides, then there’s no stopping it.”
Some might once again contend that Musk wants to reframe the debate around AI to make his companies, Tesla and SpaceX, more valuable and relevant in the coming decades. His electric cars and space rockets will require advanced algorithmic AI, including state-of-the-art automation and deep learning, so it stands to reason that he wants his companies to dominate the field. What better way to do so than to regulate that field, restricting the growth of AI and keeping it commercially and industrially friendly and scalable?
Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too.
— Elon Musk (@elonmusk) August 12, 2017
With a current dearth of public policy regarding AI — regulation is left largely to the Federal Aviation Administration, the Securities and Exchange Commission, and the Department of Transportation, which oversee drones, automated trading, and self-driving cars, respectively — one can understand a futurist’s droll reaction to the idea of federally regulating something still embryonic. One went so far as to write a post for the transhumanist website H+ entitled “Elon Musk Is More Dangerous Than AI.”
Then again, can we be too careful when it comes to the exponential growth of a technology we may not be able to control? After all, we may only get one chance to shape the infrastructure of the Earth’s first post-biological intelligence.