Stephen Hawking Had A Grim Warning About Artificial Intelligence
Stephen Hawking was the most famous physicist of his time, so naturally, everyone wanted to know how he thought the world would end. Hawking made many predictions about the end of humanity over the course of his long career, but the rise of artificial intelligence was of particular concern to him.
Hawking warned that AI would widen the global poverty gap by accelerating industrial automation at the expense of jobs, something that is already coming to pass as companies around the world announce plans to cut their workforces in favor of AI. He also warned about AI's potential to be exploited by a powerful few in ever deadlier weapons and other means of forceful oppression. However, what Hawking feared most was not what humans could do with AI, but what AI could do with us.
Hawking had a close connection to the AI industry. Unable to speak due to the effects of amyotrophic lateral sclerosis (ALS), he communicated near the end of his life through a computer program that used AI-powered predictive text. When asked about the technology in a 2014 BBC interview, Hawking acknowledged the many potential benefits of AI, but he cautioned that in the future, AI "would take off on its own, and re-design itself at an ever increasing rate." His ultimate verdict? "The development of full artificial intelligence could spell the end of the human race."
How valid are Hawking's fears?
Hawking's fear was that AI could overtake humanity if these technologies became powerful enough to re-design themselves. Humans evolve at an incredibly slow pace, with only incremental changes from one generation to the next. Artificial intelligence, on the other hand, has taken massive strides with virtually every new iteration released.
Since the dawn of robotics, there has been growing concern over a potential technological singularity, a point at which AI evolves beyond what humans can control and begins to shape its own destiny. AI is swiftly approaching human-level skill in several fields, and in some cases surpassing it, which has led to predictions that the singularity could occur within this decade. To many observers, it's no longer a matter of if, but when.
For many, the thought of rogue AI conjures up sci-fi scenarios of robots turning on their masters in murderous fashion, but Hawking didn't believe that AI would actually become evil. In his posthumously released book, "Brief Answers to the Big Questions," he warned that if AI came to harm humans, it wouldn't be out of malice, but because the technology would be so hell-bent on efficiency and achieving its goals that it would destroy anything that got in the way.
Hawking wasn't entirely grim about the future of AI. In addition to using it himself for speech, he repeatedly expressed hope that AI could help eradicate evils like disease and starvation. However, he stressed that this could only happen if humans strictly regulated AI technologies.