'I fear that AI may replace humans altogether,' he had said.
Professor Stephen Hawking, who made physics and the mysteries of the universe accessible to the common man, had repeatedly warned that efforts to develop artificial intelligence (AI) and create thinking machines could spell the end of the human race.
'The development of full artificial intelligence could spell the end of the human race,' Hawking had told BBC News in 2014.
As the world remembers Hawking, who died on March 14, 2018, we look back at his warnings.
The real risk with AI isn't malice but competence.
A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble. You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. Let's not place humanity in the position of those ants.
-- In a Reddit AMA
I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans...
It would take off on its own, and re-design itself at an ever-increasing rate.
-- In an interview with WIRED
Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.
Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.
-- At the Web Summit in Lisbon
There's no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don't trust anyone who claims to know for sure that it will happen in your lifetime or that it won't happen in your lifetime.
When it eventually does occur, it's likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let's start researching this today rather than the night before the first strong AI is switched on.
-- In a Reddit AMA
Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.
-- In an open letter co-signed by leading AI researchers and technology leaders