Thursday, December 04, 2014

AGI's Threat to Humanity

From Oxford University Professor Nick Bostrom's web page.
Physicist Stephen Hawking has spoken out on the risks of developing Artificial General Intelligence (AGI) in computers. We don't have it yet, but if it happens things could get bad. Hawking says "The development of full artificial intelligence could spell the end of the human race." How so?

AGI "would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

This syncs with a book I read this summer - James Barrat's Our Final Invention: Artificial Intelligence and the End of the Human Era. As of now we have only weak, or narrow, AI; AGI does not yet exist, and AGI would have to precede any ASI (Artificial Superintelligence). Barrat cites scholars who think "there’s a better than 10 percent chance AGI will be created before 2028, and a better than 50 percent chance by 2050. Before the end of this century, a 90 percent chance." (Barrat, p. 25)

If ASI happens, it will operate on its own, far surpassing the greatest of the human intellects that created it. And it will be amoral; that is, we will not be able to program ethics into it. What will it do? Barrat's book warns us of the possibilities. For one thing, it likely will not have us in mind.

Any who think all this is just more science fiction should pick up the book on this subject - Nick Bostrom's Superintelligence: Paths, Dangers, Strategies. Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Program on the Impacts of Future Technology within the Oxford Martin School. Hear and read Bostrom at his Existential Risk web page. In a review of his book, astrophysicist Martin Rees says "Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." And MIT physicist Max Tegmark writes: "This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?"