The men were too absorbed in their work to notice my arrival at first. Three walls of the conference room held whiteboards densely filled with algebra and scribbled diagrams. One man jumped up to sketch another graph, and three colleagues crowded around to examine it more closely. Their urgency surprised me, though it probably shouldn’t have. These academics were debating what they believe could be one of the greatest threats to mankind—could superintelligent computers wipe us all out?
I was visiting the Future of Humanity Institute, a research department at Oxford University, founded in 2005 to study the “big-picture questions” of human life. One of its main areas of research is existential risk. The physicists, philosophers, biologists, economists, computer scientists, and mathematicians of the institute are students of the apocalypse.
Predictions of the end of history are as old as history itself, but the 21st century poses new threats. The development of nuclear weapons marked the first time that we had the technology to end all human life. Since then, advances in synthetic biology and nanotechnology have increased the potential for human beings to inflict catastrophic harm, whether by accident or through deliberate criminal intent.
In July this year, long-forgotten vials of smallpox, a virus believed to be “dead,” were discovered at a research center near Washington, DC. Now imagine a similar incident in the future, but involving an artificially engineered killer virus or nanoweapons. Some of these dangers are closer than we might care to imagine. When Syrian hackers sent a message from the Associated Press Twitter account falsely reporting an attack on the White House, the Standard & Poor’s 500 index briefly lost $136 billion in value. What unthinkable chaos would be unleashed if someone found a way to empty people’s bank accounts?