A Group of Scientists Are Studying the End Times

Scientists are studying the "existential risk" of computers that can learn on their own.


A group of researchers is preparing for the potential end of the world, the New Republic reports. The aptly named Future of Humanity Institute, founded by Swedish philosopher Nick Bostrom, has brought on 18 full-time staffers to help arm the human race with knowledge and—hopefully—save the world from a not-so-distant, but currently unknowable doom.

Bostrom started the organization when he was 32 and found himself preoccupied with the question of whether computers could do us in. In a lengthy profile, Bostrom talks to the New Statesman about the irreversible march of technology, which experts believe will give us a machine with human-like intelligence in the next 50 years. Throughout the piece, Bostrom refers to his work at the FHI as a sort of moral obligation. "We don't have an option—we're going to get these technologies," he says. "So we just have to mature more rapidly."

The FHI focuses on studying something called "existential risk," or how technological advances might someday come to interfere with our ability to live on this planet. Think I, Robot—but maybe with fewer action scenes. The article gives the example of a computer that makes paper clips—it could pose a risk to human life, say, if it learned it could make more paper clips by extracting carbon atoms from people. The FHI wants to study and understand the risks of a computer that could learn more, and learn faster, than the humans who created it.

The fear, of course, is that these technological advances, which in many ways are awesome and contribute to a higher standard of living, could be used against us—maybe not by the machines themselves, but by criminals. Early in the article, writer Sophie McBain notes that recent advances in synthetic biology and nanotechnology could hurt us "by accident or through deliberate, criminal intent." Like the vials of anthrax mistakenly sent to Washington earlier this year, our technology has the potential to create national alarm, especially if it's in the wrong hands.

So that's what the folks at the FHI are worried about—and they're studying all of these possibilities like scientists: calculating the likelihood of an apocalypse, weighing the risks, considering our options. It sounds like something out of a science fiction series developed by Netflix, but it's real—and they're not the only ones. Similar institutes have opened in the U.S., at MIT and Berkeley. Far out, dude.

[via the New Republic]
