April 3, 2017
This is the way the world ends: not with a bang, but with a paper clip. In this scenario, the designers of the world’s first artificial superintelligence need a way to test their creation. So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results — not knowing they’ve already doomed us all.
Before we get into the details of this galaxy-destroying blunder, it’s worth looking at what superintelligent A.I. actually is, and when we might expect it. Firstly, computing power continues to increase while getting cheaper; famed futurist Ray Kurzweil measures it in “calculations per second per $1,000,” a number that continues to grow. If computing power maps to intelligence — a big “if,” some have argued — we’ve so far only built technology on par with an insect brain. In a few years, maybe, we’ll overtake a mouse brain. Around 2025, some predictions go, we might have a computer that’s analogous to a human brain: a mind cast in silicon.
After that, things could get weird. There’s no reason to think artificial intelligence would stop at human intelligence — and it would likely surpass it very quickly. That superintelligence could arise within days, learning in ways far beyond those of humans. Nick Bostrom, an existential risk philosopher at the University of Oxford, has already declared, “Machine intelligence is the last invention that humanity will ever need to make.”
That’s how profoundly things could change. But we can’t really predict what might happen next because superintelligent A.I. may not just think faster than humans, but in ways that are completely different. It may have motivations — feelings, even — that we cannot fathom. It could rapidly solve the problems of aging, of human conflict, of space travel. We might see a dawning utopia.