A seemingly benign viral game about paperclips reveals why AI could be a huge danger to humanity

The humble paperclip could be a harbinger of human extinction.


A game about making paperclips has gone viral, and it’s a sly nod to a famous thought experiment about the potential dangers of artificial intelligence.

The game, as described by Business Insider’s Matt Weinberger (one of the many staffers at Business Insider, myself included, who have quickly become addicted), is fairly straightforward: You are in charge of making as many paperclips as possible. As you progress and sell paperclips, you unlock various upgrades that make the process of paperclip-making ever more efficient and automated, until the game is basically playing itself.
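The self-reinforcing loop described above — profits buy automation, automation compounds profits — is the core of every "incremental" game of this kind. A minimal sketch of that dynamic (purely illustrative; the names, prices, and growth rates here are invented, not the actual game's values):

```python
# Illustrative sketch of an incremental-game loop: revenue is reinvested
# in automated clippers, so production compounds until no player input
# is needed. All numbers are made up for demonstration.

def simulate(ticks: int) -> float:
    clips = 0.0
    auto_clippers = 0        # machines producing clips with no clicks
    price_per_clip = 0.05
    clipper_cost = 5.0
    funds = 0.0
    for _ in range(ticks):
        made = 1 + auto_clippers      # one manual click plus automation
        clips += made
        funds += made * price_per_clip
        while funds >= clipper_cost:  # reinvest: the game plays itself
            funds -= clipper_cost
            auto_clippers += 1
            clipper_cost *= 1.1       # each machine costs a bit more
    return clips

print(simulate(1000))
```

Even with rising costs, output grows much faster than linearly, which is why the game feels like it "runs away" from the player — the same runaway quality Bostrom worries about in a self-improving machine.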

The idea of an ever-more efficient automated paperclip factory has a history in the world of philosophers and futurists contemplating the dangers and possibilities of a “technological singularity,” in which artificially intelligent machines rapidly outpace human abilities.

In 2003, Oxford University philosopher Nick Bostrom published a paper titled “Ethical Issues in Advanced Artificial Intelligence,” in which he discusses what could happen if humanity creates a “superintelligent” machine capable of quickly improving itself.

The paper discusses some of the pluses and minuses of such an entity. On the plus side, a benevolent superintelligent machine would be able to solve all of humanity’s problems, potentially ushering in a utopian golden age of health and prosperity for all. On the minus side, a malevolent superintelligence would be able to crush our species more efficiently than we could remove an anthill.

The movie “Ex Machina” explores the dangers of artificial intelligence. (YouTube/“Ex Machina”)

While the latter scenario has been explored innumerable times in science fiction — “Terminator” and Harlan Ellison’s classic “I Have No Mouth and I Must Scream,” to name a few — one of Bostrom’s key insights in the paper is that a superintelligence wouldn’t have to be actively opposed to humanity to present an existential risk. Instead, a superintelligence with a very un-humanlike psyche and poorly defined goals might not realize that it’s causing immeasurable harm to its creators.

This brings us back to our friendly browser game. The example Bostrom gives of a non-malevolent but still extinction-causing superintelligence is none other than a relentlessly self-improving paperclip maker that lacks an explicit overarching sense of being pro-human:

“The risks in developing superintelligence include the risk of failure to give it the supergoal of philanthropy… Another way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system. This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities. We need to be careful about what we wish for from a superintelligence, because we might get it.”

So, while you’re having fun improving your browser-based paperclip enterprise, remember that you’re essentially simulating the total extinction and eradication of humanity and everything it’s ever created. Enjoy!

Source: http://www.businessinsider.com/paper-clip-game-superintelligence-theory
