by jordan_guffey
on 19/12/15
@ToKTeacher @jayisarobot

But my point is:

Implying that Bostrom suggests an AI might simply decide to maximize paperclips at the expense of everything else completely misrepresents him.

In doing so, you're also ignoring one of his key points: it is dangerous to anthropomorphize an AGI, especially with regard to its motivation.

It is possible to create an AI that is just raw intellectual problem-solving ability and nothing more. We could give it our best shot at programming a complex and comprehensive motivation system that is congruent with human flourishing, and yet still fail to make it comprehensive enough, because we lack the ability to view that motivation system from the point of view of something far more intelligent than a human.

It would no doubt be possible to create an AGI that is self-reflective about its own goals ("Why do I want to make paperclips? That's dumb") and able to revise them accordingly. But it is feasible (and probably easier) to hard-code a goal that it would be fundamentally unable to question.