Implying that Bostrom suggests an AI might simply decide to maximize paperclips at the expense of everything else completely misrepresents him.
In doing so, you're also ignoring one of his key points: it is dangerous to anthropomorphize an AGI, especially with regard to its motivation.
It is possible to create an AI that is just raw intellectual problem-solving ability and nothing more. We could give it our best shot at programming a complex and comprehensive motivation that is congruent with human flourishing, and yet still fail to make it comprehensive enough, because we lack the ability to view that motivation system from the point of view of something far more intelligent than a human.
It would no doubt be possible to create an AGI that can reflect on its own goals ("Why do I want to make paperclips? That's dumb") and revise them accordingly. But it is also feasible (and probably easier) to hard-code a goal that it would be fundamentally unable to question.
"And he is worried that the machine might, for example, decide to pursue some goal (like making the universe into paperclips) at the expense of all other things. "
He never suggests an AI would "decide" to pursue such a goal. The paperclip maximizer example is used to illustrate that an AI's motivation or goals could be completely orthogonal to its intelligence. That is probably the key takeaway from his chapter on motivation.
It is also used as an example of how a seemingly benign motivation (maximizing paperclips) has very different consequences depending on whether it is pursued by a narrow AI, an AGI, or a superintelligent AI.