"And he is worried that the machine might, for example, decide to pursue some goal (like making the universe into paperclips) at the expense of all other things. "
He never suggests an AI would "decide" to pursue such a goal. The paperclip maximizer example is used to illustrate the orthogonality thesis: an AI's motivations or final goals can be completely orthogonal to its level of intelligence. That is probably the key takeaway from his chapter on motivation.
It is also used as an example of how a seemingly benign motivation (maximizing paperclips) has very different consequences for a narrow AI vs. an AGI vs. an SAI.