Current computational models of theory of mind typically assume that humans expect one another to selfishly maximize utility, under a conception of utility that makes it indistinguishable from personal gain. We argue that this conception is at odds with established facts about human altruism, as well as with the altruism that humans expect from one another. We report two experiments showing that people expect other agents to selfishly maximize their pleasure, even when those agents behave altruistically. Defining utility as pleasure therefore reconciles the assumption that humans expect each other to selfishly maximize utility with the fact that humans expect each other to behave altruistically.