Bounded total utilitarianism (obviously a draft)
I like my utility functions bounded, and my utilitarianisms total.
Wait, fuck.
What do?
Let’s try to construct a moral theory that behaves like total utilitarianism when choosing between deterministic universes, but whose utility assignments are bounded. Let the set of universes be \(S\), and let the total utilitarian utility function be \(u\colon S\to \mathbb{R}\). (Feel free to think of this as the total amount of pleasure minus pain in the universe, or whatever other formulation of totalism you prefer.)
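To make the target concrete, here is one natural way to fill in the construction (just a sketch and an assumption on my part; any bounded, strictly increasing squashing function in place of \(\arctan\) would work the same way): compose \(u\) with such a function and maximize the expectation of the result.

\[
U(s) = \arctan\bigl(u(s)\bigr), \qquad \text{choose the lottery } L \text{ that maximizes } \mathbb{E}_{s\sim L}\bigl[U(s)\bigr].
\]

Because \(\arctan\) is strictly increasing, comparisons between deterministic universes come out exactly as total utilitarianism would have them; the boundedness only starts to matter once lotteries are involved.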
The bad
Since our utility function is bounded, we are bound to run into some of the issues discussed in sections 4, 5, and 6 of Wilkinson’s In defence of fanaticism.
The good
Local non-egyptologicality
Other good properties
Beyond being bounded and thus avoiding the general issues with unbounded utility functions, our utility function has some additional properties that some would consider nice. I guess it’s possible to view each of these as just another instance of avoiding a problem with unbounded utility functions, but it’s probably helpful to spell them out anyway.
- Robustness to the world-doubling button. Bounded totalism rejects pressing it (for sufficiently large values of utility); a small numerical sketch appears after this list. The setup: “Tyler Cowen’s variant of the St. Petersburg paradox is one objection to utilitarianism that I accept as a serious problem. Suppose you are offered a deal - you can press a button that has a 51% chance of creating a new world and doubling the total amount of utility, but a 49% chance of destroying the world and all utility in existence (let’s assume that there are no aliens in the universe, or alternatively that the button also doubles the number of aliens or something). If you want to maximise total expected utility, you ought to press the button - after all, the button is rigged in your favour and so pressing the button has positive expected value.”
- Robustness to Pascal’s mugging. I’m a beggar with a sign reading “if you give me $1, I will save \(3 \uparrow \uparrow \uparrow \uparrow 3\) people from great suffering”. Should you pay up? Well, maybe you should and maybe you shouldn’t, but intuitively the decision should not be motivated by you actually expecting me to save \(3 \uparrow \uparrow \uparrow \uparrow 3\) people. With reasonable priors and reasonable updating, bounded totalism agrees with both of these common-sense verdicts (the sketch after this list covers this case too).
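To see how the numbers work out, here is a small Python sketch (continuing with the \(\arctan\) squashing assumed above; nothing hinges on the exact choice of bounded function). Pressing the button beats not pressing only when the current total utility is small, and the mugger can move expected bounded utility by at most your credence in their claim times the distance to the bound.

```python
import math

def bounded(u):
    # Illustrative squashing function only: the post does not commit to a
    # specific bounded transform, so arctan is an assumption here.
    return math.atan(u)

def press_value(u, p_double=0.51):
    """Expected bounded utility of pressing the world-doubling button,
    starting from a universe whose total utility is u (the destroyed
    world is taken to have total utility 0)."""
    return p_double * bounded(2 * u) + (1 - p_double) * bounded(0)

for u in [0.01, 0.1, 1.0, 10.0, 1e6]:
    print(f"u = {u:>9}: press? {press_value(u) > bounded(u)}")

# Pascal's mugging: a payoff of 3^^^^3 can raise bounded utility by at most
# (pi/2 - bounded(u)), so with a credence of, say, 1e-20 in the mugger's
# claim the change in expected bounded utility is below 1.6e-20.
u, credence = 100.0, 1e-20
max_gain = credence * (math.pi / 2 - bounded(u))
print(f"maximum expected gain from paying the mugger: {max_gain:.1e}")
```

The exact threshold at which pressing stops being worth it depends on the squashing function, but the qualitative behaviour (accept when little is at stake, refuse when a lot is) is what “rejects this for sufficiently large values of utility” means above.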
The ugly
Acknowledgments
I would have been very surprised if no one had proposed this earlier, but a moderate amount of searching turned up nothing except for . Even so, I would be surprised to learn that no one has proposed this before. If anyone can point me to an earlier proposal, I would be happy to hear about it.