Robust longterm comparisons

Toby Ord

The choice of discount rate is crucially important when comparing options that could affect our entire future. Except when it isn’t. Can we tease out a class of comparisons that everyone can agree on regardless of their views on discounting?

Some of the actions we can take today may have longterm effects — permanent changes to humanity’s longterm trajectory. For example, we may take risks that could lead to human extinction. Or we might irreversibly destroy parts of our environment, creating permanent reductions in the quality of life.

Evaluating and comparing such effects is usually extremely sensitive to what economists call the pure rate of time preference, denoted ρ. This is a way of encapsulating how much less we should value a benefit simply because it occurs at a later time. There are other components of the overall discount rate that adjust for the fact that an extra dollar is worth less when people are richer, that later benefits may be less likely to occur — or that the entire society may have ceased to exist by then. But the pure rate of time preference is the amount by which we should discount future benefits even after all those things have been accounted for.
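
For concreteness (this is the standard Ramsey decomposition from the economics literature, rather than something introduced in this post): the overall discount rate applied to consumption is often written as

$r = \rho + \eta g$

where $g$ is the growth rate of consumption and $\eta$ measures how quickly the marginal value of an extra dollar falls as people become richer; $\rho$ is whatever discounting remains after such adjustments.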

Most attempts to evaluate or compare options with longterm effects get caught up in intractable disagreements about ρ. Philosophers almost uniformly think ρ should be set to zero, with any bias towards the present being seen as unfair. That is my usual approach, and I’ve developed a framework for making longterm comparisons without any pure time preference. While some prominent economists agree that ρ should be zero, the default in economic analysis is to use a higher rate, such as 1% per year.

The difference between a rate of 0% and 1% is small for most things economists evaluate, where the time horizon is a generation or less. But it makes a world of difference to the value of longterm effects. For example, ρ = 1% implies that a stream of damages starting in 500 years’ time and lasting a billion years is less bad than a single year of such damages today. So when you see a big disagreement on how to make a tradeoff between, say, economic benefits and existential risk, you can almost always pinpoint the source to a disagreement about ρ.
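
To make that concrete, here is a quick numerical check (a minimal sketch, assuming a constant flow of damages of size 1 per year and exponential discounting at ρ = 1%; the particular numbers are mine, not from the post):

```python
import numpy as np

rho = 0.01  # 1% per year pure time preference, i.e. d(t) = exp(-rho * t)

def pv_unit_flow(start, end, rho):
    """Present value of a flow of damages of size 1 per year over [start, end]:
    the integral of exp(-rho * t) dt, evaluated in closed form."""
    return (np.exp(-rho * start) - np.exp(-rho * end)) / rho

print(pv_unit_flow(500, 500 + 1e9, rho))  # ~0.674: a billion years of damages starting in 500 years
print(pv_unit_flow(0, 1, rho))            # ~0.995: one year of the same damages starting today
```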

This is why it was so surprising to read Charles Jones’s recent paper: ‘The AI Dilemma: Growth versus Existential Risk’. In his examination of whether and when the economic gains from developing advanced AI could outweigh the resulting existential risk, the rate of pure time preference just cancels out. The value of ρ plays no role in his primary model. There were many other results in the paper, but it was this detail that grabbed my attention.

Here was a question about trading off risk of human extinction against improved economic consumption that economists and philosophers might actually be able to agree on. After all, even better than picking the correct level of ρ and deriving the correct conclusion, only to have half your readers ignore the findings, is conducting the analysis in a way where you are not only correct, but everyone else can see that you are too.

Might we be able to generalise this happy result further?

  • Is there a broader range of long run effects in which the discount rate still cancels out?

  • Are there other disputed parameters (empirical or normative) that also cancel out in those cases?

What I found is that this can indeed be greatly generalised, creating a domain in which we can robustly compare long run effects — where the comparisons are completely unaffected by different assumptions about discounting.

Let’s start by considering a basic model where $u(t)$ represents the ‘instantaneous utility’ or ‘flow utility’ of a representative person at time $t$. That is, it is a measure of their wellbeing such that the integral of $u(t)$ over a period of time is the utility (or wellbeing) the person accrued over that period. This is normalised such that the zero level for $u(t)$ is the level where someone is indifferent about adding a period at this level to their life.

Now let $n(t)$ represent the number of people alive at time $t$ and let $d(t)$ be the discount factor for time $t$. This discount factor is another way of expressing pure time preference. A constant rate of pure time preference ρ corresponds to a discount factor that drops exponentially over time: $d(t) = e^{-\rho t}$. But $d(t)$ need not drop exponentially — indeed it could be any function at all. So could $n(t)$ and $u(t)$. The only constraints are that they are all integrable functions and that the integral below converges to a finite value.

On this model, we’ll say the value of the entire longterm future is:

$V = \int_0^\infty d(t) \cdot n(t) \cdot u(t) \, dt$

(This equation assumes the total view in population ethics, where we add up everyone’s utility, but we’ll see later that this can be relaxed.)
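
Here is a minimal numerical sketch of this integral (the particular curves chosen for $d(t)$, $n(t)$ and $u(t)$ below are arbitrary illustrations, not taken from the post):

```python
import numpy as np
from scipy.integrate import quad

def value(d, n, u, horizon=50_000):
    """V = integral of d(t) * n(t) * u(t) dt, with a long finite horizon standing in
    for infinity (the discount factors used below make the tail negligible)."""
    V, _ = quad(lambda t: d(t) * n(t) * u(t), 0, horizon, limit=500)
    return V

n = lambda t: 8e9 * np.sqrt(1 + 0.002 * t)  # an arbitrary, slowly growing population curve
u = lambda t: 1.0 + 0.1 * np.log1p(t)       # an arbitrary, slowly improving flow utility

# The level of V is highly sensitive to the choice of discount function:
print(value(lambda t: np.exp(-0.01 * t), n, u))   # constant rho = 1%
print(value(lambda t: np.exp(-0.001 * t), n, u))  # constant rho = 0.1%: roughly an order of magnitude larger
```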

Now suppose that we have the possibility of improving the quality of life, from $u(t)$ to some other curve $u^+(t)$, without altering $d(t)$ or $n(t)$. And let’s make a single substantive assumption: that this improvement is a rescaling of the original pattern of flow utility: $u^+(t) = k \, u(t)$ for some scaling factor $k$. How does this change the value of the future?

$V^+ = \int_0^\infty d(t) \cdot n(t) \cdot k \, u(t) \, dt$

$V^+ = k \int_0^\infty d(t) \cdot n(t) \cdot u(t) \, dt$

$V^+ = k V$

So on this model, scaling up a curve of utility over time simply leads to scaling up the discounted total value under that curve.

What about the value of extinction? We can model extinction here by $n(t)$ going to zero. If so, the value of the integral from that point on falls to zero. (Similarly, other kinds of existential catastrophe could be modelled as $u(t)$ going to zero.)

Now let’s consider the expected value of developing a risky technology, where we have a probability $s$ of surviving the development process and scaling up all future utility by a factor of $k$, but otherwise we go extinct.

$EV(\text{develop}) = s V^+ + (1-s) \cdot 0 = s V^+ = s k V$

This expected value still depends on the discount function $d(t)$, because $V$ depends on $d(t)$. But consider the decision boundary: the point at which the expected value of taking the risk ($EV(\text{develop})$) switches from being worse than the value of the status quo ($V$) to being better. The boundary occurs where the two are equal:

$EV(\text{develop}) = V$

So:

$skV = V$

$s = \frac{1}{k}$

This decision boundary for deciding whether it is worth taking on a risk of extinction to make a lasting improvement to our quality of life has no dependence on the discount function $d(t)$. Nor does it depend on the population curve $n(t)$. And because it doesn’t depend on the population curve, the decision doesn’t depend on whether we weight time periods by their populations or not. It is thus at the same place for either of the two most commonly used versions of population ethics within economics: the time integral of total flow utility and the time integral of average flow utility.
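
Here is a small numerical check of that independence (a sketch with arbitrary illustrative curves, in the same style as the earlier snippet): for every combination of discount function and population curve tried, the value of developing at $s = 1/k$ exactly matches the value of the status quo.

```python
import numpy as np
from scipy.integrate import quad

def value(d, n, u, horizon=50_000):
    # Truncated version of the value integral; only ratios of values matter below.
    V, _ = quad(lambda t: d(t) * n(t) * u(t), 0, horizon, limit=500)
    return V

u = lambda t: 1.0 + 0.1 * np.log1p(t)              # arbitrary flow-utility curve
k = 10.0                                           # the improvement scales utility by a factor of 10
s = 1 / k                                          # survival probability at the predicted boundary

discounts = [lambda t: np.exp(-0.01 * t),          # constant rho = 1%
             lambda t: np.exp(-0.001 * t),         # constant rho = 0.1%
             lambda t: 1 / (1 + 0.02 * t) ** 2]    # a non-exponential discount function
populations = [lambda t: 8e9 + 0 * t,              # constant population
               lambda t: 8e9 * np.sqrt(1 + 0.002 * t)]  # growing population

for d in discounts:
    for n in populations:
        V = value(d, n, u)                         # value of the status quo
        V_plus = value(d, n, lambda t: k * u(t))   # value if we survive and utility is scaled by k
        EV_develop = s * V_plus                    # the extinction branch contributes zero
        print(round(EV_develop / V, 6))            # 1.0 in every case: the boundary does not move
```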

And one can generalise even further.

This model assumed that all the extinction risk (if any) happens immediately. But we might instead want to allow for any pattern of risk occurring over time. We can do this via a survival curve, where the chance of surviving at least until $t$ is denoted $S(t)$. This can be any (integrable) non-increasing function that starts at 1. If so, then the expected value of the status quo goes from:

$V = \int_0^\infty d(t) \cdot n(t) \cdot u(t) \, dt$

to

$EV = \int_0^\infty S(t) \cdot d(t) \cdot n(t) \cdot u(t) \, dt$

And this has simply placed another multiplicative factor inside the integral. So as long as the choice we are considering doesn’t alter the pattern of existential risk in the future, the argument above still goes through. Thus the decision boundary is independent of the future pattern of extinction risk (if that is unchanged by the decision in question).
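
Continuing the same numerical sketch (again with arbitrary, illustrative curves): a survival curve is just one more factor inside the integrand, and as long as the same $S(t)$ applies to both options, the boundary stays at $s = 1/k$.

```python
import numpy as np
from scipy.integrate import quad

def expected_value(S, d, n, u, horizon=50_000):
    """EV = integral of S(t) * d(t) * n(t) * u(t) dt (truncated at a long horizon)."""
    EV, _ = quad(lambda t: S(t) * d(t) * n(t) * u(t), 0, horizon, limit=500)
    return EV

S = lambda t: np.exp(-0.0005 * t)            # arbitrary ongoing background extinction risk
d = lambda t: np.exp(-0.01 * t)              # discount factor
n = lambda t: 8e9 * np.sqrt(1 + 0.002 * t)   # population curve
u = lambda t: 1.0 + 0.1 * np.log1p(t)        # flow utility

k, s = 10.0, 1 / 10.0
EV_status_quo = expected_value(S, d, n, u)
EV_develop = s * expected_value(S, d, n, lambda t: k * u(t))
print(round(EV_develop / EV_status_quo, 6))  # 1.0: the boundary is unaffected by S(t)
```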

Jones’s model has more economic detail than this, but ultimately it is a special case of the above. He considers only constant discount rates (= exponential discount functions), assumes no further risk beyond the initial moment, and takes the representative flow utility of the status quo, $u(t)$, to be constant. He considers the possibility of it changing to some other, higher constant level $u^+(t)$, which can be considered a scaled-up version of $u(t)$, where $k = \frac{u^+(0)}{u(0)}$.
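
To see the cancellation in that special case explicitly, here is a stripped-down sketch in the notation above (not Jones’s own notation): take $d(t) = e^{-\rho t}$, a population $n(t) = n_0 e^{g t}$ with $\rho > g$ so the integral converges, and a constant flow utility $\bar u$ that the risky option would scale to $k \bar u$. Then

$V = \int_0^\infty e^{-\rho t} \, n_0 e^{g t} \, \bar u \, dt = \frac{n_0 \bar u}{\rho - g}$

$EV(\text{develop}) = s k V = \frac{s k \, n_0 \bar u}{\rho - g}$

Setting $EV(\text{develop}) = V$, the common factor $\frac{n_0 \bar u}{\rho - g}$ cancels, leaving $s = 1/k$ with no dependence on $\rho$ or $g$.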

So the argument above generalises Jones’s class of cases where comparisons of longterm effects are independent of discounting in the following ways:

  • constant $u(t)$ and $u^+(t)$  $\Rightarrow$  $u(t)$ may vary and $u^+(t)=k\ u(t)$

  • constant ρ  $\Rightarrow$  time-varying ρ (so long as value converges)

  • constant population growth rate $\Rightarrow$ exogenous time-varying population growth

  • total view of population ethics  $\Rightarrow$  either total or time integral of average

  • no further extinction risk $\Rightarrow$ exogenous time-varying extinction risk

It is worth noting that Jones’s model is addressing the longterm balance of costs and benefits of advanced AI via a question like this:

if we could either get the benefits of advanced AI at some risk to humanity, or never develop it at all, which would be best?

This is an important question, and one where (interestingly) the way we discount may not matter. On his model it roughly boils down to this: it would be worth reducing humanity’s survival probability from 100% down to $s$ whenever we can thereby scale up the representative utility by a factor of $1/s$.

In some ways, this is obvious — being willing to risk a 50% chance of death to get some higher quality of life is arguably just what it means for that new quality level to be twice the old one. But its implications may nonetheless surprise. After all, it is quite believable that some technologies could make life 10 times better, but a little disconcerting that it would be worth a 90% chance of human extinction to reach them.

One observation that makes this implication a little less surprising is to note that there may be ways to reach such transformative technologies for a lesser price. Just because it may be worth a million dollars to you to get a bottle of water when dying of thirst, doesn’t mean it is a good deal when there is also a shop selling bottles of water for a dollar apiece. Those people who are leading the concern about existential risk from AI are not typically arguing that we should forgo developing it altogether, but that there is a lot to be gained by developing it more slowly and carefully. If this reduces the risk even a little, it could be worth quite a lengthy delay to the stream of benefits. Of course this question of how to trade years of delay with probability of existential risk does depend on how you discount. Alas.
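
To see why discounting re-enters here, consider the simplest illustrative case (my own simplification, not worked through in the post): exponential discounting at rate $\rho$ and a permanent improvement whose flow value is constant. Delaying the start of that improvement by $T$ years scales its present value by

$\frac{\int_T^\infty e^{-\rho t}\, dt}{\int_0^\infty e^{-\rho t}\, dt} = e^{-\rho T}$

So at $\rho = 1\%$ a ten-year delay gives up roughly 10% of the gain, whereas at $\rho = 0$ (over a long but finite future) it gives up almost nothing, and the amount of risk reduction needed to justify a given delay shifts accordingly.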

In my own framework on longterm trajectories of humanity, I call anything that linearly scales up the entire curve of instantaneous value over time an enhancement. And I showed that, like the value of reducing extinction risk, the value of an enhancement scales in direct proportion to the entire value of the future, which makes comparisons between risk reduction and enhancements particularly easy (much as we’ve seen here). But in that framework, there was an explicit assumption of no pure time preference, so I had no cause to notice how ρ (or equivalently, $d(t)$) completely cancels out of the equations. So this is a nice addition to the theory of how to compare enhancements to risk reduction.

One might gloss the key result about robustness to discounting procedure as follows:

When weighing the benefits of permanently scaling up quality of life against a risk of extinction, the choice of discounting procedure makes no difference — nor does the population growth rate or subsequent pattern of extinction risk (so long as these remain the same).

In cases like these, discounting scales down the magnitude of the future benefits and of the costs in precisely the same way, but leaves the point where the benefits equal the costs untouched. It can thus make a vast difference to evaluations of future trajectories, but no difference at all to comparisons.

I hope that this region of robustness to the choice of discounting might serve as an island of agreement between people studying these questions, even when they come from very different traditions regarding valuing the future. Moreover, the fact that the comparison is robust to the very uncertain questions of the population size and survival curve for humanity across aeons to come, shows that at least in some cases we can still compare longterm futures despite our deep uncertainty about how the future may unfold.
