David Thorstad

High Risk, Low Reward: A Challenge to the Astronomical Importance of Existential Risk Mitigation




Derek Parfit (1984) asks us to consider two scenarios. In the first, a war kills 99% of the world population. This event, Parfit urges, would be a great tragedy. Billions of lives would be lost, and the recovery would take decades or even centuries. In the second, a war kills 100% of the world population. This event, Parfit holds, would be many times worse. Untold numbers of future human lives would never be lived. Humanity would never settle the stars. There would be no more art, music, or philosophy. And these things might never return.


Many followers of Parfit have drawn the lesson that it is extremely important to mitigate existential risks, risks of existential catastrophes involving “the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development” (Bostrom 2013, p. 15). For example, we might work to regulate chemical and biological weapons or to reduce the threat of nuclear conflict.


Many philosophers defend two claims about existential risk. First, they are Existential Risk Pessimists: they think that levels of existential risk are currently very high. For example, Toby Ord (2020) estimates the risk of existential catastrophe by 2100 at one in six, and participants at the Oxford Global Catastrophic Risk Conference in 2008 estimated a nearly one in five chance of human extinction by 2100 (Sandberg and Bostrom 2008).


Second, they hold the Astronomical Value Thesis: the best-available efforts to mitigate existential risk have astronomical value, far exceeding the value of competing interventions such as global health and development work (Greaves and MacAskill 2021; Ord 2020). Because the future of humanity could be long, large, and flourishing, anything we can do to increase the probability of that future being realized may have astronomical value.
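The expected-value reasoning behind this thought can be put schematically (my own gloss, not notation from the paper). If the future would contain value \(V\) conditional on avoiding existential catastrophe, then an intervention that raises the probability of avoiding catastrophe by \(\Delta p\) has expected value

\[
\Delta p \cdot V,
\]

so if \(V\) is astronomical, even a minuscule \(\Delta p\) yields astronomical expected value.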


It is natural to suppose that Existential Risk Pessimism supports the Astronomical Value Thesis. After all, it is usually more valuable to mitigate large risks than small ones. I use a series of models to draw a counterintuitive conclusion: far from supporting the Astronomical Value Thesis, Existential Risk Pessimism tells strongly against it, so strongly that it threatens to falsify the thesis outright. I then argue that the most viable strategy for reconciling the two claims depends on unlikely empirical assumptions, and I draw out the implications of this discussion.


I begin with a simple model of existential risk mitigation due to Toby Ord (2020). On this model, it turns out that the Astronomical Value Thesis is false and that Existential Risk Pessimism has no bearing on it: the value of reducing this century's risk is capped at the value of a single century, whatever the level of risk. This is, in fact, something of a best case for the Pessimist.
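Here is a sketch of the calculation, in my own notation (the paper's presentation may differ). Suppose each century of human existence has value \(v\), each century carries a constant risk \(r\) of existential catastrophe, and an intervention \(X\) reduces this century's risk by the fraction \(f\), from \(r\) to \((1-f)r\). Then the expected value of the world, without and with the intervention, is

\[
\mathbb{E}[W] = \sum_{i=1}^{\infty} v(1-r)^i = \frac{v(1-r)}{r},
\qquad
\mathbb{E}[W \mid X] = \frac{v\bigl(1-(1-f)r\bigr)}{r},
\]

so the value of the intervention is \(\mathbb{E}[W \mid X] - \mathbb{E}[W] = fv\): at most the value of a single century, however high \(r\) may be. The risk level \(r\) cancels out, which is why Pessimism neither helps nor hurts on the simple model.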


I explore a variety of ways in which the simple model can be expanded: allowing for growth in the value of future centuries; allowing risk-mitigation efforts in current centuries to affect risk in distant centuries; and distinguishing between absolute and relative notions of risk. In each case, I show, the Astronomical Value Thesis remains false, and Existential Risk Pessimism comes to tell increasingly strongly against it.
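A rough way to see why growth does not rescue the thesis for the Pessimist (again in my notation, not the paper's): future centuries are in effect discounted by the probability of surviving to reach them, so with growing century values \(v_i\) the expected value of the world becomes

\[
\mathbb{E}[W] = \sum_{i=1}^{\infty} v_i (1-r)^i,
\]

and under Pessimist levels of risk the survival factor collapses quickly. At \(r = 0.2\) per century, a century fifty centuries from now is weighted by \((0.8)^{50} \approx 1.4 \times 10^{-5}\). The higher the estimate of \(r\), the less the growing terms \(v_i\) can contribute, so Pessimism actively drives down the value of mitigation once growth enters the picture.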


If Existential Risk Pessimism is the problem, then the natural way to reconcile Pessimism with the Astronomical Value Thesis is to relax our pessimism. We might hold that risk is very high now and will remain high for a few centuries, but that if humanity survives this perilous time, existential risk will then drop to a very low level and remain at a low level for the rest of human history. This is known as the Time of Perils Hypothesis. For the Time of Perils Hypothesis to reconcile Existential Risk Pessimism with the Astronomical Value Thesis, we must hold that risk will drop within just a few centuries, and drop by a very large amount (on many models, at least four orders of magnitude).
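A toy numerical sketch makes vivid why the drop must be so large (this is my own construction with illustrative parameter values, not code from the paper). It computes the expected value of the future, in units of one century's value, under a Time of Perils profile, together with the gain from halving this century's risk:

```python
# Toy Time of Perils model: risk is r_high for the first n_perils
# centuries, then r_low forever after. Parameter values are illustrative.

def expected_value(r_high, r_low, n_perils, v=1.0):
    """Expected number of value-v centuries humanity enjoys."""
    ev, p_alive = 0.0, 1.0
    for _ in range(n_perils):
        p_alive *= 1 - r_high                 # survive a perilous century
        ev += p_alive * v
    ev += p_alive * v * (1 - r_low) / r_low   # geometric tail after the perils
    return ev

def value_of_mitigation(f, r_high, r_low, n_perils, v=1.0):
    """Gain from cutting this century's risk from r_high to (1-f)*r_high.
    Every term in the sum contains the factor for surviving century one,
    so raising that factor rescales the whole expected value."""
    base = expected_value(r_high, r_low, n_perils, v)
    return base * ((1 - (1 - f) * r_high) / (1 - r_high) - 1)

r, f = 0.2, 0.5   # Pessimist-style risk (~1 in 5 per century); halve it once
print(value_of_mitigation(f, r, r_low=r, n_perils=10))      # ~0.5 centuries
print(value_of_mitigation(f, r, r_low=1e-4, n_perils=10))   # ~135 centuries
```

With risk held at one in five forever, halving this century's risk is worth about half a century of value. If risk instead falls to one in ten thousand after ten perilous centuries, the same intervention is worth roughly 135 centuries, and the gain scales roughly with \(1/r_{\text{low}}\), so each further order-of-magnitude drop in post-perils risk buys roughly another order of magnitude of value. Only a very large, very fast drop makes mitigation look anything like astronomical.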


The Time of Perils Hypothesis is a strong claim, and it needs to be supported by a correspondingly strong argument. In the second part of the paper, I explore three leading arguments for the Time of Perils Hypothesis. The first uses an economic model of optimal resource allocation to argue that as societies grow wealthier, they will quickly stamp out most existential risks. The second holds that as humanity settles the stars, levels of existential risk will drop. And the third paints an optimistic picture on which our descendants will be wise enough to realize the folly of risking humanity’s future for short-term gains. I argue that these considerations are not strong enough to ground the Time of Perils Hypothesis. If that is right, then we should place little faith in the Time of Perils Hypothesis, and consequently should take Existential Risk Pessimism to tell strongly against the Astronomical Value Thesis.


This discussion has implications for ethical longtermism, a perspective that emphasizes the importance of acting to benefit the long-term future (Greaves and MacAskill 2021; Cousens 2022). One of the most common arguments for longtermism is that it is very important to do what we can to mitigate existential risks, since existential risks threaten what could otherwise be a long and prosperous future. While I would not want to deny that existential risks should be taken seriously, the arguments in this paper tend to reduce the moral importance of existential risk mitigation. In doing so, they may also threaten the argument for longtermism based on the moral importance of existential risk mitigation.


The full paper is available here. You can also read a blog series about the paper on my blog, Reflective Altruism, or see me talk about the paper here.


Works cited

Bostrom, Nick. 2013. “Existential risk prevention as a global priority.” Global Policy 4: 15-31.

Cousens, Chris. 2022. “The magic we owe the future”. What to do about now? <https://www.whattodoaboutnow.com/post/the-magic-we-owe-the-future>.


Greaves, Hilary and MacAskill, William. 2021. “The case for strong longtermism”. Global Priorities Institute Working Paper 5-2021. <https://globalprioritiesinstitute.org/hilary-greaves-william-macaskill-the-case-for-strong-longtermism-2/>.

Ord, Toby. 2020. The precipice. Bloomsbury.

Parfit, Derek. 1984. Reasons and persons. Oxford University Press.

Sandberg, Anders and Bostrom, Nick. 2008. “Global catastrophic risks survey”. Technical report 2008-1. Future of Humanity Institute. <https://www.global-catastrophic-risks.com/docs/2008-1.pdf>.


