Chris Cousens

The Magic We Owe The Future


[Image: a silhouette of a person looking at a bright starlit sky]

2022 has been a rollercoaster for ‘longtermism’. This is the idea that to do the most good, we need to help not only people who exist now, but also those who might exist in the future. Longtermism has become very influential. This might mean saving resources to pass on to those who come after us; it might also mean investing in asteroid detection, so we don't end up like the dinosaurs.


Longtermism enjoyed the highs of the media blitz for William MacAskill’s book What We Owe The Future. Now, it faces uncertainty with the collapse of cryptocurrency exchange FTX, whose founder Sam Bankman-Fried was an important source of funding.


While FTX was flying high, as much as $46 billion was invested in longtermist projects (although this figure has since been revised down). Without all that crypto-money, we need to look for less resource-intensive ways to positively affect the future. Rather than pursuing expensive technological projects, we should explore the untapped power of magic. If we become sufficiently powerful wizards, we might do all of the good longtermism promises at a fraction of the cost.


When we make decisions, we should think about how our actions affect other people. If I drive too fast, I might injure someone. And we don’t just think about people immediately affected. Even if no one alive today gets hurt, I still shouldn’t dump chemicals into groundwater that might poison future generations.


Longtermism goes further. If we should care about people alive in one hundred years, we should also care about those alive in one thousand years. Or one million years! What we do today can affect these future lives. For example, if we fail to stop climate change, fewer people might exist, and their lives might be much worse.


So, future people matter, and what we do now can affect their lives. What, then, should we do? Inspired by utilitarianism, longtermism asks how to do the most good.


Making the lives of everyone alive today better by a large amount would do some good, but not as much good as making the lives of so many future people better by a small amount. Around 10 billion people (that’s a 1 followed by 10 zeros) will live this century. That is just so much smaller than 10^45 people (MacAskill’s ‘moderate’ calculation – a 1 followed by 45 zeros) that those who want to do the most total good need to think about the future as much as (if not more than) the present.
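To make the arithmetic vivid, here is a toy calculation in Python. The welfare ‘units’ and the size of each boost are made-up assumptions; only the two population figures come from the discussion above.

```python
# Toy comparison of total welfare gains, in arbitrary "welfare units".
# The per-person gains are illustrative assumptions.

present_people = 10**10        # roughly the 10 billion alive this century
future_people = 10**45         # MacAskill's 'moderate' estimate

gain_now = present_people * 1.0       # a large gain: 1 unit per present person
gain_future = future_people * 1e-9    # a tiny gain: a billionth of a unit each

print(f"Helping everyone now:  {gain_now:.1e} units")     # 1.0e+10
print(f"Helping future people: {gain_future:.1e} units")  # 1.0e+36
print(f"Future wins by a factor of {gain_future / gain_now:.0e}")  # 1e+26
```

Even a billionth of a unit per future person swamps a full unit for everyone alive today; that ratio is what drives the longtermist conclusion.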


There are lots of ways we might help future people. One is to make future lives better. Nick Bostrom, another prominent longtermist, suggests that future ‘posthumans’ might be ‘enhanced’ with technology, able to live better lives. Perhaps we will create huge virtual realities populated by digital people living simulated (happy) lives. The sooner we can create such technology, Bostrom argues, the more total good we will have done.

This might sound like science fiction. But the other goal of longtermism concerns more familiar scenarios. An event that wipes out humanity would prevent all those future people from ever existing. ‘Existential risks’, harms that threaten to end the human species, must be prevented, even at the cost of people’s welfare today. So, we need to invest in asteroid detection and deflection. We need to invest in space travel, so we no longer rely on the habitability of a single solar system. And we need to develop AI technology—as long as it doesn’t try to wipe us out. By contrast, a disaster that leaves a handful of survivors would be a barely noticeable bump in the road, slightly delaying us getting to 10^45. As MacAskill says, if human history is a book, we’re still on page one.


MacAskill’s book leaves some of this speculation aside, but it looms larger in his other work (here with Hilary Greaves), and that of other prominent longtermists like Bostrom. Émile Torres, a critic of longtermism, has suggested that minimising the sci-fi scenarios in What We Owe The Future makes it more palatable for readers new to longtermism – a savvy marketing move.


This future-centred resource distribution might seem troubling. It suggests a lot of investment in technology for the future, while lots of people are suffering from poverty and preventable diseases now. But longtermism teaches us that only worrying about the 8 billion present people overlooks the 10^45 future people – just as only worrying about poverty in your own country overlooks the plight of the poor overseas. So, ‘existential threats’ like asteroids, plagues, and nuclear war are the main things we need to be worried about.


Even if there’s only a very small chance that they happen, the threat that these risks pose to 10^45 future people may well outweigh the needs of the mere 8 billion alive today. So, rather than transferring resources from wealthy countries to the global poor, longtermism gives us reasons to consider alternative approaches to maximising good (Greaves and Nick Beckstead, former CEO of the FTX Foundation, have argued along these lines). Mathematically, this may just be better value for money.
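A rough sketch of the expected-value comparison at work. The cost-per-life figure and the per-dollar reduction in extinction risk are placeholders invented for illustration, not anyone’s published estimates.

```python
# Expected lives helped per dollar: a deliberately crude sketch.
# All of the numbers below are placeholder assumptions.

# Option A: direct transfers to the global poor.
lives_per_dollar_transfer = 1 / 5_000   # assume ~$5,000 substantially helps one life

# Option B: reducing extinction risk.
future_lives = 10**45                   # potential future people
risk_cut_per_dollar = 1e-30             # assumed drop in extinction probability per dollar

ev_transfer = lives_per_dollar_transfer        # 2e-4 expected lives per dollar
ev_risk = future_lives * risk_cut_per_dollar   # 1e15 expected lives per dollar

print(f"Transfers: {ev_transfer:.0e} lives/$, x-risk: {ev_risk:.0e} lives/$")
```

The astronomical size of future_lives does almost all the work here, which is why the conclusion barely moves however small the assumed risk reduction becomes.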


Longtermists often consider how technological progress might improve the lives of future people (MacAskill also argues that we should steer future societies towards good ‘values’). But it is expensive to build marvellous machines to sustain digital people, or spaceships, or to nuke an asteroid. The earth’s resources are finite; since FTX’s collapse, so are the funds for longtermist research. We need to consider alternatives.


In place of expensive technologies, consider magic. If we can discover and harness magical powers, we can become extremely powerful wizards. With this arcane knowledge, we could create new dimensions populated with people whose welfare is secured through the magical provision of resources. We could teleport to new planets and magically make them habitable. We could enchant people, changing their perceptions about how their lives are going (and thus, by some utilitarian calculations, their welfare). And most importantly, we could use this magic to stop wars, remove bad leaders, and prevent disasters.


There are some important caveats. Magic may not be possible. But other research programmes longtermists consider may not be possible either. Some dualists, who argue that our minds are made of something radically different from the physiological stuff that makes up our brains, might say this about Bostrom’s digital people.


And magic might be misused. As much as Merlin helped King Arthur, Morgan le Fay hindered him. But this kind of double-edged sword is familiar in longtermist thinking. Just as we invest in the development of artificial intelligence, we also invest in AI ethics and AI safety, ensuring that the development of potentially devastating technology maximises potential benefits while minimising existential risks. Let’s invest in magic ethics, and magic safety, as we search for magic.


Now this might sound absurd. Why invest time and money into mystical research instead of technology? It is quite unlikely that magic will be discovered, or that it will have the excellent outcomes suggested above. But longtermists are used to thinking about unlikely scenarios with high-value upside. It is unlikely that the earth will be hit by a large asteroid in the next thousand years, but as it would be so bad if it happened, we should do what we can to prevent it.


Hilary Greaves, writing with MacAskill, suggests that this kind of calculation might seem ‘fanatical’: the chances of success look far too slim to act on. But it’s hardly fanatical to wear a helmet when cycling 35 miles, even if the chance of dying without one is ‘one-in-a-million’. This mirrors investment in asteroid detection or AI ethics. The chance of extinction may be low, but the harm of extinction is monstrously high.


It is hardly reckless to spend a small amount to pursue arcane research. A lone scholar might chance upon a mystical breakthrough with few resources. By contrast, technological advancement requires thousands of experts and billions of dollars making incremental progress. A small investment, say, $50,000, would be a drop in the ocean. And if it succeeds, the expected value for 10^45 people compounded over millions of years is... well, a lot.
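Just how much is ‘a lot’? One more toy calculation, in which the chance of the lone scholar succeeding is a pure (and deliberately minuscule) assumption:

```python
# The expected value of a $50,000 bet on discovering magic.
# The success probability is an assumption; the point is that almost
# any nonzero value yields an astronomical expected payoff.

cost = 50_000
p_success = 1e-20              # assumed chance the arcane research pays off
beneficiaries = 10**45         # future people whose welfare magic would secure

expected_lives = p_success * beneficiaries   # 1e25 expected lives helped
lives_per_dollar = expected_lives / cost     # 2e20 lives per dollar

print(f"{expected_lives:.0e} expected lives, or {lives_per_dollar:.0e} per dollar")
```

Even at odds of one in a hundred billion billion, the sheer number of beneficiaries makes the bet look like a bargain.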

We don’t yet have the data needed to calculate the likelihood of success, nor the investment required to improve our chances. Greaves argues that more research needs to be done to work out what we should do when we are unsure about the likelihood of different outcomes (and as Bostrom also suggests, we need to carefully assign correct credence to each possibility). But longtermists should divert at least some attention, and funding, towards research in:


1. Magical powers

2. Ethics of magic

3. The expected value of investment in magical research
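On the third item: a simple, entirely hypothetical way to handle uncertainty about the probabilities themselves is to hold several hypotheses about the true chance of success, give each a credence, and average across them. The credences and probabilities below are placeholders, not anyone’s considered estimates.

```python
# Credence-weighted expected value when the probability of magic is itself unknown.
# The hypotheses, and the credences assigned to them, are placeholder assumptions.

future_people = 10**45

hypotheses = [
    # (credence in this hypothesis, chance of success it implies)
    (0.90, 0.0),      # magic is flatly impossible
    (0.09, 1e-30),    # magic is just barely possible
    (0.01, 1e-15),    # magic is surprisingly tractable
]

ev = sum(credence * p * future_people for credence, p in hypotheses)
print(f"Credence-weighted expected value: {ev:.1e} future lives")  # ~1.0e+28
```

Notice that the result is dominated by the most optimistic hypothesis left standing, which is exactly the structural feature that makes this style of reasoning look ‘fanatical’.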


There is a lot of uncertainty here. But that is a hallmark of longtermism. We won’t know if our interventions have had their desired effects until thousands, perhaps millions, of years have passed. Given the incredible potential value of becoming a powerful wizard, this demands a place in our longtermist calculations. It is magic we owe the future.
