- Giulio Fornaroli
The Banality of Good: On Why Millionaires Love Effective Altruism
If you have recently heard about Effective Altruism (EA), it is probably for one of two reasons. The first is the publication of what has probably been the most widely publicized book in the history of Western philosophy, William MacAskill’s What We Owe the Future (if you know of another philosophy book advertised on the London Underground, drop me an email). The second is the collapse of FTX, a cryptocurrency exchange company. Sam Bankman-Fried, founder and CEO of the company, was a devout acolyte of Effective Altruism and, according to his own reconstruction, had been convinced by MacAskill to pursue a career in finance so that he could earn more to donate more to good causes, as prescribed by the movement.
A certain type of millionaire (especially the tech type) has a tendency to praise (and financially support) EA. Apart from Bankman-Fried, admiration for the movement has been expressed by none other than the wealthiest person on earth, Elon Musk, and (admittedly, some years ago) by Peter Thiel, the co-founder of PayPal and now one of the great financial supporters of the US Republican Party.
Guilt by association is a cheap way of dismissing ideas. But, if I were an Effective Altruist, I would be uncomfortable that my ideas attract such people, and I would want to know why that is. So, in this essay, I am going to try to understand the affinity between a movement that professes simple and noble ideas and a certain kind of not-so-noble character. I am not interested in how the ideas advanced by EA originated, but rather in how they have become so compelling for a particular audience.
The guiding idea of EA is, unsurprisingly for such a successful movement, remarkably simple. Its slogan is also the title of MacAskill’s previous book, Doing Good Better. Supposing altruism is something we are all interested in, why not make sure that, when we are being altruistic, we do so effectively? Put this way, the idea seems hard to dispute. Suppose you want to help a person in need – say, the child who risks drowning in a pond that philosophers love to talk about. Surely, it is important that you do that in the best possible way, in the sense that your action has the most chance of saving them. And, if a movement can help you choose better when you want to behave altruistically – for example, by creating websites through which you can check how various charities have used their money and to what effect – then that is a laudable initiative.
But all of this concerns what we may call means-based effectiveness, i.e., effectiveness in how you decide to pursue an altruistic end you have chosen. Effective Altruists, however, are not content with that. For them, behaving altruistically means adopting effective ends or, as MacAskill makes clear in the recent book, effective values, values that, once adopted, lead the people who follow them to make choices that promote the greater good. The greater good of whom? According to MacAskill’s recent views, the values you need to adopt are the ones that may be beneficial to the greatest number of human lives not just now or in the near future but in the entire future history of humanity.
Here Effective Altruism lies very close to classical utilitarianism. MacAskill, inspired by classical utilitarianism, regards more welfare as better, no matter who enjoys it: present people or future people. Someone who does not exist enjoys no welfare. Adopting beneficial choices then means increasing the number of human lives in the future, so long as such lives contain more pleasure than pain. And, because the future of humanity is potentially immense (consider all the years between now and the end of Earth’s habitability, or even the end of the universe, if humans start colonizing other planets), adopting beneficial choices for the future means avoiding at all costs the prospect of human extinction, which would mean the cancellation (or, more precisely, the non-coming-into-existence) of an almost infinite amount of units of pleasure. ('Human' is not even particularly relevant here; human lives are better than other creatures’ only insofar as – and so long as – they contain more pleasure than other animals’ or artificial entities’.)
Others have already criticized the peculiar population ethics of what is now called long-termism or the broader ethical framework of Effective Altruism. Here I want to focus, however, not so much on what Effective Altruism recommends but on the way it is supposed to make us feel when we try to reason morally.
As this is a deeply personal issue, let me give a personal example. When we got married, my partner and I wanted to give a portion of the money we received as presents to charities. We had to decide where to donate. We surveyed a couple of websites that help people be effective altruists in the means-based sense (they tell you how various charities have used their money, whether their projects have come to fruition, etc.). But, in the end, we also chose a particular charity because, beyond its being means-efficient, it adopted ends that felt closer to our own perception of what are particularly appalling injustices that must be redressed.
Was the charity we chose one that guaranteed to provide the greatest benefit to the highest number of individuals, calculating all potential lives in the longest future? The question strikes me as ludicrous: I am not even sure it guaranteed to provide the greatest net benefit, in comparison to other charities, to the highest number of present human lives! It is possible that some other charity would have been a better option if my intent was to maximize my beneficial contribution to the world. But that was never my intention. And, when I think about my sub-optimal contribution, I do not feel particularly bad about it. I do not feel the same way that I would if I had done one of the other actions that usually bear the mark of immorality, such as betraying a friend, breaking a promise or, case in point, not donating anything when you earn more than enough.
Maybe the ultimate truth about morality is that I was wrong to feel like that, and the Effective Altruists are right. But, so long as we cannot access ultimate moral truth, it is worth judging moral perspectives by considering whether they are attuned to the kinds of character that we have, or would like ourselves and others to have.
Which kind of person might feel particularly attracted by a moral perspective that suggests that the praiseworthy action is the one that maximizes one’s net positive contribution to human lives, i.e., one’s positive contribution once we subtract the evils one has caused? Imagine you are, for instance, Elon Musk. Then, clearly, you may find Effective Altruism very attractive, because that is the ethical perspective that can best demonstrate that you did well, morally speaking. Indeed, what is even the problem with laying off a couple of thousand people, or giving a platform back to one of the most toxic people on earth, when, with Tesla and SolarCity, you could argue that you have contributed more than any other individual to reducing carbon emissions? If what ultimately matters is your net beneficial contribution, and not, say, how you behaved with respect to any of the laid-off people, the calculus is, evidently, in your favour, and you can be satisfied with yourself. And the same can apply to almost all millionaires who have engaged in some beneficial action (whether charitable or profitable is irrelevant); the opportunity a millionaire has to produce an overall good outcome, even accounting for the evils, is, simply, much greater than the average individual’s.
Effective Altruists may agree that announcing a massive layoff programme the minute Musk bought Twitter, and readmitting Trump, were bad actions. But, when we evaluate the whole of a life – as we do whenever we ask ourselves if we have been a good person, overall – then, according to the logic of EA, these wrongs pale in comparison to the immense good produced by solar power and electric cars. Nothing would prevent Musk from seeing his life as relatively ethically successful overall: yes, he did some evil that he could have avoided doing, but his net contribution is still positive.
A standard accusation against utilitarianism is that it demands too much. But my impression has always been that the opposite is true: utilitarian moralities ask too little. Not in the sense that they require too little material sacrifice, but in that they too easily lead people who have produced something good, regardless of how they regularly treat or think about others, to feel good about themselves.
This is due to the simplicity of the utilitarian ideal, which turns ethical activity into the banal exercise of making sure that one’s contribution remains positive overall. If we believe, by contrast, that laying people off for no apparent reason is a bad way of dealing with others for which no amount of good can ever compensate, then we introduce the possibility of irredeemable moral failure. Unless we do something to address that failure in particular – for example, by attending to the complaints of the people involved – no amount of future good can make up for it. But we know how people like Musk are prone to thinking about failure; it is, after all, the capitalist’s mantra that failure is no more than another opportunity. That is why they may be attracted by a moral perspective that recommends, first and foremost, maximizing one’s positive contribution: even if you have done worse than you should, you can always contribute more, especially if you have plenty.
Notes

1. Kieran Setiya, 'The New Moral Mathematics', Boston Review, 2022, pp. 1–16.
2. Amia Srinivasan, 'Stop the Robot Apocalypse', London Review of Books, 37.18 (2015), 3–6; Federico Zuolo, 'Beyond Moral Efficiency: Effective Altruism and Theorizing about Effectiveness', Utilitas, 32.1 (2020), 19–32.
3. Effective Altruists disagree. And they offer sophisticated arguments for why not donating is better, all-things-considered, than donating inefficiently (in the end-based sense). See Joe Horton, 'The All or Nothing Problem', The Journal of Philosophy, 114.2 (2017), 94–104, and Theron Pummer, 'Whether and Where to Give', Philosophy & Public Affairs, 44.1 (2016), 77–95. I find such arguments unpersuasive, mostly for reasons expressed in Thomas Porter Sinclair, 'Are We Conditionally Obligated to Be Effective Altruists?', Philosophy & Public Affairs, 46.1 (2018), 36–59.