Giulio Fornaroli

Can Moral Philosophy Tell You Who Should Get the Ventilator?

One thing epidemics have in common with other disasters is that they force people into tragic decisions. By “tragic decisions,” I do not mean decisions that will turn out, after the event, to have had devastating consequences, nor genuine dilemmas. I mean, instead, decisions that we would rather not face in the first place, because even the prospect of thinking about the choice feels like an unbearable burden. In these cases, the burden is not necessarily removed if we find out, ex post, that we made the right call: tragic decisions involve sacrifices, which we might come to regret even if we recognize they were necessary.

A tragic decision that has been invoked in recent weeks concerns the use of ventilators, the sophisticated machines that allow people to breathe artificially when their lungs are severely obstructed. Even though the peak of the emergency now seems to have passed (at least in Europe and the US), the question of what to do when medical resources become scarce remains pressing.

On Trolleys, Philosophy, and Ventilators

Many believe that, when tragic decisions emerge, moral philosophy is in its natural habitat. Indeed, many moral philosophy courses commence with the following tragic decision: an agent is required to choose whether a trolley can proceed on its track and kill five innocent bystanders or be derailed to an alternative track and kill just one.

“Trolley-ology,” as some have started calling it,[1] has become a sub-discipline of its own. Trolley-like examples have become increasingly complex, to account for more variables than in the simplest one vs. five case. One can, for instance, introduce the complication that the person alone on the track – the one who would be saved if the agent does not derail the trolley – is a friend. Or maybe the five people who would be killed if the agent does nothing are not completely innocent – in fact, they are there to set a bomb. Or they are there because they consciously took a risk – they knew the tracks were used occasionally, but they decided to cross them anyway to take a shortcut along their walking route. And why should the moral agent be an external witness? What if he is one of the five on the main track and can decide to kill the one person and save himself (and four others)? What if he is on the other track and can decide whether to sacrifice himself to save the other five?

The aspiration of “trolley-ology” is that the more we become adept at giving a solution to these riddles, the more we can come up with a fair answer to real dramatic problems, such as who gets the ventilator. By carefully balancing our intuitions about the various trolley cases, we arrive at a list of general principles (the fewer the better, ideally) which we can then apply to the cases that puzzle us in real life. For example, one principle can be what moral philosophers call “aggregation” – the idea that, at least when the benefits you allocate to different individuals are sufficiently similar, you should try to benefit the greatest number. (As five people’s lives are more than one, and the value of a life is identical for each person, you should save the five and sacrifice the one.) Or, you can introduce another principle that says that moral agents enjoy a prerogative not to sacrifice themselves (and maybe their nearest and dearest), even when that would be required from an objective viewpoint. (So, if you are the person alone on the alternative track, you can save yourself and sacrifice the five and we won’t blame you for that, although we may call you a hero if you do the opposite.) Then, we can agree on some metric we use when weighing the two principles in cases of conflict.

Consider, however, the factors in play in the decisions over ventilators. First, when doctors realize ventilators are becoming scarce, some patients are already attached to the machines. We can start rationing resources only for those patients who arrive after the moment of reckoning. But isn’t it unfair to give priority to people who just happened to need resources before others? On the other hand, isn’t detaching someone from the ventilator worse than simply denying them access to the machine? (It feels worse – at least to me.)

Even ignoring this initial complication, which criteria can we use to determine priorities? There are various ways to calculate how much a single life is worth, however appalling that might sound at first sight. The most common is the so-called QALY index – quality-adjusted life years: the number of years one is expected to live after the intervention, with each year weighted from 0 to 1 according to its expected health status (the “quality”).
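To make the metric concrete, here is a minimal sketch of how a QALY score is computed. The patients, life expectancies, and quality weights below are entirely hypothetical illustrations, not clinical figures:

```python
# Illustrative QALY calculation: each expected life-year after the
# intervention is weighted by a quality score between 0 (death) and
# 1 (full health). All numbers below are made up for illustration.

def qaly(phases):
    """Sum quality-weighted life years over (years, quality) phases."""
    return sum(years * quality for years, quality in phases)

# A younger patient expected to live 40 more years in good health:
young = qaly([(40, 0.9)])             # roughly 36 QALYs

# A 68-year-old expected to live 12 more years: 8 in moderate
# health, then 4 with significant impairment:
older = qaly([(8, 0.7), (4, 0.4)])    # roughly 7 QALYs
```

On this purely quantitative criterion, the younger patient would be prioritized – which is exactly the feature the next paragraphs question.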

QALY clearly discriminates against the old (and the already ill). We might think, however, that this is an overall “fair” discrimination, as old age is a status we all acquire at some point in our lives (if we’re lucky), regardless of our sex, ethnicity, income, and all the other characteristics usually taken as grounds for discrimination. At the very least, we can agree that QALY indexes are neutral across persons.

But, paradoxically, the neutrality of the QALY criterion can be a double-edged sword. Imagine that the 68-year-old whom the QALY criterion would deny the ventilator is a renowned epidemiologist who, if recovered, might make a critical contribution to solving the medical crisis. If we have chosen “aggregation,” we might prefer saving the epidemiologist now, even at the cost of denying the ventilator to a younger patient, as doing so will save more lives in the future. Neutrality clashes with consequentialism – the moral ideal, which underlies “aggregation,” that to decide what is the right thing to do, you ought to look at the overall consequences.

Neutrality may also clash with another commonly invoked (and intuitively compelling) moral principle, the one according to which people should bear the consequences of their actions. Call this the “responsibility” principle. If the only criterion for selecting who gets the ventilator is the expected post-intervention QALY, we cannot discriminate against people who have consciously taken risks that have compromised their health. Maybe sacrificing “irresponsible” people is excessively harsh, but isn’t it unfair to give no consideration to how one has spent their life when deciding which of two people can be saved?

And is “aggregation” such a compelling principle after all? Some people think it implausible, because, as John Rawls remarked, it violates the “separateness of persons.”[2] By accepting that a sacrifice to one person can be compensated by a benefit to another, you are employing across different people criteria of rational conduct that would be appropriate for a single person, who might well forsake a smaller benefit today to gain a larger one tomorrow.[3]

Trolley Skepticism and the Unavoidability of Tragedy

If you are now overwhelmed by the abundance of considerations that become relevant in a single tragic decision, you might feel that there is no trolley example we can fathom that will help clear our minds when real tragic decisions befall us. I am not disputing the use of trolley-like examples in moral philosophy in general. Probing your intuitions is a helpful exercise, at least when it gives otherwise unconcerned people some food for thought. But there is a presumption amongst some who engage in “trolley-ology” that moral philosophy, when adequately construed, can produce the perfect algorithm for tragic decisions. One can even imagine moral philosophy condensed into a nice instruction manual, stored in an emergency cabinet next to the fire extinguisher with “Use me in case of tragic decisions” written on its cover.

My point of contention with “trolley-ology” concerns a certain ambition I see in it to remove the “tragic” from “tragic decision.” I doubt the ambition can be fulfilled, and not only because of the technical problem of the immense number of variables a functioning algorithm for tragic decisions would have to accommodate. The wider problem, as I anticipated at the outset, is that even knowing that one’s decision was blameless (for example, because one followed the moral instruction manual faultlessly) does not shield us from what Bernard Williams called “agent-regret,”[4] the peculiarly human experience of feeling guilty or remorseful for one’s involvement in a course of action that led to dramatic consequences, even though one’s action was, at the time one took it, the “right thing to do.”

Agent-regret is not a matter of moral obligation; I cannot see how morality can require one to feel agent-regret whenever one has blamelessly harmed someone else. But there seems to be something puzzling and troublesome about individuals who never feel that way. Hence, a moral theory whose main aim is to elaborate a strategy through which individuals can, when faced with tragic decisions, identify the course of action that will leave them blameless seems to me impoverished or, at least, less interesting.

I do not want to imply morality is never about finding the right thing to do. But I also believe morality should leave certain areas of human conduct to individual discernment. To return to the original scenario, I am confident most medics will decide who should get the ventilator in a reasonable manner, even if they are not trained in moral philosophy and would not be able to give a definitive answer to most trolley-like questions if they were asked about them outside the emergency room.

So, to conclude, I am both skeptical and slightly uncomfortable about the idea that moral philosophy will ever tell us who should get the ventilator. People faced with tragic decisions tend to find their own fair balance among the competing considerations. Some ways of balancing will strike us as wrong, even when we consider the peculiar situation of the agent making the decision. Others will feel impeccable. In many cases, however, I suspect we prefer to suspend judgement. We notice that the decision caused some harm, but the peculiar condition of the agent provides an excuse for possible mistakes. And we are particularly well-disposed towards the agent responsible for the decision, precisely because we know that they are the ones who will have to live with it.

Not only do I doubt that moral philosophy can offer relief in tragic circumstances; I believe many tragic decisions can only be solved (if “solved” is even the right word) when they directly confront us. They cannot be replicated in the absence of the dramatic circumstances that trigger them. In fact, I suspect many attempts at reproducing in the seminar room the tragic decisions that one needs to take in times of emergency only give the unfortunate seminar participants, to borrow another of Williams’s phrases, “one thought too many.”[5]

[1] Barbara Fried, “What *Does* Matter? The Case for Killing the Trolley Problem (or Letting It Die),” The Philosophical Quarterly 62.248 (2012).

[2] Rawls’s target was not aggregation in general, but the version of aggregation in use within utilitarianism. Many (including myself) believe that a certain form of aggregation is inevitable and that Rawls himself subscribed to it.

[3] For a ventilator-oriented reconstruction of the debate over aggregation, see the op-ed by philosopher Scott Hershovitz.

[4] The original formulation is in the essay “Moral Luck,” reprinted in the collection Moral Luck (Cambridge: Cambridge University Press, 1981), pp. 20–39.

[5] The phrase appears in “Persons, Character, and Morality,” in Moral Luck, p. 18.

Photo by Javier Allegue Barros


©2020 by Hannah McHugh