
0. Introduction

Arguments about maximising expected utility are one way in which we might arrive at the conclusion that impact on the far future is the most important consideration when choosing our actions today. However, arguments for longtermism can be, and have been, made from other moral and political frameworks. I will look at the political constructivism of John Rawls and his views on justice for future generations, and sketch out some ways we might expand on these when considering possibilities such as human extinction. In particular, I will consider how Rawls’s difference principle could be applied across time (although this is not a direction that Rawls himself endorsed).

There are several ways this might be useful. Showing that longtermism is not unique to utilitarianism may broaden its appeal; if we are convinced of longtermism, that can only be a good thing. It may also be useful to see how, while a different approach might still yield the result that the far future is very important, there might also be key differences in priorities. This would help us understand why people might have similar but differing intuitions.

1. What is longtermism?

Axiological longtermism is, very roughly, the view that impact on the far future is the most important consideration when choosing our actions. Deontic longtermism is the view that, because of this, expected impact on the far future ought to be the thing which guides our actions (Greaves and MacAskill, 2025) (1). Because there are potentially so many lives in the far future, the argument goes, some actions such as reducing existential risk can deliver very large expected benefits.

The reasoning behind this is utilitarian: aiming to maximise aggregate welfare. However, there is no reason in principle why concern for the far future should only be of interest to utilitarians. Mogensen (2022) (2), for example, discusses several ways in which effective altruism might be of interest to deontologists. In the same spirit, I will be looking at how concern for the far future might fit into one way of thinking about justice.  

The main challenges for longtermism which Greaves and MacAskill (2025) raise are:

  1. The challenge of finding cost-effective ways to make the far future go well.
  2. Cluelessness: not only not knowing, but not having any idea of what the future will be like, or how our actions today will influence it.
  3. Fanaticism: extremely low probabilities of very high value outcomes might always be preferred to higher probabilities of low value outcomes. This can lead to some unintuitive conclusions.

I will look at whether a justice-based approach, applying the difference principle across time, suffers from the same problems.

2. Rawls: Justice as Fairness

For Rawls, the way to come by just political principles is through a process of reflective equilibrium, beginning with the most general principles, and working through to specific details of how society ought to be governed. These should be such that all rational agents can in principle agree to them.

A key tool for discovering these is the Original Position (OP). This is a thought experiment whereby rational, self-interested agents reflect on what principles to adopt behind a “veil of ignorance”: that is, they do not know who they will be in the social order they are creating. Information unavailable to them includes their gender, race, social class, income, wealth, comprehensive doctrine and natural endowments. Agents do know that citizens have different comprehensive doctrines and life plans, and that everyone wants more primary goods. They know society is under conditions of “moderate scarcity”, whereby it is possible for everyone’s basic needs to be met but not possible for everyone to have everything that they want. They know general facts, common sense, and uncontroversial science.

From this, Rawls (2001, p42-3) (3) derives two principles, very roughly:

  1. “Each person has the same indefeasible claim to a fully adequate scheme of equal basic liberties, which scheme is compatible with the same scheme of liberties for all”
  2. “Social and economic inequalities are to satisfy two conditions: first, they are to be attached to offices and positions open to all under conditions of fair equality of opportunity; and second, they are to be to the greatest benefit of the least-advantaged members of society”

This second part of Principle 2 is known as the difference, or maximin, principle. Since agents in the OP do not know who they will turn out to be, they are motivated to make the welfare of the least well off as high as possible: for all they know, they might turn out to be one of those least well off people.

Rawls saw the future-looking potential of utilitarianism as a reason to be concerned, and aimed to produce a “savings principle” which, while still considering future generations, was in his view less extreme.

‘the utilitarian doctrine may direct us to demand heavy sacrifices of the poorer generations for the sake of greater advantages for later ones that are far better off. (. . .) Even if we cannot define a precise just savings principle, we should be able to avoid this sort of extremes.’ (Rawls, 1999, p 253) (4)

Savings in this case can mean saving any kind of resources for future generations. As we have seen, while the idea of saving for the future does enter into utilitarian longtermist theories it does not tend to be the primary concern.

In earlier versions of his theory Rawls did not think that agents in the OP should be ignorant of what time period they belonged to, instead making the rational agents “heads of families” with concern for their children’s welfare. In later versions of his theory, following English (5), Rawls endorsed a version of the OP where agents did not know what generation they were from. However, he still argued that the difference principle did not apply across different time periods.

Rawls believed that applying the difference principle across time periods would mean that we would have no reason to save anything for future generations, while intuitively we should. (Note this is the opposite of the fanaticism concern about longtermism as stated above.) This is because the difference principle states that inequalities should favour the least well off, and a priori, there’s no reason to think that future generations will be less well off than our own.

English (1977) suggests that we can simply require the best off members of current generations to save for the worst off members of future generations. However, I think there is an even simpler response. Regardless of what may be true a priori about the relative welfare of current and future generations, a posteriori we have strong reasons to think the welfare of future generations might be lower than our own, or indeed nil. We need only consider the catalogue of existential and catastrophic risks: misaligned AI, biorisk, nuclear war, climate change and more. If we do apply the difference principle across generations, step one is to find out who is least well off and whether they are among people alive today or people who will live in future time periods. Whoever it is, those are the people we should help first. I think this is quite intuitively plausible, certainly as much so as applying the difference principle in any other context.

3. A sketch of how the difference principle might be applied between generations using comparisons between possible worlds

What follows is a sketch of what it might look like to apply the difference principle between generations. To be clear, Rawls did not endorse doing this. However, to me it seems like a natural direction. There is no intrinsic difference between people now and future people, and it is hard to see how a separation in time could matter more in principle than a separation in space. In fact, the crucial distinction is between people whose condition we can influence, in the near or far future, and those whom we cannot influence at all, in the past and present moment (there being no instantaneous causation). This has everything to do with causation, and nothing to do with the intrinsic qualities that make someone a moral patient. If there is no relevant difference between near and far future people, there is no reason in principle why the difference principle should not apply, or why we cannot use the same thought experiment with the rational agents in the OP in order to come to a reflective equilibrium about justice to future people.

I suggest that it is useful to think about agents in the original position choosing between possible worlds. I think this is helpful as it allows us to make clear comparisons. I am not committing here to any particular metaphysics of possible worlds.

Agents in the OP, then, will be unaware both of who they will turn out to be within a given time period, and of which time period they will be born into. Following the difference principle, they are looking for the possible world with the best welfare for the least well off person in the worst time period. To simplify the thought experiment for thinking about longtermism, I have represented each period in time with a single letter and assigned it a single welfare value, following Van Long (2007, p294) (6). (He subsequently takes things in a very different direction, however.)

Here are some highly simplified examples:

In these example possible worlds,

+1 = good welfare, better than not existing

0 = neither better nor worse than not existing

-1 = poor welfare, worse than not existing

X = not existing at all

| Time period | A | B | C | D |
|---|---|---|---|---|
| Possible world p | -1 | +1 | +1 | +1 |
| Possible world q | 0 | 0 | 0 | +1 |

To a welfare-maximising consequentialist, possible world p is the best choice, since its total welfare works out at +2 (against +1 for q). But following the difference principle, possible world q is the best choice, since its worst time period (welfare 0) is better than the worst time period in possible world p (welfare -1). Under the justice as fairness approach, a rational agent in the Original Position would choose the world where things are best for them in the worst time period. If we have reason to believe that people in the far future could be worse off than anyone alive now, we have a reason to prioritise the far future. On the other hand, if we think the worst off are people alive now, we ought to help them.
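The two decision rules in play can be made explicit with a toy calculation (this is purely my own illustration, not anything from Rawls or the longtermist literature; the world names and welfare values are just those of the table above):

```python
# Toy comparison of two decision rules over the possible worlds above.
# Welfare values are per time period (A-D).
worlds = {
    "p": [-1, +1, +1, +1],
    "q": [0, 0, 0, +1],
}

def total_welfare(world):
    """Utilitarian rule: sum welfare across all time periods."""
    return sum(world)

def worst_period(world):
    """Difference (maximin) rule: welfare of the worst-off time period."""
    return min(world)

utilitarian_choice = max(worlds, key=lambda name: total_welfare(worlds[name]))
maximin_choice = max(worlds, key=lambda name: worst_period(worlds[name]))

print(utilitarian_choice)  # "p": total +2 beats q's +1
print(maximin_choice)      # "q": worst period 0 beats p's -1
```

The two rules come apart precisely because maximin ignores everything about a world except its worst time period.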

(I note that as stated this sounds similar to my basic understanding of prioritarianism. Due to time and word limits I am not exploring the similarities and differences here.)

4. The condition of “moderate scarcity” is load-bearing

A Rawlsian approach to longtermism has been brought up on the EA Forum before. In this post, D’Alessandro (2022) (7) suggests that a Rawlsian approach of justice towards future generations might be promising. Commenter Michael St Jules (8) noted that one worry is that applying Rawls’ difference principle across time would require us to endorse human extinction. I will expand on St Jules’s point, and then show how applying the difference principle across time can be made robust to his concern.

(It is interesting how his point is almost the opposite of the reason why Rawls himself did not endorse using the difference principle across time.)

The argument goes as follows: assuming that some states are worse than not existing at all, the rational agent in the Original Position will want to avoid these at all costs.

| Time period | A | B | C |
|---|---|---|---|
| Possible world r | 0 | +1 | X |
| Possible world s | +1 | -1 | +1 |

A Rawlsian, the objection goes, would choose possible world r over possible world s. In r, the rational agent in the OP knows that whichever time period they turn out to be from, they will not be in a state worse than nonexistence. In s, they might find themselves in time period B, where their welfare is so bad that it is worse than not existing. (In this particular example the aggregate welfare over time happens to be equal in the two worlds, so a utilitarian would be indifferent; lower any value in r slightly and the utilitarian would strictly prefer s.) But there is also a strong intuitive reason to prefer possible world s: it does not involve human extinction.
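The arithmetic behind the objection can be sketched as follows (a toy model of my own: `None` stands in for X, and nonexistent periods are simply excluded from the maximin comparison, since no one exists in them to be badly off):

```python
# Maximin with nonexistence: None marks a time period where no one exists
# (the "X" in the table above). Such periods are excluded from the minimum,
# since there is no one in them to be badly off.
def worst_existing_period(world):
    existing = [w for w in world if w is not None]
    return min(existing)

r = [0, +1, None]   # extinction in period C
s = [+1, -1, +1]    # period B is worse than nonexistence

print(worst_existing_period(r))  # 0
print(worst_existing_period(s))  # -1, so naive maximin prefers r
```

On this naive reading of maximin, extinction in r simply drops out of the comparison, which is exactly the worry.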

This objection generalises, even setting aside the temporal aspect. Suppose a rational agent in the Original Position considers only what is best for their own group and ignores the others. By relabelling the possible worlds table so that it is about groups of people separated in space (or by something else) rather than time, we can see that the result is just the same.

| Groups | A | B | C |
|---|---|---|---|
| Scenario r | 0 | +1 | X |
| Scenario s | +1 | -1 | +1 |

Again, we could argue agents in the OP should choose scenario r over scenario s, even though this means allowing (or even causing) group C to not exist, because that means the welfare of the least well off group is higher. Obviously, this is not what Rawlsians suggest.

The answer to this worry is as follows: the rational agents in the Original Position reason under certain conditions. One of these is the condition of moderate scarcity: there is enough for everyone to get their basic needs met, but not enough for everyone to have everything that they want.

In terms of our possible worlds model, this means that agents are choosing only between possible worlds where moderate scarcity holds. Neither possible world r nor possible world s is in this category, since not existing and suffering worse than nonexistence are both cases of not having your basic needs met. The answer, then, is simply that rational agents in the OP as stated are never making this choice. Or rather, if someone is making this kind of choice, we have stepped outside the situation where this Rawlsian theory is useful.
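One way to picture this reply is that moderate scarcity acts as a filter applied before the maximin comparison ever happens. The predicate below is a crude proxy of my own devising (it takes welfare 0 as the "basic needs met" floor, which the text does not specify, and adds a hypothetical world q that passes the condition):

```python
def under_moderate_scarcity(world):
    # Crude proxy: every period has people in it (no None/extinction) and
    # none falls below the basic-needs floor, here assumed to be 0.
    return all(w is not None and w >= 0 for w in world)

worlds = {"r": [0, +1, None], "s": [+1, -1, +1], "q": [0, 0, +1]}

# Filter first, then apply maximin to what remains.
eligible = {name: w for name, w in worlds.items() if under_moderate_scarcity(w)}
choice = max(eligible, key=lambda name: min(eligible[name]))

print(sorted(eligible))  # ['q']: r and s are ruled out before comparison
print(choice)            # 'q'
```

The point of the sketch is structural: worlds involving extinction or sub-nonexistence suffering never reach the maximin comparison at all, so the objection's choice between r and s is never posed to the agents in the OP.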

Whether moderate scarcity holds is an empirical question, contingent on the state of the actual world and nearby possible worlds. It is also load-bearing: if moderate scarcity does not hold in the actual world, then a Rawlsian approach does not apply. So it is important that moderate scarcity is defined in a way that makes sense when comparing possible worlds over time. I will explore ways to tighten up the definition in the next section.

5. What does “moderate scarcity” mean across time and across different possible worlds?

The definition of moderate scarcity as stated is modal: there is enough that everyone could get their basic needs met but not enough that everyone could get everything they want. So whether a possible world is under conditions of moderate scarcity depends on its relations to other possible worlds.

Moderate scarcity is harder to define when we are talking about entire timelines, not just the present time. It could mean an aggregate comparison where time is treated like space, a comparison across possible worlds at a specific time, or a comparison between possible futures. I will explain the pros and cons below.

Here is one possible definition of moderate scarcity, treating time exactly like space:

A possible world a is under conditions of moderate scarcity iff there is some other possible world b such that:

  1. Possible world b has the same total population over time and the same total resources over time as possible world a. (That is, the total number of people who ever have lived or ever will live is the same in both worlds, and the total amount of resources that ever have existed or ever will exist is the same.)
  2. In possible world b everyone’s basic needs are met.
  3. Not everyone in possible world b has everything that they want.

The problem with this is the asymmetry of time. Even rational agents in the OP can only influence the future of their world, not the entire timeline. It is possible for people in the present to save for the future but not for the past.  

Here is another possible definition. A possible world a is under conditions of moderate scarcity iff, at each time t, there is some possible world b where:

  1. World b has the same population as world a;
  2. World b has the same resources as world a;
  3. In world b everyone has enough resources to meet their basic needs;
  4. In world b it is not the case that everyone has everything that they want.

This relies on there being such a thing as the “same” time across different possible worlds. There is something intuitively right about that: if I say “this morning I could have gone swimming, but I went for a run instead” it is plausible that I mean there is some other possible world, where I have the same capabilities and my environment provides me with the same affordances, where I went swimming at the same time as I in fact went for a run in the actual world. However when we start talking about historical timelines this becomes murkier. The only way for possible worlds to get substantially different outcomes in terms of whose needs are met would be for them to have profoundly different political and economic histories. If the timelines of two worlds differ in terms of when major revolutions such as the agricultural revolution or the industrial revolution happened, picking out a point that counts as the “same time” in a non-arbitrary way may be more problematic. (To be sure, we can use our standard dating system, but this counts from the birth of Christ. What if in one possible world Christ was never born, or was born but did not found a religion? An economically and politically different world would probably also have significant religious and cultural differences.)

Another issue: it’s not totally clear what having a resource available means. Does it mean bread on the table, or wheat in a field? What about a piece of land that could be used for growing wheat, provided you first invent agriculture and cultivate something like modern grain crops? “Bread on the table” seems to rule out too much: a process of making a society more just could plausibly involve using resources in a more effective way to get more human usable outputs. But there is a lot of scope for disagreement about where to draw the line. So it is not clear what “having the same resources” means.

Here is another way we could define moderate scarcity, taking into account that our actions can only affect the future. A possible world a is under conditions of moderate scarcity at time t iff:

  1. There is some other physically possible world b, which is identical to a up until time t;
  2. In possible world b, at some point in the future of t, everyone has their needs met;
  3. At no point in the future of time t in possible world b does everyone have everything they want.

This has several advantages. One is that we can be agnostic about how possible world b gets to the future where everyone’s needs are met, avoiding the issue of what counts as the same resources. We still have the issue of possible worlds needing a “same time”, but this is less problematic when we have specified that they are identical up until that time: what makes the time the same in both worlds is that it is the point at which they start to diverge. Most importantly, this definition reflects the fact that we can only affect the future.

We might be concerned that if the definition of moderate scarcity requires specifying a time, the rational agents in the OP would all have to be from the same time period. This would ruin the aspect of the thought experiment where they do not know what time period they are from. However, “moderate scarcity” can be true at many different times, perhaps in different ways. The rational agents do not all need to be from the same time period; it just needs to be the case that, whichever time they are from, moderate scarcity is true of that time. (Another way of putting it: the role of possible world b does not have to be played by the same possible world for each rational agent.)

It is not clear whether the actual world is under conditions of moderate scarcity, especially considered over time according to the third definition above. If it isn’t, the conclusions of the rational agents in the OP seem not to apply.  

However, I think it’s plausible that we can find out whether or not moderate scarcity is true of our world. We already have far more information about this than a few centuries ago, with a better understanding of farming, manufacturing and economics. Some questions we might try to answer to gain more information on this topic could include: how can we meet the world’s energy needs without reliance on fossil fuels? Can we usefully get resources from objects in space? Can we make an economically viable fusion reactor? Are there any other habitable planets? None of these questions are especially novel. Finding out whether the world is in a state of moderate scarcity could take place alongside existing projects.

We can also consider this one of the many times where we are reasoning under uncertainty. We don’t know if the actual world is such that there is some way from how things are now, to a state where everyone’s basic needs are met. But we could plausibly try to put a number on how likely that is. Depending on that number, and considering that the payoff of achieving a just society is pretty high, we might decide that it is a chance we ought to take.  

6. Conclusion

So would taking a Rawlsian approach solve any of the problems raised by Greaves and MacAskill (2025)? The three areas which they note need further work are:

  1. Comparing cost effectiveness of various interventions that could affect the far future
  2. Cluelessness
  3. Fanaticism

Taking a justice approach would reorder our priorities in terms of interventions, but would not give us more information about which interventions are likely to be most useful. Our shopping list of which pieces of information are most crucial is likely to change: for example, we will be particularly concerned to identify people in the near or far future who may experience worse-than-nonexistence suffering, and to find interventions that will raise their welfare. But it is not clear that finding these cases or interventions will be any easier than finding the interventions the utilitarian is looking for. Similarly with cluelessness: taking a justice approach, we might be clueless not only about what the future will be like and how to intervene, but also about whether we are in the sort of possible world where we can reach a just society in which everyone’s basic needs are met.

We turn next to fanaticism. The objection is that strong longtermism is inappropriately fanatical: it requires us to bet on tiny probabilities of massive payoffs, rather than aiming for surer chances of something more modest. A Rawlsian approach does not seem to require this. That said, if we use similar decision theory to decide when to bet on an opportunity to improve the lot of the least well off, we will run into the same problems; if not, we would need some other method of determining which interventions to choose. This is not a problem for Rawls, whose work focuses on establishing just principles and institutions. It is a problem, though, for my vaguely Rawlsian suggestion that we could apply the difference principle across time.

So if this Rawlsian approach does not solve any of the known problems for longtermism, what is it good for? Firstly, a more thorough justification of longtermism using the principles of justice could usefully support coalition building with those who consider establishing just institutions and principles, not welfare maximisation, to be the best way to pursue the good. Note how the concept of “climate justice” has allowed concern for climate change to sit more comfortably alongside traditional concerns about justice; nor is this framing misleading: it really is unjust that the least well off will suffer the worst effects of climate change. A parallel case could be made for “longtermist justice”. While what I've offered is just a sketch, it suggests that “justice requires concern for the far future” is philosophically defensible.

Secondly, it could help explain the intuition many have that securing justice for people alive today is at least as important as heading off existential risks. Even if we do not agree with the reasoning, it is helpful to trace the path such reasoning might take and understand it as a form of rational disagreement. Some of this intuition is no doubt down to scope insensitivity, but some of it, perhaps, is due to a sense that, if people today are suffering very badly, justice demands improving their lot before securing the existence of future people.

7. References

  1. Greaves, Hilary and MacAskill, William. “The Case for Strong Longtermism”, in Barrett, Jacob, Greaves, Hilary and Thorstad, David (eds.), Essays on Longtermism: Present Action for the Distant Future, OUP, 2025. https://doi.org/10.1093/9780191979972.003.0003
  2. Mogensen, Andreas. Episode 137, 80,000 Hours podcast, 2022. https://80000hours.org/podcast/episodes/andreas-mogensen-deontology-and-effective-altruism/
  3. Rawls, John. Justice as Fairness, Harvard University Press, 2001.
  4. Rawls, John. Collected Papers, Harvard University Press, 1999.
  5. English, Jane. “Justice Between Generations”, Philosophical Studies 31, pp. 91-104, 1977.
  6. Van Long, Ngo. “Toward a Theory of a Just Savings Principle”, in Roemer, John and Suzumura, Kotaro (eds.), Intergenerational Equity and Sustainability, Palgrave Macmillan, 2007.
  7. D’Alessandro, William. “Jane English on Rawls and duties to future generations”, EA Forum, 2022. https://forum.effectivealtruism.org/posts/niyr9uvHTfSboRRne/jane-english-on-rawls-and-duties-to-future-generations
  8. St Jules, Michael. Comment on the above post, EA Forum, 2022. https://forum.effectivealtruism.org/posts/niyr9uvHTfSboRRne/jane-english-on-rawls-and-duties-to-future-generations?commentId=vGv7Kr5zBkqwr35co
