The word "longtermism" was first used publicly, here on the Effective Altruism Forum, in 2017. In the intervening eight years, a few books on longtermism have been written, many papers have been published, and countless forum posts, blog posts, tweets, and podcasts have discussed the topic.

Why haven’t we seen a promising longtermist intervention yet? For clarity, I’m looking for interventions that meet the following criteria:

  • Promising: the intervention seems like a good idea and has strong evidence and reasoning to support it
  • Novel: it’s a new idea, proposed after the term "longtermism" was coined in 2017, by someone associated with longtermism and in explicit connection to the term "longtermism"
  • Actionable: it’s something people could realistically do now or soon
  • Genuinely longtermist: it’s something that we wouldn’t want to do anyway based on neartermist concerns

In my view, the strongest arguments pertaining to the moral value of far future lives are arguments about existential risk. However, the philosopher Nick Bostrom’s first paper on existential risk, highlighting the moral value of the far future, was published in 2002, fifteen years before the term "longtermism" was coined. The philosopher Derek Parfit discussed the moral value of far future lives in the context of human extinction in his 1984 book Reasons and Persons. So the origin of these ideas goes back much further than 2017. Moreover, existential and global catastrophic risk has developed into a small field of study in its own right, and was a well-known topic in effective altruism before 2017. For this reason, I don’t see interventions related to existential risk (or global catastrophic risk) as novel longtermist interventions.

Many of the non-existential risk-related interventions I’ve heard about are things people have been doing in some form for a very long time. General appeals to long-term thinking, as wise as they might be, do not present a novel idea. The philosophers Will MacAskill and Toby Ord coined the term "longtermism" while working at Oxford University, which is believed to be at least 929 years old. I’ve always thought it was ironic, therefore, to present long-term thinking as novel. ("You think you just fell out of a coconut tree?")

At least some longtermists acknowledge this. In What We Owe the Future, MacAskill discusses the Haudenosaunee (or Iroquois) Seventh Generation philosophy, which enjoins leaders to consider the effects of their decisions on the subsequent seven generations. MacAskill also acknowledges the Long Now Foundation, a California non-profit founded in 1996 that encourages people to think about the next 10,000 years. While 10,000 years is not the usual timespan people think about, some form of long-term thinking is an ancient part of humanity.

Two proposed longtermist interventions are promoting economic growth and trying to make moral progress. These are not novel; people have been doing both for a long time. Whether these ideas are actionable is unclear, since so much effort is already allocated toward these goals. It’s also unclear whether they are genuinely longtermist. The benefits of economic growth and moral progress start paying off within one’s own lifetime, and seem to be sufficient motivation to pursue them to nearly the maximum extent.

Other projects like space exploration — besides not being a novel idea — might be promising and genuinely longtermist, but they are not actionable in the near term. The optimal strategy with regard to space exploration, if we’re thinking about the very long-term future, is probably procrastination. The cost of delaying a major increase in spending on space exploration for at least a few more decades, or even for the next century, is small in the grand scheme of things. There is Bostrom’s astronomical waste argument, sure — every moment we delay interstellar expansion means we can reach fewer stars in the fullness of time — but Bostrom himself concluded that doing well over the next century or so, and securing a path to a good future, matters more than rushing to expand into space as fast as possible. Right now, we have problems like global poverty, factory farming, pandemics, asteroids, and large volcanoes to worry about. If everything goes right, in a hundred years we’ll be in a much better position to invest much more in space travel.

Another proposal is patient philanthropy, the idea that longtermists should set up foundations that invest donations in a stock market index fund for a century or more, allowing the wealth to compound and accumulate. There are various arguments against patient philanthropy. Mathematically, it blows up within 500 years, because the wealth concentrated in the foundations grows to a politically unacceptable level, on the order of 40% to 100% of all of society’s wealth (the sketch below illustrates the basic arithmetic). Some people define longtermism as being concerned with outcomes 1,000 years in the future or more, so an intervention that can’t continue for even 500 years arguably shouldn’t count as longtermist. It’s also unclear whether this should count as an intervention in its own right. Patient philanthropy doesn’t say what the money should actually be used for; it just says the money should be put aside so it can grow, with the decision about what to use it for, and when, deferred indefinitely.
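To see why the blow-up happens, here is a minimal sketch of the compounding argument, with purely illustrative numbers: the starting share, the fund’s return, and the economy’s growth rate below are my assumptions, not figures from any actual analysis of patient philanthropy. Whenever the fund’s return exceeds the growth rate of total wealth, its share of that wealth compounds without bound.

```python
# Minimal sketch with illustrative numbers: a fund earning more than the economy grows
# eventually accounts for an impossible share of total wealth, i.e. the scheme breaks down.
fund = 0.0001          # assume the fund starts at 0.01% of society's wealth
economy = 1.0          # total wealth, normalized to 1
fund_return = 0.05     # assumed 5% real return on the invested donations
economy_growth = 0.02  # assumed 2% real growth of total wealth

for year in range(1, 501):
    fund *= 1 + fund_return
    economy *= 1 + economy_growth
    if year % 100 == 0:
        # a share above 100% is impossible; that is the point at which the scheme "blows up"
        print(f"year {year}: fund is {fund / economy:.1%} of total wealth")
```

With these particular assumptions the fund’s share passes 100% within about 400 years; different numbers shift the timing, but any durable return advantage produces the same blow-up.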

The rationale for patient philanthropy is that the money can be used to respond to future emergencies or to fund exceptionally good giving opportunities. However, it isn’t clear why patient philanthropy would be the best way to make that funding available. We saw in 2020 how many resources societies can quickly mobilize to respond to emergencies. Normal foundations that regularly disburse funds are often already on the lookout for good opportunities, and we should expect that, barring catastrophe, foundations like these will exist in the future. Patient philanthropy therefore seems dubious on the "promising" criterion.

This is the pattern I keep seeing. Every proposed longtermist intervention I’ve been able to find so far fails to meet at least one of the four criteria listed above (and often more than one). This wouldn’t be so bad if not for the way longtermism has been presented and promoted. We have been told that longtermism is a bracing new idea of great moral importance, in light of which the effective altruist movement, philanthropy, and possibly much else besides should change course. I think it’s a wonderful thing to generate creative or provocative ideas, but declaring that an idea is morally and practically important should not get ahead of producing some solid advice, motivated by that idea, that is novel and actionable.

Occasionally, I’ll see someone in the wider world mention longtermism as a radical, unsettling idea. It typically seems like they’ve confused longtermism with another idea, like transhumanism. (In fairness, I’ve also seen people within the effective altruism community conflate these ideas.) As I see it, the problem with longtermism is not that it’s radical and unsettling, but that it’s boring, disappointing, overly philosophical, and insufficiently practical. If longtermism is such a radical, important idea, why haven’t we seen a promising longtermist intervention yet?

Comments

Why on earth would you set 2017 as a cutoff? Language changes; there is nothing wrong with a word being coined for a concept and then applied to uses of the concept that predate the word. That is usually how it goes. So I think your exclusion of existential risk is just wrong. The various interventions for existential risks, of which there are many, are the answer to your question.

Yes. One of the Four Focus Areas of Effective Altruism (2013) was "The Long-Term Future," and "Far future-focused EAs" appear on the map of Bay Area memespace (2013). This social and ideological cluster existed long before this exact name was coined to refer to it.

Something can be a promising X intervention even if it's something that had been thought of before in connection with another purpose.

For example, GLP-1 agonists are promising obesity interventions. When we discovered they were very effective at weight loss, this was an important intellectual contribution to the world. It gave fat people a new reason to take the drugs. This is true even though GLP-1 agonists were already an approved medical intervention for a different purpose (diabetes).

Even beyond this, I think Nick's Astronomical Waste argument is Longtermist. So in that sense it is a novel Longtermist idea, even if it predates the term 'Longtermism'.

I think my basic reaction here is that longtermism is importantly correct about the central goal of EA if there are longtermist interventions that are actionable, promising, and genuinely longtermist in the weak sense of "better than any other causes because of long-term effects," even if there are zero examples of LT interventions that meet the "novelty" criterion or that lack significant near-term benefits.

Firstly, I'd distinguish here between longtermism as a research program and longtermism as a position about what causes should be prioritized right now by people doing direct work. At most, criticisms about novelty seem relevant to evaluating the research program and deciding whether to fund more research into longtermism itself. I feel like they should be mostly irrelevant to people actually doing cause prioritization over direct interventions.

Why? I don't see why longtermism wouldn't count as an important insight for cause prioritization if it turned out that thinking in longtermist terms didn't turn up any new interventions that weren't already known to be good, but did change the rankings of interventions, so that I changed my mind about which interventions were best. That seems to be roughly what longtermists themselves think is the situation with regard to longtermism. It's not that there is zero reason to do X-risk-reduction-type interventions even if LT is false, since they do benefit current people. But the case for those interventions being many times better than other things you can do for current people and animals rests on, or at least is massively strengthened by, Parfit-style arguments about how there could be many happy future people. So the practical point of longtermism isn't necessarily to produce novel interventions, but also to help us prioritize better among the interventions we already knew about. Of course, the idea that Parfit-style arguments are correct in theory is older than using them to prioritize between interventions, but so what? Why does that affect whether or not it is a good idea to use them to prioritize between interventions now? The most relevant question for what EA should fund isn't "is longtermist philosophy post-2017 simultaneously impressively original and of practical import" but "should we prioritize X-risk because of Parfit-style arguments about the number of happy people there could be in the future?" If the answer to the latter question is "yes," we've agreed EAs should do what longtermists want in terms of direct work on causes, which is at least as important as how impressed we should or shouldn't be with the longtermists as researchers.* At most the latter is relevant to "should we fund more research into longtermism itself," which is important, but not as central as what first-order interventions we should fund.
To put the point slightly differently, suppose I think the following: 

1) Based on Bostrom- and Parfit-style arguments (and don't forget John Broome's case that making happy people is good, which I think was at least as influential on Will and Toby), the highest-value thing to do is some form of X-risk reduction, say biorisk reduction for concreteness.

2) If it weren't for the fact that there could exist vast numbers of happy people in the far future, the marginal benefits of global development work to current and near-future people would be higher than those of biorisk reduction, and EA should fund global development instead, although biorisk reduction would still have significant near-term benefits, and society as a whole should have more than zero people working on it.

Well, then I am pretty clearly a longtermist, and it has made a difference to what I prioritize. If I am correct about 1), then it has made a good difference to what I prioritize, and if I am wrong about it, it might not have done. But how novel 1) would have been if said in 2018, or what other insights LT produced as a research program, is just completely irrelevant to whether I am right to change cause prioritization based on 1) and 2).

None of this is to say 1), or its equivalent about some other purported X-risk, is true. But I don't think you've said anything here that should bother someone who thinks it is.

Quick question - Is AI safety work considered a "long-termist" intervention? I know it has both short-term and long-term potential benefits, but what do people working on it generally see it as?

I suppose if you are generally pretty doomer, it wouldn't meet your 4th criterion: "Genuinely longtermist: it’s something that we wouldn’t want to do anyway based on neartermist concerns."

Also, one would hope that it wouldn't be too long before @Forethought has cranked out one or two, as I think finding these is a big part of why they exist...

I don't think longtermism necessarily needs new priorities to be valuable if it offers a better perspective on existing ones (although I don't think it does this well either). 

Understanding what the far future might need is very difficult. If you'd asked someone 1000 years ago what they should focus on to benefit us, you'd get answers largely irrelevant to our needs today.[1] If you asked someone a little over 100 years ago, their ideas might seem more intelligible, and one guy was even perceptive enough to imagine nuclear weapons, although his optimism about what became known as mutually assured destruction setting the world free looks very wrong now. And people 100 years ago who did boring things focused on the current world did more for us than people dreaming of post-work utopias.

To that extent, the focus on x-risk seems quite reasonable: still existing is something we can actually reasonably believe will be valued by humans in a million years' time.[2] Of course, there are also over 8 billion reasons alive today to try to avoid human extinction (and most non-longtermists consider at least as far as their children), but longtermism makes arguments for it being more important than we think. This logically leads to a willingness to allocate more money to x-risk causes, and to consider more unconventional and highly unlikely approaches to x-risk. This is a consideration, but in practice I'm not sure that it leads to better outcomes: some of the approaches to x-risk seeking funding make directionally different assumptions about whether more or less AGI is crucial to survival (they can't both be right), and the 'very long shot' proposals that only start to make sense if we introduce fantastically large numbers of humans to the benefit side of the equation look suspiciously like Pascal's muggings.[3]
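To make the "fantastically large numbers on the benefit side" point concrete, here is a toy expected-value comparison. Every figure in it is a made-up illustration (the number of future lives, the long-shot probability, and the near-term benchmark are all my assumptions, not estimates from this thread or any published source).

```python
# Toy expected-value comparison; all numbers are purely illustrative assumptions.
future_lives = 1e16       # assumed count of happy future lives at stake
p_long_shot = 1e-9        # assumed chance a "very long shot" project averts extinction
ev_long_shot = future_lives * p_long_shot   # expected future lives saved: 1e7

lives_near_term = 1_000   # assumed lives saved by a conventional, well-evidenced intervention

print(f"long shot EV: {ev_long_shot:,.0f} lives vs. near-term: {lives_near_term:,} lives")
```

Once an astronomical term sits on the benefit side, almost any nonzero probability wins the comparison, which is exactly the Pascal's-mugging structure being gestured at here.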

Plus, people making longtermist arguments typically attach fairly high probabilities, by their own estimations, to stuff like AGI that they're working on, which if true would make their work entirely justifiable even focusing only on humans living today.

 

(A moot point, but I'd have also thought that although the word 'longtermist' wasn't coined until much later, Bostrom and, to a lesser extent, Parfit fit the description of longtermist philosophy. Of course, they also weren't the first people to write about x-risk.)

  1. ^

    I suspect the main answers would be to do with religious prophecies or strengthening their no-longer-extant empire/state

  2. ^

    Notwithstanding fringe possibilities, like the possibility that humans in a million years might be better off not existing, or, for impartial total utilitarians, that humanity might be displacing something capable of experiencing much higher aggregate welfare.

  3. ^

    Not just superficially, in that someone is asking us to suspend scepticism by invoking a huge reward, but also in that the huge rewards themselves make sense only if you believe very specific claims about x-risk over the long-term future being highly concentrated in the present (very large numbers of future humans in expectation, or x-risk being nontrivial for any extended period of time, might each seem a superficially uncontroversial possibility, but they're actually strongly in conflict with each other).

Great question

Based on conversations I've had, I believe the focus in EA on longtermism has been off-putting for a lot of people and has probably cost a lot of support and donations for other EA causes.

Was it all a terrible waste?

I agree that trying to make moral progress has near-term benefits but, particularly in some areas like animal welfare, progress can feel dishearteningly slow. The accumulated benefit from 1000 years of tiny steps forward in terms of moral progress could be pretty huge, but perhaps it won't ever feel massively significant within any one person's lifetime. That makes it feel longtermist, though I accept that it feels quite vague to be considered an actionable longtermist intervention.
