In this document, I compile a wide range of information about longtermism including its:
- Institutions
- Individuals
- Writings
- Core argument
- Related concepts
- Cause areas
- Common counterarguments
- History
In an appendix, I also include:
- Summaries of some major works
Throughout, I define longtermism as the view that we have a moral obligation to work to shape the far future in a positive way.
I do not include much information on the intersection of longtermism and AI, and I have not included anything on suffering risks, an adjacent field that addresses similar questions. I also have not read everything there is to read about longtermism, so this document is necessarily incomplete.
Institutions
Longtermism has had three major institutions:
- Future of Humanity Institute (FHI)
- Oxford, England
- 2005-2024
- Global Priorities Institute (GPI)
- Oxford, England
- 2018-2025
- Forethought
- Oxford, England
- 2018-2024, 2025-present
- Previously named “Forethought Foundation for Global Priorities Research”
FHI and GPI were both institutions at the University of Oxford. FHI was founded by Nick Bostrom and did extensive work to define and then develop the field of existential risk. GPI was founded by effective altruists to lend credibility to global priorities research and to produce more high-quality work in that area. GPI led the way on work that is particularly longtermist in flavor.
The Forethought Foundation for Global Priorities Research was founded by effective altruists with the mission of doing global priorities research with a focus on longtermism. It temporarily shut down between 2024 and 2025. When it re-opened, it renamed itself Forethought and changed its mission to focus on humanity’s transition to a post-AGI world. Many of its staff previously worked at GPI or FHI, and it is the only remaining institution that releases work related to longtermism. It is unclear whether the Forethought Foundation and Forethought are technically the same organization or whether the latter is just a rough continuation of the former.
Individuals
Longtermism has roughly seven major figures, each of whom has made multiple significant contributions to the field:
- William MacAskill (GPI, Forethought)
- Toby Ord (FHI)
- Nick Bostrom (FHI)
- Nick Beckstead (GPI)
- Fin Moorhouse (FHI, Forethought)
- Joe Carlsmith (FHI, Forethought)
- Philip Trammell (GPI)
I’m not entirely sure where each of these individuals worked. If someone is not included in this list, that is not meant to imply they should not be on it.
Writings
In this section, I include major works, important works, books, resources, and research agendas related to longtermism. Which writings are most important is quite subjective, so I don’t mean to indicate that a work is unimportant if it is not included.
Major Works
“Astronomical Waste” by Nick Bostrom (2003)
“Existential Risk Prevention as Global Priority” by Nick Bostrom (2013)
“On The Overwhelming Importance of Shaping The Far Future” by Nick Beckstead (2013)
The Precipice by Toby Ord (2020)
“The Most Important Century” by Holden Karnofsky (2021)
What We Owe The Future by William MacAskill (2022)
“The Case For Strong Longtermism” by Hilary Greaves and William MacAskill (2019, 2021, 2025)
“Better Futures” by William MacAskill (2025)
Other Important Works
“How many lives does the future hold?” by Toby Newberry (2021)
“Eternity in Six Hours: Intergalactic Spreading of Intelligent Life and Sharpening the Fermi Paradox” by Stuart Armstrong and Anders Sandberg (2013)
“AGI and Lock-In” by Lukas Finnveden, Jess Riedel, and Carl Shulman (2022, 2025)
“Viatopia” by William MacAskill (2026)
Books
The Precipice by Toby Ord (2020)
The Long View: Essays on Policy, Philanthropy, and the Long-term Future by Natalie Cargill and Tyler M. John (eds.) (2021)
What We Owe The Future by William MacAskill (2022)
Essays on Longtermism: Present Action for the Distant Future by Hilary Greaves, Jacob Barrett, and David Thorstad (eds.) (2025)
Overviews (Taken from William MacAskill’s website):
The website Longtermism
“Longtermism” by Wikipedia
“Longtermism” by Forethought
“Longtermism: a call to protect future generations” by 80,000 Hours
Research agendas:
- “How to Make the Future Better” by William MacAskill
- “Longtermism” by Forethought
- “Philosophy Research Agenda” by Global Priorities Institute
- Effective Thesis
- EA Forum Research Agendas
- “A central directory for open research questions” by Michael A
- “Which questions can’t we punt?” by Forethought
History of Longtermism
This section is a brief history of longtermism based on its academic writings. As such, it is necessarily going to miss some relevant details. A much better history could be given by someone who has been working in the field for a long time, but I have not seen any such histories written.
Longtermism, as far as I can tell, can be traced back to the development of the field of nuclear ethics after the invention of the nuclear bomb in the 1940s. Writers in nuclear ethics, concerned about the power of these weapons, argued that one serious concern was that they could pose an existential risk to humanity. Then, in 1996, John Leslie wrote the book The End of the World, in which he expanded thinking on existential risk beyond nuclear weapons alone. This book went on to inspire Nick Bostrom’s concern about existential risk, and it forms the basis of the field as we know it today.
In my view, longtermism has had roughly three waves, which partially overlap.
The first wave consisted of early work on existential risk and on further defining it as a field. This work was done mostly by Nick Bostrom and his organization FHI, and it ran roughly from 2003 to 2013. Two representative works are Bostrom’s “Astronomical Waste” and “Existential Risk Prevention as Global Priority.”
The second wave featured more developed thinking around existential risk as well as a greater focus on the ethics of caring about future generations. This work was done by both FHI and GPI, roughly from 2013 to 2024. Two representative works are “On The Overwhelming Importance of Shaping The Far Future” by Nick Beckstead (2013) and The Precipice by Toby Ord (2020).
The third (and most recent) wave is defined by an increased focus on how to positively shape the far future beyond existential risk reduction, with a particularly strong emphasis on AGI. It can be thought of as starting in 2022 with the release of Forethought’s “AGI and Lock-In.” Most of this work has been done by Forethought, with contributions from other organizations. Two representative works are What We Owe The Future by William MacAskill (2022) and “Better Futures” by William MacAskill (2025).
Core Argument
The core argument for longtermism is as follows:
- Future people matter just as much as we do.
- Humanity’s future could be vast in duration.
- We can help these people.
This philosophy usually assumes a Bayesian view of the world and makes these arguments on the basis of expected value.
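To illustrate the expected-value reasoning, here is a minimal worked example. The probability and population figures below are purely hypothetical placeholders, not estimates from the longtermist literature:

```latex
% Illustrative expected-value comparison (all numbers hypothetical).
% Suppose an intervention has probability p of averting extinction,
% and that survival yields N future lives.
\[
  p = 10^{-8}, \qquad N = 10^{16}
\]
\[
  \mathbb{E}[\text{lives saved}] = p \cdot N = 10^{-8} \cdot 10^{16} = 10^{8}
\]
% Even a one-in-a-hundred-million chance of success carries an expected
% value of 100 million lives, which is how longtermists argue that
% far-future interventions can dominate in expectation.
```

The force of the argument depends entirely on N being astronomically large, which is why estimates of the future’s size (see “How many lives does the future hold?” above) matter so much to the view.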
Related Concepts
The following is a list of concepts related to longtermism. It is not meant to be complete and is mostly based on ideas from What We Owe The Future:
- Related philosophies
- Longtermism
- The view that working to improve the far future is a key moral priority of our time
- Strong longtermism
- The view that working to improve the far future is the key moral priority of our time
- Neartermism
- Often left undefined. The view that we should work to make positive changes in the near term rather than the long term
- Related research fields
- Effective altruism
- A practical movement and research field dedicated to finding the best ways to do good and to putting them into action. This movement takes seriously the use of science and reason to figure out what’s good, and it often takes an impartial and welfare-focused perspective.
- Global priorities research
- Research into how humanity should allocate its limited resources in order to do the most good.
- Similar to effective altruism, but it is often focused on comparing cause areas rather than comparing interventions within them.
- Macrostrategy research
- Global priorities research focused on the long-term future
- Views on the current state of the world
- Time of perils
- The view that humanity is at a time of heightened existential risk due to technological development
- The most important century
- The view that this century will be more impactful than any other century in humanity’s history
- Now as an important time in humanity’s history
- The view that this century will be at least somewhat important in humanity’s history
- Humanity as a teenager
- A metaphor that imagines humanity as a teenager, emphasizing that humanity is early in its history and taking on an excessive amount of existential risk
- Views on how the future will go
- Doomsday argument
- The argument that, if you treat yourself as a randomly selected person in the history of humanity, it would be very surprising to find yourself alive at the very beginning, and that, as such, we should believe humanity’s duration will be quite short (a worked example follows this list)
- Dichotomy
- The view that humanity has two possible future states: either the future will have minimal value, or it will have an extremely high value within a narrow range
- This view is used to support maxipok
- No Easy Eutopia
- The view that, if people don’t actively seek to make a near best future, the far future will have far less value than it otherwise could have
- Views on how we should shape the future
- Maxipok
- The view, short for “maximize the probability of an OK outcome,” that existential risk reduction should be your only priority if you are a “temporally impartial [altruist]”
- Argument that we should do more technological development to prevent extinction
- This view argues that humanity is at a heightened state of existential risk due to our level of technological development and that we must increase our technological development in order to escape it
- Better Futures
- The view that we should consider actions that cause trajectory changes to be of similar priority to those that increase the chances of humanity’s survival
- Ideas about states the future could permanently enter into
- Extinction
- A state in which all humans are dead
- Stagnation
- A state in which humanity has minimal technological progress, possibly resulting from diminishing marginal returns from scientific research
- Irreversible collapse
- A state in which humanity has irreversibly returned to a hunter-gatherer way of life
- Eutopia
- A near-best future
- Dystopia
- A state in which the world has near-zero value or even negative value, possibly as the result of an autocratic regime
- Ideas about states the future could temporarily enter into
- Viatopia
- A state of humanity in which it is able to navigate itself towards a best possible future
- The great reflection
- An idealized state of humanity in which there is low existential risk, a diversity of moral views, and mechanisms in place to allow the correct moral views to win out
- This is a form of viatopia
- Moments of plasticity
- States in human history in which the future can take many forms as a result of the knowledge and behavior of different actors
- Longtermists often believe that many such moments have occurred in the past and that we are currently in such a moment
- Ideas about how the future could be permanently shaped
- Existential catastrophe
- An event in which a significant amount of humanity’s potential is permanently destroyed
- This could be an event in which humanity goes extinct, permanently collapses and is unable to recover, or becomes set to enter a dystopic state from which it will be unable to escape
- Lock-in
- When humanity’s possible futures become significantly reduced as a result of some action or environmental change
- Value lock-in
- When a small set of values become set to determine humanity’s future
- This is often strongly associated with the use of AGI, although it does not necessarily have to involve it
- Persistent path dependence
- When an action predictably influences the far future in a way that is extremely persistent
- Other related concepts
- Civilizational virtues
- The view that we can think of humanity as a single entity needing to develop virtuous characteristics such as patience and foresight
- Ideas about how to shape the future
- Reducing existential risk
- Working to reduce the risk that humanity experiences an existential catastrophe
- Ensuring survival
- Working to ensure that humanity does not experience extinction or the permanent collapse of civilization
- Causing trajectory changes
- Most broadly, causing changes to the shape of humanity’s future
- More specifically, causing changes to the average value of humanity’s long-term future
- Keeping our options open
- Reducing risks of lock-in
- Working towards viatopia
- Trying to ensure that humanity will enter a state of viatopia
- Steering our trajectory
- Ensuring that, if humanity does experience lock-in, the state it is locked into is better than its alternatives
- Speeding up progress
- Working to increase technological progress so that humanity reaches an ideal state faster
- This is usually considered a far less effective intervention than working to reduce existential risk or creating trajectory changes
- Learning more
- This is often suggested as a good intervention, since we might learn better ways to shape the future
- Building the longtermism movement
- This is often encouraged, since more people could have more impact on the far future
- Spreading positive values
- This idea is suggested in What We Owe The Future as a way to create a trajectory change
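To make the doomsday argument above concrete, as promised, here is the standard Bayesian calculation in a simplified form. The two hypotheses, the 50/50 prior, and the birth-rank figure are illustrative choices, not numbers taken from any particular author:

```latex
% Simplified doomsday argument (hypotheses and prior are illustrative).
% H_s: humanity totals N_s = 2 * 10^11 people ever.
% H_l: humanity totals N_l = 2 * 10^14 people ever.
% Treat yourself as a random draw from everyone who will ever live, with
% birth rank r of about 10^11 (roughly the number of humans born so far).
\[
  P(r \mid H_s) = \frac{1}{N_s}, \qquad P(r \mid H_l) = \frac{1}{N_l}
\]
% Starting from 50/50 prior odds, Bayes' rule gives the posterior odds:
\[
  \frac{P(H_s \mid r)}{P(H_l \mid r)} = \frac{1/N_s}{1/N_l}
  = \frac{N_l}{N_s} = 10^{3}
\]
% Finding yourself "early" shifts credence 1000:1 toward the hypothesis
% that far fewer people will ever live.
```

Note that this conclusion is in tension with the longtermist expectation of a vast future, which is why the argument appears here as a distinct view on how the future will go.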
Cause Areas
Longtermists have generated many cause areas for positively influencing the far future. This list includes many of these ideas, drawn primarily from William MacAskill and Toby Ord:
- Ensuring humanity’s survival
- Reducing extinction risks
- Biorisks
- Engineered pandemics
- “Natural” pandemics
- Mirror bacteria
- Great power war
- World peace
- Nuclear weapons
- AI
- AI safety
- Preventing gradual disempowerment
- Climate change
- Environmental damage
- Dystopic scenarios
- Reducing the risk of irreversible collapse
- Climate change
- Fossil fuel depletion
- Reducing the risk of stagnation
- Increasing population growth
- Promoting technological progress
- Trajectory changes
- “Keeping our options open”
- “Preventing post-AGI autocracy"
- “Space governance”
- Explicitly temporary commitments
- Working towards viatopia/“the long reflection” (Not mentioned in linked article)
- “Steering our trajectory”
- AI governance
- “AI value-alignment”
- Rights of digital beings
- “Space governance”
- “Collective decision-making”
- “Preventing sub-extinction catastrophes”
- Spreading positive values (Not mentioned in linked article)
- Reducing suffering risks (Not mentioned in linked article)
- Other/Both
- “Deliberative AI”
- “Empower responsible actors”
- Other
- Further research into longtermism
- Movement building for longtermism
For more examples of cause areas, see the EA Forum Wiki. Also, check out the post I wrote categorizing them in a different way.
Common Counterarguments
Some common counterarguments include:
- We have less of a moral obligation to future humans than humans who are alive today.
- We cannot meaningfully predict how the future will go.
- Longtermism relies on small probabilities of extremely large outcomes.
- We should expect our actions’ effects to “wash out” over sufficiently long time periods.
Appendix
Summaries of Some Major Works
Major Works
“Astronomical Waste” by Nick Bostrom (2003)
- Bostrom argues that, in the future, we will be able to sustain vast numbers of biological or digital beings across the Universe. As such, every second of delayed progress amounts to an enormous loss of value, because we will sustain these beings for a shorter duration than we otherwise would have. He then argues that existential risk will, in expectation, cause vastly more lost value than delayed progress.
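A rough sketch of the comparison Bostrom is making, using placeholder symbols and magnitudes rather than the paper’s own estimates:

```latex
% Sketch of the "Astronomical Waste" comparison (placeholder values,
% not Bostrom's own estimates).
% Let L be the number of lives the accessible universe could support per
% century, and T the number of centuries remaining to humanity.
\[
  \text{cost of one century of delay} = L, \qquad
  \text{cost of existential catastrophe} = L \cdot T
\]
% Reducing existential risk by probability p therefore has expected value
\[
  \mathbb{E}[\text{benefit}] = p \cdot L \cdot T,
  \quad \text{which exceeds } L \text{ whenever } p \cdot T > 1.
\]
% E.g., with T = 10^7 centuries, even p = 10^{-6} is worth as much as
% ten full centuries of progress, whatever the value of L.
```

This is why, on the summary above, reducing existential risk dominates speeding up progress: the catastrophe term scales with the entire remaining future, while delay costs only the time delayed.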
What We Owe The Future by William MacAskill (2022)
- MacAskill argues the basic case for longtermism, namely that working to positively influence the far future should be a key moral priority of our time. He argues this on the basis that we should expect there to be vast numbers of future beings given humanity’s expected duration, that these beings matter morally, and that we can predictably influence their lives (or whether they come to exist) in positive ways. He argues for focusing on reducing the risk of permanent civilizational collapse and for trying to increase the average value of humanity’s future across its entire lifespan.
- He also suggests a series of cause areas:
- He thinks values could become permanently locked in as a result of AGI, global conquest, or one culture outgrowing other cultures (such as through immigration, an ability to succeed in novel environments such as space, or population growth).
- He thinks extinction poses a serious threat to humanity, whether from engineered pathogens (via a lab leak or malicious actors) or from a great power war (via bad luck or shifts in power among global powers).
- He also thinks irreversible collapse could be a serious issue if we cause extreme global warming or deplete our fossil fuels.
- He also thinks stagnation (a lack of technological progress resulting from diminishing returns to scientific effort) could be seriously problematic, since it could increase the risk of harmful value change or, if we experience global catastrophes, of collapse or extinction.
- Another key idea of the book is that we should work towards a period he calls “the long reflection,” in which humanity has a low risk of extinction and is in a state where it can guide itself towards a near-best possible future.
“The Case For Strong Longtermism” by Hilary Greaves and William MacAskill (2019, 2021, 2025)
- Greaves and MacAskill argue the case for strong longtermism, namely the idea that working to positively shape the far future should be the key moral priority of our time, because the expected value of longtermist interventions far outweighs the expected value of neartermist interventions. This piece notably also addresses some of the more serious criticisms of longtermism.
“Better Futures” by William MacAskill (2025)
- MacAskill (with additional co-authors on some essays) argues that working to positively shape the far future could be of roughly equivalent value to working to reduce threats to humanity’s survival. They argue that a near-best future would likely have to be intentionally aimed for, and that humanity may fail to do so. In addition, they argue that there are actions available to us today that we should expect to have net-positive effects on the far future.
Other Important Works
“How many lives does the future hold?” by Toby Newberry (2021)
- Newberry attempts to estimate how many future lives we should expect to be lived over the course of humanity across different domains such as Earth, the Solar System, the Milky Way, and the Affectable Universe, concluding that humanity’s future is vast in expectation.
“AGI and Lock-In” by Lukas Finnveden, Jess Riedel, and Carl Shulman (2022, 2025)
- The writers argue that, if AGI were developed, it would be feasible for humans to permanently lock-in their values at a global level by training AI agents to hold their values and then making these agents safe from almost all forms of disruption.
