
This is a crosspost from my blog.

Currently, it feels like longtermism is in an awkward state: it has generated a wide range of ideas for how to positively shape the far future outside of reducing existential risk, but it does not have a clear conception of exactly how these actions are good or how they compare to each other. In this post, I will attempt to reduce this issue by categorizing various longtermist cause areas into broader groups so that they can be more easily compared.

Specifically, I will categorize cause areas into the following groups:

  • Ensuring survival
  • Increasing flourishing
    • Locking-in one’s values
    • Ensuring the future is aligned with the correct values
      • Working towards viatopia
      • Promoting futures with more moral reflection
      • Improving the ability for people with different views to get their desired futures
    • Ensuring future people are able to create a good future
      • Keeping humanity’s options open
      • Improving global stability
      • Improving future humans’ decision-making
      • Empowering responsible actors
    • Speeding up progress
  • Learning more

For each group, I will define what it is, explain why someone may favor work on it and list some potential cause areas within it.

As an aside, most of these ideas are taken from either William MacAskill’s What We Owe The Future or his essay series “Better Futures.” Anything quoted is from the fifth essay in that series, titled “How to Make The Future Better.”

Ensuring Survival

This means working to prevent futures of near-zero value from being created this century.

If you believe that the future is dichotomous, meaning that humanity will either get a near-best future or a near-zero value future by default, you may support work on this category. This view is supported here and argued against here.

If you believe that we are unable to influence the far future outside of ensuring survival, you may also support this category. This view is argued against in these two essays.

Potential cause areas include:

  • Reducing extinction risks
    • Biorisks
      • Engineered pandemics
      • “Natural” pandemics
      • Mirror bacteria
    • Great power war
      • World peace
      • Nuclear weapons
    • AI takeover
    • Other risks from AI
    • Climate change
    • Environmental damage
  • Reducing the risk of irreversible collapse
    • Global catastrophic risks
    • Climate change
    • Fossil fuel depletion
  • Reducing the risk of stagnation
    • Increasing population growth
    • Promoting technological progress
  • Reducing the risk of stable totalitarianism
    • Promoting more democratic futures

Increasing Flourishing

This means working to improve the value of the far future, given that humanity does not become trapped in a future with near-zero value.

You may support work on this area if you believe that it has a higher impact than work on ensuring survival, or if you think that we can influence the far future but are unsure whether humanity is worth saving.

This view is supported by this essay series.

Potential cause areas include:

  • Locking-in one’s values
  • Ensuring the future is aligned with the correct values
  • Ensuring future people can create a good future
  • Speeding up progress

Locking-in One’s Values

Locking in one’s values means increasing the likelihood that the far future is determined by one’s values. This can mean the entire set of one’s values or just a single value.

If you believe that your moral values are correct, that humanity will prematurely lock-in the wrong values, or that humanity will never determine the correct values, you may support work in this category.

Potential cause areas include:

  • Promoting positive values
  • Promoting specific values
    • Promoting consideration of digital beings
    • Promoting consideration of animals
    • Promoting liberal-democratic values
    • Promoting longtermism
  • Reducing suffering risks
  • Improving futures where AI takeover occurs
  • Governance on the use of space resources

Ensuring the Future Is Aligned With the Correct Values

This means working to ensure that the future is aligned with the correct moral values by promoting mechanisms that you believe will lead those in power to make decisions based on the correct moral values.

One such mechanism is ensuring that future people come to the correct moral values. Longtermists generally assume that this can occur by enabling people to engage in moral reflection under ideal conditions. That said, the mechanism could also be something other than reflection, such as exposure to a particular religious text or the use of a mind-altering substance.

You may support work on this area if you believe that your moral values may be incorrect but that you know the right mechanism by which people can come to correct moral views. You may also support work in this area if you believe it is more tractable than locking in your own values.

Potential cause areas include:

  • Working towards viatopia
  • Promoting futures with more moral reflection
  • Improving the ability for people with different views to get their desired futures

Working Towards Viatopia

This means working towards viatopia, an intermediate state of society in which humanity is able to guide itself towards a near-best future.

Some potential cause areas include:

  • Keeping humanity’s options open
  • Promoting futures with more reflection
  • Improving global stability

Promoting Futures With More Moral Reflection

This means ensuring that humanity has done more reflection before it makes irreversible decisions.

You may support this work if you believe that people can come to correct moral views via reflection.

Some potential cause areas include:

  • Keeping humanity’s options open
  • Improving global stability
  • Promoting more democratic futures
  • Promoting the use of AI for moral reflection

Improving the Ability for People With Different Views to Get Their Desired Futures

This means ensuring that people with different views are able to get a future that is maximally good according to all of their views.

Possible cause areas include:

  • Increasing the likelihood of moral trade
  • Improving group decision-making procedures
  • Creating and promoting the widespread use of AI that helps humans to engage in decision-making

Ensuring Future People Are Able to Create a Good Future

This means making sure that future humans will possess the means to create a near-best future.

You may support work in this area if you believe that humanity may accidentally fail to create a near-best future by failing to properly plan ahead.

Potential cause areas include:

  • Keeping humanity’s options open
  • Improving global stability
  • Improving future humans’ decision-making
  • Empowering responsible actors

Keeping Humanity’s Options Open

This means ensuring that humanity does not make irreversible choices.

You may support work on this area if you believe that humanity will make better decisions if given more time.

Some potential cause areas include:

  • Promoting more democratic futures
    • “Preventing democracies from turning autocratic”
    • “Making it harder for existing autocracies to use AI to entrench authoritarianism further”
    • “Ensuring that autocracies don’t become hegemonic post-AGI”
    • Improving “the governance of superintelligence development”
  • Delaying or slowing down major decisions
    • “[Slowing] the intelligence explosion”
    • Delaying usage of space resources
    • Doing “explicitly temporary commitments”

Improving Global Stability

This means working to ensure that humanity doesn’t face major events or situations that put it in a worse position to determine its future.

Possible cause areas include:

  • Reducing global catastrophic risks
  • Preventing wars
  • Preventing race dynamics
  • Empowering responsible leaders

Improving Future Humans’ Decision-Making

This means creating and improving technologies and institutions in order to enable future humans to make better decisions.

Potential cause areas include:

  • Promoting the use of AI to improve epistemics
  • Doing research on critical topics before decisions are made, possibly using AI
    • Research on space governance
    • Research on the rights of digital beings
    • Research on AI governance
    • Research on macrostrategy
  • Improving group decision-making procedures

Empowering Responsible Actors

This means ensuring that those in power are motivated by doing the most good rather than by pursuing their own self-interest.

Potential cause areas include:

  • Supporting responsible leaders in elections
  • Disempowering the ultra-rich through wealth redistribution

Speeding Up Progress

This means working to speed up technological development so that society reaches a state of maximal technological development sooner and can spend more time in it.

You may support work in this category if you believe that humanity faces no extinction risk and that it will almost certainly converge on the correct moral values and be motivated to put them into action.

This view is argued against here and here.

Learning More

This means doing further research on any of the cause areas above.

You may support work in this category if you believe that further knowledge acquisition is currently more impactful than taking action.

Appendix

Mapping of Cause Areas

This is the mapping of cause areas that I have just described:

  • Ensuring survival
    • Reducing extinction risks
      • Biorisks
        • Engineered pandemics
        • “Natural” pandemics
        • Mirror bacteria
      • Great power war
        • World peace
        • Nuclear weapons
      • AI takeover
      • Other risks from AI
      • Climate change
      • Environmental damage
    • Reducing the risk of irreversible collapse
      • Reducing global catastrophic risks
      • Climate change
      • Fossil fuel depletion
    • Reducing the risk of stagnation
      • Increasing population growth
      • Promoting technological progress
    • Reducing the risk of stable totalitarianism
      • Promoting more democratic futures
  • Increasing flourishing
    • Locking-in one’s values
      • Promoting positive values
      • Promoting specific values
        • Promoting consideration of digital beings
        • Promoting consideration of animals
        • Promoting liberal-democratic values
        • Promoting longtermism
      • Reducing suffering risks
      • Improving futures with AI takeover
      • Governance on the use of space resources
    • Ensuring the future is aligned with the correct values
      • Working towards viatopia
        • Keeping humanity’s options open
        • Promoting futures with more reflection
        • Improving global stability
      • Promoting futures with more moral reflection
        • Keeping humanity’s options open
        • Improving global stability
        • Promoting more democratic futures
        • Promoting the use of AI for moral reflection
      • Improving the ability for people with different views to get their desired futures
        • Increasing the likelihood of moral trade
        • Improving group decision-making procedures
        • Creating and promoting the widespread use of AI that helps humans to engage in decision-making
    • Ensuring future people are able to create a good future
      • Keeping humanity’s options open
        • Promoting more democratic futures
          • “Preventing democracies from turning autocratic”
          • “Making it harder for existing autocracies to use AI to entrench authoritarianism further”
          • “Ensuring that autocracies don’t become hegemonic post-AGI”
          • Improving “the governance of superintelligence development”
        • Delaying or slowing down major decisions
          • “[Slowing] the intelligence explosion”
          • Delaying usage of space resources
          • Doing “explicitly temporary commitments”
      • Improving global stability
        • Reducing global catastrophic risks
        • Preventing wars
        • Preventing race dynamics
        • Empowering responsible leaders
        • Delaying or slowing down major decisions
      • Improving future humans’ decision-making
        • Promoting the use of AI to improve epistemics
        • Doing research on critical topics before decisions are made, possibly using AI
          • Research on space governance
          • Research on the rights of digital beings
          • Research on AI governance
          • Research on macrostrategy
        • Improving group decision-making procedures
      • Empowering responsible actors
        • Supporting responsible leaders in elections
        • Disempowering the ultra-rich through wealth redistribution
    • Speeding up progress
  • Learning more
