
This is a cross-post from my blog.


Longtermism is the view that we have a moral obligation to make the far future better.

It is usually argued for on the basis of the following claims:

  1. Future people are morally important.
  2. The future will be vast in expectation.
  3. There are actions available to us today that can meaningfully influence the far future.
  4. Due to the size of humanity’s potential, the value of these actions’ effects will, in expectation, be very large.

These claims, in turn, rest on the assumption that, given our present understanding of the world, we are justified in believing:

  1. We can make predictions about the far future that are sufficiently accurate that we are justified in using them to guide our decision-making.
  2. There are actions available to us that can predictably influence the far future.
  3. Humanity’s future will, in expectation, persist for millions, billions, or even trillions of years.
  4. Humanity will use a significant portion of its resources over this time to create whatever you consider morally good (or whatever actually is morally good).

I think that, according to normal views of the future, this argument is really compelling. If you think that humanity will prevent itself from going extinct and create a technologically advanced utopia, you really should be reasonably convinced that working to reduce extinction risk or to increase the likelihood of such an outcome is deeply important.

Reasons to Be Skeptical of Longtermism

That said, I’m not sure how confident we should be in this view, and I think there are several compelling reasons to think that longtermism could be wrong:

1. The “We should expect the far future to be neutral in value” objection

One reason longtermists offer in support of their view is that there are actions available to us today that can reduce humanity’s existential risk. If humanity’s far future has extraordinary value in expectation, even reducing existential risk by very small margins could be worthwhile.

While I find it quite plausible that we can reduce existential risk, I am skeptical that we should expect the far future’s value to be anything other than neutral.

For one, it’s extremely unclear how we should expect power in the future to be distributed and whether we should expect those in power to hold the correct moral views.

If power is highly distributed in a democracy, we may expect the future to be very valuable, since democracies seem to create conditions that motivate people towards more correct views. Problematically, though, it’s unclear whether these exact conditions would persist into the future. In the US, for instance, we’ve seen major improvements in the treatment of black people as a result of the civil rights movement: black people considered their treatment unjust, and they were able to convince others to take their plight seriously. In the future, oppressed peoples may be unable to convince others, because others may be able to simply avoid hearing their viewpoints. For instance, if future people can use AI to select whom they interact with, they could completely avoid people they disagree with. And, even if democracy generally leads to correct views, it could be that this mechanism isn’t strong enough: if a view favors reproduction, it may win out not because it’s correct but because it simply out-reproduces the rest of the population.

If power is instead concentrated in an autocracy, it’s possible the future could still turn out quite well, but while some autocracies seem to have been net positive for their people, the track record of autocracies is questionable to say the least. And, as I wrote in a previous post, “Some of the most powerful people in history have been actively sadistic. In William MacAskill’s What We Owe The Future, he points out that, ‘Although they are rare in the population as a whole, malevolent, sadistic, or psychopathic actors may be disproportionately likely to gain political power. Many dictators have exhibited such traits aside from Mao and Hitler, including Genghis Khan, Saddam Hussein, Stalin, Mussolini, Kim Il-sung, Kim Jong-il, François Duvalier, Nicolae Ceaușescu, Idi Amin, and Pol Pot.’ If it turns out that the future selects for sadistic leaders, the world could be horrible on scales that are difficult to even imagine.”

For two, even if those in power have almost the right moral views, this could still result in an enormous moral catastrophe. Most people today consider liberal democracies to be net positive for the world, but, considering how they enabled the mass suffering of animals on factory farms, it’s not so clear that this is the case. Future people could make a similar mistake, such as by spreading wild-animal suffering across the universe.

2. The arbitrariness objection

If you try to assign exact probabilities to each of longtermism’s central claims (and each of their assumptions), I think you will find that, for each claim, you could reasonably arrive at probabilities that differ from one another by many orders of magnitude, and that, under certain sets of reasonable probabilities, longtermism comes out false.

For instance, a central claim of longtermism is that we should expect humanity to persist, on average, for a vast amount of time. The problem, to me, is that I could reasonably see humanity persisting for only a very short duration or for an extremely long one. Some days, it seems like it would be crazy to expect humans to last more than 10,000 years, while, on others, it seems like it would be crazy not to expect them to last billions of years.

In “The Case for Strong Longtermism,” Hilary Greaves and William MacAskill argue against this objection, stating:

“The ‘arbitrariness’ objection is that even if a rational agent must have some precise credence and value functions, there is so little by way of rational restriction on which precise functions are permissible that the argument for strong longtermism is little more than an assertion that the authors’ own subjective probabilities are ones relative to which this thesis is true. We have some sympathy with this objection. However, there is a distinction between there being no watertight argument against some credence function on the one hand, and that credence function being reasonable on the other. Even in the present state of information, in our view credence-value pairs such that the argument for strong longtermism fails are unreasonable. If, for instance, one had credences such that the expected number of future people was only 10^14, the status quo probability of catastrophe from AI was only 0.001%, and the proportion by which $1 billion of careful spending would reduce this risk was also only 0.001%, then one would judge spending on AI safety equivalent to saving only 0.001 lives per $100—less than the near-future benefits of bednets. But this constellation of conditions seems unreasonable.”

I’m not entirely sure I would agree that such a set of conditions is unreasonable, but, regardless, I think this argument still gives good reason to be skeptical of longtermism. Notice how modest their illustrative future is: if you assume ten billion people alive at any given time, each living roughly a century, an expected 10^14 future people implies that humanity lasts only about a million years, a far shorter duration than the billions or even trillions of years that MacAskill mentions in What We Owe The Future.
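To make the arithmetic explicit, here is a quick back-of-the-envelope check of their numbers (a sketch only; the steady ten-billion population and hundred-year lifespan are my own illustrative assumptions, not figures from the paper):

```python
# Greaves and MacAskill's illustrative "unreasonable" credences:
expected_future_people = 1e14  # expected number of future people
p_ai_catastrophe = 1e-5        # status quo probability of AI catastrophe (0.001%)
risk_reduction = 1e-5          # proportional reduction from $1B of spending (0.001%)

# Expected lives saved by $1 billion of careful AI-safety spending:
lives_saved = expected_future_people * p_ai_catastrophe * risk_reduction
print(lives_saved)                # 10,000 lives per $1 billion
print(lives_saved / (1e9 / 100))  # 0.001 lives per $100, matching their figure

# Implied duration of humanity's future under these credences
# (my assumption: a steady population of 10 billion with ~100-year lives):
population = 1e10
lifespan_years = 100
births_per_year = population / lifespan_years    # ~100 million new people per year
print(expected_future_people / births_per_year)  # ~1,000,000 years
```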

3. The chaotic future objection

I think that it’s likely that humanity’s future will mostly be chaotic, which is to say that:

a. If the world is deterministic, then very slight changes in starting conditions will result in radically different outcomes.

b. If the world is not deterministic, then no individual action will have a meaningful effect on eventual outcomes.

If humanity’s future is mostly chaotic, that would give us good reason to think that we’re unable to make reasonable predictions about how the future will go or how our actions will affect it, which would effectively strike down longtermism.
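To make claim (a) concrete, the logistic map, a textbook chaotic system, shows how a minuscule perturbation in starting conditions gets amplified into a completely different trajectory (a toy illustration of sensitive dependence on initial conditions, not a model of history):

```python
# Toy illustration of sensitive dependence on initial conditions:
# the logistic map x -> r * x * (1 - x) in its chaotic regime (r = 4).
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)         # original starting condition
b = trajectory(0.2 + 1e-9)  # perturbed by one part in 200 million

# The tiny difference roughly doubles each step; by around step 30
# the two trajectories are completely decorrelated.
for step in (10, 20, 30, 50):
    print(step, abs(a[step] - b[step]))
```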

One reason to think this is the case is that humanity’s history seems to have been very chaotic; in other words, it seems like even very slight changes to historical conditions would have led to a very different present day. For instance, if the distribution of power in Europe before World War I had been slightly different, it’s reasonable to think that the global distribution of power today would be vastly different. Similarly, if Mesopotamia had been just a bit less fertile, it seems like Greece and the Roman Empire would have had very different religions, which would have led to a very different world today.

In What We Owe The Future, William MacAskill makes a similar argument, making the case that history contains contingent events where, had things gone slightly differently, the world today would be very different. I think I agree with him, but he probably considers such events to be far less common than I do. In my view, it seems possible that both our history and our future are overflowing with millions or billions of such events rather than, say, hundreds or thousands of them. If this is the case, it seriously harms the case for longtermism because, even if you successfully make one contingent event go well, that doesn’t guarantee you’re able to make each subsequent one go well too.

Another reason to think this could be the case is that, even today, people seem extremely uncertain about how the future will go. Experts on AI, for instance, seem to have wildly divergent views. Some don’t think we’ll get AGI. Some think we will, but that it won’t be a big deal. Others think that we will and that it will be the most disruptive technology of all time. And still others think that we will and that it will almost certainly kill everyone. If we don’t even have a clear view of how a technology developed this century will affect the world and how we will respond to it, it doesn’t seem like we should think we have a clear view of how the far future will go.

A longtermist might respond that, just because the future is mostly unpredictable, that doesn’t mean there are no actions available that can predictably influence the far future.

The first set of actions they would point to are ones that reduce the risk of existential catastrophe. They would point out that, if humanity goes extinct, that would be a major loss of value that would persist for a very long time.

To this, I would respond with three things. First, it could be that humanity’s expected value is actually neutral, considering how difficult it is to predict whether humans will make the future better or worse. Second, it could be that many steps to reduce existential risk are actually neutral due to how chaotic the world is. If you try to reduce the risk of a great-power war by reducing tensions between China and the US, you could inadvertently increase the risk of a great-power war between other powers later on. Similarly, if you try to reduce the risk of an engineered pandemic through mass surveillance, this might accidentally increase the risk of a permanent autocracy instead. Third, even if an action seems robust, considering how chaotic the past has been, we should be at least somewhat skeptical that it is actually as robust as it seems.

The second set of actions they would point to are ones that predictably increase the expected value of the far future. They would point out that AGI could be developed this century and that it could enable people to have their values permanently determine how the far future goes. This could occur as the result of AGI-enabled autocracy or poor decisions around space governance.

To this, I would respond with two things. First, it could be that humanity will enter into some irreversible state as a result of AGI being invented; that said, just because we can predict that such a thing will occur doesn’t mean we can influence exactly how it occurs. It could turn out that each of the actions available to us really has no effect, since they interact with such a radically complex world. For instance, if you try to warn people of AGI-enabled coups, that could accidentally lead to more AGI-enabled coups. Second, it seems like such an event may be vastly less likely than longtermists expect. Many longtermist concerns involve a very specific constellation of conditions that are unlikely to unfold. For instance, to get AGI-enabled autocracy, AGI must be invented, an extremely small group of people must control it, and that group must be both motivated and able to create such an autocracy.

4. The “we’re historically very bad at predicting the future” objection

I don’t think anyone needs to be convinced that humans have historically been pretty bad at predicting how the future will go. When the Founding Fathers created the US Constitution, most of them didn’t expect the country to last more than fifty years. Similarly, many historical regimes expected to last centuries but fell far short. The Nazis, for instance, referred to their regime as the “Thousand-Year Reich,” but it lasted only twelve years.

If we’re so bad at making predictions about the future, this gives us good reason to think that many longtermist claims and assumptions, such as that humanity’s future will be vast or that it will use its resources to promote what’s morally good, may be wildly off.

It’s also worth pointing out that there’s been minimal research on how good people are at making predictions over the span of a century (and what little exists suggests we’re pretty poor at it), and, of course, no research on how good people are at making predictions on the order of millions of years. As such, we really should be quite uncertain how accurate our own predictions are.

5. The “future humans won’t be like us” objection

Most longtermist thought rests on the strong assumption that future humans will be like us and that, as such, we should expect them to take actions similar to those we would take. For instance, in The Precipice, Toby Ord writes, “in expectation, almost all of humanity’s life lies in the future, almost everything of value lies in the future as well: almost all flourishing; almost all beauty; our greatest achievements; our most just societies; our most profound discoveries. We can continue our progress on prosperity, health, justice, freedom, and moral thought. We can create a world of wellbeing and flourishing that challenges our capacity to imagine. And if we protect that world from catastrophe, it could last millions of centuries. This is our potential—what we could achieve if we pass the Precipice and continue striving for a better world.”

Ord’s vision seems possible, but, in many ways, it seems deeply unlikely, since it assumes that future humans would build a world just like the one we would, despite the fact that we should expect them to be quite different from us. Modern humans are the way they are as a result of their genetics, biology, and environment, but all of these are factors we should expect to change. In the future, or at least the future that longtermists imagine, we should expect humans to be able to radically alter not only themselves but also their offspring and their environment.

To give some examples, future humans could completely change who they are through strong self-modification. They could alter their own memories, preferences, beliefs, character, and personalities, and even if such changes were small initially, they could radically spiral out of control. Future humans could also modify the traits of their own offspring; if those offspring did the same, this process could likewise quickly spiral out of control. Lastly, future humans could wildly alter their environment. They could make it so that they’re never exposed to ideas they don’t want to hear, or so that they live in digital worlds where they have no knowledge of what the physical world is even like.

The question then becomes, “If we have good reason to think future humans may be very different from us, why should we expect them to build a world that we would approve of?”

Conclusion

Putting this all together, I really don’t know whether or not longtermism is true. I don’t think anyone should be highly confident that it is, but I also don’t think anyone should be highly confident that it’s not.

This is because even if you think each of these objections is pretty likely to hold, that still isn’t enough to reject longtermism outright. Considering that longtermists think it’s possible humanity will spread across the affectable universe and persist for trillions of years, if you think there’s almost any chance this is the case, that may still be sufficient to act as though longtermism is true, since the expected value of working on it would be incredibly high.
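To see the shape of this argument, consider a toy expected-value calculation (every number here is my own, purely for illustration):

```python
# Toy illustration: even a small credence that the longtermist picture
# is right can make the expected value of acting on it enormous.
credence_vast_future = 1e-3  # hypothetical: 0.1% chance the future holds ~1e16 lives
lives_if_vast = 1e16         # hypothetical: lives at stake in that scenario
risk_reduction = 1e-6        # hypothetical: an action cuts extinction risk by 1-in-a-million

expected_lives_saved = credence_vast_future * lives_if_vast * risk_reduction
print(expected_lives_saved)  # 10,000,000 expected lives, despite the tiny credence
```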

As such, I think it’s important to take seriously how our actions could affect the future and how their effects could persist over long time scales, but I also don’t think anyone should be fully committed to longtermism.
