As a utilitarian, I personally believe alignment to be the most important cause area - though, oddly, even though I believe x-risk reduction to be positive in expectation, I believe the future is most likely to be net negative.
I personally believe, without a high level of certainty, that current total utility on Earth is net negative due to wild animal suffering. If we therefore give the current world a utility value of -1, I would describe my beliefs about future scenarios like this:
~5% likelihood: ~10^1000 (very good future, e.g. a hedonium shockwave)
~90% likelihood: -1 (the future is likely good for humans, but this is still negligible compared to wild animal suffering, which will remain)
~5% likelihood: ~-10^100 (s-risk-like scenarios)
My reasoning for thinking “scenario 2” is more likely than “scenario 1” is based on what seem to be the values of the general public currently. Most people seem to care about nature conservation, but hardly anyone seems interested in mass-producing (artificial) happiness. While the Earth is only expected to remain habitable for about two billion years (and humans, assuming we avoid any x-risks, are likely to remain for much longer), I think, when it comes to it, we’ll find a way to keep the Earth habitable - and with it, wild animal suffering.
Based on these 3 scenarios, you don’t have to be a great mathematician to realize that the future is most likely to be net negative, yet positive in expectation. While I still, based on this, find alignment to be the most important cause area, I find it quite demotivating to spend so much of my time (and donations) preserving a future that I find unlikely to be positive. But the thing is, most longtermists (and EAs in general) don't seem to share my beliefs. Even someone like Brian Tomasik has said that if you’re a classical utilitarian, the future is likely to be good.
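To make the arithmetic explicit, here is a minimal sketch of that expected-value calculation, assuming the rough scenario probabilities and utilities listed above are taken at face value (the exact numbers are of course only illustrative):

```python
from fractions import Fraction  # exact arithmetic; floats would overflow at 10**1000

# (probability, utility) for the three scenarios sketched above
scenarios = [
    (Fraction(5, 100), Fraction(10) ** 1000),    # very good future (hedonium shockwave)
    (Fraction(90, 100), Fraction(-1)),           # wild animal suffering persists
    (Fraction(5, 100), -(Fraction(10) ** 100)),  # s-risk-like scenarios
]

expected_value = sum(p * u for p, u in scenarios)
prob_negative = sum(p for p, u in scenarios if u < 0)

print(expected_value > 0)  # True: the 10**1000 outcome dominates the expectation
print(prob_negative)       # 19/20: yet a negative outcome is ~95% likely
```

The expectation is dominated by the small chance of an astronomically good outcome, even though the modal outcome is negative.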
So now I’m asking, what am I getting wrong? Why is the future likely to be net positive?
Animal Liberation by Peter Singer was published in 1975, just 50 years ago. Wild animal suffering as a moral concern gained traction in effective altruism just 10-20 years ago. Moral ideas and social movements often take a long time to go from conception to general acceptance. For example, in the U.S., 65 years passed between the founding of the Mattachine Society, one of the earliest gay rights groups, in 1950 and the Supreme Court decision in 2015 that gave gay people the right to marry nation-wide.
Given this, why would you consider it 90% likely that in 100 years, in 1000 years, or in 10,000 years, people wouldn’t change their minds about wild animal suffering? Especially given that, on these timescales, I think you’re also willing to entertain that there may be radical technological/biological changes to many or most human beings, such as cognition-enhancing neurotech, biotechnological self-modification of the brain, potentially merging with AGI, and so on.
First off, I must say - I really like that answer.
I guess I'm concerned about how much value lock-in there will be with the creation of AGI. And I find it hard to imagine a majority caring about wild animal suffering or mass-producing happiness (e.g. creating a large amount of happy artificial sentience). But I do agree - I shouldn't give it a 90% likelihood.
Personally, I’ve never bought the whole value lock-in idea. Could AGI make scientific, technological, and even philosophical progress over time? Everybody seems to say yes. So, why would we think AGI would not be capable of moral progress?
It seems like an awkward relic of the "MIRI worldview", which I don’t think ever made sense, and which has lost credibility since deep learning and deep reinforcement learning have become successful and prominent. Why should we think “value lock-in” is a real thing that would ever happen? Only if we make certain peculiar and, in my opinion, dubious assumptions about the nature of AGI.
When you say you can’t imagine a majority of people caring about wild animal suffering, does this mean you can imagine what society will be like in 1000 or 10,000 years? Or even beyond that? I think this is a case where my philosophical hero Daniel Dennett’s admonishment is appropriate: don’t mistake a failure of imagination for a matter of necessity. People’s moral views have changed radically within the last 500 years — on topics like slavery, children, gender, violence, retribution, punishment, animals, race, nationalism, and more — let alone the last 1000 or 10,000.
I am an optimist in the David Deutsch sense. I think, given certain conditions in human society (e.g. science, liberal democracy, universal education, the prevalence of what might be called Enlightenment values), there is a tendency toward better ideas over time. Moral progress is not a complete accident.
How did you come to your view that wild animal suffering is important? Why would that process not be repeated on a large scale within the next 1000 or 10,000 years? Especially if per capita gross world product is going to increase to millions of dollars and people’s level of education is going to go way up.
The factor of technological advancement must also be taken into account. A fully cooperative humanity committed to eliminating all forms of suffering could have at its disposal technological means as unimaginable to us today as our current technology would have been to the wise Aristotle more than two thousand years ago.
True. But I think that's more of an argument that the future is uncertain (which of course is a relevant argument). But even with the technology, I don't necessarily think a majority will be interested in eliminating all forms of suffering (especially in the wild) or mass-producing happiness.
Not an answer to your question, but I also think most futures will be net negative for similar reasons, so it’s not just you!
What is the difference between net negative and negative in expectation?
Net positive/negative: "all good" minus "all bad" for the outcome that actually occurs.
In expectation: the probability-weighted average over possible outcomes (probability × impact, summed).
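To illustrate the distinction with a toy example (these numbers are made up, not from the thread): an outcome can be negative in most cases yet still positive in expectation if the rare upside is large enough.

```python
# Toy numbers, purely illustrative: 90% chance of losing 1, 10% chance of gaining 100.
outcomes = [(0.9, -1.0), (0.1, 100.0)]                # (probability, value)

expected_value = sum(p * v for p, v in outcomes)      # 0.9*(-1) + 0.1*100 = 9.1
prob_negative = sum(p for p, v in outcomes if v < 0)  # 0.9

# Most likely net negative (90% of the time), yet positive in expectation (+9.1).
print(prob_negative, expected_value)
```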