Jens Aslaug 🔸

Dentist
115 karma · Joined · Working (0-5 years) · Denmark

Bio

Dentist doing earning to give. I have pledged to donate a minimum of 50% of my income (aiming for 60%), or $40,000-50,000 annually (at the beginning of my career). While I expect to mainly do "giving now", I plan to do "investing to give" in periods of limited effective donation opportunities.

As a longtermist and total utilitarian, my goal is to find the cause that increases utility most time- and cost-effectively, regardless of the time or type of sentient being affected. In pursuit of this goal, I so far care mostly about alignment, s-risks, artificial sentience and WAW (wild animal welfare) (but feel free to change my mind).

I first heard about EA in 2018 through an animal rights organization I worked for part-time (Animal Alliance in Denmark). However, I have only had minimal interaction with other EAs.

Male, 25 years old, and diagnosed with Asperger's (autism) and dyslexia.

Comments (23)

First off, I must say - I really like that answer. 

I guess I'm concerned about how much of a value lock-in there will be with the creation of AGI. And I find it hard to imagine a majority caring about wild animal suffering or mass-producing happiness (e.g. creating a large amount of happy artificial sentience). But I do agree - I shouldn't give it a 90% likelihood.

True. But I think that's more of an argument that the future is uncertain (which, of course, is a relevant argument). Even with the technology, I don't necessarily think a majority will be interested in eliminating all forms of suffering (especially in the wild) or mass-producing happiness.

Net positive/negative: "all good" minus "all bad".
In expectation: the probability-weighted average outcome (probability × impact).

I have a question I would like some thoughts on:

As a utilitarian, I personally believe alignment to be the most important cause area - though weirdly enough, while I believe x-risk reduction to be positive in expectation, I also believe the future is most likely to be net negative.

I personally believe, though without a high level of certainty, that the current utility on Earth is net negative due to wild animal suffering. If we therefore give the current world a utility value of -1, I would describe my beliefs about future scenarios like this:

  1. ~5% likelihood: ~10^1000 (very good future, e.g. hedonium shockwave)
  2. ~90% likelihood: -1 (the future is likely good for humans, but this is still negligible compared to wild animal suffering, which will remain)
  3. ~5% likelihood: ~-10^100 (s-risk-like scenarios)

 

My reasoning for thinking "scenario 2" is more likely than "scenario 1" is based on what currently seem to be the values of the general public. Most people seem to care about nature conservation, but no one seems interested in mass-producing (artificial) happiness. And while the Earth is only expected to remain habitable for about two billion years (whereas humanity, assuming we avoid any x-risks, is likely to remain for much longer), I think that, when it comes to it, we'll find a way to keep the Earth habitable - and thus preserve wild animal suffering.

Based on these three scenarios, you don't have to be a great mathematician to see that the future is most likely to be net negative, yet positive in expectation. While I still, on this basis, find alignment to be the most important cause area, I find it quite demotivating to spend so much of my time (and donations) preserving a future that I find unlikely to be positive. But the thing is, most longtermists (and EAs in general) don't seem to share my beliefs. Even someone like Brian Tomasik has said that if you're a classical utilitarian, the future is likely to be good.
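To spell out the arithmetic behind that claim (using my made-up numbers above, with today's world normalized to -1), the rough expected value is:

$$
E[U] \approx 0.05 \cdot 10^{1000} + 0.90 \cdot (-1) + 0.05 \cdot \left(-10^{100}\right) \approx 5 \cdot 10^{998} > 0, \qquad P(U < 0) = 0.90 + 0.05 = 0.95.
$$

So on these numbers there is a ~95% chance the future is net negative, while the expectation is still hugely positive, because scenario 1 dominates everything else.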

 So now I’m asking, what am I getting wrong? Why is the future likely to be net positive? 

Thank you for the post! Can't believe I only saw it now.

I do agree that, for most people, altruism can and should be seen as something that's net positive for one's own happiness. But:
1. My post was mainly intended for people who are already "hardcore" EAs and are willing to make a significant personal sacrifice for the greater good.
2. You make some interesting comparisons to religion that I somewhat agree with, though I don't think religion is as time-consuming as EA is for many EAs. I'm also sure EA would seem less like a personal sacrifice if you were surrounded by other EAs.
3. Trying to make EA more mainstream is not simple. Many of its ideas seem radical to the average person. You could, of course, try to make the ideas seem more in line with the average viewpoint, but I don't think that's worth it if it makes us less efficient.

Well, I totally agree with you, and the simple answer is - I don't. All of the graphs/tables I made (except fig. 5, which was only a completely made-up example) are based on averages (including made-up numbers that are supposed to represent the average). They don't take into account personal factors, like your income or how expensive rent + utilities are in your area. The models should therefore only be used as a very rough guide that can't stand on its own (though I guess one could make a more complex model that includes these factors). One should also make a personal budget to see how it would look in real life, as I suggested in this section.
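As a very rough illustration of what such a personalised adjustment could look like (the function name, structure and numbers here are purely hypothetical and not from my post), one could cap a pledged donation fraction by what is actually left after essentials:

```python
# Hypothetical sketch: adjust a pledged donation fraction for personal factors.
# All names and numbers are illustrative, not taken from the original post.

def affordable_donation(income: float, rent_utilities: float,
                        other_essentials: float, target_fraction: float = 0.5) -> float:
    """Return an annual donation amount capped by what's left after essentials.

    income           -- annual post-tax income
    rent_utilities   -- annual rent + utilities in your area
    other_essentials -- food, transport, insurance, etc. per year
    target_fraction  -- the fraction of income you'd ideally like to give
    """
    disposable = income - rent_utilities - other_essentials
    target = target_fraction * income
    # Never pledge more than what's actually left after essentials.
    return max(0.0, min(target, disposable))

# Example: same income and pledge, but a high-rent vs. a low-rent area.
print(affordable_donation(60_000, 24_000, 15_000))  # 21000.0 (rent-limited)
print(affordable_donation(60_000, 12_000, 15_000))  # 30000.0 (full 50% pledge)
```

The point is just that the same average-based recommendation can translate into quite different real-world budgets depending on local costs.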

X-risk reduction (especially alignment) is highly neglected, and it's less clear how our actions can impact the value of the future. However, I think the impact of both is very uncertain, and I still think working on s-risk reduction and longtermist animal work is high-impact.

I agree. :) Your idea of lobbying and industry-specific actions might also be more neglected. In terms of WAW, I think it could help reduce the amount of human-caused suffering of wild animals, but likely not have an impact on naturally caused suffering.
