
Arepo


I remember him discussing person-affecting views in Reasons and Persons, but IIRC (though it's been a very long time since I read it) he doesn't particularly advocate for them. I use the phrase mainly because of the quoted passage, which appears (again IIRC) in both The Precipice and What We Owe the Future, as well as possibly in some of Bostrom's earlier writing.

I think you could equally give Bostrom the title, though, for writing (to my knowledge) the first whole paper on the subject.

Cool to see someone trying to think objectively about this. Inspired by this post, I had a quick look at the scores in the World Happiness Report to compare China to its ethnic cousins, and while there are many reasons to take this with a grain of salt, China does... ok. On 'life evaluation', which appears to be the all-things-considered metric (I didn't read the methodology, so correct me if I'm wrong), some key scores:

Taiwan: 6.669

Philippines: 6.107

South Korea: 6.038

Malaysia: 5.955

China: 5.921

Mongolia: 5.833

Indonesia: 5.617

Overall it's ranked 68th of 147 listed countries, and it outscores several (though I think a minority of) democratic LMIC nations. One could attribute some of its distance from the top simply to lower GDP per capita, though one could also argue (as I'm sure many do) that its lower GDP per capita is itself a result of CCP control. (Though if that's true and is going to continue to be true, it seems incompatible with the idea that China has a realistic chance of winning an AI arms race and consequently dominating the global economy.)

One view I wish people would take more seriously is the possibility that it can be true both that

  • the Chinese government is net worse for welfare standards than most liberal democracies; and
  • the expected harms of ratcheting up global tensions to prevent it winning an AI arms race are nonetheless much higher than the expected benefits

Thanks :)

I don't think I covered any specific relationship between the factors in that essay (except those that were formally modelled in it), where I was mainly trying to lay out a framework that would even allow you to ask the question. This essay is the first time I've spent meaningful effort on trying to answer it.

I think it's probably ok to treat the factors as a priori independent, since ultimately you have to run with your own priors. And for the sake of informing prioritisation decisions, you can decide case by case how much you imagine your counterfactual action changing each factor.

"You don't need a very high credence in e.g. AI x-risk for it to be the most likely reason you and your family die"

I think this is misleading, especially if you agree with the classic notion of x-risk as excluding events from which recovery is possible. My credence over event fatality rates is heavily concentrated toward the lower end, so I would expect far more deaths under the curve between 10% and 99% fatality than between 99% and 100%, and probably more area to the left even under a substantially more even partition of outcomes.
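As a minimal sketch of that arithmetic - assuming a hypothetical Beta(1, 4) credence distribution over the fatality fraction, chosen purely for illustration and not as a claim about the real parameters:

```python
# Illustrative only: assumes credence over an event's fatality fraction f
# follows Beta(1, 4), i.e. mass concentrated at lower fatality rates.
from scipy import stats
from scipy.integrate import quad

dist = stats.beta(1, 4)

def expected_death_density(f):
    # Deaths scale with f, weighted by the credence density p(f).
    return f * dist.pdf(f)

mid_band, _ = quad(expected_death_density, 0.10, 0.99)  # 10-99% fatality
top_band, _ = quad(expected_death_density, 0.99, 1.00)  # 99-100% fatality
print(f"expected-death mass, 10-99%:  {mid_band:.4f}")
print(f"expected-death mass, 99-100%: {top_band:.8f}")
# Under this assumed distribution the 10-99% band dominates by several
# orders of magnitude, which is the shape of the claim above.
```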

I fear we have yet to truly refute Robin Hanson’s claim that EA is primarily a youth movement.

FWIW my impression is that CEA have spent significantly more effort on recruiting people from universities than on any other comparable subset of the population.

Somehow, despite 'Goodharting' now being a standard phrase, 'Badharting' is completely unrecognised by Google.

I suggest the following intuitive meaning: failing to reward a desired achievement because the proxy measure you used to represent it wasn't satisfied:

'No bonus for the staff this year: we didn't reach our 10% sales units growth target.'

'But we raised profits by 30% by selling more expensive products, you Badharting assholes!'

I guess in general any decision binds all future people in your lightcone to some counterfactual set of consequences. But it still seems practically useful in interpersonal interactions to distinguish a) between decisions that deliberately restrict their action set and those that just provide them, in expectation, with a different action set of ~the same size, and b) between those motivated by indifference and those motivated specifically by an authoritarian desire to make their values more consistent with ours.

Muchos hugs for this one. I'm selfishly glad you were in London long enough for us to meet, fwiw :)

I feel like this is a specific case of a general attitude in EA that we want to lock in our future selves to some path in case our values change. The more I think about this the worse it feels to me, since 

a) your future values might in fact be better than your current ones, or, if you completely reject any betterness relation between values, then it doesn't matter either way

b) your future self is a separate person. If we imagine the argument targeting any other person, it's horrible - it states that you should lock them into some state that ensures they're forced to serve your (current) interests

I hope over time you shift your networks back to your real home <3

I'm not doing the course, but I'm pretty much always on the EA Gather, and usually on for coworking and accountabilitying at times compatible with my timezone (UTC+8). Feel free to hop on there and ping me - there's a good chance I'll be able to reply immediately, at least by text, and if not, pretty much always within 12 hours.
