Joey Marcellino

Comments

Right, gotcha. 

I have conflicting intuitions here. In the case as you described it, I want to bite the bullet and say that everyone is acting rationally, and the 8000 are just unlucky. Something seems off about reducing risk for yourself in service of the project of reducing overall suffering, when you wouldn't do it in service of the project of reducing your own suffering. 

That said, if you change the thought experiment so that everyone can experience Y in order to prevent a 1 in a million chance of someone else experiencing X, I'm much more inclined to say that we should integrate as you've described. It seems like the dynamics are genuinely different enough that maybe I can make this distinction coherently?

Re: seatbelts, I was a bit confused; you seemed to be saying that when you integrate over "decisions anyone makes that cause benefit/harm," you collapse back to expected utility theory. I was suggesting that expected utility theory as I understand it does not involve integrating over everyone's decisions, since then e.g. the driver with deflated self-worth in my previous example should wear a seatbelt anyway. 

Hmm, I don't really feel the force of this objection. My decision to wear my own seatbelt is causally unconnected to both everyone else's decisions and whatever consequences everyone else faces, and everyone else's decisions are unconnected to mine. It seems odd that I should then be integrating over those decisions, regardless of what decision theory/heuristic I'm using. 

For example, suppose I use expected value theory, and I value my own life a little less than everyone else's. I judge that the trivial inconvenience of putting on a seatbelt genuinely isn't worth the reduction in risk to my life, although I would counsel other people to wear seatbelts given the higher value of their lives (and thus, upon reflection, support a universal policy of seatbelt wearing). Do you think I ought to integrate over everyone's decisions and wear a seatbelt anyway? If so, I think you're arguing for something much stronger than standard expected value reasoning.
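
To make the setup concrete, here's a toy calculation of the kind I have in mind; every number (the per-mile risk averted, the buckling cost, the two life values) is made up purely for illustration:

```python
# Toy version of the asymmetry above: with a deflated value on my own life,
# the expected benefit of buckling up for a one-mile drive falls below the
# inconvenience for me, but not when I apply the same math to someone else.
# All numbers are hypothetical placeholders.

RISK_AVERTED_PER_MILE = 1e-8  # hypothetical fatality risk a seatbelt averts per mile
COST_PER_BUCKLE = 0.01        # hypothetical inconvenience of buckling up once
VALUE_OWN_LIFE = 5e5          # the deflated value I place on my own life
VALUE_OTHER_LIFE = 1e7        # the value I place on anyone else's life

benefit_to_me = RISK_AVERTED_PER_MILE * VALUE_OWN_LIFE        # 0.005 < 0.01
benefit_to_another = RISK_AVERTED_PER_MILE * VALUE_OTHER_LIFE  # 0.10  > 0.01

print("I buckle up for one mile:", benefit_to_me > COST_PER_BUCKLE)            # False
print("I counsel others to:     ", benefit_to_another > COST_PER_BUCKLE)       # True
```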

Thanks!

Indeed, if all we're considering is the decision to wear seatbelts or not, I would say that wearing a seatbelt for a lifetime total of 1 mile is (maybe) fanatical, and 500 miles is (maybe) not. In practice, your second question about groupings will come into play; see below. If you don't know how many miles you'll drive but have a probability distribution over it, I suppose you'd treat it the same way as the scenarios I discuss in the post: discretize the distribution according to your discount threshold so you don't end up discounting everything, then take the expected value as normal and see if it's worth all the seatbelt applications. The results will depend heavily on the shape of the distribution and your numbers for the discount threshold, value of life, etc.
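
As a rough sketch of that procedure (the distribution, per-mile risk, value of life, inconvenience cost, and threshold below are all placeholders I've made up):

```python
# Sketch: discretize a distribution over lifetime miles driven, drop outcomes
# whose probability falls below the discount threshold, then compare the
# expected benefit of always wearing a seatbelt against the accumulated
# inconvenience. Every constant here is hypothetical.

RISK_PER_MILE = 1e-8           # hypothetical fatality risk per unbelted mile
SEATBELT_RISK_REDUCTION = 0.5  # hypothetical fraction of that risk a seatbelt removes
VALUE_OF_LIFE = 1e7            # hypothetical dollar value placed on one's life
COST_PER_MILE_BELTED = 0.01    # hypothetical inconvenience cost per belted mile
DISCOUNT_THRESHOLD = 1e-6      # probabilities below this are treated as zero

# (lifetime miles, probability) -- a made-up discretized distribution
mileage_distribution = [
    (1_000, 0.05),
    (100_000, 0.6),
    (500_000, 0.3499995),
    (10_000_000, 0.0000005),   # below the threshold, so it gets discarded
]

kept = [(m, p) for m, p in mileage_distribution if p >= DISCOUNT_THRESHOLD]
total = sum(p for _, p in kept)
kept = [(m, p / total) for m, p in kept]  # renormalize after discarding

expected_benefit = sum(
    p * m * RISK_PER_MILE * SEATBELT_RISK_REDUCTION * VALUE_OF_LIFE for m, p in kept
)
expected_cost = sum(p * m * COST_PER_MILE_BELTED for m, p in kept)

print(f"expected benefit: ${expected_benefit:,.2f}, expected cost: ${expected_cost:,.2f}")
print("worth wearing" if expected_benefit > expected_cost else "not worth wearing")
```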

The grouping issue is tricky. It seems to me like we ought to consider the decisions together, since I'm (more or less) indifferent between dying in a bike crash and dying in a car crash. Perhaps we ought to group all "decisions that might kill you" together and think of it somewhat like the repeated trade offer described in the post: each time you contemplate going helmet- or seatbelt-less, you have the option to gain some utility at the cost of a slightly higher risk of dying, and the reasonable thing to do is integrate over your decisions (although it'll be slightly more complicated, since you might expect to, e.g., drive many more miles in the future and need to account for that somehow).

As mentioned in the post, integrating like this in situations of repeated decision-making can mean that you reject arbitrarily small changes in probability, even those below your discount threshold. I wouldn't say that this effect means that your practical discount threshold is arbitrarily small.
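
A toy version of that effect, with made-up numbers: each unbelted mile adds a risk well below the threshold, but the integrated lifetime policy does not.

```python
# Toy calculation: per-decision risk vs. integrated lifetime risk.
# Numbers are hypothetical placeholders.

RISK_PER_MILE = 1e-8       # hypothetical added fatality risk per unbelted mile
DISCOUNT_THRESHOLD = 1e-6  # probabilities below this would normally be discounted
LIFETIME_MILES = 500_000

per_decision_risk = RISK_PER_MILE
lifetime_risk = 1 - (1 - RISK_PER_MILE) ** LIFETIME_MILES  # roughly 5e-3

print(f"one mile:  {per_decision_risk:.1e}  (discountable: {per_decision_risk < DISCOUNT_THRESHOLD})")
print(f"lifetime:  {lifetime_risk:.1e}  (discountable: {lifetime_risk < DISCOUNT_THRESHOLD})")
```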

Thanks for reading!

Re: seatbelts, I don't think you need to invoke effects on future reasoning. If I'm understanding correctly, you're imagining a situation where, after each one-mile drive with no seatbelt, you say to yourself something like "well, I've driven all these other miles with no seatbelt, so there's no reason to wear a seatbelt for this next mile." The previous decisions then somehow make you even less likely to wear a seatbelt in the future. But even totally absent this effect, where you use the exact same reasoning every time independent of all past and future decisions, if the one-mile risk is below your threshold you'll never wear a seatbelt. This is a pretty general problem that doesn't really depend on the particulars of the person or situation (applies to anything where a big important thing can be decomposed into many small unimportant things), so I'm not sure an appeal to practical reasoning will suffice. 

Re: the collective, I'll tentatively suggest something like "the set of people taking the same action as me, maybe up to differences in magnitude." If I think that donating a million dollars to Charity X would have a non-negligible impact on the world, I guess it shouldn't matter whether I personally donate a million all at once, donate a million in many ten-dollar increments, or know that 99,999 other people will each donate ten dollars and I donate the final ten. But I agree that this is still underspecified.

Hi all! New to the forum, but I've been aware of and thinking about EA for a while, and would like to begin engaging a bit with the community. I'm a PhD student in physics at the University of Geneva working on quantum communication.

I'm particularly interested in chatting with anyone with ideas or knowledge about humanitarian use cases for quantum computing (might start a thread about this later, especially if anyone else is interested). My education re: quantum computing has tended to focus more on principles than applications, but when applications are mentioned they tend to fall into the broad categories of finance or pharmaceutical research. I'm hoping to find some different perspectives. 

Cheers!