To answer your question: personally, I think we should probably stick with standard expected value reasoning rather than the approach you are advocating here. So no, I wouldn't tell you to wear one anyway. But I'm still confused about exactly what that approach amounts to.
I'll try to make the objection I am trying to articulate more forceful:
Suppose we are considering some awful painful experience, X, and some trivial inconvenience, Y. Suppose everyone on earth agrees that, when thinking altruistically about others, 8000 people having experience X would be worse than 8 billion people having experience Y (that's how bad experience X is, and how trivial experience Y is).
Suppose also that everyone on earth adopts a discount threshold of just over 1 in a million.
Now suppose that everyone on earth is faced with the choice of experiencing Y or facing a 1 in a million chance of X. Since a 1 in a million risk falls below their discount threshold, they all discount it and choose the 1 in a million chance of X.
Now, with extremely high probability, ~8,000 people on earth will experience X. Take the perspective of any one individual looking at what's happened to everyone else. They will agree that the situation for everyone else is bad. They should, at least when thinking altruistically, wish that everyone else had chosen to experience Y instead (everyone agrees that it is worse for 8000 people to experience X than 8 billion to experience Y). But they can't actually recommend that any particular person should have decided any differently, because they have done exactly the same thing themselves!
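To make the "extremely high probability" claim concrete, here is the arithmetic (just a sketch using the numbers from the example above; nothing is assumed beyond the standard binomial mean and standard deviation):

```python
import math

n = 8_000_000_000   # everyone on earth takes the 1-in-a-million gamble
p = 1e-6            # each person's chance of ending up with experience X

mean = n * p                      # expected number of people who experience X
sd = math.sqrt(n * p * (1 - p))   # spread of that count across possible worlds

print(mean)   # ~8000
print(sd)     # ~89, so the realised count is almost certainly within a few hundred of 8,000
```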
Something seems wrong here!
Thanks for the reply!
On the grouping issue, suppose we take your suggestion of grouping all "decisions that might kill you" into one, and suppose that everyone on earth follows this policy. Suppose also that there is some precaution against painful death (like seatbelt wearing) that everyone decides not to take, in order to gain some trivial benefit. Suppose that, integrating over their own life, this decision makes sense for each of them, because the lifetime risk is still below their discount threshold, whereas on expected value terms it does not.
It might then be the case that, if everyone follows this policy independently, then globally, across billions of people, we would still expect thousands of people to die avoidable painful deaths. Which seems bad!
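To put made-up numbers on this (the lifetime risk figure is purely hypothetical, chosen only to sit just under the threshold):

```python
threshold = 1e-6       # each person's discount threshold
lifetime_risk = 8e-7   # hypothetical lifetime risk of a painful death from never wearing a seatbelt
population = 8_000_000_000

# Each individual, integrating over their own life, discounts the risk entirely...
print(lifetime_risk < threshold)    # True

# ...but globally we would still expect thousands of avoidable painful deaths.
print(population * lifetime_risk)   # ~6400
```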
This seems like a strong case for integrating not just over "decisions that might kill you", but over "decisions that anyone takes that might kill them"... and through similar appeals I think you could imagine extending that to "decisions anyone takes that might cause any benefit/harm to any sentient being". At which point, in a big universe, have you not just arrived back at expected utility theory again?
This is an interesting approach!
I'm still a bit confused about exactly how to apply this method in practice though. If I am understanding it correctly: if someone knows they will only drive 1 mile in their entire life, you would say that wearing a seatbelt is the wrong thing for them to do? Whereas if they know that they will drive 500 miles, then wearing a seatbelt for those 500 miles might make sense?
But what if they are in a situation where they do not know how long their journeys are going to be? They are taking 1 car journey in their life, and maybe the car will stop after 1 mile, or maybe after 1000. Maybe they have some subjective probability distribution over these possible journey lengths. How do they make their decision in this situation? I'd be interested to see a worked example here!
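To make the kind of case I mean concrete, here is a toy version (every number is made up, and I've assumed the risk just scales linearly with mileage to keep things simple):

```python
threshold = 1e-6        # the agent's probability discount threshold
risk_per_mile = 5e-9    # hypothetical chance per mile of a fatal crash when unbelted

# One journey in their life, with 50/50 subjective uncertainty about its length.
journeys = {1: 0.5, 1000: 0.5}   # miles -> subjective probability

# Reading 1: apply the threshold to the overall probability of death from this decision.
overall = sum(prob * miles * risk_per_mile for miles, prob in journeys.items())
print(overall, overall < threshold)   # ~2.5e-06, False -> above the threshold, so buckle up?

# Reading 2: apply the threshold separately within each possible journey length.
for miles, prob in journeys.items():
    risk = miles * risk_per_mile
    print(miles, risk, risk < threshold)   # 1 mile: below; 1000 miles: above. Now what?
```

These two readings disagree, and I don't know which one (if either) your approach recommends.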
I'm also still confused about how you decide on the groupings in practice. If I know that I will travel 250 miles by car in my life, and 250 miles by bike, and each risk is below the discount threshold on its own, does that mean I should wear neither a seatbelt nor a bike helmet? Or should I wear both, if the combined risk of driving and cycling together is enough to cross the threshold? Should I treat seatbelts/bike-helmets as one decision or as two separate ones?
If I treat them as two separate decisions, then this feels arbitrary (why not split the seatbelt decision into driving on main roads vs driving on side roads, to push the risk under the threshold when it previously would have been over?), but if I treat them together, then it feels like my actual discount threshold in practical situations is going to become far smaller than the one I decide on a priori (since I make a lot of decisions in my life!).
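Again with made-up numbers, here is the sort of thing I have in mind:

```python
threshold = 1e-6
car_risk = 6e-7    # hypothetical lifetime risk from 250 unbelted car miles
bike_risk = 6e-7   # hypothetical lifetime risk from 250 helmetless bike miles

# Treated as two separate decisions, each risk falls under the threshold and is discounted...
print(car_risk < threshold, bike_risk < threshold)   # True True

# ...treated as one grouped decision, it is not.
print(car_risk + bike_risk < threshold)              # False

# And across many such decisions, the a priori threshold effectively shrinks:
# n sub-threshold risks of size p aggregate to roughly n * p.
n_decisions = 1000
print(n_decisions * 6e-7)   # 6e-4, far above the original threshold
```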
Having thought this through some more, I've realised I'm wrong, sorry!
 
Person A shouldn't say that the probability of extinction halves each century, but they can say that it will decay as 1/N, and that will still lead to an enormous future without them ever seeming implausibly overconfident.
A 1/N decay in extinction risk per century (conditional on making it that far) implies an O(1/N) chance of surviving at least N centuries, which in turn implies an O(1/N^2) chance of going extinct in the Nth century (unconditionally). If we then assume that the value of a future in which extinction happens in the Nth century is at least proportional to N (a modest assumption), the value of the future is a sum of terms that decay no faster than 1/N, so the sum diverges, and we get a future with infinite expected value.
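Spelling the steps out, and taking the conditional extinction probability in century $N$ to be exactly $1/N$ for $N \ge 2$ (purely for concreteness; the first century's hazard doesn't matter for the asymptotics):

$$P(\text{survive at least } N \text{ centuries}) \;\propto\; \prod_{k=2}^{N}\left(1-\frac{1}{k}\right) \;=\; \frac{1}{N},$$

$$P(\text{extinct in century } N) \;\approx\; \frac{1}{N-1}\cdot\frac{1}{N} \;=\; O\!\left(\frac{1}{N^{2}}\right),$$

and if a future that ends in century $N$ is worth at least $v \cdot N$, then

$$\mathbb{E}[\text{value}] \;\gtrsim\; \sum_{N} v N \cdot O\!\left(\frac{1}{N^{2}}\right) \;=\; \sum_{N} O\!\left(\frac{1}{N}\right),$$

which diverges like the harmonic series.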
I think your original argument is right.
I still have separate reservations about allowing small chances of high stakes to infect our decision making like this, but I completely retract my original comment!
I was assuming in my example that the "Time of perils" that Person A believes we might be living through is over by the 50th century, so that the 50th century already falls within the period where extinction risk is supposed to have become very low.
But suppose Person A adopts your alternative probabilities instead. Person A now believes in a 1/1000 chance of going extinct in the 50th century, conditional on reaching it, and then the probability halves in each century after that.
But if that's what they believe, you can now just run my argument on the 100th century instead. Person A now proposes a probability of ~10^(-18) of going extinct in the 100th century (conditional on reaching it), which seems implausibly overconfident to me on the face of it!
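For the arithmetic behind that number: starting from 1/1000 in the 50th century and halving for each of the 50 centuries after that gives

$$\frac{1}{1000} \times \left(\frac{1}{2}\right)^{50} \;\approx\; 9 \times 10^{-19} \;\approx\; 10^{-18}.$$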
I agree with you that, if we were considering the 50 millionth century, then a probability of 1/1000 would be far too high. I agree that it would be crazy to stipulate a probability for the Nth century that is much higher than 1/N, because surviving N centuries is evidence that the typical extinction risk per century is lower than this (except maybe if we were considering centuries close to the time the sun is expected to die...?)
But my point is that in order to get a truly big future, with the kind of stakes that dominate our expected value calculations, we need the probability of extinction to decay much faster than 1/N. We need the "Time of Perils" hypothesis. It needs to decay exponentially* (something like the halving that you've suggested). And before too long that exponential decay is going to lead to implausibly low probabilities of extinction.
*Edit: Actually, I'm not too confident in this claim now that I think it through some more. Perhaps you can still get a very large future with sub-exponential decay. Maybe this is another way out for Person A, in fact!
Edit: I no longer endorse this. The important point I was missing was that Person A's probability of extinction per century only needs to decay as 1/N in order for the value of the future to remain enormous, and a 1/N decay is not implausibly overconfident.
You are saying that we do not need to assign high probability to the "time of perils" hypothesis in order to get high stakes. We only need to assign it non-vanishing probability. And assigning it vanishing probability would appear to be implausibly overconfident.
But I'm not sure this works, because I think it is impossible to avoid assigning vanishingly small probability to some outcome. If you just frame the question differently, you can reverse which position appears to be the overconfident one.
Suppose you ask two people what credence they each have in the "time of perils" hypothesis. Person A replies with 10%, and Person B replies with 10^(-20). Person B sounds wildly overconfident.
But now ask each of them what the probability is that humanity (or humanity's descendants/creations) will go extinct in the 50th century, conditional on surviving until that point. Person B may respond in many different ways. Maybe they say 1/1000. But Person A is now committed to giving a vanishingly small answer to this question, in order to be consistent with their 10% credence in "time of perils". Now it is Person A who sounds overconfident!
Person A is committed to this, because Person A places a non-vanishing probability on the future being very large. But the probability of making it to the far future is just the product of the probabilities of making it through each century along the way (conditional on surviving the centuries prior). For there to be a non-vanishing probability of a large future, most of these probabilities must be extremely close to 1. Does that not also seem overconfident?
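To put a rough number on "extremely close to 1": if Person A assigns probability at least $q$ to surviving $M$ centuries, and the conditional extinction probability in century $k$ is $p_k$, then (using $1 - p \le e^{-p}$)

$$\prod_{k=1}^{M}\left(1-p_k\right) \;\ge\; q \quad\Rightarrow\quad \sum_{k=1}^{M} p_k \;\le\; \ln(1/q),$$

so the average conditional extinction probability per century can be at most $\ln(1/q)/M$. With $q = 0.1$ and, say, $M = 5 \times 10^{7}$ centuries (a number picked purely for illustration), that's an average of roughly $5 \times 10^{-8}$ per century.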
I don't think this example tells us which of Person A or Person B is doing the right thing, but I think it shows that we can't decide between them with an argument of the form: "this person's view is implausible because they are assigning vanishingly small probability to something that seems, on the face of it, credible".
Maybe the two readings you describe can both be correct at the same time, and even complement each other?
Perhaps the point being made is: we find the initially described utopia hard to believe because we are in a situation similar to Omelas, where our pleasures depend on someone else's misery. So when someone tries to have us believe that true utopia is possible, we reject it, because facing up to its possibility would force us to confront our guilt about our current situation.
This is a fascinating read!
In the paper you discuss how your approach to infinite utilities violates the continuity axiom of expected utility theory. But in my understanding, the continuity axiom (together with the other VNM axioms) provides the justification for why we should be trying to calculate expectation values in the first place. If we don't believe in those axioms, then we don't care about the VNM theorem, so why should we worry about expected utility at all (hyperreal or not)?
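(For reference, the continuity axiom as I understand it: if $A \succ B \succ C$, then there exist mixing probabilities $p, q \in (0,1)$ such that

$$pA + (1-p)C \;\succ\; B \;\succ\; qA + (1-q)C.$$

If I'm thinking about it right, an outcome of infinite value seems to break the second comparison: any non-zero weight on $A$ would already beat $B$.)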
Is it possible to write down an alternative set of plausible axioms under which expected hyperreal utility maximization can be shown to be the unique rational way to make decisions? Is there a hyperreal analogue of the VNM theorem?
It seems very strange to me to treat reducing someone else's chance of X differently to reducing your own (if you're confident it would affect each of you similarly)! But thank you for engaging with these questions, it's helping me understand your position better, I think.
By 'collapsing back to expected utility theory' I only meant that if you consider a large enough reference class of similar decisions, it seems like it will in practice be the same as acting as if you had an extremely low discount threshold? But it sounds like I may just not have understood the original approach well enough.
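Roughly what I had in mind, if I'm understanding the grouping correctly: write $N$ for the number of similar decisions in the reference class and $t$ for the threshold applied to the class as a whole, with each decision carrying a risk of around $p$. Then

$$\text{the risk is discounted} \iff N p < t \iff p < \frac{t}{N},$$

so for large $N$ this looks to me just like adopting a per-decision threshold of $t/N$, which shrinks towards zero (and towards plain expected value reasoning) as the reference class grows.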