Richard Y Chappell🔸

Associate Professor of Philosophy @ University of Miami
7224 karma · Joined · Working (6-15 years) · South Miami, FL 33146, USA
www.goodthoughts.blog/
Interests:
Bioethics

Bio


Academic philosopher, co-editor of utilitarianism.net, writes goodthoughts.blog

🔸10% Pledge #54 with GivingWhatWeCan.org

Comments (467)

I'm not seeing the barrier to Person A's thinking there's a 1/1000 chance, conditional on reaching the 50th century, of going extinct in that century. We could easily expect to survive 50 centuries at that rate, and then have the risk consistently decay (halving each century, or something like that) beyond that point, right?
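(A rough back-of-the-envelope check, with the decay schedule filled in just as I'm imagining it rather than as anything specified above: at a constant 1/1000-per-century risk, the chance of surviving the first 50 centuries is $(1 - 1/1000)^{50} \approx e^{-0.05} \approx 0.95$. If the risk then halves each century from century 51 onward, the extra extinction probability accumulated over all remaining time is at most $\sum_{k \ge 1} \tfrac{1}{1000} \cdot 2^{-k} = \tfrac{1}{1000}$, so the all-time survival probability stays right around 95%.)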

If you instead mean to invoke, say, the 50 millionth century, then I'd think it's crazy on its face to suddenly expect a 1/1000 chance of extinction after surviving so long. That would no longer "seem, on the face of it, credible".

Am I missing something?

Thanks, yeah, I like your point there that "false negatives are costlier than false positives in this case", and so even <50% credence can warrant significant action. (I wouldn't literally say we should "act as if 3H is true" in all respects—as per Nuno's comment, uncertainty may justify some compounding "patient philanthropy", which could have high stakes if the hinge comes later. But that's a minor quibble: I take myself to be broadly in agreement with your larger gist.)

My main puzzlement there is how you could think that you ought to perform an act that you simultaneously ought to hope that you fail to perform, subsequently (and predictably) regret performing, etc. (I assume here that all-things-considered preferences are not cognitively isolated, but have implications for other attitudes like hope and regret.) It seems like there's a kind of incoherence in that combination of attitudes, one that undermines the normative authority of the original "ought" claim. We should expect genuinely authoritative oughts to be more wholeheartedly endorsable.

Right, so one crucial clarification is that we're talking about act-inclusive states of affairs, not mere "outcomes" considered in abstraction from how they were brought about. Deontologists certainly don't think that we can get far by thinking merely about the latter, but if they assess an action positively then it seems natural enough to take them to be committed to the desirability of the action's actually being performed (all things considered, including what follows from it). I've written about this more in Deontology and Preferability. A key passage:

If you think that other things besides impartial value (e.g. deontic constraints) truly matter, then you presumably think that moral agents ought to care about more than just impartial value, and thus sometimes should prefer a less-valuable outcome over a more-valuable one, on the basis of these further considerations. Deontologists are free to have, and to recommend, deontologically-flavored preferences. The basic concept of preferability is theory-neutral on its face, begging no questions.

Thanks! You might like my post 'Axiology, Deontics, and the Telic Question', which suggests a reframing of ethical theory that avoids the common error. (In short: distinguish ideal preferability from instrumental reasoning / decision theory, rather than axiology from deontics.)

I wonder if it might also help address Mogensen's challenge. Full aggregation seems plausibly true of preferability, not just axiology. But then, given principles of instrumental rationality linking reasons for preference/desire to reasons for action, it's hard to see how full aggregation couldn't also be true with regard to choiceworthiness. (But maybe he'd deny my initial claim about preferability?)

To be clear: I'd be excited for more people to look into these claims! Seems worth investigating. But it's not my comparative advantage.

Sorry, I don't think I have relevant expertise to assess such empirical claims (which is why I focus more on hypotheticals). It would certainly be convenient if helping people turned out to be the best way to also reduce non-human suffering! And it could be true (I don't take convenience to be an automatic debunker or anything). I just have no idea.

Thanks for your reply! Working backwards...

On your last point, I'm fully on board with strictly decoupling intrinsic vs instrumental questions (see, e.g., my post distinguishing telic vs decision-theoretic questions). Rather, it seems we just have very different views about what telic ends or priorities are plausible. I give ~zero credence to pro-annihilationist views on which the world's ending is preferable to any (even broadly utopian) future that includes severe suffering as a component. Such pro-annihilationist lexicality strikes me as a non-starter, at the most intrinsic/fundamental/principled levels. By contrast, I could imagine some more complex variable-value/threshold approach to lexicality turning out to have at least some credibility (even if I'm overall more inclined to think that the sorts of intuitions you're drawing upon are better captured at the "instrumental heuristic" level).

On moral uncertainty: I agree that bargaining-style approaches seem better than "maximizing expected choiceworthiness" approaches. But then, if you have over 50% credence in a pro-annihilationist view, it seems like that view is going to straightforwardly win out under majority rule when it comes to determining your all-things-considered preference regarding the prospect of annihilation.

Re: uncompensable monster: It isn't true that "orthodox utilitarianism also endorses this in principle", because a key part of the case description was "no matter what else happens to anyone else". Orthodox consequentialism allows that any good or bad can be outweighed by what happens to others (assuming strictly finite values). No one person or interest can ever claim to settle what should be done no matter what happens to others. It's strictly anti-absolutist in this sense, and I think that's a theoretically plausible and desirable property that your view is missing.

Another way to flip the 'force' issue would be: "suppose a society unanimously concludes, including via some extremely deliberative process (one that predicts and includes the preferences of potential future people), that annihilation is good and desired. Should some outside observer forcibly prevent them from taking action to this end (assume that the observer is interested purely in ethics and doesn't care about their own existence or have valenced experience)?"

I don't think it's helpful to focus on external agents imposing their will on others, because that's going to trigger all kinds of instrumental heuristic norms against that sort of thing. Similarly, one might have some concerns about there being some moral cost to the future not going how humanity collectively wants it to. Better to just consider natural causes, and/or comparisons of alternative possible societal preferences. Here are some possible futures:

(A) Society unanimously endorses your view and agrees that, even though their future looks positive in traditional utilitarian terms, annihilation would be preferable.

  • (A1): A quantum-freak black hole then envelops the Earth without anyone suffering (or even noticing).
  • (A2): After the present generations stop reproducing and go extinct, a freak accident in a biolab creates new human beings who go on to repopulate the Earth (creating a future similar to the positive-but-imperfect one that previous generations had anticipated but rejected).

(B) Society unanimously endorses my view and agrees that, even though existence entails some severe suffering, it is compensable and the future overall looks extremely bright.

  • (B1): A quantum-freak black hole then envelops the Earth without anyone suffering (or even noticing).
  • (B2): The broadly-utopian (but imperfect) future unfolds as anticipated.

Intuitively: B2 > A2 > A1 > B1.

I think it would be extremely strange to think that B1 > B2, or that A1 > B2. In fact, I think those verdicts are instantly disqualifying: any view yielding those verdicts deserves near-zero credence.

(I think A1 is broadly similar to, though admittedly not quite as bad as, a scenario C1 in which everyone decides that they deserve to suffer and should be tortured to death, and then some very painful natural disaster occurs which basically tortures everyone to death. It would be even worse if people didn't want it, but wanting it doesn't make it good.)

Regarding the "world-destruction" reductio:

this isn't strong evidence against the underlying truth of suffering-focused views. Consider scenarios where the only options are (1) a thousand people tortured forever with no positive wellbeing whatsoever or (2) painless annihilation of all sentience. Annihilation seems obviously preferable.

I agree that it's obviously true that annihilation is preferable to some outcomes. I understand the objection as being more specific, targeting claims like: 

(Ideal): annihilation is ideally desirable in the sense that it's better (in expectation) than any other remotely realistic alternative, including <detail broadly utopian vision here>. (After all, continued existence always has some chance of resulting in some uncompensable suffering at some point.)

or

(Uncompensable Monster): one being's undergoing uncompensable suffering at any point in history suffices to render the entire universe net-negative, or undesirable on net, no matter what else happens to anyone else. We must all (when judging from an impartial point of view) regret the totality of existence.

These strike me as extremely incredible claims, and I don't think that most of the proposed "moderating factors" do much to soften the blow.

I grant your "virtual impossibility" point that annihilation is not really an available option (to us, at least; future SAI might be another matter). But the objection is to the plausibility of the in-principle verdicts entailed here, much as I would object to an account of the harm of death that implies it would do no harm to kill me in my sleep (an objection whose force would not be undermined by my actually being invincible).

Moral uncertainty might help if it resulted in the verdict that you, all things considered, should prefer positive-utilitarian futures (no matter their uncompensable suffering) over annihilation. But I'm not quite sure how moral uncertainty could deliver that verdict if you really regard the suffering as uncompensable. How could a lower degree of credence in ordinary positive goods rationally outweigh a higher degree of credence in uncompensable bads? It seems like you'd instead need to give enough credence to something even worse: e.g. violating an extreme deontic constraint against annihilation. But that's very hard to credit, given the above-quoted case where annihilation is "obviously preferable".
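To make the credence worry concrete with a toy expectational sketch (the symbols here are my own schematic choices, not anything from the exchange): give credence $p$ to a view on which the suffering is uncompensable, modelled as a disvalue $-L$ that lexically dominates every finite good, and credence $1-p$ to an ordinary positive-utilitarian view assigning finite net value $+G$ to the future. Then the expected choiceworthiness of continued existence is

$$p \cdot (-L) + (1-p) \cdot G,$$

which falls below annihilation's zero for any $p > 0$, however small $p$ and however large $G$. On a straightforwardly expectational treatment, then, no amount of credence in ordinary positive goods can outweigh even a sliver of credence in uncompensable bads; flipping the verdict would seem to require something else, like the deontic constraint just mentioned.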

The "irreversibility" consideration does seem stronger here, but I think ultimately rests on a much more radical form of moral uncertainty: it's not just that you should give some (minority) weight to other views, but that you should give significant weight to the possibility that a more ideally rational agent would give almost no weight to such a pro-annihilationist view as this. Some kind of anti-hubris norm along these lines should probably take priority over all of our first-order views. I'm not sure what the best full development of the idea would look like, though. (It seems pretty different from ordinary treatments of moral uncertainty!) Pointers to related discussion would be welcome!

I think a more promising form of suffering-focused ethics would explore some form of "variable value" approach, which avoids annihilationism in principle by allowing harms to be compensated (by sufficient benefits) when the alternative is no population at all, but which holds that, beyond certain basic thresholds, various harms become specifically uncompensable by any extra benefits. I'm not sure whether a view of this structure could be made to work, but it seems more worth exploring than pro-annihilationist principles.
