
Abstract

Objections to longtermism often focus on issues like fanaticism, discounting, or classic reasons to doubt the tractability of positively influencing the far future. I argue that another challenge has been underdiscussed relative to those—namely, that posed by unawareness: many of the long-term possibilities most relevant to our actions are unknown to us. I develop this challenge by introducing the notion of determinative unawareness, where unknown outcomes are decisive for whether an action is overall positive or negative. Greaves and MacAskill’s responses—treating unawareness as ordinary uncertainty or appealing to catchall states—do not, by themselves, suffice, as it is far from clear how they can avoid arbitrariness. Importantly, I argue that existing cases outlining the relative robustness of some longtermist strategies, such as option value maximization, do not ultimately succeed in sidestepping determinative unawareness. I then discuss two broad ways one could relax the normative views supporting the longtermist thesis to rescue some version of longtermism from the grip of unawareness, and suggest that they face serious limitations.

1. Introduction

Longtermism, the view that the impact of our actions on the far future is of particular moral importance (MacAskill 2022; Ord 2020; Greaves & MacAskill 2025),[1] has attracted a wide variety of criticisms (Greaves & MacAskill 2025). Prominent examples include arguments for discounting, in some manner, the interests of future beings (see Mogensen 2022; Setiya 2014; Nordhaus 2007; 2008; Curran 2025; Riedener 2025), classic intractability objections (see Thorstad 2025a, §4; Tarsney 2023), the objection from fanaticism (see Bottomley & Williamson 2025; Tarsney 2020; Beckstead 2013, §6)[2], and that from infinite paralysis (see Tarsney & Wilkinson 2025). Alongside such critiques, the academic literature also contains numerous defenses of the longtermist thesis. Both critiques and defenses appear in Greaves et al.’s (2025) recent collection, Essays on Longtermism. The most salient assumptions behind the longtermist view all seem to have been well discussed at this point.

Taking stock, here are six claims we might or might not agree on about longtermism:[3] 

  1. Even if one assumes the world may plausibly end very soon, such that there may not be any long-term future,[4] the moral patients that might exist in the far future still likely far outnumber those in the present and near future in expectation[5] (see Greaves & MacAskill 2025, §§3-4; MacAskill 2022; Ord 2020; Beckstead 2019; 2013; Bostrom 2013; 2003).[6]

  2. Because far-future moral patients might not even exist (e.g., human extinction would prevent the existence of further human descendants), their interests should, in practice, be discounted relative to those of beings who do or will certainly live (see, e.g., Stern 2007; 2008; Thorstad 2023; 2024). However, pure time discounting, i.e., assigning less intrinsic importance to the interests of future beings, conditional on their existence, for the sole reason that they will not exist before a certain point in the future, seems highly unprincipled and is rejected by many economists and, especially, philosophers (for an overview, see Greaves & MacAskill 2025, §6). And, given the first claim above, i.e., the immense potential size of the far future, the latter kind of discounting is what most experts believe is required to undermine the “astronomical stakes” premise of longtermism.
  3. While some actions one might take have only morally relevant effects that do not last, like ripples on a pond (Moore [1903] 2005; Smart & Williams 1973; Thorstad 2025a, §4; Bernard 2023; and Grilo’s informal extrapolation of Bernard’s results in this 2025 comment thread), others do have immense long-run effects, whether intentionally or not (Thorstad 2025a, §7; Beckstead 2013, pp. 3-8; Greaves & MacAskill 2025, §4; Lenman 2000, §III; Greaves & Tarsney 2025; Schmidt & Barrett 2025; my informal response to Grilo’s extrapolation of Bernard’s results in this 2025 comment thread). The importance of long-term effects often seems to swamp that of near-term ones, even in seemingly neartermist decision contexts like whether to donate to the Against Malaria Foundation or the Make a Wish Foundation, due to second-order effects on, e.g., population sizes (Mogensen 2021; Kollin et al. 2025, §1; Greaves 2016). Hence, the morally significant effects of at least some actions do not seem to “wash out” ex post.
  4. Forming an action-guiding belief about whether a given intervention does more good than harm, considering all the long-run consequences we are aware of, requires judgment calls that appear somewhat arbitrary (Greaves 2016; Mogensen 2021; Tarsney 2023; Kollin et al. 2025, §§2 and 7). This may lead to the classic version[7] of the problem of complex cluelessness as defined by Greaves (2016).[8] Nevertheless, longtermists may hold that the arbitrariness of such judgment calls is only partial, such that it does not in fact give rise to complex cluelessness (see Greaves & MacAskill 2025, §7.3; Lewis 2021).

  5. To the extent that endorsing longtermism requires fanaticism (i.e., accepting that tiny probabilities of astronomical outcomes dominate the calculus),[9] which is unappealing to many, the alternatives to fanaticism seem like (even) harder bullets to bite (Wilkinson 2022; Tarsney 2025; Beckstead & Thomas 2024; Russell 2024).

  6. Longtermists find ways to overcome infinite paralysis, at least in practice, that involve neither pure time discounting nor rejecting fanaticism (for a fairly representative overview, see Tarsney & Wilkinson 2025; Ord 2025b; Askell 2018). These ways sometimes remain imprecise and might not have been defended as rigorously as responses to other objections to longtermism. Nonetheless, they are so widely accepted that the issue of how to deal with infinite cases is not even acknowledged in Greaves & MacAskill’s (2025) and MacAskill’s (2022) lists of challenges longtermism faces.

One may not endorse all the above conclusions. However, I believe that discussions of these six potential problems with longtermism may have (nearly) reached the point of diminishing returns, making it difficult to advance the debate on these issues. More particularly, I believe that there is a seventh problem that, although it has received far less attention, remains the deepest challenge to longtermism: the problem of (outcome)[10] unawareness.

When forming an opinion on the long-term expected value (positive or negative) of a given act, we know there are relevant possibilities we are not entertaining or cannot (fully) comprehend.[11] Yet it seems likely that at least one of these unknown possibilities turns out to be crucial, in the sense that factoring it in once we become aware of it would flip the sign of the expected value calculation (Bostrom 2007; 2014c, pp. 354–355; 2014a; Tomasik 2015). The tractability objection and the “epistemic challenge to longtermism”[12] (Tarsney 2023) thereby take on a deeper and more fundamental form. The problem is not merely that the conflicting possibilities we are aware of may leave us clueless, but that we likely are not even aware of the most significant long-run effects of our actions (Roussos 2021, slide 12). This makes outcome unawareness a serious challenge for longtermists that has, as yet, received very little attention. The problem has been articulated only very recently and outside of academic journals, by Roussos (2021, slides),[13] Thorstad (2025b), and DiGiovanni (2025).[14] And this work has so far only fostered the limited discussions offered by Tarsney et al. (2024, §3), The Global Priorities Institute (2024, §§1.2.1 and 4.2.1), Greaves and MacAskill (2025, §7.2), and Kollin et al. (2025, §§2 and 7).[15]

In the rest of this essay, I first build upon these rare prior discussions to help conceptualize the challenge that outcome unawareness constitutes for longtermism, introducing the concept of determinative unawareness and making very explicit how Greaves and MacAskill’s attempts at solving the problem remain incomplete—though they deserve credit for bringing unawareness onto the longtermist agenda and proposing responses. This is the scope of §2.[16] Second, §3 investigates other, potentially more promising ways of rescuing longtermism—from strategies that aim to be robust to unawareness, to views that relax expected value maximization or impartial consequentialism. While none may prove fully convincing yet, they frame the space of possible responses. Finally, in §4, I draw together the takeaways to assess what unawareness means for the future of research on longtermism and global priorities.

2. The (determinative) unawareness challenge to longtermism

Lola is considering whether to donate to a campaign promoting large-scale solar geoengineering as a response to climate change.[17] If she knew the most relevant potential climatic and socio-political effects of her (in)action in the next decade or century,[18] she would not, as a longtermist, want to make her decision based on this alone. The next hundred years are only a tiny portion of what she cares about. But that is not even the only challenge. If she fully knew the most crucial potential effects her donation would have on climate change in the very long run, she would not want to decide based on this alone, either. She has to factor in the consequences for countless moral subjects in the long run, including consequences of her donation other than whether it would overall reduce long-term climate change. Yet she is—and will likely always remain—unaware of many of these consequences. In truth, applying the observations made by Bostrom (2007; 2014c, pp. 354–355; 2014a), Tomasik (2015), Steele and Stefánsson (2021, pp. 87–90, 93–94), Roussos (2021, slides), Thorstad (2025b), and DiGiovanni (2025) to her case suggests that, not only is she “unaware of many of these consequences”, she is likely unaware of the most decisive ones. The overall longtermist desirability of large-scale solar geoengineering overwhelmingly depends on second-order unintended effects, such as its effects on the severity and likelihood of various possible existential catastrophes.[19] This itself depends on how Lola’s donation affects, through unknown systemic levers, the resilience of natural ecosystems, potential climate-driven resource wars, and different possible technological lock-ins, among many others. These, themselves, and all the other potentially crucial factors I omit, depend on causal chains so complex that one can scarcely hope to comprehend them. Call Lola’s situation one of determinative unawareness, where the desirability of her (in)action essentially depends on possibilities she (knows she)[20] is not or cannot be aware of.[21] This contrasts with cases where outcome unawareness is negligible, like in situations where maximizing good expected consequences is not the goal, such as when picking a nice gift for an old friend or treating one’s headache (DiGiovanni 2025, Chapter 1), or when only attempting to maximize good short-term consequences for a particular population, like GiveWell does (Roussos 2021, slide 11).

Importantly, Lola cannot assume that the unknown positive and negative outcomes “cancel out” ex ante, such that she could simply ignore them—see Grant & Quiggin (2013, §6.2), Bradley (2017, pp. 55–56), Steele and Stefánsson (2021, pp. 88–90, 117–118, and 127–128), and DiGiovanni’s (2025) discussion of “biased sampling” (Chapter 3), as well as his treatment of “symmetry” and “extrapolation” (Chapter 4). She has reasons to believe that unknown negative outcomes dominate positive ones—e.g., the history of predicting the effects of technological interventions suggests that unforeseen harms may tend to dominate unforeseen benefits (Grant and Quiggin 2013, §6.2; Steele and Stefánsson 2021, pp. 88–90, 117–118, and 127–128). Yet she also has reasons, of a quite different kind, to believe the exact opposite—e.g., an omission bias making her more likely to become aware of considerations that favor inaction over action. She cannot justifiably assume these reasons are of exactly equal weight. In other words, her determinative (outcome) unawareness gives rise to complex cluelessness. This implies that no matter how confident she might be about the overall sign of the known consequences of her donation,[22] this would not provide her with any action-guidance. This is what it means for unawareness to be determinative.[23]
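To make this concrete, here is a toy calculation (the numbers are mine and purely illustrative, not drawn from any actual estimate). Suppose the outcomes Lola is aware of yield an expected value of

$$\underbrace{0.6 \times (+100) + 0.4 \times (-50)}_{\text{outcomes she is aware of}} = +40$$

for her donation. Now suppose she becomes aware of one further outcome (say, some geoengineering-driven technological lock-in) to which she then assigns credence $0.2$ and value $-400$. Her old credences rescale by $0.8$ while their conditional expectation stays at $+40$, so her new assessment is

$$0.8 \times (+40) + 0.2 \times (-400) = -48.$$

A single overlooked possibility flips the sign of the whole calculation. Determinative unawareness is the predicament of having reason to expect that such possibilities exist while being unable to enumerate them.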

Greaves and MacAskill (2025, §7.2) give two arguments for why the problem illustrated above does not undermine the longtermist enterprise. I critically discuss these arguments, in turn, in the sub-sections that follow.

  2.1 Unawareness importantly differs from mere uncertainty

Greaves and MacAskill (2025, §7.2) write (bold emphasis is mine):

First, we know that we operate with coarse-grained models, and that the reasons for this include unawareness of some fine-grainings. Of course, failure to consider key fine-grainings might lead to different expected values and hence to different decisions, but this seems precisely analogous to the fact that failure to possess more information about which state in fact obtains similarly affects expected values (and hence decisions). Since our question is which actions are ex ante rational, both kinds of failure are beside the point.

I see two different ways one could support this argument:

  • a) There are longtermist decision problems where (unlike in Lola’s) unawareness is not determinative. Therefore, our potential inability to assign precise (or not-too-imprecise) utilities to unknown outcomes (see Steele & Stefánsson 2021; DiGiovanni 2025, Chapter 1) is unproblematic. It is not crucial for longtermist expected value calculations.
  • b) If unawareness is always determinative for the longtermist, we can still assign an expected utility to the consequences we are unaware of. This rescues expected value calculations.

Since I address (b) in §2.2, I focus on (a) in the rest of this sub-section. More specifically, I argue that the interventions longtermists consider most robust to uncertainty still give rise to determinative unawareness.

Greaves and MacAskill (2025, §4, §6, §7.3–7.4) allude to the maximization of option value through preventing undesirable lock-in scenarios for our civilization (e.g., its extinction or the establishment of some permanent totalitarian regime) or enabling our descendants (e.g., through research or resource investments). They portray option value maximization as the class of longtermist interventions that is most robust to uncertainty vis-à-vis long-term effects. This concurs with what has been defended in many other longtermist academic writings (see Beckstead 2013, p. 5; Thorstad & Mogensen 2020; MacAskill 2022, Chapters 1, 2, 9; Tarsney 2023; Greaves & Tarsney 2025; Askell & Neth 2025; Vallinder 2025; Ord 2020; Beckstead & Thomas 2024, p. 450; Parfit 2011, pp. 614–615; Powell 2025). But is option value maximization robust enough to avoid determinative unawareness?

Building on unpublished work from Anni Leskelä, DiGiovanni (2025, Chapter 3) differentiates between outcome robustness and implementation robustness.[24] Option value maximization is outcome robust if the option value positively correlates with better futures, all things considered (including unknown consequences of maximizing option value). A specific intervention aimed at preserving or increasing option value is implementation-robust if it actually preserves or increases option value in expectation, all things considered (including unknown consequences of the intervention).

Firstly, let’s call into question the supposed outcome robustness of option value maximization. For the longtermist, such an endeavor is only beneficial under the assumptions that future stakeholders will i) positively influence the long-term future, and ii) do so in a way that makes up for the potential harm incidentally caused by our option-value-maximizing interventions[25]. While these conditions might at first seem easier to meet than those required for other longtermist interventions to be worthwhile, it is far from clear they are satisfied. Let’s follow Greaves and MacAskill (2025) in taking the reduction of existential risks to our civilization[26] (X-risk reduction hereafter) as our leading example of option value maximization (via mere option value preservation, in this case).[27] Longtermists ought to assess the desirability of X-risk reduction by evaluating (to a precise-enough degree) a large panel of complex parameters. These include inevitably controversial assumptions regarding, among many others, the expected moral value of our civilization’s long-term survival and the crucial considerations raised by potential aliens and acausal reasoning.

These parameters give rise to extreme degrees of unawareness. There may also be crucial parameters we have missed altogether. Our understanding of the relevant mechanisms at play is extremely limited (DiGiovanni 2025, Chapter 2). One can still, despite all this, try to justify judgment calls that estimate X-risk reduction to be “better than nothing” from a longtermist perspective,[29] but this goes nowhere near proving that it escapes determinative unawareness and is robustly good.[30]

One response Greaves and MacAskill (2025, §6), as well as Kruus (2025, §6) and Rulli (2024), give to this concern is that those who do not take X-risk reduction to be robustly positive should remain longtermists and focus on preventing non-X-risk lock-in scenarios. This could be achieved via the strategies proposed by, e.g., MacAskill (2025), Browning and Veit (2025), Horta and Rozas (2024), O’Brien (2024), Baumann (2022), Vinding (2020, Chapter 14), and Sotala and Gloor (2017). However, one problem none of these authors address is that any intervention that reduces the probability of such lock-ins surely also substantially influences that of existential catastrophes. Therefore, our unawareness-driven cluelessness about the sign of X-risk reduction very plausibly infects these other longtermist interventions too—just as cluelessness about indirect effects on existential catastrophes contaminates Lola’s decision about solar geoengineering (see the introduction of §2).[31]

At this stage, one might argue that there still is the option of enabling our descendants, e.g., through research or resource investments—see Greaves and MacAskill’s (2025, §4.4) “meta options” proposal, Kitcher’s (2025) successive-deliberations idea, Bostrom’s (2007; 2014c, pp. 354–355; 2014a) case for “building good capacity”, and Tomasik’s (2015) discussion of “punting to the future”. But, as concisely pointed out in this comment from Eli Rose, cluelessness is also infectious here—see also DiGiovanni’s (2025, Chapter 4) discussion of “capacity-building” and this comment of mine. If a longtermist is paralyzed by unawareness concerning whether making our descendants more likely to exist (through X-risk reduction) does more good than harm, why would they believe that enabling these same descendants does any good?

Secondly, even assuming that option value maximization is outcome-robust, implementation robustness also has to be demonstrated. Schwitzgebel (2024) and Friederich (2025, §5) discuss a few X-risk reduction interventions endorsed by most longtermists, and challenge the view that these are more likely to lead to the intended outcome than to an overall increase in X-risks. Note that they do not even explicitly factor in unknown outcomes—which DiGiovanni (2025, Chapters 3-4) however does in his brief discussion of the implementation robustness of AI control and preventing AI takeover. The general challenge these three authors pose has not been given any satisfying response, as far as I am aware.

Thirdly, whether it is outcome or implementation robustness that is in question, we should arguably make pessimistic inductions based on longtermists’ track record of (re)discovering sign-flipping considerations (DiGiovanni 2025, Chapter 2; Bostrom 2007; 2014c, pp. 354–355; 2014a; Tomasik 2015). Greaves and MacAskill (2025, §7.2) themselves seem to agree, to a certain extent, when they propose the following analogy:

Consider, for example, would-be longtermists in the Middle Ages. It is plausible that the considerations most relevant to their decision—such as the benefits of science, and therefore the enormous value of efforts to help make the scientific and industrial revolutions happen sooner—would not have been on their radar. Rather, they might instead have backed attempts to spread Christianity, perhaps by violence: a putative route to value that, by our more enlightened lights today, looks wildly off the mark.

The suggestion, then, is that our current predicament is relevantly similar to that of our medieval would-be longtermists.

In sum, if there exist cases where our unawareness of some long-term outcomes is not determinative, the existence of such cases has yet to be demonstrated. However, Greaves and MacAskill’s (2025, §7.2) second argument suggests that longtermists can deal with determinative unawareness, which I turn to in §2.2.

  2.2 The challenge of assigning utilities to unknown long-term outcomes

Following a popular model of conscious unawareness in the literature on deep uncertainty, Greaves and MacAskill (2025, §7.2) suggest that an agent facing a dilemma like Lola’s (wondering whether to support large-scale solar geoengineering from a longtermist perspective) should assign a probability to a catchall state and a utility to the outcomes associated with it to cover all possibilities she is unaware of. This would make determinative unawareness unproblematic.
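In this model, the agent’s evaluation of an act $a$ takes the form (the notation here is mine, a minimal sketch rather than Greaves and MacAskill’s own formalism):

$$EU(a) = \sum_{i=1}^{n} p_i \, u(o_i) + p_C \, u_C,$$

where $o_1, \dots, o_n$ are the outcomes the agent is aware of, $p_C$ is the probability she assigns to the catchall state (“something I have not conceived of happens”), and $u_C$ is the utility she assigns to it. Under determinative unawareness, the sign of $EU(a)$ hinges on $u_C$, so everything turns on whether $u_C$ can be assigned non-arbitrarily.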

However, Greaves and MacAskill remain (like most scholars who have discussed this catchall model) silent on how such an agent is supposed to evaluate the desirability of unknown outcomes, i.e., how she is supposed to assign a value to the catchall (Roussos 2021, slide 20). Therefore, they do not provide any support for their claim that “conceptualising parts of this state in more explicit terms might change some expected-value assessments, but [...] does nothing to undermine the ex ante rationality of decisions taken on the basis of one’s existing assessments”. These existing assessments presumably do not adequately account for unknown outcomes (DiGiovanni 2025, Chapter 2); hence the need to explicitly model conscious unawareness in order to factor it in appropriately in the first place. Indeed, it is this key problem that remains unaddressed (DiGiovanni 2025, Chapter 3; Roussos 2021, slide 20; Thorstad 2025b, §4). In many (non-longtermist) decision problems that have been studied in the unawareness literature, any (non-severely-imprecise) utility[32] an unaware agent could assign to their catchall would seem unjustifiably arbitrary.[33] But are there decision problems for longtermists that could be exceptions to this general tendency? What are potential cases where one can assign a non-arbitrary yet action-guiding value to the catchall?

Steele & Stefánsson (2021, pp. 106–110 and 127–128; 2022) present Grant & Quiggin’s (2013) inductive reasoning proposal as a reliable method to reduce the indeterminacy of the catchall utility: by appealing to reference classes of similar past decision problems and extrapolating from how the unforeseen consequences in those cases turned out, agents could assign a plausible utility to the unknown outcomes in the case they presently face. In theory, this makes determinative unawareness unproblematic, by allowing agents to non-arbitrarily estimate the utility of the catchall. But, presumably, it is precisely because the longtermist’s reference class is so sparse that they face determinative unawareness in the first place. It is far from clear that the inductive reasoning proposal helps us estimate the utility of the catchall precisely when doing so is most crucial. One must either A) argue there are real-world decision problems where unawareness about long-term consequences is not determinative, or B) find an appropriate reference class (such that determinative unawareness is unproblematic). Since §2.1 already covers prospects for A, I focus on B in the remainder of the present sub-section.
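As a minimal sketch of how the proposal is supposed to work (my own rendering, with hypothetical numbers; nothing below comes from Grant & Quiggin’s formal model):

```python
# Toy rendering of the inductive reasoning proposal: estimate the catchall's
# utility from how unforeseen consequences actually turned out in a reference
# class of similar past decisions. All values are hypothetical placeholders.

# Realized utilities of the *unforeseen* consequences in four past decisions
# judged relevantly similar to the present one.
past_surprise_payoffs = [-2.0, 1.5, -0.5, 3.0]

# Extrapolate: take their average as the estimate of the catchall's utility.
catchall_estimate = sum(past_surprise_payoffs) / len(past_surprise_payoffs)
print(catchall_estimate)  # 0.5

# The estimate is only as good as the reference class. If no sufficiently
# similar past decisions exist (the longtermist's predicament, I argue),
# there is nothing to extrapolate from.
```

The sketch makes the dependence explicit: the method outputs a usable number only when an apt, non-empty reference class is available.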

      Interlude on imprecision and insensitivity to mild sweetening

Complex cluelessness vis-à-vis long-term outcomes we are unaware of does not mean one must assign a utility of 0 to the catchall. We are not in a situation where we have no evidence either way. Rather, we are in one where we have conflicting pieces of evidence in both directions, and no clearly principled way of determining what they sum to (see the introduction of §2) or of justifying the assumption that they exactly cancel out ex ante. This means the utility of the catchall is severely indeterminate, or imprecise, rather than precisely equal to 0 (DiGiovanni 2025, Chapter 2).

Consequently, adding a mild new piece of evidence to the pile does not make us any less clueless. This is what Schoenfield (2012) calls insensitivity to mild sweetening: when our credences are highly imprecise, due to asymmetric evidence pointing in conflicting directions, we are not justified in using a mild piece of evidence to update away from agnosticism.
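As a minimal sketch of this point (the interval representation and all numbers are mine, purely illustrative):

```python
# Toy model of a severely imprecise credence about the catchall's utility,
# represented as the interval of values compatible with our evidence.

def sign_is_determinate(lo: float, hi: float) -> bool:
    """The sign of the catchall's utility is settled only if the whole
    interval of admissible values lies on one side of zero."""
    return lo > 0 or hi < 0

# Conflicting evidence in both directions: the interval straddles zero.
catchall_utility = (-10.0, 8.0)
print(sign_is_determinate(*catchall_utility))  # False: clueless

# "Mild sweetening": a small favorable datum nudges both bounds up slightly,
# but the interval still straddles zero, so agnosticism is unmoved.
lo, hi = catchall_utility
print(sign_is_determinate(lo + 0.5, hi + 0.5))  # still False
```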

I cannot demonstrate, at least not within the scope of the present essay, that there exists no piece of evidence “sweet enough” for inductive reasoning to allow us to assign a utility to the catchall in longtermist decision problems. (I briefly return to this in §4’s fourth paragraph.) However, it is longtermists who bear the burden of showing that such evidence exists, and I argue that the arguments I scrutinize below do not succeed in showing this. (End of the interlude.)

 

One could gesture at past examples of historical figures who had a lasting impact on human society, or at forecasters making relatively “long-range” predictions that were successful. MacAskill (2022, Chapters 1–2) mentions the preserved legacies of Shakespeare and Thucydides, the Founding Fathers’ constitution, and Benjamin Franklin’s successful investments for the cities of Boston and Philadelphia. Winter et al. (2021, p. 17) give the examples of the long-lasting effects of Roman Law and the German criminal code of 1871. Kerslake & Wagg (2021) discuss George Peabody’s housing philanthropy and how it pioneered a form of long-term philanthropic investment. Supporters of long-range forecasting sometimes point to evidence from superforecasters making accurate geopolitical predictions several years out (see Mellers et al. 2015), population and economic projections that held up over decades (see Lutz et al. 1997), or climate models from the 1980s that closely matched subsequent temperature trends (see Hansen et al. 1981). However, no one has demonstrated that this data has non-trivial relevance when it comes to how to overall positively influence the very far future. If longtermists’ goal simply was to “leave a mark” on our civilization, or to predict some fairly simple, although far-out, geopolitical, climate, or societal events, no one would have ever seriously made tractability objections. Such goals are, evidently, fairly attainable. What is on a whole other level of difficulty is finding an action that predictably affects the long-term future in a morally desirable way, all things considered. Outcome unawareness threatens this latter goal far more than the former ones.[35] To illustrate this, see for instance §2.1 and its long list of crucial considerations (filled with unawareness) longtermists have to weigh when it comes to the desirability of X-risk reduction—or of any other longtermist enterprise that predictably affects X-risks (arguably all of them). Shakespeare, Benjamin Franklin, geopolitical forecasters, and the others did not have to deal with anything remotely as complex as this to successfully do what they did.

Alternatively, one can appeal, more directly,[36] to heuristics that did well in the past (see Thorstad & Mogensen 2020; Tomasik 2015; The Global Priorities Institute 2024, §§1.2.1 and 4.2.1; Grant & Quiggin 2013)[37], implicitly endorsing what DiGiovanni (2025, Chapter 4) calls meta-extrapolation:

The problem with [extrapolating from the possibilities we are aware of] was that strategies that work on the awareness set might not generalize to the catch-all. We can’t directly test how well strategies’ far-future performance generalizes, of course. But what if we use strategies that have successfully generalized under unawareness with respect to more local goals? We could look at historical (or simulated?) cases where people were unaware of considerations highly relevant to their goals (that we’re aware of), and see which strategies did best.

Such meta-extrapolation is well-illustrated by Tomasik (2015):

[I]magine an effective altruist in the year 1800 trying to optimize his positive impact. [...] What [he] might have guessed correctly would have been the importance of world peace, philosophical reflection, positive-sum social institutions, and wisdom. Promoting those in 1800 may have been close to the best thing this person could have done, and this suggests that these may remain among the best options for us today.

A lot more work would be needed for this to follow, however. Let’s grant, for the sake of argument, that such interventions in 1800 would have done more good than harm, considering their overall effects until today (although even this assumption is extremely questionable)[38]. This goes nowhere near proving that they would have done the same once we also factor in far-future effects that are still unknown to us today. Total welfare in the far future (or whatever else the longtermist cares about) could differ drastically from past patterns. Again, see for instance all the crucial considerations listed in §2.1, the overwhelming majority of which apply only to the far future. Relying on naïve extrapolation from historical evidence, without accounting for these structural differences and competing considerations, therefore offers little reason to think we are non-clueless (DiGiovanni 2025, Chapter 4).

It is technically true that we are, relative to this person from 1800, in a better position to know whether the initiatives they might have taken would have had net positive long-term effects, since we now are aware of some possible scenarios they were unaware of. However, the above may suggest that this is only true in the same way it is true that someone at the southern tip of Argentina is, relative to someone deep inside Antarctica, in a better starting position to swim all the way to Greenland. Because of imprecision and insensitivity to mild sweetening (see the Interlude), better all else equal does not mean good enough.

At this stage, the longtermist can still make judgment calls based on their “best guesses” to evaluate the catchall and, hence, whether a given intervention overall does more good than harm in the very long run. But the problem is that, given determinative unawareness, these judgment calls have unmatched levels of arbitrariness. The best guesses of unaware longtermists must factor in, among many others, all the plausibly crucial considerations that have to do with potential aliens and acausal reasoning, with all the crippling determinative unawareness surrounding these (§2.1). What reasons do we have to believe that such best guesses correlate with the truth any more than a coin toss?[39] Superforecasters generally make judgment calls that fare better than chance when attempting to predict geopolitical events. But, unlike someone who wants to cause more good than harm, all long-term effects on all moral patients considered, these superforecasters do not face determinative unawareness. (For more on these points, see DiGiovanni (2025, Chapter 2) and this other discussion of his). Their success cannot be taken as evidence that longtermists’ best guesses are truth-tracking. DiGiovanni (2025, Chapter 2) writes:

[T]he signal from [our longtermist] intuitions is drowned out, not by noise, but by systematic reasons why they might deviate from the truth.

A longtermist might disagree and keep trusting their unshakeable belief that, say, it would be terrible for every human to die. But for this bedrock belief to support longtermism, it needs to be grounded purely in an assessment of overall long-term implications. In fact, the present section implies it needs to be based on inductive reasoning to evaluate the overall sign of the far-future outcomes driving the longtermist’s determinative unawareness. And, as I have argued, longtermists have yet to show non-mildly sweet evidence (see the Interlude) that can allow such inductive reasoning. Otherwise, it is not clear how they can appropriately account for determinative unawareness in their judgment calls and, hence, why the longtermist should trust these any more than a coin toss.

3. More promising avenues for rescuing longtermism in the face of unawareness?

In the absence of a convincing response to the challenge outlined in §2, the unaware longtermist ought to suspend judgment about which course of action is best (DiGiovanni 2025, Chapters 3-4). Longtermism would provide no action-guidance (Kollin et al. 2025, §7). However, to the extent that we endorse the premises of the longtermist thesis other than the tractability one (see §1), this conclusion may seem unattractive. Is there any principle the longtermist can appeal to that somewhat rescues some version of their view, given that Greaves and MacAskill’s attempts appear incomplete? I believe answering this very question should be the top priority of academic research on longtermism.

The most straightforward solution would be to find longtermist strategies that are robust to unawareness (see Bostrom 2007; 2014c, pp. 354–355; 2014a; Tomasik 2015; Roussos 2021, slide 23). While option value maximization, discussed in §2.1, was supposed to be a contender, I have argued that we lack reasons to believe it is a plausible one. However, say we one day discover some important selection pressure against civilizations that will overall do more harm than good across their lifespan, such that these tend to go extinct before they reach the industrial age. Such a discovery would provide strong evidence that makes our unawareness about the desirability of X-risk reduction non-determinative. In this (totally hypothetical) scenario, the fact that our civilization made it past the industrial age would indicate that it likely will do more good than harm in the long run (otherwise, we should statistically assume it would not have made it thus far). In this case, no matter how unaware we are about the long-term trajectories of our civilization, this selection effect, if it existed, would give us a deeply grounded logical reason to believe X-risk reduction is desirable.[40] Our discovery of such a selection effect would have helped us circumvent the unawareness-driven challenge of finding outcome-robust longtermist interventions. Implementation robustness could then potentially be found thanks to a similar discovery,[41] providing us with unawareness-overthrowing evidence that some intervention reliably reduces X-risks, all things considered. The problem, of course, is that longtermists have yet to discover considerations that make some strategies robust to unawareness. The contenders proposed so far seem to fall short (§2.1). I briefly return to this in §4’s fourth paragraph.
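To illustrate how such a discovery would do the work (with made-up numbers): suppose the hypothetical selection pressure implied that civilizations that are net harmful across their lifespan rarely survive past the industrial age, say $P(\text{survives} \mid \text{net harmful}) = 0.1$, while $P(\text{survives} \mid \text{net good}) = 0.9$. Then, even starting from an agnostic prior $P(\text{net good}) = 0.5$, observing our own survival would yield

$$P(\text{net good} \mid \text{survives}) = \frac{0.9 \times 0.5}{0.9 \times 0.5 + 0.1 \times 0.5} = 0.9.$$

The update is driven by the structural fact alone, not by any attempt to enumerate the outcomes we are unaware of, which is precisely why it would bypass determinative unawareness.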

Another way to save longtermism, though, would be to weaken its foundations, in some sense. The most natural set of normative views leading to endorsing longtermism contains i) impartial consequentialism and ii) (explicit or implicit)[42] expected value maximization (DiGiovanni n.d.). What I argue in §2 is, essentially, that endorsing (i) appears to make any version of (ii) impossible, since determinative unawareness—and, hence, severely imprecise credences or otherwise indeterminate beliefs—quickly confronts anyone attempting to evaluate overall long-term consequences impartially. Therefore, finding action-guidance seems to require abandoning either (i) or (ii) and replacing it with another plausible normative view (DiGiovanni n.d.).

Kollin et al. (2025) have proposed an alternative to (ii) expected value maximization, which they call bracketing. The rough idea is to “bracket out” those consequences about which we are clueless and to compare actions only in terms of outcomes where we can make determinate judgments. (See the paper for details.) This avoids paralysis in cases where orthodox expected value reasoning, taking into account all long-term effects, fails to be action-guiding. It allows agents to act on the basis of the parts of the outcome space they can actually assess.[43] In doing so, bracketing, as a decision theory that can substitute for standard expected value maximization, might—if coupled with certain epistemic assumptions about the long-term future—offer a way of salvaging a modest form of impartial longtermist consequentialism: even if we cannot evaluate the overall long-run impact of our actions, we may still justify prioritizing interventions that predictably matter for a relevant subset of the long-term future.[44] This should not be confused with simply ignoring unawareness, e.g., by assuming positive and negative unknowns “cancel out” (see the introduction of §2). Bracketing still requires us to ask, for each domain, whether our unawareness is so deep that the outcomes cannot be treated as comparable at all.[45] For instance, one might initially judge that investing in AI safety research produces more value than, say, preventing the use and spread of non-human animals off-Earth (see Browning and Veit 2025, §2.1; Horta and Rozas 2024; O’Brien 2024, §§3–4; Vinding 2020, Chapter 14; Buhler n.d.-b), based on its potential to reduce existential risks. But if the causal pathways by which AI research influences the long-run future are sufficiently more opaque and subject to unknown possibilities, bracketing tells us not to treat this apparent advantage as decisive, but rather to withhold comparison on that dimension. In this way, bracketing incorporates unawareness into the decision procedure, rather than pretending it does not exist.
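As a minimal sketch of the decision rule (my simplified reading of Kollin et al. 2025, not their formal statement; the dimensions and numbers are hypothetical):

```python
# Toy sketch of "bracketing": compare two actions only along those
# consequence-dimensions where our evaluation is determinate, and bracket
# out the dimensions where determinative unawareness leaves us clueless.

from typing import Optional

# Our judgment of (value of action A minus value of action B) on each
# dimension, or None where the comparison is indeterminate for us.
comparison: dict[str, Optional[float]] = {
    "near-term benefits to identifiable patients": 3.0,
    "long-run effects on existential risk": None,       # bracketed out
    "long-run effects via population dynamics": None,   # bracketed out
}

determinate = [v for v in comparison.values() if v is not None]
net = sum(determinate)
print("prefer A" if net > 0 else "prefer B" if net < 0 else "no verdict")
```

Crucially, whether a dimension gets a number or a None is itself a substantive judgment about the depth of our unawareness there, which is where the individuation worries discussed below come in.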

In order to rescue some version of the longtermist paradigm, one can also make proposals that constitute alternatives to (i) impartial consequentialism. For example, Vinding appeals to virtue ethics (Vinding 2025a) and non-fully-impartial or “scope-adjusted” forms of consequentialism (Vinding 2025b) that remain compatible with the longtermist view. These could be interpreted as different forms of what Buhler (n.d.-a) calls bounded longtermism.

All the above alternative normative views, whether they are understood as plausible substitutes for expected value maximization or for impartial consequentialism, face serious limitations. First, the reason why we need alternatives is to avoid unacceptably arbitrary longtermist expected value calculations (see §2). And the reason why impartial consequentialism was attractive as an ethical theory, to begin with, is that it avoided arbitrariness by forbidding any form of partiality. Hence, attempting to solve the first problem by appealing to alternatives to impartial consequentialism is unsatisfying. It reintroduces arbitrariness by endorsing moral theories that demand that we care only about a certain subset of outcomes or implications of our actions. Second, bracketing faces the problem of specifying individual consequences (see this discussion from Clifton for some version of this concern). The rule tells us to ignore those consequences about which we are clueless and act based on the remainder. But which consequences should count as “the remainder”? In practice, we could carve up the outcome space in many ways, some of which would make an action look better and others worse. DiGiovanni (n.d.) argues that the most morally well-motivated way to individuate consequences gives us maximal bracket-sets in opposite directions. So, unless we can non-arbitrarily identify a privileged subset of long-term consequences, bracketing risks reintroducing exactly the kind of arbitrariness it was meant to avoid.[46] Third, bounded forms of longtermism may also fail to be action-guiding (Buhler n.d.-a). While unawareness hits the pure and full version of longtermism much harder, it may still be determinative even in bounded cases. For the same reasons, to the extent that one can use bracketing in a non-arbitrary manner, it is not clear that one can epistemically justify bracketing out some long-term consequences rather than all of them.[47]

While the present section may so far suggest these proposals have low plausibility as rescues of longtermism, one might consider even that low plausibility enough to justify following them, invoking some form of what DiGiovanni (n.d.) calls metanormative bracketing.[48] Metanormative bracketing extends the idea of bracketing from consequences to whole normative views. Instead of trying to aggregate across all the views we take seriously—including those that are silent, in terms of action-guidance, due to determinative unawareness—we set aside the silent views that give us no determinate reasons, and let the remaining views guide action. The appeal is that this move avoids both arbitrariness (since we are bracketing out views precisely because they are silent, not because of any ad hoc judgment) and paralysis (since the other views can still generate comparisons). In effect, metanormative bracketing may rescue action-guidance at the meta-level: even if every first-order option for handling unawareness faces decisive problems, we can still justify following one of them once silent verdicts are bracketed out.[49] However, metanormative bracketing is the newest of the proposals discussed in this section. Whether it stands up to scrutiny, and could help justify longtermist interventions,[50] will be determined by future research.

4. Conclusion

Greaves and MacAskill (2025, §10) suggest that a few challenges to longtermism would significantly benefit from further research: the difficulty of putting numbers on the cost-effectiveness of particular attempts to benefit the far future, fanaticism, and cluelessness concerns. I (strongly) join them only when it comes to the third one, and more particularly in the case of unawareness-driven cluelessness. I believe the problem unawareness poses to the longtermist thesis deserves (vastly) more attention, hence the motivation for the present manuscript (§1).

The discussion in §2 showed that Greaves and MacAskill’s responses to the unawareness challenge leave crucial questions open. First, treating unawareness as analogous to ordinary uncertainty may miss the crucial point that many of the unknown outcomes are likely to be determinative of whether our actions are net positive or negative. Importantly, this remains the case even for allegedly robust longtermist strategies such as researching our way out of unawareness and other attempts to maximize option value—meaning one cannot simply fall back on these as a last resort to act in accordance with longtermism. Second, Greaves and MacAskill’s appeal to catchall states does not solve the problem, at least not on its own, since no non-arbitrary utility assignment to such a state seems available. Together, these points extend the agenda of Greaves et al.’s (2025) Essays on Longtermism, suggesting that unawareness challenges the tractability premise of longtermism more profoundly than has been recognized in Greaves and MacAskill (2025, §7.2), the only chapter of the volume that explicitly, although only briefly, acknowledges it.[51] Notably, I have identified what I believe to be the two major cruxes between those who think unawareness undermines longtermism and those who do not: i) are there longtermist interventions that escape determinative unawareness?, and ii) if not, can this determinative unawareness be unproblematic? (e.g., can we appeal to catchall states and inductive reasoning to form non-arbitrary beliefs about unknown outcomes?).

§3 explored possible ways forward, but each faces serious limitations, leaving the prospects for rescuing longtermism highly uncertain. In particular, it seems difficult to identify a rescue that does not reintroduce some form of arbitrariness—arbitrariness in longtermist judgments being the very reason why we started questioning longtermism in the first place. Also, it is not clear how the reasons that might warrant ignoring some long-term consequences do not also, or rather, constitute reasons to ignore all of them.

At this stage, I believe the most promising path forward, in assessing whether any form of longtermism can overcome paralyzing unawareness, would be to clarify what would constitute non-mildly sweet evidence (see §2.2, Interlude) supporting its epistemic premise—i.e., evidence that should actually make us update away from agnosticism on what longtermism recommends.[52] In §3 (second paragraph), I give an example of a hypothetical discovery of a selection effect that would make our unawareness non-determinative. Future research could find more examples of this kind. In §2.2, I give strong reasons to believe the reference classes longtermists may use, to estimate the catchall (via inductive reasoning) for a given intervention, are not appropriate—or at least they have not successfully demonstrated they are. However, it would help to clearly identify what could be a reference class similar enough to longtermists’ decision context for longtermism to be action-guiding despite determinative unawareness—which I do not do (§2.2, Interlude).

Meanwhile, it remains defensible that the most appropriate response to the challenge unawareness poses to longtermists, simply is to reject longtermism, whether it is to instead endorse some form of neartermism, or another alternative to what DiGiovanni (n.d.) calls radical cluelessness—the view that all actions are permissible because the sign of their total consequences is indeterminate. Scrutinizing the assumptions behind such positions, and their longtermist counterparts, should be a prime project in global priorities studies.

Before concluding, I should note that it may be tempting to reject unawareness concerns on pragmatic grounds: if unawareness truly undermines longtermism, then there is “no upside” to addressing it, whereas if it does not but we mistakenly believe it does, we risk foregoing valuable opportunities to overall help far-future moral patients. This reasoning, however, overlooks that the choice is not binary between (A) full and non-relaxed longtermism and (B) radical cluelessness. Many other normative views remain live options (such that there is an upside to addressing unawareness concerns)—e.g., the variants of longtermism discussed in §3, neartermism (see Kollin et al. 2025, §7; Clifton’s informal overview and discussion of their paper), taking care of your loved ones (see DiGiovanni 2025, Chapter 1), or other appealing ethical orientations (see, e.g., DiGiovanni n.d.; Vinding 2025a). Failing to follow the recommendations of whichever of these may be most justified, by ignoring the threat unawareness poses to longtermism and pursuing classic longtermist interventions regardless, would also be serious. We would be failing those we should actually be attempting to help.[53] Hence, even given the apparent asymmetry in stakes[54] between (A) and the other alternatives to (B), the moral cost of dismissing unawareness concerns may be greater than that of overstating their importance. The former risks neglecting the very domains where our actions can still be predictably beneficial.[55]

Acknowledgments

For invaluable comments on earlier versions of this manuscript, I thank Sylvester Kollin, Anthony DiGiovanni, Nathan Barnard, Kuutti Lappalainen, Antonin Broi, Neil Crawford, Oscar Delaney, Joseph Ancion, Daniel Barham, Jordan Stone, Magnus Vinding, Milan Griffes, Nicolas Macé, and Michael St. Jules.

As evidenced by the astronomical number of times I cite his work, I am highly indebted to Anthony DiGiovanni for his insightful research on the problem of unawareness of long-term consequences. I also owe a fair portion of my own thinking on the topic to our discussions. Conversations with Sylvester Kollin, Antonin Broi, Nicolas Macé, and Oscar Delaney were also appreciably helpful. 

My attempt at making parts of this essay look somewhat academic would be (even more of) a disaster without Oscar Horta’s tremendous help on other writings of mine.

None of the above implies endorsement. I remain the only person who can be held accountable for my claims, omissions, and unpleasant stylistic choices.

Note for the jury of the Essays on Longtermism competition

I would like to turn some version of this post into an academic paper (removing the unnecessary informal references and most of the background in §1, importantly). Therefore, I would highly appreciate any feedback, regardless of whether this piece gets selected as one of the winners. :)

References

Alembert, Jean Le Rond d’. 1761. Opuscules Mathématiques, Ou Mémoires Sur Différens Sujets de Géométrie, de Méchanique, d’Optique, d’Astronomie, Etc. Tome 2. https://eudml.org/doc/203146.

Alexandrie, Gustav, and Maya Eden. 2025. “Is Extinction Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0023.

Arrhenius, Gustaf, and Krister Bykvist. 1995. “Future Generations and Interpersonal Compensations: Moral Aspects of Energy Use.” https://philpapers.org/rec/ARRA.

Askell, Amanda. 2018. “Pareto Principles in Infinite Ethics.” PhD Thesis, New York University. https://philarchive.org/rec/ASKPPI.

Askell, Amanda, and Sven Neth. 2025. “Longtermist Myopia.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0020.

Baumann, Tobias. 2022. Avoiding the Worst: How to Prevent a Moral Catastrophe. Independently published.

Beckstead, Nicholas. 2013. “On the Overwhelming Importance of Shaping the Far Future.” Rutgers University - Graduate School - New Brunswick. https://doi.org/10.7282/T35M649T.

Beckstead, Nick. 2019. “A Brief Argument for the Overwhelming Importance of Shaping the Far Future.” In Effective Altruism: Philosophical Issues, edited by Hilary Greaves and Theron Pummer. Oxford University Press. https://doi.org/10.1093/oso/9780198841364.003.0006.

Beckstead, Nick, and Teruji Thomas. 2024. “A Paradox for Tiny Probabilities and Enormous Values.” Noûs 58 (2): 431–55. https://doi.org/10.1111/nous.12462.

Benatar, David. 2006. Better Never to Have Been: The Harm of Coming into Existence. Clarendon Press.

Benatar, David. 2013. “Still Better Never to Have Been: A Reply to My Critics.” The Journal of Ethics 17 (1–2): 121–51. https://doi.org/10.1007/s10892-012-9133-7.

Bendell, Jem, and Rupert Read. 2021. Deep Adaptation: Navigating the Realities of Climate Chaos. John Wiley & Sons.

Bergström, Lars. 1977. “Utilitarianism and Future Mistakes.” Theoria 43 (2): 84–102. https://doi.org/10.1111/j.1755-2567.1977.tb00781.x.

Bernard, David Rhys. 2023. “Uncertainty over Time and Bayesian Updating.” Rethink Priorities, October 25. https://rethinkpriorities.org/research-area/uncertainty-over-time-and-bayesian-updating/.

Bernard, David Rhys, Gharad Bryan, Sylvain Chabé-Ferrett, Jonathan De Quidt, Jasmin Claire Fliegner, and Roland Rathelot. 2024. How Much Should We Trust Observational Estimates? Accumulating Evidence Using Randomized Controlled Trials with Imperfect Compliance. Working Paper No. 976. https://www.econstor.eu/handle/10419/306608.

Bernard, David Rhys, Jojo Lee, and Victor Yaneng Wang. 2023. “Estimating Long-Term Treatment Effects without Long-Term Outcome Data.” October 23. https://www.globalprioritiesinstitute.org/wp-content/uploads/Estimating-long-term-treatment-effects-without-long-term-outcome-data-David-Rhys-Bernard-Jojo-Lee-and-Victor-Yaneng-Wang.pdf.

Bernard, David Rhys, and Eva Vivalt. 2025. “What Are the Prospects of Forecasting the Far Future?” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0012.

Bostrom, Nick. 2002. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.” Journal of Evolution and Technology 9. https://ora.ox.ac.uk/objects/uuid:827452c3-fcba-41b8-86b0-407293e6617c.

Bostrom, Nick. 2003. “Astronomical Waste: The Opportunity Cost of Delayed Technological Development.” Utilitas 15 (3): 308–14. https://doi.org/10.1017/s0953820800004076.

Bostrom, Nick. 2007. “Technological Revolutions: Ethics and Policy in the Dark.” In Nanoscale. John Wiley & Sons, Ltd. https://doi.org/10.1002/9780470165874.ch10.

Bostrom, Nick. 2013. “Existential Risk Prevention as Global Priority.” Global Policy 4 (1): 15–31. https://doi.org/10.1111/1758-5899.12002.

Bostrom, Nick. 2014a. “Crucial Considerations and Wise Philanthropy.” Effective Altruism. https://www.effectivealtruism.org/articles/crucial-considerations-and-wise-philanthropy-nick-bostrom.

Bostrom, Nick. 2014b. “Hail Mary, Value Porosity, and Utility Diversification.” https://www.semanticscholar.org/paper/Hail-Mary%2C-Value-Porosity%2C-and-Utility-Bostrom/f447c5858b0f31ecda40261f8a6cda8ee3dac9da.

Bostrom, Nick. 2014c. Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Bostrom, Nick. 2024. Deep Utopia: Life and Meaning in a Solved World. Ideapress Publishing.

Bradley, Richard. 2017. Decision Theory with a Human Face. Cambridge University Press. https://doi.org/10.1017/9780511760105.

Browning, Heather, and Walter Veit. 2023. “Positive Wild Animal Welfare.” Biology & Philosophy 38 (2): 14. https://doi.org/10.1007/s10539-023-09901-5.

Browning, Heather, and Walter Veit. 2025. “Longtermism and Animals.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0028.

Buffon, Georges Louis Leclerc comte de. 1777. Essai d’arithmétique morale.

Buhler, Jim. n.d.-a. “How Far Do Our Duties to Future Generations Extend? Full vs. Bounded Longtermism.” Accessed October 17, 2025. https://docs.google.com/document/d/1GyIlsU31jB9ewSNC7veauPv6uTLLRERdCNhlGCYj5qs/edit?tab=t.0&usp=embed_facebook.

Buhler, Jim. n.d.-b. “The Animal Ethics of Spreading Life Off-Earth.” Accessed October 17, 2025. https://docs.google.com/document/d/1MYod2phNgCqe0WOJGXi-8Ko6Mgx9EtC4iSeIXN6v20I/edit?usp=sharing&usp=embed_facebook.

Burch-Brown, Joanna M. 2014. “Clues for Consequentialists.” Utilitas 26 (1): 105–19. https://doi.org/10.1017/S0953820813000289.

Canson, Chloé de. 2024. “The Nature of Awareness Growth.” The Philosophical Review 133 (1): 1–32. https://doi.org/10.1215/00318108-10880419.

Carlsmith, Joe. 2025. “Existential Risk from Power-Seeking AI.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0025.

Cotton-Barratt, Owen, and Toby Ord. 2015. Existential Risk and Existential Hope: Definitions. https://ora.ox.ac.uk/objects/uuid:e14d5677-b98b-427b-a02c-d681351713c9.

Crisp, Roger. 2022. “Pessimism about the Future.” Midwest Studies in Philosophy 46 (July): 373–85. https://doi.org/10.5840/msp202311139.

Curran, Emma J. 2025. “Longtermism and the Complaints of Future People.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0008.

DiGiovanni, Anthony. 2025. “The Challenge of Unawareness for Impartial Altruist Action Guidance.” June 2. https://forum.effectivealtruism.org/s/rHqdsrieinyhM5KDv.

DiGiovanni, Anthony. n.d. “Resolving Cluelessness Nihilism with Metanormative Bracketing.” Accessed October 17, 2025. Forthcoming EA Forum post.

Friederich, Simon. 2025. “Causation, Cluelessness, and the Long Term.” Ergo an Open Access Journal of Philosophy 12 (0). https://doi.org/10.3998/ergo.7428.

Grant, Simon, and John Quiggin. 2013. “Inductive Reasoning about Unawareness.” Economic Theory 54 (3): 717–55. https://doi.org/10.1007/s00199-012-0734-y.

Greaves, Hilary. 2016. “XIV—Cluelessness.” Proceedings of the Aristotelian Society 116 (3): 311–39. https://doi.org/10.1093/arisoc/aow018.

Greaves, Hilary, Jacob Barrett, and David Thorstad, eds. 2025. Essays on Longtermism: Present Action for the Distant Future. Oxford University Press. https://doi.org/10.1093/9780191979972.001.0001.

Greaves, Hilary, and William MacAskill. 2025. “The Case for Strong Longtermism.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0003.

Greaves, Hilary, and Christian Tarsney. 2025. “Minimal and Expansive Longtermism.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0021.

Gustafsson, Johan E, and Petra Kosonen. 2025. “Prudential Longtermism.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0005.

Hansen, J., D. Johnson, A. Lacis, et al. 1981. “Climate Impact of Increasing Atmospheric Carbon Dioxide.” Science 213 (4511): 957–66. https://doi.org/10.1126/science.213.4511.957.

Horta, Oscar, and Mat Rozas. 2025. “Animals and Longtermism.” World Futures 81 (2): 85–95. https://doi.org/10.1080/02604027.2024.2424711.

Karni, Edi, and Marie-Louise Vierø. 2013. “‘Reverse Bayesianism’: A Choice-Based Theory of Growing Awareness.” American Economic Review 103 (7): 2790–810. https://doi.org/10.1257/aer.103.7.2790.

Kerslake, Robert, and Christine Wagg. 2021. “The Challenge of Effective Long-term Thinking in the UK Government and the Critical Role of Philanthropy.” In Creating a Better Future: The Case for Long-term Thinking and Institutions, edited by Natalie Jones, Sam Hilton, and Matthew S. O’Brien, pp. 71–84. Centre for the Study of Existential Risk and the All-Party Parliamentary Group for Future Generations.

Kitcher, Philip. 2025. “Coping with Myopia.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0014.

Knutsson, Simon. 2021. “The World Destruction Argument.” Inquiry 64 (10): 1004–23. https://doi.org/10.1080/0020174X.2019.1658631.

Kollin, Sylvester, Jesse Clifton, Anthony DiGiovanni, and Nicolas Macé. 2025. “Bracketing Cluelessness.” September. https://longtermrisk.org/files/Bracketing_Cluelessness.pdf.

Kovic, Marko. 2020. “Risks of Space Colonization.” Futures 126 (December). https://doi.org/10.1016/j.futures.2020.102638.

Kruus, Nicholas. 2025. “Axiological Cluelessness.” No. 2025031503. Preprint, Preprints, March 20. https://doi.org/10.20944/preprints202503.1503.v1.

Lenman, James. 2000. “Consequentialism and Cluelessness.” Philosophy & Public Affairs 29 (4): 342–70. https://doi.org/10.1111/j.1088-4963.2000.00342.x.

Lewis, Gregory. 2021. Complex Cluelessness as Credal Fragility. February 8. https://forum.effectivealtruism.org/posts/Q3ZBt3X8aeLaWjbhK/complex-cluelessness-as-credal-fragility.

Lundgren, Björn, and Karolina Kudlek. 2024. “What We Owe (to) the Present: Normative and Practical Challenges for Strong Longtermism.” Futures 164 (December): 103471. https://doi.org/10.1016/j.futures.2024.103471.

Lutz, Wolfgang, Warren Sanderson, and Sergei Scherbov. 1997. “Doubling of World Population Unlikely.” Nature 387 (6635): 803–5. https://doi.org/10.1038/42935.

MacAskill, William. 2022. What We Owe the Future. Basic Books.

MacAskill, William. 2025. “Better Futures.” Forethought, August 3. https://www.forethought.org/research/better-futures.

MacAskill, William, Krister Bykvist, and Toby Ord. 2020. Moral Uncertainty. Oxford University Press.

MacAskill, William, Aron Vallinder, Caspar Oesterheld, Carl Shulman, and Johannes Treutlein. 2021. “The Evidentialist’s Wager.” Journal of Philosophy 118 (6): 320–42. https://doi.org/10.5840/jphil2021118622.

Matharu, Manjit, and Peter Goadsby. 2001. “Cluster Headache: Update on a Common Neurological Problem.” Practical Neurology 1 (1): 42–49. https://doi.org/10.1046/j.1474-7766.2001.00505.x.

Mellers, Barbara, Eric Stone, Pavel Atanasov, et al. 2015. “The Psychology of Intelligence Analysis: Drivers of Prediction Accuracy in World Politics.” Journal of Experimental Psychology. Applied 21 (1): 1–14. https://doi.org/10.1037/xap0000040.

Mogensen, Andreas L. 2021. “Maximal Cluelessness.” The Philosophical Quarterly 71 (1): 141–62. https://doi.org/10.1093/pq/pqaa021.

Mogensen, Andreas L. 2022. “The Only Ethical Argument for Positive δ? Partiality and Pure Time Preference.” Philosophical Studies 179 (9): 2731–50. https://doi.org/10.1007/s11098-022-01792-8.

Mogensen, Andreas L. 2024. “The Weight of Suffering.” Journal of Philosophy 121 (6): 335–54. https://doi.org/10.5840/jphil2024121624.

Mogensen, Andreas L. 2025. “Would a World Without Us Be Worse? Clues from Population Axiology.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0006.

Mogensen, Andreas L., and David Thorstad. 2022. “Tough Enough? Robust Satisficing as a Decision Norm for Long-Term Policy Analysis.” Synthese 200 (1): 36. https://doi.org/10.1007/s11229-022-03566-5.

Monton, Bradley. 2019. “How to Avoid Maximizing Expected Utility.” Philosophers’ Imprint 19.

Moore, George Edward. (1903) 2005. Principia Ethica. Barnes & Noble Publishing.

Nordhaus, William D. 2007. “A Review of the Stern Review on the Economics of Climate Change.” Journal of Economic Literature 45 (3): 686–702. https://doi.org/10.1257/jel.45.3.686.

Nordhaus, William D. 2008. A Question of Balance: Weighing the Options on Global Warming Policies. Yale University Press.

O’Brien, Gary David. 2022. “Directed Panspermia, Wild Animal Suffering, and the Ethics of World-Creation.” Journal of Applied Philosophy 39 (1): 87–102. https://doi.org/10.1111/japp.12538.

O’Brien, Gary David. 2024. “The Case for Animal-Inclusive Longtermism.” Journal of Moral Philosophy, January 19. https://doi.org/10.1163/17455243-20234296.

Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. Bloomsbury.

Ord, Toby. 2025a. “Evaluating the Infinite.” arXiv:2509.19389. Preprint, arXiv, September 22. https://doi.org/10.48550/arXiv.2509.19389.

Ord, Toby. 2025b. “Forecasting Can Get Easier over Longer Timeframes.” https://www.youtube.com/watch?v=D6p3NYHB2gA.

Parfit, Derek. 2011. On What Matters: Volume II. Oxford University Press.

Paul, Laurie. 2014. Transformative Experience. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198717959.001.0001.

Paul, Laurie, and John Quiggin. 2018. “Real World Problems.” Episteme 15 (3): 363–82. https://doi.org/10.1017/epi.2018.28.

Pettigrew, Richard. 2023. “The Foundations of Longtermism.” Richard Pettigrew’s Homepage, December 8. https://richardpettigrew.com/the-foundations-of-longtermism/.

Pettigrew, Richard. 2024. “Should Longtermists Recommend Hastening Extinction Rather Than Delaying It?” The Monist 107 (2): 130–45. https://doi.org/10.1093/monist/onae003.

Powell, Russell. 2025. “Taking the Long View: Paleobiological Perspectives on Longtermism.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0013.

Rees, Martin J. 2003. Our Final Century: A Scientist’s Warning: How Terror, Error, and Environmental Disaster Threaten Humankind’s Future in This Century - on Earth and Beyond. Random House.

Riedener, Stefan. 2025. “Authenticity, Meaning, and Alienation: Reasons to Care Less about Far-Future People.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0010.

Roussos, Joe. 2021. “Unawareness for Longtermists.” https://joeroussos.org/wp-content/uploads/2021/11/210624-Roussos-GPI-Unawareness-and-longtermism.pdf.

Roussos, Joe. 2025. “Awareness Revision and Belief Extension.” Australasian Journal of Philosophy 103 (2): 373–96. https://doi.org/10.1080/00048402.2024.2412251.

Rulli, Tina. 2024. “Effective Altruists Need Not Be Pronatalist Longtermists.” Public Affairs Quarterly 38 (1): 22–44. https://doi.org/10.5406/21520542.38.1.03.

Russell, Jeffrey Sanford. 2024. “On Two Arguments for Fanaticism.” Noûs 58 (3): 565–95. https://doi.org/10.1111/nous.12461.

Sandberg, Anders. n.d. “Grand Futures.” Unpublished manuscript.

Savulescu, Julian, and Nick Bostrom, eds. 2009. Human Enhancement. Oxford University Press. https://doi.org/10.1093/oso/9780199299720.001.0001.

Schmidt, Andreas T., and Jacob Barrett. 2025. “Longtermist Political Philosophy: An Agenda for Future Research.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0030.

Schoenfield, Miriam. 2012. “Chilling out on Epistemic Rationality.” Philosophical Studies 158 (2): 197–219. https://doi.org/10.1007/s11098-012-9886-7.

Schopenhauer, Arthur. (1942) 2020. On the Suffering of the World. Watkins Media Limited.

Schwitzgebel, Eric. 2024. The Washout Argument Against Longtermism. https://faculty.ucr.edu/~eschwitz/SchwitzAbs/WashoutLongtermism.htm.

Setiya, Kieran. 2014. “The Ethics of Existence.” Philosophical Perspectives 28: 291–301.

Smith, Nicholas J. J. 2014. “Is Evaluative Compositionality a Requirement of Rationality?” Mind 123 (490): 457–502. https://doi.org/10.1093/mind/fzu072.

Soryl, Asher A., Andrew J. Moore, Philip J. Seddon, and Mike R. King. 2021. “The Case for Welfare Biology.” Journal of Agricultural and Environmental Ethics 34 (2): 7. https://doi.org/10.1007/s10806-021-09855-2.

Soryl, Asher, and Anders Sandberg. 2025. “To Seed or Not to Seed: Estimating the Ethical Value of Directed Panspermia.” Acta Astronautica 232 (July): 397–404. https://doi.org/10.1016/j.actaastro.2025.03.025.

Sotala, Kaj, and Lukas Gloor. 2017. “Superintelligence as a Cause or Cure for Risks of Astronomical Suffering.” Informatica: An International Journal of Computing and Informatics 41 (4): 389–400.

Steele, Katie, and H. Orri Stefánsson. 2021. Beyond Uncertainty: Reasoning with Unknown Possibilities. Cambridge University Press.

Steele, Katie, and H. Orri Stefánsson. 2022. “Transformative Experience, Awareness Growth, and the Limits of Rational Planning.” Philosophy of Science 89 (5): 939–48. https://doi.org/10.1017/psa.2022.55.

Stern, Nicholas. 2007. The Economics of Climate Change: The Stern Review. Cambridge University Press. https://doi.org/10.1017/CBO9780511817434.

Stern, Nicholas. 2008. “The Economics of Climate Change.” American Economic Review 98 (2): 1–37. https://doi.org/10.1257/aer.98.2.1.

Tarsney, Christian. 2023. “The Epistemic Challenge to Longtermism.” Synthese 201 (6): 195. https://doi.org/10.1007/s11229-023-04153-y.

Tarsney, Christian. 2025. “Against Anti-Fanaticism.” Philosophy and Phenomenological Research 110 (2): 734–53. https://doi.org/10.1111/phpr.13120.

Tarsney, Christian, Teruji Thomas, and William MacAskill. 2024. “Moral Decision-Making Under Uncertainty.” In The Stanford Encyclopedia of Philosophy, Spring 2024, edited by Edward N. Zalta and Uri Nodelman. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/spr2024/entries/moral-decision-uncertainty/.

Tarsney, Christian, and Hayden Wilkinson. 2025. “Longtermism in an Infinite World.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0007.

The Global Priorities Institute. 2024. Philosophy Research Agenda: Version 1 (November 2024). University of Oxford. https://www.globalprioritiesinstitute.org/wp-content/uploads/GPI-Philosophy-Research-Agenda-Version-1-November-2024.pdf.

Thorstad, David. 2023. “High Risk, Low Reward: A Challenge to the Astronomical Value of Existential Risk Mitigation.” Philosophy & Public Affairs 51 (4): 373–412. https://doi.org/10.1111/papa.12248.

Thorstad, David. 2024. “Mistakes in the Moral Mathematics of Existential Risk.” Ethics 135 (1): 122–50. https://doi.org/10.1086/731436.

Thorstad, David. 2025a. “The Scope of Longtermism.” Australasian Journal of Philosophy, 1–22. https://doi.org/10.1080/00048402.2025.2521030.

Thorstad, David. 2025b. “The Scope of Longtermism (Part 4: Unawareness).” Reflective Altruism, March 7. https://reflectivealtruism.com/2025/03/07/the-scope-of-longtermism-part-4-unawareness/.

Thorstad, David. Forthcoming. “General-Purpose Institutional Decision-Making Heuristics: The Case of Decision-Making Under Deep Uncertainty.” British Journal for the Philosophy of Science, ahead of print. https://doi.org/10.1086/722307.

Thorstad, David, and Andreas Mogensen. 2020. Heuristics for Clueless Agents: How to Get Away with Ignoring What Matters Most in Ordinary Decision-Making. https://globalprioritiesinstitute.org/david-thorstad-and-andreas-mogensen-heuristics-for-clueless-agents-how-to-get-away-with-ignoring-what-matters-most-in-ordinary-decision-making/.

Tomasik, Brian. 2015. “Charity Cost-Effectiveness in an Uncertain World.” Center on Long-Term Risk, August 29. https://longtermrisk.org/charity-cost-effectiveness-in-an-uncertain-world/.

Torres, Phil. 2018. “Space Colonization and Suffering Risks: Reassessing the ‘Maxipok Rule.’” Futures 100 (June): 74–85. https://doi.org/10.1016/j.futures.2018.04.008.

Ullmann-Margalit, Edna. 2006. “Big Decisions: Opting, Converting, Drifting.” Royal Institute of Philosophy Supplements 58 (May): 157–72. https://doi.org/10.1017/S1358246106058085.

Vallinder, Aron. 2025. “Longtermism and Cultural Evolution.” In Essays on Longtermism: Present Action for the Distant Future, edited by Hilary Greaves, Jacob Barrett, and David Thorstad. Oxford University Press. https://doi.org/10.1093/9780191979972.003.0016.

Vinding, Magnus. 2020. Suffering-Focused Ethics: Defense and Implications. Ratio Ethica.

Vinding, Magnus. 2025a. A Virtue-Based Approach to Reducing Suffering given Long-Term Cluelessness. August 13. https://forum.effectivealtruism.org/posts/SvueCx6HyGfMpKZxn/a-virtue-based-approach-to-reducing-suffering-given-long.

Vinding, Magnus. 2025b. Reducing Suffering given Long-Term Cluelessness. June 26. https://forum.effectivealtruism.org/posts/dq7cHFgJrZSQBcNrN/reducing-suffering-given-long-term-cluelessness.

Wilkinson, Hayden. 2022. “In Defense of Fanaticism.” Ethics 132 (2): 445–77. https://doi.org/10.1086/716869.

Smart, J. J. C. 1973. “An Outline of a System of Utilitarian Ethics.” In Utilitarianism: For and Against, by J. J. C. Smart and Bernard Williams. Cambridge University Press. https://doi.org/10.1017/CBO9780511840852.001.

Williamson, Patrick. 2022. “On Cluelessness.” https://doi.org/10.25911/ZWK2-T508.

Winter, Christoph, Jonas Schuett, Eric Martínez, et al. 2021. “Legal Priorities Research: A Research Agenda.” SSRN Scholarly Paper No. 3931256. Social Science Research Network, January 7. https://doi.org/10.2139/ssrn.3931256.

Yim, Lok Lam. 2019. “The Cluelessness Objection Revisited.” Proceedings of the Aristotelian Society 119 (3): 321–24. https://doi.org/10.1093/arisoc/aoz016.

Zandbergen, Jeroen Robbert. 2021. “Wailing From the Heights of Velleity: A Strong Case for Antinatalism in These Trying Times.” South African Journal of Philosophy 40 (3): 265–78. https://doi.org/10.1080/02580136.2021.1949559.

Zuber, Stéphane, Nikhil Venkatesh, Torbjörn Tännsjö, et al. 2021. “What Should We Agree on about the Repugnant Conclusion?” Utilitas 33 (4): 379–83. https://doi.org/10.1017/S095382082100011X.

  1. ^

     One could make this definition more precise, distinguishing between weak versus strong longtermism and deontic versus axiological longtermism (Greaves & MacAskill 2025), or full versus bounded longtermism (Buhler n.d.-a). I intentionally work with the unspecified definition, since all variants are implicated by the points I make in this manuscript (although, as suggested in §3, some of the examples I happen to give might not apply to some forms of bounded longtermism).

  2. ^

     For criticisms of fanaticism that are not specifically applied to the context of potential astronomical long-term payoffs, see Smith (2014); Monton (2019); Alembert (1761); Buffon (1777).

  3. ^

     This stocktaking takes inspiration from Zuber et al.’s (2021) attempt to drive forward debates surrounding the repugnant conclusion.

  4. ^

     For instance, Rees (2003) and Bendell & Read (2021) both argue that our civilization is likely to go extinct or collapse within the next century, or even within decades.

  5. ^

     As is implied by Gustafsson and Kosonen (2025), this might be true even if we do not account for new births but focus only on contemporary individuals who might, in some meaningful sense, keep existing for a very long time, e.g., through digital uploads or thanks to anti-ageing science. Future versions of them would still “outnumber” their contemporary versions in a morally relevant way.

  6. ^

     See Thorstad (2023; 2024), Alexandrie and Eden (2025), and Kruus (2025, §5) for some push-back on this fairly widespread consensus among those who have engaged with the longtermist literature, however.

  7. ^

     The core of my essay will discuss a less classic version of this problem, involving consideration of long-term effects we are unaware of.

  8. ^

     For relevant discussions of the classic cluelessness objection to longtermism, see also Lenman (2000); Burch-Brown (2014); Yim (2019); Thorstad & Mogensen (2020); Williamson (2022); Lundgren & Kudlek (2024); Tarsney et al. (2024, §3); Thorstad (2024, §5, §9); Friederich (2025); Kruus (2025); Greaves & MacAskill (2025, §7); Riedener (2025); Powell (2025); Askell and Neth (2025, §4.2).

  9. ^

     Greaves and MacAskill (2025, §8) argue that it might not necessarily.

  10. ^

     While the unawareness literature primarily discusses unawareness about the outcomes that may result from choosing a given option—for a fairly representative overview, see Bradley (2017), Steele and Stefánsson (2021), de Canson (2024), and Roussos (2025)—some writings touch on unawareness about the options available to the decision-maker (see Thorstad 2025a, §6; Steele & Stefánsson 2021, pp. 40–41 and 127–128; Bradley 2017, pp. 52–53, 248–259, 399–401, and 439–440; Karni & Vierø 2013; Greaves and MacAskill 2025, §7.2). Thorstad (2025b, §4) draws a similar distinction between unawareness of states and actions. To be clear, it is exclusively outcome unawareness that I treat throughout the present essay. I agree with Greaves and MacAskill’s (2025, §7.2) third point that option unawareness does not threaten longtermism on its own. In fact, as long as we are talking about our current epistemic situation and do not consider potential expected awareness change, an option we are unaware of is arguably not in fact an option in the relevant sense. (Thanks to Sylvester Kollin for bringing this to my attention.) This undermines the usefulness of the concept of option unawareness in our context to begin with.

  11. ^

     Paul and Quiggin (2018, §4.1) differentiate between restricted awareness (some possibilities are not represented in our models at all) and coarse awareness (where one represents possibilities only in a rough or lumped way, masking restricted unawareness within these lumps). See also DiGiovanni (2025, Chapter 1) for a helpful illustration of this distinction.

  12. ^

     This itself only constitutes a challenge for what one might call ex ante longtermism (focusing on action-guidance under uncertainty, prior to knowing outcomes), not the ex post version (which can only be assailed by objections to the thesis that the effects of our actions on the long-term future swamp their near-term effects).

  13. ^

     Who also claims that “the problem has been under-appreciated”.

  14. ^

     Before them, Bostrom (2007; 2014a; 2014c, pp. 354–355) and Tomasik (2015) had examined less precise versions of this concern, although far less thoroughly and without the more recently developed unawareness vocabulary. They also seem to underplay the depth of the problem, as the case I build in the remainder of the present manuscript suggests. In addition, some of the cluelessness literature (referred to in the above fourth claim about longtermism) arguably trades implicitly on the problem of unknown outcomes. If so, however, it does this without explaining how unawareness-driven cluelessness qualitatively differs from cluelessness driven by our incapacity to non-arbitrarily weight (fully) known outcomes.

  15. ^

     There is also a burgeoning literature on the broader topic of unawareness in various decision problems other than the ones faced by longtermists, which is indirectly relevant. For a fairly representative overview, see Bradley (2017), Steele and Stefánsson (2021), de Canson (2024), and Roussos (2025).

  16. ^

     Another way in which the case I make in §2 is more than a mere rephrasing of that made by Roussos (2021, slides), Thorstad (2025b), and DiGiovanni (2025) is that I emphasize how unawareness calls into question the desirability of reducing existential risks for our civilization altogether—while, e.g., DiGiovanni (2025) focuses on specific strategies aimed at reducing x-risks. Also, I stress how unawareness-driven cluelessness about the (dis)value of such strategies infects other longtermist interventions, including the least controversial ones, such as preventing dystopian lock-in scenarios, enabling future generations, and other interventions aimed at maximizing option value. Finally, I make very clear what I believe the cruxes to be between those who think unawareness undermines longtermism and those who do not (see also my summarized takeaways from §2 in §4).

  17. ^

     I borrow the example of solar geoengineering against climate change from Steele and Stefánsson (2021, pp. 87–90, 93–94).

  18. ^

     See Steele and Stefánsson (2021, §6.2) for reasons why she arguably would not know that. Sometimes, there is no need to even consider effects on time-scales longer than decades to a century for unawareness to already be paralyzing (see also DiGiovanni’s “What to do about near-term cluelessness in animal welfare”).

  19. ^

     This is analogous to the third claim about longtermism discussed in §1, which concerns how second-order effects (even only those we are fully aware of) often seem to swamp first-order effects in our decisions.

  20. ^

     Lola has to be conscious of her own unawareness for it to affect her decision-making.

  21. ^

     Here is a more formal definition: a subject S is in a state of determinative unawareness with respect to a decision problem D iff there exist possible outcomes of S’s available actions such that (i) S is not aware of these outcomes at the time of choice, and (ii) the inclusion or exclusion of these outcomes would, given reasonable evaluative standards, change the overall comparative desirability of the options in D.
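
     To make the quantifier structure explicit, here is one way of rendering the definition symbolically (a sketch; the notation, including the ranking relation $\succsim_{O'}$, is mine rather than anything standard in the unawareness literature). Let $O$ be the set of possible outcomes of the options in $D$, let $O_S \subseteq O$ be those outcomes $S$ is aware of, and let $\succsim_{O'}$ denote the comparative desirability ranking of the options in $D$ when evaluated over an outcome set $O'$. Then $S$ is determinatively unaware in $D$ iff

     $$\exists\, O^{*} \subseteq O \setminus O_S \ \text{such that} \ \succsim_{O_S} \;\neq\; \succsim_{O_S \cup O^{*}}.$$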

  22. ^

     Although, as the literature on the classic problem of cluelessness suggests (see the fourth claim listed in §1), Lola should arguably not be confident in this. Complex cluelessness may also arise solely from outcomes she is aware of. Unawareness is an arguably far more evident motivation for cluelessness, however. To the extent that it was defensible to escape agnosticism with a judgment call as to what the known considerations sum up to, decisive unknown considerations pose a qualitatively new challenge. (More on this in §2.2.)

  23. ^

     Thanks to Anthony DiGiovanni for pressing me to make this explicit.

  24. ^

     See also Kollin et al. (2025, §7) for a similar distinction.

  25. ^

     For instance, maybe, in order to preserve option value for our civilization, we prevented an AI takeover that would in fact have made the future far more positive than in the counterfactual scenarios.

  26. ^

     As defined by Bostrom (2002; 2013), Cotton-Barratt & Ord (2015), Ord (2020), MacAskill (2022, Part III), and Carlsmith (2025).

  27. ^

     DiGiovanni (2025) makes, across his sequence, an argument similar to, and more elaborate than, the one that follows, taking (repeatedly) the example of reducing the chance of misaligned-AI takeover and increasing AI control, rather than that of X-risk reduction. He also treats the example of advocating for digital mind welfare in his third chapter.

  28. ^

     If one is skeptical that digital mind welfare dominates longtermist expected value calculations, one should probably pay attention to at least some of the following parameters: how biological enhancement science goes and how it is used (Savulescu & Bostrom 2009; Bostrom 2024), which animals are welfare subjects and to what extent (MacAskill 2022), whether and/or when humanity will stop farming (some of) the animals with miserable lives we would not wish to live ourselves, whether it will spread wild ecosystems to other celestial bodies and how much (see Soryl & Sandberg 2025; Horta & Rozas 2025), and the overall welfare of animals living in the wild (see, e.g., Soryl et al. 2021; O’Brien 2022, §2.4; Browning & Veit 2023; Buhler n.d.-b).

  29. ^

     It is worth noting that such judgment calls have already been seriously (although sometimes only indirectly) questioned in various academic writings (see Kruus 2025; Mogensen 2025; Crisp 2022; Knutsson 2021, §3; Benatar 2006, p. 194; 2013, p. 121; Bergström 1977; Schopenhauer [1942] 2020; Zandbergen 2021; Kovic 2020; Torres 2018; Arrhenius & Bykvist 1995, Chapter 3; Pettigrew 2024; 2023; Rulli 2024). However, these do not explicitly factor in unknown possibilities, which is unfortunate, since explicitly accounting for outcome unawareness seems to constitute a far deeper challenge to the desirability of reducing X-risks (or affecting X-risks in any direction, for that matter). In §2.2, I argue that it is far from clear that longtermists’ judgment calls (factoring in determinative unawareness) are any “better than chance”.

  30. ^

     I believe the longtermist’s best hope of justifying the overall desirability of X-risk reduction is, instead, to demonstrate that they can deal with determinative unawareness, like Greaves and MacAskill (2025, §7.2) attempt to do in their second argument (see §2.2).

  31. ^

     An implicit assumption here is that longtermists cannot ignore the effects of their (in)actions on X-risks, for the same reasons why Lola cannot ignore the unknown consequences of (not) supporting large-scale solar geoengineering. These do not “cancel out” (introduction of §2).

  32. ^

     Note that assigning a utility to the catchall differs from assigning it a weight or probability (unlike the present essay, the unawareness literature often discusses the latter). Here, we are assuming the catchall’s probability is high enough for the sign of its utility to be determinative.
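
     To illustrate (a toy sketch in my own notation, not drawn from the sources cited here): suppose an act $a$ has known outcomes $o_1, \dots, o_n$ with probabilities $p_1, \dots, p_n$ and utilities $u(o_i)$, plus a catchall state $C$ with probability $p_C = 1 - \sum_i p_i$. Then

     $$EU(a) = \sum_{i=1}^{n} p_i\, u(o_i) + p_C\, u(C),$$

     and once $p_C\,\lvert u(C) \rvert$ is large relative to the sum of the known terms, the sign of $u(C)$ settles the sign of $EU(a)$. This is the sense in which the utility, and not merely the probability, assigned to the catchall is determinative.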

  33. ^

     Bradley (2017, pp. 55–56) claims that “we have no way of assigning a probability to [the catchall] state”. Ullmann-Margalit (2006) and Paul (2014) seem to concur in the specific context of what they respectively call “big decisions” and “transformative experiences”, although their models do not explicitly employ the catchall framing.

  34. ^

     For a more in-depth defense of how imprecision and insensitivity to mild sweetening are highly relevant, here, see DiGiovanni (2025, Chapter 2).

  35. ^

     As an aside, this means that the nascent research field aimed at (dis)proving longtermism by gathering existing data from the forecasting literature or producing new data (see Ord 2025b; Bernard & Vivalt 2025; Thorstad 2025a, §5; Bernard et al. 2024; Bernard et al. 2023; Bernard 2023) might struggle to provide us with decisive answers either way. The available reference classes seem too remotely relevant.

  36. ^

     Arguably, those who gesture at past examples of historical figures who had a lasting impact on human society, or at successful “long-range” forecasters, in an attempt to rescue longtermism in the face of determinative unawareness, are indirectly doing this.

  37. ^

     Relatedly, see Mogensen & Thorstad (2022) and Thorstad (forthcoming).

  38. ^

     As DiGiovanni (2025, Chapter 4) writes in response to Tomasik: “Technically, we don’t know the counterfactual, and one could argue that these strategies made atrocities in the 1900s worse. See, e.g., the consequences of dictators reflecting on ambitious philosophical ideas like utopian ideologies, or the rise of factory farming thanks to civilizational stability. At any rate, farmed animal suffering is an exception that proves the rule. Once we account for a new large set of moral patients whose welfare depends on different mechanisms, the trend of making things “better” breaks.”

  39. ^

     I have even argued informally, elsewhere, that judgment calls favoring popular longtermist interventions may be better explained by pro-natalist drives than by a process that makes these judgment calls truth-tracking. This renders them subject to evolutionary debunking.

  40. ^

     Thanks to Anni Leskelä for first bringing to my attention, years ago, the strength of arguments based on logical connections and selection effects in contexts with severe unawareness.

  41. ^

     I could not, however, come up with a specific example I found appropriate to give here. Systematic backfire risks to consider include preventing potentially benign harms that act as effective vaccines against bigger ones, attention hazards, impeding safety-concerned actors, and the interplay of different X-risks. These carry a lot of unawareness with them and pose immense challenges to the implementation robustness of interventions aimed at reducing X-risks.

  42. ^

     See DiGiovanni (2025, Chapter 2) on why most supposed alternatives to expected value maximization that longtermists appeal to are in fact implicit forms of it.

  43. ^

     See also Clifton (2025) for a less precise version of this bracketing proposal that makes fewer assumptions regarding what can be bracketed.

  44. ^

     This aligns with some interpretation or specification of Powell’s (2025) argument for “project[ing ourselves only] tens or even hundreds of millennia into the future, [since this] is probably within our horizon of epistemic plausibility.” However, bracketing must be contrasted with treating cluelessness about effects beyond a certain point in time as the simple kind (while it in fact should arguably be treated as complex—see §1). This seems to be what Tarsney (2023, pp. 24 and 33) does when he suggests we should maybe be longtermists only on a time scale of “thousands or millions of years”, while assuming precise Bayesianism throughout his paper. Schwitzgebel (2024) implicitly makes equivalent assumptions before showing sympathy for “a more modest version of ‘longtermism’ [that] might commit only to being influenced by expectations over [...] the medium-term (several-hundred-year) future” (p. 4).

  45. ^

     See this comment from Anthony DiGiovanni.

  46. ^

     For other challenges that can be posed to bracketing, see this discussion from Clifton.

  47. ^

     As Clifton writes in an informal overview and discussion of Kollin et al., “longtermists who bracket out, e.g., acausal effects should have a story for why they should not get off the train to crazy town earlier, bracketing out the non-exotic long-term effects, too.”

  48. ^

     Some version of this proposal, focusing on moral theories (rather than, e.g., decision theory), is implicitly made by Vinding (2025b; 2025a). MacAskill et al.’s (2020, Chapter 6) proposal on how to deal with “infectious incomparability” in the context of moral uncertainty is also relevant here—although see DiGiovanni (n.d.) for an argument for why it fails when the uncertain agent places high weight on a non-action-guiding view (where, he defends, metanormative bracketing does not fail).

  49. ^

     One could even use metanormative bracketing to rationalize following our stubborn longtermist intuitions, assuming one can give substantial-enough weight to normative views that consider these intuitions robust to unawareness (DiGiovanni n.d.) despite all the countervailing considerations raised in §2.

  50. ^

     Interestingly, Lenman (2000, §VII) implicitly appeals to some version of metanormative bracketing to reject caring about (long-term) consequences, altogether.

  51. ^

     For allusions to the broader challenge of cluelessness, see also Bernard and Vivalt (2025), Powell (2025), Riedener (2025), and Askell and Neth (2025, §4.2).

  52. ^

     Thanks to Nathan Barnard for prompting me to mention this explicitly, here.

  53. ^

     E.g., plausibly, the millions of people currently suffering from cluster headaches, “probably one of the most painful conditions known to mankind” (Matharu & Goadsby 2001). We seem warranted in believing we can overall help those with this condition (see clusterfree.org).

  54. ^

     Whether the stakes are comparable at all is a subtle question. See the discussion in Appendix A of DiGiovanni (2025, Chapter 2).

  55. ^

     For a related analysis of this meta-level objection to unawareness concerns, see DiGiovanni’s (n.d.) discussion of arbitrariness and neartermism vs. longtermism.

Comments (1)

I think this is a very compelling (and enjoyable) essay. I particularly appreciate the first point of 2.1 as an intuitive reminder of the complicated empirical issues at hand. The main argument here is strengthened by this intuitive way of highlighting that doing (impartial) good is actually complicated.

I appreciate the efforts made here to highlight alternatives to long-term EV maximization with precise credences, since the lack of "other options" can be a big mental blocker. Part 3 (and the conclusion, to an extent) seems to constitute the first solid high-level overview of this on the Forum, so this is quite helpful. Not to mention, these sections act as serious reminders of how important it is to "get it right", whatever that ends up meaning.
