Feedback on this post is very much welcome.
Preface
This essay focuses on advanced artificial intelligence (AI) and its relation to the course of the long-run future. Discourse around such AI has centred on risks of misuse by malicious actors, structural societal disruption, and the catastrophic consequences of deploying a misaligned, agent-like superintelligence (Ord, 2020).
The two essays by Joe Carlsmith ('Existential Risk from Power-Seeking AI') and by Richard Ngo and Adam Bales ('Deceit and Power: Machine Learning and Misalignment') also focus on existential risks and the danger of human disempowerment. Both essays make compelling cases, and my essay does not attempt to refute their claims.
I argue that longtermists should also treat the risk of not developing transformative AI capable of realising large‑scale benefits as a core concern. If transformative AI is plausibly necessary for certain high‑value futures, then preventing its development, or failing to achieve it, could forfeit enormous moral value.
Note that by AI, or advanced/transformative AI, I have in mind at minimum the agentic systems capable of planning and strategic awareness (APS systems) described in Carlsmith's essay, and any other AI system with greater capabilities than those.
The Argument
Longtermism rests on three foundational claims: that future people have moral worth, that the potential scale of the future is astronomically vast, and that we in the present can meaningfully influence its trajectory. If we accept these premises, our primary moral obligation is to act as stewards of this immense potential value. The goal is not merely to survive, but to enable a future that is rich in flourishing, discovery, and well-being on a cosmic scale.
Suppose we accept the longtermist view. Achieving anything like a 'technologically mature' civilisation (one that realises a significant fraction of humanity's long-term potential, e.g. interstellar settlement, mastering biology, ensuring cosmic-scale security) is unlikely, if not impossible, without the scientific and technological capabilities provided by advanced artificial intelligence. Of course, by definition, a strictly technologically mature state in the sense Bostrom describes in Deep Utopia would mean that AGI itself has been solved.
My claim is that a future without advanced AI is one of perpetual technological stagnation. Achieving a civilisation capable of mitigating all natural existential risks, unlocking post-scarcity economics, and beginning interstellar settlement presents scientific and coordination problems of a magnitude far exceeding our current abilities. While humanity might survive for a long time, it would remain vulnerable to natural existential risks (e.g., ecological disasters, asteroid impacts) and would never achieve the flourishing that constitutes the vast majority of potential future value. Therefore, the deliberate or accidental failure to develop advanced AI would constitute a serious moral failure: it actively locks in a suboptimal future, representing a loss of value on a scale comparable to that of extinction.
Stagnation as Existential Catastrophe
An existential catastrophe is defined by Ord (2020) as 'the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential'. Crucially, this definition includes not just extinction but also scenarios of 'unrecoverable collapse' or 'unrecoverable dystopia'. I argue that indefinite technological stagnation falls squarely into this category.
Consider the following scenarios:
- Extinction: Humanity is destroyed this century by misaligned AI. The total future value is 0.
- Stagnation: Humanity, fearing existential risk from AI or judging it too high, does not develop advanced AI. Humanity survives but fails to achieve its potential. We might live for another 100,000 years on Earth with welfare roughly similar to that of the 21st century, before a super-volcano erupts or an asteroid hits. We never reach the stars. The value realised is a tiny fraction, say 0.0000001%, of what was possible.
From a utilitarian longtermist perspective, the moral difference between realising 0% of our potential and realising 0.0000001% of it is vast in absolute terms, but negligible compared to the gap between either scenario and a future in which we realise, say, 50% of it. The moral loss from Stagnation is therefore in the same ballpark as the moral loss from Extinction. While the risk from misaligned AI is great, the loss from failing to develop beneficial AI is also profound. In the attempt to protect what we have, let us not lose sight of what could be.
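To put the comparison in symbols, here is a toy calculation; the fractions are placeholders chosen to match the scenarios above, not estimates. Let $V_{\max}$ denote the value of a future in which humanity realises its full potential:

$$V_{\text{Extinction}} = 0, \qquad V_{\text{Stagnation}} \approx 10^{-9}\,V_{\max}, \qquad V_{\text{Flourishing}} \approx 0.5\,V_{\max}.$$

The loss from Stagnation relative to flourishing is $(0.5 - 10^{-9})\,V_{\max}$, while the loss from Extinction is $0.5\,V_{\max}$: the two differ by only about one part in five hundred million of the loss itself.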
Why AI is Plausibly Necessary for Longtermist Goals
Not only is AI useful as a technology in itself, it can also enable or accelerate many of the technologies required to reach an abundant long-term future: post-scarcity economics, radical life extension, and the cognitive breakthroughs needed for interstellar travel and settlement. Without it, certain classes of large‑scale material abundance and technological pathways to space settlement may not be achievable. It is in this context that advanced AI transitions from a potential threat to a probable necessity.
First, high‑value futures may depend on capabilities that only AI can deliver at the required scale and speed. Complex scientific projects that would otherwise take millennia of cumulative effort could be accelerated at an unprecedented rate.
Second, there is the problem of other existential risks. Many non-AI existential risks are problems of immense complexity and scale that we as a civilisation have yet to address. Safely managing biotechnology, developing planetary defence systems against asteroids, or preventing ecological collapse at a global level may require predictive and coordinating intelligence far beyond unassisted human capabilities.
Lastly, there are the immense logistical and energetic challenges of becoming a multi-planetary species, let alone an interstellar one. Managing self-replicating probes, terraforming planets, and governing societies spread across interstellar distances would almost certainly require autonomous, intelligent agents.
Note that I am not claiming that all of the above is needed for a prosperous long-term future, only that without AI or AI-enabled technologies the probability of realising such a future is significantly reduced, so much so that it becomes practically infeasible.
Conclusion
Longtermists should consider not only risk mitigation but also positive differential technological development: simultaneously pushing progress toward beneficial AI while building the guardrails to prevent harmful AI.
If correct, this view necessitates a subtle shift in perspective for longtermist views on existential risk from AI:
- Pausing is not risk-free: Global moratoriums or pauses on AI capabilities research, often proposed as a safety measure, carry their own risks, since they increase the probability of a stagnation scenario.
- Rethinking 'AI Safety': A future without advanced AI is not a very safe one. There needs to be space to move beyond the standard focus on preventing harms from advanced AI and to consider the astronomical opportunity cost of its absence.
LLM disclosure: This post was first hand written without any LLM assistance. Then Gemini 2.5 Flash was prompted to check for grammatical errors and flag inconsistencies in the main argument.
---
References
- Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
- Bostrom, Nick. Deep Utopia: Life and Meaning in a Solved World. Ideapress Publishing, 2024.
- Carlsmith, Joe, 'Existential Risk from Power-Seeking AI', in Hilary Greaves, Jacob Barrett, and David Thorstad (eds), Essays on Longtermism: Present Action for the Distant Future (Oxford, 2025; online edn, Oxford Academic, 18 Aug. 2025), https://doi.org/10.1093/9780191979972.003.0025, accessed 17 Oct. 2025.
- Ngo, Richard, and Adam Bales, 'Deceit and Power: Machine Learning and Misalignment', in Hilary Greaves, Jacob Barrett, and David Thorstad (eds), Essays on Longtermism: Present Action for the Distant Future (Oxford, 2025; online edn, Oxford Academic, 18 Aug. 2025), https://doi.org/10.1093/9780191979972.003.0026, accessed 17 Oct. 2025.
- Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. Hachette, 2020.
