
Note (added after original post): This essay was ghostwritten by AI and, as such, has a few significant, sometimes subtle, mistakes. An updated final version can be found here.

Note: after I posted a temporary version for the "Essays on Longtermism" competition deadline, this series of posts was temporarily removed from the front page; it will be reposted in the coming days, once updates have been made.

Epistemic status: In the spirit of draft amnesty, I am posting this series slightly before it is fully ready or in ideal form. 

This series represents many years' worth of my thinking, and I believe the core material here is quite important. However, I wanted to submit it for the "Essays on Longtermism" competition, due today, which ended up competing with several important fellowship and other applications, so it received far less priority and attention than it deserved.

Nonetheless, I believe these ideas are fundamentally important; the execution may simply be closer to a middle draft than a final draft.

That said, I will likely be updating this series significantly in the following days, especially the last post, "Shortlist of Longtermist Interventions," and the final two pieces, which are unpublished. I will keep a running list of updates on the series explainer page, so that readers can easily see when each essay has been updated and finalized:

(Brief and Comprehensive Series Explainers Here)

This essay, easily readable as a stand-alone piece, is the first in the series and part of my submission to the "Essays on Longtermism" competition, based on my unpublished work on Deep Reflection (comprehensive examination of crucial considerations to determine the best achievable future). (Deep Reflection Summary Available Here)

TL;DR

This essay comments on two pieces from the Essays on Longtermism collection: Owen Cotton-Barratt and Rose Hadshar's "What Would a Longtermist Society Look Like?" and Hilary Greaves and Christian Tarsney's "Minimal and Expansive Longtermism." 

I argue that while the longtermist community has developed strong theoretical foundations, we face a critical infrastructure gap. We need concrete mechanisms that make longtermism practically achievable without requiring coercion or universal adoption of explicitly longtermist daily behaviors.

The essay establishes several key frameworks that subsequent essays build on: the commoditization thesis (AI will soon make implementation trivial while direction becomes everything, making values work a high-leverage pre-AGI Better Futures intervention), the cooperative collaboration paradigm (community infrastructure that makes all longtermists more effective through novel systematic collaboration mechanisms and sharing of best practices), and the importance of systematic value reflection infrastructure.

Cotton-Barratt and Hadshar describe what longtermist societies might look like but provide limited guidance on mechanisms to create them; Greaves and Tarsney note that expansive longtermism is less robust than minimal approaches.

I show how concrete institutional designs and community infrastructure can make expansive longtermism more tractable, bridging the gap between their theoretical analyses and practical implementation. This essay provides the conceptual foundation that unifies the concrete mechanisms explored in subsequent essays.

A Brief Longtermist Autobiography

When I was a kid, I discovered longtermism on my own, and I have been a longtermist ever since. For most of that time, it has been a lonely journey.

In 2021, I wrote a book draft on longtermism and broad societal strategies to prevent existential risk and achieve the best possible future. I was planning to call it either "Ways to Save the World" or "Paths to Utopia." 

The primary motivation behind writing this book was to launch a longtermist movement.

Then, in January 2022, while preparing for a Master's degree in Social Entrepreneurship, I discovered there already was a longtermist movement!

I became obsessed with the Effective Altruism group at my university and dropped out of school to pursue direct longtermist community building and research while living at a longtermist group house in Berkeley, California.

I was quite thrilled when Will MacAskill wrote the book What We Owe the Future, proposing trajectory change as on par with extinction risk due to the threat of value lock-in (1) (2). Trajectory change was half of my book, and I had been sad to see that most longtermists had gone sour on "Broad Longtermist" strategies (1) (2), which my book preferred, favoring instead narrow strategies such as technical AI safety.

In 2024, I requested a debate week on this topic, and in 2025, I got my wish. Unfortunately, I got carried away writing an essay for it and spent about half a year producing a 35,000-word (still unfinished) essay, although an intermediate summary version is available here.

When I asked for feedback, Will MacAskill shared his own essay series with me, and I realized I needed to update some of my ideas on "Seed Reflection," my new name for "Paths to Utopia," which is quite similar to what he calls "Viatopia."

In the spirit of "draft amnesty week," I am going to share some of my current and previous writings, edited somewhat to bring them closer to my current views, although not saying exactly what I want to say which would require extensive revision;

I had only mildly studied AI before learning about EA and the longtermist movement, and had not even heard of generative AI. I could do a lot more to update my old ideas to bring them in line with my new understanding of AI, as well as of EA, rationality, and many other ideas I have since learned. But the perfect is the enemy of the good, and done is better than perfect. It's time this work saw the light of day, and I do believe that some of these ideas are quite important and could be useful for conceptualizing and pursuing Viatopia/Seed Reflection/Paths to Utopia/Deep Reflection.

I will begin by commenting on two of the "Essays on Longtermism." The first, by Owen Cotton-Barratt and Rose Hadshar, is on what a longtermist society might look like. The second essay, by Hilary Greaves and Christian Tarsney, explores and compares minimal and expansive versions of longtermism.

I will then share a section from my in-progress essay on Deep Reflection, called "Viatopia and Buy-In," which delineates my reasoning on the need for a transitional state between the current world and Deep Reflection, where "Deep Reflection" (such as "The Long Reflection") is an end-stage process in which society comprehensively examines all essential crucial considerations in order to determine how to achieve the best possible future.

Finally, I will explore a couple of my somewhat updated Paths to Utopia, and a long list of promising interventions I would like there to be more public awareness of.

(As per the rules of this competition, none of these have been previously published, and at least a few of them might never have been published if it weren't for this competition.)

My deepest gratitude to the EA Forum, Effective Ventures, Toby Tremlett, Will MacAskill, David Thorstad, Hilary Greaves, Jacob Barrett, and Eva Vivalt; as well as all of the wonderful authors who contributed essays, for their inspiring work on longtermism, and for making this competition possible.

Section 1: Building the Bridge: From Longtermist Theory to Institutional Reality

In "What Would a Longtermist Society Look Like?", Owen Cotton-Barratt and Rose Hadshar (2025) explore what societies with various levels of commitment to longtermism might look like. The definition of longtermism they use is roughly that of "Strong Longtermism" from Greaves and MacAskill (2021): "A longtermist perspective is a perspective which assesses actions almost entirely on the basis of their expected impacts on the far future."

Their analysis reveals an important tension. They note that it seems implausible that both a state and all of its citizens would be strictly longtermist, adhering strictly and exclusively to longtermist ethics. More realistic scenarios include a strictly longtermist state with mixed citizen commitment, or a partially longtermist society where both state and citizens only partially prioritize longtermism. Yet crucially, Cotton-Barratt and Hadshar identify what longtermist societies would need without fully addressing how to create these conditions (what we might call the "actionability gap" in longtermist institutional design).

They provide an illuminating list of instrumental goods even a strictly longtermist society must pursue: having and raising children; education; mechanisms to match people with suitable work; technology and built environments for productive work; psychological health through communities, entertainment, and therapy; nutritious food; housing and domestic goods; healthcare; and stable governance. They note two reasons for maintaining citizen wellbeing: political stability (maintaining power when not all citizens are longtermist) and productivity (citizens need psychological health and energy to be effective in longtermist work).

I strongly agree with their framework. For interventions to be realistic, they must serve public wellbeing; for them to be effective, they must create public goods. However, I believe we can go significantly further in bridging the gap between their theoretical analysis and practical implementation.

The Missing Link: From Description to Mechanism

Cotton-Barratt and Hadshar describe what longtermist societies might look like, but provide limited guidance on the mechanisms to create and sustain them. This is the central challenge: how do we move from our current myopic society to one that naturally embodies longtermist values, without coercion or requiring everyone to be explicitly longtermist in their daily decisions?

This challenge becomes particularly urgent when we consider what I call "skating to where the puck is going." Very soon, advanced AI will enable us to implement nearly any societal design we can imagine. At that point, the limiting factor won't be capability but direction (knowing what we want to create). As AI commoditizes everything except ideas and values, determining the best possible future becomes perhaps the highest-leverage work possible before transformative AI arrives. Whatever institutional designs and value frameworks are well-developed and readily available when this capability arrives are likely to be what we actually implement, simply because we'll use the tools that are lying around rather than conducting an exhaustive search.

This creates two critical imperatives. First, we need concrete institutional designs ready in advance, so that when the capability to implement them arrives, we have vetted, robust options available. Second, we need diversity in our institutional designs. If only one or two viatopia proposals exist, decision-makers might simply choose one and proceed. But if dozens of well-developed proposals exist, each compelling in different ways, this forces the serious reflection and debate necessary to avoid premature lock-in to suboptimal futures. The existence of multiple attractive paths makes it obvious that we should explore the space further before committing.

Viatopia as Practical Implementation

The concept of "viatopia" (an intermediate state that helps humanity converge on the best achievable future) provides the missing link between Cotton-Barratt and Hadshar's theoretical analysis and concrete action. While MacAskill introduced this term and Ord's "Long Reflection" represents one viatopian mechanism, there remains vast unexplored design space for institutions and mechanisms that could serve this function.

My work focuses on developing concrete viatopian mechanisms that address the specific challenges Cotton-Barratt and Hadshar identify. The Hybrid Market, for instance, directly implements their observation that "there is no reason that the state need provide these productivity-enhancing things directly, rather than leaving individuals to obtain them via markets. If the state wishes to tip the scales of consumption choices to account for the externalities they cause, they can do that via taxes and subsidies."

However, rather than relying on state implementation, the Hybrid Market is a decentralized mechanism whose core premise is taxes and subsidies that automatically price in externalities, both positive and negative, across all timeframes, including long-term effects on the far future. This allows society to efficiently move toward better futures by making longtermist considerations economically rational at the individual level, without requiring centralized coordination or everyone to explicitly adopt longtermist ethics.
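To make the premise concrete, here is a minimal sketch, in Python, of how such externality-adjusted pricing might work. All names, numbers, and the discounting scheme are my own illustrative assumptions, not details of the actual Hybrid Market design:

```python
# Minimal sketch of externality-adjusted pricing (hypothetical names/numbers).
# Effective price = market price + present value of estimated externalities,
# where each externality is a (years_from_now, estimated_cost) pair and
# negative costs represent positive externalities (which become subsidies).

def effective_price(market_price, externalities, discount_rate=0.0):
    """Add the discounted sum of estimated externality costs to the price.

    A discount_rate of 0.0 treats far-future effects as fully as near-term
    ones, in keeping with an undiscounted longtermist evaluation.
    """
    adjustment = sum(
        cost / (1 + discount_rate) ** years
        for years, cost in externalities
    )
    return market_price + adjustment

# A good with a $10 market price, a $3 near-term pollution cost, and a
# small positive spillover 50 years out:
print(effective_price(10.0, [(1, 3.0), (50, -0.5)]))  # -> 12.5
```

The point of the sketch is only that such pricing can run decentrally: once externality estimates exist, every transaction can incorporate them without any central planner allocating goods directly.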

The Children's Movement and Systematic Value Evolution

Cotton-Barratt and Hadshar emphasize "educating people so they are well equipped to tackle challenging research work" and "the education of his children to become productive workers." But this understates what may be the highest-leverage intervention available: systematically improving how we raise children to create a generation that naturally embodies the collaborative, long-term-oriented, epistemically rigorous mindset longtermism requires.

The Children's Movement I propose goes beyond education to comprehensive reimagining of how we support child development. By showing children love, empowering them with autonomy, and teaching them to take responsibility for their future, we create adults for whom longtermist interventions seem obvious and natural. This addresses both the political challenge (creating citizens who genuinely support longtermist priorities) and the productivity challenge (developing the psychological health and capabilities needed for effective work) that Cotton-Barratt and Hadshar identify.

More fundamentally, childhood represents perhaps our most crucial leverage point for systematic value improvement. We need institutions and interventions that help humans reflect on values, experiment with different value frameworks, engage in substantive debate, and systematically move toward better values over time. This could include not just childhood interventions, but psychological and social interventions throughout life, new institutions for value deliberation, and AI coaches that help individuals explore their values. (These will be explored in a future essay in this series, "Shortlist of Longtermist Interventions.") The Children's Movement exemplifies this principle by targeting humans at the earliest stage possible, creating compounding effects across entire lifetimes and generations.

This systematic attention to value evolution is essential if we want to "keep the future human" (maintaining human agency and allowing human values to evolve carefully rather than immediately optimizing everything with AI). Some viatopia designs envision humans remaining the primary determiners of values into the far future, at least until we're confident we want AI to have significant influence over value formation. This requires infrastructure (including institutional and technological infrastructure) for deliberate, thoughtful value evolution rather than rushing to lock in our current values.

Existential Compromise: Resolving the Strict/Partial Tension

Cotton-Barratt and Hadshar note a concerning possibility: "rather dystopian scenarios of a coercive state with a strict commitment to Longtermism, ruling over a people that does not share its views." This highlights a fundamental challenge: how can strict and partial longtermism coexist without coercion?

The concept of "existential compromise" provides a potential resolution. This idea builds on Nick Bostrom and Carl Shulman's analysis in "Propositions Concerning Digital Minds and Society," where they showed how vast resources enable win-win compromises. They illustrated this with a specific example: "Consider three possible policies: (A) 100% of resources to humans (B) 100% of resources to super-beneficiaries (C) 99.99% of resources to super-beneficiaries; 0.01% to humans. From a total utilitarian perspective, (C) is approximately 99.99% as good as the most preferred option (B), and from an ordinary human perspective, (C) may also be 90+% as desirable as the most preferred option (A)." William MacAskill has developed similar ideas in his "Grand Bargain" framework in the Better Futures series, exploring how different value systems can reach mutually beneficial agreements given sufficient resources.

The vast scale of the far future (potentially 10^52 or more human-life-equivalents) makes possible agreements that would be impossible with smaller stakes. With sufficient abundance (which advanced AI could provide), we can satisfy both those who want to pursue immediate personal preferences and those who want to optimize for the far future.
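A rough back-of-the-envelope calculation (my own illustration, assuming logarithmic, diminishing-returns utility for the human perspective, which is not an assumption made in Bostrom and Shulman's paper) shows why an option like (C) can satisfy both parties at this scale:

```python
import math

# Back-of-the-envelope: total resources of ~10^52 human-life-equivalents,
# with 0.01% reserved for humans (option C from Bostrom & Shulman's example).
TOTAL = 10.0 ** 52
HUMAN_SHARE = 1e-4  # 0.01%

# Total-utilitarian value is assumed linear in resources:
linear_fraction = 1.0 - HUMAN_SHARE            # ~0.9999 of the optimum (B)

# Human-perspective value is assumed logarithmic (diminishing returns) --
# an illustrative assumption of mine, not a claim from the original paper:
log_fraction = math.log(HUMAN_SHARE * TOTAL) / math.log(TOTAL)  # 48/52 ~ 0.92

print(f"Super-beneficiaries keep {linear_fraction:.2%} of their optimum")
print(f"Humans keep roughly {log_fraction:.0%} of theirs (under log utility)")
```

Under these assumptions, reserving just 0.01% of 10^52 resources still leaves humans with roughly 92% of the value they would assign to receiving everything, while total utilitarians retain 99.99% of theirs: the compromise is nearly costless to both sides.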

For instance, we might guarantee that a substantial fraction of cosmic resources remain available for those who prefer to live in relatively unoptimized, human-scale societies experiencing the full range of human and near-human experiences. Simultaneously, we pursue careful reflection and eventual optimization with the remaining resources (potentially the vast majority). This makes it possible for both strictly longtermist and partially longtermist citizens to coexist within the same civilization, with neither feeling oppressed by the other's values.

Critically, viatopia can be designed to appeal broadly while still moving systematically toward better futures. By ensuring current generations experience genuine wellbeing improvement (not sacrifice for the future), we make longtermist institutions politically viable. This addresses Cotton-Barratt and Hadshar's observation that maintaining citizen happiness is essential both for political stability and psychological productivity.

Building on Existing Foundations

The effective altruism and longtermist community has developed substantial theoretical frameworks and important infrastructure for effective implementation. Existing meta work and community building (through organizations like CEA, 80,000 Hours, and numerous local and student groups) provides crucial foundations. Fellowship programs, career advising, and community platforms like the EA Forum demonstrate the power of well-designed infrastructure to coordinate and amplify individual efforts.

However, there remain significant opportunities to expand this infrastructure, particularly in ways that are more decentralized, crowdsourced, and scalable. The interventions I explore in "Shortlist of Longtermist Interventions" build on these existing foundations: systematic programs to generate and evaluate interventions; platforms for sharing best practices; mechanisms for crowdsourcing and collectively evaluating ideas; research automation tools that scale with AI capabilities; and institutions designed to compound in effectiveness over time.

This represents an important direction for the community: continuing to develop practical infrastructure (including institutional and technological infrastructure) alongside theory development. We need entrepreneurs, institution-builders, and engineers working alongside philosophers and researchers, creating systems that make it easier for more people to contribute effectively to longtermist goals.

Conclusion

Cotton-Barratt and Hadshar provide valuable analysis of what longtermist societies might look like. Their emphasis on instrumental goods, citizen wellbeing, and the distinction between strict and partial longtermism offers a rigorous framework for thinking about institutional design.

However, moving from description to reality requires concrete mechanisms (ways of structuring markets, raising children, organizing communities, and building institutions that naturally embody longtermist values while maintaining broad appeal and respecting human agency). Viatopia provides the conceptual framework; specific implementations like the Hybrid Market and Children's Movement provide the practical mechanisms; and existential compromise provides the political solution that makes it all feasible.

The next section, "Viatopia and Buy-In," explores why viatopia is essential and performs stakeholder mapping to analyze what various actors (AI labs, governments, and the public) can do to move us closer toward viatopia. This connects directly to Cotton-Barratt and Hadshar's concerns about who must be bought into longtermist institutions for them to succeed. Following that, "Shortlist of Longtermist Interventions" details specific interventions and mechanisms, all aimed at creating the infrastructure that allows humanity to reliably navigate toward excellent futures.


Section 2: Making Expansive Longtermism Tractable Through Infrastructure

In "Minimal and Expansive Longtermism," Hilary Greaves and Christian Tarsney (2025) explore a crucial distinction in longtermist thought. While standard arguments establish "minimal longtermism" (focused on targeted interventions against specific technological existential risks), many longtermists find themselves drawn to "expansive longtermism," which holds that nearly all personal and societal decisions should be made considering their impact on the far future.

Greaves and Tarsney present this tension clearly: minimal longtermism enjoys strong evidential support through clear causal mechanisms (certain technologies increase extinction risk; mitigating that risk increases expected future value), while expansive longtermism's arguments are "significantly less robust and significantly more speculative." For minimal longtermist interventions targeting technological x-risks, they note philanthropists could provide funding, talented individuals could devote careers, and policymakers could allocate resources and implement regulations. This might require less than 2% of GDP.

Expansive longtermism, by contrast, could justify spending over 50% of GDP on broad interventions like indirect existential risk mitigation, patient philanthropy, space settlement, accelerating growth, and improving values and institutions. On this view, even breakfast choices become longtermist decisions through opportunity cost or productivity effects. Yet Greaves and Tarsney express legitimate skepticism about whether we can reliably identify which expansive interventions genuinely improve the far future.

I find myself in substantial agreement with both their analytical framework and their cautious optimism. Technological x-risk prevention should indeed be prioritized. However, I believe the case for certain forms of expansive longtermism is stronger than their analysis suggests—not due to further philosophical considerations, but because we can build infrastructure that makes it tractable.

The Tractability Problem and Its Solution

Greaves and Tarsney's primary concern with expansive longtermism is not that it's wrong in principle, but that it's difficult to identify which broad interventions reliably improve the far future. The causal chains are long, complex, and uncertain. How do we know if improving education or institutional decision-making actually helps millennia hence?

This is fundamentally an information problem: we lack the capacity to systematically analyze hundreds of potential broad interventions, strategy considerations, and crucial considerations to determine which are robustly good. But this is precisely the kind of problem advanced AI can help solve, and indeed, the kind of problem where early preparation is essential.

The fact that there are so many different crucial considerations, and it is very difficult to know how they all affect each other and how they affect the long-term future, significantly raises the value of automating high-level strategic work early. William MacAskill has expressed enthusiasm for "automated macrostrategy," recognizing that developing methods for throwing massive amounts of compute at these problems could help us map complex relationships and forecast likely interactions between different strategic considerations.

My 2024 work on research automation using AI workflows demonstrates this potential. Rather than attempting to fully automate research (which current AI cannot reliably do), the focus is on creating an AI research automation tool library that longtermist researchers can use to dramatically accelerate their work. These tools enable systematic analysis of intervention proposals, generation and evaluation of strategic considerations, and exploration of crucial considerations that humans might not naturally consider.

For example, we could run numerous automated Monte Carlo simulations to predict plausible interactions between different interventions and circumstances. AI can generate vast numbers of ideas, far exceeding what humans produce in a given timespan, potentially uncovering crucial considerations we simply wouldn't think of, because AI can draw on many topics, events, and ideas we're not aware of to inform its analysis. As AI capabilities approach and exceed human level in strategic domains (and in some narrow domains, they already far exceed human capabilities), we want these research automation tools already developed and refined, enabling researchers to navigate the intelligence explosion wisely.
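As a toy illustration of the kind of automated simulation meant here, the following sketch samples uncertain standalone effects and pairwise interaction terms for a few interventions and asks how often each is net-positive in context. All intervention names, distributions, and the interaction model are invented for the sketch:

```python
import random

# Toy Monte Carlo over intervention interactions (all numbers invented).
# Each intervention has an uncertain standalone effect; pairs also have
# uncertain interaction terms (synergies or conflicts). We sample many
# "worlds" and ask how often the full portfolio beats each leave-one-out
# portfolio, i.e., how often each intervention is net-positive in context.

INTERVENTIONS = ["values_education", "hybrid_market", "research_automation"]

def sample_world(rng):
    effects = {i: rng.gauss(0.5, 1.0) for i in INTERVENTIONS}
    interactions = {
        (a, b): rng.gauss(0.0, 0.5)
        for idx, a in enumerate(INTERVENTIONS)
        for b in INTERVENTIONS[idx + 1:]
    }
    return effects, interactions

def portfolio_value(chosen, effects, interactions):
    value = sum(effects[i] for i in chosen)
    value += sum(v for (a, b), v in interactions.items()
                 if a in chosen and b in chosen)
    return value

rng = random.Random(0)
wins = {i: 0 for i in INTERVENTIONS}
TRIALS = 10_000
for _ in range(TRIALS):
    effects, interactions = sample_world(rng)
    full = portfolio_value(set(INTERVENTIONS), effects, interactions)
    for i in INTERVENTIONS:
        without = portfolio_value(set(INTERVENTIONS) - {i},
                                  effects, interactions)
        if full > without:
            wins[i] += 1

for i, w in wins.items():
    print(f"{i}: net-positive in context in {w / TRIALS:.0%} of worlds")
```

A real automated-macrostrategy pipeline would of course use far richer models and AI-generated hypotheses, but even this toy version shows how simulation can surface interaction effects that unaided intuition would miss.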

Intervention development and incubation using AI tools seem nearly as important as overall strategic automation. It's not just important to have an accurate overall picture, but also to develop specific tactics for enacting that vision. Moreover, interventions that heavily rely on human-AI symbiosis are possible now and will become increasingly effective as AI improves. If we figure out how to get the greatest amount of AI leverage per human researcher now, that leverage will grow in lockstep as AI becomes more powerful. These tools themselves represent a form of expansive longtermism that becomes more robust over time, compounding in effectiveness precisely when we most need them.

This human-AI symbiosis paradigm, where humans and AI intensively collaborate via mutual feedback loops to increase human effectiveness and agency, may be essential for maintaining human relevance as AI capabilities grow. By systematically developing practices for giving AI profound context about our goals and values at individual, organizational, and societal levels, we can ensure humans remain meaningfully involved in directing the future rather than becoming obsolete. This "strapping humans to AI" creates a foil to pure AI self-improvement.

The Commoditization Thesis and Strategic Focus

Perhaps the most important consideration is what I call the commoditization thesis: as AI capabilities grow, implementation becomes increasingly trivial while direction becomes overwhelmingly important. Soon we'll be able to implement nearly any institutional design, technological development, or societal intervention we can specify clearly. The binding constraint shifts from "can we do this?" to "should we do this? What exactly should we do?"

This suggests that the highest-leverage work in the pre-AGI period may not be direct implementation but rather determining the best possible goals, developing multiple viable paths to good futures, and creating the deliberative infrastructure to choose wisely among them. In other words: expansive longtermism focused specifically on values, institutions, and strategic clarity.

This isn't about micromanaging breakfast choices. It's about recognizing that once AI makes everything else easy, knowing what we want becomes the last remaining challenge. Focusing attention on this challenge now (before path dependencies solidify) represents perhaps the best possible way of "skating to where the puck is going" and utilizing AI at its highest fulcrum leverage point.

Community Infrastructure Interventions and Cooperative Dynamics

An underappreciated aspect of the minimal versus expansive debate is that we're not operating as isolated individuals but as a community. This opens up a crucial category of expansive interventions: community infrastructure interventions that make the entire longtermist community more effective.

Greaves and Tarsney note that expansive longtermism raises coordination challenges. But we can flip this: infrastructure that reduces coordination costs represents high-leverage expansive interventions. The EA community has built valuable foundations through organizations like CEA, 80,000 Hours, and numerous fellowship programs and through platform technologies for coordination, such as the EA Forum, LessWrong, and the AI Alignment Forum. These demonstrate how good infrastructure compounds effectiveness across the entire community.

Building on these foundations, we can develop additional community infrastructure interventions, explored extensively in a later post in this series, "Shortlist of Longtermist Interventions." These include:

  • Fellowship and incubator programs that systematically train researchers and charity entrepreneurs in the neglected but high-leverage areas of Better Futures, Viatopia, and Deep Reflection
  • Platforms where longtermist community members share and collectively evaluate their best ideas for systematically improving longtermist community effectiveness
  • Research automation tool libraries that any researcher can use, compounding collective effectiveness
  • Weekly longtermist mastermind groups where longtermists support each other's projects through shared ideation, feedback, and sharing of best practices and resources
  • Systematic collection and dissemination of best practices from the most effective community members

A particularly promising example is a Charity Entrepreneurship-style Better Futures Fellowship-Incubator that would systematically train cohorts to generate and evaluate 100-200 interventions each, creating both concrete vetted interventions and a scalable pipeline of trained researchers, facilitators, and entrepreneurs. This directly addresses the tractability problem Greaves and Tarsney identify while building on CEA's proven fellowship model.

These represent what we might call community infrastructure interventions: they generate the capacity for more interventions. When one person develops a research workflow that doubles productivity, and that workflow is shared community-wide through a tool library, the community's collective research output potentially doubles. Each such infrastructure project multiplies the impact of all subsequent work.

This cooperative, collaborative paradigm shift is fundamentally important. The longtermist community has many motivated individuals struggling to find paid longtermist roles. But if we systematically help each other become more effective (systematically sharing best practices and creating infrastructure that accelerates the community's ability to generate, evaluate, and launch highly effective interventions), we can achieve something like exponential improvements in community-wide effectiveness.
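A toy model (my own, with made-up multipliers) makes the compounding claim concrete: infrastructure that multiplies everyone's productivity scales the whole community's output geometrically in the number of such projects, unlike help that merely adds to a single person's output:

```python
# Toy model of multiplicative community infrastructure (made-up numbers).
# Additive help boosts one person; shared infrastructure multiplies everyone.

community_size = 100          # researchers
baseline_output = 1.0         # output per researcher, arbitrary units
infra_multipliers = [2.0, 1.3, 1.5]   # e.g. tool library, mastermind groups,
                                      # best-practice sharing (hypothetical)

total_multiplier = 1.0
for m in infra_multipliers:
    total_multiplier *= m     # each project scales the whole community

print(community_size * baseline_output * total_multiplier)  # 390.0
```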

The key insight is that individual effectiveness is not solely determined by individual talent. It's heavily influenced by available tools, clear strategies, supportive networks, synergistic platforms, and systematic practices. By focusing on these communal resources, we can dramatically increase the productivity of average individuals and help highly effective individuals become even more impactful.

Systematic Value Reflection as Core

If the commoditization thesis is correct (if AI increasingly handles implementation while direction becomes crucial), then interventions focused on improving humanity's ability to determine good directions become centrally important. This means infrastructure (including institutional and technological infrastructure) for systematic value reflection.

We need mechanisms that help humans reflect on their values, experiment with different frameworks, engage in substantive debate, and systematically evolve toward better values. This could include:

  • Psychological interventions and tools (including AI coaching) that help individuals explore and refine their values
  • Social institutions designed to facilitate productive moral discourse
  • Educational approaches that build strong epistemics and collaborative mindsets from childhood
  • Political and economic systems that aggregate preferences while guarding against premature value lock-in
  • Deliberative processes that take advantage of AI's ability to model consequences and explore vast possibility spaces

The Children's Movement exemplifies one approach: by systematically improving how we raise children, we create generations that are better equipped to navigate these challenges. But we need a whole ecosystem of value-improving interventions spanning individual psychology, social institutions, and AI-augmented deliberation.

Crucially, this work becomes more urgent as AI timelines shorten. We don't know whether we have decades or years before transformative AI arrives. Pre-AGI institutional design takes time to develop, test, and refine. The institutions and ideas we have ready when advanced AI arrives may be what we end up using, as there will be great pressure to make decisions relatively quickly about what path humanity should take once these capabilities exist. This creates strong reasons to develop these institutions now, even if their deployment is years away.

Keeping the Future Human

One important consideration for choosing between minimal and expansive approaches is uncertainty about what kind of future we want. Some envision AI rapidly optimizing everything according to well-specified values. But we might instead prefer to "keep the future human"—maintaining human agency and allowing human values to evolve carefully over time within societies that aren't radically different from our current experience.

If we take this more gradual path seriously, then expansive interventions focused on human psychological health, moral development, and institutional design become critically important. We need viatopian mechanisms that help human values evolve in positive directions without coercion, that maintain meaningful agency while guarding against catastrophic choices, and that enable us to eventually converge on excellent futures without rushing there immediately.

This connects to a key design criterion for viatopia: balancing agency and guidance. We want to maximize human freedom to explore different possibilities while systematically encouraging movement toward better values. This is a difficult design problem, but it's one worth solving if we want to preserve meaningful human choice in shaping the far future.

Conclusion and Integration

Greaves and Tarsney are right that minimal longtermism enjoys stronger evidential support than expansive longtermism in general. But I argue that certain expansive interventions (particularly those focused on infrastructure, values, and community effectiveness) are more tractable than their analysis suggests.

The key is recognizing that we're in a unique historical moment. AI is about to make implementation trivial but direction critical. We have a brief window to build the infrastructure (including institutional and technological infrastructure) that helps humanity reliably choose good directions. And we have a community of motivated individuals who could be far more effective with better coordination, tools, and systematic practices.

By prioritizing interventions that create broad safeguards against existential risks, systematically improve human values and institutional quality, maximize community effectiveness through infrastructure, and leverage AI in ways that compound over time, we can pursue a form of expansive longtermism that is both philosophically defensible and practically tractable.

Moreover, I agree strongly with Greaves and Tarsney's implicit emphasis: we must not downplay the importance of the current generation's wellbeing. As Cotton-Barratt and Hadshar emphasized in their analysis of longtermist societies, focusing on wellbeing is essential both for political viability and psychological health. Fortunately, there is no fundamental conflict between creating good worlds today and ensuring excellent futures. By prioritizing high-leverage, wellbeing-promoting interventions and ensuring everyone's needs are met, while simultaneously creating high-leverage interventions for values research, reflection, experimentation, and debate, we can create a present that naturally evolves into an even better future—and perhaps eventually the best future achievable.

The next section, "Viatopia and Buy-In," explores why viatopia is essential and performs stakeholder mapping to analyze what various actors (AI labs, governments, and the public) can do to move us closer toward viatopia. This directly addresses the feasibility concerns that Greaves and Tarsney raise about expansive interventions. Following that, "Shortlist of Longtermist Interventions" details specific interventions embodying these principles, demonstrating how expansive longtermism can move from philosophical speculation to practical institutional reality, while " Hybrid Market" and " Children's Movement" explore two wide-ranging visions for Viatopia.

In the next essay, "Why Viatopia is Important," I provide the theoretical foundation from my Deep Reflection work, explaining Will MacAskill's viatopia concept and why it matters for achieving the best achievable future. The essay introduces the multiplicative crucial considerations framework, showing why dozens to hundreds of interacting factors make comprehensive reflection orders of magnitude more valuable than narrow approaches. It explores the commoditization thesis in greater depth, explains how diversity of viatopia paths prevents premature lock-in through a bootstrapping mechanism, and discusses parallels and differences between MacAskill's Better Futures framework and my own work. This theoretical foundation establishes why the practical mechanisms explored throughout this series are essential for navigating the challenges ahead.
