
Epistemic status: 

This series, written for the "Essays on Longtermism" competition, was my first experiment in using Claude 4.5 Sonnet as a ghostwriter, and I learned some valuable lessons worth sharing. 

While I feel relatively good about this final product, I had to do extensive editing to correct errors in Claude’s initial version, and due to serious time constraints (the original series was written in three days) the version I submitted for the competition contained some obvious errors I missed and, more importantly, some very subtle errors which I am quite embarrassed by. 

While the large majority of the ideas in the series are my own, Claude did a good, and sometimes excellent, job of articulating them and a decent job of organizing them in a logical sequence. My main complaint is that the essays are occasionally repetitive, with some elements duplicated across essays; this was partly intentional, so that each essay could easily be read as a fully stand-alone piece. Claude can also be wildly miscalibrated, often claiming something is “the best” or “the most important” when there is no strong evidence for this and no consideration of alternatives.

Additionally, while I have tried to remove any subtle errors, I have occasionally realized I missed one that was actually quite important, so I apologize for any that remain. This is a big challenge: Claude can sound very eloquent and yet be completely wrong, misconstruing ideas in logical-sounding yet subtly misleading ways. 

My biggest recommendation to anyone using AI for ghostwriting is to read the writing very carefully, think it through, and make sure the ideas are correct, as well as to perhaps explicitly mention that AI was used in the writing and that subtle errors of this type may remain.

With those caveats in mind, here’s a link to the initial version of the essay submitted to the “Essays on Longtermism” competition.

I also want to note that while this series represents some of my best ideas over the last few years, it does not have the level of finesse and flavor of posts I typically like to write, perhaps with the exception of “Viatopia and Buy-In,” which was an excerpt from my in-progress essay on “Deep Reflection.” 

Deep Reflection is the comprehensive examination of crucial considerations to determine a strategy for achieving the best achievable future; this series was largely based on my research on Deep Reflection. (Deep Reflection Summary Available Here)

As mentioned, this essay, easily readable as a stand-alone piece, is the fourth in the series and part of my submission to the "Essays on Longtermism" competition.

Brief and Comprehensive Series Explainers Here

TL;DR

This essay presents a shortlist of concrete high-leverage Better Futures interventions for moving toward MacAskill’s ‘viatopia.’ Viatopia, a concept introduced by Will MacAskill, refers to “a state of the world where society can guide itself towards near-best outcomes.” These viatopian interventions are based on the key principles established in Introduction to Building Cooperative Viatopia, Why Viatopia is Important, and Viatopia and Buy-In.

These principles prioritize interventions that: 

  • Improve infrastructure for the longtermist community
  • Focus on values and human psychology/flourishing
  • Improve exponentially through compounding feedback loops
  • Enhance strategy
  • Generate more resources and interventions
  • Create ongoing self-improving institutions
  • Leverage AI in ways that become more effective as compute and capabilities grow

The interventions span multiple categories: 

  • Fellowship programs and incubators (like a Charity Entrepreneurship-style Better Futures fellowship)
  • Research automation tools (workflows that help researchers be exponentially more productive)
  • Coordination platforms (novel systematic collaboration mechanisms for sharing best practices and ideas)
  • Value reflection infrastructure (institutions and technologies for systematic moral progress)
  • Field-building initiatives (creating the ecosystem needed for Viatopia work to flourish)
  • AI tools designed to compound human effectiveness over time

This essay addresses the gap between longtermist theory (which the EA community has developed extensively) and practical implementation infrastructure (which has significant room for growth), especially with regard to work on Better Futures in general and Viatopia in particular.

Each intervention includes a brief rationale for why it's high-leverage, demonstrating the breadth of concrete work we could be doing right now. The list shows that moving from philosophical arguments about viatopia to actual implementation is achievable by building extensive practical infrastructure.

Please note that while there are many important existing ideas in this space, I focused here mainly on ones that I have come up with myself, as I want to provide some fresh perspectives inspired by my recent thinking, as explored in earlier essays in this series.

Interventions

Intervention 1. General List of Foundational Infrastructure for Viatopia

I am generally presenting these interventions in priority order. However, I will first start with this list I created a few months ago, which serves as a brief, prioritized orientation to the space. I originally created this list of interventions for Deep Reflection, but it applies mostly equally to Viatopia and serves as a great starting point for highly effective interventions. I will include research as an “intervention” when research is necessary to have the right strategic picture to take effective action. You may notice these are mainly "meta" interventions; that is because I believe such interventions are especially high-impact early in the development of a field like Better Futures/Viatopia/Deep Reflection. For a great list of object-level interventions, see MacAskill’s essay on this topic.

  • Develop an overall strategy and prioritize research and interventions to orient the field
    • To take advantage of the bitter lesson (link), an essential part of this is having a hierarchical, prioritized longlist of all of the important components of Viatopia/Deep Reflection research and interventions for progressively advanced AI to automate.
      • For example, if creating a hierarchical research list for deep reflection, at the top of the list could be some simplistic instruction like “do research to figure out the best possible strategy.” Then, on the next level down, “solve morality and all empirical crucial considerations and figure out what the best possible strategy is.” Then, “Research all of these specific research questions on moral philosophy, and all of these specific crucial considerations, then search for any missing considerations, and synthesize the answers to figure out what the best possible strategy is.” etc. (footnote: This is only meant to be illustrative, such a project would require much more thought and care to make sure such a list includes carefully framed research agendas with data, examples, carefully crafted instructions, well thought-through projects and interventions, an overall worldview and frame to act within, and progressive levels of decreasing scaffolding for progressive levels of advanced AI.)
      • Start with a broad list of candidate technologies, and then narrow down to the AI technologies most important for ensuring Viatopia/Deep Reflection, so that as soon as those technologies become possible we can build them – and perhaps prepare data pipelines, scaffolding & post-training, compute allocation, and address non-AI barriers in advance to accelerate these tools.
  • “Blitzscaling” Viatopia/Deep Reflection field-building – a rapid ambitious buildup of the Viatopia/Deep Reflection ecosystem to become the first mover at scale advocating detailed robust proposals for what to do with advanced AI, before it arrives
    • Create a central organization focused on field building and infrastructure
    • Write up existing strategies and tactics known to scale a field extremely rapidly without sacrificing quality
    • Launch a Charity Entrepreneurship-style incubator for high-impact Viatopia/Deep Reflection organizations
    • Create a database of funding opportunities for Viatopia/Deep Reflection work
    • Targeting comparatively advantaged and motivated talent: promote understanding of the key concepts, secure commitment, direct talent toward key projects, and build a synergistic, interconnected community
      • Community building and advocacy at existing effective altruism groups and organizations e.g. student groups, city groups, online groups, podcasters and influencers, grantmaking organizations, and research organizations
        • Create a week’s worth of readings/content for weekly groups, or an entire fellowship focused on Viatopia/Deep Reflection
        • A “Global Challenges Project” (Link) for Viatopia/Deep Reflection, which puts on Viatopia/Deep Reflection workshops and events for groups and organizations
    • Create the necessary infrastructure to:
      • Ensure a cohesive research portfolio
        • Create an archive plus summaries of all previous research on Viatopia/Deep Reflection
        • Create a registry for existing research
        • Create programs to train or mentor aspiring Viatopia/Deep Reflection researchers
      • Build consensus on highly leveraged opportunities & collaborate on the important projects this consensus surfaces
    • Create a directory of all current work on Viatopia/Deep Reflection
  • Detailed analyses on actual Viatopia/Deep Reflection processes and how likely they are to succeed
    • Fundamental research and mechanism design on viatopia and other seed reflection processes to determine the most feasible, effective, and actionable processes and mechanisms.
      • First-principles model building and analysis of the fundamental factors of seed reflection and viatopia
      • World-building (link), but with the constraint that the world-builds must reliably converge on the best possible world
      • Crowdsourcing ideas through prize competitions or a Viatopia/Deep Reflection mechanism-design course (links FLI and Foresight examples of both)
    • Research on Viatopia/Deep Reflection strategies besides viatopia (footnote to the list I had earlier, right before this final conclusion)
    • Literature review and analysis of any existing Viatopia/Deep Reflection processes that have been designed besides The Long Reflection, CEV, and Good Reflective Governance
      • Existing world-builds (links) and utopias with unique institutions, governance, and societal mechanisms could be useful to examine, even if these were not explicitly designed to converge on the best possible future
      • It could be useful to map out the possibility space of all the fundamental components which vary across different imagined worlds, in order to create a comprehensive list of variables and a model for enabling Monte Carlo simulations, stress testing, and the ability to generate new worlds by recombining variables; as well as the possibility of using AI automation to explore new worlds and various scenarios in a structured way
  • Foundational and strategy research on Viatopia/Deep Reflection governance and advocacy
    • Determining who is highest leverage to target
    • Determining the key components and bottlenecks of effective governance work
    • Developing governance proposals that will be most well-received and most likely to make a difference with AI labs and policymakers
    • Talking with AI labs and policymakers to get feedback on early ideas
    • Exploring the impact of other avenues for advocacy to the public, entrepreneurs and businesses, social entrepreneurs and civil society, academia and public intellectuals, other influencers, and any other important actors
  • Determining how Viatopia/Deep Reflection and Extinction Security interact
    • Research on how different Viatopia/Deep Reflection mechanisms differentially affect extinction risk
    • How much does Viatopia/Deep Reflection reduce post-alignment extinction risk? (Footnote: If I were to rewrite this essay from scratch, this may be the one thing I would want to add. I am worried I may have significantly understated the value of Viatopia/Deep Reflection work and so failed to fully assess its importance, as much or most of its value may come from extinction risk reduction.)
    • What are the positive and negative impacts of various AI safety approaches on Viatopia/Deep Reflection?
    • What interventions are synergistic between these two cause areas? – While this essay has juxtaposed the two for the purpose of evaluation, ideally they should both be considered when choosing interventions to create the most value across both cause areas.
  • Research on which specific interventions from these adjacent cause areas would be most effective for increasing the likelihood of Viatopia/Deep Reflection:
    • AI for epistemics
    • Avoiding concentration of power (link)
    • Improving institutional decision-making
    • Automation of wisdom

Intervention 2. Charity Entrepreneurship-style intervention research pipeline for Better Futures, especially Viatopia/Deep Reflection

I believe enormous value is at stake in Better Futures, so information about high-impact interventions is especially valuable. This is particularly true of Viatopia interventions, which ultimately lead to comprehensive reflection (Deep Reflection), in which we have analyzed all crucial considerations and, equally importantly, reached a state of humanity in which we are likely to take action to ~maximize value based on those considerations via “existential compromise.”

The research pipeline would generate 200-300 interventions across Better Futures/Viatopia/Deep Reflection focus areas and then do progressively deeper dives to find which are likely to be most effective and most robust against downside risk and failure.

Charity Entrepreneurship-style Better Futures Fellowship-Incubator 1-Page Summary:

Following my personal experience developing and enacting the Charity Entrepreneurship-style Better Futures research pipeline, I could leverage my facilitation experience leading fellowships and facilitator trainings on EA/longtermism/x-risk for CEA and at UC Berkeley to launch a fellowship with 16-week cohorts.

These cohorts would participate in reading/discussion groups on a core curriculum encompassing:

  1. The essential theoretical/strategic foundations of Better Futures (and especially Viatopia/Deep Reflection)
  2. Charity Entrepreneurship intervention pipeline research methodology
  3. Extensive education on effectively using AI tools for automating research at every step in the process 

Fellows would then each generate and progressively evaluate 100-200 interventions in whichever Better Futures focus areas they have personal interest or expertise in, with emphasis on interventions most likely to create a state of Viatopia, eventually leading to comprehensive reflection.

This fellowship has the advantage that it upskills fellows who are then in a much better position to, depending on their comparative advantage:

  1. Enact the interventions they have designed
  2. Become full-time intervention researchers
  3. Become future fellowship facilitators to help keep scaling the program

This structure would scalably increase the amount of talent focused on high-impact Better Futures research, while simultaneously increasing the tractability of the field by developing highly effective interventions.

This takes advantage of the massive gap between interest in the field and the seeming paucity of specialized talent, organizations, and interventions, as evidenced by my analysis of the EA Forum "existential choices debate,” which showed a slight preference for work on improving the quality of futures in which we survive over work on increasing our chances of survival (n=366) (link).

Charity Entrepreneurship's Viatopia/Deep Reflection adaptation.

(very rough draft)

Stage 0 — Process Design (30 hours): Pre-register decision criteria, establish a "bar to beat" (interventions must demonstrate X% estimated increase in likelihood of comprehensive reflection, without increasing x-risk), define evaluation weights, and specify kill-criteria.

Stage 1 — Mass Hypothesis Generation (200-300 ideas, 50 hours): Generate interventions across categories using literature sweeps, brainstorming, and expert consultation. Categories include AI tools, organizations, new institutions, policy advocacy, social mechanisms, governance mechanisms, and field-building. Define major theories of change, such as creating self-reinforcing cycles toward comprehensive reflection.

Stage 2 — Quick Prioritization (~50 ideas, 120-150 hours): Apply four rapid evaluation methods: quick cost-effectiveness estimate, initial weighted factor scoring, evidence quality scan, and informed consideration snapshot.
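To make Stage 0's pre-registered criteria and Stage 2's weighted factor scoring concrete, here is a minimal sketch in Python; the factor names, weights, and threshold are illustrative assumptions of mine, not Charity Entrepreneurship's actual criteria:

```python
# Minimal sketch of Stage 2 weighted factor scoring against a pre-registered
# "bar to beat". Factor names, weights, and the threshold are placeholder
# assumptions, not Charity Entrepreneurship's actual criteria.

FACTOR_WEIGHTS = {
    "expected_impact": 0.35,         # estimated increase in likelihood of comprehensive reflection
    "tractability": 0.25,
    "evidence_quality": 0.15,
    "robustness_to_downside": 0.25,  # guards against increasing x-risk
}

BAR_TO_BEAT = 6.0  # pre-registered in Stage 0; interventions scoring below are dropped


def score_intervention(ratings: dict[str, float]) -> float:
    """Combine 0-10 factor ratings into a single weighted score."""
    return sum(FACTOR_WEIGHTS[factor] * ratings[factor] for factor in FACTOR_WEIGHTS)


def quick_prioritize(candidates: dict[str, dict[str, float]]) -> list[str]:
    """Keep only interventions that clear the pre-registered bar, best first."""
    scores = {name: score_intervention(r) for name, r in candidates.items()}
    survivors = {name: s for name, s in scores.items() if s >= BAR_TO_BEAT}
    return sorted(survivors, key=survivors.get, reverse=True)
```

Pre-registering the weights and the bar before scoring begins is what keeps researchers from unconsciously adjusting criteria to favor pet interventions.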

Stage 3 — Expert Review & Critical Uncertainties (~10 ideas, 100 hours): Interview 1+ experts per intervention. Identify 1-6 "killer" uncertainties and spend 90 minutes on each to address them. Red-team assumptions.

Stage 4 — Deep Dives (~5 ideas, 250 hours): Produce comprehensive reports including: detailed theory of change, stakeholder power/interest matrices, scalability assessments, externality analysis, crucial considerations, and expert review.

Stage 5 — Implementation Blueprints (2-4 finalists, 5-30 hours): Create full "charity blueprints"—detailed 2-year launch plans with monthly milestones, team composition, funding roadmaps distinguishing EA vs. mainstream sources, measurable success proxy metrics for increased reflection likelihood, and limiting factor mitigation strategies.

Supporting Infrastructure: I would develop AI research tools to accelerate this work, based on my award-winning work on the automation of research. Tools would be made publicly available for other high-impact researchers.

Documentation and Scale: Every process would be documented to facilitate future work, including potential growth into a full CE-style Better Futures incubator, sustainably improving the field’s tractability and reducing its neglectedness.

Personal fit: My entrepreneurial experience, facilitation background (CEA, UC Berkeley), and research foundation position me uniquely to bridge theory and practice.

This work will make extensive use of AI automation to help with the research. However, since this is in a rough stage, I will save this part of the intervention for another time.

Intervention 3. Crux: Systematically Mapping Crucial Considerations Through Targeted Infrastructure

The Strategic Knowledge Fragmentation Problem

One of longtermism's most persistent challenges is that crucial strategic insights remain scattered across individual researchers' minds, buried in long papers few read completely, or confined to private conversations. A researcher at one organization might understand something critical that researchers at another organization desperately need to know, but the insight never transfers. New researchers entering the field must slowly reconstruct the community's strategic understanding through years of reading and conversation, reinventing wheels and missing key considerations. Meanwhile, we face multiplicative crucial considerations where missing even one important factor could reduce future value by orders of magnitude, yet we lack systematic infrastructure for ensuring all researchers are aware of all crucial considerations the community has identified.

This is fundamentally a collective intelligence problem. Each researcher has limited time and attention. Long papers and forum posts serve important purposes, but they're not optimized for rapid knowledge transfer of the single most important insights. We need infrastructure that encourages researchers to distill their best thinking to its essence and makes those essences rapidly consumable by everyone else, creating what I like to call a “firehose of the most important ideas in the world, directly into every researcher's mind.”

Crux: A Platform for Crucial Considerations

Crux is a platform currently in development (by Dony Christie) designed to solve this problem through a combination of encouraged distillation, sophisticated incentives, and AI-assisted drafting. The core mechanism is simple: each user can post at most one "crux" per day, where a crux is defined by three core components:

  • A crux is a crucial consideration that materially affects existential risk or the long-term quality of the future
  • A crux is action-relevant (makes a difference to what we could actually do)
  • A crux is non-obvious (not already well-known, such that spreading awareness would have significant impact)

Users are also encouraged to consider sub-components of action-relevance, such as:

  • Strategic intelligence
  • Ease of implementation
  • Opportunity cost
  • Leverage
  • Scalability
  • Robustness against failure of action
  • Robustness against downside risk of action

(These are just a few examples; we plan to map out what makes a great crux much more deeply in the future.)

Every crux must be readable in 1-5 minutes. The title itself states the crux as a short sentence, enabling users to rapidly scan dozens of cruxes and immediately identify whether they already understand each consideration or need to read further. This creates an extremely efficient knowledge consumption experience: researchers can move through the platform quickly, absorbing only genuinely new insights rather than re-reading familiar ideas.

The one-post-per-day limit forces genuine prioritization. Rather than posting whenever an idea occurs to them, users must ask: "Of everything I understand that others might not, what is the single most important consideration I could share today?" The platform encourages spending an hour daily on this practice—thinking carefully about one's most important insight, then using the built-in AI ghostwriting system (customized Claude with crux-optimized instructions) to iteratively refine the explanation. Users speak naturally to the AI, which asks clarifying questions and iteratively improves the writeup until it captures the essential insight in the clearest, most concise form possible. This eases the work of painstakingly compressing an idea into a short, highly optimized explanation, enabling researchers to focus on the core intellectual work of identifying and explaining important considerations while AI handles the communication optimization.

Cruxes can span many categories: strategic considerations, epistemic insights, community or institutional improvements, infrastructure proposals, new interventions, philosophical considerations, personal effectiveness practices, or effective ways of leveraging AI. The unifying criterion is that each must be a genuine crux—something that, if the community understood and acted on it, could significantly improve the expected value of the future.

The Incentive Architecture

Crux employs sophisticated mechanisms to ensure the platform surfaces genuinely important considerations rather than devolving into noise. All cruxes are rated 0-11 (though 0 and 11 are effectively taboo, as nothing has literally zero relevance or guarantees a best possible future, making this functionally a 1-10 scale). 

Critically, the platform uses believability-weighted voting inspired by Ray Dalio's believability-weighted decision-making. Your voting power isn't determined by activity level or tenure, but by your calibration: how well your early ratings of cruxes predict what the community ultimately rates them long-term. If you consistently identify important cruxes before others do, your future ratings carry more weight. This creates powerful incentives to rate carefully and only on topics where you have genuine expertise, rather than superficially voting on everything.

The system tracks not just your post karma but your rating calibration score, decomposed into earliness (how quickly you identify important cruxes) and accuracy (how well your ratings match eventual consensus). This makes the platform self-improving: over time, the best judges of importance have the most influence, making the community's collective prioritization increasingly accurate.
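As a rough illustration of how calibration-weighted voting could work (the exact scoring rules are still being designed, so the formulas below are placeholder assumptions of mine, not Crux's actual algorithm):

```python
import math

# Illustrative sketch of believability-weighted rating. A user's voting
# power grows with how accurately and how early their past ratings
# anticipated the eventual community consensus. Formulas are placeholders.


def accuracy(past_ratings: list[tuple[float, float]]) -> float:
    """Mean closeness of a user's ratings to eventual consensus, on [0, 1].

    Each tuple is (user_rating, eventual_consensus), both on the 1-10 scale.
    """
    if not past_ratings:
        return 0.5  # neutral prior for new raters
    errors = [abs(user - consensus) / 9.0 for user, consensus in past_ratings]
    return 1.0 - sum(errors) / len(errors)


def earliness(days_before_consensus: list[float], half_life: float = 30.0) -> float:
    """Average bonus on [0, 1) for rating well before consensus settles."""
    if not days_before_consensus:
        return 0.0
    bonuses = [1.0 - math.exp(-d / half_life) for d in days_before_consensus]
    return sum(bonuses) / len(bonuses)


def vote_weight(past_ratings, days_before_consensus) -> float:
    """Combine accuracy and earliness into a single voting-power multiplier."""
    return accuracy(past_ratings) * (1.0 + earliness(days_before_consensus))
```

Under a scheme like this, accuracy alone caps a rater's weight at 1.0, while consistently early identification of important cruxes can roughly double it.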

Each crux contains hierarchically organized sub-cruxes—comments and replies to comments that explore crucial considerations within the main crux. These are also rated 0-11, creating a nested structure where you can see not just the most important top-level considerations but also the most important sub-considerations within each. This builds comprehensive maps of strategic considerations rather than isolated insights. When you click into a crux, you see a threaded hierarchy showing which objections, refinements, or extensions the community judged most important, enabling rapid absorption of the full consideration landscape around any topic.
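A minimal sketch of the nested data model this implies, with illustrative field names rather than Crux's actual schema:

```python
from dataclasses import dataclass, field

# Sketch of the nested crux data model described above. Field names are
# illustrative assumptions, not Crux's actual schema.


@dataclass
class Crux:
    title: str  # the crux stated as a short sentence
    body: str   # the 1-5 minute explanation
    ratings: list[tuple[float, float]] = field(default_factory=list)  # (rating, voter_weight)
    sub_cruxes: list["Crux"] = field(default_factory=list)

    def score(self) -> float:
        """Believability-weighted mean rating on the effective 1-10 scale."""
        total = sum(weight for _, weight in self.ratings)
        return sum(r * w for r, w in self.ratings) / total if total else 0.0

    def top_sub_cruxes(self, k: int = 3) -> list["Crux"]:
        """The objections, refinements, or extensions judged most important."""
        return sorted(self.sub_cruxes, key=Crux.score, reverse=True)[:k]
```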

We hope to offer weekly monetary prizes to incentivize both the best crux and the best rating calibration, potentially with additional prizes for best sub-cruxes. This creates strong motivation to think carefully about your daily idea, ensure it's genuinely crucial, and explain it clearly—while also incentivizing deep engagement with others' ideas and careful thought about what's actually most important.

An additional benefit is that these monetary prizes could provide financial support for highly impactful researchers who lack adequate funding, enabling them to continue strategic research or invest in research multipliers such as research assistants and compute for AI research automation tools.

Why This Accelerates Longtermist Strategy

This infrastructure addresses the multiplicative crucial considerations challenge directly. With potentially dozens to hundreds of factors affecting future value, we need systematic processes ensuring researchers are aware of all important considerations. Crux creates exactly this: a living, community-maintained map of crucial considerations, constantly refined and expanded as researchers identify new factors or better understand existing ones.

The forced daily practice has compounding effects. If the field's best researchers each identify and clearly explain their single most important insight daily, within months the platform contains hundreds of carefully distilled strategic considerations. Within years, thousands. New researchers can absorb this collective wisdom far faster than through traditional onboarding. Instead of years reconstructing the community's understanding through reading and conversation, they can rapidly scan the highest-rated cruxes and quickly identify which considerations they're missing.

The action-orientation ensures these aren't just interesting ideas but factors that should influence decisions. The non-obviousness criterion means the platform surfaces under-rated important knowledge—considerations some researchers understand but others don't yet know they're missing. The hierarchical structure and semantic search (with AI-generated tags) enable discovering related considerations and building comprehensive understanding of how different factors interact.

Importantly, Crux makes gaps visible. When you can see what the community has extensively discussed and what hasn't been mentioned, you identify blind spots. If everyone is posting about AI governance but nobody about space settlement considerations, that absence itself becomes informative. The platform creates a collective strategic map that reveals both what we know and what we're neglecting.

This shares important features with existing platforms like the EA Forum, LessWrong, and the Alignment Forum, all of which enable knowledge sharing and discussion in the longtermist and x-risk communities. What makes Crux distinct is the extreme focus on crucial considerations only (not general discussion or preliminary ideas), the forced brevity (1-5 minutes rather than potentially hour-long reads), the one-post-daily limit forcing prioritization, and the sophisticated calibration-based incentive system. It's complementary to these platforms rather than competitive—different infrastructure optimized for a specific purpose: rapidly building and maintaining shared awareness of the most important strategic considerations.

Implementation and Next Steps

Crux is currently being developed by Dony Christie, who has written about potentially complementary mechanisms in his work on retroactive funding and impact certificates. The platform may experiment with impact certificates as a mechanism for retroactive funding, where future stakeholders could purchase certificates associated with particularly important cruxes, rewarding the strategic work that helped create good futures. This could allow concrete cooperation with citizens of the long-term future, giving them the capacity to incentivize the creation of the best versions of themselves through retroactive funding. This remains exploratory given regulatory uncertainties, but represents an interesting alignment of incentives with long-term value creation.

Platform design emphasizes both clarity and engagement—the interface should make it effortless to consume important ideas while also making the practice of identifying and sharing cruxes feel meaningful and energizing rather than burdensome. Semantic search, automated linking to the Crux wiki anytime jargon is used, and thoughtful information architecture ensure accessibility without sacrificing depth.

The aspirational vision is that all leading x-risk and longtermist researchers adopt a daily practice: spending one hour identifying their most important current insight, refining it with AI assistance, and posting it to Crux. Similarly, researchers would spend time regularly engaging with others' cruxes, rating them carefully, and contributing important sub-cruxes that refine or challenge the considerations. Over time, this creates a comprehensive, dynamically updated strategic commons—a shared map of everything the community understands about what matters for achieving good long-term futures.

For those interested in beta testing the platform or learning more about its development, please reach out via private message. This infrastructure has potential to dramatically accelerate our collective strategic capacity precisely when we most need it: in the critically high-leverage time before the arrival of transformative AI.

Intervention 4. Human-AI Symbiosis: A Framework for Differential Human Acceleration

Core Principle: We can only slow AI progress for so long and by so much. To navigate the transition to advanced AI while preserving human agency and ensuring wise outcomes, we must differentially accelerate human capabilities in three domains (in order of priority): values and wisdom, wellbeing, and agency. Human-AI Symbiosis represents infrastructure and technologies that systematically amplify these human capacities as AI capabilities grow, ensuring humans remain relevant decision-makers rather than being left behind.

Loosely inspired by J.C.R. Licklider's "Man-Computer Symbiosis" (1960), which envisioned computers extending human capabilities, this framework extends that vision to advanced AI. The central insight: AI models possess in-context learning capabilities that are radically underutilized. By providing comprehensive personal and collective context, creating systematic feedback loops, and building the right infrastructure, we can create human-AI systems that compound effectiveness over time, allowing humans to advance alongside AI rather than being superseded by it.

This framework represents the cooperation dimension of "Building Cooperative Viatopia" applied to human-AI relationships, both individually and collectively. It embodies positive longtermism principles: creating positive-sum dynamics through virtuous feedback loops where improving human wisdom and capabilities generates more resources to do even more good, benefiting everyone through cooperation rather than competition or control.

Why This Matters for Viatopia

The instrumental commoditization thesis suggests AI will soon make implementation trivial while direction becomes critical. But if humans can't keep up with AI's strategic sophistication, we risk either handing over decision-making to systems we don't fully understand, or making hasty decisions with inadequate wisdom. Human-AI Symbiosis infrastructure ensures humans remain capable of making wise choices affecting the far future even as AI capabilities increase dramatically.

Moreover, for viatopia to succeed as a deliberate path toward better futures, humanity needs not just theoretical frameworks but practical tools that systematically improve our wisdom, wellbeing, and agency. These tools must scale automatically as AI improves, creating a positive feedback loop where better AI produces more capable humans who can make wiser use of even better AI.

Five Infrastructure Categories

A. Technical Infrastructure for Research and Strategy

AI workflow libraries that automate key thinking processes for researchers, particularly in high-stakes domains like AI strategy and existential risk. These workflows break complex reasoning into component steps, add natural language error-correcting codes (epistemic checks, bias detection, rationality techniques), and chain together to solve multistep problems. As base models improve, the same workflows become dramatically more powerful. Research organizations act as "scaffolding to bootstrap artificial wisdom," gradually transitioning from humans doing research with occasional AI assistance to humans designing workflows that orchestrate AI's research capabilities.

Key interventions include: 

  • Systematic workflow databases covering hypothesis generation, strategic analysis, crucial consideration mapping, bias checking, and synthesis
  • Meta-workflows that automatically select and combine workflows based on task type
  • Performance metrics and iterative improvement systems that make workflows progressively more effective
  • Public workflow libraries enabling community-wide productivity multiplication
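To make the workflow-chaining pattern above concrete, here is a minimal sketch; `call_model` is a placeholder for whatever LLM API is used, and the prompts are illustrative rather than a tested workflow library:

```python
# Minimal sketch of chaining workflow steps with a natural-language
# error-correcting pass after each one. `call_model` is a placeholder for
# any LLM API; the prompts are illustrative assumptions.


def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")


EPISTEMIC_CHECK = (
    "Review the analysis below for overconfidence, missing considerations, "
    "and common reasoning biases, then return a corrected version.\n\n{draft}"
)


def checked_step(instruction: str, context: str) -> str:
    """Run one reasoning step, then an epistemic/bias-checking pass."""
    draft = call_model(f"{instruction}\n\nContext:\n{context}")
    return call_model(EPISTEMIC_CHECK.format(draft=draft))


def run_workflow(question: str, steps: list[str]) -> str:
    """Chain component steps; each builds on the checked output of the last."""
    context = question
    for step in steps:
        context = checked_step(step, context)
    return context


strategy_workflow = [
    "Generate candidate hypotheses bearing on the question.",
    "Map the crucial considerations affecting each hypothesis.",
    "Synthesize the considerations into ranked conclusions with uncertainties.",
]
```

Because the workflow is just structured natural language around the model, the same `strategy_workflow` automatically improves as the underlying base model improves, which is the capability-scaling property described above.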

B. Personal Infrastructure for Individual Optimization

Systems that elicit comprehensive personal context (life goals, values, daily routines, learning styles, work patterns, important relationships, etc.) and make this accessible to AI through various technical architectures. This enables AI to act as a "mega-coach" that helps individuals optimize across all life domains, systematically working toward best-self goals rather than short-term gratification or zero-sum dynamics.

Key interventions include: 

  • Protocols for creating comprehensive personal databases
  • Voice/wearable recording systems that capture life data for continuous AI understanding
  • AI coaches fine-tuned on data from the best human therapists, coaches, and teachers
  • Automated content screening systems that scan news, research, and media to develop highly optimized informational briefs for enhanced self-actualization
  • Life integration tools that help optimize habits, decision-making, self-development, and values reflection
  • Summarization/personalization AI that extracts the essence from large amounts of information at an appropriate level of detail for a given individual

C. Community Infrastructure for Collective Multiplication

Platforms and systems that enable longtermists (and humanity broadly) to share AI tools, best practices, and insights, creating network effects where each person's AI improvements benefit everyone. This addresses the anti-memetic nature of altruism (hard to persuade others to be altruistic) by pairing it with the memetic nature of positivity and practical effectiveness (easy to spread).

Key interventions include: 

  • GitWise-style platforms where users share prompts, workflows, and AI interaction techniques, rated by effectiveness
  • Matching systems that use neural network embeddings of deep personal data to connect collaborators, co-founders, or partners with shared values and complementary strengths (see the sketch after this list)
  • Coordination infrastructure where personal AIs communicate to find positive synergies and resolve conflicts
  • Systematic collection and dissemination of best AI practices from the most effective community members
  • Public libraries of highly optimized specialized AI personalities (research assistant, therapist, strategic advisor, etc.) that anyone can use
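As a sketch of the embedding-based matching idea flagged above, assuming a hypothetical `embed` function that maps a personal profile to a vector:

```python
import numpy as np

# Illustrative sketch of embedding-based matching. `embed` is a hypothetical
# function mapping a personal profile (values, goals, skills) to a vector;
# any sentence-embedding model could fill this role.


def embed(profile_text: str) -> np.ndarray:
    raise NotImplementedError("plug in an embedding model here")


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def best_matches(profile: str, candidates: dict[str, str], top_k: int = 5):
    """Rank candidate collaborators by similarity of profile embeddings."""
    query = embed(profile)
    scores = {name: cosine(query, embed(text)) for name, text in candidates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```

Cosine similarity captures shared values; matching on complementary strengths would add a term rewarding dissimilarity across skill dimensions rather than pure similarity.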

D. Values Amplification Infrastructure

Systems that systematically help humans reflect on values, explore different ethical frameworks, debate moral questions, and progressively move toward wiser value systems. This isn't about imposing specific values but creating infrastructure for moral progress through deliberate reflection and experimentation.

Key interventions include: 

  • AI systems that facilitate deep values clarification dialogues, helping people understand their own preferences and resolve internal conflicts
  • Tools that help people explore how different value systems would play out in practice
  • Forecasting AI (à la AI-enabled Futarchy and decision forecasting) that predicts satisfaction with different goals, helping people choose paths they would most endorse on reflection
  • Systematic exposure to diverse moral frameworks with AI helping synthesize insights

The exact values to target require ongoing collective deliberation, but the infrastructure enables that deliberation to happen effectively.

E. Wellbeing Enhancement Infrastructure

Tools specifically targeting psychological health, fulfillment, and human flourishing. While related to values, wellbeing enhancement focuses on helping people achieve positive mental states and life satisfaction, which then enables better decision-making and more effective pursuit of larger goals. 

Key interventions include: 

  • AI therapy systems fine-tuned on outcome data from effective therapeutic approaches
  • Daily/weekly reflection protocols where AI helps process experiences and maintain psychological health
  • Tools for addressing specific challenges (addiction, anxiety, relationship issues, meaning crisis) based on the best human intervention techniques scaled through AI
  • Systems that help people design fulfilling lives aligned with their deepest values and strengths

How This Compounds

Human-AI Symbiosis infrastructure exhibits several compounding dynamics:

Capability Scaling: As AI models improve, the same infrastructure becomes dramatically more powerful without additional human effort. Workflows designed today will work better tomorrow.

Network Effects: Each person who develops better prompts, workflows, or AI interaction techniques can share them, multiplying effectiveness across the community.

Feedback Loop Acceleration: Better Human-AI symbiosis tools help humans become wiser and more capable; wiser humans design better Human-AI symbiosis tools and infrastructure; this cycle accelerates over time.

Bitter Lesson Advantage: By focusing on scalable infrastructure that leverages increasing compute and capability rather than hand-crafted solutions, we take advantage of AI's trajectory rather than fighting it.

Self-improving institutions: Infrastructure can include feedback loops, enabling it to automatically improve over time through use and feedback.

Most importantly, Human-AI Symbiosis represents a positive-sum approach where human capabilities can be systematically enhanced, with each enhancement generating more resources that enable further enhancement. This positive feedback loop creates a counterpart to AI self-improvement, ensuring humans remain capable players rather than obsolete decision-makers. By using AI to differentially accelerate human progress on values, wellbeing, and agency, this cooperative Human-AI Symbiosis framework allows us to systematically enhance human capabilities and push toward better futures.

Note: Specific interventions within this framework (research workflow libraries, GitWise platforms, forecasting AI systems, personal optimization tools, etc.) are detailed elsewhere in this essay series and in my award-winning work on Designing Artificial Wisdom.

Intervention 5. Automated Macrostrategy: Scaling Human Strategic Capacity Through AI

The Strategic Cluelessness Challenge

Perhaps the most fundamental challenge facing longtermist strategy is what we might call the multiplicative crucial considerations problem. A crucial consideration, in Nick Bostrom's terminology, is any factor that could materially influence the value of the future that we need to understand to act correctly. This extends far beyond normative questions about what constitutes "the good" to include empirical facts about technological trajectories, strategic questions about coordination and path dependencies, political considerations about how we build robustly beneficial institutions and governance mechanisms, and epistemic questions about the most effective methods to discover truth.

The critical insight is that, in expectation, these considerations interact multiplicatively rather than additively. With potentially dozens to over a hundred such considerations, each potentially affecting value by factors of 2X, 10X, or even more, missing even one consideration could reduce future value by an order of magnitude. Missing several compounds dramatically. Moreover, we face complex cluelessness: we don't know in advance which crucial considerations will turn out most important, or how they interact with each other. This creates a fundamental challenge for narrow trajectory change interventions. If we focus on changing one aspect of the future (say, ensuring certain governance structures or advancing certain technologies), we might succeed at that narrow goal while inadvertently creating catastrophic effects on other crucial considerations we haven't examined carefully. The interactions are complex enough that we can't simply work on considerations one by one and expect good results.

This framework suggests that comprehensive reflection processes examining all crucial considerations together may represent orders of magnitude greater expected value than addressing considerations individually, despite significant tractability challenges. The question becomes: how can we possibly achieve the strategic clarity needed to navigate this complexity?

The Solution: Automating High-Level Strategic Work

The answer lies in leveraging AI's capacity to scale our strategic thinking far beyond what humans can achieve alone. This approach, which I detail in my award-winning essay "Designing Artificial Wisdom: The Wise Workflow Research Organization" (2024), focuses on creating AI workflow libraries that dramatically accelerate longtermist researchers' capacity to analyze strategic questions.

The core mechanism involves throwing massive amounts of compute at mapping the complex relationships between crucial considerations and forecasting their most likely interactions. This includes running numerous automated Monte Carlo simulations to predict plausible interactions between different interventions, circumstances, and strategic factors. AI can generate vast numbers of hypotheses, vastly exceeding what humans produce in a given timespan, potentially uncovering crucial considerations we simply wouldn't think of because there are many topics, events, and ideas we're not aware of that AI could use to inform its analysis.
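A toy Monte Carlo makes the multiplicative dynamic vivid; all numbers here are arbitrary assumptions for illustration, not estimates:

```python
import random

# Toy Monte Carlo of multiplicative crucial considerations. All numbers are
# arbitrary assumptions for illustration, not estimates.

N_CONSIDERATIONS = 30   # dozens of factors affecting future value
N_SIMULATIONS = 10_000


def simulate_future_value(p_handled: float) -> float:
    """Each mishandled consideration cuts future value by a 2-10x factor."""
    value = 1.0
    for _ in range(N_CONSIDERATIONS):
        if random.random() > p_handled:          # this consideration is missed
            value /= random.uniform(2.0, 10.0)   # value reduced 2-10x
    return value


def expected_value(p_handled: float) -> float:
    sims = [simulate_future_value(p_handled) for _ in range(N_SIMULATIONS)]
    return sum(sims) / len(sims)


# Moving from handling 80% of considerations to 99% changes expected value
# by orders of magnitude, because missed factors compound multiplicatively.
for p in (0.80, 0.95, 0.99):
    print(f"p_handled={p:.2f}  E[value]={expected_value(p):.4f}")
```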

Critically, this isn't about attempting to fully automate research, which current AI cannot reliably do. Instead, we create tool libraries that enable systematic analysis of intervention proposals, generation and evaluation of strategic considerations, and exploration of how different factors interact. These tools break complex strategic reasoning into component workflows, add natural language error-correcting codes (epistemic checks, bias detection), and chain together to solve multistep strategic problems.

Examples of crucial domains requiring such systematic analysis include: AI governance and strategy, deep space governance, differential technological development across multiple path-dependent technologies, values reflection and evolution mechanisms, artificial sentience and digital rights, and the design of institutions for ongoing reflection and adaptation.

As AI capabilities improve, the same automation infrastructure becomes dramatically more powerful without additional human effort, yet it is important to further take advantage of the Bitter Lesson: by designing architectures which make use of increasingly powerful AI in advance, it is possible to immediately capitalize on these new capabilities as soon as they arise. By building scalable infrastructure that leverages increasing compute and capability rather than hand-crafted solutions (e.g. self-designing and self-orchestrating architectures + adversarial dynamics and chain of thought monitoring to guard against scheming), we ride AI's trajectory rather than fighting it. Research organizations can gradually transition from humans doing strategic research with occasional AI assistance to humans operating workflows that direct AI's strategic capabilities.

The Uniquely High-Leverage Window

We face a brief but critical period in which automated macrostrategy work is uniquely high-leverage. Right now, we have relatively high strategic leverage but low strategic clarity. We must develop comprehensive strategies, institutional designs, and decision frameworks before path dependencies solidify with the arrival of transformative AI. Each early decision on a crucial consideration creates path dependencies that progressively constrain future possibilities, whereas early, highly intelligent strategic wins can create positive path dependencies and generate positive processes that compound exponentially over time as they interact with increasingly powerful AI capabilities, making early work unusually high-leverage.

Will MacAskill has expressed support for automated macrostrategy in public statements, noting it as something Forethought may look into. He observes that during the early stages of an intelligence explosion, it will be highly contingent how much research effort goes into different issues. By default, AI companies won't immediately use transformative AI to work through grand strategy questions; they'll focus on recursive self-improvement of AI itself.

Creating automated macrostrategy infrastructure beforehand ensures we can leverage advanced AI for comprehensive strategic thinking as soon as capabilities permit, rather than scrambling to develop these tools once transformative AI already exists.

Implementation: Building the Infrastructure

Several concrete steps enable automated macrostrategy to succeed:

Compute and Model Access: Early access to advanced models from leading AI companies will be critical, along with donations of compute or funding to purchase compute. This may face significant pressure against it from companies focused on other priorities, making coordination and advocacy around compute access for strategic research particularly important.

Systematic Tool Development: Building hierarchically organized research automation tools, starting with relatively simple workflows and progressively developing more sophisticated capabilities as AI improves. Insofar as possible, these tools should be designed with the principle that as base models improve, the same infrastructure becomes dramatically more capable.

Researcher Training and Adoption: Creating systematic outreach and training programs for researchers in longtermism, AI governance, AI strategy, and related fields to use automation tools effectively. This could take the form of a powerful AI ghostwriting / ghostresearch platform combined with comprehensive training programs, ensuring the research community can collectively benefit from these tools rather than having them remain niche.

The robustness of comprehensive reflection work is worth emphasizing: thorough examination of all crucial considerations seems unusually likely to be net positive, considering it by definition attempts to act correctly across all important factors simultaneously. This makes automated macrostrategy unusually robust compared to narrow interventions that risk negative flow-through effects due to unconsidered interactions.

It is also worth noting that, due to the ability to monitor workflows and chains of thought, workflows seem especially likely to increase strategic intelligence while remaining safe, in contrast with approaches that leverage RL or stronger base models to increase strategic intelligence (although workflows can be even more effective in combination with these).

By building automated macrostrategy infrastructure now, we scale humanity's capacity to handle strategic complexity precisely when we need it most, ensuring we can make wise decisions about the far future even as the pace of change accelerates beyond human strategic thinking's natural limits.

Intervention 6. The Good World Project: Crowdsourcing Visions for Better Futures

Why Broad Vision Sourcing Matters

With transformative AI, humanity will soon have the capacity to build nearly any future we can clearly specify. This makes the question "what future do we actually want?" dramatically more important than it has ever been. Yet currently, only small groups of researchers, futurists, and longtermists are systematically articulating comprehensive visions for good futures. This creates three interconnected problems that The Good World Project addresses:

Democratic Legitimacy: For any vision of the future to be legitimate and maintain broad support, people need to feel invested in it. The best way to create such investment is through participation. When people have contributed to shaping a vision, they're far more likely to support its implementation and feel ownership over the future trajectory. Moreover, there are genuine values and preferences distributed throughout humanity that matter morally and practically. A future designed only by technical elites, however well-intentioned, risks missing crucial dimensions of human flourishing that only diverse participants could identify.

Strategic Clarity: Mapping the Full Space of Possibilities

The multiplicative crucial considerations framework reveals deep uncertainty about which factors will most affect future value. Broad vision sourcing systematically maps this landscape by crowdsourcing diverse perspectives on what good futures could look like. Each participant brings different life experiences, values frameworks, and creative intuitions that reveal considerations we wouldn't think of independently. When we can compare dozens of compelling but different proposals, we're forced to grapple with fundamental trade-offs: individual autonomy versus collective coordination, technological acceleration versus precautionary governance, present welfare versus long-term value. These trade-offs remain invisible when we only have one or two visions to consider. Strategic clarity comes from understanding what we're actually choosing between, which requires mapping the full possibility space systematically.

Preventing Premature Lock-In Through Diversity:

Perhaps most critically, broad vision sourcing helps prevent premature lock-in of suboptimal futures. When transformative AI arrives, there will be intense pressure to make rapid decisions about how to use it. Whatever visions are "lying around" at that moment may disproportionately influence what actually gets built. By systematically generating, documenting, and refining a diverse collection of well-specified future visions beforehand, we ensure that when the moment comes, decision-makers have a rich menu of thoughtfully developed options rather than rushing to implement the first plausible-sounding proposal. The presence of multiple well-developed paths forward makes premature lock-in much harder to justify and forces more careful reflection on what we're actually trying to achieve.

AI-Enabled Accessibility: The Conversational Elicitation System

The core bottleneck for broad participation isn't that people lack values or preferences about the future. It's that most people can't easily articulate complex future visions. They're not professional writers, they don't have frameworks for thinking systematically about utopia, and they lack the time to develop lengthy written proposals.

The Good World Project solves this through an AI-powered conversational elicitation system that makes participation accessible to virtually anyone. Here's how it works:

The system functions as an intelligent interviewer that guides people through exploring and articulating their values and vision. Rather than confronting a blank page, participants have a natural conversation (which can be spoken aloud using speech-to-text, removing literacy barriers). The AI asks carefully designed questions to understand what the person values and cares about, what kinds of things they think would be good to have in the world, what frustrates them about current society, and what excites them about possible futures.

Based on what it learns, the AI suggests various ideas for the kinds of worlds the person might like, drawing from its knowledge of utopian literature, philosophical frameworks, technological possibilities, and institutional designs. The person provides feedback in natural language, refining and redirecting the AI's understanding. This iterative process continues, with the AI generating increasingly accurate representations of the person's vision while the person clarifies and develops their thinking through the dialogue itself.

Eventually, the AI writes out a comprehensive vision document for each person, incorporating their values, preferences, and ideas. Crucially, this can be written in whatever style the person most likes or finds most compelling, whether that's a narrative story, a policy proposal, a philosophical treatise, or a technical specification. The person continues giving natural language feedback until the document authentically captures their vision.
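A minimal sketch of this elicitation loop, with `call_model` standing in for any LLM API and the prompts purely illustrative:

```python
# Minimal sketch of the conversational elicitation loop. `call_model` is a
# placeholder for any LLM API; prompts are illustrative assumptions.


def call_model(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")


def elicit_vision(get_user_reply, interview_turns: int = 10) -> str:
    """Interview the participant, then draft and refine their vision document."""
    transcript = []
    for _ in range(interview_turns):
        question = call_model(
            "You are interviewing someone about their vision of a good future. "
            "Given the transcript so far, ask the single most informative next "
            f"question, without leading them toward any answer.\n\n{transcript}"
        )
        transcript.append((question, get_user_reply(question)))

    draft = call_model(f"Write a vision document from this interview:\n{transcript}")
    # Iterate until the participant has no further feedback (empty reply).
    while feedback := get_user_reply(f"Here is your current draft:\n{draft}\n\nAny changes?"):
        draft = call_model(f"Revise this draft:\n{draft}\n\nParticipant feedback: {feedback}")
    return draft
```

The key design constraint noted above, that the AI elicit rather than lead, lives mostly in the interviewer prompt and would need careful testing in practice.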

This system is likely buildable with current or near-future AI technology, though it requires thoughtful design to ensure the AI genuinely elicits the person's authentic vision rather than leading them toward particular answers or imposing the AI's training biases. The key is creating an open-ended exploration process that treats the participant as the authority on their own values.

Once well-designed, this system scales to millions or billions of people at relatively low marginal cost, enabling unprecedented broad participation in envisioning humanity's future.

Utopedia: Mapping the Possibility Space

The second major component involves systematic aggregation and synthesis of collected visions. This takes several forms:

Utopedia (or "Future-pedia") functions as a comprehensive wiki mapping all articulated visions and their components, similar to Wikipedia but where each page describes an element of possible futures. This might include specific institutional designs (like the Hybrid Market described later in this series), technological capabilities (like advanced AI alignment techniques or biotechnology applications), social arrangements (like particular forms of governance or economic systems), or values frameworks (like different approaches to weighing wellbeing against autonomy).

Each vision can be broken down into its constituent elements, which can then be recombined in novel ways. If one vision emphasizes particular governance structures while another emphasizes certain technological development priorities, Utopedia makes it easy to explore how these elements might work together in synthesis.

A utopia tech tree or decision tree organizes these components hierarchically, showing dependencies and prerequisites. For instance, certain institutional designs might require specific technological capabilities, or certain values-aggregation mechanisms might depend on particular epistemic infrastructure. This helps identify which components are most fundamental and which build on others.
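As a sketch, such a tech tree could be represented as a simple dependency graph; the component names below are invented examples, not actual Utopedia entries:

```python
# Sketch of a utopia tech tree as a dependency graph. Component names are
# invented examples; a real tree would be built from Utopedia entries.

TECH_TREE = {
    "values_aggregation_mechanism": ["ai_assisted_deliberation"],
    "ai_assisted_deliberation": ["robust_epistemic_infrastructure"],
    "robust_epistemic_infrastructure": [],
}


def prerequisites(component: str, tree: dict = TECH_TREE) -> list[str]:
    """All transitive prerequisites of a component, most fundamental first."""
    ordered: list[str] = []

    def visit(node: str) -> None:
        for dep in tree.get(node, []):
            visit(dep)
            if dep not in ordered:
                ordered.append(dep)

    visit(component)
    return ordered


print(prerequisites("values_aggregation_mechanism"))
# -> ['robust_epistemic_infrastructure', 'ai_assisted_deliberation']
```

Ordering components by transitive prerequisites is what lets the tree reveal which elements are most fundamental and which build on others.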

The system includes voting and collaboration mechanisms allowing the community to collectively evaluate visions, identify common threads across superficially different proposals, and iteratively refine the most promising directions. This isn't about declaring winners and losers, but about understanding patterns in what diverse people find compelling and why.

From Visions to Action: The Intervention Development Pipeline

The Good World Project's ultimate value lies in its connection to concrete action. This links directly to the intervention development infrastructure described earlier in this series, particularly the Charity Entrepreneurship-style Better Futures Fellowship and research pipeline.

Once we've crowdsourced diverse visions and mapped the possibility space through Utopedia, the intervention development pipeline systematically generates 200-300 concrete interventions that could move us toward the most promising elements of these visions. Researchers evaluate these interventions for tractability, robustness, and expected impact, then launch the highest-value ones as actual projects.
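As a minimal sketch of the evaluation step, under invented assumptions: the three scoring dimensions come from the pipeline described above, but every intervention name, score, and weight below is a placeholder for what would in practice be careful researcher judgment.

```python
# Illustrative prioritization pass over candidate interventions.
# All names, scores, and weights are made-up placeholders.

candidates = [
    {"name": "forecasting infrastructure", "tractability": 0.7,
     "robustness": 0.8, "expected_impact": 0.6},
    {"name": "deliberative democracy mechanisms", "tractability": 0.5,
     "robustness": 0.9, "expected_impact": 0.7},
    {"name": "AI-assisted collective reasoning tools", "tractability": 0.6,
     "robustness": 0.6, "expected_impact": 0.9},
]

WEIGHTS = {"tractability": 0.3, "robustness": 0.3, "expected_impact": 0.4}

def score(candidate):
    """Weighted sum across the three evaluation dimensions."""
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

# Rank candidates; the highest-value ones would be launched as projects.
for c in sorted(candidates, key=score, reverse=True):
    print(f"{c['name']}: {score(c):.2f}")
```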

For example, if many visions emphasize the importance of improved decision-making institutions, the pipeline might generate interventions around forecasting infrastructure, deliberative democracy mechanisms, or AI-assisted collective reasoning tools. If numerous visions highlight the value of ensuring children develop strong epistemics and collaborative mindsets, this flows into interventions like the Children's Movement detailed later in this series.

This creates a powerful feedback loop: broad vision sourcing identifies what humanity wants, systematic mapping reveals patterns and possibilities, intervention development creates concrete pathways, and launched projects begin actualizing these futures. As real-world experience accumulates, this informs refinement of visions, creating an ongoing cycle of reflection and action.

The Perpetual Reflection Organization

Rather than being a one-time project, The Good World Project ideally evolves into a permanent institution performing ongoing reflective governance activities. This organization continuously:

  • Maps and remaps where humanity wants to go as values evolve and new possibilities emerge
  • Facilitates dialogue between holders of different visions to find synthesis and common ground
  • Generates new intervention ideas based on emerging visions and technological capabilities
  • Nudges society toward trajectories that align with collectively endorsed visions
  • Prevents premature lock-in by maintaining awareness of multiple viable paths forward

This perpetual function is essential because comprehensive reflection isn't something we do once and then stop. As our circumstances change, as we learn more, and as new possibilities emerge, we need ongoing capacity to deliberate about our direction and ensure we're still heading toward futures we endorse upon reflection.

Starting Points and Partnerships

While the ultimate goal involves global participation, practical implementation could begin with communities already engaged with these questions: AI researchers and developers, effective altruists, longtermists, AI safety researchers, social entrepreneurs, and futurists. These groups are both easier to access and already concerned with thinking carefully about the future, making them natural starting points for refining the elicitation system and demonstrating value.

Several existing initiatives offer potential partnerships or inspiration:

Hyperstition AI already generates positive narratives about AI futures going well, providing a complementary approach to vision development through automated storytelling.

The Existential Hope Worldbuilding Course from Foresight Institute teaches systematic worldbuilding methodology that could inform the Good World Project's approach.

Future of Life Institute's Worldbuilding Competition demonstrates an effective mechanism for incentivizing vision creation, which could be integrated into the Good World Project's design.

Rose Hadshar's essay "Good Government" exemplifies the kind of aspirational vision the project aims to generate at scale, showing what's possible when someone articulates their conception of better institutional arrangements.

By combining conversational AI elicitation, systematic mapping through Utopedia, connection to concrete intervention development, and ongoing institutional capacity for reflection, The Good World Project transforms abstract discussions about what future we want into practical infrastructure for actually achieving it. This embodies the core viatopia principle: creating systematic processes that help humanity converge on better futures through comprehensive reflection rather than hasty decisions based on whatever visions happen to be most prominent when transformative capabilities arrive.

Intervention 7. Hybrid Market: Toward an Optimal Economics for the Far Future

(I've included one page on this idea here because it is quite important to this series, even though I'm not sure it is actually the number four most important idea here, as it is a heavy lift to get it implemented. Unfortunately, I ran out of time so am just including one page on it instead of the twelve pages I had planned. The same is all also true of the Children's Movement, which is next. I should hopefully be able to post the full versions soon.)

The Hybrid Market represents one approach to a core challenge in creating better futures: how to aggregate diverse values across society while systematically incentivizing value-creating behavior. As viatopian infrastructure, it addresses the institutional design problem of steering toward better outcomes without requiring universal longtermist adoption or centralized control.

Core Mechanism

Unlike traditional markets, which measure only financial performance, the Hybrid Market retains ordinary financial transactions but simultaneously prices all externalities, positive and negative, across all timeframes. It uses advanced AI (notably, AGI or at least powerful forecasting AI is likely necessary for easy implementation) to model flow-through effects, and integrates with the Internet of Things to act as an intelligent oracle. Organizations receive a currency ("happies") proportional to the total value they create, pegged to individuals' subjective improvements in wellbeing over a given period of time, allowing people to make explicit decisions about how they spend money based on how much value they get. Value is measured not only in terms of wellbeing but in terms of all important public goods, near-term and long-term, including reducing existential risk, improving wellbeing, accelerating beneficial research, and any other measurable contribution to long-term flourishing. Organizations are correspondingly penalized for harm caused.

Critically, the currency flows to those actively creating value rather than accumulating with asset holders. Because assets must be continually "rented" (similar to Harberger taxes) and the overall Hybrid Market return rate is 0% on average, wealth naturally redistributes to value creators. This creates automatic incentives: generating public goods yields currency, while hoarding assets costs currency. The system thus systematically channels resources toward interventions that improve society's long-term trajectory.
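A toy sketch of the resulting incentive arithmetic may help, with everything loudly invented: the externality prices, the Harberger-style rent rate, and all numbers below are placeholder assumptions, and the real mechanism would rely on advanced AI to price externalities rather than a pre-filled list.

```python
# Toy model of one accounting period for a single organization in the
# Hybrid Market. All prices, rates, and numbers are illustrative
# assumptions, not part of the actual proposal.

def happies_payout(priced_externalities, asset_value, rent_rate=0.07):
    """Net "happies" for the period.

    priced_externalities: AI-assigned value (positive or negative) of each
        externality, in happies, across all timeframes.
    asset_value: self-assessed value of held assets, which must be
        continually "rented" (Harberger-style), so hoarding costs currency.
    """
    created = sum(v for v in priced_externalities if v > 0)  # public goods
    harm = -sum(v for v in priced_externalities if v < 0)    # penalized harms
    asset_rent = rent_rate * asset_value                     # cost of holding
    return created - harm - asset_rent

# An organization producing x-risk research (+120 happies), causing some
# pollution (-30), while holding 500 happies' worth of assets:
print(happies_payout([120.0, -30.0], asset_value=500.0))
# 120 - 30 - 35 = 55 happies flow to the value creator this period.
```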

Why This Increases Long-Term Value

The mechanism creates a clear causal chain: First, by rewarding all positive externalities (not just profitable ones), it dramatically increases production of public goods—particularly those with large long-term benefits but weak near-term market incentives, like existential risk research, institutional improvement, and moral progress initiatives. Second, this systematic increase in public goods provision makes society function substantially better across multiple dimensions: better epistemics, improved coordination, stronger institutions, and more resources allocated to genuinely important problems. Third, a better-functioning society with robust institutions and wise resource allocation is far more likely to navigate future challenges successfully and realize significant long-term value.

One way to think of this is as a systematized, marketized, universal effective-altruism mechanism that applies to all financial transactions and investments.

Importantly, the system would develop specific proxies for longtermist goods. Just as carbon markets created standardized metrics for emissions, the Hybrid Market would establish proxies and indices for x-risk reduction, institutional quality, better-futures trajectories, and other far-future-relevant factors. This makes previously invisible long-term considerations economically legible, while simultaneously improving the provision of all public goods by funding their most high-leverage, efficient providers. This makes the Hybrid Market a highly appealing version of viatopia from a short-term as well as a long-term perspective.

Making Trade-offs Explicit

Perhaps most crucially, the Hybrid Market renders trade-offs transparent. When a harmful product's true social cost appears in its price, individuals directly observe that they're trading potentially vast numbers of future lives for trivial present consumption. This visibility itself shifts behavior by making consequences salient in ways current markets systematically obscure.

Implementation and AI Enhancement

The system functions as decentralized mechanism design—implementing Cotton-Barratt and Hadshar's insight about market-based provision with taxes and subsidies, but without requiring state coordination. AI serves two critical roles: futarchy-style prediction of which interventions will succeed, and modeling complex flow-through effects computationally intractable for humans. However, humans retain control over values—deciding which indices matter most through investment choices.

As AI capabilities improve, measurement accuracy and predictive power scale, making the system increasingly effective at channeling resources toward genuinely valuable interventions. This represents infrastructure that compounds in effectiveness as technology advances—precisely the kind of intervention longtermism should prioritize.

Intervention 8. The Children's Movement: Systematic Value Evolution Through Early Development

The Children's Movement represents a human-centric approach to viatopia, targeting humanity at arguably its highest point of leverage: early childhood development, when epistemics, values, and collaborative capacities first form. Originally conceived in 2021 as part of my pre-EA work, this intervention maintains human agency while systematically improving humanity's capacity for wisdom and long-term thinking.

The Leverage Argument

Several considerations suggest childhood intervention may be exceptionally high-leverage for longtermist goals. First, effects compound over entire lifetimes and across generations: a single cohort of better-raised children influences society for 70+ years and shapes how the next generation is raised. Second, the empirical evidence is encouraging: studies show a $4-12 return per dollar invested in quality early childhood programs, measured through later achievement and the prevention of negative outcomes. While these studies are somewhat crude and focus on near-term metrics, they suggest that a basic compounding mechanism may be at play, as sketched below. Third, and perhaps most importantly, this intervention addresses the "human substrate" that all other interventions depend on, creating populations with stronger epistemics, better collaborative capacity, and more careful reasoning about values.
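As a back-of-the-envelope illustration of that compounding mechanism (not a model of the cited studies), suppose each invested dollar yields some lifetime return for one cohort and a fraction of that benefit is transmitted to the next generation through better child-rearing; every parameter here is an assumption for illustration only.

```python
# Back-of-the-envelope compounding illustration. All parameters are
# assumptions, not estimates from the early-childhood literature.

def cumulative_benefit(dollars, return_per_dollar=8.0,
                       pass_through=0.25, generations=4):
    """Total benefit if each generation passes on a fraction of its
    gain by raising the next generation better."""
    total, gain = 0.0, dollars * return_per_dollar
    for _ in range(generations):
        total += gain
        gain *= pass_through  # benefit transmitted to the next cohort
    return total

print(cumulative_benefit(1.0))  # ~10.6x per dollar over four generations
```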

Cotton-Barratt and Hadshar emphasize "educating people so they are well equipped to tackle challenging research work" and developing "psychological health and ability to productively work over long periods." Childhood intervention directly addresses both challenges while also tackling the political feasibility problem: creating citizens who genuinely support longtermist priorities, avoiding any need for coercion.

Why This Improves Long-Term Value

The causal pathway is straightforward: By systematically building strong epistemics, collaborative mindsets, moral reflection capacity, and psychological health from earliest ages, we create future generations naturally inclined toward the careful reasoning about values and far futures that longtermism requires. These adults will be better equipped to navigate strategic cluelessness, avoid premature value lock-in, solve coordination challenges, and generally make wiser decisions about humanity's trajectory.

Childhood represents perhaps our most crucial leverage point for systematic value improvement. Following MacAskill et al.'s framework on moral uncertainty, we need institutions that help humans reflect on values (rather than optimizing for a single fixed conception of value), experiment with different value frameworks, engage in substantive debate, and systematically move toward better values over time. Childhood is when value formation is most plastic.

This aligns with "keep the future human" frameworks that prefer allowing human values to evolve deliberately rather than immediately optimizing via AI. If we want humans to remain the primary factor determining values into the future—at least until we're confident about AI's role—then improving humans' capacity for wisdom becomes essential.

Evidence-Based Implementation

The following interventions represent a starting framework based on both empirical evidence and first-principles reasoning. These specific proposals may be controversial and should be viewed as proof-of-concept rather than final recommendations. Implementation should prioritize the most effective, high-leverage practices as determined by rigorous evaluation, and incorporate AI tools where they prove as effective as, or more effective than, human alternatives.

The intervention builds on the existing Convention on the Rights of the Child while extending it significantly, adding positive rights (what children should receive) alongside negative rights (what shouldn't be done to them):

Core Intervention Categories:

  1. Evidence-based parenting, education & childcare research and implementation
  2. Universal access to child therapists, advocates & organizers
  3. Free ongoing evidence-based training for parents, childcare professionals & educators
  4. Improved compensation and working conditions for childcare professionals
  5. Extended paid parental leave
  6. Children's rights education for both children and adults
  7. Broad societal education on child development and needs
  8. Personal advocates ensuring children's rights
  9. Evidence-based attachment and bonding practices
  10. Elimination of physical and emotional abuse
  11. Prevention of neglect through systematic support
  12. Maximum age-appropriate autonomy and self-determination
  13. Systematic development across social, emotional, cognitive, and mindfulness domains
  14. Education in systems thinking, changemaking, and civic engagement

This represents institutional design from first principles, demonstrating how systematic attention to human development could serve as powerful viatopian infrastructure for improving humanity's capacity to navigate the challenges ahead.

In the next essay, "Hybrid Market" (not yet published), I explore one specific comprehensive mechanism in depth: an economic system that systematically prices all positive and negative externalities, creating automatic incentives for value creation while penalizing value destruction. Originally conceived in my 2021 pre-EA work "Ways to Save the World," it demonstrates one possible institutional design for aggregating diverse values while steering society toward better outcomes. The Hybrid Market shows how market mechanisms, if properly designed with longtermist considerations built in from first principles, could serve as powerful viatopian infrastructure that addresses key challenges around value aggregation, incentive alignment, and systematic improvement without requiring central planning.
