
The Case for a Shared Mission

Beneath the sweeping vision of longtermism lies a quiet problem. Even if we accept the premise, what are we actually trying to achieve?

The effective altruism community often talks as if all its projects—AI safety, biosecurity, animal welfare, moral circle expansion—are pieces of the same puzzle. But if the pieces don’t fit together, the picture never emerges. Without some overarching framework, each effort risks pulling in its own direction. The image that comes to mind is a half-built flowchart, arrows pointing upward but never converging, the mission unfinished.

 

The Fragmented Landscape
Essays on Longtermism presents a wide range of arguments: how to evaluate the far future, how to weigh present against future lives, how to avoid lock-in, how to think about population ethics. The diversity of perspectives is a strength—it shows serious engagement with hard problems. But it also highlights how little agreement exists about what ultimately matters.

Should we maximize total wellbeing, even if that means creating vast populations of lives barely worth living? Should we prevent extinction above all else, even if survival means locking in suffering? Should we focus on s-risks, preventing futures of astronomical misery, even if that means passing up flourishing? Each camp has defenders. Each points to a different “highest priority.”

In practice, this diversity has turned effective altruism into a portfolio of bets. All of its missions are important. But they form a scatterplot of objectives, not a unified trajectory.

 

The Need for Alignment
If the future truly is as vast as longtermists claim, then the stakes are unimaginably high. Trillions of beings, billions of years. Against such a backdrop, fragmentation is a luxury we may not be able to afford. If each group pushes in its own direction, the overall effect could cancel out. Efforts to preserve humanity might conflict with efforts to minimize suffering. Work on maximizing population might conflict with work on reducing inequality.

What longtermism lacks is a shared compass—something like a master flowchart of outcomes, where every intervention is mapped to probabilities and values, feeding upward into a single mission. Without that integration, projects risk becoming valuable in isolation but incoherent in the aggregate.
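
To make that concrete, here is a minimal sketch of what such a flowchart could look like in code. Every intervention name, probability, and value below is invented purely for illustration; the point is the structure, in which interventions carry probabilities and values under several moral views, and everything rolls up into a single comparable score.

```python
# A minimal, illustrative sketch of a "master flowchart" of outcomes.
# All names and numbers are hypothetical placeholders, not real estimates.

from dataclasses import dataclass


@dataclass
class Intervention:
    name: str
    p_success: float                  # probability the intervention achieves its outcome
    value_by_view: dict[str, float]   # estimated value of that outcome under each moral view


# Credence placed in each moral view (weights sum to 1).
moral_view_weights = {"total_utilitarian": 0.40, "suffering_focused": 0.35, "pluralist": 0.25}

interventions = [
    Intervention("AI safety research", 0.3,
                 {"total_utilitarian": 90, "suffering_focused": 80, "pluralist": 70}),
    Intervention("Biosecurity", 0.5,
                 {"total_utilitarian": 60, "suffering_focused": 40, "pluralist": 55}),
    Intervention("Moral circle expansion", 0.4,
                 {"total_utilitarian": 50, "suffering_focused": 85, "pluralist": 65}),
]


def expected_score(iv: Intervention) -> float:
    """Probability-weighted value, averaged across moral views by credence."""
    value = sum(moral_view_weights[view] * v for view, v in iv.value_by_view.items())
    return iv.p_success * value


for iv in sorted(interventions, key=expected_score, reverse=True):
    print(f"{iv.name}: {expected_score(iv):.1f}")
```

The specific numbers would be hotly contested, and that is the point: the disagreement would happen over explicit, inspectable inputs rather than over intuitions that never meet.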

 

Why Alignment Is Hard
Of course, alignment is not simple. The very debates catalogued in Essays on Longtermism—total vs. average utilitarianism, critical-level views, prioritarianism, pluralism—show how contested our moral theories are. Demanding consensus may be asking for the impossible. Worse, premature agreement risks entrenching a flawed vision and closing off progress.

But the absence of any alignment leaves us rudderless. It risks turning longtermism into an umbrella for endless disagreements, a movement defined more by the questions it raises than by the goals it pursues. If the far future is the point, then at some stage, humanity needs a way to converge—not necessarily on a final answer, but on a framework that integrates diverse priorities into something coherent.

 

LLMs as Tools for Alignment
One surprising ally in this search for coherence may be the very technologies longtermists worry most about: advanced language models. While their dangers are real, their promise as tools for communication and coordination is equally striking.

LLMs can sustain open-ended philosophical debate, patiently engaging with each perspective, surfacing the crux of disagreements, and clarifying where moral preferences split. Unlike human forums that fracture into camps, an AI system can track the entire conversation at once, mapping values across individuals and communities.

Such systems could help build the flowchart longtermism lacks. They could assign weights, probabilities, and outcomes not by fiat, but by collating the judgments of thousands of thinkers, showing transparently where consensus exists and where fault lines remain. Rather than resolving disagreement, the point would be to make disagreement visible, organized, and navigable.
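
As a toy illustration of what collating judgments might mean in practice (all thinkers and claims below are invented, and this is not a description of any existing system), one could tally agreement on specific claims across many respondents and flag where the community splits:

```python
# Toy sketch: given many thinkers' agree/disagree judgments on specific claims,
# report where consensus exists and where the fault lines are.
# All data below is invented for illustration.

from collections import defaultdict

# (thinker_id, claim, agrees)
judgments = [
    ("a", "Extinction risk reduction is the top priority", True),
    ("b", "Extinction risk reduction is the top priority", True),
    ("c", "Extinction risk reduction is the top priority", True),
    ("a", "Creating additional happy lives is good in itself", True),
    ("b", "Creating additional happy lives is good in itself", False),
    ("c", "Creating additional happy lives is good in itself", False),
]

votes = defaultdict(lambda: [0, 0])   # claim -> [agree count, total count]
for _, claim, agrees in judgments:
    votes[claim][0] += int(agrees)
    votes[claim][1] += 1

for claim, (agree, total) in votes.items():
    rate = agree / total
    label = "consensus" if rate >= 0.8 or rate <= 0.2 else "fault line"
    print(f"{label:>10}: {claim} ({agree}/{total} agree)")
```

Tools such as Pol.is go further, clustering respondents by their voting patterns and surfacing statements that bridge otherwise divided groups, but the basic output is similar: a transparent map of where agreement exists and where it does not.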

In this way, LLMs might not solve the problem of alignment, but they could become instruments for alignment’s pursuit—a medium through which humanity comes to know not only what it believes, but also where it cannot yet agree.

 

Possible Futures Without Alignment
Consider two futures. In the first, humanity never aligns. Different groups pursue survival, flourishing, suffering reduction, expansion, or moral progress on separate tracks. Progress happens, but without integration, efforts collide, and the long-term trajectory is unstable.

In the second, humanity builds something closer to a shared compass. Not unanimity, but a structured framework—a way of weighting outcomes, probabilities, and moral views into a picture that everyone can work with. The details may evolve, but the mission is shared: a recognition that all projects ultimately feed upward into a common task.

The first future is noise. The second is music.

 

Conclusion
Longtermism’s great promise is that humanity can take responsibility for its future. But responsibility without alignment risks incoherence. Essays on Longtermism lays out the raw material: the arguments, theories, and possibilities. What remains missing is a framework to bring them together—a way to translate diverse efforts into a shared mission.

Perhaps the true challenge is not only to prevent extinction, suffering, or dystopian lock-in. It is to ensure that our efforts converge into something larger than themselves. Without alignment, longtermism (and thus EA) may remain just noise: a chorus of soloists. With it, the project could become a symphony that changes Earth for the better, for all beings here.


Comments

On "LLMs as Tools for Alignment":

Wanted to respond to one specific paragraph from this. Kids ask "why?" over and over until their parents go insane. The parents who cling to sanity the longest make the smartest kids.

LLMs tirelessly answer "why?" just for you. Is that curiosity still inside the average adult?

Ways LLMs improve coordination:

  1. Helping people define problems (many of which we all share)
  2. Pointing out stable solutions involving coordination when they exist and are described by literature
  3. Suggesting coordination mechanisms

GPT-5 can do all three of these to a useful degree today, even if no further progress were made. It's not a PhD-level thinker, but it can connect you to PhD-level ideas. Sycophancy is a problem, as is distraction. Either could kill the concept. Maybe we get the WALL-E world. But I think people want to know "why?".

What LLMs don't do:

  1. Alignment (although disagreements may converge to cruxes faster, enabling better understanding [or more direct conflict?])
  2. Quorum sensing[1]: the ability to detect when enough actors are willing to cooperate to make cooperation effective. People often avoid being the first to move unless they know they have support.
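
A minimal sketch of what that kind of quorum detection could look like, in the spirit of an assurance contract (all thresholds below are invented for illustration):

```python
# Toy "quorum sensing" for coordination: each participant pledges
# "I'll act if the acting group has at least this many people."
# Action triggers only at a group size that satisfies everyone in it.

def satisfied_quorum(thresholds: list[int]) -> int:
    """Return the largest group size k such that at least k participants
    have a threshold of k or less (so all of them are willing to act together)."""
    for k in range(len(thresholds), 0, -1):   # try group sizes from largest down
        willing = sum(1 for t in thresholds if t <= k)
        if willing >= k:
            return k
    return 0

pledges = [2, 3, 3, 4, 6]          # five participants' minimum group sizes
print(satisfied_quorum(pledges))   # -> 4: the four people with thresholds <= 4 can act
```

Each participant only states the smallest group they would need; nobody has to move first alone.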

I have been thinking about this a lot, and would appreciate links to further reading. OP[2], you should look into Pol.is if you haven't already. It's on my reading list. Also, see Nepal[3] for some tech-enabled coordination on a large scale.

  1. ^

    For things like collective bargaining, voting behaviors, and civic coordination.

  2. ^

    As an aside, parts of this read like they were written by an LLM, and I'd expect more engagement if you added more of your voice throughout.

  3. ^

    I do not necessarily expect Nepal to go well.
