
Veganism as the Alignment Test for Longtermism

Longtermism asks us to imagine the vastness of the future—trillions of lives, billions of years—and to act today as though those lives matter. It is a stirring vision, but it rests on a fragile assumption: that humanity is capable of aligning on a mission, coordinating across cultures and centuries, and acting with compassion at scale.

Before we leap to the stars, we should ask whether we can pass the first test right in front of us: ending factory farming. Each year, tens of billions of animals are confined and killed in conditions so grim that many philosophers consider their lives net negative. If we cannot align to end this, an atrocity happening in plain sight, what confidence can we have that humanity will succeed in steering the far future somewhere positive?


What the Book Says
Essays on Longtermism acknowledges this point, though with the cool tone of population ethics. It notes that ending factory farming could sharply reduce the number of suffering animals, both now and in the future, and cites widespread adoption of vegan diets and clean meat as viable interventions. The book also highlights a subtler point: animal agriculture entrenches attitudes that block moral circle expansion, making it harder for humanity to extend compassion to future beings.

This is important recognition. But in the book, veganism appears as one cause among many, a piece of the longtermist portfolio. I see it differently. Veganism is not just a priority; it is the proving ground for whether longtermism itself can work. Until we pass that test, should we even be spending our time talking about the far future?


The Prototype for Alignment
To beat factory farming, we must demonstrate exactly the skills longtermism demands:

  • Global cooperation, since animal agriculture is entrenched in every economy.
  • Moral circle expansion, since the victims cannot advocate for themselves.
  • Systemic change, since entire industries must be reimagined against their own inertia.

If we fail here—on a problem that is visible, solvable, and immediate—what hope do we have with challenges that are abstract, diffuse, and far harder to verify, like AI alignment or biosecurity?


A Moral Ratchet
Ending factory farming would do more than stop present suffering. It would lock humanity into a higher moral baseline. Once cruelty is gone from food systems, it is unlikely to return. Children raised in a vegan world may look back at cages and slaughterhouses the way we look back at slavery or witch trials: bewildered that anyone tolerated them.

That permanence makes veganism a ratchet: a one-way shift toward a more compassionate world. If we can entrench care for nonhuman animals, we make it more plausible that we will entrench care in other domains—toward digital minds, toward distant populations, toward the fragile ecosystems of future worlds.


A Rehearsal for the Future
Longtermism often struggles with abstraction. How do you rally people around probabilities that stretch into centuries? Veganism is concrete. It is a rehearsal for the coordination we will need later.

Factory farming is visible, measurable, and solvable. Solutions already exist: plant-based diets, cultivated meat, policy shifts. The resistance is cultural and political, not technical. That makes it a uniquely revealing test. Success here would show humanity can align on compassion when it matters. Failure would expose longtermism’s limits: a movement rich in thought experiments but unable to move the systems right in front of us.


What I’ve Seen
I know this not only from philosophy but from practice. I’ve run a vegan restaurant. I’ve written books about veganism. I’ve lived inside the attempt to shift culture. I’ve seen how hard it is to change people’s habits, how quickly rationalization sets in, and how breakthroughs happen when compassion is made joyful and concrete.

These lessons apply directly to longtermism. The future will not bend to elegant arguments alone. It will bend—or not—through the messy work of aligning emotional, cultural, and economic systems. Veganism is where that alignment is being tested now.


Conclusion
Longtermism’s promise is immense, but its credibility depends on alignment. Essays on Longtermism highlights animal welfare as one priority among many. I see it as the test of whether the project can succeed at all.

If we cannot end factory farming—an atrocity unfolding daily, with solutions already in hand—then our talk of safeguarding the far future risks ringing hollow. If we succeed, we establish a precedent: humanity can align on compassion, coordinate across systems, and lock in progress that lasts.

We should question whether longtermism deserves our attention when we can’t even face the moral emergency happening right now. Before we fantasize about saving future civilizations, let’s prove we’re capable of one simple thing: agreeing that treating living beings like products is what’s destroying us.


Comments (14)

I think this argument is pretty wrong for a few reasons:

  • It generalizes way too far... for example, you could say "Before trying to shape the far future, why don't we solve [insert other big problem]? Isn't the fact that we haven't solved [other big problem] bad news about our ability to shape the far future positively?" Of course, our prospects would look more impressive if we had solved many other big problems. But I think it's an unfair and unhelpful test to pick a specific big problem, notice that we haven't solved it, and infer that we need to solve it first.
  • Many, if not most, longtermists believe we're living near a hinge of history and might have very little time remaining to try to influence it. Waiting until we first ended factory farming would inherently forgo a huge fraction of the time remaining on those views to make a difference. 
  • You say "It is a stirring vision, but it rests on a fragile assumption: that humanity is capable of aligning on a mission, coordinating across cultures and centuries, and acting with compassion at scale." but that's not exactly true; I don't think longtermism rests on the assumption that the best thing to do is try to directly cause that right now (see the hinge of history link above). For example, I'm not sure how we would end factory farming, but it might require, as you allude to, massive global coordination. In contrast, creating techniques to align AIs might require only a relatively small group of researchers, and a small group of AI companies adopting research that is in their best interests to use. To be clear, there are longtermist-relevant interventions that might also require global and widespread coordination, but they don't all require it (and the ones I'm most optimistic about don't require it, because global coordination is very difficult).
  • Related to the above, the problems are just different, and require different skills and resources (and shaping the far future isn't necessarily harder than ending factory farming; for example, I wouldn't be surprised if cutting bio x-risk in half ends up being much easier than ending factory farming). Succeeding at one is unlikely to be the best practice for succeeding at the other. 

(I think factory farming is a moral abomination of gigantic proportions, I feel deep gratitude for people who are trying to end it, and dearly hope they succeed.)

Many, if not most, longtermists believe we're living near a hinge of history

Right, but this requires believing the future will be better if humans survive. I take OP's point as saying she doesn't agree, or is at least skeptical.

and a small group of AI companies adopting research that is in their best interests to use.

I think, again, the point OP is trying to make is that we have very little proof of concept of getting people to go against their best interests. And so if doing what's right isn't in the AI companies' best interest, OP wouldn't believe we can get them to do what we think they should.

Right, but this requires believing the future will be better if humans survive. I take OP's point as saying she doesn't agree, or is at least skeptical.

I think the post isn't clear between the stances "it would make the far future better to end factory farming now" and "the only path by which the far future is net positive requires ending factory farming", or generally how much of the claim that we should try to end factory farming now is motivated by "if we can't do that, we shouldn't attempt longtermist interventions because they will probably fail" vs. "if we can't do that, we shouldn't attempt longtermist interventions because they are less valuable because the EV of the future is worse".

Anyway, working to cause humans to survive requires (or at least, is probably motivated by) thinking the future will be better that way. Not all longtermism is about that (see e.g. s-risk mitigation), and those parts are also relevant to the hinge of history question. 

I think, again, the point OP is trying to make is that we have very little proof of concept of getting people to go against their best interests. And so if doing what's right isn't in the AI companies' best interest, OP wouldn't believe we can get them to do what we think they should.

I am saying aligning AI is in the best interests of AI companies, unlike the situation with ending factory farming and animal ag companies, which is a relevant difference. Any AI company that could align their AIs once and for all for $10M would do it in a heartbeat. I don't think they will do nearly enough to align their AIs (so in that sense, their incentives are not humanity's incentives), given the stakes, but they do want to, at least a little.

Yea, my original framing was a little confused wrt the "vs" dichotomy you present in paragraph one, good shout. I guess I actually meant a little bit of each, though. My interpretation of the post is basically: (1) insofar as we need to defeat powerful people or thought patterns, we (EA or humans) haven't proven we can; (2) it's somewhat likely we will need to do this to create the world we want.

I.e., given that future s-risk efforts are probably not going to be successful, current extinction-risk efforts are also less useful.

I am saying aligning AI is in the best interests of AI companies
 

If you define it in a specifically narrow AI-takeover way, yes. Making sure it doesn't allow a dictator to take power, or preventing gradual disempowerment scenarios, not really. Nor to the extent that ensuring alignment requires slowing down progress.

Anyway, I'm mostly in agreement with your points/world. I definitely think we should be focusing on AI right now, and I think our goals and those of the AI companies/US gov are sufficiently aligned atm that we aren't swimming upstream, but I resonate with OP that it would alleviate some concerns if we actually won some hard-fought, politically unpopular battles before trying to steer the whole future.

It certainly seems possible (>1%) that in the next two US administrations (current plus next), AI safety becomes so toxic that all the EA-adjacent AI safety people in the government get purged and it stops listening to most AI safety researchers. If this co-occurs with some sort of AI nationalization, most of our theory of change is cooked.

I feel like you switch back and forth a bit here between causal and evidential:

  • failure to end factory farming is evidence that future steering efforts will go badly

vs

  • failure to end factory farming will cause future steering efforts to go badly

I am amenable to this argument and generally skeptical of longtermism on practical grounds. (I have a lot of trouble thinking of someone 300-500 years ago plausibly doing anything with my interests in mind that actually makes a difference. Possible exceptions include folks associated with the Glorious Revolution.)

I think the best counterargument is that it’s easier to set things on a good course than to course correct. Analogy: easier to found Google, capitalizing on advertisers’ complacency, than to fix advertising from within; easier to create Zoom than to get Microsoft to make Skype good. 

I'm not saying this is right, but I think that is how I would try to motivate working on longtermism if I did (work on longtermism).


 

It does sometimes feel like longtermism leans into "argument for the sake of argument," and may not be very practical for today's problems, or tomorrow's. But I also feel that way about EA (at times). I wish we could mobilize and get some big things done! I think alignment is the key to the engine.

Interesting and provocative argument. While I don’t agree with all of it, including for the reasons some others outlined in the comments below, your piece did bring forward for me the concept that personally practicing veganism should be the minimum for those aligned with longtermism. 

For those focusing on helping the billions of beings to come who cannot speak for themselves, as you said, they should be behaving in a way that minimizes harm for the billions of beings suffering today who cannot speak for themselves. I view veganism as a moral imperative period, but especially for those thinking deeply and working toward a better future world. Thank you for raising this topic. 


 

I don't have much to add here. This was a great read. Thank you for writing it.

I appreciate that you read it, and took the time to share this thought with me.

Yea, it's kinda like what they tell you not to do when building a startup. Every founder wants to build a beautiful, hyperscaling, tech-heavy product before they have even confirmed that they have a few real customers. In this case, we are gonna write out our entire plans for the future of the universe before we win a single congressional seat.

Anyway, this community isn't set up to achieve something like veganism, I don't think. That requires large-scale evangelizing and coalition building (unless we can solve it with tech). This movement is investing mostly in research and policy, i.e. we are betting on many of the most important issues of our time not being politically toxic/salient. I think there is a lot of truth to the notion that most federal policy is written by people in think tanks and OMB, and that as long as it doesn't piss off the electorate, the policymaker rather than the elected politician effectively gets to write the law.

But for stuff that obviously is in the mainstream Overton window, e.g. veganism, which is going to require large behavioral changes from ordinary citizens, you need an actual coalition with hard power.

Moral behavior evolves especially when it is part of a lifestyle (ethos). Compartmentalizing moral behavior is not in keeping with human nature. The most effective long-term approach would undoubtedly be one that focuses primarily on developing a compassionate, benevolent, and enlightened lifestyle that is viable as a social alternative. Veganism and the end of animal abuse would be necessary consequences of this.

However, the opposite is not so true, as there are well-known examples of social initiatives in favor of animal welfare linked to intolerant political ideologies as well as less-than-benevolent personal behavioral styles.

I think it's worse than this on a micro level. How many AI alignment researchers are not vegan/don't care about animal ethics? Yet they are working on a problem that is directly parallel to how humans treat animals, i.e. how a far greater intelligence (AGI/ASI) will treat us.

Currently we are even encoding AI with our current values; in some ways it would make sense to align ourselves first, thus aligning AI. Either way, the cognitive dissonance and hypocrisy are staggering to me and reflect the lack of a coherent and consistent moral worldview.

Great post by the way!
