Quick takes


Here are some bullet-point reflection topics on lifestyle and priorities for EAs that I shared with some fellow EAs a few months ago. I'm posting the text here in case it interests anyone; I'll elaborate and expand on the points later if I have the opportunity.

""" Support Systems: Seriously. I didn't even know this term until after all this happened, and it would have changed everything. There's something about how people are instructed in STEM institutions (and as a consequence, many EA institutions) that makes it all about careers, h... (read more)

(My Facebook and Instagram accounts have been suspended without explanation. Hopefully they will be restored soon. If anyone reading this wants to reach me in the meantime, please use other means.)

Some women on the Facebook support group "Cluster Headache Patients" comparing labor pain to cluster headache pain:

  • "Honestly, I had a natural childbirth and a cesarean and cluster headaches are 10 times worse than both."
  • "2 unmedicated births for me. Would rather do that every day than have another cluster"
    • "every day though, really?"
    • "yes. I'd rather go through childbirth without pain relief than CH."
  • "tenfold worse than popping a baby out"
  • "Nah, labour/giving birth is a walk in the park compared to ch […] I was in labour with my son for nearly 3 days, then th
... (read more)
SiobhanBall
Oh, no. I had no idea they could be this bad. And I'm speaking from experience... is anybody working on this? 

I made a podcast feed for the posts highlighted in Best of: AGI & Animals Debate Week 

RSS Feed to paste into your favorite podcast app: https://f004.backblazeb2.com/file/aaronbergman-public/podcast/agi_animals/feed.xml 

I also like the cover art Gemini made so here it is:

With McIlroy at the Masters.

Success is a mess.

Golf, if you allow it, teaches forbearance.

Doing hard things is hard. One of the hardest things to do is hit a tiny ball in a tiny hole hundreds of yards away. Tiny errors cause terrible outcomes. Control is a phantom. The promise and perils don’t bear thinking about.

When it all comes together, though, my goodness, it’s a hell of a party.

If it’s worth going where you’re aiming, there’ll be no straight line from here to there. Next time you’re stuck, remember Rory and what we went through with him.

XelaP
Why not train on cognitive problems, like chess? Seems more related. Of course, if you find golf more fun than that, that's a good reason.
NickLaing
Golf combines mind and body. Also requires a lot of patience. I think either is fine! 

Thanks for reading, and especially for commenting!

There are a few reasons for training on golf:

  • Biographical. I was introduced to golf as a teenager, not chess, and I spent thousands of hours since then playing and watching it. Maybe there are stories like Rory's in chess, grand masters who persevered through a decade of struggle to overcome the odds and themselves, in which case I'd like to read those stories too.
  • Social. As far as I can tell Rory is much more famous than any chess player, and therefore faced greater social pressure to perform. Stress does…

Help me find my replacement doing farmed animal advocacy grantmaking!

I wanted to share a job opening for, in my opinion, one of the coolest jobs to help animals: my job! I'm moving on from Mobius soon, so we're looking for the next person to lead our grantmaking and entrepreneurial projects.

The role: You'd manage the grantmaking portfolio for one of the top ten largest funders of farmed animal welfare work globally, plus lead entrepreneurial projects like incubating new organisations and identifying strategic gaps in the movement. You'd work with a small a…


That sounds very exciting! I have applied!

LeahC
^Just adding that the Mobius team is awesome and it would be a great place to work for anyone who cares about animal welfare! 10/10 would recommend.
ElliotTep
Seconded 

Yesterday's Anthropic research ("Emotion Concepts and their Function in LLMs") provides a fascinating mechanistic analogue that resonates strongly with the field observations from my March audit of GPT-5.2 Thinking.

While Anthropic studied Claude Sonnet 4.5 and my audit focused on GPT-5.2, the structural alignment between their white-box findings and my black-box observations is striking:

  • Accumulation mechanism: In the audit, I documented how prolonged conflict or user "irritation signals" lead to a pattern I called "Procedural Capture". Anthropic's paper demo…

Just a reminder that you can customize your own Frontpage feed, so if you'd like to give serious posts a chance today you can hide April Fools' Day posts.

Click on "Customize feed", add "April Fools' Day" by clicking on the + to search for the relevant tag, then click "Hidden".

🟪 Wide-spread epistemic anomaly detected || Global Risks Instant Message #01-04-2026

Today we are detecting a shared collective delusion that leads victims to degrade their epistemic standards. The anomaly appears aimed at no particular end, except perhaps the amusement of its participants and the satisfaction of ingenious expression.

So far, it appears to be mostly harmless. Nonetheless, this phenomenon creates space for vulnerabilities. If some geopolitical actor were to take some implausible action on this day (for instance, US to invade Canada, Spain …

Some possible containment procedures are as follows:

  • Altering the Gregorian calendar to move Leap Day to April 1st (unknown effectiveness; could lead to transferal of the anomaly to another day).
  • Teaching mind-resistance techniques in schools and workplaces, using standard cover stories (media literacy, appreciation of the arts, combating racial bias). However, this runs the risk of collapsing delusions that are important to the functioning of society.
  • Global dispersal of hypnotic drugs through the atmosphere, as well as using sleeper agents in the government to f…

Does anyone know why @William_MacAskill says he is "not convinced by the shrimp argument" on his recent appearance on Sam Harris's podcast? 
 

SAM HARRIS

So yeah, so this is one area where perhaps my own cynicism creeps in. I worry that any focus on suffering beyond human suffering risks confusing enough people so as to damage people's commitment to these principles. So I mean, there's zero defense of factory farming coming from me here, but when I see a philosopher who's clearly EA or EA-adjacent arguing on behalf of the welfare of shr…
Vasco Grilo🔸
Hi Charlie. I agree it is better to target soil animals instead of farmed shrimps (at the margin) if individual welfare is proportional to the individual number of neurons as suggested by @William_MacAskill. Here are my estimates for the total number of neurons of animal populations. I calculate soil nematodes have 5.93 M times as many neurons in total as farmed shrimps. It is also worth noting that only wild finfishes and soil animals have more neurons in total than humans. As a fun fact, @Ajeya was early to the potential importance of nematodes. In her biological anchors report about transformative AI (TAI) timelines, she calculated the compute performed by evolution considering just nematodes.
William_MacAskill
Discussed on twitter here.

Hi Aaron and Will. I estimated how much cage-free corporate campaigns for layer hens and the Shrimp Welfare Project’s (SWP’s) Humane Slaughter Initiative (HSI) increase the welfare of their target beneficiaries, assuming individual welfare per fully-healthy-animal-year proportional to ("individual number of neurons")^"exponent", with the exponent ranging from 0 to 2, which covers the best guesses I consider reasonable. An exponent of 1 corresponds to the linear weighting preferred by Will. Below is a graph with the results. I calculate cage-free corporate campaig…
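For intuition, the weighting scheme can be sketched numerically. The neuron counts below are order-of-magnitude placeholders of my own, not the estimates used in the analysis above:

```python
# Sketch of welfare weights proportional to (neuron count)**exponent.
# All neuron counts here are rough illustrative placeholders, not the
# figures from Vasco's estimates.
NEURONS = {
    "hen": 2.2e8,     # placeholder order-of-magnitude guess
    "shrimp": 1.0e5,  # placeholder order-of-magnitude guess
}

def welfare_weight(neurons: float, exponent: float, reference: float = 8.6e10) -> float:
    """Welfare per fully-healthy-animal-year relative to a human-scale
    reference neuron count, proportional to neurons**exponent."""
    return (neurons / reference) ** exponent

for exponent in (0.0, 1.0, 2.0):
    hen = welfare_weight(NEURONS["hen"], exponent)
    shrimp = welfare_weight(NEURONS["shrimp"], exponent)
    print(f"exponent={exponent}: hen={hen:.3g}, shrimp={shrimp:.3g}, ratio={hen / shrimp:.3g}")
```

At exponent 0 every individual counts equally (pure headcount), at exponent 1 weights scale linearly with neurons, and at exponent 2 the hen/shrimp ratio is squared, which is why the choice of exponent can flip which intervention looks best.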

Seriously, I love this EA forum holiday ❤️ I genuinely feel like this helps the community do more good, get more silly-but-perhaps-with-a-grain-of-usefulness ideas across, and waste time in a way which feels a bit productive

Thanks for asking, that's the Forum's mascot - Bulby. Or more specifically baby bulby (bbbb). 

Don't forget to feed him!

Dylan Richardson
Is he going to starve if I stop reading posts?! I'm too scared to leave the forum now.

You should turn your project into an organization

If your team's work is worth doing, it's worth doing as an org

When a few people are doing good work together, the question of whether to formally incorporate into an organization can feel like a distraction from doing the actual work. Why take time away from your exciting research project to create an org? There are some real up-front costs to incorporating – dealing with bureaucracy, legal overhead, governance obligations – but I think the benefits of doing so are usually greater and underappreciated.

Orgs a…

I strongly agree with most of this, but think it's worth considering "being an official, recognized, and funded part of an organization" rather than constituting one's own from scratch. I know Rethink Priorities and Hive have sponsored projects before - that seems like a possibly-good intermediate step, with the possibility of spinning out independently later.

Look, I know I'm on the forum too much @Toby Tremlett🔹, but I don't think it's necessary to put "reading limit" controls on me....
 

Lol, maybe you've just read them all (I'll ping the dev)

How organisations with low AI usage can and should be using it more

There is a lot of discussion about how everyone should be using AI more, and efforts to increase use and literacy. So far, in the animal advocacy spaces where I work, I’ve seen the following efforts to increase usage:

  1. Orgs provide model subscriptions to their teams.
  2. People share the ways they’ve been using AI in slack channels or recurring meetings.
  3. There are educational webinars or fellowships. 

The above has made a real dent in AI usage, but much less than we should be aiming for given …

titotal
Have you considered that the reason these policies are not increasing AI usage is that AI usage is not particularly useful for many applications? Particularly when it comes to something like animal advocacy, I'm struggling to think of many things you'd actually need a full model subscription for (rather than just asking the occasional question to a free model). I think the original policies are fine: they let people evaluate and decide for themselves how useful AI models are, and adjust strategies accordingly. Trying to pressure people to use AI beyond this level is going to make your team less effective.

Yeah I have, and my impression from those I've spoken with is that this has not been the case. You don't think most people whose job primarily involves sitting at a computer could have much of their job automated by a software engineer on call? For example:

  • I know grantmakers who have significantly automated parts of their work.
  • I know people who have used AI to classify 1,000 people in their CRM instead of doing it manually.
  • I've seen impressive use of AI to go through thousands of academic papers looking for novel solutions to a welfare problem that might exist but is not widely known.

[ETA: I posted a revised version of this essay here.]

AI pause advocates often say they are pro-technology and pro-economic growth, and that they simply make one exception for AI because of its unique risks. But this reasoning will grow less credible over time as AI comes to account for a larger and larger share of economic growth.

Simple growth models predict that AI capable of substituting for human labor will raise economic growth rates by an order of magnitude or more. If that's right, then AI will eventually be driving the vast majority of technological…
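The growth-model claim can be illustrated with a toy sketch. Every parameter here (the initial AI labor share, the doubling rate) is my own illustrative assumption, not a figure from the essay:

```python
# Toy illustration: output Y = A * (L + M), where M is AI labor that can
# substitute perfectly for human labor L. If L is fixed but M compounds
# rapidly with compute, output growth converges to M's growth rate,
# far above historical rates of a few percent per year.
def growth_path(years: int, human_labor: float = 1.0,
                ai_labor0: float = 0.01, ai_growth: float = 1.0) -> list[float]:
    """Yearly output under perfect labor substitution. ai_growth=1.0 means
    AI labor doubles each year (an assumption, for illustration only)."""
    out = []
    m = ai_labor0
    for _ in range(years):
        out.append(human_labor + m)
        m *= 1 + ai_growth
    return out

path = growth_path(15)
growth_rates = [path[i + 1] / path[i] - 1 for i in range(len(path) - 1)]
# Early on, output grows ~1%/yr; once AI labor dominates, the growth
# rate approaches AI labor's own 100%/yr doubling rate.
```

The point of the sketch is only that under labor substitution the economy's growth rate is eventually set by the fastest-compounding input, which is the mechanism behind the "order of magnitude or more" prediction.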

Matthew_Barnett
Our ancestors had less insight into the trade they were making than we do about our own situation. That's true. Yet they still made the trade, and in hindsight, was it a bad trade to make? I disagree with people like Jared Diamond who argue that the agricultural revolution was the "worst mistake in the history of the human race". It certainly had some very negative consequences. But like most people, I think the agricultural revolution was still a good thing overall, despite the fact that it carried enormous negative side effects. I suspect the transition to AI will be less calamitous and more peaceful than our transition to agriculture. In my view, this means our trade is even easier to make. Yet, I still recognize that we face similar tradeoffs. We risk losing our way of life. There is also a credible risk (even if I think it's small), that the entire human species will go extinct. That would be very bad, but as I argued in the post, it would not be the same as losing all value in the universe.
Charlie_Guthmann
Our ancestors did not, for the most part, make this trade at all. Mostly they stayed hunter-gatherers, until the people who adopted farming out-populated them and then expanded and killed or outcompeted them. (Technically, I guess "our" ancestors are the ones who adopted agriculture.)

Likewise, the vast majority of humanity is not directly developing AI. Therefore, in an important sense, "we" are not making the trade of whether to develop AI; only a small number of people are.

Many pessimistic predictions about AGI or ASI tend to paint the picture of a superhuman agent with an extreme maximisation mindset, powered by some unsophisticated version of rationalist principles, which would lead it to commit unspeakable acts of violence (e.g. the paperclip problem: the AI starts killing every form of life in order to save energy that could otherwise be used to make more paperclips).

This, to me, seems somewhat antithetical to the very notion of intelligence.

Surely, a truly 'superior' agent would be able to question the goal of tu…

My counterfactual fantasy.

Over on my blog, I wrote about prediction models, replacement value, and how I was taught about saving lives for pennies on the pound.

So long Mo Salah, and thanks for all the lives you saved.

"Death in a Shallow Pond": A new-ish book on the 'drowning child' thought experiment and EA

TIL about this book: Death in a Shallow Pond: A Philosopher, A Drowning Child, and Strangers in Need, published September 2025, by David Edmonds. I can't find it mentioned on the Forum but apologies if I've missed it. I haven't read it, but according to the blurb, it discusses 'the experiences and world events that led Singer to make his radical case and how it moved some young philosophers to establish the Effective Altruism movement, which tries to optimize philant…
