Quick takes


Yesterday's Anthropic research ("Emotion Concepts and their Function in LLMs") provides a fascinating mechanistic analogue that resonates strongly with the field observations from my March audit of GPT-5.2 Thinking.

While Anthropic studied Claude Sonnet 4.5 and my audit focused on GPT-5.2, the structural alignment between their white-box findings and my black-box observations is striking:

  • Accumulation mechanism: In the audit, I documented how prolonged conflict or user "irritation signals" lead to a pattern I called "Procedural Capture". Anthropic's paper demo...

Just a reminder that you can customize your own Frontpage feed, so if you'd like to give serious posts a chance today you can hide April Fools' Day posts.

Click on "Customize feed", add "April Fools' Day" by clicking on the + to search for the relevant tag, then click "Hidden".

🟪 Widespread epistemic anomaly detected || Global Risks Instant Message #01-04-2026

Today we are detecting a shared collective delusion that leads victims to degrade their epistemic standards. The anomaly is aimed towards no particular end, except perhaps the amusement of its participants and the satisfaction of ingenious expression.

So far, it appears to be mostly harmless. Nonetheless, the phenomenon creates space for vulnerabilities. If some geopolitical actor were to take some implausible action on this day (for instance, the US invading Canada, Spain ...

Some possible containment procedures are as follows:

  • Altering the Gregorian Calendar to move Leap Day to April 1st (unknown effectiveness; could lead to transferral of the anomaly to another day)

  • Teaching mind-resistance techniques in schools and workplaces, using standard cover stories (media literacy, appreciation of the arts, combating racial bias). However, this runs the risk of collapsing delusions that are important to the functioning of society.

  • Global dispersal of hypnotic drugs through the atmosphere, as well as using sleeper agents in the government to f...

Does anyone know why @William_MacAskill says he is "not convinced by the shrimp argument" in his recent appearance on Sam Harris's podcast?
 

SAM HARRIS

So yeah, so this is one area where perhaps my own cynicism creeps in. I worry that any focus on suffering beyond human suffering risks confusing enough people so as to damage people's commitment to these principles. I mean, there's zero defense of factory farming coming from me here, but when I see a philosopher who's clearly EA or EA-adjacent arguing on behalf of the welfare of shr...

...
Vasco Grilo🔸
Hi Charlie. I agree it is better to target soil animals instead of farmed shrimps (at the margin) if individual welfare is proportional to the individual number of neurons, as suggested by @William_MacAskill. Here are my estimates for the total number of neurons of animal populations. I calculate soil nematodes have 5.93 M times as many neurons in total as farmed shrimps. It is also worth noting that only wild finfishes and soil animals have more neurons in total than humans. As a fun fact, @Ajeya was early to note the potential importance of nematodes. In her biological anchors report about transformative AI (TAI) timelines, she calculated the compute performed by evolution considering just nematodes.
William_MacAskill
Discussed on twitter here.

Hi Aaron and Will. I estimated how much cage-free corporate campaigns for layers and the Shrimp Welfare Project's (SWP's) Humane Slaughter Initiative (HSI) increase the welfare of their target beneficiaries, assuming individual welfare per fully-healthy-animal-year is proportional to ("individual number of neurons")^("exponent"), with "exponent" ranging from 0 to 2, which covers the best guesses I consider reasonable. An exponent of 1 corresponds to the linear weighting preferred by Will. Below is a graph with the results. I calculate cage-free corporate campaig...
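For intuition, the exponent weighting above can be sketched in a few lines. The neuron counts below are rough illustrative figures, not the estimates from the linked spreadsheets:

```python
# Sketch: relative moral weight under welfare ∝ neurons**exponent.
# Neuron counts are rough illustrative figures, not the post's estimates.
NEURONS = {
    "human": 8.6e10,   # ~86 billion neurons
    "hen": 2.2e8,
    "shrimp": 1.0e5,
}

def welfare_weight(neurons: float, exponent: float) -> float:
    """Relative welfare capacity of one individual."""
    return neurons ** exponent

for exponent in (0, 0.5, 1, 2):
    ratio = welfare_weight(NEURONS["human"], exponent) / welfare_weight(NEURONS["shrimp"], exponent)
    print(f"exponent={exponent}: 1 human ~ {ratio:,.0f} shrimp")
```

With exponent 0 every individual counts equally, while at exponent 2 the largest-brained animals dominate, which is why the choice of exponent drives comparisons like cage-free campaigns vs. HSI.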

Seriously, I love this EA forum holiday ❤️ I genuinely feel like this helps the community do more good, get more silly-but-perhaps-with-a-grain-of-usefulness ideas across, and waste time in a way which feels a bit productive

Thanks for asking, that's the Forum's mascot - Bulby. Or more specifically baby bulby (bbbb). 

Don't forget to feed him!

Dylan Richardson
Is he going to starve if I stop reading posts?! I'm too scared to leave the forum now.

You should turn your project into an organization

If your team's work is worth doing, it's worth doing as an org

When a few people are doing good work together, the question of whether to formally incorporate into an organization can feel like a distraction from doing the actual work. Why take time away from your exciting research project to create an org? There are some real up-front costs to incorporating – dealing with bureaucracy, legal overhead, governance obligations – but I think the benefits of doing so are usually greater and underappreciated.

Orgs a...

I quite strongly agree with most of this, but think it's worth considering "being an official, recognized, and funded part of an organization" rather than founding one's own from scratch. I know Rethink Priorities and Hive have sponsored projects before; that seems like a possibly-good intermediate step, with the possibility of spinning out independently later.

Look, I know I'm on the forum too much @Toby Tremlett🔹, but I don't think it's necessary to put "reading limit" controls on me....
 

Lol, maybe you've just read them all (I'll ping the dev)

How organisations with low AI usage can and should be using it more

There is a lot of discussion about how everyone should be using AI more, and efforts to increase use and literacy. In the animal advocacy spaces where I work, I've seen the following efforts to increase usage:

  1. Orgs provide model subscriptions to their teams.
  2. People share the ways they've been using AI in Slack channels or recurring meetings.
  3. There are educational webinars or fellowships. 

The above has made a real dent in AI usage, but much less than we should be aiming for given ...

titotal
Have you considered that the reason these policies are not increasing AI usage is that AI usage is not particularly useful for many applications? Particularly when it comes to something like animal advocacy, I'm struggling to think of many things you'd actually need a full model subscription for (rather than just asking the occasional question to a free model).  I think the original policies are fine: they let people evaluate and decide for themselves how useful AI models are, and adjust strategies accordingly. Trying to pressure people to use AI beyond this level is going to make your team less effective.

Yeah I have, and my impression from those I've spoken with is that this has not been the case. You don't think most people whose job primarily involves sitting at a computer could have much of their job automated by a software engineer on call? For example:

  • I know grantmakers who have significantly automated parts of their work.
  • I know people who have used AI to classify 1,000 contacts in their CRM across a range of categories, instead of doing it manually.
  • I've seen impressive uses of AI to go through thousands of academic papers looking for novel welfare solutions that might exist but are not widely known.
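As a concrete illustration of the CRM example, here is a minimal sketch of batch classification. The `ask_llm` stub and the category names are hypothetical; a real version would call an actual model API (OpenAI, Anthropic, etc.) in its place:

```python
# Sketch of batch-classifying CRM records with an LLM.
# Categories are hypothetical examples for an advocacy org's CRM.
CATEGORIES = ["donor", "volunteer", "researcher", "press", "other"]

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call. This stub just
    # keyword-matches the notes (the last line of the prompt) so the
    # sketch runs without network access.
    notes = prompt.rsplit("\n", 1)[-1].lower()
    for category in CATEGORIES:
        if category in notes:
            return category
    return "other"

def classify_contact(notes: str) -> str:
    prompt = (
        f"Classify this CRM contact into one of {CATEGORIES} "
        f"based on their notes:\n{notes}"
    )
    answer = ask_llm(prompt).strip().lower()
    # Guard against the model answering outside the allowed labels.
    return answer if answer in CATEGORIES else "other"

contacts = [
    "Gave $500 last year, major donor prospect",
    "Signed up as a volunteer at the gala",
]
print([classify_contact(notes) for notes in contacts])  # → ['donor', 'volunteer']
```

The point is less the code than the shape: a fixed label set, a prompt template, and a validation step, which is roughly what "classifying 1,000 contacts with AI" amounts to in practice.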

[ETA: I posted a revised version of this essay here.]

AI pause advocates often say they are pro-technology and pro-economic growth, and that they simply make one exception for AI because of its unique risks. But this reasoning will grow less credible over time as AI comes to account for a larger and larger share of economic growth.

Simple growth models predict that AI capable of substituting for human labor will raise economic growth rates by an order of magnitude or more. If that's right, then AI will eventually be driving the vast majority of technological...

Matthew_Barnett
Our ancestors had less insight into the trade they were making than we do about our own situation. That's true. Yet they still made the trade, and in hindsight, was it a bad trade to make?

I disagree with people like Jared Diamond who argue that the agricultural revolution was the "worst mistake in the history of the human race". It certainly had some very negative consequences. But like most people, I think the agricultural revolution was still a good thing overall, despite the fact that it carried enormous negative side effects.

I suspect the transition to AI will be less calamitous and more peaceful than our transition to agriculture. In my view, this means our trade is even easier to make. Yet, I still recognize that we face similar tradeoffs. We risk losing our way of life. There is also a credible risk (even if I think it's small) that the entire human species will go extinct. That would be very bad, but as I argued in the post, it would not be the same as losing all value in the universe.
Charlie_Guthmann
Our ancestors mostly did not make this trade at all. They stayed hunter-gatherers until the people who adopted farming outpopulated them, then expanded and killed or outcompeted them. (Technically, I guess "our" ancestors are the ones who adopted agriculture.)

Likewise, the vast majority of humanity is not directly developing AI. Therefore, in an important sense, "we" are not making the trade of whether to develop AI; only a small number of people are.

Many pessimistic predictions about AGI or ASI paint the picture of a superhuman agent with an extreme maximisation mindset, powered by some unsophisticated version of rationalist principles, which would lead it to commit unspeakable acts of violence (e.g. the paperclip problem: the AI starts killing every form of life to save energy that could otherwise be used to make more paperclips).

This, to me, seems somewhat antithetical to the very notion of intelligence.

Surely, a truly 'superior' agent would be able to question the goal of tu...

My counterfactual fantasy.

Over on my blog, I wrote about prediction models, replacement value, and how I was taught about saving lives for pennies on the pound.

So long Mo Salah, and thanks for all the lives you saved.

"Death in a Shallow Pond": A new-ish book on the 'drowning child' thought experiment and EA

TIL about this book: Death in a Shallow Pond: A Philosopher, A Drowning Child, and Strangers in Need, published September 2025, by David Edmonds. I can't find it mentioned on the Forum but apologies if I've missed it. I haven't read it, but according to the blurb, it discusses 'the experiences and world events that led Singer to make his radical case and how it moved some young philosophers to establish the Effective Altruism movement, which tries to optimize philant...

Help me find my replacement doing farmed animal advocacy grantmaking!

I wanted to share a job opening for, in my opinion, one of the coolest jobs to help animals: my job! I'm moving on from Mobius soon, so we're looking for the next person to lead our grantmaking and entrepreneurial projects.

The role: You'd manage the grantmaking portfolio for one of the top ten largest funders of farmed animal welfare work globally, plus lead entrepreneurial projects like incubating new organisations and identifying strategic gaps in the movement. You'd work with a small a...

LeahC
^Just adding that the Mobius team is awesome and it would be a great place to work for anyone who cares about animal welfare! 10/10 would recommend.

Seconded 

Jacob_Peacock
Wow, congrats!

Coal and nuclear electricity generation kill a significant number of fish through water intake systems. This matters for evaluating the impact of any new electricity load.

Most thermal power plants (coal, nuclear, and to a lesser extent gas) draw large volumes of water from rivers and lakes for cooling. This causes two underappreciated harms to fish:

  • Impingement: fish get trapped against water intake filters and die.
  • Entrainment: eggs, larvae, and small fish are pulled through pumps and heat exchangers, killing them.

A single coal plant in Ohio (Bay Shore ...

huw
(Can you point me to something about the moral weight of fish eggs? I have never heard of this before)

To be honest, it wasn't my intention to argue that fish eggs have moral weight. I included them to give a sense of the scale of impact, but I can see how that came across, so apologies.

Fish eggs may more clearly have moral worth under non-utilitarian value systems, such as the belief that all life that will eventually be sentient has intrinsic moral worth, or that impacts on nature should be minimised for intrinsic reasons. (To be clear, I don't hold these views personally, but maybe some level of moral uncertainty leads to a non-zero moral weight.)

On...

Buck

There's been some discussion here of the claim that AI capabilities improvements have been a consequence of unsustainable increases in inference compute. Redwood Research Astra fellow Anders Cairns Woodruff has written a great post analyzing the data and disputing this.

Unjournal AI-assisted research prioritization dashboard (very early prototype)

We've been experimenting with using LLMs to help identify and prioritize research for Unjournal evaluation, to complement human prioritization (and to learn from the process). We now have a public prototype dashboard:

uj-prioritization-dashboard.netlify.app

What it does: Automatically discovers recent papers from NBER, arXiv (econ), CEPR, SSRN, Semantic Scholar, EA Forum paper links, and OpenAlex, then scores them using AI models (GPT-5.4 family) against our prioritization criteria — d...
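For the curious, the criterion-weighted scoring step of a pipeline like this can be sketched roughly as follows. The criteria, weights, and scores here are invented placeholders, not Unjournal's actual rubric:

```python
# Combine per-criterion model scores (each 0-100) into one priority score.
# Criteria and weights are illustrative, not Unjournal's real rubric.
WEIGHTS = {"relevance": 0.40, "rigor": 0.35, "neglectedness": 0.25}

def priority_score(scores: dict) -> float:
    """Weighted average of per-criterion scores, on a 0-100 scale."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# In the real pipeline these scores would come from the model's
# structured output for each discovered paper.
papers = {
    "paper_a": {"relevance": 80, "rigor": 60, "neglectedness": 90},
    "paper_b": {"relevance": 50, "rigor": 90, "neglectedness": 40},
}
ranked = sorted(papers, key=lambda p: priority_score(papers[p]), reverse=True)
print(ranked)  # → ['paper_a', 'paper_b']
```

Keeping the aggregation this transparent makes it easy for human reviewers to see why a paper ranked where it did and to adjust the weights.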

Charlie_Guthmann
Very cool. I work on text parsing / metascience and do a lot of stuff like this on the side and for my lab: https://docgmedicalsummaries.com/rankings. I've done something similar for ranking clinical medicine articles; it's pretty similar to your site, but I might be able to share some insights. (I might comment more later regardless; just throwing this up for now so I remember.) Edit: just to note, signing up will auto-subscribe you to emails, though it should be easy to unsubscribe; you can also see how we do rankings without signing up, on the landing page.

Thanks, I'd be up for hearing insights. This is related to a larger project (see https://llm-uj-research-eval.netlify.app/), but this part of it is still pretty early stage.

Will DM

🚨 New EA book cover to critique 🚨 

http://80000hours.org/book/ 

Tell me all the ways we messed up, and how/why the original was better actually (see for example this excellent alternative design by Catherine: https://x.com/wilhelmscreamin/status/2029302612210626958 )


try this https://x.com/wilhelmscreamin/status/2029302612210626958

Tobias Häberli
It looks much nicer than the original imo. If I didn't have context, I'd probably be confused though. Why 80,000 hours? And what is the pie chart / watch face analogy about? At first glance I'm not sure whether it's about career choice, time management, life balance, or some '5pm' metaphor. I looked at it in this order: (1) "80,000 hours", (2) pie chart / watch face, trying to figure it out, (3) subtitle, (4) endorsement. But the subtitle and endorsement are doing most of the work of telling me what the book is actually about and whether it's for me. Maybe some of this is intended, to make people pick up the book and try to find answers. :)
Oscar Sykes
I agree with this; the link between the pie chart and the clock isn't very obvious to me.

Reminder that the symposium kicks off in an hour! If you want to help the conversation go well, you can write up particular considerations, cruxes or questions you have as comments on the symposium post. Invited guests and other participants will respond to them later. 
 
