Quick takes


I mentioned in the Essays on Longtermism competition announcement that we hoped to announce the winners of the competition on November 4th. In fact, it now looks like I'll be announcing them during the week of 10-16 November.

Apologies for the delay. It isn't the judges' fault; they're doing great. I'll be on a CEA-wide retreat next week, and I want to make sure I have time to write a proper post to celebrate the winners.

Potential opportunity to influence the World Bank away from financing factory farms: The UK Parliament is currently holding an open consultation on the future of UK aid and development assistance, closing on November 14, 2025. It includes the question, "Where is reform needed in multilateral agencies and development banks the UK is a member of, and funds?". This would include the World Bank, which finances factory farms,[1][2] so could this consultation be a way to push it away from doing that, via the UK government? 

Are any organisatio... (read more)

Several people working on climate issues out of World Bank HQ are involved in the local EA community. It may be worth a conversation with them about feasibility and the bureaucratic pathways/challenges of shifting strategy on major funding areas. Your second footnote focused on climate impacts, so I assume you're not opposed to arguments from that perspective.

Gemma 🔸
Thanks for flagging this! @Ben Anderson and @Ameema Talat are coordinating on this from a GHD perspective 

MrBeast just released a video about “saving 1,000 animals”—a well-intentioned but inefficient intervention (e.g. shooting vaccines at giraffes from a helicopter, relocating wild rhinos before they fight each other to the death, covering bills for people to adopt rescue dogs from shelters, transporting lions via plane, and more). It’s great to see a creator of his scale engaging with animal welfare, but there’s a massive opportunity here to spotlight interventions that are orders of magnitude more impactful.

Given that he’s been in touch with people from Giv... (read more)


As Huw says, the video comes first. I think this puts almost anything you'd be excited about off the table. Factory farming is a really aversive topic for people, and people are quite opposed to large-scale WAS (wild animal suffering) interventions. The interventions in the video he did make weren't chosen at random: people like charismatic megafauna.

MHR🔸
Manifesting
Noah Birnbaum
Yooo - nice! Seems good and would cost under ~100k. 

I think the term "welfare footprint" (analogous to the term "carbon footprint") is extremely useful, and we should make stronger attempts to popularise it among the public as a quick way to encapsulate the idea that different animal products have vastly different welfare harms, e.g. milk vs eggs

Dan_Keys
Wouldn't a person's "welfare footprint" also include, e.g., all the cases where they brightened someone's life a little bit by having a pleasant interaction with them? The purpose ("different animal products have vastly different welfare harms") seems fairly narrow but the term suggests something much broader.

Interesting. Then I guess strictly speaking it makes more sense to speak only of the welfare footprint of products, rather than of a whole person's welfare footprint, unlike how we speak of both products and people having carbon footprints.

Hugh P
Yes, this is a good point; perhaps you could speak of "the dairy industry's welfare footprint" if you sought to avoid this. Though I guess people could only support policy change that tried to, for example, reduce flying in favour of travel by train if they were first aware of the differences in emissions (apparently 254g vs 6g per km), rather than just being aware that both release some emissions; perhaps the idea of carbon footprints helped popularise the fact that there are such big differences(?) But maybe there's something about the term "footprint" which is too closely tied to individual behaviour, and a better term could be found.
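To make the size of that emissions gap concrete, here is a quick back-of-envelope sketch using the per-km figures quoted above; the trip length is a hypothetical illustration, not a claim about any particular route:

```python
# Per-passenger-km CO2 figures quoted in the comment above (grams).
FLIGHT_G_PER_KM = 254
TRAIN_G_PER_KM = 6

trip_km = 1_000  # hypothetical round-trip distance, for illustration

flight_kg = FLIGHT_G_PER_KM * trip_km / 1_000
train_kg = TRAIN_G_PER_KM * trip_km / 1_000

print(f"Flying: {flight_kg:.0f} kg CO2, train: {train_kg:.0f} kg CO2")
print(f"Flying emits ~{flight_kg / train_kg:.0f}x more")  # roughly 42x
```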

I am pre-registering my forecasts for the amount of prize money each essay will win. In brief, I expect that these three essays will win just over half the prize money: 

  • Utilitarians Should Accept that Some Suffering Cannot be “Offset”
  • Are longtermist ideas getting harder to find?
  • Discussions of Longtermism should focus on the problem of Unawareness

I didn't spend much time on these forecasts, though; mainly they're based on karma, with an adjustment based on my subjective judgement of each essay's title/summary.
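As a minimal sketch of the heuristic described above, here is one way to turn karma plus a subjective adjustment into prize-share forecasts; the karma numbers and multipliers are hypothetical placeholders, not the essays' actual scores:

```python
# Hypothetical (karma, subjective multiplier) pairs, for illustration only.
essays = {
    "Utilitarians Should Accept that Some Suffering Cannot be 'Offset'": (150, 1.2),
    "Are longtermist ideas getting harder to find?": (120, 1.1),
    "Discussions of Longtermism should focus on the problem of Unawareness": (110, 1.0),
    "(all other entries combined)": (300, 1.0),
}

# Adjusted score = karma scaled by a subjective title/summary judgement.
adjusted = {title: karma * mult for title, (karma, mult) in essays.items()}
total = sum(adjusted.values())

# Forecast each essay's share of the prize pool in proportion to its score.
for title, score in adjusted.items():
    print(f"{score / total:5.1%}  {title}")
```

With these made-up inputs the three named essays come out at just over half the pool, matching the stated forecast.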

One of my very minor complaints about EAG is that they give out T-shirts that I would not normally want to wear in daily life. I now have three (admittedly very nice) pyjama T-shirts from conferences. This is nice! But I would love to have a simple shirt with a small logo that I can wear in everyday life, not just at home. It would actually get more exposure than the current T-shirts do!

For inspiration, a subtle range of T-shirts from Cortex. Just imagine the small heart-lightbulb there!

matthes
I personally am much more likely to take, keep, and wear a shirt with a large and/or unusual design. (Although as much as I like getting the shirts (and sometimes stickers), I would be even happier to see the cost of EAGs go down. I don't know how much time and money goes into merch, though.)
akash 🔸
+1 to this, I would be disappointed if EAG merch was super generic. The sweatshirt from EAG Bay Area (which I do not have) had a fantastic design, and I liked the birds on the EAG NYC t-shirt. But I am also someone who has a bright teal colored backpack with pink straps and my laptop has 50,000 stickers, so ...

+1 that the EA Global Bay Area sweatshirt with the bridge is the only EA Global merch I wear with any regularity; it's simply a really nice looking shirt! I wear it more than any other conference / company swag, I think.

[image of different EAG merch]

calebp

(Weakly) Against 1:1 Fests

I just returned from EAG NYC, which exceeded my expectations - it might have been the most useful and enjoyable EAG for me so far.

Ofc, it wouldn't be an EAG without inexperienced event organisers complaining about features of the conference (without mentioning it in the feedback form), so to continue that long tradition, here is an anti-1:1s take.

EAGs are focused on 1:1s to a pretty extreme degree. It's common for my friends to have 10-15 thirty-minute 1:1s per day; at other conferences I've been to, it's generally more like 0-5. I woul... (read more)

Mick
I've "only" been to 2 EAGs and 4 EAGx's so take this with that as context For previous EAGs I always booked my schedule full of 1-1's to ask people about their experience, resolve uncertainties, and just generally network with people in similar roles. This EAG (NYC 2025) I didn't find as many people on Swapcard that I wanted to talk to and received much less requests for 1-1s, so I also ended up having just 7 1-1s in total. This was a fun experiment. I found it much more relaxed, and I enjoyed being able to have spontaneous conversations with people I ran into, but I think overall I got less value out of this EAG than if I had booked more meetings: I have less actionable insights and met less people than during other EAG(x) conferences I have attended. However, I'm definitely in favour of less 1-1 cramming. I do think if this was one of my first EAGs and I didn't know anyone, I would've been quite lost without the structure of the 1-1's and the explicit encouragement that it is normal to book a lot. I also feel weird about just joining a conversation in case it was people having a private 1-1. Having an improved spontaneous conversations area with bigger signs/cause area specific areas (or time slots?) sounds like a great solution for both of these problems. Tangentially, my favourite meetups are also those where you just stand and mingle, ideally with specific areas in specific corners, rather than do forced speed meets or roundtable discussions. This makes it much easier to leave if you don't like a conversation and move on to a different one until you find one you like.
Jan_Kulveit
My impression is EAGx Prague 2022 managed to balance 1:1s with other content simply by not offering SwapCard 1:1 slots for part of the time, having a lot of spaces for small-group conversations, and suggesting to attendees that they aim for something like a balanced diet. (Turning off SwapCard slots does not prevent people from scheduling 1:1s; it just adds a little friction. Empirically, that seems enough to prevent the mode where people fill all their time with 1:1s.)

As far as I understand, this will most likely not happen, because of the weight given to (and goodharting on) metrics like people reporting that 1:1s were the most valuable use of their time, metrics tracking "connections formed", and the weird psychological effects of 1:1 fests. (People feel stimulated, connected, energized... part of the effect is superficial.) Also, the counterfactual value lost from the lack of conversational energy at scales of ~3 to 12 people is not visible and likely not tracked in feedback. (I think this has predictable effects on which types of collaborations start and which do not, and the effect is on the margin bad.) The whole thing is downstream of problems like Don't Over-Optimize Things / We can do better than argmax.

Btw, I think you are too apologetic / self-deprecating ("inexperienced event organisers complaining about features of the conference"). I have decent experience running events, and everything you wrote is spot on.

Thanks Jan, I appreciate this comment. I'm on the EAG team, but responding with my personal thoughts. 

While it's true that we weight 1:1s heavily in assessing EAG, I don't think we're doing 'argmax prioritisation'—we still run talks, workshops, meetups, and ~1/4 of our team time goes to this. My read of your argument is that we're scoring things wrong and should give more consideration to the impact of group conversation. You're right that we don't currently explicitly track the impact of group conversations, which could mean we're missing significant... (read more)

Rob Wiblin is interviewing Will MacAskill for the 80K podcast, this time on the Better Futures series. You can leave your questions for the interview in the replies to this tweet.

I've increasingly become concerned about a rise in people (especially teens) frequently using AI to get advice on how to engage in important interpersonal conflicts. 

I've seen lots of discussion (primarily in mainstream media) about AI Psychosis, and the more obvious fear of people/kids outsourcing their thinking to chatbots and getting dumber. Both of these things matter, but specifically the outsourcing of interpersonal conflict resolution seems possibly quite bad and feels under discussed. 

I'd estimate something like ~40-60% of young Americans... (read more)

Even my daughter uses GPT after our arguments, and I'm very much aware of this. She even praises GPT for calming her down in her discussions with her mom. If the stuff is calming her down, I guess it's a good as well as a bad thing: good since she's not stressing herself, and bad since she's probably hiding her thoughts now.

I'm starting to think that the EA Global meetup format might not be optimal. At the very least, I didn't get as much out of it this year as I was hoping to, and the same thing happened last year, and I suspect others might have been in the same position. (At one meetup, others I talked to expressed frustrations similar to my own.) Here are some thoughts on why, and how it might be improved.

For context: Meetups are the most frequent type of event on the EA Global schedule other than talks. There are meetups for people working in a particular cause area (e.g... (read more)

Hi Taymon. I'm on the EAG team, and we're currently thinking about how we can improve meetups. I really appreciate you writing this up and sharing!

PurpleAir collects data from a network of private air quality sensors. Looks interesting, and possibly useful for tracking rapid changes in air quality (e.g. from a wildfire).

PurpleAir contributes all of its sensors to IQAir as well! So you can get a very comprehensive sense of air quality very quickly and compare private and public sources.
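For anyone who wants to pull readings directly, here is a minimal sketch assuming PurpleAir's v1 REST API and a read key from their developer dashboard; the field names and bounding box below are illustrative assumptions, not a definitive recipe:

```python
import requests

API_KEY = "YOUR_READ_KEY"  # hypothetical placeholder; obtain a real read key

resp = requests.get(
    "https://api.purpleair.com/v1/sensors",
    headers={"X-API-Key": API_KEY},
    params={
        "fields": "name,latitude,longitude,pm2.5",  # assumed field names
        "nwlng": -122.6, "nwlat": 38.0,  # hypothetical bounding box,
        "selng": -121.8, "selat": 37.2,  # roughly the SF Bay Area
    },
)
resp.raise_for_status()

# Each row lists the requested fields for one sensor in the bounding box.
for row in resp.json()["data"]:
    print(row)
```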

California state senator Scott Wiener, author of AI safety bills SB 1047 and SB 53, just announced that he is running for Congress! I'm very excited about this.

It’s an uncanny coincidence that the two biggest legislative champions for AI safety in the entire country announced their bids for Congress just two days apart. But here we are.*

In my opinion, Scott Wiener has done really amazing work on AI safety. SB 1047 is my absolute favorite AI safety bill, and SB 53 is the best AI safety bill that has passed anywhere in the country. He's been a dedicat... (read more)

Distribution rules everything around me


First-time founders are obsessed with product. Second-time founders are obsessed with distribution.


I see people in and around EA building tooling for forecasting, epistemics, starting projects, etc. They often neglect distribution. This means they will probably fail: they will not get enough users to justify the effort that went into building them.


Some solutions for EAs:

  • Build a distribution pipeline for your work. Have a mailing list on Substack. Have a Twitter account. This means that
... (read more)
Linch
Link appears to be broken.

<https://forum.effectivealtruism.org/posts/4DeWPdPeBmJsEGJJn/interview-with-a-drone-expert-on-the-future-of-ai-warfare>

NunoSempere
roastmypost.org and www.squiggle-language.com come to mind

If the people arguing that there is an AI bubble turn out to be correct and the bubble pops, to what extent would that change people's minds about near-term AGI? 

I strongly suspect there is an AI bubble because the financial expectations around AI seem to be based on AI significantly enhancing productivity and the evidence seems to show it doesn't do that yet. This could change — and I think that's what a lot of people in the business world are thinking and hoping. But my view is a) LLMs have fundamental weaknesses that make this unlikely and b) scali... (read more)

I'm really curious what people think about this, so I posted it as a question here. Hopefully I'll get some responses.

Just calling yourself rational doesn't make you more rational. In fact, hyping yourself up about how you and your in-group are more rational than other people is a recipe for being overconfidently wrong.

Getting ideas right takes humility and curiosity about what other people think. Some people pay lip service to the idea of being open to changing their mind, but then, in practice, it feels like they would rather die than admit they were wrong. 

This is tied to the idea of humiliation. If disagreement is a humiliation contest, changing one's mind can fe... (read more)

I have a question I would like some thoughts on:

As a utilitarian, I personally believe alignment to be the most important cause area; though, weirdly enough, even while I believe x-risk reduction to be positive in expectation, I believe the future is most likely to be net negative.

I personally believe, without a high level of certainty, that the current utilities on earth are net negative due to wild animal suffering. If we therefore give the current world a utility value of -1, I would describe my beliefs about future scenarios like this:

  1. ~5% likelihood:
... (read more)
Jens Aslaug 🔸
First off, I must say: I really like that answer. I guess I'm concerned about how much of a value lock-in there will be with the creation of AGI. And I find it hard to imagine a majority caring about wild animal suffering or mass-producing happiness (e.g. creating a large amount of happy artificial sentience). But I do agree; I shouldn't give it a 90% likelihood.

Personally, I’ve never bought the whole value lock-in idea. Could AGI make scientific, technological, and even philosophical progress over time? Everybody seems to say yes. So, why would we think AGI would not be capable of moral progress? 

It seems like an awkward relic of the "MIRI worldview", which I don’t think ever made sense, and which has lost credibility since deep learning and deep reinforcement learning have become successful and prominent. Why should we think “value lock-in” is a real thing that would ever happen? Only if we make certain pec... (read more)

Jens Aslaug 🔸
True. But I think that's more of an argument that the future is uncertain (which ofc is a relevant argument). But even with the technology, I don't necessarily think we'll have a majority interested in eliminating all forms of suffering (especially in the wild) or mass-producing happiness.

I think monied prediction markets are negative EV. The original reasons the CFTC was not allowing binary event contracts on most things are/were actually good ones. It's quite clear that our elected officials can get away with insider trading (and probably, to a certain extent, market manipulation). My intuition is that under the current administration this behavior will increasingly go unpunished and maybe be actively encouraged. Importantly, insider trading on the existing financial instruments doesn't really work. My take here is just that the marginal valu... (read more)

We've passed the deadline for the Essays on Longtermism competition! Thanks to everyone who took part. I'm getting to work with the judges now, and we should have the results in a couple of weeks. If you were drafting something and missed the deadline, please do consider posting regardless, with the competition tag. You won't be eligible for a prize or judging, but you might be included in summaries / a best-of sequence (and I'd very much like to read your essay).

Reminder: The deadline for the 'Essays on Longtermism' competition is this coming Monday (specifically the end of October 20th, anywhere on earth). If you've been thinking of entering, this weekend is your chance! Consider nudging any friends who mentioned wanting to enter. $1000 top prize, many wonderful judges. 

Also a reminder that "anywhere on earth" may mean the deadline is on Tuesday for you (as it is for me). Maybe more simply: the deadline is 12pm Tuesday UTC.
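For anyone who wants to sanity-check that conversion, here is a small sketch; it assumes the deadline year is 2025, and uses the IANA zone name Etc/GMT+12, which (with its inverted POSIX sign) denotes UTC-12, i.e. "anywhere on earth":

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# "Anywhere on Earth" is UTC-12; IANA's POSIX-style name inverts the sign.
aoe = ZoneInfo("Etc/GMT+12")

# End of 20 October AoE (the year is an assumption for illustration).
deadline = datetime(2025, 10, 20, 23, 59, tzinfo=aoe)

# Converted to UTC, this lands at noon the next day, Tuesday 21 October.
print(deadline.astimezone(timezone.utc))  # 2025-10-21 11:59:00+00:00
```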
