I mentioned in the Essays on Longtermism competition announcement that we hoped to announce the competition winners on November 4th. It now looks like I'll be announcing them during the week of 10-16 November.
Apologies for the delay. It isn't the judges' fault; they're doing great. I'll be on a CEA-wide retreat next week, and I want to make sure I have time to write a proper post to celebrate the winners.
Potential opportunity to influence the World Bank away from financing factory farms: The UK Parliament is currently holding an open consultation on the future of UK aid and development assistance, closing on November 14, 2025. It includes the question, "Where is reform needed in multilateral agencies and development banks the UK is a member of, and funds?". This would include the World Bank, which finances factory farms,[1][2] so could this consultation be a way to push it away from doing that, via the UK government?
Are any organisatio...
Several people working on climate issues out of World Bank HQ are involved in the local EA community. It may be worth a conversation with them about the feasibility of, and the bureaucratic pathways and obstacles to, shifting strategy on major funding areas. Your second footnote focused on climate impacts, so I assume you're not opposed to arguments from that perspective.
MrBeast just released a video about “saving 1,000 animals”—a well-intentioned but inefficient intervention (e.g. shooting vaccines at giraffes from a helicopter, relocating wild rhinos before they fight each other to the death, covering bills for people to adopt rescue dogs from shelters, transporting lions via plane, and more). It’s great to see a creator of his scale engaging with animal welfare, but there’s a massive opportunity here to spotlight interventions that are orders of magnitude more impactful.
Given that he’s been in touch with people from Giv...
As Huw says, the video comes first. I think this puts almost anything you'd be excited about off the table. Factory farming is a really aversive topic for people, and people are quite opposed to large scale WAS interventions. The intervention in the video he did make wasn't chosen at random. People like charismatic megafauna.
I think the term "welfare footprint" (analogous to the term "carbon footprint") is extremely useful, and we should make stronger attempts to popularise it among the public as a quick way to encapsulate the idea that different animal products have vastly different welfare harms, e.g. milk vs eggs.
I am pre-registering my forecasts for the amount of prize money each essay will win. In brief, I expect that these three essays will win just over half the prize money:
I didn't spend much time on these forecasts, though; they're mainly based on karma, with an adjustment from my subjective judgement of each essay's title/summary.
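The karma-plus-adjustment heuristic described above can be sketched in a few lines. Everything here is hypothetical: the essay names, karma scores, and subjective multipliers are invented purely for illustration.

```python
# Minimal sketch of a karma-based forecast: multiply each essay's karma by a
# subjective adjustment factor, then normalise into shares of the prize pool.
# All inputs below are made up.

def forecast_shares(karma, adjustment):
    """Turn karma scores (scaled by a subjective multiplier, default 1.0)
    into forecast shares of the total prize money."""
    scores = {essay: k * adjustment.get(essay, 1.0) for essay, k in karma.items()}
    total = sum(scores.values())
    return {essay: score / total for essay, score in scores.items()}

# Hypothetical inputs: three essays with forum karma and gut-feel multipliers.
karma = {"essay_a": 120, "essay_b": 80, "essay_c": 40}
adjustment = {"essay_a": 1.2, "essay_b": 0.9}  # essay_c defaults to 1.0

print(forecast_shares(karma, adjustment))
```

This is just the shape of the calculation, not the actual numbers behind my forecasts.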
One of my very minor complaints about EAG is that they give out T-shirts that I would not normally want to wear in daily life. I now have three (admittedly very nice) pyjama T-shirts from conferences. This is nice! But I would love to have a simple shirt with a small logo that I can wear in everyday life, not just at home. It would actually get more exposure than the current T-shirts do!
For inspiration, a subtle range of T-shirts from Cortex. Just imagine the small heart-lightbulb there!
I just returned from EAG NYC, which exceeded my expectations - it might have been the most useful and enjoyable EAG for me so far.
Ofc, it wouldn’t be an EAG without inexperienced event organisers complaining about features of the conference (without mentioning it in the feedback form), so to continue that long tradition here is an anti-1:1s take.
EAGs are focused on 1:1s to a pretty extreme degree. It's common for my friends to have 10-15 thirty-minute 1:1s per day; at other conferences I've been to, it's generally more like 0-5. I woul...
Thanks Jan, I appreciate this comment. I'm on the EAG team, but responding with my personal thoughts.
While it's true that we weight 1:1s heavily in assessing EAG, I don't think we're doing 'argmax prioritisation'—we still run talks, workshops, meetups, and ~1/4 of our team time goes to this. My read of your argument is that we're scoring things wrong and should give more consideration to the impact of group conversation. You're right that we don't currently explicitly track the impact of group conversations, which could mean we're missing significant...
Rob Wiblin is interviewing Will MacAskill for the 80K podcast, this time on the Better Futures series. You can leave your questions for the interview in the replies to this tweet.
I've become increasingly concerned about a rise in people (especially teens) frequently using AI to get advice on how to engage in important interpersonal conflicts.
I've seen lots of discussion (primarily in mainstream media) about AI psychosis, and the more obvious fear of people/kids outsourcing their thinking to chatbots and getting dumber. Both of these things matter, but the outsourcing of interpersonal conflict resolution specifically seems possibly quite bad and feels underdiscussed.
I'd estimate something like ~40-60% of young Americans...
Even my daughter uses GPT after our arguments, and I'm very much aware of this. She even praises it for calming her down in her discussions with her mom. If it's calming her down, I guess that's both a good and a bad thing: good because she's not stressing herself out, and bad because she's probably hiding her thoughts now.
I'm starting to think that the EA Global meetup format might not be optimal. At the very least, I didn't get as much out of it this year as I was hoping to, and the same thing happened last year, and I suspect others might have been in the same position. (At one meetup, others I talked to expressed frustrations similar to my own.) Here are some thoughts on why, and how it might be improved.
For context: Meetups are the most frequent type of event on the EA Global schedule other than talks. There are meetups for people working in a particular cause area (e.g...
PurpleAir collects data from a network of private air quality sensors. Looks interesting, and possibly useful for tracking rapid changes in air quality (e.g. from a wildfire).
PurpleAir also contributes all of its sensor data to IQAir! So you can get a very comprehensive sense of air quality very quickly and compare private and public sources.
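For anyone who wants to pull the data programmatically: PurpleAir exposes a public API that requires a (free) read key. The sketch below only builds the request URL for its sensors endpoint; the endpoint path, header name, field names, and bounding-box parameters are taken from my reading of PurpleAir's API docs and should be checked against the current documentation before use.

```python
# Hedged sketch of querying PurpleAir's API for recent PM2.5 readings.
# Assumes the /v1/sensors endpoint and X-API-Key header; the field names and
# bounding-box parameters below are illustrative, not verified.

from urllib.parse import urlencode

API_BASE = "https://api.purpleair.com/v1/sensors"

def build_sensor_query(fields, nwlat, nwlng, selat, selng):
    """Build the request URL for sensors inside a lat/lng bounding box."""
    params = {
        "fields": ",".join(fields),
        "nwlat": nwlat, "nwlng": nwlng,
        "selat": selat, "selng": selng,
    }
    return f"{API_BASE}?{urlencode(params)}"

# Example: sensors roughly around the San Francisco Bay Area.
url = build_sensor_query(["name", "pm2.5_atm"], 38.0, -123.0, 37.0, -122.0)
# The actual request would then be something like:
#   requests.get(url, headers={"X-API-Key": "<your-read-key>"})
print(url)
```

Separating URL construction from the request itself makes the query easy to inspect before spending API credits on it.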
California state senator Scott Wiener, author of AI safety bills SB 1047 and SB 53, just announced that he is running for Congress! I'm very excited about this.
It’s an uncanny coincidence that the two biggest legislative champions for AI safety in the entire country announced their bids for Congress just two days apart. But here we are.*
In my opinion, Scott Wiener has done really amazing work on AI safety. SB 1047 is my absolute favorite AI safety bill, and SB 53 is the best AI safety bill that has passed anywhere in the country. He's been a dedicat...
Distribution rules everything around me
First time founders are obsessed with product. Second time founders are obsessed with distribution.
I see people in and around EA building tooling for forecasting, epistemics, starting projects, etc. They often neglect distribution. This means that they will probably fail, because they will not get enough users to justify the effort that went into building them.
Some solutions for EAs:
If the people arguing that there is an AI bubble turn out to be correct and the bubble pops, to what extent would that change people's minds about near-term AGI?
I strongly suspect there is an AI bubble because the financial expectations around AI seem to be based on AI significantly enhancing productivity and the evidence seems to show it doesn't do that yet. This could change — and I think that's what a lot of people in the business world are thinking and hoping. But my view is a) LLMs have fundamental weaknesses that make this unlikely and b) scali...
I'm really curious what people think about this, so I posted it as a question here. Hopefully I'll get some responses.
Just calling yourself rational doesn't make you more rational. In fact, hyping yourself up about how you and your in-group are more rational than other people is a recipe for being overconfidently wrong.
Getting ideas right takes humility and curiosity about what other people think. Some people pay lip service to the idea of being open to changing their mind, but then, in practice, it feels like they would rather die than admit they were wrong.
This is tied to the idea of humiliation. If disagreement is a humiliation contest, changing one's mind can fe...
I have a question I would like some thoughts on:
As a utilitarian, I personally believe alignment to be the most important cause area - though weirdly enough, even though I believe x-risk reduction to be positive in expectation, I believe the future is most likely to be net negative.
I personally believe, without a high level of certainty, that the current utilities on earth are net negative due to wild animal suffering. If we therefore give the current world a utility value of -1, I would describe my beliefs about future scenarios like this:
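To make the combination of beliefs above concrete (the future being net negative in the most likely case, yet positive in expectation), here is a toy expected-value calculation. The scenario names, probabilities, and utility values are entirely invented for illustration.

```python
# Toy expected-value calculation: a future that is most likely net negative
# can still be positive in expectation if the upside tail is large enough.
# All numbers below are made up.

scenarios = {
    # name: (probability, utility on a scale where today's world = -1)
    "flourishing future":  (0.2, +100.0),
    "net-negative future": (0.8, -10.0),
}

# Sanity check: probabilities must sum to 1.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

expected_value = sum(p * u for p, u in scenarios.values())
print(f"Expected utility of the future: {expected_value:+.1f}")
```

Here the future is net negative with 80% probability, but the expected utility comes out positive (0.2 × 100 − 0.8 × 10 = +12), which is the structure of the view described above.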
Personally, I’ve never bought the whole value lock-in idea. Could AGI make scientific, technological, and even philosophical progress over time? Everybody seems to say yes. So, why would we think AGI would not be capable of moral progress?
It seems like an awkward relic of the "MIRI worldview", which I don’t think ever made sense, and which has lost credibility since deep learning and deep reinforcement learning have become successful and prominent. Why should we think “value lock-in” is a real thing that would ever happen? Only if we make certain pec...
I think monied prediction markets are negative EV. The CFTC's original reasons for not allowing binary event contracts on most things are/were actually good ones. It's quite clear that our elected officials can get away with insider trading (and probably, to a certain extent, market manipulation). My intuition is that under the current admin, I expect this behavior to increasingly go unpunished and maybe be actively encouraged. Importantly, insider trading on the existing financial instruments doesn't really work. My take here is just that the marginal valu...
We've passed the deadline for the Essays on Longtermism competition! Thanks to everyone who took part. I'm getting to work with the judges now, and we should have the results in a couple of weeks. If you were drafting something and missed the deadline, please do consider posting regardless, with the competition tag. You won't be eligible for a prize or judging, but you might be included in summaries or a best-of sequence (and I'd very much like to read your essay).
Reminder: The deadline for the 'Essays on Longtermism' competition is this coming Monday (specifically the end of October 20th, anywhere on earth). If you've been thinking of entering, this weekend is your chance! Consider nudging any friends who mentioned wanting to enter. $1000 top prize, many wonderful judges.