JoA🔸

Full-time volunteer
289 karma · Joined · Pursuing a graduate degree (e.g. Master's) · Paris, France

Bio

Campaign coordinator for the World Day for the End of Fishing and Fish Farming, organizer of Sentience Paris 8 (animal ethics student group), FutureKind AI Fellow, freelance translator, enthusiastic donor.

Fairly knowledgeable about the history of animal advocacy and possible strategies in the movement. Very interested in how AI developments and future risks could affect non-human animals (both wild and farmed). Reasonably clueless about this.

"We have enormous opportunity to reduce suffering on behalf of sentient creatures [...], but even if we try our hardest, the future will still look very bleak." - Brian Tomasik

Comments
59

Interesting question! Might the Kurzgesagt video on factory farming count as an example of this for animal welfare? If someone wants to do it again, they could try to assess what they think the video did right (and wrong) and improve upon it. Maybe some cues on messaging could be taken from Lewis Bollard's fairly successful appearance on the Dwarkesh podcast?

Also, a potential reason why AI Safety has focused on this (compared to other cause areas) might be that it has pipelines which can absorb a fair number of people, so broad outreach that gets a few dozen counterfactual people applying to fellowships and the like seems more worthwhile. This may be less the case for other causes when it comes to talent - I assume that for animal welfare and global health, the informal theory of change behind funding a high-quality video would be donation-focused instead. However, I could be wrong about the talent-pipeline reason, and maybe some content creation funders mostly want to raise broad awareness of AI risk issues (this seemed to be the case for the Future of Life Institute).

I think this is a very compelling (and enjoyable) essay. I particularly appreciate the first point of section 2.1 as an intuitive reminder of the complicated empirical issues at hand. The main argument is strengthened by this concrete way of highlighting that doing (impartial) good is genuinely complicated.

I appreciate the effort made here to highlight alternatives to long-term EV maximization with precise credences, since the lack of "other options" can be a big mental blocker. Part 3 (and, to an extent, the conclusion) seems to constitute the first solid high-level overview of this on the Forum, which is quite helpful. Not to mention, these sections act as serious reminders of how important it is to "get it right", whatever that ends up meaning.

When discussing considerations around backfire risks and near-term uncertainty, it is common to hear that this is all excessive nitpicking, and that such discussion lacks action guidance, making it self-defeating. And it's true that raising the salience of these issues isn't always productive, because it doesn't offer clear alternatives to going with our best guess, deferring to current evaluators who take backfire risks less seriously, or simply not seeking out interventions to make the world a bit better.

Because this article centers the discussion on the search for positive interventions through a reasonably actionable list of criteria, it has been one of my most valuable reads of the year.

I think the more time we spend exploring the consequences of our interventions, the more we realize that doing good is hard. But it's plausibly not insurmountable, and there may be tentative, helpful answers to the big question of effective altruism down the line. I hope that this document will inspire stronger consideration for uncertainty. Because the individuals impacted by near-term second-order effects of an action are not rhetorical points or numbers on a spreadsheet: they're as real and sentient as the target beneficiaries, and we shouldn't give up on the challenge of limiting negative outcomes for them.

As someone who's interested in the practical implications of cluelessness for decision-making but would not be able to read the paper itself, I'm grateful that you went beyond a linkpost and took the time to make your theory accessible to more Forum readers. I'm excited to see what comes next in terms of practical action guidance beyond reliance on EV estimates. Thank you so much for a great read!

20% agree

(10% disagree) I do not think there are any robust interventions for current humans who wish to improve "impartial welfare" in the future, but if I believed there were any, I would probably find those interventions dominant.

I don't want to say I'm "not a longtermist", since I'm never sure whether action-guidance has to be contained within one's theory of morality; but given that the question is framed around what to do, I have to put myself on the disagree side, as I'm quite gung-ho about extreme neartermism (seeing a short path to impact as a sort of multiplier effect, though I may be wrong).

I don't have anything super wise to say here, but I stumbled upon this post and found it moving and radically original; it's definitely one of the most memorable and daring things I've read on the Forum this year. Well done!

Compelling and moving linkpost. However, the first footnote seems broken: when I hover over it, it says "Here the best AI system is shown as Claude 3.7 Sonnet, though note that a more recent evaluation finds that OpenAI’s o3 may be above trend, also broadly at a 1-2h time horizon." At the bottom of the post, though, the footnote appears correctly. I wonder what causes this.

50% agree

There is a strong chance that the sum total of what I do because of EA will end up having no impact (due to short AGI timelines) or being net-negative (due to flow-through effects). However, EA has also convinced me that at least a few altruistic endeavors are strongly likely to be beneficial for the world. My donations of a few K a year (and occasional volunteering) to these endeavors would have been extremely unlikely had I not engaged deeply with EA.
The counterfactual seems pretty bleak. Before getting convinced overnight of EA's importance by stumbling onto the PDF of Suffering-Focused Ethics, I believed it was impossible to be a net positive for the world, and I felt sick with guilt (the latter turned out to be useful fuel for getting into doing good, so I don't regret it).

"It doesn’t cost you anything!" - oh, not in monetary terms it doesn't, not in monetary terms!

I'm curious to understand what you mean by this. I don't know if the implication is meant to be self-evident, but I have trouble getting it.

Thank you for this post! It's quite clear and illustrates all the different "reflexes" in the face of potential TAI development that I've observed in the movement. Since we often jump to a mode of action and assume it's the correct path, I find it useful to have the opportunity to carefully read through the assumptions and see all the possible responses laid out.

Right now, my decision parliament tries to accommodate "Optimise harder for immediate results" and "Focus on building capacity to prepare for TAI". It is frustrating to know that one of the ways of responding to AI developments you list here will be the "best" path for sentient beings, but that we can't actually be sure which one it is.
