Written in the spirit of Draft Amnesty, spurred by various people posting similar things recently. I've sat on this draft for too long but don't have time to polish it. It flips between how these principles can be applied to EA Cambridge students (where I work) and more abstract strategy.
The format is:
1. A goal
- Caveats and relevant links to this goal
- To service this goal, concretely you should…
My general thinking is that EA-principles-first community building[1] can be boring to people, compared e.g. to AI safety field building. That doesn't mean it's not important. There are tradeoffs in making EA Cambridge principles-first (intro fellowship readings focused on the principles) vs cause-first (a research programme with tracks for specific causes, e.g. Impact Research Groups). This feels like an increasingly relevant tradeoff as the cause areas spin out of EA and become in-vogue/mainstream. For instance, a lot of "EA-y" GHD work happens outside of the EA space, and I think AI safety has somewhat spun out of EA (more empirical research into whether the claims in this post are true would be great).
Principles-led community building's functions are four-fold:
1. Guide people towards the cause areas, through a combination of career planning/networking and discussion about moral axioms (which have now spun off to a decent extent)
- Smart, less philosophically minded people, or those who find EA principles obvious, might prefer to jump straight to the cause areas, which is what IRG does
- A guided cause prioritisation flowchart
- The case of the missing cause prioritisation research
- Servicing 1.
- Focus on being a funnel for specific causes
- Focus on traditional career planning + advertise the intro fellowship as using stuff like this flowchart
2. Be a hub for cross-pollination between cause areas
- This seems hard to encourage (you need people to have good knowledge of two cause areas) but important
- Servicing 2.
- Run events at the intersection of causes (GHD x AI, AI x Animals, GHD x Climate x Food Systems etc.)
- Focus on using the principles within cause areas (nudge projects in this direction during the project-based fellowship).
3. Be a set of principles you can apply within your cause area (e.g. prioritise x-risks over AI copyright). This still implies people have succeeded at 1.
- Improving the epistemics within AI safety
- Being more explicit about the theory of change for one's research agenda
- I.e. linking your belief in longtermism to whether you endorse existentially-focused AI safety research vs AI ethics
3.5. Improve the effectiveness of mainstream issues, like how an effectiveness mindset (GiveWell) has been good for global health and animal rights.
- Some candidates here are improving the effectiveness of progress studies, climate change, and social reform (prison reform, developed-country reform, etc.)
- Servicing 3 and 3.5
- Helping people do research within their causes, doing IRG-type stuff, and introducing EA principles explicitly into cause-specific groups at your universities
4. Be a way to spin up new, weird cause areas. Similar to 2.
- This probably appeals to nerdy, philosophical people who are willing to entertain weird ideas?
- I can also imagine EA groups leaning into Rationalism, and focusing on bringing excellent epistemics into the world.
- Third Wave Effective Altruism
- EA as Field Incubation
- Concretely, uni groups servicing 4.
- Do more philosophical stuff like the flowchart mentioned earlier
- All the stuff listed under 2.
I find it amusing how few EA community builders seem able to list off the EA principles, vaguely gesturing at scope-sensitive, radically impartial, sensitive to tradeoffs, scout-mindsetty, and altruistic.
