1) Questions
What important ideas, writing, or posts already exist on this topic?
Who took this question seriously in the past and might be willing to discuss it with me? Who is working on it now?
2) Elaboration
It seems possible that a large part of the EA community (both individuals and the community as a whole) allocates too few resources to helping people choose the best direction for their altruistic work.
A common pattern is an “agree to disagree” stance: one person works on animal welfare, another on existential risks, a third on what GiveWell recommends. Yet one of the core ideas of EA is that the impact of different cause areas can differ by orders of magnitude, and that prioritization is critically important.
It also matters that differences in world models are often mistaken for differences in values.
This topic has certainly been raised and worked on before, but seemingly not very extensively. It may be that people tried actively to improve the situation in the past and concluded the effort was unproductive. For now, it looks to me like an important, neglected area that I am considering working on.
