IMO one way in which EA is very important to AI Safety is in cause prioritization between research directions. For example, there's still a lot of money and effort (e.g. the GDM and Anthropic safety teams) going toward mech interp research despite serious questions about whether it will help us meaningfully decrease x-risk. I think there are a lot of people who do some cause prioritization, come to the conclusion that they should work on AI Safety, and then stop doing cause prio there. I think that more people even crudely applying the scale, tractability, neglectedness framework to AI Safety research directions would go a long way toward increasing the effectiveness of the field at decreasing x-risk.