meeri

4 karma · Joined

Comments (2)

Nice analysis!

> If not, AI safety research is the better career option in terms of expected value. (At least, that’s how I was thinking about it, because my other option for doing good was entrepreneurship + earning-to-give at scale)

If you were thinking about earning to give at scale, I hope you consider funding AI safety. Based on these calculations, in this model funding AI safety work would need far less than 1 million USD per year to have more impact-in-expectation than 5 million USD per year to GiveWell.

Would you say this is more accurately an ML safety upskilling bootcamp or a mechanistic interpretability bootcamp?