Well, you just have told them, I guess! I'm all for people negotiating freely based on their individual circumstances, and I don't think there's anything wrong with that at all. But it sends signals that might not increase your chances. The incentives in EA are different from those in for-profit work, and the question in my mind would be: 'why not just work part-time and volunteer?'
I think impact compounding is less reliable than stock market returns. The first stunner might be much more impactful (in shaping norms) than the 1000th, and the impact of AI safety orgs remains to be seen. Meanwhile, that 6-7% in stock returns is somewhat consistent, though I think it’s closer to 5% in real terms. (I’m also assuming you would donate the assets directly to avoid tax.)
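To make the invest-then-give arithmetic concrete, here's a minimal sketch using the ~5% real return figure above; the donation amount and the 20-year horizon are my own illustrative assumptions, not figures from anyone's actual plan:

```python
# Illustrative sketch: how much a donation grows if invested first,
# compounding at the ~5% real return mentioned above.
# Amount and horizon are assumed for illustration only.

def future_value(amount: float, real_rate: float, years: int) -> float:
    """Real value of an invested donation after compounding."""
    return amount * (1 + real_rate) ** years

donation = 10_000   # illustrative amount
real_rate = 0.05    # ~5% real, per the comment
years = 20          # assumed horizon

grown = future_value(donation, real_rate, years)
print(f"{donation:,} invested for {years} years at 5% real -> {grown:,.0f}")
```

That works out to roughly a 2.65x multiplier over 20 years, so giving later only wins if the impact of giving now compounds more slowly than that.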
I’m also unsure what skill can only be learned through the practice of donating, as opposed to, for example, reading grant reports to understand funders’ reasoning. I suppose you learn more about yourself and how you think about giving, and you develop better habits, but that isn’t a skill.
However, I think now is generally better than later and agree that death is an arbitrary cash-in point.
Congratulations on the new name! I think it's a great name; it immediately conveys the relationship between the ethical and environmental impacts of farming animals.
May I ask what the total cost of this rebrand was? The UK government recently got flak for spending £500m on a rebrand of its main website. I'm curious how EA-aligned funders like Senterra think about cost-effectiveness here, i.e. how much went in, and what you're hoping the rebrand will achieve in unlocking new audiences.
Also, are you in any way affiliated with this investment group? https://senterra.com/
I really appreciate this post. From being on the candidate side recently, and from hiring in smaller org settings, I’ve seen a lot of friction come from a reluctance to say out loud what excellence actually looks like for a given role.
When teams try to keep the funnel broad, they get hundreds of earnest applicants who were never going to be close to the bar. Candidates lose time, the signal gets lost, and everyone feels worse. Clear expectations up front, even if they narrow the pool, make the whole thing more honest and efficient.
I agree completely on treating hiring as a living system. We iterate everywhere else in EA, yet hiring often stays fixed and opaque. There’s a lot of benefit to experimenting, testing assumptions, sharing what works, and building more transparent models over time.
I’m very interested in this problem, especially approaches that combine clear bar-setting, structured evaluation, and genuine care for candidates. If you’re exploring ideas here, happy to chat.