Rutger Bregman isn’t on the Forum, but sent me this message and gave me permission to share:
Great piece! I strongly agree with your point about PR. EA should just be EA, like the Quakers just had to be Quakers and Peter Singer should just be Peter Singer.
Of course EA had to learn big lessons from the FTX saga. But those were moral and practical lessons so that the movement could be proud of itself again. Not PR-lessons. The best people are drawn to EA not because it’s the coolest thing on campus, but because it’s a magnet for the most morally serious + the smartest people.
As you know, I think EA is at its best when it's really effective altruism ("I deeply care about all the bad stuff in the world, desperately want to make a difference, so I gotta think really fcking hard about how I can make the biggest possible difference") and not altruistic rationalism ("I'm super smart, and I might as well do a lot of good with it").
This ideal version of EA won't appeal to all super talented people, of course, but that's fine. Other people can build other movements for that. (It's what we're trying to do at The School for Moral Ambition.)
If this perspective involves a strong belief that AI will not change the world much, then IMO that's just one of the (few?) things that are ~fully out of scope for Forethought
I disagree with this. There would need to be some other reason why they should work at Forethought rather than elsewhere, but there are plausible answers to that — e.g. they work on space governance, or they want to write up why they think AI won't change the world much and engage with the counterarguments.
I can't speak to the "AI as a normal technology" people in particular, but a shortlist I created of people I'd be very excited about includes someone who just doesn't buy at all that AI will drive an intelligence explosion or explosive growth.
I think there are lots of types of people for whom it wouldn't be a great fit, though. E.g. continental philosophers; at least some of the "sociotechnical" AI folks; more mainstream academics who are focused on academic publishing. And if you're just focused on AI alignment, you'll probably get more out of a different org than you would out of Forethought.
More generally, I'm particularly keen on situations where V(X, Forethought team) is much greater than V(X) + V(Forethought team), either because there are synergies between X and the team, or because X is currently unable to do the most valuable work they could do in any of the other jobs they could be in.
Thanks for writing this, Lizka!
Some misc comments from me:
I'm not even sure your arguments would be weak in that scenario.
Thanks - classic Toby point!  I agree entirely that you need additional assumptions.
I was imagining someone who thinks that, say, there's a 90% risk of unaligned AI takeover, and a 50% loss of EV of the future from other non-alignment issues that we can influence. So EV of the future is 5%.
If so, completely solving AI risk would increase the EV of the future to 50%; halving both would increase it only to 41%.
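To spell out the arithmetic (writing p for the takeover probability and ℓ for the fractional EV loss from the other issues; these symbols are just shorthand for the numbers above):

$$
\begin{aligned}
\text{EV of the future} &= (1 - p)(1 - \ell) \\
\text{status quo:}\quad & (1 - 0.9)(1 - 0.5) = 5\% \\
\text{takeover risk fully solved:}\quad & (1 - 0)(1 - 0.5) = 50\% \\
\text{both halved:}\quad & (1 - 0.45)(1 - 0.25) \approx 41\%
\end{aligned}
$$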
But, even so, it's probably easier to halve both than to completely eliminate AI takeover risk, and more generally the case for a mixed strategy seems strong. 
Haha, thank you for the carrot - please have one yourself!
"Harangue" was meant to be a light-hearted term. I agree, in general, on carrots rather than sticks. One style of carrot is commenting things like "Great post!" - even if not adding any content, I think it probably would increase the quantity of posts on the Forum, and somewhat act as a reward signal (more than just karma).
Thanks! I agree strongly with that.