William_MacAskill
10442 karma · Joined

Sequences: 1 (The Better Futures Series)
Comments: 263

(Also, thank you for doing this analysis, it's great stuff!)

Rutger Bregman isn’t on the Forum, but sent me this message and gave me permission to share:

Great piece! I strongly agree with your point about PR. EA should just be EA, like the Quakers just had to be Quakers and Peter Singer should just be Peter Singer.

Of course EA had to learn big lessons from the FTX saga. But those were moral and practical lessons so that the movement could be proud of itself again. Not PR-lessons. The best people are drawn to EA not because it’s the coolest thing on campus, but because it’s a magnet for the most morally serious + the smartest people.

As you know, I think EA is at its best when it’s really effective altruism (“I deeply care about all the bad stuff in the world, desperately want to make a difference, so I gotta think really fcking hard about how I can make the biggest possible difference”) and not altruistic rationalism (“I’m super smart, and I might as well do a lot of good with it”).

This ideal version of EA won’t appeal to all super talented people of course, but that’s fine. Other people can build other movements for that. (It’s what we’re trying to do at The School for Moral Ambition.)

Argh, thanks for catching that! Edited now.

If this perspective involves a strong belief that AI will not change the world much, then IMO that's just one of the (few?) things that are ~fully out of scope for Forethought.

I disagree with this. There would need to be some other reason why they should work at Forethought rather than elsewhere, but there are plausible answers to that — e.g. they work on space governance, or they want to write up why they think AI won't change the world much and engage with the counterarguments.

I can't speak to the "AI as a normal technology" people in particular, but a shortlist I created of people I'd be very excited about includes someone who just doesn't buy at all that AI will drive an intelligence explosion or explosive growth.

I think there are lots of types of people where it wouldn't be a great fit, though. E.g. continental philosophers; at least some of the "sociotechnical" AI folks; more mainstream academics who are focused on academic publishing. And if you're just focused on AI alignment, you'll probably get more at a different org than you would at Forethought.

More generally, I'm particularly keen on situations where V(X, Forethought team) is much greater than V(X) + V(Forethought team), either because there are synergies between X and the team, or because X is currently unable to do the most valuable work they could in any of the other jobs they could be in.

Thanks for writing this, Lizka! 

Some misc comments from me:

  • I have the worry that people will see Forethought as "the Will MacAskill org", at least to some extent, and therefore think you've got to share my worldview to join. So I want to discourage that impression! There's lots of healthy disagreement within the team, and we try to actively encourage disagreement. (Salient examples include disagreement around: AI takeover risk; whether the better futures perspective is totally off-base or not; moral realism / antirealism; how much and what work can get punted until a later date; AI moratoria / pauses; whether deals with AIs make sense; rights for AIs; gradual disempowerment).
  • I think from the outside it's probably not transparent just how involved some research affiliates or other collaborators are, in particular Toby Ord, Owen Cotton-Barratt, and Lukas Finnveden.
  • I'd in particular be really excited for people who are deep in the empirical nitty-gritty — think AI2027 and the deepest criticisms of that; or gwern; or Carl Shulman; or Vaclav Smil. This is something I wish I had more skill and practice in, and I think it's generally a bit of a gap in the team.
  • While at Forethought, I've been happier in my work than I have in any other job. That's a mix of: getting a lot of freedom to just focus on making intellectual progress rather than various forms of jumping through hoops; the (importance)*(intrinsic interestingness) of the subject matter; the quality of the team; the balance of work ethic and compassion among people — it really feels like everyone has each other's back; and things just working and generally being low-drama.

I'm not even sure your arguments would be weak in that scenario. 

Thanks - classic Toby point!  I agree entirely that you need additional assumptions.

I was imagining someone who thinks that, say, there's a 90% risk of unaligned AI takeover, and a 50% loss of EV of the future from other non-alignment issues that we can influence. So EV of the future is 5%.

If so, completely solving AI risk would increase the EV of the future to 50%; halving both would increase it only to 41%.
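
To spell out the arithmetic: writing $p$ for the probability of AI takeover and $f$ for the fraction of the future's value lost to the other issues (my notation, just for this sketch), and assuming the two losses multiply independently:

$$\mathrm{EV} = (1 - p)(1 - f)$$

$$\begin{aligned}
\text{Status quo: } & (1 - 0.9)(1 - 0.5) = 0.05 \\
\text{Takeover risk fully solved: } & (1 - 0)(1 - 0.5) = 0.50 \\
\text{Both halved: } & (1 - 0.45)(1 - 0.25) \approx 0.41
\end{aligned}$$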

But, even so, it's probably easier to halve both than to completely eliminate AI takeover risk, and more generally the case for a mixed strategy seems strong. 

Haha, thank you for the carrot - please have one yourself!

"Harangue" was meant to be a light-hearted term. I agree, in general, on carrots rather than sticks. One style of carrot is commenting things like "Great post!" - even if not adding any content, I think it probably would increase the quantity of posts on the Forum, and somewhat act as a reward signal (more than just karma).
