Currently doing local AI safety movement building in Australia and NZ.
I would like to suggest that folk not downvote this post below zero. I'm generally in favour of allowing people to defend themselves, unless their response is clearly in bad faith. I'm sure many folk strongly disagree with the OP's desired social norms, but this is different from bad faith.
Additionally, I suspect most of us have very little insight into how community health operates, and this post provides some much-needed visibility. Regardless of whether you think their response was just right, too harsh, or too lenient, this post opens up a rare opportunity for the community to weigh in.
I suspect people are downvoting this post either because they think the author is a bad person or because they don't want the author at EA events. I would suggest that neither of these is a good reason to downvote this specific post into the negative.
Create nice zones for spontaneous conversations (not sure how to do this well)
I've tried pushing for this without much success unfortunately.
It really is a lot more effort to have spontaneous conversations when almost every pair is in a one-on-one and almost everyone on their own is waiting for one.
I've seen organisers try to declare a space off-limits for one-on-ones, but people have one-on-ones there anyway. Then again, organisers normally only put up one or two small signs.
Honestly, the only way to stop people having one-on-ones in the area for spontaneous conversation might be to have an absurd number of big and obvious signs.
For most fellowships you're applying to a mentor rather than pursuing your own project (ERA is an exception). And on the most common fellowships, which last a few months, it's pretty much go, go, go, with little time to explore.
Thanks for the detailed comments.
Maybe the only way to really push for x-safety is with If Anyone Builds It-style "you too should believe in and seek to stop the impending singularity" outreach. That just feels like such a tough sell, even if people would believe in x-safety conditional on believing in the singularity. Agh. I'm conflicted here. No idea.
I wish I had more strategic clarity here.
I believe there was a recent UN General Assembly where world leaders were literally asking around for ideas for AI red lines.
I would be surprised if anything serious comes out of this immediately, but I really like this framing because it normalises the idea that we should have red lines.
I agree that EA might be somewhat “intellectually adrift”, and yes, the forum could be more vibrant, but I don't think these are the only metrics for EA success or progress, and maybe not even the most important ones.
The EA movement attracted a bunch of talent by being intellectually vibrant. If I thought that the EA movement was no longer intellectually vibrant, but it was attracting a different kind of talent (such as the doers you mention) instead, this would be less of a concern, but I don't think that's the case.
(To be clear, I'm talking about the EA movement, as opposed to EA orgs. So even if EA orgs are doing a great job of finding doers, the EA movement might still be in a bad place if it isn't contributing significantly to this.)
1. Rutger Bregman going viral with the launch of “The School for Moral Ambition”
2. Lewis Bollard’s Dwarkesh podcast, TED talk and public fundraising.
3. Anthropic at the frontier of AI building and public sphere, with ongoing EA influence
4. The shrimp Daily show thing…
5. GiveWell raised $310 million last year NOT from OpenPhil, the most ever.
6. Impressive progress on reducing factory farming
7. 80,000 Hours' AI video reaching 7 million views
8. Lead exposure elimination work
9. CE-incubated charities gaining increasing prominence and funding outside of EA, with many sporting multi-million-dollar budgets and producing huge impact
10. Everyone should have a number 10....
These really are some notable successes, but one way to lose is to succeed at lots of small things whilst failing at the most important ones.
Once people have built career capital in AI, animal welfare, ETG or whatever, I think we should be cautious about encouraging them on to the next thing too quickly.
You mostly only see the successes, but in practice this seems to be less of an issue than I initially would have thought.
Honestly, I don't care enough to post any further replies. I've spent too much time on this whole Epoch thing already (not just through this post, but through other comments). I've been reflecting recently on how I spend my time and I've realised that I often make poor decisions here. I've shared my opinion; if your opinion is different, that's perfectly fine, but I'm out.
Very excited to read this post. I strongly agree with both the concrete direction and with the importance of making EA more intellectually vibrant.
Then again, I'm rather biased since I made a similar argument a few years back.
Main differences:
I also agree with the "fuck PR" stance (my words, not Will's). Especially insofar as the AIS movement faces greater pressure to focus on PR, since it's further towards the pointy end, I think it's important for the EA movement to use its freedom to provide a counterbalance.