Counterargument: EA AI Safety is a talent program for Anthropic.
I wish it weren’t, but that’s what’s going to continue to happen if what the community has become pushes to grow. “Make AI go well” is code for their agenda. EA may be about morality, but its answer on AI Safety is stuck, and it is wrong. Anthropic’s agenda is not “up for renegotiation” at all. If you want to fix EA AI Safety, it has to break out of the mentality that 80k has done so much to instill: that the answer is to get a high-powered job working with AI companies or otherwise “play the game”.
The good EA, the one I loved so much, was about being willing to do what was right even if it was scrappy and unglamorous (especially then, bc it would be more neglected!). EA AI Safety today is sneering reviews of a book that could help rally the public, bc insiders all know we’re doing this wacky Superalignment thing today, and something else tomorrow, but whatever the “reason”, we always support Anthropic trying to achieve world domination. And the young EAs are scared not to seem elite and sophisticated, so they agree, and it breaks my heart. Getting more kids into current EA would not teach them “flexible decision-making”.
Before it grows, EA needs to return to its roots, something I gave up waiting for.
I think you hit the nail on the head: this forum is not a safe space for me. Like you said, I’m an all-time top poster, and yet I’ve gotten snobby discouragement on everything I’ve written since I started working on Pause, with the general theme that advocacy is not smart enough for EAs (and a secondary theme of wanting people to work for AI companies).
This is a serious problem, given what the EA Forum was supposed to be. It’s not a problem of following your rules for polite posts; it goes against something more important: the purpose of the Forum and of the EA community.
But I’ve clearly reached the end of my rope, and since I’d like to keep my account and be able to post new stuff here, I’ll just stop commenting.
As Carl says, society may only get one shot at a pause. So if we got it now, and not when we have a 10x speed-up in AI development because of AI, I think that would be worse. It could certainly make sense now to build the field and to draft legislation. But it’s also possible to advocate for pausing when some threshold or trigger is hit, and not now. It’s also possible that advocating for an early pause burns bridges with people who might have supported a pause later.
This is so out of touch with the realities of opinion change. It sounds smart, and it lets EAs and rationalists keep doing what they’re doing, which is why people repeat it. The claim that we would only get one shot at a pause is asinine: a pause would become more popular as an option the more people were familiar with it. It’s only the AI industry and EA that dislike the idea of pausing, and they pretend like they’re gonna withdraw support, support we never actually had, if we do something they don’t like.
The main thing we can do as a movement is gain popular support by getting the message out. There is no reliable way to “time” asks. None of that makes any sense. Honestly, most people who make this argument are industry apologists who just want you to feel out of your league if you do anything against their interests. Hardware overhang was the same shit.
No, I’m angry that people feel affronted by my pointing out that normal warning-shot discourse entailed hoping for a disaster without feeling much need to make sure the disaster would actually be helpful. They should be glad they have a chance to catch themselves, but instead they silently downvote.
It just feels like so much of the vibe of this forum is people expecting to be catered to, like their support is some prize, rather than people wanting to find out for themselves how to help the world. A lot of EAs have felt comfortable dismissing PauseAI bc it’s not their vibe, or the case wasn’t made in the right way, or they think their friends won’t support it, and it drives me crazy bc aren’t they curious??? Don’t they want to think about how to address AI danger from every angle?
To get a pause at any time, you have to start asking now. Debating exactly when to pause is totally academic, and trying to wait until the last possible minute is not robust. Anyone taking pause advocacy seriously realizes this pretty quickly.
But honestly, all I hear are excuses. You wouldn’t want to help me even if Carl said it was the right thing to do; otherwise you’d have already realized what I said yourself, and you wouldn’t be waiting for Carl’s permission or anyone else’s. What you’re looking for is permission to stay on this corrupt, be-the-problem strategy, and it shows.
No, I’m just concerned that the overwhelming effect of training EAs to do safety stuff that’s highly dependent on where the frontier labs are is that they end up working at frontier labs. In theory there’s plenty of helpful technical stuff to do, but in practice working at a frontier lab is the attractor. There are also knock-on effects on EA as a culture and movement when working at frontier labs is a primary occupation for top talent.