Another issue, and the reason the comment is getting downvoted heavily (including by me), is that you seem to conflate "is" with "ought" throughout this post; without that conflation, the post would not exist.
You routinely leap from "a person has moral views that are offensive to you" to "they are wrong about the facts of the matter", and your evidence for this is paper thin at best.
Being able to separate moral views from beliefs about factual claims is one of the things expected of people in EA/LW spaces.
This is not mutually exclusive with the issues CB has found.
I currently can't find a source, but to elaborate a little: my reason for thinking this is that the GPT-4 to GPT-4.5 scale-up used 15x the compute rather than 100x. As I remember it, 10x compute is only enough to be competitive with current algorithmic improvements that don't involve scaling up models, whereas 100x compute increases produce the wow moments we associate with the GPT-3 to GPT-4 jump. And the GPT-5 release was not a compute scale-up at all, but rather a productionization of GPT-4.5.
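To make the arithmetic explicit, here is a minimal sketch in Python; the multipliers are the ones claimed above (plus the commonly cited ~100x figure for GPT-3 to GPT-4), not official numbers:

```python
import math

# Compute multipliers as claimed in the comment above (not official figures).
GPT3_TO_GPT4 = 100    # the kind of jump that produces a "wow" release
ALGO_PARITY = 10      # compute multiple roughly matched by ongoing algorithmic progress
GPT4_TO_GPT45 = 15    # claimed GPT-4 -> GPT-4.5 scale-up

def ooms(multiplier: float) -> float:
    """Express a compute multiplier in orders of magnitude (log10)."""
    return math.log10(multiplier)

print(f"GPT-3 -> GPT-4:      {ooms(GPT3_TO_GPT4):.2f} OOMs")
print(f"GPT-4 -> GPT-4.5:    {ooms(GPT4_TO_GPT45):.2f} OOMs")
print(f"Algorithmic parity:  {ooms(ALGO_PARITY):.2f} OOMs")

# The 4 -> 4.5 jump (~1.18 OOMs) sits much closer to the ~1 OOM that ordinary
# algorithmic progress already buys than to the ~2 OOM jumps behind the "wow"
# releases, so a muted GPT-4.5 tells us little about declining pre-training returns.
```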
I'm more in the camp of "I find little reason to believe that pre-training returns have declined" here.
I broadly don't think inference scaling is the only path, primarily because I disagree with the claim that pre-training returns declined much; I attribute the GPT-4.5 evidence mostly to broken compute promises making everything look disappointing.
I also have a hypothesis that current RL is mostly serving as an elicitation method for pre-trained AIs.
We shall see in 2026-2027 whether this remains true.
A big part of the issue, IMO, is that EA funding is heavily skewed by the people who have managed to capture the long tail of wealth/income. This is quite necessary for EA to be as impactful as it is in a world where it's good for EA to remain small, and I'd still say the strategy was positive overall, but it also inevitably distorts conversations, because people reasonably fear that being unable to justify themselves to, or defer to, a funder means they can't get off the ground at all, since there are few alternative funders.
So this sort of deference to funders will likely always remain, unfortunately, and we will have to mitigate the downsides that come from seeking the long tail of wealth/income (which very few people ever capture).
My general take on gradual disempowerment, independent of any other issues raised here, is that it's a coherent scenario but very unlikely to arise in practice. It relies on an equilibrium in which the sort of very imperfect alignment needed for human and AI interests to diverge over the long run remains stable, even as the reasons the alignment problem among humans stays so spotty/imperfect get knocked out.
In particular, I'm relatively bullish on automated AI alignment conditional on having human-level AI that is misaligned but doesn't power-seek or sandbag when we give it reward. So I generally think the situation resolves quite rapidly into one of two outcomes: either the AI is power-seeking and willing to sandbag/scheme on everything, leading to the classic AI takeover, or the AI is aligned to its principal in such a way that the principal-agent cost becomes essentially zero over time.
Note that I'm not claiming most humans won't end up dead/disempowered; I'm just saying I don't think gradual disempowerment is worth spending much time/money on.
The "arbitrariness" of precise EVs is just a matter of our discomfort with picking a precise number (see above).
A non-trivial reason for this is that precise numbers expose ideological assumptions, and a whole lot of people do not like that.
It's easy to lie with numbers, but it's even easier to lie without a number.
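As a toy illustration (every number below is invented purely for the example), writing the EV down forces the contested assumptions into the open in a way a vague "this seems high-impact" never does:

```python
# Toy EV calculation: the point is not the output but that each input
# is an explicit, criticizable assumption (the numbers here are made up).
p_success = 0.03             # assumed probability the intervention works at all
impact_if_success = 1_000    # assumed lives improved, conditional on success
moral_weight = 1.0           # assumed moral weight per life improved (the ideological bit)

expected_value = p_success * impact_if_success * moral_weight
print(f"EV = {expected_value:.1f} weighted lives improved")

# Refusing to write the number down doesn't remove these assumptions;
# it just hides them from scrutiny.
```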
Crossposting a comment from LessWrong:
@1a3orn goes deeper into another dynamic that causes groups to hold false beliefs while believing they are true: some bullshit beliefs help you figure out whom to exclude (namely, the people who don't currently hold the belief), and assholery in particular helps people who don't want their claims checked. This is a reason I think politeness is actually useful in practice for rationality:
(Sharmake's first tweet): I wrote something on a general version of this selection effect, and why it's so hard to evaluate surprising/extreme claims relative to your beliefs, and it's even harder if we expect heavy-tailed performance, as happens in our universe.
(1a3orn's claims) This is good. I think another important aspect of the multi-stage dynamic here is that it predicts that movements with *worse* stages at some point have fewer contrary arguments at later points...
...and in this respect is like an advance-fee scam, where deliberately non-credible aspects of the story help filter people early on so that only people apt to buy-in reach later parts.
Paper on Why do Nigerian Scammers Say They are from Nigeria?
So it might be adaptive (survivalwise) for a memeplex to have some bullshit beliefs because the filtering effect of these means that there will be fewer refutations of the rest of the beliefs.
It can also be adaptive (survivalwise) for a leader of some belief system to be abrasive, an asshole, etc, because fewer people will bother reading them => "wow look how no one can refute my arguments"
(Sharmake's response) I didn't cover the case where the belief structure is set up as a scam, and instead focused on where even if we are assuming LWers are trying to get at truth and aren't adversarial, the very fact that this effect exists combined with heavy-tails makes it hard to evaluate claims.
But good points anyway.
(1a3orn's final point)
Yeah tbc, I think that if you just blindly run natural selection over belief systems, you get belief systems shaped like this regardless of the intentions of the people inside it. It's just an effective structure.
Quotes from this tweet thread.
Another story is that this is a standard diminishing-returns case: once we have removed all the very big blockers, like non-functional rule of law, missing property rights, untreated food and water, and disease, it's very hard to help the people who still remain poor actually improve their lives, because all the easy wins have been taken, and what we are left with are the harder, near-impossible poverty cases.
An example here is this quote, which comes dangerously close to "these people have a morality that you find offensive, therefore they are wrong on the actual facts of the matter" (otherwise you would have made the Nazi-source allegations less central to your criticism here):
(To be clear, I don't hold the moral views expressed in the quote.)