Thanks for the quibble; seems big if true! And agreed, it's not something I was tracking when writing the article.
A few thoughts:
Interested in your takes here!
Thanks for the comment, Michael.
A minor quibble: I think it's not clear that you need ASI to end up with dangerous levels of power concentration. So you might need to ban AGI, and to do that you might need to ban AI development pretty soon.
I've been meaning to read your post though, so will do that soon.
Thanks for your post, AJ, and especially this comment, which I found clarifying.
I'd be genuinely curious to hear how Cotton-Barratt and Hadshar see this difference. Is it a meaningful distinction? Are these frameworks reconcilable at different scales of analysis? When would we know which better serves long-term flourishing?
I've only skimmed your post, and haven't read what Owen and I wrote in several years, but my quick take is:
So my guess is that you have a fundamental disagreement with some version of longtermism, but less disagreement with me than you thought.
Thanks, Lizka!
Some misc personal reflections:
One minor addition from me on why (or why not) to work at Forethought: I think the people working at Forethought care pretty seriously about things going well, and are really trying to make a contribution.
I think this is both a really special strength, and something that has pitfalls:
And then a few notes on the sorts of people I'd be really excited to have apply:
Sorry for the slow response here! Agree that diffusion is an important issue. A few thoughts:
h/t Will: having many countries as part of the multilateral project removes their incentives to try to develop frontier AI themselves (and potentially open-source it)
I agree that it's not necessarily true that centralising would speed up US development!
(I don't think we overlook this: we say "The US might slow down for other reasons. It’s not clear how the speedup from compute amalgamation nets out with other factors which might slow the US down: [...]")
Interesting take that it's more likely to slow things down than speed things up. I tentatively agree, but I haven't thought deeply about just how much more compute a central project would have access to, and could imagine changing my mind if it were lots more.
Thanks, I think these points are good.
- Learning may be bottlenecked by serial thinking time past a certain point, after which adding more parallel copies won't help. This could make the conclusion much less extreme.
Do you have any examples in mind of domains where we might expect this? I've heard people say things like 'some maths problems require serial thinking time', but I still feel pretty vague about this and don't have much intuition about how strongly to expect it to bite.
Thanks! I'm now unsure what I think.
if you can select from the intersection, you get options that are pretty good along both axes, pretty much by definition.
Isn't this an argument for always going for the best of both worlds, and never using a barbell strategy?
a concrete use case might be more illuminating.
This isn't super concrete (and I'm not sure if the specific examples are accurate), but for illustrative purposes, what if:
I think a lot of people's intuition would be that the compromise option is the best one to aim for. Should thinking about fat tails make us prefer one or other of the extremes instead?
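To make my own question a bit more tangible, here's a toy Monte Carlo sketch (the distributions, the two axes, and the top-decile cutoff are all made up for illustration, not taken from your post). It compares the best single-axis "extreme" option against the best option that's in the top decile on both axes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of candidate options, each scored on two axes

def compare(samples: np.ndarray, label: str) -> None:
    total = samples.sum(axis=1)
    # "Extreme": the single option that scores highest on either one axis.
    extreme = total[samples.argmax(axis=0)].max()
    # "Compromise": the best total among options in the top 10% on BOTH axes.
    cutoffs = np.quantile(samples, 0.9, axis=0)
    both_good = (samples >= cutoffs).all(axis=1)
    compromise = total[both_good].max()
    print(f"{label}: extreme = {extreme:,.1f}, compromise = {compromise:,.1f}")

# Thin tails: scores drawn from a folded normal distribution.
compare(np.abs(rng.normal(1.0, 1.0, size=(n, 2))), "thin-tailed")
# Fat tails: scores drawn from a Pareto-type (heavy-tailed) distribution.
compare(rng.pareto(1.2, size=(n, 2)) + 1.0, "fat-tailed")
```

In runs like this, the compromise option tends to win under thin tails, while the single-axis extreme tends to dominate under fat tails, because almost all of the value sits in the far tail of one axis. So if the relevant outcomes really are fat-tailed, that would seem to push towards the extremes; if thin-tailed, towards the compromise.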
I guess my prior coming into this is that non-existential catastrophes are still pretty existentially important, because:
It sounds like your prior was that non-existential catastrophes are much, much less important than existential ones, and then these considerations are a big update for you.
So I think part of why I'm less interested in this than you are is just that I have different priors, such that this update is fairly small and doesn't change my prioritisation that much?