Sorry, I didn't mean mislabelled in the sense of having the labels the wrong way around. I meant that the points you describe aren't necessarily the ends of the spectrum -- for instance, worse than losing all alignment knowledge would be losing all the alignment knowledge while keeping all of the knowledge about how to build highly effective AI.
At least that's what I had in mind at the time of writing my comment. I'm now wondering if it would actually be better to keep the capabilities knowledge, because it makes it easier to do meaningful alignment work as you do the rerun. It's plausible that this is actually more important than the more explicitly "alignment" knowledge. (Assuming that compute will be the bottleneck.)
You're discussing catastrophes that are big enough to set the world back by at least 100 years. But I'm wondering if a smaller threshold might be appropriate. Setting the world back by even 10 years could be enough to mean re-running a lot of the time of perils; and we might think that catastrophes of that magnitude are more likely. (This is my current view.)
With the smaller setbacks you probably have to get more granular, asking "in precisely which ways is this setting us back?" rather than just analysing it in the abstract. But that's a complication we can just face.
Why do you think alignment gets solved before reasonably good global governance? It feels to me pretty up in the air which target we should be aiming to hit first. (Hitting either would help us with the other. I do think that we likely want to get important use out of AI systems before we establish good global governance; but we might then want to do the governance thing to establish enough slack to take the potentially harder parts of the alignment challenge slowly.)
On section 4, where you ask about retaining alignment knowledge:
For much of the article, you talk about post-AGI catastrophe. But when you first introduce the idea in section 2.1, you say:
the period from now until we reach robust existential security (say, stable aligned superintelligence plus reasonably good global governance)
It seems to me like this is a much higher bar than reaching AGI -- and one for which the arguments that we could still be exposed to subsequent catastrophes seem much weaker. Did you mean to just say AGI here?
Yeah roughly the thought is "assuming concentrated power, it matters what the key powerful actors will do" (the liberal democracy comment was an aside saying that I think we should be conditioning on concentrated power).
And then for making educated guesses about what the key powerful actors will do, it seems especially important to me what their attitudes will be at a meta-level: how they prefer to work out what to do, etc.
I might have thought that some of the most important factors would be things like:
(Roughly because: either power is broadly distributed, in which case your comments about liberal democracy don't seem to have so much bite; or it's not, in which case it's really the values of the leadership that matter.) But I'm not sure you really touch on these. Interested to hear if you have thoughts.
Thanks AJ!
My impression is that although your essay frames this as a deep disagreement, you're in fact reacting to something that we're not saying. I basically agree with the heart of the content here -- that there are serious failure modes to be scared of when attempting to orient to the long term, and that something like loop-preservation is (along with the various more prosaic welfare goods we discussed) essential for the health of even a strictly longtermist society.
However, I think that what we wrote may have been compatible with the view you have such a negative reaction to, and at minimum I wish we'd spent some more words exploring this kind of dynamic. So I appreciate your response.
I think my impression is that the strategic upshots of this are directionally correct, but maybe not a huge deal? I'm not sure if you agree with that.