Owen Cotton-Barratt

My impression is that the strategic upshots of this are directionally correct, but maybe not a huge deal? I'm not sure if you agree with that.

Sorry, I didn't mean mislabelled in terms of having the labels the wrong way around. I meant that the points you describe aren't necessarily the ends of the spectrum -- for instance, worse than just losing all alignment knowledge is losing all the alignment knowledge while keeping all of the knowledge about how to build highly effective AI.

At least that's what I had in mind at the time of writing my comment. I'm now wondering if it would actually be better to keep the capabilities knowledge, because it makes it easier to do meaningful alignment work as you do the rerun. It's plausible that this is actually more important than the more explicitly "alignment" knowledge. (Assuming that compute will be the bottleneck.)

You're discussing catastrophes that are big enough to set the world back by at least 100 years. But I'm wondering if a smaller threshold might be appropriate. Setting the world back by even 10 years could be enough to mean re-running a lot of the time of perils, and we might think that catastrophes of that magnitude are more likely. (This is my current view.)

With the smaller setbacks you probably have to get more granular in terms of asking "in precisely which ways is this setting us back?", rather than just analysing it in the abstract. But that can just be faced.

Why do you think alignment gets solved before reasonably good global governance? It feels to me pretty up in the air which target we should be aiming to hit first. (Hitting either would help us with the other. I do think that we likely want to get important use out of AI systems before we establish good global governance; but we might then want to do the governance thing to establish enough slack to take the potentially harder parts of the alignment challenge slowly.)

On section 4, where you ask about retaining alignment knowledge:

  • It feels kind of like you're mislabelling the ends of the spectrum?
  • My guess is that rather than asking "how much alignment knowledge is lost?", you should be asking about the differential between how much AI knowledge is lost and how much alignment knowledge is lost.
  • I'm not sure that's quite right either, but it feels a little bit closer?

For much of the article, you talk about post-AGI catastrophe. But when you first introduce the idea in section 2.1, you say:

"the period from now until we reach robust existential security (say, stable aligned superintelligence plus reasonably good global governance)"

It seems to me like this is a much higher bar than reaching AGI -- and one for which the arguments that we could still be exposed to subsequent catastrophes seem much weaker. Did you mean to just say AGI here?

Yeah, roughly the thought is "assuming concentrated power, it matters what the key powerful actors will do" (the liberal democracy comment was an aside saying that I think we should be conditioning on concentrated power).

And then for making educated guesses about what the key powerful actors will do, it seems especially important to me what their attitudes will be at a meta-level: how they prefer to work out what to do, etc. 

I might have thought that some of the most important factors would be things like: 

  • How likely is leadership to pursue intelligence enhancement, given technological opportunity?
  • How likely is leadership to pursue wisdom enhancement, given technological opportunity? 

(Roughly because: either power is broadly distributed, in which case your comments about liberal democracy don't seem to have so much bite; or it's not, in which case it's really the values of leadership that matter.) But I'm not sure you really touch on these. I'd be interested if you have thoughts.

Thanks AJ!

My impression is that although your essay frames this as a deep disagreement, in fact you're reacting to something that we're not saying. I basically agree with the heart of the content here -- that there are serious failure modes to be scared of if attempting to orient to the long term, and that something like loop-preservation is (along with the various more prosaic welfare goods we discussed) essential for the health of even a strict longtermist society.

However, I think that what we wrote may have been compatible with the view that you have such a negative reaction to, and at minimum I wish that we'd spent some more words exploring this kind of dynamic. So I appreciate your response.

That makes sense! 

(I'm curious how much you've invested in giving them detailed prompts about what information to assess in applying particular tags, or even more structured workflows, vs just taking smart models and seeing if they can one-shot it; but I don't really need to know any of this.)
