OscarD🔸

Right, but because we have limited resources, we need to choose whether to invest more in just a few stronger layers, or to spread resources more thinly across a larger number of layers. Of course, in an ideal world we would have heaps of really strong layers, but that may be cost-prohibitive.

I am pre-registering my forecasts for the amount of prize money each essay will win. In brief, I expect that these three essays will win just over half the prize money: 

  • Utilitarians Should Accept that Some Suffering Cannot be “Offset”
  • Are longtermist ideas getting harder to find?
  • Discussions of Longtermism should focus on the problem of Unawareness

I didn't spend much time on these forecasts, though: they are mainly based on karma, with an adjustment from my subjective judgement of each essay's title and summary.

This seems true and useful to me, I'm surprised at the low agreement and karma scores!

I discuss another example here, where (using your framing) we cannot rule out that we are in the hinge of history, and since the stakes would then be so high, we ought to act significantly on that basis.

Interested if you agree with this example.

Not sure I follow properly: why would liberal democracy not matter? I think whether biological humans are themselves enhanced in various ways matters less than whether they are getting superhuman (and perhaps super-wise) advice. Though possibly wisdom is different, and you need the principal themselves to be wise, rather than just receiving wise advice.

Possibly, though I expect ASI could also be used to lock in one's values, such that there will be more stasis unless the people in power deliberately embrace dynamism and liberalism of values.

Interesting, I hadn't seen that interview. I stand by the overall claim that AI safety is more prominent in the West than in China, though I am glad to see more people in China becoming safety-oriented.

Re the CCP being more redistributionist: that could be the case, but I am also worried that once individuals aren't economically useful their interests won't be looked out for as much by the state, unless they stay politically empowered, which requires democracy. I think the CCP would still care enough about its people to distribute AI benefits to them even when the people aren't useful investments, but I'm unsure. Whereas I think I would be more surprised if e.g. the US let its people be greatly deprived even if they were ~useless deadweights.

I agree that both possibilities are very risky. Interesting re belief in hell being a key factor, I wasn't thinking about that.

Even if a future ASI would be able to manage today's economy very efficiently in a fully centralised way, possibly the future economy will be so much more complicated that it will still make sense to have some distributed information processing in the market, rather than having all optimisation centrally planned? It seems unclear to me one way or the other, and I assume we won't be able to know with high confidence in advance which economic model will be most efficient post-ASI. But maybe that just reflects my economic ignorance, and others are justifiedly confident.

Thanks, not over-critical at all! Good point: I am fairly confident that by my values a US-led future would be better, but I am quite uncertain how large this effect is, and each individual consideration/argument is fairly fuzzy.

I don't have any particular China expertise, but I work in international AI governance so try to stay quite familiar with at least AI-relevant aspects of things going on in China.

  • Moral innovation: I considered citing something like a comparison of university rankings for philosophy vs the natural sciences, where Chinese universities seem to do better in the latter than the former. But I'm not sure how much to trust such rankings, and my claim is more vibes-based: even though the things I hear are very Western-tinted, I am far more likely to hear about cutting-edge scientific work coming out of China than cutting-edge philosophy. Though yes, it is of course also the case that I personally just find Western philosophy (specifically analytic philosophy, not continental) more useful.
  • Economic stasis: True, I think China is becoming more innovative and dynamic technologically and economically, and it may overall catch up with the West. Though my guess is that liberal, capitalist political-economic systems will still prove better for long-run innovation overall.

Great points; I agree both of those are concerns and don't have much to add. I think the risk of further democratic backsliding in the US is very real, and could be exacerbated by AI. But I suppose a risk of backsliding is better than China already being autocratic.

And interesting re alt proteins, yes, that seems quite plausible to me! If this ends up being the crux, it would probably be worth doing more surveys and social-science work to understand this better.