I'm not sure what the author's argument is, but here's my interpretation: the fact that AI risk involves Knightian uncertainty is an argument against assigning a P(doom) to it.
I'm curious to see how EA-in-the-east will develop. I think it could take on very different characteristics compared to the west. Perhaps less thinking about impact in terms of a single individual, and more in terms of a collective?