Hello everyone,
I'm currently self-studying AI s-risks, and I've noticed that instrumental s-risks seem to be discussed much less than alignment failures or malevolent AI. It's quite hard to find detailed analyses or estimates of possible instrumental s-risk scenarios and their probabilities.
It's clear that instrumental s-risks could arise if an AGI itself became sentient and capable of suffering.
However, even if future AIs are not themselves sentient, they might still simulate biological life, which could indirectly generate suffering and thus pose instrumental s-risks.
Below are the few motivations I can currently think of for why an advanced AI might simulate biological organisms, potentially causing suffering. I’d really appreciate any additional scenarios you might suggest:
- If humans still exist:
AI might simulate humans or animals to better understand or predict human behavior. This could be useful for coordination, alignment, or modeling purposes.
- If humans (and most biological life) are extinct (which seems more likely if alignment failed):
AI might still simulate humans or animals for epistemic completeness, that is, to fully understand the origins of its environment or its creators.
Alternatively, it could inherit human-origin goals related to biology or the life sciences.
That said, in the second case, it still seems quite unlikely that an AI would have a strong motivation to simulate biological life purely for these reasons.
I’d love to hear if others see additional plausible motivations for AI to simulate biological systems that might involve suffering.
Thanks very much for reading and sharing your insights!

Some people concerned about s-risks may worry about the information hazard of answering this question publicly. If that's the case, you can email carlosgpt500@gmail.com to answer privately.