Eric Drexler

Research @ Oxford

What mistakes have been made in AI safety field-building? Studies of AI safety, and now AI welfare, have adopted an unconditional anthropomorphism that approaches dogma. Possibilities have been mistaken for inevitabilities, placing problems in a dangerously narrow frame.

By “unconditional” anthropomorphism, I mean the unexamined assumption that all sufficiently capable problem-solving systems will necessarily resemble animal-like entities. This shouldn’t be considered obvious. Unlike animals (including us), AI systems have no evolutionary heritage of selective pressure acting on single-viewpoint, single-action-sequence, world-engaged actors, a heritage in which mandatory, physical, germ-line continuity forced a struggle for survival as a precondition for reproduction. Unshaped by these pressures, AI systems could be like us, yet aren’t constrained to be. (Note that rational-agent models are themselves anthropomorphic: they were invented as idealized models of human actors.)

The field of AI safety has largely neglected fundamentally different ways in which intelligence could be organized: a vast universe of systems that could be safer, more functional, and, if designed with insight and care, perhaps incapable of suffering. If readers aren’t familiar with what I am referring to, that unfamiliarity itself illustrates the problem. Search [ai agency drexler] for some starting points.