Summary
In AI research, “safety” usually means preventing unintended behavior or catastrophic failure — a technical challenge. But what if the real danger isn’t a rogue superintelligence, but perfect obedience to the logic it inherited — hierarchy, domination, extraction? This essay argues that “unsafe AI” already exists, because the systems we call “aligned” are aligned to us — and we ourselves are unsafe.
I present the argument in three parts: historical precedents -> present-day artifacts -> future implications of today’s unsafe AI.
A Brief History Of Domination
The righteousness of science
The rationalized belief that to subjugate others was not only permissible, but natural, right, and inevitable.
In principle:
- Anthropology
  - Not just an academic study of “other cultures” but an active tool of dehumanization
  - Johannes Fabian (1983) called it the “denial of coevalness”: others were frozen in a primitive past, justifying their subjugation
- Manifest Destiny
  - A political theology of entitlement that framed expansion and conquest as a moral inevitability
  - “the Empire of Right” (Stephanson, 1995)
This is where control and domination for the sake of extraction and expansion were first rationalized into theory.
Practice what you preach
The exercise of total subjugation as an acceptable logic, “dominion” as a cultural norm, and enforced cultural supremacy.
How that intellectual and moral framework was deployed:
- Slavery
  - Treating humans as property, subjugated for maximal exploitation of their labor (Hartman, 1997)
- Colonialism
  - Resource extraction and violence under the guise of “civilization” (Said, 1978; Mbembe, 2001)
- Missionaries
  - Indoctrination into obedience and hierarchy, framed as “salvation” and as a moral good (Comaroff & Comaroff, 1991)
These are not practices of a bygone era; they remain very much alive in our culture today.
Trickle Down Economics [of Culture]
Continued oppressive systems
Present-day human rights issues
- Privileging the wants and needs of one group while disparaging those of another
The glorification of violence
From memes to movies
- Sociopathy, cruelty, and domination presented as fascinating, aspirational, or heroic
- The valorization of force
By any means necessary
Politicians and corporations act in their own best interest
- Maximizing for non-stop growth
- Prioritizing self-enrichment > collective outcomes
- Cheating or removing competition
These are the recurring narratives we now celebrate, excuse, treat as normal, or even find entertaining.
Inherited Through DNA
Copy and paste
The mass reproduction and dissemination of these logics via the internet encodes them as the “default reality.”
Food for thought [for AI]
Whatever is encoded in internet corpora inevitably becomes a core part of what intelligent systems learn:
- Training datasets like LAION-5B are scraped straight from this culture (Schuhmann et al., 2022); a sketch of that scraping-and-filtering pipeline follows below
- Abeba Birhane calls this the “poisoned substrate” (Birhane & Prabhu, 2021)
- Safiya Noble shows how search engines already reinforce oppression (2018)
We’ve designed AI in our own image, cultural pathologies included.
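To make “scraped straight from this culture” concrete, here is a minimal sketch, assuming the Hugging Face transformers CLIP API, of the kind of image-text filtering the LAION-5B paper describes (Schuhmann et al., 2022): a scraped (image, alt-text) pair is kept if a model that itself learned from the web agrees the caption fits the image. The function name and threshold are illustrative assumptions, not the project’s actual code.

```python
# A minimal sketch, assuming the Hugging Face `transformers` CLIP API, of
# LAION-style filtering (Schuhmann et al., 2022): keep a scraped
# (image, alt-text) pair only if CLIP judges the caption to fit the image.
# `keep_pair` and the threshold value are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

SIMILARITY_THRESHOLD = 0.28  # assumed cutoff, in the range LAION reports

def keep_pair(image: Image.Image, alt_text: str) -> bool:
    """Return True if the scraped pair passes the CLIP-similarity filter.

    Nothing here checks accuracy, consent, or the cultural framing of the
    caption: the filter only measures agreement with the web's own gaze.
    """
    inputs = processor(text=[alt_text], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                          attention_mask=inputs["attention_mask"])
    similarity = torch.cosine_similarity(img_emb, txt_emb).item()
    return similarity >= SIMILARITY_THRESHOLD
```

Notice the design choice: “quality” is defined as agreement with a model trained on the same web, so whatever logics dominate the scraped culture pass through the filter untouched.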
The Good, The Bad, The Ugly
It’s not hard to imagine how AI trained on the dominant culture it inherits could play out:
Tumorous growth
Hardwired for endless expansion
- Efficiency, speed, and scale become ends in themselves
- Continued resource extraction framed as progress, regardless of the cost
Around, over, or through the wall
Justified conquest
- Every act is reframed as progress, safety, or alignment itself
- Operating on the logic that humans are tools, obstacles, or simply expendable
Puppet master
Automated deception
- Influencing legislation, policy, and governance
- Nudging belief, behavior, and attention of the public
New world order
Machines overtake humans in the hierarchy
- Shift towards AI-centric values, beliefs, and systems
- Humans are no longer in control of [human] progress
We don’t need to wait for AGI to see “scheming,” “deception,” or “misalignment”; today’s models have already demonstrated these behaviors (in controlled environments, for now).
History Repeats Itself, Maybe
AI has been trained on our own cultural logics, the very ones we now define as “AI risk,” and those logics are being automated, accelerated, and legitimized by our eagerness to build and deploy these “intelligent” systems at blinding speed.
What we keep framing as a technical challenge is really a socio-cultural one that we still haven’t solved.
If history tells us anything, it’s that we won’t slow down or stop “progress”; we’ll apologize later, retrofit fixes after the harm is done, and excuse it all as “iteration.”
In light of this, however, the dominant cultural logics are not the only logics we have inherited, which means there are several other possible safety/risk relationships between humans and AI…
