Cross-posted from my personal notes. I'm sharing this because I think the EA/AI safety community needs to hear it, and because I've been living it.
I lead AI safety work in Nigeria. When I tell people this, the most common reaction is a polite pause, the kind that says: that's interesting, but is that really AI safety?
I want to argue that it is. And that the gap it represents is one of the most neglected problems in the entire AI safety ecosystem.
The Situation on the Ground
I'm based in Ibadan, Nigeria. I am part of the AI Safety Fundamentals: AI Governance cohort for Nigeria under AI Safety Nigeria, and I hold an AI Lead appointment under the ITU — the UN's agency for ICT. In my day-to-day work, I deploy AI systems in low-resource healthcare settings, build multilingual NLP tools for communities that global AI largely ignores, and try to translate AI safety discourse into something meaningful for African researchers and policymakers.
Here's what I observe from that vantage point:
The global AI safety community is, with very few exceptions, a Western conversation. Its canonical texts were written in Oxford and Berkeley. Its conferences happen in San Francisco and London. Its implicit assumptions about who builds AI, who governs it, and who is harmed or helped by it are shaped almost entirely by high-income, high-resource contexts.
Meanwhile, Africa has 1.4 billion people, the world's youngest and fastest-growing population, and some of the most acute governance vacuums on the planet. Frontier AI is not arriving after Africa figures out its institutions. It is arriving now, into contexts with limited regulatory capacity, under-resourced civil society, and almost no AI safety literacy among the researchers, policymakers, and civil servants who will have to manage its consequences.
This is not a small gap. It is a civilisational-scale oversight.
Why This Is an AI Safety Problem, Not Just an "AI for Good" Problem
I want to be precise here, because I think the distinction matters.
A lot of Global South AI work is about deploying AI to solve local problems: better crop yields, faster disease diagnosis, smarter financial inclusion. That work is valuable. But it is not what I'm describing.
What I'm describing is this: the safety and alignment of advanced AI systems will be shaped, in part, by the governance frameworks, regulatory norms, and institutional capacity that exist when those systems arrive. If Africa, a continent with 54 countries, significant geopolitical weight, and rapidly growing AI adoption, has no seat at that table, the frameworks we build will be incomplete. Worse, they may actively fail African populations in ways that go unnoticed because African researchers aren't in the room to flag them.
A few concrete examples of what I mean:
Alignment to whose values? The majority of RLHF and value alignment work uses predominantly Western annotators, Western-language corpora, and Western ethical frameworks. I work directly on this problem, building culturally situated NLP tools for Yoruba, Hausa, and Pidgin English communities. When I do this work, I encounter, repeatedly, the fact that what an AI system "learns" about mental distress, appropriate behaviour, or social norms from Western training data is often systematically wrong for West African contexts. This is not just an accuracy problem. It is an alignment problem. It is a question of whose values get encoded.
Governance vacuums as catastrophic risk factors. One underappreciated pathway to AI-related catastrophe runs not through a misaligned superintelligence but through the gradual erosion of meaningful human oversight in contexts where regulatory institutions are weak. Africa is full of such contexts. Authoritarian-adjacent governments are already adopting AI surveillance tools. Disinformation systems are already exploiting low-information-literacy environments. The slow-burn risk of AI undermining democratic institutions and human oversight is more acute here, not less, and almost no one in the safety community is working on it.
The absence of African voices at critical junctures. Right now, the norms, standards, and frameworks being negotiated at the ITU, the UN, and in bilateral AI agreements will shape the global AI governance architecture for decades. African countries participate in these processes with thin technical capacity and almost no exposure to AI safety thinking. I sit in some of these rooms. The gap is stark. And the window to build that capacity before the critical junctures pass is closing.
What I'm Actually Doing About It
I don't want this to be an abstract lament. Here's what the work looks like in practice:
I help run structured AI safety and governance cohorts in Nigeria, bringing together researchers, policymakers, and practitioners and walking them through alignment, oversight failure, and catastrophic risk. Most participants have never encountered this framing before. The response is consistently the same: why has nobody brought this to us?
I'm working on research that interrogates how AI systems encode culturally embedded representations of human experience, and what governance responsibilities arise from that. This sits at the intersection of technical alignment and AI welfare in a way that I think is genuinely underexplored.
I'm using my ITU role to inject AI safety thinking into intergovernmental policy dialogue, trying to ensure that "AI governance" in international fora doesn't just mean "economic regulation" but includes meaningful engagement with catastrophic risk.
None of this is easy. There's almost no funding for it. There's almost no community for it. The people doing AI safety work in Africa can be counted on two hands, and most of them are doing it alongside other work, with no dedicated support.

Thank you for your important work! You're probably already speaking to them, but in this broader community and adjacent, I recommend the work of:
For 'role of Africa in frontier AI safety' issues:
For more locally-focused safety work, the African Hub on AI Safety, Peace and Security:
https://www.globalcenter.ai/research/toward-an-african-agenda-for-ai-safety
(among others, of course).
This is exactly the kind of response I was hoping the post would generate, thank you genuinely. I was familiar with Cecil's work at ILINA and the African Hub at UCT, but Sumaya's CASA centre is new to me and I am going down that rabbit hole right now. The Oxford AIGI connection is particularly interesting given the intergovernmental policy angle.
What strikes me reading these is that the ecosystem is more alive than it appears from the outside. The problem is not that the work does not exist; it is that it is not visible enough to the broader EA and AI safety community, which is part of what I was trying to address with the post. These efforts deserve to be in the same conversations as the Anthropic safety teams and the GovAI fellows, not operating in parallel universes.
I will reach out to Cecil and Sumaya directly. If anyone reading this thread is working on connecting these dots more systematically, building the network between African AI safety researchers and the global safety ecosystem, I would love to talk. That connective tissue is precisely what I am trying to build through AI Safety Nigeria, and collaboration is worth far more than duplication.
Thank you again for these pointers. This thread is already doing what good EA Forum threads should do.
Thanks for surfacing this -- in the AI safety courses and organizational research I've been exploring, the ominous absence from agenda-setting of the vast majority of the world, by both geography and population, is really frightening. So this is me giving an ineffectual +1; I have no solutions.
There's a related question, somewhat alongside this one, that I've been hovering around. I'm in Canada, and from my perspective, while the US-China frontier-development poles make the current intense focus on the US make sense, I'm increasingly confused why the potential for middle-power impact seems limited to our failed leverage to shape (i.e. stop) the frantic American development speed. Surely in concert we can do more than helplessly hang on and hope to benefit more than we're screwed?
I finally found a perspective on this worded way better than I could hope to put it, here: https://substack.com/home/post/p-185388441 (How AI Safety Is Getting Middle Powers Wrong - The case for pivoting from global governance to national interests, Anton Leicht).
What interests me is the case for these countries acting explicitly in national self-interest, with AI safety integrated into national security, to gain salience and enable strategic action. I could see this picking up traction even in non-democratic contexts.
I'm curious about your thoughts on how this might resonate in Nigeria, SA, etc?
Hillary, thank you for this and for the Leicht piece which I had not encountered before. It is sharp and I think largely correct, and it maps onto something I experience directly working in Nigeria.
The frame of national interest as the entry point for AI safety in non-Western contexts resonates strongly. In my ITU work and in conversations with Nigerian government officials, the language of existential risk lands poorly. It sounds abstract, Western, and frankly like someone else's problem. But the language of economic sovereignty, of not wanting to be economically colonised a second time through AI-driven labour displacement and data extraction, that lands immediately. The fear is not superintelligence. The fear is that the value generated by Nigerian workers, Nigerian data, Nigerian creativity flows entirely to San Francisco while Nigeria gets the disruption without the upside. That is a tractable safety-relevant concern and it is deeply national.
Where I push back slightly on Leicht is the implicit suggestion that national interest and global catastrophic risk reduction are separable strategies to choose between. From where I sit they are not separable. Building domestic AI safety literacy in Nigeria is simultaneously a national interest play and a global safety play. A Nigerian policymaker who understands misuse risks, oversight failure, and value misalignment is better equipped to protect Nigerian citizens and also more likely to show up at ITU negotiations with something useful to contribute. The two things compound each other.
On your Canada question specifically: the most honest answer is that middle powers including Canada probably cannot stop the race. But they can determine whether the landing is controlled or catastrophic. That is not nothing. It is actually everything.
Thank you for this! I think the literacy angle is really powerful: it taps into knowledge-as-power by informing action, without reducing its value to whether we can directly affect development by the global powers.
I also realize my comment may be too tangential to your original post to really belong here -- I've started a new post on the topic: https://forum.effectivealtruism.org/posts/oELJZFY9LBAkpCccw/is-safe-ai-development-intractable-for-middle-powers-the
Hillary this is a genuinely interesting tension and the Leicht piece sharpens it well. But I want to offer a perspective that I think is missing from both his framing and from the Canadian middle power conversation.
Leicht is essentially writing about Canada, the EU, the UK. Countries with functioning institutions, real technical capacity, and enough economic weight that their national interest is at least legible to the people making frontier AI decisions. When he says pivot to national interest, he means countries that have a national interest coherent enough to pivot to.
Now apply that frame to Nigeria. We are the largest economy in Africa, 220 million people, a tech ecosystem that is genuinely world class in fintech and mobile. And we have almost zero representation in any of the rooms where AI governance decisions are being made. Not because we lack smart people. Because the entire infrastructure of the conversation, the fellowships, the think tanks, the policy networks, the funding, was built without us and has not reconfigured to include us.
So here is what I think the Leicht piece misses. For countries like Nigeria the question is not how do we pivot from global governance to national interest. The question is how do we get into the game at all before the rules are set without us. That is a prior problem. And it is the problem I am actually working on.
The honest answer to your Canada question is this. Middle powers with existing institutional capacity should absolutely do what Leicht says. But they should also be asking which countries are not even at the table yet and what it would take to get them there. Because a stable AI transition requires more than Canada and the EU figuring out their national strategies. It requires the Global South having enough capacity to participate in the conversation as something other than passive recipients of whatever gets decided elsewhere.
That is not an ineffectual plus one. That is actually the work.
"Alignment to whose values?"
precisely — we need to train alternative models from the ground up on more diverse cultural contexts, instead of trying to retrofit a westernized model in non-western contexts
wonder who's working on this?
This is the right framing and I think about it constantly. The retrofit approach, fine-tuning a Western-trained base model on local data, is better than nothing but it is architecturally compromised from the start. You are trying to correct value misalignment at the surface while the deep structure of the model remains shaped by the corpus it was originally trained on. It is like translating a concept that does not exist in the target language and wondering why something is lost.
On who is working on this seriously: ILINA under Cecil Abungu is doing some of the most rigorous thinking on African-context AI development rather than adaptation. The Masakhane community has been building African NLP infrastructure from the ground up for years and is probably the closest thing to what you are describing in practice. My own work on GENSCORE, building culturally situated mental health NLP for Hausa, Yoruba, and Pidgin English communities using lived-experience corpora rather than translated Western instruments, is a small piece of this puzzle applied to a specific domain.
But to be honest, no one is doing this at the scale the problem demands. The compute costs of training frontier models from scratch put it out of reach for most Global South research groups without major institutional backing. Which is part of why the governance and funding conversation matters as much as the technical one. If the resources to build genuinely diverse foundational models only flow to labs in San Francisco and London, the alignment problem remains a Western problem with a Western solution applied everywhere else.
Worth a much longer conversation. What is your context for the question?