
ANTHONIO OLADIMEJI

AI Safety Researcher @ University of Ibadan

Comments

Hillary, this is a genuinely interesting tension, and the Leicht piece sharpens it well. But I want to offer a perspective that I think is missing from both his framing and from the Canadian middle-power conversation.

Leicht is essentially writing about Canada, the EU, the UK: countries with functioning institutions, real technical capacity, and enough economic weight that their national interest is at least legible to the people making frontier AI decisions. When he says pivot to national interest, he means countries that have a national interest coherent enough to pivot to.

Now apply that frame to Nigeria. We are the largest economy in Africa, 220 million people, a tech ecosystem that is genuinely world-class in fintech and mobile. And we have almost zero representation in any of the rooms where AI governance decisions are being made. Not because we lack smart people. Because the entire infrastructure of the conversation (the fellowships, the think tanks, the policy networks, the funding) was built without us and has not reconfigured to include us.

So here is what I think the Leicht piece misses. For countries like Nigeria, the question is not how do we pivot from global governance to national interest. The question is how do we get into the game at all before the rules are set without us. That is a prior problem. And it is the problem I am actually working on.

The honest answer to your Canada question is this. Middle powers with existing institutional capacity should absolutely do what Leicht says. But they should also be asking which countries are not even at the table yet and what it would take to get them there. Because a stable AI transition requires more than Canada and the EU figuring out their national strategies. It requires the Global South having enough capacity to participate in the conversation as something other than passive recipients of whatever gets decided elsewhere.

That is not an ineffectual plus one. That is actually the work.

Hillary, thank you for this and for the Leicht piece which I had not encountered before. It is sharp and I think largely correct, and it maps onto something I experience directly working in Nigeria.

The frame of national interest as the entry point for AI safety in non-Western contexts resonates strongly. In my ITU work and in conversations with Nigerian government officials, the language of existential risk lands poorly. It sounds abstract, Western, and frankly like someone else's problem. But the language of economic sovereignty, of not wanting to be economically colonised a second time through AI-driven labour displacement and data extraction, that lands immediately. The fear is not superintelligence. The fear is that the value generated by Nigerian workers, Nigerian data, Nigerian creativity flows entirely to San Francisco while Nigeria gets the disruption without the upside. That is a tractable safety-relevant concern and it is deeply national.

Where I push back slightly on Leicht is the implicit suggestion that national interest and global catastrophic risk reduction are separable strategies to choose between. From where I sit they are not separable. Building domestic AI safety literacy in Nigeria is simultaneously a national interest play and a global safety play. A Nigerian policymaker who understands misuse risks, oversight failure, and value misalignment is better equipped to protect Nigerian citizens and also more likely to show up at ITU negotiations with something useful to contribute. The two things compound each other.

On your Canada question specifically: the most honest answer is that middle powers, including Canada, probably cannot stop the race. But they can determine whether the landing is controlled or catastrophic. That is not nothing. It is actually everything.

This is the right framing and I think about it constantly. The retrofit approach, fine-tuning a Western-trained base model on local data, is better than nothing but it is architecturally compromised from the start. You are trying to correct value misalignment at the surface while the deep structure of the model remains shaped by the corpus it was originally trained on. It is like translating a concept that does not exist in the target language and wondering why something is lost.
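To make concrete what I mean by the retrofit approach, here is a minimal sketch of that pattern: take a Western-trained base checkpoint and fine-tune it on a local-language text corpus. The model name, corpus path, and hyperparameters are illustrative placeholders, not a recommendation of any particular setup.

```python
# Illustrative sketch of the "retrofit" pattern: fine-tuning a Western-trained
# base model on local-language text. Placeholders throughout; not a real pipeline.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

BASE_MODEL = "gpt2"                          # placeholder: any Western-trained base checkpoint
LOCAL_CORPUS = "local_language_corpus.txt"   # placeholder: plain-text local-language corpus

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token    # GPT-2-style models ship without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Tokenize the local corpus with the base model's own tokenizer: the vocabulary
# and the pretraining distribution remain those of the original Western corpus.
dataset = load_dataset("text", data_files={"train": LOCAL_CORPUS})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Standard causal-LM fine-tuning: the collator copies input_ids into labels.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="retrofit-checkpoint",
        num_train_epochs=1,
        per_device_train_batch_size=2,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()  # surface-level adaptation; the deep structure of the model is unchanged
```

Even when this runs well, everything upstream of it, the tokenizer, the pretraining data, the implicit values, is inherited from the base model. That is the architectural compromise I am pointing at.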

On who is working on this seriously: ILINA under Cecil Abungu is doing some of the most rigorous thinking on African-context AI development rather than adaptation. The Masakhane community has been building African NLP infrastructure from the ground up for years and is probably the closest thing to what you are describing in practice. My own work on GENSCORE, building culturally situated mental health NLP for Hausa, Yoruba, and Pidgin English communities using lived-experience corpora rather than translated Western instruments, is a small piece of this puzzle applied to a specific domain.

But to be honest, no one is doing this at the scale the problem demands. The compute costs of training frontier models from scratch put it out of reach for most Global South research groups without major institutional backing. Which is part of why the governance and funding conversation matters as much as the technical one. If the resources to build genuinely diverse foundational models only flow to labs in San Francisco and London, the alignment problem remains a Western problem with a Western solution applied everywhere else.

Worth a much longer conversation. What is your context for the question?

This is exactly the kind of response I was hoping the post would generate; thank you, genuinely. I was familiar with Cecil's work at ILINA and the African Hub at UCT, but Sumaya's CASA centre is new to me, and I am going down that rabbit hole right now. The Oxford AIGI connection is particularly interesting given the intergovernmental policy angle.

What strikes me reading these is that the ecosystem is more alive than it appears from the outside. The problem is not that the work does not exist; it is that it is not visible enough to the broader EA and AI safety community, which is part of what I was trying to address with the post. These efforts deserve to be in the same conversations as the Anthropic safety teams and the GovAI fellows, not operating in parallel universes.

I will reach out to Cecil and Sumaya directly. If anyone reading this thread is working on connecting these dots more systematically (building the network between African AI safety researchers and the global safety ecosystem), I would love to talk. That connective tissue is precisely what I am trying to build through AI Safety Nigeria, and collaboration is worth far more than duplication.

Thank you again for these pointers. This thread is already doing what good EA Forum threads should do.

The AI Safety Conversation Is Missing 1.4 Billion People

Cross-posted from my personal notes. I'm sharing this because I think the EA/AI safety community needs to hear it, and because I've been living it.

I lead AI safety work in Nigeria. When I tell people this, the most common reaction is a polite pause, the kind that says: that's interesting, but is that really AI safety?

I want to argue that it is. And that the gap it represents is one of the most neglected problems in the entire AI safety ecosystem.