Jamie_Harris

Courses Project Lead @ Centre for Effective Altruism
3588 karma · Joined · Working (6-15 years) · London N19, UK

Bio


Jamie is the Courses Project Lead at the Centre for Effective Altruism, leading a team running online programmes that inspire and empower talented people to explore the best ways that they can help others. These courses and fellowships provide structured guidance, information, and support to help people take tailored next steps that set them up for high impact.

He has very light-touch involvement as a Fund Manager at the Effective Altruism Infrastructure Fund, which aims to increase the impact of projects that use the principles of effective altruism, by increasing their access to talent, capital, and knowledge.

Lastly, Jamie is President of the board at Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history. (Most of the hard work is being done by the wonderful Jonah Boucher though!)

Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, as co-founder and researcher at Animal Advocacy Careers (which helps people to maximise their positive impact for animals), and as a Program Associate at Macroscopic Ventures (grantmaking focused on s-risks).
 

Comments: 416
Topic contributions: 5

Separately, here's Claude's direct reply to your specific points in case you're curious (sorry, I don't have enough of a developed inside-view take to respond myself!):

On "China don't have any frontier labs, only labs which distill other models": this is probably too strong. DeepSeek introduced genuine architectural innovations (Multi-head Latent Attention, fine-grained MoE) that Epoch AI characterises as real advances, not just distillation. That said, the distillation question is genuinely debated: OpenAI has alleged it, and Chinese labs scraped millions of Claude conversations. The picture is mixed rather than one-sided.

On "no evidence of an arms race": both governments explicitly frame AI as a strategic contest (both opted out of the Feb 2026 responsible AI military declaration), there's confirmed espionage (Linwei Ding convicted Jan 2026 for stealing Google TPU secrets), and $2.5B in chip smuggling. Whether this constitutes an "arms race" depends on your definition, but the competitive dynamic Leopold predicted is clearly present.

Your most interesting point is the last one: that distillation and open source might mean an arms race never materialises because intelligence becomes cheap and accessible. This connects directly to what I think is Leopold's most consequential error. He predicted open source would fade and proprietary algorithms would create a durable American moat. Instead, capable AI is diffusing faster than his framework assumed. You're right that this weakens the case that compute concentration equals geopolitical power, and it's a genuinely underexplored implication of how things have played out.

Thanks for reviewing and raising this! You're right that the US/China dynamics are central to Situational Awareness's thesis and we underemphasised them. We've now added a dedicated China/US section with its own tab and three expandable cards, evaluating his specific sub-predictions on infrastructure (7nm chips, power, Middle East), algorithms and open source, and strategic dynamics. Would value your review of the updated version if you have time!

True, the 3.5 rating seems a bit harsh! I just tweaked the wording that you quoted directly.

Blimey. Did you check with CE about offering it as part of their incubation program (funded by them, maybe paid by results as you say)? And/or other incubators like Catalyze, or fellowship programs (not founders per se) like Constellation? (IIRC they have an affiliated executive coach already)

I'm surprised by "I don't really want a grant", though. E.g. the usual process is basically: a seed funding grant to check/demonstrate progress --> if you achieve that (or seem on track to), you get renewed funding. The mechanism isn't perfect (maybe you can BS your way to success, or you might not get funded despite good reasons), but it's at least ideally fairly results-based.

(I'd be inclined to agree that ideally the founders/participants themselves would pay, but if you have evidence that they are "irrationally self-sacrificial" and will continue to underpay for the service relative to what they'd endorse themselves with hindsight etc, then that seems like a decent case for grant funding.)

I opened your profile and website but couldn't tell what this referred to. I'm intrigued, even if it's no longer accepting sign-ups!

This post prompted me to write up an idea I've had in the back of my mind for a while. Asya argues that people in or considering technical or policy roles at AI safety organizations could maybe have more impact doing capacity-building work.

One way to test if this could be a good fit for you: if you have domain expertise in an AI safety or governance topic, creating a structured course around it might be more feasible than you'd expect. AI tools, volunteer facilitators, and people like me with more experience in courses/products can handle a lot of the heavy lifting, so the main contribution is your knowledge and judgment about what matters.

I've written up a short proposal exploring how this could work in practice; I'd be keen to hear from anyone interested in trying it out.

Separately: the discussion/comments on the LessWrong cross-post are pretty interesting regarding the case for and against working on capacity building, so people reading here might like to check through those discussions too.

This post felt motivating and personally reassuring to me, given that I work in capacity building (albeit not solely focused on AI safety).

A couple of updates (or at least: things that feel more salient to me) from the case studies/stories were around the value of personal connections and direct personal encouragement to consider working on [specific thing]. In the stories, that often seemed to come from workshops and in-person events, though I'm also wondering if I should lean even harder into ways to enable that in the online programmes I run.

Cool, makes sense. To be clear, I think contacting representatives is helpful! I wasn't trying to question that.

I don't know anything about the Congress authorisation so will defer on that. I'll just say that if the legality is in dispute rather than unambiguous/settled, then using the word "illegal" might be counterproductive/polarising, whereas "unprecedented" seems unambiguously true.

Nice one for taking action!

What was the illegal part? Isn't it just unprecedented?

(Checking partly for my own knowledge and also because it seemed quite central to your call to action to the legislators)
