Jamie is the Courses Project Lead at the Centre for Effective Altruism, leading a team running online programmes that inspire and empower talented people to explore the best ways that they can help others. These courses and fellowships provide structured guidance, information, and support to help people take tailored next steps that set them up for high impact.
He has very light-touch involvement as a Fund Manager at the Effective Altruism Infrastructure Fund, which aims to increase the impact of projects that use the principles of effective altruism, by increasing their access to talent, capital, and knowledge.
Lastly, Jamie is President of the board at Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history. (Most of the hard work is being done by the wonderful Jonah Boucher though!)
Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, as co-founder and researcher at Animal Advocacy Careers (which helps people to maximise their positive impact for animals), and as a Program Associate at Macroscopic Ventures (grantmaking focused on s-risks).
Thanks for reviewing and raising this! You're right that the US/China dynamics are central to Situational Awareness's thesis and we underemphasised them. We've now added a dedicated China/US section with its own tab and three expandable cards, evaluating his specific sub-predictions on infrastructure (7nm chips, power, Middle East), algorithms and open source, and strategic dynamics. Would value your review of the updated version if you have time!
Blimey. Did you check with CE about offering it as part of their incubation program (funded by them, maybe paid by results as you say)? And/or other incubators like Catalyze, or fellowship programs (not founders per se) like Constellation? (IIRC they have an affiliated executive coach already)
I'm surprised by "I don't really want a grant" though. E.g. the usual process is basically seed funding grant to check/demonstrate progress --> if you achieve that (or seem on track to), you get renewed funding. The mechanism isn't perfect (maybe you can BS your way to success, or you might be denied funding without good reason), but it's at least ideally fairly results-based.
(I'd be inclined to agree that ideally the founders/participants themselves would pay, but if you have evidence that they are "irrationally self-sacrificial" and will continue to underpay for the service relative to what they'd endorse themselves with hindsight etc, then that seems like a decent case for grant funding.)
This post prompted me to write up an idea I've had in the back of my mind for a while. Asya argues that people in or considering technical or policy roles at AI safety organizations could maybe have more impact doing capacity-building work.
One way to test if this could be a good fit for you: if you have domain expertise in an AI safety or governance topic, creating a structured course around it might be more feasible than you'd expect. AI tools, volunteer facilitators, and people like me with more experience in courses/products can handle a lot of the heavy lifting, so the main contribution is your knowledge and judgment about what matters.
I've written up a short proposal exploring how this could work in practice; I'd be keen to hear from anyone interested in trying it out.
Separately: the discussion/comments on the LessWrong cross-post are pretty interesting regarding the case for and against working on capacity building, so people reading here might like to check through those discussions too.
This post felt motivating and personally reassuring to me, given that I work in capacity building (albeit not solely focused on AI safety).
A couple of updates (or at least: things that feel more salient to me) from the case studies/stories were around the value of personal connections and direct personal encouragement to consider working on [specific thing]. In the stories, it seems that often came from workshops and in-person events, though I'm also wondering if I should be leaning even harder into ways to enable that in the online programs I run.
Cool, makes sense. To be clear, I think contacting representatives is helpful! I wasn't trying to question that.
I don't know anything about the Congress authorisation so will defer on that. I'll just say that if the legality is in dispute rather than unambiguous/settled, then using the word "illegal" might be counterproductive/polarising, whereas "unprecedented" seems unambiguously true.
Separately, here's Claude's direct reply to your specific points in case you're curious (sorry, I don't have enough of a developed inside-view take to respond myself!):