
Today, I am announcing the launch of Theomachia Labs, because I believe/observe that:

  • There are many talented people who are unusually motivated to work on AI safety, even for free, yet have no long-term position or structured environment in which to do so
  • There are many people, myself included, who think that we do not have much time before ASI arrives and who want to act right now, and to enable others to act right now
  • While research is important, there are many things in AI safety that can be done besides research, and there are many roles for non-researchers
  • There are net-positive research domains and questions in AI safety where much more useful work can be done
  • Some of this work can be scaled and parallelised
  • Most or all of the above creates an opportunity we can act on.

Hence the idea of a decentralised, volunteer-based, scalable lab for AI safety - a global platform to coordinate hundreds, or eventually even thousands, of contributors on AI safety research. Our goal is to let skilled people start working now, without waiting for the next fellowship cycle or a permanent research position.

Core principles

Massive scale. We're an on-ramp, not a filter, designed to harness the full global talent pool of volunteer researchers. We want to be very selective about the research agenda and topics, but relatively non-selective about people, and we believe that, when properly managed, more people can be net-positive than is usually assumed. The current bottleneck, at least for some research areas, is not incoming talent, but management and institutional opportunities.

Sober mission. We assume the default outcome is technical failure. Our research focuses on making risks legible, raising the bar for takeover, and building the case for a global pause. We know this is not the consensus view, but we believe this characterisation to be true and the corresponding interventions to be underprioritised. There is also a significant portion of AI safety people who are demotivated by the prospect of working at orgs that do potentially harmful work, help frontier labs advance capabilities, or produce work that is useless if alignment is hard; we want to be a safe place for such people.

Decentralised governance. We will use expert guidance as the first layer and prediction markets as the second layer of the pipeline that directs our research. The exact framework is yet to be established and will be shaped by feedback from advisors and from reality (that is, by testing it in practice to see what works and what does not). The point is that Theomachia Labs is much more about an organisational approach and operational structure that makes use of the existing opportunity than about specific research topics.
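
To make the two-layer idea concrete, here is a minimal illustrative sketch in Python. Everything in it - the fields, the 0.6 score threshold, the ranking rule - is a hypothetical placeholder rather than a committed design: in the first layer, advisors score topics and can veto them for spillover risk; in the second, prediction-market forecasts rank the survivors.

```python
from dataclasses import dataclass

@dataclass
class Topic:
    name: str
    expert_scores: list[float]  # advisor ratings in [0, 1] (hypothetical scale)
    spillover_vetoed: bool      # True if any advisor flags capabilities-spillover risk
    market_p_useful: float      # prediction-market probability the work proves useful

def prioritise(topics: list[Topic], min_expert_score: float = 0.6) -> list[Topic]:
    # Layer 1: expert guidance - drop vetoed topics and those below the expert-score bar.
    vetted = [
        t for t in topics
        if not t.spillover_vetoed
        and sum(t.expert_scores) / len(t.expert_scores) >= min_expert_score
    ]
    # Layer 2: prediction markets - rank the survivors by forecast usefulness.
    return sorted(vetted, key=lambda t: t.market_p_useful, reverse=True)

if __name__ == "__main__":
    candidates = [
        Topic("interpretability benchmark", [0.8, 0.7], False, 0.55),
        Topic("agentic capability eval", [0.9, 0.8], True, 0.70),  # vetoed at layer 1
        Topic("pause-advocacy survey", [0.7, 0.6], False, 0.40),
    ]
    for t in prioritise(candidates):
        print(t.name, t.market_p_useful)
```

In practice, the real pipeline will be worked out with advisors and iterated on, as described above; the sketch only shows how the two layers could compose.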

That said, we have some strong opinions and principles about research. Chief among them is caution about capabilities spillovers. We recognise that some, or even much, safety research can inadvertently lead to capabilities insights. We are committed to a research selection and publication process that actively weighs and minimises this risk, and creating and dynamically applying a framework for assessing capability-spillover risks across all potential research directions will in fact be our initial research priority. You can read more about the current vision of our research principles here (it may be adjusted based on feedback).

The problem and the solution in detail 

Untapped talent in AI safety: many eager contributors, few opportunities

The AI safety field is witnessing a surge of interest and capable individuals, but available positions have not kept pace. Across major AI safety training programs, acceptance rates are often in the single digits. Collectively, AI safety fellowships accept <5% of applicants, meaning they turn away over 95% of aspiring contributors. In fact, these fellowships now receive more applications each year than the total number of people currently working in AI safety. This imbalance has been termed the “bycatch” problem – a vast pool of would-be researchers and engineers left on the sidelines simply due to limited slots. Many of these rejected candidates are still highly talented and motivated; it seems crazy to many that in such a crucial field, we are not trying harder to incorporate as much of this talent as possible.

Awareness of AI existential risks has grown globally, creating far more potential talent than traditional pipelines can absorb. BlueDot plans to train 100,000 people in alignment fundamentals in the next few years, reflecting the millions now aware of AI risk. Yet only a tiny fraction of those enthusiasts can progress to advanced research roles under the current system. As was noted only a couple of years ago, there may be only on the order of 300–500 people worldwide actively working on AI safety research, even though the pool of individuals with the requisite intelligence or skills likely numbers in the hundreds of thousands. In other words, the field remains small and elite, while an untapped army of capable people stands ready to contribute if given the chance. Many are even willing to work for little or no pay initially, driven by altruism and the desire for career capital. Indeed, volunteer-driven projects in the AI community have shown that passionate contributors can produce significant results when given structure – for example, the AI Safety Camp (a volunteer-based research fellowship program) has “kickstarted high-impact projects with volunteer effort,” where some teams produced enough early results to later secure funding. Volunteers can gain experience and prove their value: one survey found that regular volunteering correlates with 27% better odds of employment, and 60% of hiring managers value volunteer experience as much as paid work. In short, there is a substantial reservoir of global talent eager to help on AI alignment – often willing to start as unpaid volunteers – if only we create the channels to engage them.

Limitations of existing AI safety fellowship programs

Traditional AI safety fellowships and internships (SERI MATS, Cambridge’s ERA fellowship, Anthropic’s Fellows program, AI Safety Camp, etc.) provide valuable training, but they reach only a select few. There are now 20+ full-time AI safety fellowship programs worldwide, yet spots in these programs are extremely scarce relative to demand. Many programs have acceptance rates well under 10%. Notably, opportunities for non-research roles are even more limited. Current fellowships overwhelmingly focus on training researchers, while talented engineers, managers, and communicators are largely “locked out” of the pipeline. Programs that do target policy, operations, or other non-technical roles receive an onslaught of applications, demonstrating huge unmet demand. This competitive filter not only leaves many capable people without a path in, but also sends a discouraging signal that “(technical) research is the only way in” to have an impact. As a result, valuable skill sets in policy, advocacy, and organisation-building go underutilised while everyone chases a few research spots.

Another thing to address is the fragmentation and short horizon of work done in these fellowships. Most AI safety fellowships are brief (8 weeks to a few months) and project-focused. Dozens of small, independent research projects run in parallel, but there is often no overarching coordination among them. Each fellow or team pursues its own idea, which encourages exploration but also means efforts can be scattered and duplicative. Crucially, many fellowship projects end once the program ends – with work left unfinished or papers unpublished – because the participants must return to school or jobs unless they secure further funding. Organisers explicitly acknowledge a “hits-based” approach where some projects succeed and others fail or fizzle out. Without a permanent institutional home, promising research threads risk dying on the vine. Furthermore, the lack of a unified strategy can lead to gaps in coverage: important alignment problems might fall through the cracks if no small team happens to pick them up. In the current model, we effectively have “100 independent mini-research groups” each semester, rather than one coordinated effort. This fragmentation has real advantages, and it is good that such platforms exist, but it also has downsides that a different structure could fix. Such a structure would function not as a replacement for existing fellowships but as a complement to them, addressing the problem of talent left unused and projects left incomplete or unaligned with strategic research agendas.

The research-centric nature of fellowships has also created talent gaps in the broader AI safety ecosystem. Non-research expertise – in areas like management, operations, communication, and policy – is in short supply relative to the field’s needs. There seems to be a consensus that “non-research roles are more important to recruit for at this time” in AI safety organisations. However, very few training programs exist for these roles, and the field has struggled to integrate people who don’t fit the “researcher” mold. This is a structural problem: when nearly all fellowships signal that research is the main path to impact, many who might excel in policy or coordination either try to force themselves into research (and often get filtered out) or give up on the field. The result is an underutilisation of people who could be top contributors in non-research domains. For instance, someone in the 94th percentile of research ability (not quite making the cut for a fellowship) might be in the 99th percentile at government or advocacy work – yet current introductory programs provide little avenue for them. This mismatch leaves critical functions understaffed. Some people have pointed to shortages of organisations and leaders in AI safety as key bottlenecks, not just a shortage of technical ideas. In other words, the community needs more builders, organisers, and specialists in implementation to translate research into impact. Existing fellowships do not usually specialise in cultivating that broader talent pool.

Toward a coordinated, inclusive model for AI safety research

The challenges above underscore why Theomachia Labs’ approach may be valuable. By creating a volunteer-powered, long-term research organisation, Theomachia aims to solve the talent utilisation and coordination problems with a new format.

Theomachia Labs recognises that the AI safety community’s greatest asset may be the thousands of capable individuals eager to contribute outside the tiny elite of fellowship winners. Rather than letting this “bycatch” go to waste, Theomachia provides an open door for anyone globally who is motivated and qualified to help – including those who can only contribute part-time or cannot relocate. This inclusive ethos meets people “where they are,” much like recent part-time programs (e.g. TARA) have done to accommodate professionals with other commitments. By structuring as a volunteer organisation, Theomachia taps into the altruistic energy that already exists in abundance. History shows that volunteers, when well-coordinated, can significantly amplify a field’s capacity. For example, AI Safety Camp has a many-year track record of incubating new researchers through volunteer-led projects; as of 2024, alumni from its volunteer teams went on to found 10 organisations and land 43 jobs in AI safety, proving the model’s efficacy. Theomachia Labs extends this concept by giving volunteers a permanent home to continue contributing beyond a short sprint. This benefits the individuals (who gain experience and a network) and the field (which gains their labour and ideas). As one AI Safety Camp mentor noted, many high-neglect areas can be “kick-started with volunteer effort” and then attract funding after initial successes. Theomachia Labs is built to systematically unlock that volunteer potential at scale, globally.

Unlike the ad-hoc project selection in many fellowships, Theomachia Labs will pursue a unified research agenda guided by domain experts and prediction markets. This more centralised prioritisation ensures that volunteer researchers aren’t each reinventing the wheel or chasing pet projects in isolation. Instead, efforts will align with the most pressing unsolved problems in AI safety, as identified by expert consensus and the safety restrictions we impose. This addresses the critique that current fellowship outputs are scattershot and lack a clear strategic focus. By operating as one cohesive organisation rather than disparate cohorts, Theomachia can direct dozens of contributors toward common goals with clarity and purpose. Clear structure and internal accountability mean projects are less likely to fall through the cracks. Moreover, a permanent lab can undertake multi-phase or long-term research that an 8-week fellowship simply can’t. Promising work won’t be abandoned for lack of next-step support – Theomachia provides the scaffolding to carry research from initial idea to published result and beyond, even as team members leave or join along the way. This responds directly to calls in the community for more organisational capacity: field-builders have noted a shortage of structured institutions to absorb and organise new talent.

An arguably even more important feature is the ecosystem approach – welcoming volunteers in operations, outreach, HR, and other support roles, not only technical research. This is crucial because effective AI safety work is multidisciplinary and requires more than just researchers; it needs project managers, communicators, engineers, policy analysts, community-builders, etc. We can leverage talent that other programs (relatively) overlook. By providing pathways for people with diverse backgrounds, whether an HR specialist or a social media manager, the lab builds out the robust support structure that a growing field demands. For example, even tasks like coordinating research efforts and improving organisational processes can have an outsized impact on AI alignment progress. In the long run, this creates a more resilient talent pipeline: someone who starts in an ops or communications volunteer role can later transition into a paid leadership position as they gain experience and prove their dedication.

Finally, Theomachia Labs explicitly serves as a launchpad for careers – a response to the frustration many feel about there being “no way in” unless you get a top fellowship. Contributors to Theomachia will gain real project experience, mentorship from expert advisors, and demonstrable achievements, all of which make them strong candidates for paid roles in the wider AI safety ecosystem. It is well known that many existing organisations have hired staff who initially came from volunteer or fellowship backgrounds. Theomachia formalises this pathway: volunteers who show impact and leadership can advance to core team roles with compensation as the lab grows (assuming we secure funding). This creates an incentive for talented people to participate even if unpaid at first – there is a clear meritocratic ladder to climb. Additionally, by rotating volunteers through different functions and projects, Theomachia will help them build a broad skill set. This addresses the “experience gap” problem: after fellowships, many alumni still struggle to find jobs because of limited publication records or niche expertise. In Theomachia’s model, however, a volunteer might spend a year or even two contributing and end up with a few co-authored papers, a network of professional contacts, and leadership experience organising a team – all of which significantly improve their employability.

We believe that the value proposition of Theomachia Labs is strongly supported by current data and trends in AI safety, and by what we observe directly in personal interactions. The field is overflowing with capable people who want to help with the problem, but far too many are currently left out or underutilised. Existing fellowship programs, while helpful, are insufficient, and inefficient along dimensions that complementary institutions can fix: they cherry-pick a few individuals, splinter efforts into short projects, and leave systemic talent gaps in their wake. Theomachia Labs’ coordinated, volunteer-centric model directly addresses these issues by scaling opportunities to everyone globally, focusing efforts on priority research, and nurturing an inclusive community where all roles can contribute. This approach aligns with expert recommendations to widen the AI safety pipeline and build more sustainable infrastructure for the field. By converting latent enthusiasm into organised action, Theomachia Labs aims to produce alignment research at greater scale and consistency – and to turn today’s passionate volunteers into tomorrow’s leaders in the fight against ASI ruin.

We are looking for all kinds of people

Basically, we are looking for advisors, operations people, and research people - probably in that order.

Advisors

We need senior research and governance people to provide input on what research is needed, to critique and red-team research proposals, and to ensure that there are no significant capabilities spillovers. We have found some people, but more would be better. That said, we want ideologically aligned advisors, which mostly means people who are serious about not helping frontier labs advance capabilities and who, overall, acknowledge the gravity of the situation.

Besides that, we welcome any formal and informal feedback outside of advisor roles.

Operations people

We now have a basic operations team of 5-10 people, depending on how many stay active. They came either through personal acquaintance or from several relatively small AI safety chats where we made semi-private announcements. With the public announcements now under way, we expect many more people to express interest, and there is a lot of work to do - so if you want to join the founding team, please let us know! All roles and initiatives are welcome, within the restrictions described in this article (not helping to advance capabilities, being cautious about capabilities spillovers, acknowledging that the problem is hard, not doing safety-washing, etc.).

To mention some specific things we need help with:

  • Marketing/outreach
  • Fundraising
  • Helping research teams operationally
  • Helping research teams with technical infrastructure
  • Engagement with advisors
  • HR, hiring
  • Engagement with other research orgs to coordinate the effort
  • Compiling, red-teaming, and finalising research proposals and the research agenda

There are surely other things we need help with; feel free to suggest them!

Research people

Research will probably start in early December, once everything is set up operationally and the research agenda is finalised. For research, we will need both leads and fellows, in a manner similar to the major AI safety fellowships. Leads may take on different roles and be allocated teams and resources of different scope, depending on how much time they are ready to invest and their level of experience.

Where we are now and our immediate plans

Over the next two weeks, starting today and running until early November, we will make public announcements and calls to action in various places to bring in more advisors and operations people. We expect at least 100 applications, given that we received 20+ in one week using only private channels. Once the resulting team expansion and setup are finished, we will finalise the research agenda, set up research infrastructure, and assemble leads and fellows into coordinated teams, so that actual research can start around December 1. Around that time, we will also begin fundraising and coordinating with other AI safety orgs.

FAQ

What are the most pressing bottlenecks right now?

Finding advisors, and screening and interviewing candidates. And money, of course.

Isn't being non-selective going to hurt the quality and relevance of the research agenda?

We are going to be very selective about research but relatively non-selective about people. When a trade-off arises, we prioritise being selective about research.

What if you are wrong on a major thing?

In that case, we are ready to pivot. The initial hypothesis - that there are talented people willing to work as volunteers - already seems confirmed, even over-confirmed: we received many more expressions of interest and excitement than we hoped for. There are other assumptions that still need to be tested, but that's the point - they need to be tested, and we will test them. At a high level, it looks like it is time for a large-scale, volunteer-based AI safety research org - in one form or another.

Why not include more research topics, and why be so paranoid about capabilities spillovers and frontier labs?

Because that is what we believe to be useful, on the object level. We believe doing otherwise would be harmful, and that this priority is neglected. Someone else may launch an organisation with a different research ideology; at Theomachia Labs, these are red lines we are not going to cross. We acknowledge that judging the potential capabilities spillover of a specific research topic is difficult, which is why this question will be intensely researched and debated.

Also, note that we are not limited to technical alignment. We are considering putting significant effort into AI governance research, metaresearch (what topics to prioritise, which topics are harmful, etc.), and research on public engagement and activism, and we are open to many other directions outside technical alignment.

How dependent is everything on funding?

Much more can be done with funding, but something can be done without it. Most importantly, if we get funding, some people will be able to switch to full-time positions (around 60-80 hours instead of 15-30), which will accelerate everything a lot. That said, (1) we will proceed even without funding, at least for some time; and (2) if what we do is relevant, there should be some positive feedback from reality - it may be funding, it may be producing impactful research, it may be finding and empowering talent. So we are going to watch for the presence or absence of this feedback and act accordingly.

Comments

This is actually solving a problem I've been running into. I've been trying to get into fellowships and just... haven't. Reading this I realized it's not just the limited spots. It's that programs only really want researchers. I'm interested in international coordination and policy stuff, which doesn't fit their mold, so I just don't exist to them.

What you're saying actually matters. When programs only talk about research, people like me stop trying. Saying "we need coordinators and policy people" would probably unlock a bunch of people who've already given up.
