I bet a number of generalist EAs (people who are good at operations, conceptual research / analysis, writing, generally getting shit done) should probably switch from working on AI safety and policy to working on biosecurity on the current margin.
While AI risk is a lot more important overall (on my view there's ~20-30% x-risk from AI vs ~1-3% from bio), bio seems much more neglected right now, and there's a lot of pretty straightforward object-level work to do that could take a big bite out of the problem (something that's much harder to come by in AI, especially outside of technical safety).
If you're a generalist working on AI because it's the most important thing, I'd seriously consider making the switch. A good place to start could be applying to work with my colleague ASB to help our bio team seed and scale organizations working on stuff like pathogen detection, PPE stockpiling, and sterilization tech. IMO switching should be especially appealing if:
To be clear, bio is definitely not my lane and I don't have super deep thinking on this topic beyond what I'm sharing in this quick take (and I'm partly deferring to others on the overall size of bio risk). But from my zoomed-out view, the problem seems both very real and refreshingly tractable.
Thanks Mishaal!
For me personally, research and then grantmaking at Open Phil has been excellent for my career development, and it's pretty implausible that grad school in ML or CS, or an ML engineering role at an AI company, or any other path I can easily think of, would have been comparably useful.
If I had pursued an academic path, then assuming I was successful on that path, I would be in my first or maybe second year as an assistant professor right about now (or maybe I'd just be starting to apply for such a role). Instead, at Open Phil, I wrote less-academic reports and posts about less established topics in a more home-grown style, gave talks in a variety of venues, talked to podcasters and journalists, and built lots of relationships in industry, academia, and the policy world in the course of funding and advising people. I am likely more noteworthy among AI companies, policymakers, and even academic researchers than I would have been if I had spent that time doing technical research in grad school and then gone for a faculty role — and I additionally get to direct funding, an option which wouldn't have been easily available to me on that alternative path.
The obvious con of OP relative to a path like that is that you have to "roll your own" career path to a much greater degree. If you go to grad school, you will definitely write papers, and then be evaluated based on how many good papers you've written; there isn't something analogous you will definitely be made to do and evaluated on at OP (at least not something clearly publicly visible). But I think there are a lot of pros:
I'm very interested in these paths. In fact, I currently think that well over half the value created by the projects we have funded or will fund in 2023 will go through "providing evidence for dangerous capabilities" and "demonstrating emergent misalignment"; I wouldn't be surprised if that continues to be the case.
The way I approach the role involves thinking deeply about what technical research we want to see in the world and why, and trying to articulate that to potential grantees (in one-on-one conversations, posts like this one, RFPs, talks at conferences, etc.) so that they can form a fine-grained understanding of how we're thinking about the core problems and where their research interests overlap with Open Phil's philanthropic goals in the space. To do this well, it's really valuable to have a good grip on the existing work in the relevant area(s).
I think this is definitely a real dynamic, but a lot of EAs seem to exaggerate it in their minds and inappropriately round the impact of external research down to zero. Here are a few scattered points on this topic:
Professors typically have their own salaries covered, but need to secure funding for each new student they take on, so providing funding to an academic lab allows them to take on more students and grow (not every professor is already taking on as many students as they can manage). Additionally, it's often hard for professors to get funding for non-student expenses (compute, engineering help, data labeling contractors, etc.) through NSF grants and similar, which are often restricted to supporting students.
Yeah, I feel a lot of this stress as well, though FWIW for me personally research was more stressful. I don't think there's any crisp institutional advice or formula for dealing with this kind of thing, unfortunately. One disposition that I think makes it hard to be a grantmaker at OP (in addition to your list, which I think largely overlaps with mine) is being overly attached to perfection and to satisfyingly clean, beautifully justifiable answers and decisions.
I'm largely deferring to ASB on these numbers, so he can potentially speak in more detail, but my guess is that this includes AI-mediated misuse and accidents (people using LLMs or bio design tools to invent nastier bioweapons and then either deliberately or accidentally releasing them), and excludes misaligned AIs using bioweapons as a tactic in an AI takeover attempt. Since the biodefense work could also help with the latter, the importance ratio here probably somewhat stacks the deck in favor of AI (though I don't think it's a giant skew, because bioweapons are just one path to AI takeover).
ASB has pretty short ASI timelines that are broadly similar to mine and these numbers take that into account.
If you feel moved by these things and are a good fit to work on them, that's a much stronger reason to work on AI over bio than most people have. But the vast bulk of generalist EAs working on AI are working on AI takeover and more mundane misuse stuff, which feels like a pretty apples-to-apples comparison to bio.