Ajeya
Is the 1-3% x-risk from bio including bio catastrophes mediated by AI (via misuse and/or misalignment)? Is it taking into account ASI timelines?

I'm largely deferring to ASB on these numbers, so he can potentially speak in more detail, but my guess is this includes AI-mediated misuse and accident (people using LLMs or bio design tools to invent nastier bioweapons and then either deliberately or accidentally releasing them), but excludes misaligned AIs using bioweapons as a tactic in an AI takeover attempt. Since the biodefense work could also help with the latter, the importance ratio here is probably somewhat stacking the deck in favor of AI (though I don't think it's a giant skew, because bioweapons are just one path to AI takeover).

ASB has pretty short ASI timelines that are broadly similar to mine and these numbers take that into account.

Also, just comparing % x-risk seems to miss out on the value of shaping AI upside / better futures, s-risks + acausal stuff, etc. (also, are you counting AI-enabled coups / concentration of power?). And relatedly, the general heuristic of working on the thing that will be the dominant determinant of the future once developed (and which might be developed soon).

If you feel moved by these things and are a good fit to work on them, that's a much stronger reason to work on AI over bio than most people have. But the vast bulk of generalist EAs working on AI are working on AI takeover and more mundane misuse stuff that feels like it's a pretty apples-to-apples comparison to bio.

Ajeya

I bet a number of generalist EAs (people who are good at operations, conceptual research / analysis, writing, generally getting shit done) should probably switch from working on AI safety and policy to working on biosecurity on the current margin.

While AI risk is a lot more important overall (on my views there's ~20-30% x-risk from AI vs ~1-3% from bio), it seems like bio is a lot more neglected right now and there's a lot of pretty straightforward object-level work to do that could take a big bite out of the problem (something that's much harder to come by in AI, especially outside of technical safety).

If you're a generalist working on AI because it's the most important thing, I'd seriously consider making the switch. A good place to start could be applying to work with my colleague ASB to help our bio team seed and scale organizations working on stuff like pathogen detection, PPE stockpiling, and sterilization tech. IMO switching should be especially appealing if:

  • You find yourself unsatisfied by how murky the theories of change are in AI world and how hard it is to feel good about whether your work is actually important and net positive
  • You have a hard sciences or engineering background, especially mechanical engineering, materials science, physics, etc. (or, of course, a background in biology, though that's less necessary/relevant than you may assume!)
  • You want a vibe of solving technical problems with strong feedback loops rather than a vibe of doing communications and politics, but you're not a good fit for ML research

To be clear, bio is definitely not my lane and I don't have super deep thinking on this topic beyond what I'm sharing in this quick take (and I'm partly deferring to others on the overall size of bio risk). But from my zoomed-out view, the problem seems both very real and refreshingly tractable.

Ajeya

Thanks Mishaal!

  1. I think previous experience taking on operationally challenging projects is definitely the most important thing here, though it may not necessarily be traditional job experience (running a student group or local group can also provide good experience here). Beyond that, demonstrating pragmatism and worldliness in interviews (for example, when discussing real or hypothetical operational or time management challenges) is useful.
  2. I think an important quality in a role like this is steadiness — not getting easily overwhelmed when juggling a lot of competing tasks, and having the ability to get the easy stuff done quickly and make smart calls about prioritizing between the harder, more nebulous tasks. And across all our roles, being comfortable with upward feedback and disagreement is key.

For me personally, research and then grantmaking at Open Phil has been excellent for my career development, and it's pretty implausible that grad school in ML or CS, or an ML engineering role at an AI company, or any other path I can easily think of, would have been comparably useful. 

If I had pursued an academic path, then assuming I was successful on that path, I would be in my first or maybe second year as an assistant professor right about now (or maybe I'd just be starting to apply for such a role). Instead, at Open Phil, I wrote less-academic reports and posts about less established topics in a more home-grown style, gave talks in a variety of venues, talked to podcasters and journalists, and built lots of relationships in industry, academia, and the policy world in the course of funding and advising people. I am likely more noteworthy among AI companies, policymakers, and even academic researchers than I would have been if I had spent that time doing technical research in grad school and then gone for a faculty role — and I additionally get to direct funding, an option which wouldn't have been easily available to me on that alternative path.

The obvious con of OP relative to a path like that is that you have to "roll your own" career path to a much greater degree. If you go to grad school, you will definitely write papers, and then be evaluated based on how many good papers you've written; there isn't something analogous you will definitely be made to do and evaluated on at OP (at least not something clearly publicly visible). But I think there are a lot of pros:

  • The flipside of the social awkwardness and stress that Linch highlighted in one of his questions is that a grantmaking role teaches you how to navigate delicate power dynamics, say no, give tough feedback, and make non-obvious decisions that have tangible consequences on reasonably short timeframes. I think I've developed more social maturity and operational effectiveness than I would have in a research role; this is a pretty important and transferrable skillset.
  • There is more space than there would be in a grad school or AI lab setting to think about weird questions that sit at the intersection of different fields and have no obvious academic home, such as the trajectory of AI development and timelines to very powerful AI. While independent research or other small-scale nonprofit research groups could offer a similar degree of space to think about "weird stuff," OP is unusual in combining that kind of latitude with the ability to direct funding (and thus the ability to help make big material projects happen in the world).

I'm very interested in these paths. In fact, I currently think that well over half the value created by the projects we have funded or will fund in 2023 will go through "providing evidence for dangerous capabilities" and "demonstrating emergent misalignment;" I wouldn't be surprised if that continues being the case.

The way I approach the role, it involves thinking deeply about what technical research we want to see in the world and why, and trying to articulate that to potential grantees (in one-on-one conversations, posts like this one, RFPs, talks at conferences, etc) so that they can form a fine-grained understanding of how we're thinking about the core problems and where their research interests overlap with Open Phil's philanthropic goals in the space. To do this well, it's really valuable to have a good grip on the existing work in the relevant area(s).

I think this is definitely a real dynamic, but a lot of EAs seem to exaggerate it a lot in their minds and inappropriately round the impact of external research down to 0. Here are a few scattered points on this topic:

  • Third party researchers can influence the research that happens at labs through the normal diffusion process by which all research influences all other research. There's definitely some barrier to research insight diffusing from academia to companies (and e.g. it's unfortunately common for an academic project to have no impact on company practice because it just wasn't developed with the right practical constraints in mind), but it still happens all the time (and some types of research, e.g. benchmarks, are especially easy to port over). If third party research can influence lab practice to a substantial degree, then funding third party research just straightforwardly increases the total amount of useful research happening, since labs can't hire everyone who could do useful work.  
  • It will increasingly be possible to do good (non-interpretability) research on large models through APIs provided by labs, and Open Phil could help facilitate that and increase the rate at which it happens. We can also help facilitate greater compute budgets and engineering support.
  • The work of the lab-external safety research community can also impact policy and public opinion; the safety teams at scaling labs are not their only audience. For example, capability evaluations and model organisms work both have the potential to have at least as big an impact on policy as they do on technical safety work happening inside labs.
  • We can fund nonprofits and companies which directly interface with AI companies in a consulting-like manner (e.g. red-teaming consultants); I expect an increasing fraction of our opportunities to look like this.
  • Academics and other external safety researchers we fund now can end up joining scaling labs later (as e.g. Ethan Perez and Collin Burns did), to implement ideas that they developed on the outside; I think this is likely to happen more and more.
  • Some research directions benefit less than others from access to cutting edge models. For example, it seems like there's a lot of interpretability work that can be done on very small models, whereas scalable oversight work seems harder to do without quite smart models.

Professors typically have their own salaries covered, but need to secure funding for each new student they take on, so providing funding to an academic lab allows them to take on more students and grow (it's not always the case that everyone is taking on as many students as they can manage). Additionally, it's often hard for professors to get funding for non-student expenses (compute, engineering help, data labeling contractors, etc) through NSF grants and similar, which are often restricted to students.

Ajeya

Yeah, I feel a lot of this stress as well, though FWIW for me personally research was more stressful. I don't think there's any crisp institutional advice or formula for dealing with this kind of thing unfortunately. One disposition that I think makes it hard to be a grantmaker at OP (in addition to your list, which I think is largely overlapping) is being overly attached to perfection and satisfyingly clean, beautifully-justifiable answers and decisions.

It's hard to project forward of course, but currently there are ~50 applicants to the TAIS team and ~100 to the AI governance team (although I think a number of people are likely to apply close to the deadline).
