Many students wanting to contribute to AI Safety are planning to do a PhD. Having started and then quit a PhD myself, I want to lay out some reasons why not to do this. There are also good reasons to do a PhD, and it can be the right choice for you - but before you make this big decision, I want you to know the reasons against. Even if you end up doing one, these are failure modes to intentionally avoid.
This advice is for people thinking about their career in impact-oriented terms, especially in AI Safety (I think many arguments also apply to other fields). I've given this advice often enough that I figured I should write it down. My experience is in technical AI Safety in a German PhD program, so I'm less confident these arguments apply elsewhere. My advice is partially lived experience, partially regurgitated from others. These are somewhat hot takes; please don't take them as gospel.
Reasons Against
The probability of impact is low
Often, you want your research to be adopted by frontier companies or to inform important decision-makers, but most papers get published and never go anywhere. You spend months or a year on a paper and put it out, but in my assessment, only very few papers coming out of academia end up really affecting the world; most go nowhere. This is partially because research focuses on problems that don't practically matter, fails in other ways, doesn't reach the right people, or is ignored by those who should pay attention to it.
Your odds are much better in companies, where you can directly implement your solutions. Even if you can't join a leading AI company immediately, you want to be "close to the action", because this makes it easier to get your work adopted.
Counter: If you want to do more fundamental research, it's expected and okay that your research will not get implemented directly. Instead, you're adding building blocks for later breakthroughs.
You don't know what problems matter most at the frontier
Being close to the action also means you get information about what the urgent problems actually are. Inside a company, you immediately hear about the safety issues that arise or the evidence that would convince decision-makers. Academics don’t get this luxury. Instead, they often stare at a wall, invent a problem, convince themselves it's important, and then spend years working on that. While this style of research can birth breakthroughs, it also means that much research ends up focusing on less relevant problems.
Many academics are also not great at reasoning about real-world impact. They're guided by incentives to publish in prestigious venues, boost citation counts, and secure grants. Often, their first reaction to a project idea isn’t to consider real-world impact on AI Safety, but whether it would make a publishable and popular paper. Comparable pressures exist in other organisations too, but in my experience, academia has less of an impact-focused culture overall.
PhDs are bad for humans
PhD students tend to be unhappy. Some flourish, but on average, PhDs can harm mental health. I experienced this myself, saw it around me, and studies support this conclusion.
The reasons relate to the work environment: you tackle difficult problems, often alone, often without structure, often with little help or management. PhD students are typically ambitious and put pressure on themselves. Projects stretch for months before you receive any positive feedback. Your success or failure depends on a random number generator (the peer-review system). You will have to handle frustrations such as your beloved paper being rejected repeatedly for stupid reasons, losing months of work because someone else published the same thing first, or grinding hard for months only to see your work disappear into obscurity.
You personally are likely to be happier in another job. Additionally, damage to your mental health will harm your current and future productivity.
Much research can’t be done in academia
Academics don't have access to much compute. This excludes PhD researchers from many important directions, especially those involving frontier LLMs. Furthermore, they lack white-box access (or any access at all) to the most advanced closed-source or unpublished models. Important work remains possible - building evals, interpretability - but it's harder.
PhDs can be poor learning environments
The most important factor for learning is tight feedback loops. In a PhD, projects span months or years, and you only learn whether something worked (did it get published? adopted?) long after wrapping up. You often work alone, maybe getting one meeting with your supervisor per week. Your supervisor might not even be an expert in your specific area, since they supervise many students across different topics.
In a non-profit or company, you typically get more check-ins with your manager and work with a team you can learn from. Ultimately, though, this is dominated by whether you have a great mentor.
A PhD locks you in for a long time
Depending on the country, PhDs take 3–7 years. That's a long time, especially if you think AI timelines might be short. If you're mainly doing a PhD to upskill or build credibility, consider pursuing direct work instead.
Bad reasons to do a PhD
I need a PhD to become a researcher
At least in ML, this is not true. ML and AI Safety are very non-credentialist: what matters is not your degree, but showing that you can do good work. A PhD can be a good time to build a portfolio that demonstrates your quality, but it's not the only way. There are many examples of people who have made it far in ML research without a PhD; e.g., the leads of the interpretability teams at Google and Anthropic both don't hold graduate degrees. Alternative paths include starting as an engineer in a company and slowly transitioning into research roles, taking research positions or internships, or doing independent research.
I don't know what else to do
A PhD is a pretty big commitment. Unfortunately, many people just default into it. University can make it seem like PhDs are the natural next thing to do for smart, research-minded students. But due to the downsides listed above, I think you should only do a PhD if you have good reasons to do so. Instead, do an internship, join a fellowship program, or run other experiments that help you identify and assess your options.
I liked my MSc thesis, so I'll just continue with a PhD
As above, you should consider and experiment with other career options instead of just following the path of least resistance. Enjoying an MSc thesis is some evidence that you would enjoy a PhD. However, a PhD is longer, often with a different supervisor and topic - so this evidence may not generalize.
Academic research is pure and not corrupted by money
Yes, in a PhD, you have the freedom to research things without obvious commercial use (although some AI companies also do fundamental research). However, academia faces other pressures that pull you away from the "pure pursuit of knowledge". Academics also need money, so they compete for grants. To get grants, you need to be legibly impressive or useful to some funding committee, which means publishing in prestigious venues and working on topics the committee thinks are worth funding. Thus, many people crowd into trendy areas, and there is constant pressure to publish or perish.
A PhD gives you the freedom to research what you want
This is partially true. Compared to other jobs, where a manager tells you what to work on, PhDs give you more freedom. However, in some PhDs, you do not get this freedom because you are hired for a specific project or because your supervisor micromanages you (avoid such PhDs). Even in other cases, the opinions of your supervisor will still constrain what you end up working on. If your supervisor hates a project idea, you likely won't pursue it (even if it is good).
Good Reasons to Do a PhD
Overall, a PhD can still be your best path, especially if you cannot get into positions at AI Safety orgs or leading labs. There are good reasons and suitable situations for a PhD. My list isn't comprehensive; I suggest reading Andrej Karpathy and Adam Gleave for more positive perspectives.
Building Research Taste
PhDs give you significant freedom to explore your own ideas. You can have ideas, watch them fail, have more ideas, and slowly improve at generating and judging ideas. This is how you build research taste - one of the most important skills for becoming a strong researcher. You wouldn't develop this if your boss just told you what to work on.
You might lack better options
AI Safety is competitive, and landing a job at a frontier company or AI Safety org can be hard. Mid-to-high-tier PhD programs are often easier to get into. If you've tried and failed to join AIS orgs, a PhD might be a reasonable alternative. Additionally, in a PhD, you might take a spot from someone uninterested in AI Safety, creating a counterfactual safety position, whereas an AIS org would have hired another safety-focused person anyway. For some, visa issues also cut out many options, making PhDs - with their stable multi-year employment - more attractive.
PhDs open (some) doors
The AI industry and AIS ecosystem tend not to care much about credentials. They care whether you can demonstrate that you can do good work. However, in some career paths, it is valuable to have a PhD. If you want to become a professor, it’s necessary. In Governance, it can give you more credibility. And in some organisations, it could help you rise higher up the ladder.
Some Advice If You Do a PhD
You don’t need to finish your PhD
Instead, view the PhD as something you're trying out for a while. Before starting, I committed to reevaluating whether the PhD was still the right thing for me, and to leaving if it wasn't. When the time came, I noticed I was unhappy, unproductive, and no longer believed in my work's impact. That early intention helped me make the decision (which turned out great).
Quitting your PhD is underrated; many more people should do it. I see PhD students who really should quit but don't, partly because quitting is emotionally difficult - you might feel like you're failing, letting people down, or taking a huge risk.
Beware of:
- Bad supervisors. Students are quite dependent on their supervisor. If they're an asshole or just don't support you, you'll struggle. Before joining a lab, talk to current and past students to gauge what they're like as a mentor.
- PhDs with little freedom. Sometimes students get hired to work on a specific project because the professor got a grant. This cancels out the PhD's most attractive aspect (the freedom). Check beforehand how much freedom your supervisor will give you. Can you pursue whatever you want? Do they have a clear plan for you? Will they reject most of your ideas?
- Isolated locations. Being socially and physically close to other AI Safety researchers or "users of your research" who might implement your work is valuable for choosing better research ideas. Beware of PhDs in locations without an active AIS or AI industry ecosystem.
Consider doing something else first
Instead of rushing directly into a PhD, consider trying some of the options below first. Joining an internship or fellowship program, or working for a while, can give you perspective on what you want from a PhD and whether you actually need one. It's better to do these things before a PhD rather than during or after, because then they can inform whether and how you pursue one.
What are the alternatives?
For people wanting to get into AI Safety Research
- Try directly landing a job at an AI Safety organisation or in an AI Safety role at a company. These are competitive, but if you can get in directly, you don't need the PhD.
- Join Fellowship/research programs like the ones listed here
- Start or join an AI Safety startup (see Catalyze and Seldon Lab)
- Do independent research on AI Safety (although this is a very risky path, in which most people struggle)
- Take a non-AI Safety job in an environment that will help you grow a lot, e.g., an internship or job in Big Tech, or a role at an AI startup.
- You might also consider that there are ways other than research to contribute to AI Safety: ML engineering, field building, operations, communications, org building, advocacy, …

Counterbalance: if you want to do research, do a PhD if you can get one. It's the easiest research funding source available for someone who doesn't already have a PhD. Much of the rest can be solved by being smart and strategic about what you actually research. You can attend fellowships, research programs, events, and trainings while doing a PhD.
I don't think EA should be recommending that its researchers drop out of grad school.
(This is not necessarily disagreeing with you - I think a lot of people who believe they want to do research actually don't; they just want to stay in a familiar university environment, and they need to figure this out.)
I also do recommend working for a while in a "normal job" before going back to uni for a PhD, as I think that's a really great way of sorting out genuine interest in research from inertia about not wanting to leave the university system.
I will plug my CDT here: https://www.lancaster.ac.uk/stor-i/
I appreciate the Counterbalance!