Policy

Quick takes

Potential opportunity to influence the World Bank away from financing factory farms

The UK Parliament is currently holding an open consultation on the future of UK aid and development assistance, closing on November 14, 2025. It includes the question, "Where is reform needed in multilateral agencies and development banks the UK is a member of, and funds?". This would include the World Bank, which finances factory farms,[1][2] so could this consultation be a way to push it away from doing that, via the UK government? Are any organisations planning on submitting responses? If so, should there be an effort to co-ordinate more responses on this?

1. ^ "Why the World Bank Must Stop Funding Factory Farms", 30 Apr 2024, https://www.worldanimalprotection.us/latest/blogs/why-the-world-bank-must-stop-funding-factory-farms/
2. ^ "The World Bank has a factory-farm climate problem", 20 Nov 2024, https://grist.org/food-and-agriculture/world-bank-development-banks-factory-farm-climate-industrial-agriculture/
I sometimes think of this idea and haven't found anyone mentioning it with a quick AI search: a tax on suffering. (EDIT: there's a paper on this, but specific to animal welfare, that was shared on the forum earlier this year.)

A suffering tax would function as a Pigouvian tax on negative externalities — specifically, the suffering imposed on sentient beings. The core logic: activities that cause suffering create costs not borne by the actor, so taxation internalizes these costs and incentivizes reduction. This differs from existing approaches (animal welfare regulations, meat taxes) by:

* Making suffering itself the tax base rather than proxies like carbon emissions or product type
* Creating a unified framework across different contexts (factory farming, research, entertainment, etc.)
* Explicitly quantifying and pricing suffering

The main problems are measurement and administration. I would imagine an institute would be tasked with producing guidelines and a calculation model, which could become pretty complex. Actually administering it would also be very hard, and there should be a threshold beneath which no tax is required, because it wouldn't be worth the overhead. I would imagine an initial version wouldn't go "full EA" right away by taking invertebrates into account. It should start with a narrow scope, but with the infrastructure for moral circle expansion. (A rough sketch of the calculation layer is below, after the list.)

It's obviously more a theoretical exercise than a practical near-term proposal, but here are a couple of considerations:

* It's hard to oppose: it's easier to argue that carbon isn't important or that animals don't suffer than to oppose direct taxation of suffering.
* It's relatively robust in the long term: it can incorporate new scientific and philosophical insights on wild animal welfare, non-vertebrate sentience, digital sentience, etc.
* It's scale-sensitive.
* It focuses the discussion on what matters: who suffers, and how much?
* It incentivizes the private sector to find ways to reduce suffering.
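To make the calculation layer concrete, here is a minimal sketch of how the tax might be computed, assuming the (hypothetical) institute publishes per-activity suffering scores and a de minimis threshold. All names, rates, and numbers are illustrative assumptions, not part of any existing proposal.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    suffering_units: float  # quantified annual suffering, per the institute's (hypothetical) model

def suffering_tax(activity: Activity,
                  rate_per_unit: float = 2.0,       # assumed tax rate in currency per unit of suffering
                  de_minimis_units: float = 100.0   # threshold below which collection isn't worth the overhead
                  ) -> float:
    """Return the annual tax owed for one activity; zero if below the threshold."""
    if activity.suffering_units < de_minimis_units:
        return 0.0
    return activity.suffering_units * rate_per_unit

# Example: a large operation pays in proportion to its quantified suffering,
# while a small one falls under the de minimis threshold and owes nothing.
print(suffering_tax(Activity("large broiler operation", 50_000.0)))  # 100000.0
print(suffering_tax(Activity("small backyard flock", 40.0)))         # 0.0
```

The point of the sketch is that the hard part is not the arithmetic but the scoring model behind `suffering_units`, which is where moral circle expansion (invertebrates, digital minds, etc.) could later be incorporated without changing the tax machinery itself.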
AI governance could be much more relevant in the EU if the EU were willing to regulate ASML. Tell ASML they can only service compliant semiconductor foundries, where a "compliant semiconductor foundry" is defined as a foundry which only allows its chips to be used by compliant AI companies. I think this is a really promising path for slower, more responsible AI development globally. The EU is known for its cautious approach to regulation, and many EAs believe that a cautious, risk-averse approach to AI development is appropriate. Yet EU regulations are often viewed as less important, since major AI firms are mostly outside the EU. However, ASML is located in the EU and serves as a chokepoint for the entire AI industry. Regulating ASML addresses the standard complaint that "AI firms will simply relocate to the most permissive jurisdiction". Advocating this path could be a high-leverage way to make global AI development more responsible without the need for an international treaty.
Horizon Institute for Public Service is not x-risk-pilled

Someone saw my comment and reached out to say it would be useful for me to make a quick take/post highlighting this: many people in the space have not yet realized that Horizon people are not x-risk-pilled. (Edit: some people reached out to me to say that they've had different experiences with a minority of Horizon people.)
EU opportunities for early-career EAs: quick overview from someone who applied broadly

I applied to several EU entry programmes to test the waters, and I wanted to share what worked, what didn't, and what I'm still uncertain about, hoping to get some insights.

Quick note: I'm a nurse, currently finishing a Master of Public Health, and trying to contribute as best I can to reducing biological risks. My specialisation is in Governance and Leadership in European Public Health, which explains my interest in EU career paths. I don't necessarily think the EU is the best option for everyone. I just happen to be exploring it seriously at the moment and wanted to share what I've learned in case it's useful to others.

⌨️ What I applied to & how it went

* Blue Book traineeship – got it (starting October at HERA.04, Emergency Office of DG HERA)
* European Committee of the Regions traineeship – rejected in pre-selection
* European Economic & Social Committee traineeship – same
* Eurofound traineeship – no response
* EMA traineeship (2 applications: Training Content and Vaccine Outreach) – no response
* Center for Democracy & Technology internship – no response
* Schuman traineeship (Parliament) – no response
* EFSA traineeship – interview but no feedback (I indicated HERA preference, so not surprised)

If anyone needed a reminder: rejection is normal and to be expected, not a sign of your inadequacy. It only takes one "yes."

📄 Key EA Forum posts that informed and inspired me

* "EAs interested in EU policy: Consider applying for the European Commission's Blue Book Traineeship"
* "What I learned from a week in the EU policy bubble" – excellent perspective on the EU policymaking environment

🔍 Where to find EU traineeships

All together here: 🔗 https://eu-careers.europa.eu/en/job-opportunities/traineeships?institution=All
Includes Blue Book, Schuman, and agency-specific roles (EMA, EFSA, ECDC...).

Traineeships are just traineeships: don't underestimate what
Update (January 28): Marco Rubio has now issued a temporary waiver for "humanitarian programs that provide life-saving medicine, medical services, food, shelter and subsistence assistance."[1]

PEPFAR's funding was recently paused as a result of the recent executive order on foreign aid.[2] (It was previously reauthorized until March 25, 2025.[3]) If not exempted, this would pause PEPFAR's work for three months, effective immediately. Marco Rubio has issued waivers for some forms of aid, including emergency food aid, and has the authority to issue a similar waiver for PEPFAR, allowing it to resume work immediately.[4] Rubio has previously expressed (relatively generic) positive sentiments about PEPFAR on Twitter,[5] and I don't have specific reason to think he's opposed to PEPFAR, as opposed to simply not caring strongly enough to give it a waiver without anyone encouraging him to.

I think it is worth considering calling your representatives to suggest that they encourage Rubio to give PEPFAR a waiver, similarly to the waiver he provided to programs giving emergency food aid. I have a lot of uncertainty here — in particular, I'm not sure whether this is likely to persuade Rubio — but I think it is fairly unlikely to make things actively worse. I think the argument in favor of calling is likely stronger for people who are represented by Republicans in Congress; I expect Rubio would care much more about pressure from his own party than about pressure from the Democrats.

1. ^ https://apnews.com/article/trump-foreign-assistance-freeze-684ff394662986eb38e0c84d3e73350b
2. ^ My primary source for this quick take is Kelsey Piper's Twitter thread, as well as the Tweets it quotes and the articles it and the quoted Tweet link to. For a brief discussion of what PEPFAR is, see my previous Quick Take.
3. ^ https://www.kff.org/policy-watch/pepfars-short-term-reauthorization-sets-an-uncertain-course-for-its-long-term-future/
4. ^ htt
The book "Careless People" starts as a critique of Facebook — a key EA funding source — and unexpectedly lands on AI safety, x-risk, and global institutional failure.

I just finished Sarah Wynn-Williams' recently published book. I had planned to post earlier — mainly about EA's funding sources — but after reading the surprising epilogue, I now think both the book and the author might deserve even broader attention within EA and longtermist circles.

1. The harms associated with the origins of our funding

The early chapters examine the psychology and incentives behind extreme tech wealth — especially at Facebook/Meta. That made me reflect on EA's deep reliance (although it's unclear how much, as OllieBase helpfully pointed out after I first published this Quick Take) on money that ultimately came from:

* harms to adolescent mental health,
* cooperation with authoritarian regimes,
* and the erosion of democracy, even in the US and Europe.

These issues are not new (they weren't to me), but the book's specifics and firsthand insights reveal a shocking level of disregard for social responsibility — more than I thought possible from such a valuable and influential company.

To be clear: I don't think Dustin Moskovitz reflects the culture Wynn-Williams critiques. He left Facebook early and seems unusually serious about ethics. But the systems that generated that wealth, and that shaped the broader tech landscape, could still matter. Especially post-FTX, it feels important to stay aware of where our money comes from. Not out of guilt or purity, but because if you don't occasionally check your blind spot, you might cause damage.

2. Ongoing risk from the same culture

Meta is now a major player in the frontier AI race — aggressively releasing open-weight models with seemingly limited concern for cybersecurity, governance, or global risk. Some of the same dynamics described in the book — greed, recklessness, detachment — could well still be at play. And it would not be comple
I'd be excited to see 1-2 opportunistic EA-rationalist types looking into where marginal deregulation is a bottleneck to progress on x-risk/GHW, circulating 1-pagers among experts in these areas, and then pushing the ideas to DOGE/Mercatus/Executive Branch. I'm thinking of things like clinical trial requirements for vaccines, UV light, anti-trust issues facing companies collaborating on safety and security, and maybe housing (though I'm not sure which areas are bottlenecked by federal action). For most of these there's downside risk if the message is low fidelity, the issue becomes polarized, or priorities are poorly set, hence the need to collaborate with experts. I doubt there's that much useful stuff to be done here, but marginal deregulation looks very easy right now, so it seems worth striking while the iron is hot.