Current: US government relations (energy & tech, mostly); currently in Zug, usually in DC & Texas
Former: doctoral candidate (law @ Oxford) / lecturer (humanitarian aid & human rights practice) / global operations advisor (nonprofits) / NSF research fellow (civil conflict management & peace science)
Distribution would likely be coordinated with the Defense Logistics Agency, which manages the global supply chain and distribution for DOD and other federal agencies. DLA has an extensive national distribution network and is often responsible for ensuring emergency supplies reach domestic natural disaster sites.
Recent biodefense plans included high levels of DOD action. I don't know where the current admin stands on the existing plans, but I would feel much more comfortable if Defense played a significant role.
I saw this and I agree with your main points. I will be offline for a bit due to travel, but I am happy to have a longer conversation with more nuanced responses.
Policy teams at private companies are better resourced while, as you mentioned, working on issues ranging from antitrust to privacy and child protection. I may be wrong, but the teams focused specifically on frontier AI (excluding infrastructure work) seem more balanced than the provided numbers suggest. This observation may be outdated, especially since SB-1047. You likely have a better idea of the current landscape than I do, and I’ll defer to your assessment.
Regarding “conflict framing” - I should have phrased this differently. I did not mean the policy conflicts that come up when a new or potentially consequential industry faces government intervention. I meant a situation in which groups and individuals become entrenched in direct conflict on almost all issues, regardless of the consequences. A recent non-AI example would be philanthropically funded anti-fossil fuel advocates fighting carbon capture projects despite IRA funding and general support from climate change-focused groups. The conflict has moved beyond specific policy proposals or even climate goals and has become a purity test that seems impossible to overcome through negotiation. That is a situation I would not want to see, and I am glad it is not the case here.
In 2024, several lobbyists from the same firm simultaneously represented both OpenAI and the Center for AI Safety Action Fund. I am not suggesting any conflict of interest on their part. However, I don't think "299 corporate lobbyists vs. scrappy AI safety groups" is an effective framing, given that at least some of the money is flowing to similar places.
I wouldn't compare external lobbyists to full-time advocacy staff because (1) external lobbyists and lawyers typically cover many clients and are unlikely to be particularly committed to a single issue like AI safety, and (2) firms often register anyone who does outreach on a project, regardless of whether they meet federal lobbying disclosure thresholds. The field is also pretty congenial in general; Amazon lobbyists have helped me with EA-related side projects without payment because they were being nice.
This isn't to say that advocacy couldn't absorb more funding. But the conflict framing doesn't seem to represent the facts on the ground, at least if organizations want funding to hire mainstream government relations people, who rarely see it that way. Upskilling people already committed to AI safety would be different.
You run an AI safety org full time and have a better idea of the field. I'm just throwing in my two cents re: representation disparities.
There are a lot of interesting global development and technology-related angles that could justify energy-related work. Reliable, affordable energy can spur economic growth and increase quality of life in developing economies. I’m linking a very surface-level McKinsey report on the historical link between energy demand and GDP for basic context, but I’m happy to have a longer chat.
Existing cause areas like South Asian Air Quality could benefit from low-hanging fruit: scalable alternatives to India's current reliance on coal. For example, India is already a major importer of LPG (which it subsidizes for home kitchen use) and, more recently, LNG. The IEA expects India’s gas imports to more than double by 2030 to support its predicted economic growth. This is in addition to the existing ~46% of domestically produced energy coming from renewable sources.
Diverting philanthropic resources to US energy policy doesn’t make much sense to me on the surface, but I’m open to being proven wrong if you have more information behind the argument.
Edit: My non-tech energy work is a ~neutral earning-to-give situation. The work is interesting, reliable, and I enjoy it. I wouldn't argue that it has a similar impact to direct work.
Sharing some context about where we are, since coverage has really blown up since the provision was added. I am not working on this specific issue, so I can comment here:
For people who are going to reach out, I would focus on substantive concerns about the provisions, which may be effective even with states' rights-focused conservatives. Getting daily calls about parliamentary procedure may or may not change any minds. I promise even the proponents are aware of those hurdles.
I'm more optimistic that people who showed up because they wanted to do the most good still believe in it. Even time spent with "EA-adjacent-adjacent-etc." people is refreshing compared to most of my work in policy, including on behalf of EA organizations.
Community groups, EAGs, and other events still create the space for first principles discussions that you're talking about. As far as I know, those spaces are growing. Even if they weren't, I can't remember a GCR-focused event that served non-vegan food, including those without formal EA affiliations. It's a small gesture, but I think it's relevant to some of the points made in this post.
I understand picking on BlueDot because they started as free courses designed for EAs who wanted to learn more about specific cause areas (AI safety, biosecurity, and alternative proteins, if I remember correctly). They are now much larger, focused exclusively on AI, and have a target audience that goes beyond EA and may not know much about GCRs coming into the course. The tradeoffs they make to cast a wider net are between them and their funders, and do not necessarily speak to the values of the people running the course.
Unfortunately, there was an effective effort to tie AI safety advocacy organizations to their funders in a way that increased risk to any high-profile donors who supported federal policy work. I don't know if this impacted any of your funders' decisions, but the related media coverage (e.g., Politico) could have been cause for concern. Small-dollar donations might help balance this.
It seems very likely that the federal government will attempt to override any state AI regulation that gets passed in the next year. Jason put together a strong, experienced team that can navigate the quickly shifting terrain in Washington. Dissolving immediately due to lack of funding would be an unfortunate outcome at a critical time.
Context: I work in government relations on related issues and met Jason at an EAG in 2024. I have not worked with CAIP or pushed for their model legislation, but I respect the team.
If you want a more detailed take on these issues than a Guardian article can provide, I would attend the annual Space Ecology Workshop. It's a free event for academic and industry experts to discuss the future of human space exploration and settlement. The team is really nice and might be open to adding a session on the welfare / ethics of commercial farming in space.
Researchers at the Space Analog for the Moon & Mars at Biosphere 2 would also probably have some interesting takes. Most of their relevant work has focused on plant ecology, but questions about potential alternative food sources have definitely come up over the years. The project in this article is one of many different research pathways on human nutrition in space, most of which won't end up happening.
I used to volunteer at Biosphere 2 when I lived in Tucson and like to stay in the loop, but this is not my current field at all.
I think it would be great to have some materials adapted for policy audiences if it isn't too far out of your team's scope. There is a lot of demand for this kind of practical, implementation-focused work. Just this week, there were multiple US congressional hearings and private events on the future of AI in the US, with a specific focus on adapting to a world with advanced artificial intelligence.
As an example, the Special Competitive Studies Project hosted the AI+ summit series in DC and launched a free course on "Integrating Artificial Intelligence into Public Sector Missions". These have been very well received and attended by stakeholders across agencies and non-governmental entities. While SCSP has done more than any other nonprofit I am aware of to prepare government stakeholders to adapt, there is still plenty of room for other expert takes.
Several people working on climate issues out of World Bank HQ are involved in the local EA community. It may be worth a conversation with them about feasibility and the bureaucratic pathways / challenges of shifting strategy on major funding areas. Your second footnote focused on climate impacts, so I assume you're not opposed to arguments from that perspective.