Hi, I’m Florian. I am enthusiastic about working on large-scale problems that require me to learn new skills and extend my knowledge into new fields and subtopics. My main interests are climate change, existential risks, feminism, history, hydrology and food security.
Yeah, I share that worry. And from experience it is really hard to get funding for nuclear work from both philanthropy and classic academic funding sources. My last grant proposal about nuclear was rejected with the explanation that we already know everything there is to know about nuclear winter, so there is no need to spend money on research there.
I meant specifically mentioning that you don't really fund global catastrophic risk work on climate change, ecological collapse, near-Earth objects (e.g., asteroids, comets), nuclear weapons, and supervolcanic eruptions. To my knowledge, such work has not been funded for several years now (please correct me if this is wrong). And since you mentioned that this status quo will continue, I don't really see a reason to expect that the LTFF will start funding such work in the foreseeable future.
Thanks for wanting to check whether there is a difference between the distribution of public grants and the distribution of applications. I would be curious to hear the results.
Thanks for the clarification. In that case I think it would be helpful to state on the website that the LTFF won't be funding non-AI/biosecurity GCR work for the foreseeable future. Otherwise you will just attract applications that you would not fund anyway, which results in unnecessary effort for both applicants and reviewers.
Now that this paper is finally published, it feels a bit like a requiem for the field. Every non-AI GCR researcher I have talked to in the last year or so is quite concerned about the future of the field. A large chunk of all GCR funding now goes to AI, leaving existing GCR orgs without any money. For example, ALLFED is having to cut a large part of its programs (https://forum.effectivealtruism.org/posts/K7hPmcaf2xEZ6F4kR/allfed-emergency-appeal-help-us-raise-usd800-000-to-avoid-1), even though pretty much everyone seems to agree that ALLFED is doing good work and should continue to exist.
I think funders like Open Phil or the Survival and Flourishing Fund should strongly consider putting more money into non-AI GCR research again. I get that many people think AI risk is very imminent, but I don't think this justifies leaving the rest of GCR research to die on the vine. It would be quite a bad outcome if, in five years, AI risk had not materialized but most of the non-AI GCR orgs had ceased to exist because all of the funding dried up.
Regarding the satellites, my understanding is that they would be disrupted in several ways:
What ultimately happens depends a lot on the orbit and on how hardened the satellite is, but I haven't seen research that tries to assess this in detail (though I also haven't looked very hard for this particular thing).
About the airplanes: yeah, this might be an option, though I think the paper that mentioned it said something along the lines of "it is quite hard to predict where in the airplane's path the radiation will increase, and the dose can be received quickly, which makes it hard to avoid".