My biggest takeaway from the Essays on Longtermism anthology is that irrecoverable collapse is a serious concern and we should not assume that humanity will rebound from a global catastrophe. The two essays that convinced me of this were "Depopulation and Longtermism" by Michael Geruso and Dean Spears and "Is Extinction Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models" by Gustav Alexandrie and Maya Eden. These essays argue that human population does not automatically or necessarily grow in the rapid, exponential way we became accustomed to over the last few hundred years.
In the discourse on existential risk, it's often assumed that even if only 1% of the human population survives a global disaster, eventually humanity will rebound. On this assumption, while extinction reduces future lives to zero, a disaster that kills 99% of the human population only reduces the eventual number of future lives from some astronomically large figure to some modestly lower astronomically large figure. This idea goes back to Derek Parfit, who (as far as I know) was the first analytic philosopher to discuss human extinction from a population ethics standpoint. Nick Bostrom, who is better known for popularizing the topic of existential risk, has cited Parfit as an influence. So, this assumption has been with us from the beginning.
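To see why this assumption is tempting, consider the arithmetic of exponential growth. Here is a minimal sketch: the 1% survival figure comes from the scenario above, while the 1%-per-year growth rate is an illustrative assumption of mine, roughly the 20th-century average (pre-industrial rates were far lower):

```python
import math

# Hypothetical rebound arithmetic. The 1%-survival figure comes from the
# scenario above; the 1%/year growth rate is an illustrative assumption,
# roughly the 20th-century average (pre-industrial rates were far lower).
pre_disaster_population = 8e9
survivors = 0.01 * pre_disaster_population
growth_rate = 0.01

# Years to regrow to the pre-disaster level: solve survivors * (1 + r)**t = target.
years_to_rebound = math.log(pre_disaster_population / survivors) / math.log(1 + growth_rate)
print(f"Years to rebound: {years_to_rebound:.0f}")  # ~463 years
```

On this optimistic model, a 99% die-off sets humanity back only a few centuries, which is why it can look almost negligible on cosmic timescales. The essays above challenge exactly the growth assumption this calculation bakes in.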
Irrecoverable collapse, as I would define it, means that population never rebounds to pre-collapse levels, and that science, technology, and industry never recover to pre-collapse levels either. So, digital minds and other futuristic fixes don't get us out of the jam. While the two aforementioned papers are primarily about population, the depopulation paper by Geruso and Spears also persuasively argues that technological progress depends on population. This spells trouble for any scenario in which a global catastrophe kills a large percentage of living human beings.[1]
While a small global population of humans might live on Earth for a very long time, the overall number of future lives would be far smaller than if science and technology continued to progress, the global economy continued to grow, and the global population continued to grow or at least stayed roughly steady. If irrecoverable collapse reduces the number of future lives by something like 99.9%, we should be concerned about it for the same reason we're concerned about extinction.[2]
For several kinds of existential threat, such as asteroids, pandemics, and nuclear war, the chance of an event that kills a devastating percentage of the world's population, but not 100%, seems significantly higher than the chance of a full-on extinction event. If irrecoverable collapse scenarios are almost as bad as extinction events, then their putatively greater likelihood probably matters a lot!
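To make the comparison concrete, here is a toy expected-value calculation. Every number in it is a placeholder I chose for the sketch, not an estimate from the essays:

```python
# Toy expected-value comparison: extinction vs. irrecoverable collapse.
# All inputs are illustrative placeholders, not estimates from the literature.
future_lives = 1e16             # assumed future lives if neither disaster occurs
p_extinction = 0.001            # hypothetical probability of outright extinction
p_collapse = 0.01               # hypothetical (10x higher) probability of collapse
collapse_loss_fraction = 0.999  # collapse forecloses 99.9% of future lives

loss_extinction = p_extinction * future_lives                       # 1.0e13
loss_collapse = p_collapse * collapse_loss_fraction * future_lives  # ~1.0e14

print(f"Expected lives lost to extinction: {loss_extinction:.2e}")
print(f"Expected lives lost to collapse:   {loss_collapse:.2e}")
```

With these placeholder numbers the collapse term is roughly ten times larger, and the qualitative conclusion holds for any inputs where collapse is both much likelier than extinction and nearly as costly.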
If irrecoverable collapse reduces the number of future lives by almost as much as extinction, and if irrecoverable collapse scenarios are more likely than extinction scenarios, then it may be more important to try to prevent irrecoverable collapse than extinction. In practice, trying to prevent extinction may look much the same as trying to prevent sub-extinction disasters. For example, pandemic prevention probably looks similar whether you're trying to prevent another pandemic like COVID-19, one 10x worse, or one 100x worse. However, I can think of two areas where this idea about irrecoverable collapse might be practically relevant:
- It might become more important to detect smaller asteroids using space telescopes like NASA's planned NEO Surveyor. It's plausible that some asteroids are too small to cause human extinction but large enough to cause irrecoverable collapse, especially if they hit a densely populated part of Earth. (Similar reasoning might apply to other threats, like large volcanic eruptions.)
- It may be worth thinking more about ways to reboot civilization after a collapse. There has been some discussion in the existential risk literature of long-term shelters or refuges, which could be a relevant intervention. See, for example, Nick Beckstead's excellent paper on the topic. However, Beckstead's paper seems to rest on the assumption I'm now calling dubious: that if even a small number of people survive, that's good enough.
 
One topic not discussed in Essays on Longtermism is humanity's one-time endowment of easily accessible fossil fuels. Those easily accessible reserves have largely been used up, and if industrial civilization collapsed, it could not be rebooted along the same coal-powered pathway it originally took. A hopeful idea I once heard offered in this context is that charcoal, which is made from wood, could perhaps substitute for coal. I don't know whether that is feasible. This is a worrying problem, and if there are any good ideas for how to solve it, I would love to hear them.
There are other considerations. For example, if humanity regressed to a pre-scientific stage, are we confident that a Scientific Revolution would eventually happen again? Was the Scientific Revolution inevitable, given enough time, or were we lucky that it happened?
Let's say we want to juice the odds. Could we store scientific knowledge over the very long term, possibly carved in stone or engraved in nickel, in a way that would remain understandable to people for centuries after a collapse? How might we encourage future people to care about this knowledge? Would people be curious about it? How could we make sure they would find it?
Not much research has been done into so-called "doomsday archives". To clarify: there has been some research on how to physically store data for a very long time, with proofs of concept that store data in dehydrated DNA or that use lasers to encode data in quartz glass or diamond. However, very little research has been done into how to make information accessible and understandable to a low-tech society that has drifted culturally and linguistically away from the creators of the archive in the centuries following a global disaster.
If irrecoverable collapse is indeed as important as I have suggested in this essay, then a few recommendations follow:
- People who are concerned about existential risks primarily because of the reduction in the number of future lives should look more broadly at mitigating potential disasters that would not cause extinction but might cause an irrecoverable collapse.
- That same class of people should look into ways a devastated civilization could recover without the easily accessible fossil fuels that human civilization had the first time around.
- Another potential research direction is doomsday archives that can preserve knowledge not only physically but also practically, in forms usable by people with limited technology and limited background knowledge.
In short, we should not assume humanity will automatically recover from a sub-extinction global catastrophe and should plan accordingly.
- ^If we were able to create digital minds, concerns about the biological human population and fertility rates would suddenly become much less pressing. However, getting to the point where we can create digital minds would require that the human population not collapse before then. 
- ^This is not a new idea. As early as 2002, Nick Bostrom defined an existential risk as: "One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." Even so, I think this idea has been under-emphasized.
