Matt Putz

815 karma · Joined Brussels, Belgium

Comments (67)

Interesting post!

I expect that a significant dynamic in this world would be major investment in attempting to recover knowledge from the previous civilization. That's because:

  • Intellectually, it seems fascinating.
    • If a previous civilization more advanced than ours had existed and then collapsed, I imagine today’s historians would be hugely interested in that.
  • More importantly, there would be a huge economic incentive to understand the previous civilization:
    • Many of the richest and most successful people today are those who anticipated (the consequences of) important technological developments: e.g. people who specialized or invested in AI a decade ago, people who anticipated that internet commerce would be a huge deal, people who anticipated that software development would be very valuable, etc.
    • Similarly, there’s a huge edge to be had in science from knowing which domains are promising at which time.

This is important because it’s perhaps the main argument for the Optimistic view regarding whether a post-setback society would retain alignment knowledge.

  • You name various arguments for the Pessimistic view. Those seem reasonable to me, but I think they do have to be weighed against the fact that people would be trying pretty hard to recover a lot of knowledge about today's world, such that substantial difficulties could very plausibly be overcome (e.g. deciphering old hard drives).
    • This is especially true once that new civilization has technology advanced enough to be at the cusp of AGI.
  • I don’t have a strong view on where that leaves me overall, but intuitively, I probably feel more optimistic than you seem to be.

Separately, I think it’s worth noting that regardless of whether historians manage to recover technical knowledge about alignment, it would likely be obvious very early on that the previous civilization reached something like AGI. This would radically change the governance landscape relative to today’s world, and would plausibly make the problem easier the second time around.  

Finally, I also wanted to note that it seems intuitively likely to me that the effect on trajectory change (described in the Appendix) is more important than the effect on existential risk.


Aside: another consequence, if historians are able to recover a lot of information, is that economic growth in the rerun might be substantially faster than it is today. Scientists, entrepreneurs, and investors could learn a ton about which pursuits are most promising at which points. In particular, AI and deep learning investment might happen earlier. This might be good (e.g. because faster growth means there's generally more surplus around the crucial period and less zero-sum mentality) or bad (e.g. because AI progress is already scary fast today, and it might be even faster in this world, since the payoff would be much clearer to everyone).

I work at Open Philanthropy, and I recently let Gavin know that Open Phil is planning to recommend a grant of $5k to Arb for the second project on your list: Overview of AI Safety in 2024 (they had already raised ~$10k by the time we came across it). Thanks for writing this post, Austin — it brought the funding opportunity to our attention.

Like other commenters on Manifund, I believe this kind of overview is a valuable reference for the field, especially for newcomers. 

I wanted to flag that this project would have been eligible for our RFP for work that builds capacity to address risks from transformative AI. I worry that not all potential applicants are aware of the RFP, so I'll take this opportunity to mention that its scope is quite broad, including funding for:

  • Training and mentorship programs
  • Events
  • Groups
  • Resources, media, and communications
  • Almost any other type of project that builds capacity to address risks from advanced AI (in the sense of increasing the number of careers devoted to these problems, supporting people doing this work, and sharing knowledge related to this work).

More details at the link above. People might also find this page helpful; it lists all currently open application programs at Open Phil.

Can you say more about the 20% per year discount rate for community building? 

In particular, is the figure meant to refer to time or money? I.e. does it mean that

  1. you would trade at most 0.8 marginal hours spent on community building in 2024 for 1 marginal hour in 2023?
  2. you would trade at most 0.8 marginal dollars spent on community building in 2024 for 1 marginal dollar spent on community building in 2023? 
  3. something else? (possibly not referring to marginal resources?)

(For money, a 20% discount rate seems very high to me, barring very short timelines or something similar. It would presumably imply that you think Open Phil should be spending much more on community building now, until the marginal dollar no longer has such high returns?)
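To make interpretation 1 concrete, here's a minimal sketch of the arithmetic, assuming (this is my reading, not something stated in the post) that the figure means a constant multiplicative per-year discount:

$$V(t) = (1 - 0.20)^t$$

where $V(t)$ is the value of a marginal hour of community building $t$ years after 2023, relative to a marginal hour in 2023. So $V(1) = 0.8$ and $V(2) = 0.64$; under interpretation 2, the same form would apply to marginal dollars instead.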

Minor nitpick: 

I would've found it more helpful to see Haydn's and Esben's judgments listed separately.

*Need* is a very strong word, so I'm voting no. Might sometimes be marginally advantageous, though.

Thanks for writing this up! I was gonna apply anyway, but a post like this might have gotten me to apply last year (which I didn't, but which would've been smart). It also contained some useful information that I didn't know yet!

This is so useful! I love this kind of post and will buy many things from this one in particular.

Probably a very naive question, but why can't you just take a lot of DHA **and** a lot of EPA to get both supplements' benefits? Especially if your diet means you're likely deficient in both (which is true of veganism? vegetarianism?).

Assuming the Reddit folk wisdom about DHA inducing depression is wrong (which it might not be; I don't want to dismiss it), I don't understand from the rest of what you wrote why this wouldn't work. Why is there a trade-off?
