For Inkhaven, I wrote 30 posts in 30 days. Most of them are not particularly related to EA, though a few of them are. I recently wrote some reflections on the experience. @Vasco Grilo🔸 thought it might be a good idea to share them on the EA Forum; I don't want to be too self-promotional, so I'm splitting the difference and posting just a shortform link here:

https://linch.substack.com/p/30-posts-in-30-days 

The most EA-relevant posts are probably:

https://inchpin.substack.com/p/skip-phase-3

https://inchpin.substack.com/p/aging-has-no-root-cause

https://inchpin.substack.com/p/legible-ai-safety-problems-that-dont 


There are a number of implicit concepts in my head that seem so obvious that I don't even bother verbalizing them. At least, not until it's brought to my attention that other people don't share these concepts.

None of them felt like big revelations at the time I learned them, just formalizations of something extremely obvious. And yet other people don't seem to have those intuitions, so perhaps they're pretty non-obvious in reality.

Here’s a short, non-exhaustive list:

  • Intermediate Value Theorem
  • Net Present Value
  • Differentiable functions are locally linear
  • Theory of mind
  • Grice’s maxims

If you have not heard of any of these ideas before, I highly recommend you look them up! Most *likely*, they will seem obvious to you. You might already know them by a different name, or they may already be integrated into your worldview without ever having been given a definitive name.

However, many people appear to lack some of these concepts, and it’s possible you’re one of them.

As a test: for each idea in the list above, can you think of a nontrivial real example of a dispute where one or both parties in an intellectual disagreement likely failed to model that concept? If you can't, you might be missing something about that idea! (A minimal sketch of the first two items is below.)
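To make the first two items on the list concrete, here is a minimal Python sketch; the function names, tolerance, and example numbers are my own illustrations rather than anything from the list itself.

```python
# Two of the "obvious" concepts above, made concrete.

def bisect_root(f, lo, hi, tol=1e-9):
    """Intermediate Value Theorem in action: if a continuous f changes sign
    on [lo, hi], some root must lie in between, and bisection will find it."""
    assert f(lo) * f(hi) <= 0, "f must change sign on [lo, hi]"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # the sign change is in the left half
        else:
            lo = mid  # the sign change is in the right half
    return (lo + hi) / 2

def net_present_value(rate, cashflows):
    """Net Present Value: cashflows[t] arrives t years from now and is
    discounted back to today at the given annual rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# f(1) < 0 and f(2) > 0, so the theorem guarantees a root between 1 and 2.
print(bisect_root(lambda x: x**3 - x - 2, 1, 2))   # ~1.5214

# Pay 100 today, receive 60 at the end of each of the next two years, 5% discount rate.
print(net_present_value(0.05, [-100, 60, 60]))     # ~11.56
```

The same intuitions do the work in informal settings: the IVT is why "it was above the threshold before and below it now, so it crossed the threshold at some point" is valid reasoning for continuous quantities, and NPV is why a dollar next year is worth less than a dollar today whenever the discount rate is positive.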

Thanks, I find the polls to be much stronger evidence than the other things you've said.

My overall objection/argument is that you appear to selectively present data points that support one side, and selectively dismiss data points that support the opposite view. This makes your bottom-line conclusion pretty suspicious.

I also think the rationalist community overreached, and that their epistemics and speed in early COVID were worse than those of, say, internet people, government officials, and perhaps even the general public in Taiwan. But I don't find the case that they were slower than Western officials or the general public in either the US or Europe credible, and your evidence here does not update me much.

Why does this not apply to your original point citing a single NYT article?

See, e.g., traviswfisher's prediction on Jan 24:

https://x.com/metaculus/status/1248966351508692992 

Or this post on this very forum from Jan 26:

https://forum.effectivealtruism.org/posts/g2F5BBfhTNESR5PJJ/concerning-the-recent-2019-novel-coronavirus-outbreak 

I wrote this comment on Jan 27, indicating that it wasn't just a few people who were worried at the time. I think most "normal" people weren't tracking COVID in January.

The thing to realize (and that people easily forget) is that everything was really confusing, and there was just a ton of contentious debate during the early months. So while there was apparently a fairly alarmed NYT report in early Feb, there were also many other reports in February that were less alarmed, many bad forecasts, etc.

I wrote a short intro to stealth (the radar evasion kind). I was irritated by how bad existing online introductions are, so I wrote my own!

I'm not going to pretend it has direct EA implications. But one thing I've updated towards more in the last few years is how surprisingly limited and inefficient the information environment is: obvious concepts that humanity has known about for decades or centuries don't have clear explanations online, obvious and very important trends have very few people drawing attention to them, you can just write the best book review of a popular book that's been around for decades, and so on.

I suppose one obvious explanation here is that most people who can write stuff like this have more important and/or interesting things to do with their time. Which is true, but also kind of sad.

"presupposes that EAs are wrong, or at least, merely luckily right"

Right; to be clear, I'm far from certain that the stereotypical "EA view" is right here.

I guess what I was really saying is that "conditional on a sociological explanation being appropriate, I don't think it's as LW-driven as Yarrow thinks", although LW is undoubtedly important.

Sure, that makes a lot of sense! I was mostly just using your comment to riff on a related concept.

I think reality is often complicated and confusing, and it's hard to separate contingent from inevitable stories for why people believe what they believe. But I think the correct view is that EAs' beliefs about AGI probability and risk (within an order of magnitude or so) are mostly not contingent (as of the year 2025), even if they turn out to be ultimately wrong.

The Google ads example was the best one I could think of to illustrate this. I'm far from certain that Google's decision to use ads was actually the best source of long-term revenue (never mind being morally good lol). But it still seemed like, given the internet as we understand it, it was implausible that Google's reliance on ads was counterfactually due to their specific acquisitions.

Similarly, even if EAs had ignored AI for some reason, and had never interacted with LW or Bostrom, it's implausible that, as of 2025, people who are concerned with ambitious, large-scale altruistic impact (and have other epistemic, cultural, and maybe demographic properties characteristic of the movement) would not think of AI as a big deal. AI is just a big thing in the world that's growing fast. Anybody capable of reading graphs can see that.

That said, specific micro-level beliefs (and maybe macro-level ones) within EA and AI risk might be different without influence from either LW or the Oxford crowd. For example, there might be a stronger accelerationist arm. Alternatively, people might be more queasy about the closeness with the major AI companies, and there might be a stronger and better-funded contingent of folks interested in public messaging on pausing or stopping AI. And in general, if the movement hadn't "woken up" to AI concerns at all pre-ChatGPT, I think we'd be in a more confused spot.

Eh, I think the main reason EAs believe AGI stuff is reasonably likely is that this opinion is correct, given the best available evidence.[1]

Having a genealogical explanation here is sort of answering the question on the wrong meta-level, like giving a historical explanation for "why do evolutionists believe in genes" or telling a touching story about somebody's pet pig for "why do EAs care more about farmed animal welfare than tree welfare." 

Or, upon hearing "why does Google use ads instead of subscriptions?", answering with the history of their DoubleClick acquisition. That history is real, but it's downstream of the actual explanation: the economics of internet search heavily favor ad-supported models regardless of the specific path any company took. The genealogy is epiphenomenal.

The historical explanations are thus mildly interesting, but they conflate different levels of "why."

EDIT: Man, I'm worried my comment will be read as a soldier-mindset thing that only makes sense if you presume the "AGI likely soon" view is already correct. That would not improve the conversation. Please upvote it only if a version of you that's neutral on the object-level question would also upvote this comment.

  1. ^

    Which is a different claim from whether it's ultimately correct. Reality is hard.

  • Near-term AGI is highly unlikely, much less than a 0.05% chance in the next decade.

Is this something you're willing to bet on? 
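As a rough sketch of what such a bet would imply (the stakes here are purely illustrative, not from the original exchange): at a credence of p = 0.0005, a bet against near-term AGI is fair when the expected values balance,

(1 − p) × $1 = p × X  ⇒  X = (1 − p)/p × $1 ≈ $1,999,

so someone who genuinely holds that credence should be close to indifferent about risking roughly $1,999 to win $1 on AGI not arriving within the decade.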
