I’ve seen a few people in the LessWrong community congratulate the community on predicting or preparing for covid-19 earlier than others, but I haven’t actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it. I looked into this, and as far as I can tell, this self-congratulatory narrative is a complete myth.

Many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.

In January 2020, some stores sold out of face masks in several different cities in North America. (For example, in New York.) The oldest post on LessWrong tagged with "covid-19" is from well after this started happening. (I also searched the forum for posts containing "covid" or "coronavirus" and sorted by oldest. I couldn’t find an older post that was relevant.) The LessWrong post is written by a self-described "prepper" who strikes a cautious tone and, oddly, advises buying vitamins to boost the immune system. (This seems dubious, possibly pseudoscientific.) To me, that first post strikes the same ambivalent, cautious tone as many mainstream news articles published before it.

If you look at the covid-19 tag on LessWrong, the next post after that first one, the prepper one, is on February 5, 2020. The posts don't start to get really worried about covid until mid-to-late February.

How was the rest of the world reacting at that time? Here's some polling from late January that gives a sense of public sentiment.

Morning Consult ran a poll from January 24-26, 2020 that found 37% of Americans were very concerned about the novel coronavirus spreading in the U.S.:

An Ipsos poll of Canadians from January 27-28 found similar results:

Half (49%) of Canadians think the coronavirus poses a threat (17% very high/32% high) to the world today, while three in ten (30%) think it poses a threat (9% very high/21% high) to Canada. Fewer still think the coronavirus is a threat to their province (24%) or to themselves and their family (16%).

Here's a New York Times article from February 2, 2020, entitled "Wuhan Coronavirus Looks Increasingly Like a Pandemic, Experts Say", well before any of the worried posts on LessWrong:

The Wuhan coronavirus spreading from China is now likely to become a pandemic that circles the globe, according to many of the world’s leading infectious disease experts.

The prospect is daunting. A pandemic — an ongoing epidemic on two or more continents — may well have global consequences, despite the extraordinary travel restrictions and quarantines now imposed by China and other countries, including the United States.

The tone of the article is fairly alarmed: it notes that streets in China are deserted due to the outbreak, compares the novel coronavirus to the 1918-1920 Spanish flu, and gives expert quotes like this one:

It is “increasingly unlikely that the virus can be contained,” said Dr. Thomas R. Frieden, a former director of the Centers for Disease Control and Prevention who now runs Resolve to Save Lives, a nonprofit devoted to fighting epidemics.

The worried posts on LessWrong don't start until weeks after this article was published. On a February 25, 2020 post asking when CFAR should cancel its in-person workshop, the top answer cites the CDC's guidance at the time about covid-19. It says that CFAR's workshops "should be canceled once U.S. spread is confirmed and mitigation measures such as social distancing and school closures start to be announced." Those measures began about two to three weeks later. So, what exactly is being called early here?

CFAR is based in the San Francisco Bay Area, as are Lightcone Infrastructure and MIRI, two other organizations associated with the LessWrong community. On February 25, 2020, the city of San Francisco declared a state of emergency over covid. (Nearby, Santa Clara county, where most of what people consider as Silicon Valley is located, declared a local health emergency on February 10.) At this point in time, posts on LessWrong remain overall cautious and ambivalent.

By the time the posts on LessWrong get really, really worried, in the last few days of February and the first week of March, much of the rest of the world was reacting in the same way.

From February 14 to February 25, the S&P 500 dropped about 7.5%. Around this time, financial analysts and economists issued warnings about the global economy.

Between February 21 and February 27, Italy began its first lockdowns of areas where covid outbreaks had occurred.

On February 25, 2020, the CDC warned Americans of the possibility that "disruption to everyday life may be severe". The CDC made this bracing statement:

It's not so much a question of if this will happen anymore, but more really a question of when it will happen — and how many people in this country will have severe illness.

Another line from the CDC:

We are asking the American public to work with us to prepare with the expectation that this could be bad.

On February 26, Canada's Health Minister advised Canadians to stockpile food and medication.

The most prominent LessWrong post from late February warning people to prepare for covid came a few days later, on February 28. So, on this comparison, LessWrong was actually slightly behind the curve. (Oddly, that post insinuates that nobody else is telling people to prepare for covid yet, and congratulates itself on being ahead of the curve.)

In the beginning of March, the number of LessWrong posts tagged with covid-19 explodes, and the tone gets much more alarmed. The rest of the world was responding similarly at this time. For example, on February 29, 2020, Ohio declared a state of emergency around covid. On March 4, Governor Gavin Newsom did the same in California. The governor of Hawaii declared an emergency the same day, and over the next few days, many more states piled on.

Around the same time, the general public was becoming alarmed about covid. In the last days of February and the first days of March, many people stockpiled food and supplies. On February 29, 2020, PBS ran an article describing an example of this at a Costco in Oregon:

Worried shoppers thronged a Costco box store near Lake Oswego, emptying shelves of items including toilet paper, paper towels, bottled water, frozen berries and black beans.

“Toilet paper is golden in an apocalypse,” one Costco employee said.

Employees said the store ran out of toilet paper for the first time in its history and that it was the busiest they had ever seen, including during Christmas Eve.

A March 1, 2020 article in the Los Angeles Times reported on stores in California running out of product as shoppers stockpiled. On March 2, an article in Newsweek described the same happening in Seattle:

Speaking to Newsweek, a resident of Seattle, Jessica Seu, said: "It's like Armageddon here. It's a bit crazy here. All the stores are out of sanitizers and [disinfectant] wipes and alcohol solution. Costco is out of toilet paper and paper towels. Schools are sending emails about possible closures if things get worse."

In Canada, the public was responding the same way. Global News reported on March 3, 2020 that a Costco in Ontario ran out of bottled water, toilet paper, and paper towels, and that the situation was similar at other stores around the country. The spike in worried posts on LessWrong coincides with the wider public's reaction. (If anything, the posts on LessWrong are very slightly behind the news articles about stores being picked clean by shoppers stockpiling.)

On March 5, 2020, the cruise ship the Grand Princess made the news because it was stranded off the coast of California due to a covid outbreak on board. I remember this as being one seminal moment of awareness around covid. It was a big story. At this point, LessWrong posts are definitely in no way ahead of the curve, since everyone is talking about covid now.

On March 8, 2020, Italy put a quarter of its population under lockdown, then put the whole country on lockdown on March 10. On March 11, the World Health Organization declared covid-19 a global pandemic. (The same day, the NBA suspended the season and Tom Hanks publicly disclosed he had covid.) On March 12, Ohio closed its schools statewide. The U.S. declared a national emergency on March 13. The same day, 15 more U.S. states closed their schools. Also on the same day, Canada's Parliament shut down because of the pandemic. By now, everyone knows it's a crisis.

So, did LessWrong call covid early? I see no evidence of that. The timeline of LessWrong posts about covid follows the same timeline on which the world at large reacted to covid, increasing in alarm as journalists, experts, and governments increasingly rang the alarm bells. In some comparisons, LessWrong's response was a little bit behind.

The only curated post from this period (and the post with the third-highest karma, one of only four posts with over 100 karma) tells LessWrong users to prepare for covid three days after the CDC told Americans to prepare, and two days after Canada's Health Minister told Canadians to stockpile food and medication. It was also three days after San Francisco declared a state of emergency. When that post was published, many people were already stockpiling supplies, partly because government health officials had told them to. (The LessWrong post was originally published on a blog one day earlier and, based on a note in the text, was apparently written the day before that. But that still puts the writing of the post a day after the CDC warning and San Francisco's declaration of a state of emergency.)

Unless there is some evidence that I didn't turn up, it seems pretty clear the self-congratulatory narrative is a myth. The self-congratulation actually started in that post published on February 28, 2020, which, again, is odd given the CDC's warning three days before (on the same day that San Francisco declared a state of emergency), analysts' and economists' warnings about the global economy a bit before that, and the New York Times article warning about a probable pandemic at the beginning of the month. The post is slightly behind the curve, but it's gloating as if it's way ahead.

Looking at the overall LessWrong post history in early 2020, LessWrong seems to have been, if anything, slightly behind the New York Times, the S&P 500, the CDC, the government of San Francisco, and enough members of the general public to clear out some stores of certain products. By the time LessWrong posting reached a frenzy in the first week of March, the world was already responding — U.S. governors were declaring states of emergency, and everyone was talking about and worrying about covid.

If LessWrong had decided to declare a state of emergency on the same day that San Francisco did, or simply prominently posted the CDC’s warning from that same day, it would have been three days (if not a bit more) ahead of where it actually ended up. Why didn’t it just do that? Alternatively, if the S&P 500 dropping 7.5% in around ten days had been taken as a sufficient signal, that also would have put LessWrong three days ahead of where it was. 

I think people should be skeptical and even distrustful toward the claims of the LessWrong community, both on topics like pandemics and about its own track record and mythology. Obviously this myth is self-serving, and it was pretty easy for me to disprove in a short amount of time — so anyone who is curious can check and see that it's not true. The people in the LessWrong community who believe the community called covid early probably believe it because it's flattering. If they actually wondered whether it was true and checked the timelines, it would become pretty clear that it didn't happen.


Note: I am publishing this under a second EA Forum profile I created for community posts and other minutiae that I don’t want to distract from or clutter up my posts on my main profile. Sometimes a post is in an awkward middle ground between a quick take and a full post, and this is my imperfect solution to that.

See some previous discussion of a quick take almost identical to this post here.

Comments (8)

Back in April 2020, I asked Greg Lewis for his take on the EA community's response to COVID on the 80k podcast. I can't remember anything about what we said but folks might find it interesting. https://80000hours.org/podcast/episodes/greg-lewis-covid-19-global-catastrophic-biological-risks/#the-response-of-the-effective-altruism-community-to-covid-19-021142

Woah, this is brutal. 

Context for everybody: Gregory Lewis is a biorisk researcher with a background in medicine and public health. He describes himself as "heavily involved in Effective Altruism". (He's not a stranger here: his EA Forum account was created in 2014 and he has 21 posts and 6000 karma.)

The 80,000 Hours Podcast interview was recorded in mid-April 2020. Lewis starts off with a compliment:

If we were to give a fair accounting of all EA has done in and around this pandemic, I think this would overall end up reasonably strongly to its credit. For a few reasons. The first is that a lot of EAs I know were, excuse the term, comfortably ahead of the curve compared to most other people, especially most non-experts in recognizing this at the time: that emerging infectious disease could be a major threat to people’s health worldwide. And insofar as their responses to this were typically either going above and beyond in terms of being good citizens or trying to raise the alarm, these seem like all prosocial, good citizen things which reflect well on the community as a whole.

He also pays a compliment to a few people in the EA community who have brainstormed interesting ideas about how to respond to the pandemic and who (as of April 2020) were working on some interesting projects. But he continues (my emphasis added):

But unfortunately I’ve got more to say.

So, putting things politely, a lot of the EA discussion, activity, whatever you want to call it, has been shrouded in this miasma of obnoxious stupidity, and it’s been sufficiently aggravating for someone like me. I sort of want to consider whether I can start calling myself EA adjacent rather than EA, or find some way of distancing myself from the community as a whole. Now the thing I want to stress before I go on to explain why I feel this way is that unfortunately I’m not alone in having these sorts of reactions.

... But at least I have a few people who talk to me now, who, similar to me, have relevant knowledge, background and skills. And also, similar to me, have found this community so infuriating they need to take a break from their social media or want to rage quit the community as a whole. ... So I think there’s just a pattern whereby discussion around this has been very repulsive to people who know a lot about the subject is, I think, a course for grave concern.

That EA's approval rating seems to fall dramatically with increasing knowledge is not the pattern you typically take as a good sign from the outside view.

Lewis elaborates (my emphasis added again):

And this general sense of just playing very fast and loose is pretty frustrating. I have experienced a few times of someone recommending X, then I go into the literature, find it’s not a very good idea, then I briefly comment going, “Hey, this thing here, that seems to be mostly ignored”, then I get some pretty facile reply and I give up and go home. And that’s happened to other people as well. So I guess given all these things, it seems like bits of the EA response were somewhat less than optimal. 

And I think for ways it could have been improved were mostly in the modesty direction. So, for example, I think several EAs have independently discovered for themselves things like right censoring or imperfect ascertainment or other bits of epidemiology which inform how you, for example, assess the case fatality ratio. And that’s great, but all of that was in most textbooks and maybe it’d have saved time had those been consulted first rather than doing something else instead.

More on consulting textbooks:

But typically for most fields of human endeavor, we have a reasonably good way which is probably reasonably efficient in terms of picking up the relevant level of knowledge and expertise. Now, it’s less efficient if you just target it, if you know in advance what you want to know ahead. But unfortunately, this area tends to be one where it’s a background tacit knowledge thing. It’s hard to, as it were, rapier-like just stab all the things, in particular, facts you need. And if you miss some then it can be a bit tricky in terms of having good ideas thereafter.

What's worse than inefficiency:

The other problems are people often just having some fairly bad takes on lots of things. And it’s not always bad in terms of getting the wrong answer. I think some of the interventions do seem pretty ill-advised and could be known to be ill-advised if one had maybe done one’s homework slightly better. These are complicated topics generally: something you thought about for 30 minutes and wrote a Medium post about may not actually be really hitting the cutting edge.

An example of a bad take:

So I think President Trump at the moment is suggesting that, as it were, the cure is worse than the disease with respect to suppression. ... But suppose we’re clairvoyant and we see in two years’ time, we actually see that was right. ... I think very few people would be willing to, well, maybe a few people listening to this podcast can give Trump a lot of credit for calling it well. Because they would probably say, “Well yeah, maybe that was the right decision but he chose it for the wrong reasons or the wrong epistemic qualities”. And I sort of feel like a similar thing sort of often applies here. 

So, for example, a lot of EAs are very happy to castigate the UK government when it was more going for mitigation rather than suppression, but for reasons why, just didn’t seem to indicate they really attended to any of the relevant issues which you want to be wrestling with. And see that they got it right, but they got it right in the way that stopped clocks are right if you look at them at the right time of day. I think it’s more like an adverse rather than a positive indicator. So that’s the second thing.

On bad epistemic norms:

And the third thing is when you don’t have much knowledge of your, perhaps, limitations and you’re willing to confidently pronounce on various things. This is, I think, somewhat annoying for people like me who maybe know slightly more as I’m probably expressing from the last five minutes of ranting at you. But moreover, it doesn’t necessarily set a good model for the rest of the EA community either. Because things I thought we were about were things like, it’s really important to think things through very carefully before doing things. A lot of your actions can have unforeseen consequences. You should really carefully weigh things up and try and make sure you understand all the relevant information before making a recommendation or making a decision.

And it still feels we’re not really doing that as much as we should be. And I was sort of hoping that EA, in an environment where there’s a lot of misinformation, lots of outrage on various social media outlets, there’s also castigation of various figures, I was hoping EA could strike a different tone from all of this and be more measured, more careful and just more better I guess, roughly speaking.

More on EA criticism of the UK government:

Well, I think this is twofold. So one is, if you look at SAGE, which is the Scientific Advisory Group for Emergencies, who released what they had two weeks ago in terms of advice that they were giving the government, which is well worth a read. And my reading of it was essentially they were essentially weeks ahead of EA discourse in terms of all the considerations they should be weighing up. So obviously being worse than the expert group tasked to manage this is not a huge rap in terms of, “Well you’re doing worse than the leading experts in the country.” That’s fair enough. But they’re still overconfident in like, “Oh, don’t you guys realize that people might die if hospital services get overwhelmed, therefore your policy is wrong.” It seems like just a very facile way of looking at it.

But maybe the thing is first like, not having a very good view. The second would be being way too overconfident that you actually knew the right answer and they didn’t. So much that you’re willing to offer a diagnosis, for example, “Maybe the Chief Medical Officer doesn’t understand how case ascertainment works or something”. And it’s like this guy was a professor of public health in a past life. I think he probably has got that memo by now. And so on and so forth.

On cloth masks:

I think also the sort of ideas which I’ve seen thrown around are at least pretty dicey. So one, in particular, is the use of cloth masks; we should all be making cloth masks and wearing them.

And I’m not sure that’s false. I know the received view in EA land is that medical masks are pretty good for the general population which I’ll just about lean in favor of, although all of these things are uncertain. But cloth masks seem particularly risky insofar as if people aren’t sterilizing them regularly which you expect they won’t: a common thing about the public that you care about is actual use rather than perfect use. And you have this moist cloth pad which you repeatedly contaminate and apply to your face which may in fact increase your risk and may in fact even increase the risk of transmission. It’s mostly based on contact rather than based on direct droplet spreads. And now it’s not like lots of people were touting this. But lots on Twitter were saying this. They cite all the things. They seem not to highlight the RCT which cluster analyzed healthcare workers to medical masks, control, and cloth masks, and found cloth masks did worse than the control.

Then you would point out, per protocol, that most people in the controlled arm were using medical masks anyway or many of them were, so it’s hard to tell whether cloth masks were bad or medical masks were good. But it’s enough to cause concern. People who write the reviews on this are also similarly circumspect and I think they’ve actually read the literature where I think most of the EAs confidently pronouncing it’s a good idea generally haven’t. So there’s this general risk of having risky policy proposals which you could derisk, in expectation, by a lot, by carefully, as it were, checking the tape.

More on cloth masks:

And I still think if you’re going to do this, or you’re going to make your recommendations based on expectation, you should be checking very carefully to make sure your expectation is as accurate as it could be, especially if there’s like a credible risk of causing harm and that’s hard to do for anyone, for anything. I mean cf. the history of GiveWell, for example, amongst all its careful evaluation. And we’re sort of at the other end of the scale here. And I think that could be improved. If it was someone like, “Oh, I did my assessment review of mask use and here’s my interpretation. I talked to these authors about these things or whatever else”, then I’d be more inclined to be happy. But where there’s dozens of ideas being pinged around… Many of them are at least dubious, if not downright worrying, then I’m not sure I’m seeing really EA live out its values and be a beacon of light in the darkness of irrationality.

Lewis' concrete recommendations for EA:

The direction I would be keen for EAs to go in is essentially paying closer attention to available evidence such as it is. And there are some things out there which can often be looked at or looked up, or existing knowledge one can get better acquainted with to help inform what you think might be good or bad ideas. And I think, also, maybe there’s a possibility that places like 80K could have a comparative advantage in terms of elicitation or distillation of this in a fast moving environment, but maybe it’s better done by, as it were, relaying on what people who do this all day long, and who have a relevant background are saying about this. 

So yeah, maybe Marc Lipsitch wants to come on the 80K podcast, maybe someone like Adam Kucharski would like to come on. Or like Rosalind Eggo or other people like this. Maybe they’d welcome a chance of being able to set the record straight given like two hours to talk about their thing rather than like a 15 minute media segment. And it seems like that might be a better way of generally improving the epistemic waterline of EA discussions, rather than lots of people pandemic blogging, roughly speaking, and a very rapid, high turnaround. By necessity, there’s like limited time to gather relevant facts and information.

More on EA setting a bad example:

...one of the things I’m worried about, it’s like a lot of people are going to look at COVID-19, start want get involved in GCBRs. And sort of all these people are cautious, circumspect, lot’s of discretion and stuff like that. I don’t think 80Ks activity on this has really modeled a lot of that to them. Rob [Wiblin], in particular, but not alone. So having a pile of that does not fill me with great amounts of joy or anticipation but rather some degree of worry.

I think that does actually apply even in first order terms to the COVID-19 pandemic, where I can imagine a slightly more circumspect or cautious version of 80K, or 80K staff or whatever, would have perhaps had maybe less activity on COVID, but maybe slightly higher quality activity on COVID and that might’ve been better.

On epistemic caution:

I mean people like me are very hesitant to talk very much on COVID for fear of being wrong or making mistakes. And I think that fear should be more widespread and maybe more severe for folks who don’t have the relevant background who’re trying to navigate the issue as well.

Lewis twice mentions an EA Forum post he wrote about epistemic modesty, which sounds like it would be a relevant read, here.

Note that Dominic Cummings, one of the then most powerful men in the UK, [credits the rationality community](https://x.com/dominic2306/status/1373333437319372804) for convincing him that the UK needed to change its coronavirus policy (which I personally am very grateful for!). So it seems unlikely to have been that obvious.

Although I think Yarrow's claim is that the LW community was not "particularly early on covid [and did not give] particularly wise advice."  I don't think the rationality community saying things that were not at the time "obvious" undermines this conclusion as long as those things were also being said in a good number of other places at the same time.

Cummings was reading rationality material, so it had the chance to change his mind. He probably wasn't reading (e.g.) the r/preppers subreddit, so its members could not get this kind of credit. (Another example: Kim Kardashian got Donald Trump to pardon Alice Marie Johnson and probably had some meaningful effect on his first administration's criminal-justice reforms. This is almost certainly a reflection of her having access, not evidence that she is a first-rate criminal justice thinker or that her talking points were better than those of others supporting Johnson's clemency bid.)

It's quite a strange and interesting story, but I don't think it supports the case that LessWrong actually called covid earlier than others. Let's get into the context a little bit.

First off, Dominic Cummings doesn't appear to be a credible person on covid-19, and seems to hold strange, fringe views. For example, in November 2024 he posted a long, conspiratorial tweet which included the following:

The Fauci network should be rolled up & retired en masse with some JAILED. 
And their media supporters - i.e most of the old media - driven out of business.

Incidentally, Cummings also had a scandal in the UK around allegations that he inappropriately violated the covid-19 lockdown and subsequently wasn't honest about it (possibly lied about it). This also makes me a bit suspicious about his reliability.

This situation with Dominic Cummings reminds me a bit of how Donald Trump's staffers in the White House have talked about how it's nearly impossible to get him to read their briefings, while he's obsessed with watching Fox News. Unfortunately, the information a politician pays attention to and acts on is not necessarily the best information, or the source that conveyed that information first.

As mentioned in the post above, there were already mainstream experts like the CDC giving public warnings before the February 27, 2020 blog post that was republished on LessWrong on February 28. Is it possible Dominic Cummings was, for whatever reason, ignoring warnings from experts while, oddly, listening to them from bloggers? Is Cummings' narrative, in general, reliable?

I decided to take a look at the timeline of the UK government's response to covid in March 2020. There's an article from the BBC published on March 14, 2020, headlined, "Coronavirus: Some scientists say UK virus strategy is 'risking lives'". Here's how the BBC article begins:

More than 200 scientists have written to the government urging them to introduce tougher measures to tackle the spread of Covid-19.

In an open letter, the 229 specialists in disciplines ranging from mathematics to genetics - though no leading experts in the science of the spread of diseases - say the UK's current approach will put the NHS under additional stress and "risk many more lives than necessary".

The signatories also criticised comments made by Sir Patrick Vallance, the government's chief scientific adviser, about managing the spread of the infection to make the population immune.

The Department of Health said Sir Patrick's comments had been misinterpreted.

The scientists - all from UK universities - also questioned the government's view that people would become fed up with restrictions if they were imposed too soon.

The open letter says:

By putting in place social distancing measures now, the growth can be slowed down dramatically, and thousands of lives can be spared. We consider the social distancing measures taken as of today as insufficient, and we believe that additional and more restrictive measures should be taken immediately, as it is already happening in other countries across the world.

The BBC article also mentions another open letter, then signed by 200 behavioural scientists (eventually, signed by 680), challenging the government's rationale for not instituting a lockdown yet. That letter opened for signatures on March 13, 2020 and closed for signatures on March 16, 2020.

First, note that this open letter calling for social distancing measures was published 16 days after the February 28, 2020 post on LessWrong and 17 days after the February 27, 2020 blog post it was republishing. If the UK government changed its thinking on or approach to covid-19 in the last few days of February or the first few days of March based on Dominic Cummings reading the blog posts he mentioned in that tweet, why was this open letter still necessary on March 14? What exactly is Cummings saying the bloggers convinced him to do, and when exactly did he do it?

Another quote from the March 14, 2020 BBC article:

Meanwhile the government's scientific advisory group for emergencies (Sage) advised that measures to protect vulnerable people - including household isolation - "will need to be instituted soon".

On March 16, 2020, the UK government advised the public to avoid non-essential social contact. On March 23, 2020, the government announced a nationwide lockdown that officially took effect on March 26. Is it possible the UK government's response was more influenced by mainstream experts than by bloggers?

I think the COVID case usefully illustrates a broader issue with how “EA/rationalist prediction success” narratives are often deployed.

That said, this is exactly why I’d like to see similar audits applied to other domains where prediction success is often asserted, but rarely with much nuance. In particular: crypto, prediction markets, LVT, and more recently GPT-3 / scaling-based AI progress. I wasn’t closely following these discussions at the time, so I’m genuinely uncertain about (i) what was actually claimed ex ante, (ii) how specific those claims were, and (iii) how distinctive they were relative to non-EA communities.

This matters to me for two reasons.

First, many of these claims are invoked rhetorically rather than analytically. “EAs predicted X” is often treated as a unitary credential, when in reality predictive success varies a lot by domain, level of abstraction, and comparison class. Without disaggregation, it’s hard to tell whether we’re looking at genuine epistemic advantage, selective memory, or post-hoc narrative construction.

Second, these track-record arguments are sometimes used—explicitly or implicitly—to bolster the case for concern about AI risks. If the evidential support here rests on past forecasting success, then the strength of that support depends on how well those earlier cases actually hold up under scrutiny. If the success was mostly at the level of identifying broad structural risks (e.g. incentives, tail risks, coordination failures), that’s a very different kind of evidence than being right about timelines, concrete outcomes, or specific mechanisms.

I distinctly remember telling my parents to wear a mask in the airport, based on rationalist sources, and having to argue that actually anonymous people on the internet were more reliable sources than the government, who did not recommend this.

Interesting. The claim I heard was that some rationalists anticipated that there would be a lockdown in the US and figured out who they wanted to be locked down with, especially to keep their work going. That might not have been put on LW when it was happening. I was skeptical that the US would lock down.
