Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://metr.org/hiring
Feedback always appreciated; feel free to email/DM me or use this link if you prefer to be anonymous.
Thanks for collecting this timeline!
The version of the claim I have heard is not that LW was early to suggest that there might be a pandemic, but rather that they were unusually willing to do something about it because they take small-probability, high-impact events seriously. E.g. I suspect that you would say that Wei Dai was "late" because their comment came after the NYT article etc., but they nonetheless made a 700% return betting that covid would be a big deal.
I think it can be hard to remember just how much controversy there was at the time. E.g. you say of March 13, "By now, everyone knows it's a crisis", but sadly "everyone" did not include the California Department of Public Health, which didn't issue stay-at-home orders for another week.
[I have a distinct memory of this because I told my girlfriend I couldn't see her anymore: she worked at the Department of Public Health (!!) and was still getting a ton of exposure, since the department didn't think covid was that big a deal.]
Your answer is the best that I know of, sadly.
One thing you could consider: there are a bunch of EAGx conferences in warm/sunny places (Ho Chi Minh City, Singapore, etc.). These cities may not meet the definition of "hub", but they have enough people for a conference, which might meet your needs.
+1 to maintaining justification standards across cause areas, thanks for writing this post!
Fwiw I feel notably less clueless about WAW than about AI safety, and would have assumed the same is true of most people who work in AI safety, though I admittedly haven't talked to very many of them about this. (And also haven't thought about it that deeply myself.)
Thanks! Perhaps I phrased this poorly: whether a person is a patient isn't the relevant factor; it's whether they are licensed. E.g. the FDA authorization for the first product limits use to licensed practitioners (or those with "similar training").
I'm actually not sure whether one could generously interpret "similar training" to include e.g. radiology technicians. They wouldn't be allowed to make diagnoses, and my guess is that the government would not look kindly on a rad tech saying something like "I'm not diagnosing you with a stroke, but the AI thinks you've had one, wink wink." Perhaps someone with more legal experience could chime in.
In any case, I'm skeptical that a business would want to run that malpractice risk (particularly since, as mentioned above, insurance wouldn't reimburse them for doing so).
And yes, I agree that this probably means these products aren't more clearly safe and effective than, e.g., eyeglasses (where businesses are analogously prohibited by law from giving glasses to patients without a licensed human optometrist first performing an exam). It's just worth considering that this is a very high bar![1]
[1] Although I think maybe it's more accurate to just say that medical device authorization is based on a bunch of factors that are largely unrelated to the safety and efficacy of the product. E.g. I think no one believes that cigarettes are safer than eyeglasses, despite cigarettes being available OTC.