Bio

Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://metr.org/hiring 

How others can help me

Feedback always appreciated; feel free to email/DM me or use this link if you prefer to be anonymous.

Sequences (3)

AI Pause Debate Week
EA Hiring
EA Retention

Comments (1167)

Topic contributions (6)

Thanks! Perhaps I phrased this poorly; a person being a patient or not isn't the relevant factor, it's whether or not they are licensed. E.g. if you look at the FDA authorization for the first product it says:

The ContaCT mobile application is intended to be used by neurovascular specialists, such as vascular neurologists, neuro-interventional specialists, or users with similar training who have been pre-authorized by their Healthcare Organization or Facility.

I'm actually not sure whether one could generously interpret "similar training" to include e.g. radiology technicians. They wouldn't be allowed to make diagnoses, and my guess is that the government would not look kindly on a rad tech saying something like "I'm not diagnosing you with a stroke, but the AI thinks you've had one, wink, wink," but I'm not sure. Perhaps someone with more legal experience could chime in.

In any case, I'm skeptical that a business would want to run that malpractice risk (particularly since, as mentioned above, insurance wouldn't reimburse them for doing so).

And yes, I agree that this probably means these products aren't more clearly safe and effective than e.g. eyeglasses (where businesses are analogously legally prohibited from giving glasses to patients without a licensed human optometrist first performing an exam). It's just worth considering that this is a very high bar![1]

  1.

    Although I think maybe it's more accurate to just say that medical device authorization is based on a bunch of factors that are largely unrelated to the safety and efficacy of the product. E.g. I think there's no one who believes that cigarettes are safer than eyeglasses, despite them being available OTC.

I doubt that there are surveys of when people stayed home. You could maybe look at prediction markets, but I'm not sure what you would compare them against to see whether the prediction markets were more accurate than some other reference group.

Thanks for collecting this timeline! 

The version of the claim I have heard is not that LW was early to suggest that there might be a pandemic, but rather that they were unusually willing to do something about it because they take small-probability, high-impact events seriously. E.g. I suspect that you would say that Wei Dai was "late" because their comment came after the NYT article etc., but nonetheless they made a 700% return betting that covid would be a big deal.

I think it can be hard to remember just how much controversy there was at the time. E.g. you say of March 13, "By now, everyone knows it's a crisis," but sadly "everyone" did not include the California Department of Public Health, which didn't issue stay-at-home orders for another week.

[I have a distinct memory of this because I told my girlfriend I couldn't see her anymore since she worked at the department of public health (!!) and was still getting a ton of exposure since the California public health department didn't think covid was that big of a deal.]

Congrats Samantha and the AIM team!

Your answer is the best that I know of, sadly.

One thing you could consider is that there are a bunch of EAGx conferences in warm/sunny places (Ho Chi Minh City, Singapore, etc.). These cities maybe don't meet the definition of "hub," but they have enough people for a conference, which might meet your needs.

Thanks Vasco, I hadn't seen that. Do you know if anyone has addressed Nathan's "Comparative advantage means I'm guaranteed work but not that that work will provide enough for me to eat" point? (Apart from Maxwell, who I guess concedes the point?)

Why are there fewer horses?

+1 to this being an important question to ask.

+1 to maintaining justification standards across cause areas, thanks for writing this post!

Fwiw I feel notably less clueless about WAW than about AI safety, and would have assumed the same is true of most people who work in AI safety, though I admittedly haven't talked to very many of them about this. (And also haven't thought about it that deeply myself.)

Is the amount which has been donated to the fund visible anywhere?

Sorry, I don't mean models that you consider to be better, but rather metrics/behaviors. Like what can V-JEPA-2 (or any model) do that previous models couldn't which you would consider to be a sign of progress?
