The NYT article *Your A.I. Radiologist Will Not Be With You Soon* reports, “Leaders at OpenAI, Anthropic and other companies in Silicon Valley now predict that A.I. will eclipse humans in most cognitive tasks within a few years… The predicted extinction of radiologists provides a telling case study. So far, A.I. is proving to be a powerful medical tool to increase efficiency and magnify human abilities, rather than take anyone’s job.”[1]
I disagree that this is a “telling case study.”[2] Radiology has several attributes which make it hard to generalize to other jobs:
- Patients are legally prohibited from purchasing AI radiology products directly.[3]
- Providers are reimbursed for their labor, not for patient outcomes, so replacing radiologist labor with AI generally can’t be billed for.[4]
- A hospital that replaced human radiologists with AI would expose itself to malpractice liability.[5]
- Moreover, the article frames Geoff Hinton as having confidently predicted that AI would replace radiologists, and that prediction as having been proven wrong, but his statement felt to me more like an offhand remark/hope than a confident forecast.
Takeaways from this incident I endorse:[6]
- Offhand remarks from ML researchers aren’t reliable economic forecasts.
- People trying to predict the effects of automation/AI capabilities should consider that employees often perform valuable services which aren’t easily captured in evals, such as “bedside manner” and “regulatory capture”.
- If you have a job where a) your customers are legally prohibited from hiring someone other than you, b) even if an enterprising competitor decides to run the legal risk of replacing you, they still have to pay you, and c) anyone who replaces you is likely to be sued, you probably have reasonable job security.
Takeaways I don’t endorse:
- Radiology’s impacts being less than Hinton thought means that we should disbelieve:
  - Claims that AI has already driven decreased wages, e.g. Azar et al. 2025 or Brynjolfsson et al. 2025
  - Claims that future AI could drive wages even lower, e.g. Barnett 2025
  - Or really any claim which is supported by something more than an offhand remark
- Many people work in jobs similar to radiology where e.g. it is illegal to replace them with AI, and therefore we can easily extrapolate from the limited wage impacts in radiology to conclude that job losses in other sectors of the economy will also be limited
Appendix: Data and methodology for the sample of AI radiology products
Data
The following products were included in my random sample:
| Product | Legally usable by patients? | Notes |
| --- | --- | --- |
| Viz.AI Contact | No | |
| Aidoc | No | |
| HeartFlow FFRct | No | |
| Arterys Cardio DL | No | |
| QuantX | No | |
| ProFound AI for Digital Breast Tomosynthesis | No | |
| OsteoDetect | No | |
| Lunit INSIGHT CXR Triage | No | |
| Caption Guidance | No | Not intended to assist radiologists; intended to assist ultrasound techs. |
| SubtlePET | No | |
Methodology
I asked GPT 5.1 to randomly sample products and record whether they were legally usable by patients. Transcript here. I then manually verified each product.
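For concreteness, here is a minimal sketch (illustrative only, not part of the original methodology) that restates the table above in code and tallies the result:

```python
# Sampled AI radiology products, mapped to whether patients can legally buy them
# (data copied from the table above).
products = {
    "Viz.AI Contact": False,
    "Aidoc": False,
    "HeartFlow FFRct": False,
    "Arterys Cardio DL": False,
    "QuantX": False,
    "ProFound AI for Digital Breast Tomosynthesis": False,
    "OsteoDetect": False,
    "Lunit INSIGHT CXR Triage": False,
    "Caption Guidance": False,
    "SubtlePET": False,
}

usable = sum(products.values())  # True counts as 1, False as 0
print(f"{usable}/{len(products)} sampled products are legally usable by patients")
# -> 0/10 sampled products are legally usable by patients
```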
- ^
Note that, because the supply of radiologists is artificially limited, a drop in demand needn’t actually cause a change in the number of radiologists employed; with supply this inelastic, a demand shift shows up in the price (wages) rather than the quantity. In the rest of this post, I will respond to a steelman of the NYT article which is about a decrease in radiologists’ wages, not a decrease in the number employed.
- ^
I get vague vibes from the NYT article like “predictions of job loss from AI automation aren’t trustworthy”, but they don’t make a very clear argument, so it’s possible that I am misunderstanding their point. My apologies if so. Thanks to Yarrow for this point.
- ^
I randomly sampled 10 AI radiology products and found that patients are legally allowed to purchase 0 of them. See appendix.
- ^
Medical billing is complex, but, roughly, providers are reimbursed for the labor they put into seeing the patient, not for the patient’s improved outcomes. In my sample of 10 AI products, only 1 had a CPT code, meaning that for the other nine, providers can’t bill even $0.01 more for using them than for using a non-AI tool; and the one product that did have a code could only be billed in combination with human labor.
- ^
Possibly at some point in the future, juries will acknowledge the supremacy of AI systems, but I doubt a present-day jury would be very sympathetic to a hospital that replaced human radiologists with an AI that made a mistake. Some insurers have a blanket exclusion for AI-caused malpractice, and radiology has one of the highest rates of malpractice lawsuits. Thanks to Jason for this point.
- ^
Works in Progress has an article which goes into more detail about the state of radiology automation and is helpful for understanding where things currently stand, though I think it undersells the regulatory barriers.

I have an issue with this argument, although I don't have much expertise in this field.
You talk about the legality of a patient directly buying radiology results from an AI company, but this isn’t a very plausible path to radiologists being replaced. People will still have to go to the hospital to get the actual radiology scans done.
The actual concern would be that hospitals get the radiology scans done by non-radiologists and outsource the interpretation of those scans to an AI radiology company. I can’t really tell from your post whether this is illegal or not (and if it is, what is the business model of these companies?). This process seems more like how automation will actually go in most fields, so it’s relevant if it isn’t working for radiology.
And another point: one reason this stuff may be illegal is that it doesn’t work well enough to be made legal. If that is part of the reason, it can absolutely be counted as a point against the likelihood of AI automation.