I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
I think "speck of dust in the eye" was a bad choice for the central example of this debate, because in some situations a speck in your eye can be literally zero painful, and in others it can be actually quite painful and distressing. I think this leads to miscommunications and poor intuitions.
My preferred alternative would be something like "lightly scratching your palm with your fingernail". And while this is technically pain, I find a single light scratch so minor that it has literally zero effect on my happiness: in fact, I will sometimes do it to myself on purpose when I get sufficiently bored.
I therefore think that premise 1, "mild pain is bad", is wrong for sufficiently small definitions of "mild pain". I think you need a threshold of badness for the argument to work. Furthermore, I think most people who would side with the "dust specks" also have some threshold where they would pick the torture: for example, if it were "punch a billion people in the face vs. torture one person".
To be clear, I wasn't saying that complexity itself was the cause of consciousness, just that some level of algorithmic complexity may be a requirement for consciousness. This seems like a common position: the prospect of present or future LLM sentience is a subject of debate, but it's rare to see a similar debate about the sentience of a pocket calculator.
A brain and a digital simulation have some similarities, but they also have a lot of differences. One of those differences is that brains are running on "laws of physics" algorithms that are overwhelmingly faster and more complex than those of digital simulations. Brains didn't need to evolve these "algorithms": they're inherent to any biological process. Seth identifies several other differences as well: continuous operation, embodiment, etc. His position seems to be that at least one of these differences may mean the simulation lacks consciousness.
Disclaimer: I am not too well-versed in the philosophy here, so I could be saying dumb things; feel free to correct me.
From my computational physics experience, I know that it is physically impossible to exactly simulate the electrical properties of a system of even a couple hundred atoms on a classical digital computer, due to an exponential blowup in computational complexity.
The laws of physics could be described as an algorithm, but the algorithm in question is on a level of complexity that is impossible for digital simulations to match. I think it's generally agreed that some degree of complexity is required for consciousness: it doesn't seem insane to say that that complexity might lie past what is digitally simulatable in practice.
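To give a concrete (if deliberately oversimplified) sense of the blowup, here's a back-of-the-envelope sketch. The assumption that each atom contributes just one two-level quantum degree of freedom is mine, purely for illustration; real electronic-structure problems are even harder than this.

```python
# Back-of-the-envelope illustration of the exponential blowup in exact
# quantum simulation. Assumption (for illustration only): each atom
# contributes a single two-level (spin-1/2) degree of freedom, so an
# exact quantum state vector needs 2**n complex amplitudes.

def exact_state_memory_bytes(n_atoms: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed to store one exact quantum state vector."""
    return (2 ** n_atoms) * bytes_per_amplitude

for n in (10, 50, 100, 200):
    print(f"{n:>3} atoms -> {exact_state_memory_bytes(n):.3e} bytes")

# 200 atoms -> ~2.6e+61 bytes: astronomically more memory than any
# classical computer could ever have, before we even get to runtime.
```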
The question of digital consciousness seems to depend on whether simulated, abstracted approximations to the physical process of thinking are close enough to produce the same effect.
I am somewhat concerned about data contamination here: are you sure that the original GiveWell writeup has at no point leaked into your model's analysis? I.e., was any of GiveWell's analysis online before the August 2025 knowledge cutoff for GPT, or did your agents look at the GiveWell report as part of their research?
Yeah, the future described in this post isn't particularly "weird", per se; it just assumes that every technology that has been hypothetically proposed for the future will be created by ASI soon after AGI arrives.
I think the future will be a lot more unpredictable than this. Analogously, I can imagine someone from 1965 being very confused by a future where immensely powerful computers fit in your pocket, but human spaceflight has gone no further than the moon. It's very hard to predict in advance the constraints and shortcomings of future technology, or the practical and logistical factors that affect what is actually achieved.
Have you considered that the reason these policies are not increasing AI usage is that AI usage is not particularly useful for many applications? Particularly when it comes to something like animal advocacy, I'm struggling to think of many things you'd actually need a full model subscription for (rather than just asking the occasional question to a free model).
I think the original policies are fine: they let people evaluate and decide for themselves how useful AI models are, and adjust strategies accordingly. Trying to pressure people to use AI beyond this level is going to make your team less effective.
You correctly point out that "AI safety leaders" is a group that selects for high concern about AI, which means that the average is skewed towards high concern, relative to experts more generally.
I would like to add that the same is probably true (to a lesser extent) for AGI timeline estimates: people who think AGI is very far away are less likely to see AI safety as a pressing concern, and are thus less motivated to become AI safety leaders. Also, people who are concerned about present-day AI risks but don't think AGI is imminent often call themselves "AI ethicists" rather than AI safety people, and these "AI ethicists" are unlikely to show up to a "summit on existential security".
To be clear, I think it's good to write this article, but we should always be mindful of selection effects when interpreting surveys.
Unfortunately, most estimates of LLM energy use are somewhat out of date due to the rise of reasoning models. A small amount of personal usage is probably still not that energy intensive, but I don't think it's negligible anymore.
The most up-to-date estimates I've seen of AI energy use are in this paper here. I recommend looking at table 4. For the o3 reasoning model, which is probably the closest analogue to today's reasoning models, a short query costs something like 7 Wh, a medium query about 20 Wh, and a long query about 30 Wh. Using a non-reasoning model like GPT-4o was much less intensive, at around 0.4 Wh for a small query; however, in my experience the results tend to be a lot worse.
So if you end up making something like 10 medium queries to a reasoning model over the course of a project, that adds up to 0.2 kWh; 100 queries would be 2 kWh. Typical household energy use is something like 30 kWh per day. So the impact is small but non-negligible: there are probably other things you can do that will have a bigger impact on energy use.
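If it helps, here's that back-of-the-envelope arithmetic spelled out (a minimal sketch; the per-query and household figures are just the rough numbers quoted above):

```python
# Rough arithmetic for reasoning-model energy use, using the per-query
# figure quoted above (~20 Wh for a medium reasoning query) and a rough
# ~30 kWh/day household baseline.

WH_PER_MEDIUM_QUERY = 20      # from table 4 of the paper linked above
HOUSEHOLD_KWH_PER_DAY = 30    # rough typical household usage

def project_kwh(n_queries: int, wh_per_query: float = WH_PER_MEDIUM_QUERY) -> float:
    """Total energy in kWh for n queries at wh_per_query each."""
    return n_queries * wh_per_query / 1000

for n in (10, 100):
    kwh = project_kwh(n)
    print(f"{n:>3} queries -> {kwh:.1f} kWh (~{kwh / HOUSEHOLD_KWH_PER_DAY:.1%} of a household-day)")

# 10 queries  -> 0.2 kWh (~0.7% of a household-day)
# 100 queries -> 2.0 kWh (~6.7% of a household-day)
```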
Personally, I would be worried about cognitive offloading: I think that an overreliance on AI can hamper your ability to learn things, if you offload mentally difficult tasks to the AI.
This interpretation is not true. Thiel was talking specifically about money going to Gates in the event of Musk dying:
That's how Thiel said he persuaded Musk. He said he looked up actuarial tables and found the probability of Musk's death in the coming year equated to giving $1.4 billion to Gates, who has long sparred with the Tesla CEO.
"What am I supposed to do—give it to my children?" Musk responded, in Thiel's telling. "You know, it would be much worse to give it to Bill Gates."
I think this would only make sense if Musk had specifically willed his pledge money to the Gates Foundation?
I think you would benefit from re-reading the article in question. For example, they directly address your point 1 by pointing out that consumer diffusion figures are often misleading because they are expressed in terms of "percentage of people that use chatbots on occasion" rather than in terms of frequency of use.
Point 3 is not even an argument, just a restatement of what they believe: yes, they think AI domination will take decades. They state the reasons they believe this very clearly in the section "Diffusion is limited by the speed of human, organizational, and institutional change": if you disagree with this, you have to present actual arguments. From what I know, most economists would agree with them.
Point 5 is not an argument either: they are not to blame for how you interpret their "vibes". If people interpret "AI will be akin to the internet" as anything other than "AI will be akin to the internet", that's their fault, not the authors'.
As for point 6, I'm confused as to what your position is here. Do you think AI systems are merely cheating on every single benchmark? In the section "benchmarks do not mention real-world utility", I took them as referring to benchmarks that are actually meaningful: LLMs genuinely are good at taking law exams, even non-contaminated ones, but that doesn't translate into being a good lawyer because of the aspects that are not easily measurable. I don't see how this contradicts any of their previous work.