I've found the following diagram of the approximate stakes of OpenAI stakeholders useful for understanding the situation:
Interesting question. Some potentially tangential considerations:
Pivotal is hiring a Research Manager for Technical AI Safety. This could be a great fit if you're technical, enjoy being in the weeds of multiple research projects, love 1:1s, or could see yourself as a coach. You don't need to be an experienced RM already; we want to make you one! Apply here – we evaluate on a rolling basis. Recommend people here or to me directly.
Thanks, Toby & titotal!
Fair, 'misreading' is a strong word for what's going on. I find it somewhat justified because of how much less plausible the 'standard reading' becomes once you engage with the whole story. I tried to address the weirdness of disagreeing with the author in the last section.
I didn't intend this post to be a gotcha. Sorry if it comes across this way!
I also agree that it's fine for people to repurpose stories. There are likely many people with "Two roads diverged in a wood, and I took the one less travelled by. And that has made all the difference." posters who derive a lot of value from them, which is good. But it's still interesting to me that those posters misread the poem, and that the prevalence of this misreading may reveal something broader about the readers and what they want to get out of literature.
You mention having “an ambition, even a prayer for [the AI developers]” (~12 min). You might mean this figuratively, but many viewers of that channel will probably take it literally. And since you've admitted elsewhere that you don't believe in God and don't practice any religion, they will likely see this as a contradiction and, once you become more popular, suspect you're not being genuine.
My guess is that it's better to be upfront about not being a Christian in order to retain authenticity. You'll probably always be regarded as an outsider by the conservative right (to give another example: your Twitter handle includes 'poly', and you've spoken at length online about being polyamorous), but you could hope to be perceived as 'the outsider who gets us'. This kinda worked for a while for Sam Harris and Milo Yiannopoulos.
I’m not sure arguing for animal welfare on a strict scriptural basis really works. I wish the Bible spoke clearly about the moral worth of animals, but I don't think it does.
To me, many of the examples don't show genuine care for animals. They seem to be about keeping society functioning (e.g. a working ox must not be muzzled but allowed to eat; the ox fallen into a pit), about using animals to point to the all-encompassingness of a rule (e.g. even the ox and donkey rest on the Sabbath), or about using animals as illustrations in parables (e.g. the lost sheep).
Some passages also point the other way. Almost all animals die in the Flood because of human sin. The Old Testament sometimes commands the complete destruction of livestock along with the people in condemned cities. In Luke 8, Jesus does not seem to treat pigs with mercy:
Jesus asked him, “What is your name?”
“Legion,” he replied, because many demons had gone into him. And they begged Jesus repeatedly not to order them to go into the Abyss.
A large herd of pigs was feeding there on the hillside. The demons begged Jesus to let them go into the pigs, and he gave them permission. When the demons came out of the man, they went into the pigs, and the herd rushed down the steep bank into the lake and was drowned.
My guess is that if you want to make a case for animal welfare from a Christian perspective, it works better to argue for it 'in spirit', acknowledging that animal welfare wasn't really a priority in biblical times.
I'm not sure that immediately jumping to critiquing the messaging makes sense here.
If hearing someone’s strong belief in simulation theory lowers your trust in their AI safety views, the 'obvious' first step seems to be to lower your trust in Yampolskiy's AI safety views.
And if the simulation theory makes AI safety feel less motivating, the 'obvious' first step seems to be to reduce your motivation in proportion to how credible you find the theory (e.g. if you give the theory 10% credence, discount your motivation by roughly 10%).
Why is this false? The valuation in Oct. 2024 was $157B, which means it has grown ~3.1x since. So wouldn't the compensation of $130B / 3.1 ≈ $42B still be "tens of billions" in May 2025 terms?
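For concreteness, here's the arithmetic spelled out (a minimal sketch; the $157B and $130B figures are the ones in this thread, and the ~$487B current valuation is just what the ~3.1x multiple implies, not an independently sourced number):

```python
# Back-of-the-envelope check of the deflation argument above.
oct_2024_valuation = 157e9   # USD, Oct 2024 valuation from the comment
growth_multiple = 3.1        # approximate growth in valuation since then

# Current valuation implied by the multiple (assumption, not a sourced figure):
current_valuation = oct_2024_valuation * growth_multiple   # ~$487B

compensation_now = 130e9     # USD, the compensation figure under discussion

# Deflate the current figure back by the same multiple to express it
# in earlier-valuation terms:
compensation_then = compensation_now / growth_multiple     # ~$42B

print(f"Implied current valuation: ${current_valuation / 1e9:.0f}B")   # ~487B
print(f"Compensation in earlier terms: ${compensation_then / 1e9:.0f}B")  # ~42B
```

$42B is still comfortably "tens of billions", which is the point of the question.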