Bio


Evolutionary psychology professor, author of 'The Mating Mind', 'Spent', 'Mate', & 'Virtue Signaling'. B.A. Columbia; Ph.D. Stanford. My research has focused on human cognition, machine learning, mate choice, intelligence, genetics, emotions, mental health, and moral virtues. Interested in longtermism, X risk, longevity, pronatalism, population ethics, AGI, China, crypto.

How others can help me

Looking to collaborate on (1) empirical psychology research related to EA issues, especially attitudes towards longtermism, X risks and GCRs, and sentience; (2) insights for AI alignment & AI safety from evolutionary psychology, evolutionary game theory, and evolutionary reinforcement learning; (3) mate choice, relationships, families, pronatalism, and population ethics as cause areas.

How I can help others

I have 30+ years of experience in behavioral sciences research and have mentored 10+ PhD students and dozens of undergrad research assistants. I'm also experienced with popular science outreach, book publishing, public speaking, social media, market research, and consulting.

Comments

Tobias -- I take your point. Sort of. 

Just as they say 'There are no atheists in foxholes' [when facing risk of imminent death during combat], I feel that it's OK to pray (literally and/or figuratively) when facing AI extinction risk -- even if one's an atheist or agnostic. (I'd currently identify as an 'agnostic', insofar as the Simulation Hypothesis might be true). 

My X handle 'primalpoly' is polysemic, and refers partly to polyamory, but partly to polygenic traits (which I've studied extensively), and partly to some of the hundreds of other words that start with 'poly'. 

I think that given most of my posts on X over the last several years, and the people who follow me, I'm credibly an insider to the conservative right.

My new interview (48 mins) on AI risks for Bannon's War Room: https://rumble.com/v6z707g-full-battleground-91925.html

This was my attempt to try out a few new arguments, metaphors, and talking points to raise awareness about AI risks among MAGA conservatives. I'd appreciate any feedback, especially from EAs who lean to the Right politically, about which points were most or least compelling.

PS: the full video of my 15-minute talk was just posted today on the NatCon YouTube channel; here's the link

David -- I considered myself an atheist for several decades (partly in alignment with my work in evolutionary psychology), and would identify now as an agnostic (insofar as the Simulation Hypothesis has some slight chance of being true, and insofar as 'Simulation-Coders' aren't functionally any different from 'Gods', from our point of view).

And I'm not opposed to various kinds of reproductive tech, regenerative medicine research, polygenic screening, etc.

However, IMHO, too many atheists in the EA/Rationalist/AI Safety subculture have been too hostile or dismissive of religion to be effective in sharing the AI risk message with religious people (as I alluded to in this post). 

And I think way too much overlap has developed between transhumanism and the e/acc cult that dismisses AI risk entirely, and/or embraces human extinction and replacement by machine intelligences. Insofar as 'transhumanism' has morphed into contempt for humanity-as-it-is, and into a yearning for hypothetical-posthumanity-as-it-could-be, I think it's very dangerous.

Modest, gradual, genetic selection or modification of humans to make them a little healthier or smarter, generation by generation? That's fine with me. 

Radical replacement of humanity by ASIs in order to colonize the galaxy and the lightcone faster? Not fine with me.

Arepo - thanks for your comment.

To be strictly accurate, perhaps I should have said 'the more you know about AI risks and AI safety, the higher your p(doom)'. I do think that's an empirically defensible claim. Especially insofar as most of the billions of people who know nothing about AI risks have a p(doom) of zero.

And I might have added that thousands of AI devs employed by AI companies to build AGI/ASI have very strong incentives not to learn too much about AI risks and AI safety of the sort that EAs have talked about for years, because such knowledge would cause massive cognitive dissonance, ethical self-doubt, and regret (as in the case of Geoff Hinton), and/or would handicap their careers and threaten their salaries and equity stakes.

Remmelt - thanks for posting this. 

Senator Josh Hawley is a big deal, with a lot of influence. I think building alliances with people like him could help slow down reckless AGI development. He may not be as tuned into AI X-risk as your typical EA is, but he is, at least, resisting the power of the pro-AI lobbyists.

Thanks for sharing this. 

IMHO, if EAs really want effective AI regulation & treaties, and a reduction in ASI extinction risk, we need to engage more with conservatives, including those currently in power in Washington. And we need to do so using the language and values that appeal to conservatives.  

Joel -- have you actually read the Bruce Gilley book? 

If you haven't, maybe give it a try before dismissing it as something that's 'extremely useful to avoid associating ourselves with'.

To me, EA involves a moral obligation to seek the truth about contentious political topics, especially those that concern the origins and functioning of successful institutions -- which is what the whole colonialism debate is centrally about. It also means not ignoring these topics just to stay inside the Overton window.

Jason -- your reply cuts to the heart of the matter.

Is it ethical to try to do good by taking a job within an evil and reckless industry? To 'steer it' in a better direction? To nudge it towards minimally-bad outcomes? To soften the extinction risk?

I think not. I think the AI industry is evil and reckless, and EAs would do best to denounce it clearly and to warn talented young people not to work inside it.
