Jackson Wagner

Scriptwriter for RationalAnimations @ https://youtube.com/@RationalAnimations
3639 karma · Joined · Working (6-15 years) · Fort Collins, CO, USA

Bio

Scriptwriter for RationalAnimations!  Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc.  Also a big fan of EA / rationalist fiction!

Comments (354)

To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):

There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), or beneficial policies in third-world countries like salt iodization or lead paint elimination.

Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think tanks). This often includes enthusiasm for denser ("yimby") housing construction, reforming how science funding and academia work in order to speed up scientific progress (such as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".

Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).

One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.

I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 100,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

Linking my own thoughts as part of previous discussion "How confident are you that it's preferable for America to develop AGI before China does?".  I generally agree with your take.

This is a nice story, but it doesn't feel realistic to treat the city of the future as such an all-or-nothing affair.  Wouldn't there be many individual components (like the merchant's initial medical tonic) that could be stand-alone technologies, diffusing throughout the world and smoothly raising standards of living in the usual way?  In this sense, even your "optimistic" story seems too pessimistic about the wide-ranging, large-scale impact of the scholar's advice.

The world of the story would still develop quite differently than in real history, since they're:

  1. getting technologies much faster than in real history
  2. getting technologies without understanding as much of the theory behind them.  (although is this really true?  I feel like, if we had access to such a scholar, it might be easiest for the scholar to tell us about fundamental theories of nature, rather than laboriously transcribing the design for each and every inscrutable device.  so it's possible that an oracle would actually differentially advance our theoretical understanding -- consider how useful an oracle would be in the field of modern pharma development, where we have many effective drugs whose exact mechanisms of action are still unknown!)

This and other effects (like the obvious power-concentration aspect of whoever controls access to the oracle's insights) would probably produce a very lopsided-seeming world compared to actual modernity.  But I don't think it would end up looking like either of the two endings to your story.

(Of course, your more poetic endings fit the form of a traditional fable much better.  "And then the city kicked off an accelerating techno-industrial singularity" doesn't really fit the classic repertoire of tragedy, comedy, etc!)

100km across would be a pretty large comet; how much warning would humanity likely get that the comet is incoming?

Sorry about that!  I think I just intended to link to the same place I did for my earlier use of the phrase "AI-enabled coups", namely this Forethought report by Tom Davidson and pals, subtitled "How a Small Group Could Use AI to Seize Power": https://www.forethought.org/research/ai-enabled-coups-how-a-small-group-could-use-ai-to-seize-power

But also relevant to the subject is this Astral Codex Ten post about who should control an LLM's "spec": https://www.astralcodexten.com/p/deliberative-alignment-and-the-spec

The "AI 2027" scenario is pretty aggressive on timelines, but also features a lot of detailed reasoning about potential power-struggles over control of transformative AI which feels relevant to thinking about coup scenarios.  (Or classic AI takeover scenarios, for that matter. Or broader, coup-adjacent / non-coup-authoritarianism scenarios of the sort Thiel seems to be worried about, where instead of getting taken over unexpectedly by China, Trump, or etc, today's dominant western liberal institutions themselves slowly become more rigid and controlling.)

For some of the shenanigans that real-world AI companies are pulling today, see the 80,000 Hours podcast on OpenAI's clever ploys to do away with its non-profit structure, or Zvi Mowshowitz on xAI's embarrassingly blunt, totally not-thought-through attempts to manipulate Grok's behavior on various political issues (or a similar, earlier incident at Google).

it could be the case that he is either lying or cognitively biased to believe in the ideas he also thinks are good investments

Yeah.  Thiel is often, like, so many layers deep into metaphor and irony in his analysis, that it's hard to believe he keeps everything straight inside his head.  Some of his investments have a pretty plausible story about how they're value-aligned, but notably his most famous and most lucrative investment (he was the first outside investor in Facebook, and credits Girardian ideas for helping him see the potential value) seems ethically disastrous!  And not just from the commonly-held liberal-ish perspective that social media is bad for people's mental health and/or seems partly responsible for today's unruly populist politics.  From a Girardian perspective it seems even worse!!  Facebook/instagram/twitter/etc are literally the embodiment of mimetic desire, hugely accelerating the pace and intensity of the scapegoat process (cancel culture, wokeness, etc -- the very things Thiel despises!) and hastening a catastrophic Girardian war of all against all as people become too similar in their desires and patterns of thinking (the kind of groupthink that is such anathema to him!).

Palantir also seems like a dicey, high-stakes situation where its ultimate impact could be strongly positive or strongly negative, very hard to figure out which.

If you take seriously either of these donations, they directly contradict your claim that he is worried about stable totalitarianism and certainly personal liberty

I would say it seems like there are three potential benefits that Thiel might see for his support for Blake / Masters:

  1. Grim neoreactionary visions of steering the future of the country by doing unlawful, potentially coup-like stuff at some point in the future. (I think this is a terrible idea.)
  2. A kind of vague, vibes-based sense that we need to support conservatives in order to shake up the stagnant liberal establishment and "change the conversation" and shift the culture.  (I think this is a dumb idea that has backfired so far.)
  3. The normal concept of trying to support people who agree with you on various policies, in the hopes they pass those policies -- maybe now, or maybe only after 2028 on the off chance that Vance becomes president later.  (I don't know much about the details here, but at least this plan isn't totally insane?)

Neoreaction: In this comment I try to map out the convoluted logic by which Thiel might be reconciling his libertarian beliefs like "I am worried about totalitarianism" with neoreactionary ideas like "maybe I should help overthrow the American government".  (Spoilers: I really don't think his logic adds up; any kind of attempt at a neoreactionary power-grab strikes me as extremely bad in expectation.)  I truly do think this is at least some part of Thiel's motivation here.  But I don't think that his support for Vance (or Blake Masters) was entirely or mostly motivated by neoreaction.  There are obviously a lot of reasons to try and get one of your buddies to become a senator!  If EA had any shot at getting one of "our guys" to be the next Dem vice president, I'm sure we'd be trying hard to do that!

"Shifting the conversation": In general, I think Thiel's support for Trump in 2016 was a dumb idea that backfired and made the world worse (and not just by Dem lights -- Thiel himself now seems to regret his involvement).  He sometimes seems so angry at the stagnation created by the dominant liberal international order, that he assumes if we just shake things up enough, people will wake up and the national conversation will suddenly shift away from culture-war distractions to more important issues.  But IMO this hasn't happened at all. (Sure, Dems are maybe pivoting to "abundance" away from wokeness, which is awesome.  But meanwhile, the entire Republican party has forgotten about "fiscal responsibility", etc, and fallen into a protectionist / culture-war vortex.  And most of all, the way Trump's antics constantly saturate the news media seems like the exact opposite of a healthy national pivot towards sanity.)  Nevertheless, maybe Thiel hasn't learned his lesson here, so a misguided desire to generally oppose Dems even at the cost of supporting Trump probably forms some continuing part of his motivation.

Just trying to actually get desired policies (potentially after 2028): I'd be able to say more about this if I knew more about Vance and Masters' politics.  But I'm not actually an obsessive follower of JD Vance Thought (in part because he just seems to lie all the time) like I am with Thiel.  But, idk, some thoughts on this, which seems like it probably makes up the bulk of the motivation:

  • Vance does seem to just lie all the time, misdirecting people and distracting from one issue by bringing up another in a totally scope-insensitive way.  (Albeit this lying takes a kind of highbrow, intellectual, right-wing-substacker form, rather than Trump's stream-of-consciousness narcissistic confabulation style.)  He'll say stuff like "nothing in this budget matters at all, don't worry about the deficit or the benefit cuts or etc -- everything will be swamped by the importance of [some tiny amount of increased border enforcement funding]".
    • The guy literally wrote a whole book about all the ways Trump is dumb and bad, and now has to constantly live a lie to flatter Trump's whims, and is apparently pulling that trick off successfully!  This makes me feel like "hmm, this guy is the sort of smart machiavellian type dude who might have totally different actual politics than what he externally espouses".  So, who knows, maybe he is secretly 100% on board with all of Thiel's transhumanist libertarian stuff, in which case Thiel's support would be easily explained!
    • Sometimes (like deficit vs border funding, or his anti-Trump book vs his current stance) it's obvious that he's knowingly lying.  But other times he seems genuinely confused and scope-insensitive.  Like, maybe one week he's going on about how falling fertility rates are a huge crisis and the #1 priority.  Then another week he's crashing the Paris AI summit and explaining how America is ditching safetyism and going full-steam ahead since AI is the #1 priority.  (Oh yeah, but also he claims to have read AI 2027 and to be worried about many of the risks...)  Then it's back to cheerleading for deportations and border control, since somehow stopping immigrants is the #1 priority.  (He at least knows it's Trump's best-polling issue...)  Sometimes all this jumping-around seems to happen within a single interview conversation, in a way that makes me think "okay, maybe this guy is not so coherent".
  • All the lying makes it hard to tell where Vance really stands on various issues.  He seems like he was pushing to be less involved in fighting against the Houthis and Iran?  (Although he lost those internal debates.)  Does he actually care about immigration, or is that fake?  What does he really think about tariffs and various budget battles?
  • Potential Thiel-flavored wins coming out of the white house:
    • Zvi says that "America's AI Action Plan is Pretty Good"; whose doing is that?  Not Trump.  Probably not Elon.  If this was in part due to Vance, then this is probably the biggest Vance-related payoff Thiel has gotten so far.
      • The long-threatened semiconductor tariff might be much weaker than expected; probably this was the work of Nvidia lobbyists or something, but again, maybe Vance had a finger on the scale here?
      • Congress has also gotten really pro-nuclear-power really quickly, although again this is probably at the behest of AI-industry lobbyists, not Vance.
      • But it might especially help to have a cheerleader in the executive branch when you are trying to overhaul the government with AI technology, eg via big new Palantir contracts or providing chatGPT to federal workers.
    • Thiel seems to be a fan of cryptocurrency; the republicans have done a lot of pro-crypto stuff, although maybe they would have done all this anyways without Vance.
    • Hard to tell where Thiel stands on geopolitical issues, but I would guess he's in the camp of people who are like "ditch Russia/Ukraine and ignore Iran/Israel, but be aggressive on containing China".  Vance seems to be a dove on Iran and the Houthis, and his perennial Europe-bashing is presumably seen as helpful as regards Russia, trying to convince Europe that they can't always rely on the USA to back them up, and therefore need to handle Russia themselves.
    • Tragically, RFK is in charge of all the health agencies and is doing a bunch of terrible, stupid stuff.  But Marty Makary at the FDA and Jim O'Neill at the HHS are Thiel allies and have been scurrying around amidst the RFK wreckage, doing all kinds of cool stuff -- trying to expedite pharma manufacturing build-outs, building AI tools to accelerate FDA approval processes, launching a big new ARPA-H research program for developing neural interfaces, et cetera.  This doesn't have anything to do with Vance, but definitely represents return-on-investment for Thiel's broader influence strategy.  (One of the few arguable bright spots for the tech right, alongside AI policy, since Elon's DOGE effort has been such a disaster, NASA lost an actually-very-promising Elon-aligned administrator, Trump generally has been a mess, etc.)
  • Bracketing the ill effects of generally continuing to support Trump (which are maybe kind of a sunk cost for Thiel at this point), the above wins seem easily worth the $30m or so spent on Vance and Masters' various campaigns.
    • And then of course there's always the chance he becomes president in 2028, or otherwise influences the future of a hopefully-post-Trump republican party, and therefore gets a freer hand to implement whatever his actual politics are.
    • I'm not sure how the current wins (some of them, like crypto deregulation or abandoning Ukraine or crashing the Paris AI summit, are only wins from Thiel's perspective, not mine) weigh up against bad things Vance has done (in the sense of bad-above-replacement of the other vice-presidential contenders like Marco Rubio) -- compared to more normal Republicans, Vance seems potentially more willing to flatter Trump's idiocy on stuff like tariffs, or trying to annex Greenland, or riling people up with populist anti-immigrant rhetoric.

I am a biased center left dem though

I am a centrist dem too, if you can believe it!  I'm a big fan of Slow Boring, and in recent months I have also really enjoyed watching Richard Hanania slowly convert from a zealous alt-right anti-woke crusader into a zealous neoliberal anti-Trump dem and shrimp-welfare-enjoyer.  But I like to hear a lot of very different perspectives about life (I think it's very unclear what's going on in the world, and getting lots of different perspectives helps for piecing together the big picture and properly understanding / prioritizing things), which causes me to be really interested in a handful of "thoughtful conservatives".  There are only a few of them, especially when they keep eventually converting to neoliberalism / georgism / EA / etc, so each one gets lots of attention...

I think Thiel really does have a variety of strongly held views.  Whether these are "ethical" views, ie views that are ultimately motivated by moral considerations... idk, kinda depends on what you are willing to certify as "ethical".

I think you could build a decent simplified model of Thiel's motivations (although this would be crediting him with WAY more coherence and single-mindedness than he or anyone else really has IMO) by imagining he is totally selfishly focused on obtaining transhumanist benefits (immortality, etc) for himself, but realizes that even if he becomes one of the richest people on the planet, you obviously can't just go out and buy immortality, or even pay for a successful immortality research program -- it's too expensive, there are too many regulatory roadblocks to progress, etc.  You need to create a whole society that is pro-freedom and pro-property-rights (so it's a pleasant, secure place for you to live) and radically pro-progress.  Realistically it's not possible to just create an offshoot society, like a charter city in the ocean or a new country on Mars (the other countries will mess with you and shut you down).  So this means that just to get a personal benefit to yourself, you actually have to influence the entire trajectory of civilization, avoiding various apocalyptic outcomes along the way (nuclear war, stable totalitarianism), etc.  Is this an "ethical" view?

  • Obviously, creating a utopian society and defeating death would create huge positive externalities for all of humanity, not just Mr Thiel.
    • (Although longtermists would object that this course of action is net-negative from an impartial utilitarian perspective -- he's short-changing unborn future generations of humanity, running a higher level of extinction risk in order to sprint to grab the transhumanist benefits within his own lifetime.)
  • But if the positive externalities are just a side-benefit, and the main motivation is the personal benefit, then it is a selfish rather than altruistic view.  (Can a selfish desire for personal improvement and transcendence still be "ethical", if you're not making other people worse off?)
    • Would Thiel press a button to destroy the whole world if it meant he personally got to live forever?  I would guess he wouldn't, which would go to show that this simplified monomaniacal model of his motivations is wrong, and that there's at least a substantial amount of altruistic motivation in there.

I also think that lots of big, world-spanning goals (including altruistic things like "minimize existential risk to civilization", or "minimize animal suffering", or "make humanity an interplanetary species") often problematically route through the convergent instrumental goal of "optimize for money and power", while also being sincerely-held views.  And none more so than a personal quest for immortality!  But he doesn't strike me as optimising for power-over-others as a sadistic goal for its own sake (as it may have been for, say, Stalin) -- he seems to have such a strong belief in the importance of individual human freedom and agency that it would be surprising if he's secretly dreaming of enslaving everyone and making them do his bidding.  (Rather, he consistently sees himself as trying to help the world throw off the shackles of a stultifying, controlling, anti-progress regime.)

But getting away from this big-picture philosophy, Thiel also seems to have lots of views which, although they technically fit nicely into the overall "perfect rational selfishness" model above, seem to at least in part be fueled by an ethical sense of anger at the injustice of the world.  For example, sometime in the past few years Thiel started becoming a huge Georgist.  (Disclaimer: I myself am a huge Georgist, and I think it always reflects well on people, both morally and in terms of the quality of their world-models / ability to discern truth.)

  • Here is a video lecture where Thiel spends half an hour at the National Conservatism Conference, desperately begging Republicans to stop just being obsessed with culture-war chum and instead learn a little bit about WHY California is so messed up (ie, the housing market), and therefore REALIZE that they need to pass a ton of "Yimby" laws right away in all the red states, or else red-state housing markets will soon become just as dysfunctional as California's, and hurt middle class and poor people there just like they do in California.  There is some mean-spiritedness and a lot of Republican in-group signalling throughout the video (like when he is mocking the 2020 dem presidential primary candidates), but fundamentally, giving a speech trying to save the American middle class by Yimby-pilling the Republicans seems like a very good thing, potentially motivated by sincere moral belief that ordinary people shouldn't be squeezed by artificial scarcity creating insane rents.
  • Here's a short, two-minute video where Thiel is basically just spreading the Good News about Henry George, wherein he says that housing markets in anglosphere countries are a NIMBY catastrophe which has been "a massive hit to the lower-middle class and to young people".

Thiel's georgism ties into some broader ideas about a broken "inter-generational compact", whereby the boomer generation has unjustly stolen from younger generations via housing scarcity pushing up rents, via ever-growing medicare / social-security spending and growing government debt, via shutting down technological progress in favor of safetyism, via a "corrupt" higher-education system that charges ever-higher tuition without providing good enough value for money, and various other means.

The cynical interpretation of this is that it's just a piece of his overall project to "make the world safe for capitalism", which in turn is part of his overall selfish motivation:  He realizes that young people are turning socialist because the capitalist system seems broken to them.  It seems broken to them, not because ALL of capitalism is actually corrupt, but specifically because they are getting unjustly scammed by NIMBYism.  So he figures that to save capitalism from being overthrown by angry millennials voting for Bernie, we need to make America YIMBY so that the system finally works for young people and they have a stake in the system.  (This is broadly correct analysis IMO.)  Somewhere I remember Thiel explicitly explaining this (ie, saying "we need to repair the intergenerational compact so all these young people stop turning socialist"), but unfortunately I don't remember where he said this so I don't have a link.

So you could say, "Aha!  It's really just selfishness all the way down, the guy is basically voldemort."  But, idk... altruistically trying to save young people from the scourge of high housing prices seems like going pretty far out of your way if your motivations are entirely selfish.  It seems much more straightforwardly motivated by caring about justice and about individual freedom, and wanting to create a utopian world of maximally meritocratic, dynamic capitalism rather than a world of stagnant rent-seeking that crushes individual human agency. 

Thiel seems to believe that the status-quo "international community" of liberal western nations (as embodied by the likes of Obama, Angela Merkel, etc) is currently doomed to slowly slide into some kind of stagnant, inescapable, communistic, one-world-government dystopia.

Personally, I very strongly disagree with Thiel that this is inevitable or even likely (although I see where he's coming from insofar as IMO this is at least a possibility worth worrying about).  Consequently, I think the implied neoreactionary strategy (not sure if this is really Thiel's strategy since obviously he wouldn't just admit it) -- something like "have somebody like JD Vance or Elon Musk coup the government, then roll the dice and hope that you end up getting a semi-benevolent libertarian dictatorship that eventually matures into a competent normal government, like Singapore or Chile, instead of ending up getting a catastrophic outcome like Nazi Germany or North Korea or a devastating civil war" -- is an incredibly stupid strategy that is likely to go extremely wrong.

I also agree with you that Christianity is obviously false and thus reflects poorly on people who sincerely believe it.  (Although I think Ben's post exaggerates the degree to which Thiel is taking Christian ideas literally, since he certainly doesn't seem to follow official doctrine on lots of stuff.)  Thiel's weird reasoning style that he brings not just to Christianity but to everything (very nonlinear, heavy on metaphors and analogies, not interested in technical details) is certainly not an exemplar of rationalist virtue.  (I think it's more like... heavily optimized for trying to come up with a different perspective than everyone else, which MIGHT be right, or might at least have something to it.  Especially on the very biggest questions where, he presumably believes, bias is the strongest and cutting through groupthink is the most difficult.  Versus normal rationalist-style thinking is optimized for just, you know, being actually fully correct the highest % of the time, which involves much more careful technical reasoning, lots of hive-mind-style "deferring" to the analysis of other smart people, etc)

Agreed that it is weird that a guy who seems to care so much about influencing world events (politics, technology, etc) has given away such a small percentage of his fortune as philanthropic + political donations.

But I would note that since Thiel's interests are less altruistic and more tech-focused, a bigger part of his influencing-the-world portfolio can happen via investing in the kinds of companies and technologies he wants to create, or simply paying them for services.  Some prominent examples of this strategy are founding Paypal (which was originally going to try and be a kind of libertarian proto-crypto alternate currency, before they realized that wasn't possible), founding Palantir (allegedly to help defend western values against both terrorism and civil-rights infringement) and funding Anduril (presumably to help defend western values against a rising China).  A funnier example is his misadventures trying to consume the blood of the youth in a dark gamble for escape from death, via blood transfusions from a company called Ambrosia.  Thiel probably never needed to "donate" to any of these companies.

(But even then, yeah, it does seem a little too miserly...)
