Jackson Wagner

Scriptwriter for RationalAnimations @ https://youtube.com/@RationalAnimations
3771 karma · Joined · Working (6-15 years) · Fort Collins, CO, USA

Bio

Scriptwriter for RationalAnimations!  Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc.  Also a big fan of EA / rationalist fiction!

Comments (370)

To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):

There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), and beneficial policies in third-world countries like salt iodization or lead paint elimination.

Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think-tanks). This often includes enthusiasm for denser ("YIMBY") housing construction, reforming how science funding and academia work in order to speed up scientific progress (such as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".

Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).

One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many of them I've never heard any particular standout charity recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.

I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of some 10,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'll help convert/conquer all the civilizations of the Middle East and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

People also talked about "astronomical waste" (per the Nick Bostrom paper) -- the idea that we should race to colonize the galaxy as quickly as possible because we're losing literally a couple of galaxies every second we delay.  (But everyone seemed to agree that this wasn't practical -- racing to colonize the galaxy soonest would have all kinds of bad consequences that would cause the whole thing to backfire, etc.)

People since long before EA existed have been concerned about environmentalist causes like preventing species extinctions, based on a kind of emotional proto-longtermist feeling that "extinction is forever" and it isn't right that humanity, for its short-term benefit, should cause irreversible losses to the natural world.  (Similar "extinction is forever" thinking applies to the way that genocide -- essentially seeking the extinction of a cultural / religious / racial / etc. group -- is considered a uniquely terrible horror, worse than just killing an equal number of randomly-selected people.)

A lot of "improving institutional decision-making" style interventions make more and more sense as timelines get longer (since the improved institutions and better decisions have more time to snowball into better outcomes).

  • With your FTX thought experiment, the population being defrauded (mostly rich-world investors) is different from the population being helped (people in poor countries), so defrauding the investors might be worthwhile in a utilitarian sense (the poor people are helped more than the investors are harmed), but it certainly isn't in the investors' collective interest to be defrauded!!  (Unless you think the investors would ultimately profit more by being defrauded and seeing higher third-world economic growth, than by not being defrauded.  But this seems very unlikely & also not what you intended.)
    • I might be in favor of this thought experiment if the group of people being stolen from was much larger -- eg, the entire US tax base, having their money taken through taxes and redistributed overseas through USAID to programs like PEPFAR... or ideally the entire rich world including Europe, Japan, middle-eastern petro-states, etc.  The point being that it seems more ethical to me to justify coercion using a more natural grouping like the entire world population, such that the argument goes "it's in the collective benefit of the average human for richer people to have some of their money transferred to poorer people".  Versus something about "it's in the collective benefit of all the world's poor people plus a couple of FTX investors, to take everything the FTX investors own and distribute it among the poor people" seems like a messier standard that's much more ripe for abuse (since you could always justify taking away anything from practically anyone, by putting them as the sole relatively-well-off member of a gerrymandered group of mostly extremely needy people).
      • It also seems important that taxes for international aid are taken in a transparent way (according to preexisting laws, passed by a democratic government, that anyone can read) that people at least have some vague ability to give democratic feedback on (ie by voting), rather than being done randomly by FTX's CEO without even being announced publicly (that he was taking their money) until it was a fait accompli.
    • Versus I'm saying that various forms of conscription / nationalization / preventing-people-and-capital-from-fleeing (ideally better forms rather than worse forms) seem morally justified for a relatively-natural group (ie all the people living in a country that is being invaded) to enforce, when it is in the selfish collective interest of the people in that group.
    • huw said "Conscription in particular seems really bad... if it’s a defensive war then defending your country should be self-evidently valuable to enough people that you wouldn’t need it."
    • I'm saying that huw is underrating the coordination problem / bank-run effect.  Rather than just let individuals freely choose whether to support the war effort (which might lead the country to quickly collapse even if most people would prefer that the country stand and fight), I think that in an ideal situation: 
      1. people should have freedom of speech to argue for and against different courses of action -- some people saying we should surrender because the costs of fighting would be too high and occupation won't be so bad, others arguing the opposite. (This often doesn't happen in practice -- places like Ukraine will ban Russia-friendly media, governments like the USA in WW2 will run pro-war support-the-troops propaganda and ban opposing messages, etc.  I think this is where a lot of the badness of even defensive war comes from -- people are too quick to assume that invaders will be infinitely terrible, that surrender is unthinkable, etc.)
      2. then people should basically get to occasionally vote on whether to keep fighting the war or not, what liberties to infringe upon versus not, etc (you don't necessarily need to vote right at the start of the war, since in a democracy there's a preexisting social contract including stuff like "if there's a war, some of you guys are getting drafted, here's how it works, by living here as a citizen you accept these terms and conditions")
      IMO, under those conditions (and as long as the burdens / infringements-of-liberty of the war are reasonably equitably shared throughout society, not like people voting "let's send all the ethnic-minority people to fight while we stay home"), it is ethically justifiable to do quite a lot of temporarily curtailing individual liberties in the name of collective defense.
    • Back to finance analogy: sometimes non-fraudulent banks and investment funds do temporarily restrict withdrawals, to prevent bank-runs during a crisis.  Similarly, stock exchanges implement "circuit-breakers" that suspend trading, effectively freezing everyone's money and preventing them from selling their stock, when markets crash very quickly.  These methods are certainly coercive, and they don't always even work well in practice, but I think the reason they're used is because many people recognize that they do a better-than-the-alternative job of looking out for investors' collective interests.
  • This isn't part of your thought experiment, but in the real world, even if FTX had spent a much higher % of their ill-gotten treasure on altruistic endeavors, the whole thing probably backfired in the end due to reputational damage (ie, the reputational damage to the growth of the EA movement hurt the world much more than the FTX money donated in 2020 - 2022 helped).
    • And in general this is true of unethical / illegal / coercive actions -- it might seem like a great idea to earn some extra cash on the side by beating up kids for their lunch money, but actually the direct benefit of stealing the money will be overridden by the second-order effect of your getting arrested, fined, thrown in jail, etc.
    • But my impression is that most defensive wars don't backfire in this way??  Ukraine or Taiwan might be making an ethical or political mistake if they decide to put up more of a fight by resisting an invader, but it's not like conscripting more people to send to the front is going to paradoxically result in LESS of a fight being put up!  Nations seizing resources & conscripting people in order to fight harder generally DOES translate straightforwardly into fighting harder.  (Except on the very rare occasion when people get sufficiently fed up that they revolt in favor of a more surrender-minded government, like Russia in 1918 or Paris in 1871.)
  • To be clear, I am not saying that conscription is always justified or that "it's solving a coordination problem" is a knockdown argument in all cases.  (If I believed this, then I would probably be in favor of some kind of extreme communist-style expropriation and redistribution of economic resources, like declaring that the entire nation is switching to 100% Georgist land value taxes right now, with no phase-in period and no compensating people for their fallen property values.  IRL I think this would be wrong, even though I'm a huge fan of more moderate forms of Georgism.)  But I think it's an important argument that might tip the balance in many cases.

 

Finally, to be clear, I totally agree with you that conscription is a very intense infringement on individual human liberty!  I'm just saying that sometimes, if a society is stuck between a rock and a hard place, infringements on liberty can be ethically justifiable IMO.  (Ideally, I'd like it if countries, even under dire circumstances, would try to pay their soldiers at least something reasonably close to the "free-market wage", ie the salary that would get them to willingly volunteer.  If this requires extremely high taxes on the rest of the populace, so be it!  If the citizens hate taxes so much, then they can go fight in the war and get paid instead of paying the taxes!  And thereby a fair equilibrium can be determined, whereby the burden of warfighting is shared equally between citizens & soldiers.  But my guess is that most ordinary people in a real-world scenario would probably vote for traditional conscription, rather than embracing my libertarian burden-sharing scheme, and I think their democratic choice is also worthy of respect even if it's not morally optimal in my view.)
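A minimal back-of-the-envelope sketch of that burden-sharing scheme, with entirely made-up numbers (the population size, the 10% fighting share, and the market-clearing wage below are all hypothetical assumptions, not estimates of anything real):

```python
# Entirely hypothetical numbers, just to illustrate the burden-sharing idea above.
population = 10_000_000
share_fighting = 0.10        # assumed fraction of citizens who serve at the front
market_wage = 150_000        # assumed annual wage high enough to attract volunteers

soldiers = population * share_fighting
taxpayers = population - soldiers
annual_cost = soldiers * market_wage
extra_tax_per_taxpayer = annual_cost / taxpayers

print(f"Extra tax per non-fighting citizen: {extra_tax_per_taxpayer:,.0f} per year")
# -> roughly 16,667 per year under these made-up numbers
```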

Also agreed that societies in general seem a little too raring-to-go to get into fights, and likely make irrational decisions on this basis, etc.  It would be great if everyone in the world could chill out on their hawkishness by like 50% or more... unfortunately there are probably weird adversarial dynamics where you have to act freakishly tough & hawkish in order to create credible deterrence, so it's not obvious that individual countries should "unilaterally disarm" by doving-out (although over the long arc of history, democracies have generally sort of done this, seemingly to their great benefit).  But to the extent anybody can come up with some way to make the whole world marginally less belligerent, that would obviously be a huge win IMO.

But there's clearly a coordination problem around defense that conscription is a (brute) solution to.

Suppose my country is attacked by a tyrannical warmonger, and to hold off the invaders we need 10% of the population to go fight (and some of them will die!) in miserable trench warfare conditions.  The rest need to work on the homefront, keeping the economy running, making munitions, etc.  Personally I'd rather work on the homefront (or just flee the country, perhaps)!  But if everyone does that, nobody will head to the trenches, the country will quickly fold, and the warmongering invader will just roll right on to the next country (which will similarly fold)!

It seems almost like a "run on the bank" dynamic -- it might be in everyone's collective interests to put up a fight, but it's in everyone's individual interests to simply flee.  So, absent some more elegant galaxy-brained solution (assurance contracts, prediction markets, etc??) maybe the government should defend the collective interests of society by stepping in to prevent people from "running on the bank" by fleeing the country / dodging the draft.

(If the country being invaded is democratic and holds elections during wartime, this decision would even have collective approval from citizens, since they'd regularly vote on whether to continue their defensive war or change to a government more willing to surrender to the invaders.)
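Here's a minimal toy payoff model of that bank-run dynamic, with all numbers (the 10% threshold, the benefit of successful defense, the personal cost of fighting) chosen purely for illustration:

```python
# Toy payoff model of the coordination problem described above.
# All numbers are made up purely for illustration.

FIGHT, FLEE = "fight", "flee"

def payoff(my_choice, share_fighting, threshold=0.10):
    """One citizen's payoff, given the fraction of the population that fights."""
    defended = share_fighting >= threshold
    benefit = 10 if defended else 0          # everyone gains if the country holds
    cost = 6 if my_choice == FIGHT else 0    # personal risk borne only by fighters
    return benefit - cost

# Whatever everyone else does, an individual is better off fleeing...
print(payoff(FIGHT, 0.15), payoff(FLEE, 0.15))   # 4 vs 10 (defense succeeds anyway)
print(payoff(FIGHT, 0.00), payoff(FLEE, 0.00))   # -6 vs 0 (defense fails anyway)

# ...yet "everyone flees" (payoff 0 each) is collectively worse than
# "everyone fights" (payoff 4 each) -- the gap that conscription, assurance
# contracts, etc. are trying to close.
print(payoff(FLEE, 0.00), payoff(FIGHT, 1.00))   # 0 vs 4
```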

Of course there are better and worse forms of conscription: paying soldiers enough that you don't need conscription is better than paying them only a little (although in practice high pay might strain the finances of an invaded country), which is better than not paying them at all.

The OP seems to be viewing things entirely from the perspective of individual rights and liberties, without proposing how else we might solve the coordination problem of providing for collective defense.

Eg, by his own logic, OP should surely agree that taxes are theft, that any government funded by such flagrantly immoral confiscation is completely illegitimate, and that anarcho-capitalism is the only ethically acceptable form of human social relations.  Yet I suspect the OP does not believe this, even though the analogy to conscription seems reasonably strong (albeit not exact).

Wow, sounds like a really fun format to have different philosophers all come and pitch their philosophy as the best approach to life!  I'd love to take a class like that.

Left-libertarian EA here -- I'll always upvote posts along the lines of "I used to be socialist, but now have seen the light"!

FYI, if you have not yet heard of "Georgism" (see this series of blog posts on Astral Codex Ten), you might be in for a really fun time!  It's a fascinating idea that aims to reform capitalism by reducing the amount of rent-seeking in the economy, thus making society fairer and more meritocratic (because we are doing a better job of rewarding real work, not just rewarding people who happen to be squatting on valuable assets) while also boosting economic dynamism (by directing investment towards building things and putting land to its most productive use, rather than just bidding up the price of land).  Check it out; it might scratch an itch for "something like socialism, but that might actually work".

A few other weird optimal-governance schemes that have socialist-like egalitarian aims but are actually (or at least partially) validated by our modern understanding of economics:

  • using prediction markets to inform institutional decision-making (see this entertaining video explainer), and the wider field of wondering if there are any good ways to improve institutions' decisions
  • using quadratic funding to optimally* fund public goods without relying on governments or central planning (*in theory, given certain assumptions; real life is more complicated, etc -- see the toy calculation after this list)
  • Pigouvian taxes (like taxes on cigarettes or carbon emissions).  Like Georgist land-value taxes, these attempt to raise funds (for providing public goods either through government services or perhaps quadratic funding) in a way that actually helps the economy (by properly pricing negative externalities) rather than disincentivizing work or investment.
  • various methods of trying to improve democratic mechanisms to allow people to give more useful, considered input to government processes -- approval voting, sortition / citizen's assemblies, etc
  • conversation-mapping / consensus-building algorithms like pol.is & community notes
  • not exactly optimal governance, but this animated video explainer lays out GiveDirectly's RCT-backed vision of how it's actually pretty plausible that we could solve extreme poverty by just sending a ton of money to the poorest countries for a few years, which would probably actually work because 1. it turns out that most poor countries have a ton of "slack" in their economy (as if they're in an economic depression all the time), so flooding them with stimulus-style cash mostly boosts employment and activity rather than just causing inflation, and 2. after just a few years, you'll get enough "capital accumulation" (farmers buying tractors, etc) that we can taper off the payments and the countries won't fall back into extreme poverty + economic depression
  • the dream (perhaps best articulated by Dario Amodei in sections 2, 3, and 4 of his essay "Machines of Loving Grace", but also frequently touched on by Carl Shulman) of future AI assistants that improve the world by actually making people saner and wiser, thereby making societies better able to coordinate and make win-win deals between different groups.
  • the concern (articulated in its negative form at https://gradual-disempowerment.ai/, and in its positive form at Sam Altman's essay "Moore's Law for Everything") that some socialist-style ideas (like redistributing control of capital and providing UBI) might have to come back in style in a big way, if AI radically alters humanity's economic situation such that the process of normal capitalism starts becoming increasingly unaligned from human flourishing.
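On the quadratic funding bullet above, here is a minimal sketch of the textbook matching formula (deliberately simplified; real-world rounds cap the match to a fixed pool and add identity checks against sybil attacks):

```python
from math import sqrt

def quadratic_funding(contributions):
    """Textbook quadratic-funding allocation for a single project.

    Total funding = (sum of square roots of individual contributions)^2;
    the matching pool pays the difference between that and the raw donations.
    (Real rounds cap the match to a fixed pool, add identity checks, etc.)
    """
    raw = sum(contributions)
    total = sum(sqrt(c) for c in contributions) ** 2
    return {"raw": raw, "total": total, "match": total - raw}

# Many small donors attract a much bigger match than one large donor giving the same amount:
print(quadratic_funding([1.0] * 100))   # raw 100, total 10000, match 9900
print(quadratic_funding([100.0]))       # raw 100, total 100, match 0
```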