Tax Geek
199 karma · Joined

Comments (25)

Thanks for the post. I generally agree with your arguments but thought I should respond as someone currently doing research on a non-alignment problem. While I want a global pause, I have no idea what I personally can do to help achieve that. Whereas I at least have some idea of actions I can take that might help reduce the "massive increase in inequality/power concentration" problem. 

"Solve philosophy" is not the same thing as "implement the correct philosophy", and we need the AI to bridge that gap. There is a near-consensus among moral philosophers that factory farming is wrong, yet it persists.

This is a great point and I just wanted to call it out. I do think research is most likely to make a difference when it is produced with some thought about implementation - i.e. who the relevant audience is, how to get it to them, whether the actions you are recommending they take are actually within their power, etc. 

Answer by Tax Geek

Yes, I think "AI as normal technology" is probably a misnomer - or at least very liable to be misinterpreted. Perhaps this later post by the authors is helpful - they clarify they don't mean "mundane or predictable" when they say "normal".

But I'm not sure a world where human CEOs defer a lot of decisions, including high-level strategy, to AI requires something that is approximately AGI. Couldn't we also see this happen in a world with very narrow but intelligent "Tool AI" systems? In other words, CEOs could be deferring a lot of decisions "to AI", but to many different AI systems, each of which has relatively narrow competencies. This might depend on your view of how narrow or general a skill "high-level strategy" is. 

From the Asterisk interview you linked, it doesn't sound like Arvind is expecting AI to remain narrow and tool-like forever - he just thinks it will take longer to reach AGI than people expect, and that AGI will only arrive after AIs have been used extensively in the real world. He admits he would significantly change his evaluation if we saw a fairly general-purpose personal assistant work out of the box in 2025-26.

I sort of see where both criticisms are coming from. The lowest-common-denominator, community-related posts get the highest engagement (including from people like the OP) because they require little context. The high-context technical stuff is much harder to parse, and necessarily has a smaller audience, so gets less engagement (perhaps with the exception of AI safety, which is currently experiencing a "boom").

There will naturally be fewer technical posts in the areas I'm interested in and, like Michael_PJ, I have no desire to read long technical posts in areas I'm not interested in, so I end up engaging disproportionately with community-related posts. 

Fiddling with the forum filters helps - I personally have downweighted posts tagged "Community" and "Building effective altruism" - but I suspect few people do this.

Hi Dushan. I cover this at a high level under the "But impacts will be uneven" heading. I agree with you that countries in the supply chain will benefit and others less so.

Thank you! Yes, totally fair point. I am not trained in development economics so was very uncertain about this post, and expected there to be large differences between countries that I wouldn't pick up. It's disappointing to hear that the development econ mainstream has not been engaging with this topic.  

I had in mind the lower-income countries (mostly in Africa) when writing most of this. Your point about how, without TAI, these countries might be able to develop export industries and climb the development ladder is an interesting one. I had thought of that briefly, but was unsure how likely that was to happen, given we haven't seen any African country do it yet (to my knowledge). But perhaps it's just something that takes time and can only really happen after the middle-income countries become rich. 

AI agrees with you on cars and yachts, but says the majority of TVs, gaming rigs, and bikes consumed in HICs are made in LMICs. 

Fair enough. I think most of these are made in Asia and I do expect Asia (particularly China) to fare better than most other LMICs or developing countries. 

I should note that "transfers" is not limited to unemployment benefits. For OECD governments, the biggest class of transfer by far is currently public pensions. 

There are all sorts of good reasons why the elderly should be happy with lower public pensions (elderly poverty rates tend to be lower than those for children or working-age adults, and life expectancy has increased far more than retirement ages have). But that still doesn't happen, for political economy reasons. Perhaps that will change with TAI - the elderly tend to own more capital, so they should see massive returns in general. Maybe they'll be happy with <10x pension increases even as wages increase 10x. I just wouldn't take that for granted.

Agree 100% that governments would need to tap into capital gains somehow, or capital more broadly. I also like that capital dividend fund idea - thanks for sharing.

Thanks for the post! I share many of the concerns you raise, particularly your conclusion that the benefits of AI will not be distributed equitably through natural market mechanisms.

There will still exist a sizable gap between the development of these systems and their diffusion into the broader economy, but this gap will be on the order of years, not decades. 

I am curious about why you think this. And by "the broader economy", are you talking about the global economy or only the US? I don't have any firm views on the speed of diffusion, but I find decades plausible, at least when it comes to the global economy - especially if diffusion involves widespread deployment of robotics.

Thanks. I'm a bit sceptical of that 10x estimate and will have a closer look at that paper.

However, even assuming wages for non-automatable roles go up ~10x before full automation, that won't help governments if their costs rise more than 10x. In developed countries, government costs mostly consist of social protection transfers and wages themselves. In the case where wages rise 10x, transfers could rise more than 10x if (1) transfers are linked to wages (which they often are); and/or (2) the share of people receiving transfers rises (because unemployment rises).
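To make that arithmetic concrete, here is a purely illustrative sketch - every number other than the ~10x wage multiple is made up (the recipient shares are hypothetical, not estimates):

```python
# Purely illustrative arithmetic, not a forecast. Only the ~10x wage
# multiple comes from the discussion above; the shares are made up.
wage_multiple = 10             # assumed rise in wages for non-automatable roles
recipient_share_before = 0.20  # hypothetical: 20% of people receive transfers today
recipient_share_after = 0.30   # hypothetical: rising unemployment pushes this to 30%

# If benefits are indexed to wages, spending per recipient rises ~10x too;
# if the recipient share also rises, total transfer spending rises by more.
transfer_multiple = wage_multiple * (recipient_share_after / recipient_share_before)
print(f"Wages rise {wage_multiple}x, transfer spending rises {transfer_multiple:.0f}x")
# Wages rise 10x, transfer spending rises 15x
```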

It is possible that transfers could be de-linked from wages somewhat, but political economy can make that difficult and, to the extent that people's welfare depends on the parts of the economy that are not rapidly growing (e.g. healthcare, housing, childcare), that could have negative welfare impacts. 

So I'm not saying governments are doomed - as I point out, TAI should be creating value and the challenge is ultimately one of distribution. But governments still have to worry about revenue, because it's not the size of GDP that matters so much as the composition of government income and spending. 
