
This week, we are discussing the statement: “If AGI goes well for humans, it’ll go well for animals”. The announcement post, with a bit more info and a reading list, is here.

What is this thread for?

General discussions about and reactions to the debate statement. 

Some of the comments on this thread will be populated directly from the debate banner on the homepage — these will mostly be people explaining why they voted the way they did.

 However, you’re also welcome to comment on here directly, with any considerations you'd like to share, or questions you'd like to ask. 

How should I understand the debate statement?

Again, our statement is: “If AGI goes well for humans, it’ll go well for animals”

The statement will ultimately mean whatever people interpret it to mean. The key is to explain how you are interpreting the statement in the comment that you attach to your vote. However, I can share a few notes which might pre-empt your questions:

  • AGI: Artificial General Intelligence. Exactly what this is, and how transformative it is likely to be for the world economy and our ways of life, is likely to be a crux in this debate. As such, I won't be offering a definition.
  • Goes well: Likewise, what it means for AGI to go well is likely to be a live element of the discussion. For example, 'going well' might mean humans are still in control of AI tools, or it might mean that humans are replaced by more beneficent machines. I'll leave this up to you.
  • Animals: I'm talking about non-human animals. I'm specifically naming animals rather than 'other minds' to signal that this conversation isn't primarily about digital minds.

Message me or comment in the thread with me tagged if you have any questions. 


 

Comments (84)

I think Aidan Kankyoku's framing of the debate question from his post is very helpful:

"Even if we think AI will be the decisive factor determining future animal welfare, should we bother with animal-specific interventions in AI? Or can we trust the usual human-centric alignment efforts to take care of animals?"

I'd love to see more discussion on the question considered in this way.

Jim Buhler
*16
6
0
0% agree

I think there are plenty of crucial sign-flipping considerations pointing both ways (sec. 1 of my post), and that our takes certainly fail to account for some of them, in ways that likely make these takes irrelevant. 

And even if someone's evaluation somehow does not omit a single crucial consideration, they have to make opaque judgment calls on how to weigh up the conflicting pieces of (theoretical and empirical) evidence. I see little reason to believe such judgment calls would do better than chance.

Clarification on what my "0% Agree" means: I confidently disagree that we should believe it'd go well for animals (sec. 1 of my post), but I don't think we should believe the opposite either. I think our cause prio should not rely on any assumption on this question (sec. 2 of my post).

It's true that technological progress so far has been largely good for humans and bad for animals (due to factory farms, but the effects on wild animals complicate this a lot).

But I also think human values towards animals have improved compared to how they were historically, and e.g., house and work animals are likely treated better on the whole now than in the past. So I think there's been some moral progress, but this improvement has been dominated by technology simultaneously making animal food production much more cost-effective (and, as a side-effect, more suffering-producing).

I think eventually technological progress will make it cheaper to act on animal-friendly values, because I'm guessing the taste/price/convenience/friction of animal meat is hitting diminishing returns, while there's much more room for improvement with non-animal-based foods. So I think there will be a crossover point at some point, sort of the way it was with the Enlightenment and the industrial revolution, where the economic effects of technology first made many people probably worse off, but eventually the better values won out and people were left better off on the whole.

I separately also think AGI and especially ASI, if aligned with human values, can wisely advise us on good courses of actions and help improve our values and promote good values, which would also help. ASI could also help a lot with wild animal welfare, where we are currently quite at a loss.

The final debate week vote:

Most animals are wild animals, so the answer to this question should focus on them. It seems to me that the answer largely depends on how we understand "goes well for humans", and what we expect the counterfactual to be.

So what are the possible scenarios?

  1. AGI empowers humans to make their own decisions, and to make better decisions. I expect this would greatly accelerate progress toward helping wild animals. This would be great.
  2. AGI replaces human decision-making. It then either:
    1. Reasons further from a starting point of human values, removing biases and in
... (read more)
9
Aidan Kankyoku
I can imagine a future where most animals are farmed animals. I'm not saying it's particularly likely, but if humans spread to other planets, I think we're more likely to take factory farming with us than take nature with us. Farmed animals should be part of this convo imo.
2
Tristan Katz
Copying my response from your other comment: Does that mean you think it's likely that we will spread to other planets without spreading ecosystems? If we spread ecosystems it seems likely that we would also spread at least some wild animals. And I think we have good reasons to do so - to promote good atmospheres and other ecosystem services.  I feel pretty skeptical that humans capable of going to other galaxies would not have realized the inefficiencies of meat and would still not have made competitive substitutes. 
3
Amrit (recovered acc.)
This. I do not see off-world animal farming as a real issue. It's such an energy and resource inefficient way of making food. Indeed, a prerequisite or a proxy indicator for Earth-independent sustainable civilization seems to be extremely good efficiency in food production. You can't possibly be on Mars or make an interstellar ship and still have a thousand cows in it for making some cheese.
Peter Wildeford
7
1
1
30% disagree

There are two ways to interpret this claim.

One is to interpret this claim as causal -- "the things that cause AGI to go well for humans also cause AGI to go well for animals".

In general, my concern here is something like "AGI gets aligned primarily based on 2025-era human values by imitation learning and doesn't magically converge on my ideal philosophy". I think what happens to animals after that would be fairly contingent on human moral evolution.

Another is to interpret the claim as evidentiary -- "what happens to animals conditional only on things going... (read more)

2
Jim Buhler
Are you setting aside wild animals?
2
Peter Wildeford
No
2
Jim Buhler
Oh good, I have no objection then. Well played.
Asha (Ramakumar) Amdur
6
0
0
1
100% disagree ➔ 0% agree

I actually would agree with the inverse of this statement:

"If AI goes well for animals, it'll go well for humans"


We are interdependent beings. And yet survival--particularly the contemporary late-capitalist understanding of survival--is treated as zero-sum. This is common amongst social movements: To view success and justice for one group as coming at the expense of another. And while the reality may be that in one snapshot of time, it looks that one is benefitting more than another, if we zoom out and understand how things undulate, it becomes clear that ... (read more)

MaxReith
6
0
0
10% disagree

 I think this depends on whether farmed or wild animal welfare matters more. I don't have an answer, so let's treat it as 50/50. 

  1. If wild animals matter more, what could happen?  On the upside, AGI might enable us to help wild animals.  On the downside, it might lead to humans creating biospheres on other planets, which would increase the suffering of wild animals by many orders of magnitude.
  2. If farmed animals matter more, the upside could be that AGI enables us to substitute farmed animals completely (cultivated meat, etc.). The downside
... (read more)
7
Jim Buhler
Nitpick, but it seems unfair to consider this an upside rather than the mere absence of a downside, since the relevant counterfactual scenario in expectation (absent AI safety work) is a misaligned AI that takes over and probably ends animal farming as it kills or disempowers humans. AI safety cannot take the credit for a potential future reduction or end of farmed animal suffering if it preserves humanity, without which animal farming would not exist to begin with.

I'm very uncertain. My main crux is something like: What is the most likely 'AGI'? 

  • If we are talking increased productivity/ efficiency, I'd expect that things get worse for animals for a while, and then get better as incentives continue to push us to non-animal agriculture.
  • If we are thinking of an intelligence explosion/ uncontrollable machine god, then my expectations count for nought, and my vote is a settled 0%
  • If we are thinking of a controllable intelligence explosion/ machine god, then animals might be in the worst position - since revealed human preferences don't seem to be great for animals so far. 
OscarD🔸
3
0
0
70% agree

A world of intelligence too cheap to meter is more teleological: constraints and tradeoffs that exist now are washed away, and what matters is mainly what people ultimately value. And more people ultimately value animal welfare than animal diswelfare. The main game is wild animals, and the ~only way for things to go well for them is if we build an ASI that can eventually reshape the natural world to be less suffering filled. I think it is very unlikely farmed animal suffering is exported to other galaxies in a major way, because at technological maturity animals will not be the most efficient way to meet human material needs.

5
David Mathers🔸
What about the risk we spread wild animal suffering to other planets? 
2
OscarD🔸
Good point, that seems like a big risk! I expect the fraction of sentience that is (post)human or digital to be quite high, especially compared to today, in the intergalactic future. But improving values wrt wild animals seems important.

Reminder that you can kick off sub-debates within this discussion thread. Just highlight some text in a comment and then click 'Insert poll'.

This'd be especially useful as a way to find (and discuss) the cruxes that are driving the different views on this debate. 

Kevin Xia 🔸
4
1
0
70% disagree

Very uncertain on this one, mainly a matter of "I just don't see why it would" and a strong default to "technological progress has largely been bad for animals."

I do think the "better" AI goes for humans (or broadly, the more "extreme" the outcome is), the more likely it is that factory farming would basically disappear incidentally.

However, I think a large range of possible futures where AI goes well for humans are (comparatively) normal scenarios, in which I just don't have any strong reason to believe that they would go well for animals.

Constance Li
3
0
0
50% disagree

depends a lot on how much control AIs end up having, their values/reasoning, and which (if any) humans end up getting power

NickLaing
4
1
0
0% agree

I think the answer to this question is too many branches down a tree of possible futures to meaningfully predict. What happens at multiple branch points could swing this either way. If I have time I'll share more about what I mean.

The poll defines "probably" as 70% chance. In this post, I wrote that I thought there was a ~70% chance that AGI would go well for animals.

I guess that means I believe there's a 50% chance that there's a 70% chance that AI goes well for animals? So I should vote in the exact middle of the spectrum?

weeatquince🔸
2
0
0
30% disagree

On priors technology has not been good for animals. So weakly lean against. But could go either way.

My position statement

  • As a suffering-focused ethicist who generally rejects moral aggregation across individuals (I am most sympathetic to painism), I have a higher bar for “AGI going well for humans” than many others do; it’s not clear to me that previous technological advances went well for humans
    • Agricultural revolution’s “luxury trap”: going from hunting-gathering to farming allowed humans to consolidate unprecedented wealth and power, but at the cost of the wellbeing/welfare/rights of very many humans
    • Perhaps similar arguments can be made for
... (read more)
Adam.Kruger 🔸
2
1
0
10% disagree

Originally, I voted that I slightly agreed, assuming AGI would accelerate developments like cultured meat. Upon further reflection, I realized that advances in technology over the past couple of centuries have been overwhelmingly good for humans but arguably devastating for animals, particularly through factory farming. That makes me lean toward slight disagreement: more powerful technology hasn't automatically meant better outcomes for animals, and there's no guarantee AGI will be the exception.

Here’s how I’m thinking about this:

From the perspective of non-human animals, humanity looks a lot like an unaligned superintelligence. We closely resemble the "paperclip maximizer" thought experiment, where the "paperclips" are narrow human goals. Over millennia, we’ve become incredibly good at optimizing for those goals, but in the process we systematically exclude other sentient beings from the moral circle and override their most basic interests for benefits that are often trivial.

Given this reality, without a fundamental shift in our ethics, superin... (read more)

If AI is successfully aligned to "human values", that would include animal agriculture and conservationist ideology, perpetuating and potentially expanding nonhuman animal suffering to other planets even while humans thrive.

Whitney Peng
1
0
0
30% agree

My definition of going well for humans: for the existing population, there would be a re-allocation of resources. Food and water would be rationally distributed based on basic needs, and once basic needs are all met, based on wealth (or the ability to generate progress for the society). 

 

With this as a premise, I think 1) there would be no need for factory farming, and 2) the welfare of all, including animals, would be rebalanced. 

 

For point 2), I very much think the lack of welfare of animals is reflective of the lack of welfare in humans themselves

Panashe Zowa
1
0
0
20% ➔ 40% disagree

The epoch of superintelligence will not result in any meaningful improvements in animal welfare. Previous epochs of humanity, marked by transformative advancements such as the industrial and digital revolutions, have failed to yield meaningful improvements in animal welfare. If anything, these shifts created novel pathways for animal exploitation or rendered existing models, such as animal husbandry, vastly more lethal and efficient through the rise and continued development of factory farming. Unless there is a massive societal-level dietary shift, which ... (read more)

I worry I'm too pessimistic in general, but the world economy (and general living standards), have improved significantly over time, and farmed animal welfare seems to be a lot worse. That seems to be evidence to me that amazing technological progress won't be sufficient for animal welfare progress.  

Riccardo Zucco
1
0
0
30% ➔ 50% disagree

I am extremely uncertain on this point. While there is a possibility that an aligned AI could be immensely beneficial for animals, I believe this is an outcome we absolutely cannot take for granted.

Broadly speaking, it is difficult to assess such a scenario without knowing the specific form an 'aligned' AI will take and what a world where humans coexist with an AGI or ASI will actually look like. As some have pointed out, if this AI were to simply 'lock in' current human values indefinitely, it would likely be really bad for animals.

It seems probable, howe... (read more)

Federica Monte
2
0
0
60% disagree

If we take things as they stand at the moment, AGI going well for humans doesn’t translate to AGI going well for animals. There is however a world where AGI has been trained in such a manner that it recognises sentience as its metric for moral consideration, which in turn would result in AGI going well for humans and for animals alike.

Say we define "going well" simply as "better than the status quo",  for a human, "going well" might mean radical life extension or a post-scarcity economy. With the status quo for animals (especially factory farmed anima... (read more)

I'd really like it if AI resulted in amazing plant based or cultured meat, and that the general abundance coming from AI means that people can focus their thinking on morality, not just making their lives go okay. 

BUT, so far, new tech and improved economic conditions have caused farmed animal suffering to get worse.

So I have a big uncertainty, but lean disagree. 
 

Hi! There are no labels on the slider bar, so it's initially unclear which side is agree vs disagree.

2
Sarah Cheng 🔸
Oh no, thanks so much for flagging this! Toby was on holiday today unfortunately, so I've just updated it.
2
NickLaing
Fair call disappearing after dropping the debate slider to avoid the upcoming bedlam...
MaxRa
2
0
0
40% agree

I'm reading "goes well for humans" as including "goes well for human values broadly, accounting for further refinement of human values".

Not by default. I think humans, by and large, don't care about animals enough for this to work.

I don't have a singular directional intuition about this.

  1. I'm not sure AGI isn't already here.
  2. There are some scenarios where AGI liberates us from constraints and others where it enables humans to extend their dominance over animals. Who can say?
  3. In the meantime, AI does not absolve us of helping animals.
  4. Something about this topic creates a semantic stop sign for people whose opinions I otherwise find interesting. So even if the subject is interesting in the abstract, I'm afraid that here and now it sometimes leads to worse discussions.


 

OllieRodriguez
2
0
0
20% ➔ 10% agree

I take "AGI goes well" to imply a wealthy and technologically advanced society. I think that could mean:

- Very cheap and delicious meat alternatives.
- Factory farming waning as it reaches inefficiencies and bottlenecks, not able to compete with the above.
- More demand for higher-welfare options like free-range and local produce.

But it also seems possible that we "lock in" factory farming and scale it further and that AGI adopts speciesist views.

Very uncertain, I don't find myself strongly disagreeing with claims across the spectrum.

Steven Rouk
2
2
0
60% disagree

I'm quite uncertain, but in general I don't think it's been the case that "if X technology goes well for humans, it'll go well for animals". I think in some key cases, it's been the exact opposite, actually—e.g., industrialization leading to the rise of factory farming and killing/causing suffering to many more animals.

However, I also don't think that AGI is going to be quite like most technologies, at least in some ways (and definitely as it goes past AGI to ASI), and so I'm quite uncertain about how "going well for humans" might positively impa... (read more)

Xaq
2
0
0
50% disagree

TL;DR: I don't think there's sufficient evidence to make such a claim.

It could go either way, but because the statement is phrased positively, I disagree. I think it's more likely to improve the conditions of non-human animals than not, because I think it may accelerate lab-grown meat (dairy, eggs, etc.) technology to the point where it becomes cheaper than farmed meat, in which case the conditions of animals will considerably improve. However, if this doesn't occur, AGI could further increase animal farming and efficiency, considerably worsening the condi... (read more)

PabloAMC 🔸
2
0
0
30% disagree

AGI could, in principle, find solutions for the key problems that animals face, but I would argue the main issue is that it won't automatically enlighten humans.

SimonM_
2
0
1
90% disagree

I would estimate my disagreement at roughly 90% to 95%.

Default human values are largely indifferent or actively hostile to the suffering of non-human animals.

Humanity currently oversees massive amounts of animal suffering through factory farming and habitat destruction.

If an AGI were perfectly aligned to make things "go well" for humans, it would likely prioritize human flourishing, economic growth, and resource acquisition. If human preferences do not drastically shift toward minimizing animal suffering, an AGI will have no inherent reason to protect ani... (read more)

Aaron Bergman
2
0
0
40% disagree

Vibes, I have no idea, I hope someone convinces me with good takes

JDBauman
1
0
0
80% agree

Two ways it goes well for animals:

1. As incomes rise globally, initially it's worse for animals because demand for meat rises. But once incomes rise from high to very high, desire for high-welfare standard meat increases and factory farming is eventually outlawed (not everywhere, but almost).

2. Economic development spurred on by AGI leads to further displacement of wild habitats, reducing wild animal suffering.

Aidan Kankyoku
1
0
0
30% disagree

I think it's 70% likely to go well for animals, but that's not enough to obviate the need for animal-specific alignment efforts. Full take: https://forum.effectivealtruism.org/posts/skdp9uB4AoyN2fnuu/animal-welfare-is-just-part-of-ai-alignment-now-and-both

Successful CEV likely to lead to improved outcomes for animals

Slightly leaning toward thinking that moral progress in that area would become so cheap that people would accept it.

Mjreard
1
1
0
60% ➔ 40% agree

Seems like AGI will lead to ASI and ASI will show us more valuable ways to use all the land and matter that currently support animal suffering. The ways we use those probably won't involve animals or suffering at all.

Ligeia
1
0
0
30% agree

ES: not professional, not sure

IMO if AGI goes well for humans, then at least it would have a decent grasp of general ethics, which includes animal welfare. AGI that hasn't got good ethics wouldn't benefit humans; it'd just paperclip around. Since I have a short-ish timeline, I think a somewhat-ethical and empowered AGI will benefit animals more than speciesist HGIs.

Hailey Sherman
1
0
0
10% disagree

I don't think this is guaranteed, but if AI goes well and is used to develop better tasting, healthier, and more cost effective cultivated meat, it will be more likely to be adopted and will reduce reliance on factory farmed animals.

On the other hand, wild animals could be preserved as they are or spread throughout the galaxy, potentially increasing net suffering.

I think AI going well would leave a lot of this up to human choice and is therefore uncertain.

Due to Value Lock-in, TAI poses a time constraint for farmed animal social progress.

I do not expect most issues to be resolved before this time, due to technological limitations, heightened barriers to social change relative to historic movements, and increasing developing world meat consumption. 

If we open this up to wild animals rather than just farmed, net-negative outcomes are much more assured.

Camille
1
0
0
70% agree

"Goes well for humans" (i.e for a very long time) worlds are mostly worlds where AGI is fully theoretically and empirically aligned with a CEV-shaped alignment target, which for me logically requires animal welfare. (I also currently believe those worlds to be implausible because no company seems focused on this)

I struggle to imagine any deliberative or reflective-preference oriented process that does not give the right answer to the animal welfare question. If it doesn't care about non-human animals, then it means animals are not sentient, or that the CEV... (read more)

Puggy
1
0
0
100% agree

The timelines where AGI goes well would probably 10x the resources required to improve animal welfare. It will probably be similar to just "buying shrimp stunners" for the shrimp farmers who are indifferent.

See my post:

We are AGI to animals now, and we may be net negative as a species. See my short post for an intuition pump:

https://forum.effectivealtruism.org/posts/et6tWRzgXHBHRciuN/quick-pig-based-intuition-pump-on-superintelligence-and

Ronak Mehta
1
0
0
100% agree

AGI going well for humans in my mind suggests we all get uplifted as much as we like into some post-scarcity utopia, and if that happens I can't imagine animals getting a different outcome. It may be delayed, it may look different, we may not even have direct control or understanding of it, but it seems implausible that for some reason superintelligence deems humans uniquely important compared to other biological sentience.

jteichma
1
0
0
30% agree

We haven't done so well by animals thus far. That said, I hope that superintelligence will respect all intelligences.

Jo_🔸
1
0
0
20% disagree

(Copied from my Symposium position statement)

If I accept conventional assumptions in EA Animal welfare[1], AGI will be negative for animals in expectation. On the other hand, AGI being good for humans makes it worse for animals in expectation. However, both rogue AGI and human-friendly AGI seem positive for animals in most scenarios: it just happens that the "bad" scenarios seem much worse than the "good" scenario.

Why is that? AGI, whether rogue or human-aligned, may not decide to keep other planets free of biological animals (though it seems like a bigger... (read more)

JessMasterson
1
0
0
90% disagree

So far, much of technological development seems to have gone well for humans - for example, in developed nations, we have never had to do less hard manual labour, or had access to more information. That has not led to an improvement in the quality of non-human animal lives. In fact, we have seen exactly the opposite. AGI is likely to amplify this effect unless we make a significant conscious and coordinated effort to steer it another direction.

William Jones
1
0
0
100% agree

Hedonic utilitarianism: an aligned superintelligence solves metaethics and fills the universe with hedonium.

WobblyWorms
1
0
0
40% disagree

I appreciate there's a lot of nuance in the question but some rough thoughts:

  1. Number of ways AGI goes well for humans and good for animals < number of ways AGI goes well for humans and bad for animals.
  2. Moral expansion or inclusion of animals is not obvious to me to be guaranteed (in near or long term future). And I think there's a lot of people today (eg other cultures and generations) who don't or negligibly value animal welfare.
  3. Short of a AGI utopia with unlimited resources, I think there will be tradeoffs between animals and human considerations where
... (read more)

I expect that factory farming will become even more harmful as a result of AGI

VRehnberg
1
0
0
70% ➔ 60% agree

AGI "goes well" or not is to me an X-risk question. Which makes me read this question as:

> If we survive AGI, are animals likely to be better off than if they were extinct?

To which I answer: Probably yes.

From @Tristan Katz :

Does WAW dwarf FAW in expectation? Or is FAW still important to consider in this discussion?

2
MichaelDickens
Yes. Not necessarily, because S-risks may be more important in expectation (e.g. a malevolent or vindictive ASI tiles the universe with extremely energy-efficient animal-like beings of pure suffering).
2
Jim Buhler
Even granting that the overwhelming majority are wild animals, this doesn't necessarily imply we should focus on them. We have to factor in the welfare difference between the two (welfare ranges and quality of life in practice).
1
lroberts
  Most animals today are wild animals, but for the answer to this question to focus on them, most future animals would also have to be wild. Fwiw my intuition is that most future animals will be wild because it seems more likely that we terraform by seeding ecosystems than that we export energy inefficient factory farming. That said:  a) I feel uncertain about that position. b) The post-AGI future will be pretty weird, and our distinction of wild vs farmed animals probably won't map neatly onto future sentient beings.
1
Aidan Kankyoku
I can imagine a future where most animals are farmed animals. I'm not saying it's particularly likely, but if humans spread to other planets, I think we're more likely to take factory farming with us than take nature with us. Farmed animals should be part of this convo.
4
Tristan Katz
So does that mean you think it's likely that we will spread to other planets without spreading ecosystems? If we spread ecosystems it seems likely that we would also spread at least some wild animals. And I think we have good reasons to do so - to promote good atmospheres and other ecosystem services.  I feel pretty skeptical that humans capable of going to other galaxies would not have realized the inefficiencies of meat and would still not have made competitive substitutes. 

What does "going well" mean? 

It seems plausible that many things could be a lot better, like making factory farming obsolete. Does that mean that animals are no longer experiencing extreme suffering? What is our baseline for animal welfare?

gkcv
1
0
0
70% disagree

Factory farming has gone up along with the same forces of economic expansion that have made things go better for humans over the last 80 years. I don't see any fundamental reason that AI would change these trends.

Mrtdj
1
0
0
10% agree

I don't want to be a pessimist here, so I slightly moved my avatar to the right... I hope it will be good for animals...

I don't really know, but my starting model would be... unless AGI is applying utilitarian models, it would likely rate human welfare above animal welfare by enough orders of magnitude to make animal welfare insignificant. The developments could allow for an end of farmed meat and the like, but that would also make the need to have animals as such... mostly redundant? You might have reservations for animals... Dunno.

If AGI takes on the same values as humanity as a whole, factory farming will continue, which means it would not go well for animals.

Babel
1
0
0
70% disagree

Value lock-in is the central variable. If AGI leads to lock-in of current human values, then humans may survive while animals keep suffering.

If by "AGI goes well" we also include the continuation of things like moral progress (which current AI existential safety work does NOT address!), then the two are indeed aligned.

Hazo
1
0
0
60% ➔ 50% agree

A couple of different potential mechanisms could help farmed animals:

  • Solving cultivated meat or brainless animals
  • Creating better welfare technologies (e.g. solving all disease issues on current farms)
  • Generating enough societal wealth to make welfare improvements like lowering stocking density trivial

More abstractly, people generally care about welfare, so it will be one of the things that an aligned AGI optimizes for. However, it won't be optimal for animals because AGI won't be directly optimizing for welfare. For example, most people don't think it's w... (read more)

alene
1
2
0
100% disagree

The good news is that life on Earth has been going better and better for humans over the millennia. For instance, we have technology that make it easy to grow tons and tons of food so lots of people can eat as much as they want. We have cures for lots of previously deadly diseases so lots of us humans can live a very long time. And lots of people live in countries that recognize their rights. We also have a robust international economy that makes it really easy for a large number of people to buy the goods and services they want—and for lots of other peopl... (read more)

shepardriley
1
0
0
30% disagree

No particular strong reason, this is my intuition but curious to see people's reasoned takes.

If AGI goes well for humans, this will likely mean a lot of technological development. This would likely include technologies allowing for products equal to or superior on the dimensions humans like, that don't have the animal welfare entailments. I realize that there have been some arguments that people would still prefer products created through suffering even if alternatives could be just as cheap, satisfying, and convenient, but I think that attitudes would change in the medium to long-term if those conditions were met.

Gowthama Rajavelu
-1
0
0
80% disagree

I don't agree that AGI will go well for humans, hence I am disagreeing. And if it doesn't go well for humans, it won't go well for animals either. The logic here is simple: humans are also animals.
