I appreciate there's a lot of nuance in the question but some rough thoughts:
There seem to be fewer ways AGI goes well for humans and is also good for animals than ways it goes well for humans but is bad for animals.
Moral expansion to include animals doesn't seem guaranteed to me (in either the near- or long-term future). There are a lot of people today (e.g. across other cultures and generations) who don't value animal welfare, or value it only negligibly.
Short of an AGI utopia with unlimited resources, I think there will be tradeoffs between animal and human considerations where animals will nearly always come second. E.g. it seems plausible that an AGI would accept a +10 human-wellbeing outcome even if it includes a -11 animal-wellbeing outcome.
Moral lock-in, like others have mentioned
These thoughts assume we only do human-values alignment.