The problem with Pascal's Wager is that it ignores reversed scenarios that would offset it: there could just as well be a god who punishes you for believing in God without good evidence.
I don't think this would be applicable to our scenario. Whether we choose to help the human or the animals, there will always be uncertainty about the (long-term) effects of our intervention, but the intervention would ideally be researched well enough for us to have confidence that its expected value is robustly positive.
Let's say you faced a situation where you could either (a) improve the welfare of 1 human, or (b) improve, to the same extent as in (a) but conditional on their sentience, the welfare of X animals which you currently believe are not sentient.
Does your epistemology imply that, no matter how large X is, you would never choose (b) until you found a "rational reason to drop your views"? And yet you admit there is a possibility that you will find such a reason in the future, including the possibility that credences turn out to be a superior way of representing beliefs?
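The trade-off between (a) and (b) can be sketched as a simple expected-value calculation. (A toy model: the credence `p` and the welfare numbers are hypothetical illustrations, not estimates anyone in this thread has endorsed.)

```python
# Toy expected-value comparison for options (a) and (b).
# All numbers are hypothetical, for illustration only.

def expected_welfare_gain(n_beings: int, p_sentient: float,
                          gain_per_being: float = 1.0) -> float:
    """Expected welfare gain from helping n_beings, each of which
    is sentient with probability p_sentient."""
    return n_beings * p_sentient * gain_per_being

p = 0.001  # tiny credence that the animals are sentient

# Option (a): help 1 human, who is certainly sentient.
a = expected_welfare_gain(1, 1.0)

# Option (b): help X animals, each sentient with credence p.
for x in (100, 1_000, 10_000):
    b = expected_welfare_gain(x, p)
    print(x, b, b > a)

# Once X exceeds 1/p (here 1,000), option (b) has the higher
# expected value -- unless one's epistemology assigns p = 0 outright.
```

The point of the sketch: with credences, some finite X always tips the balance toward (b); with binary beliefs ("not sentient" = probability 0), no X ever does.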
Consider trade-offs between suffering of medium and extreme severity, where medium severity could be described as hurtful or disabling. It seems hard to defend the claim that no duration of medium suffering could ever outweigh a short duration of extreme suffering. This is not a matter of downplaying the extreme suffering for its short duration, but rather of appreciating the magnitude of an eternity of medium suffering. (I think the idea of eternity really helps in imagining what is at stake when one assumes lexicality.)
It also seems a priori implausible that having any particular experience could somehow reveal information about its being lexically superior to other experiences. Experiencing extreme suffering can tell us that its quality is extremely bad, but this insight is duration-insensitive. Filling the entire universe with people experiencing medium suffering forever seems decidedly worse than a single person experiencing extreme suffering for a few hours.
So if one deems medium suffering offsetable by pleasure, then extreme suffering will be offsetable as well, assuming transitivity.
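Spelled out more formally (my own shorthand, not from the original discussion: $d(\cdot)$ is disvalue, $v(\cdot)$ is value, $M_t$ is medium suffering of duration $t$, $E$ is the extreme suffering, $P$ is some pleasure):

$$\exists t:\ d(M_t) \ge d(E) \qquad \text{(a long enough duration of medium suffering outweighs } E\text{)}$$

$$\exists P:\ v(P) \ge d(M_t) \qquad \text{(medium suffering is offsetable by pleasure)}$$

$$\Rightarrow\ v(P) \ge d(E) \qquad \text{(by transitivity, } P \text{ offsets the extreme suffering too)}$$

Denying the conclusion therefore requires rejecting one of the two premises or transitivity itself.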
As short-lived fish low on the food chain, they accumulate only minimal levels of contaminants like mercury and PCBs.
Sardines and anchovies seem to be high in PCBs. Did you come across any research that stated otherwise?
We are mostly in agreement, though I don't quite understand what you meant by:
These seem to be examples where maximizing hedonistic utility functions leads to bad things happening, because they are.
If suffering and pleasure are incommensurable, in what way are such outcomes bad?
I would also be interested in your response to the argument that suffering is inherently urgent, while pleasure does not have this quality. Imagine you are incapable of suffering, and you are currently experiencing pleasure. One could say that you would be indifferent to the pleasure being taken away from you (or being increased to a higher level). Now imagine that you are instead incapable of experiencing pleasure, and you are currently suffering. In this case it would arguably be very clear to you that reducing suffering is important.
What I meant is that the disvalue of suffering becomes evident at the moment of experiencing it. Once you know what disvalue is, the next step is figuring out who can experience it. Given that you and I have very similar nervous systems, for example, and behave similarly in response to noxious stimuli, my subjective probability that you are capable of suffering will be much higher than my probability that a rock can suffer.
Sure, there is a small chance, but the question is: what can we do about it, and will the opportunity cost be justifiable? And for the same reason that Pascal's Wager fails, we can't just arbitrarily say "doing this may reduce suffering" and take that to justify the action, since the reversal "doing this may increase suffering" plausibly offsets it.