The person who would have been born if my parents had had sex five seconds later than they did when they conceived me--call him "Justin"--is not an actual person. He is a merely possible person. He does not actually exist any more than Sherlock Holmes actually exists. Of course, if my parents had had sex five seconds later, then he would be an actual person, and I would be the merely possible person. But they did not, so he is not an actual person. And accordingly my parents did not negatively affect any actual person by failing to conceive Justin, because Justin is not actual.
Similarly, if a nuclear war kills everyone, then there are no actual future people. They are merely possible people--just like the infinitely many possible people in all the other possible futures that were never going to occur, who would have been merely possible regardless of whether the war happened. None of them actually exists, just as Sherlock Holmes doesn't exist. Of course, if the nuclear war had not occurred, then they would have been born and would be actual people. But it did happen, so they are not actual people. And accordingly the nuclear war does not negatively affect any actual future person, because the future people (given that, actually, there are no future people) are not actual.
I think what you want to say is that you can negatively affect people who are, in actuality, merely possible by preventing them from coming into existence, and that for this reason "person-affecting view" is an inaccurate term. That's a view you can take. But the crux is not whether future people are actual people (everyone agrees they are, if they actually exist at some point), and anyway this is not really the way to figure out what Parfit thought about longtermism.
As far as I know, Parfit did coin it; he just didn't mean for "person-affecting views" to be "obviously biased" in favor of person-affecting views, since he rejected person-affecting views. The idea behind the term is not that "future people are somehow not actual people": a proponent of a person-affecting view will generally agree that I have obligations to improve the welfare of someone who will exist in the future no matter what. (Maybe I can do something now to benefit, in the future, some kid who will be conceived on the other side of the world five minutes from now.) The idea is rather that people who are never actual (at any point, ever)--that is, merely possible people--are not actual people. In that case, no person is negatively affected by not being created, since there is no person to negatively affect.
Just so I understand: are you denying that a life with a tiny bit of positive welfare and no negative welfare, or a life with a tiny bit of positive welfare and a far, far tinier bit of negative welfare, is determinately net positive? If so, I think that is an important crux, though I don't see why it would be true.
I guess it had better not be a question of whether, as a matter of actual fact, I have the brainpower to do the exercise (with my eyes closed!). Babies, I assume, have no concept of their own non-existence, and so can't compare any state they're in to non-existence, yet they can have positive or negative welfare. Or someone who lives long enough will not be able to remember, much less bring to mind, everything that's happened in their life, yet they can have positive or negative welfare. So what matters is, if anything, some kind of idealized comparison I may or may not be able to do in actual fact. (And in any event, I guess the argument here would not be that nematodes have indeterminate welfare because their range is small, but rather that they do because they are stupid.)
What I'm suggesting could be the case is a situation where, say, the correct weighting of X vs. Z is not a precise ratio but a range--anything between 7.9:1 and 8:1, let's say for the sake of argument--such that the ratio of the actual values falls inside this indeterminate range, and a small change in either direction will not push it out of the range. I can see how that could perhaps be the case. But that kind of indeterminacy is orthogonal to the size of the welfare range. It would still hold if the values were .087455668741 and .011024441253 or 87455668741 and 11024441253, and wouldn't hold if the values were .087455668741 and .010024441253.
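To make that concrete, here's a quick sketch of the arithmetic (my own illustration--the welfare_sign helper and the endpoint check are stipulated, not anything from our discussion): the sign of net welfare is determinate just in case it comes out the same under every admissible weighting, and since net welfare is linear in the weighting, checking the two endpoints of the range suffices.

```python
def welfare_sign(x, z, w_lo=7.9, w_hi=8.0):
    """Sign of net welfare x - w*z across every admissible weighting w in [w_lo, w_hi]."""
    lo = x - w_hi * z  # most pessimistic admissible verdict
    hi = x - w_lo * z  # most optimistic admissible verdict
    if lo > 0:
        return "determinately positive"
    if hi < 0:
        return "determinately negative"
    return "indeterminate"

# x/z ~ 7.93 falls inside the 7.9-8.0 band, so the sign flips across it:
print(welfare_sign(.087455668741, .011024441253))  # indeterminate
# Scaling both values by 10**12 stretches the welfare range but not the ratio:
print(welfare_sign(87455668741, 11024441253))      # indeterminate
# x/z ~ 8.72 lies outside the band, so every admissible weighting agrees:
print(welfare_sign(.087455668741, .010024441253))  # determinately positive
```

The verdict tracks the ratio of the values, not their magnitudes--which is the sense in which the indeterminacy is orthogonal to the size of the welfare range.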
So my view is that if I have (X,Y,Z) at (0,0,0), which is equal to nonexistence, then (.01,0,0) is positive and (-.01,0,0) is negative. Why wouldn't they be? Why wouldn't a life with a slight positive and no negatives be positive? And presumably, say, (.01,0,-.00000001) will also be positive.
I think people frequently conflate there being no reason for something and there being very little reason. E.g., they'll say "there is no evidence for a flat earth" when there is obviously some evidence for it (that some people believe in it is some evidence). If people say (.01,0,0) is not better than non-existence, I'd suspect that's what they're doing.
As far as I can see, there just isn't such a thing as a neutral range. An individual could have an arbitrarily small welfare range and still have determinately positive or negative net welfare, or (I am open to the possibility) an arbitrarily large welfare range while it is indeterminate whether their net welfare is positive or negative. So noting that nematodes have small welfare ranges doesn't, in and of itself, tell us anything about this.
I guess I'm not getting how this responds to my point. Suppose my welfare range (understood as representing the range of positive and negative experiences I can have) goes from -.01 to .01. I say I might have determinately positive welfare because, as a matter of fact, all, or the predominant majority of, my experiences are slightly positive. On the other hand, suppose my range goes from -1000 to 1000. I say (I am open to the possibility that) it might be indeterminate whether I have positive welfare, because I have a bunch of importantly different types of positive and negative experiences that are closely matched without a uniquely correct weighting. So the indeterminacy is tied not to the size of the welfare range but to having closely matched, importantly different types of positive and negative experiences with no uniquely correct weighting. It could still be that it's indeterminate whether nematodes have positive or negative welfare, but that won't be just because their welfare range is small.
What's your answer to that?
I don't quite see the connection here between having a small welfare range and having an indeterminate welfare sign. Suppose a being is only capable of having very slightly positive experiences. Then it has a very small welfare range, but it seems to me that its net welfare is determinately positive: it has positive experiences and no negative ones.
There is some plausibility to the idea that there may not be uniquely correct ways of weighing different experiences against each other. E.g., perhaps there is no uniquely correct answer to how many seconds of a pleasant breeze outweigh 60 minutes of a boring lecture, or how many minutes of the intellectual enjoyment of playing chess outweigh the sharp pain of a bad papercut, even if there are incorrect answers (maybe one second of the breeze is definitely not enough to outweigh the lecture). This may be plausible in light of Ruth Chang-type intransitivity arguments: if I am indifferent between X seconds of the breeze and 60 minutes of the lecture, I might also be indifferent between X+1 seconds of the breeze and 60 minutes of the lecture, even though I obviously prefer X+1 seconds of the breeze to just X seconds, and it's not clear that this is merely an epistemic issue. If, as came up in your discussion with Vasco, someone wants to understand one experience's outweighing another as a matter of what you would prefer (rather than a realist understanding on which the outweighing comes first and rational preferences follow), this seems especially plausible, as I doubt our preferences about these things are, as a matter of descriptive psychology, always perfectly fine-grained.
In that case, I could see it sometimes being indeterminate whether a being has positive or negative welfare because it has lots of very different types of experiences which come out closely matched with no uniquely correct weighting. But that is orthogonal to the size of the welfare range: it could turn out to be true even if the individual experiences are really (dis)valuable.
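To put the intransitivity point in concrete terms, here's a toy model (entirely my own stipulation--the values, the admissible band, and the helper functions are illustrative): within a single good, more is strictly better, but cross-good comparisons only have an admissible band of exchange rates, and I count as indifferent whenever the verdict depends on where in the band the rate falls.

```python
LECTURE = 5.0      # stipulated value at stake in sitting through the lecture
BAND = (0.9, 1.1)  # hypothetical admissible band of breeze-vs-lecture exchange rates

def breeze_value(seconds):
    return 0.1 * seconds  # stipulated value of seconds of a pleasant breeze

def indifferent_to_lecture(seconds):
    # Indifferent iff the comparison comes out differently at different
    # admissible exchange rates, i.e. the breeze value falls inside the band.
    return BAND[0] * LECTURE <= breeze_value(seconds) <= BAND[1] * LECTURE

print(indifferent_to_lecture(47))           # True: 47s of breeze ~ the lecture
print(indifferent_to_lecture(48))           # True: 48s of breeze ~ the lecture
print(breeze_value(48) > breeze_value(47))  # True: yet 48s strictly beats 47s
# If indifference were transitive, 47s ~ lecture ~ 48s would force 47s ~ 48s,
# contradicting the strict within-good preference--so indifference here is
# intransitive without any epistemic gap in the model.
```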
In many cases, it seems very doubtful that further research into whether animals are conscious will be action-guiding in any meaningful way. Further research into whether chickens are conscious, say, will not produce definitive certainty that they are or aren't. (Finding out whether your favorite theory of consciousness predicts that they are conscious is only so useful, since we should have significant uncertainty about the right theory of consciousness.) And moderate changes in your credence probably shouldn't affect what you should do. E.g., if your credence in chicken consciousness drops 20 percentage points, there is still the moral risk argument for acting as if chickens are conscious; and if you have some reason for rejecting that argument, it was probably also a reason when your credence was 20 points higher. At the same time, there are potentially very great opportunity costs to waiting to act--costs that aren't worth incurring if decision-relevant information isn't actually likely to come in.
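As a hedged illustration of why a moderate credence shift needn't change the decision (all the numbers here are stipulated for the example, not estimates of anything): on simple moral-risk reasoning, you act whenever the expected moral harm avoided exceeds the cost of acting, and that inequality is typically robust to 20-point swings.

```python
HARM_IF_CONSCIOUS = 100.0  # stipulated moral cost of inaction if chickens are conscious
COST_OF_ACTING = 5.0       # stipulated cost of acting as if they are

def should_act(credence):
    # Moral-risk reasoning: act when expected harm avoided exceeds the cost.
    return credence * HARM_IF_CONSCIOUS > COST_OF_ACTING

print(should_act(0.50))  # True
print(should_act(0.30))  # True: a 20-point drop leaves the verdict unchanged
print(should_act(0.04))  # False: only a drastic shift would flip it
```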
I think your stridency outpaces your understanding of the issue in such a way that continuing the conversation is not likely to be useful. So I will stop after this.
Your original claim was that Parfit "thinks future people are somehow not actual people." That is wrong. What he thinks is that people who are not actual (= do not, at any point, exist in the actual world) are not actually people. (Justin is not a human being. Human beings, among other things, have DNA, and Justin does not (actually) have any DNA. Justin only possibly has DNA and is only possibly a human being. So, too, Justin is not a person--he is only possibly a person. That, anyway, is the dominant way of thinking about this in analytic philosophy.) So, on this way of thinking, person-affecting views do not restrict themselves to an arbitrary subset of people, because merely possible people are not people. You take issue with this, as some people do, though you do not seem aware of the metaphysical issues your view raises, and in any case your view does not seem obvious to me. Parfit's takeaway from all this, very roughly, is that you sometimes have an obligation to make the world better even though failing to do so would not harm any particular person, whereas you want to say that in those cases the obligation holds because failing to act would harm merely possible people. I suspect these views will wind up equivalent in their practical recommendations.
You also want to say that calling them "person-affecting views" is "a pretty strong mark against counting Parfit as a godfather of longtermism." To me, the way to determine whether he is a godfather of longtermism is to ask whether he was a primary originator and defender of the ideas underpinning longtermism, not to look at what he named a different view.