Let's clarify this a bit then. Suppose there is a massive nuclear exchange tomorrow, which leads in short order to the extinction of humanity. I take it both proponents and opponents of person-affecting views will agree that this is bad for the people who are alive just before the nuclear detonations and die either from those detonations or shortly after because of them. Would it also be bad for a person who counterfactually would have been conceived the day after tomorrow, or in a thousand years, had there not been a nuclear exchange? I think the obviously correct answer is yes, and I think the longtermist has to answer yes, because that future person who exists in some timelines and not others is an actual person with actual interests that any ethical person must account for. My understanding is that person-affecting views say no, because they have mislabeled that future person as not an actual person. Am I misunderstanding what is meant by person-affecting views? Because if I have understood the term correctly, I have to stand by the position that it is an obviously biased term.
Put another way, it sounds like the main point of a person-affecting view is to deny that preventing a person from existing causes them harm (or maybe benefits them if their life would not have been worth living). Such a view does this by labeling such a person as somehow not a person. This is obviously wrong and biased.
Ah. I mistakenly thought that Parfit coined the term "person-affecting view", which is such an obviously biased term I thought he must have been against longtermism, but I can't actually find confirmation of that, so maybe I'm just wrong about the origin of the term. I would be curious if anyone knows who did coin it.
How on earth is Derek Parfit the godfather of longtermism? If I recall correctly, this is the person who thinks future people are somehow not actual people, thereby applying the term "person-affecting views" to exactly the opposite of the set of views a longtermist would think that label applies to.
I would not frame the relationship that way, no. I would say EA is built on top of rationality. Rationality talks about how to understand the world and achieve your goals; it defines itself as systematized winning. But it is agnostic as to what those goals are. EA takes those rationality skills and fills in some particular goals. I think EA's mistake was in creating a pipeline that often brought people into the movement without fully inculcating the skills and norms of rationality.
I take it "any bad can be offset by a sufficient good" is what you are thinking of as being in the yellow circle implications. And my view is that it is actually red circle. It might actually be how I would define utilitarianism, rather than your UC.
What I am still really curious about is your motivation. Why do you even want to call yourself a utilitarian or an effective altruist or something? If you are so committed to the idea that some bads cannot be offset, then why don't you just want to call yourself a deontologist? I come to EA precisely to find a place where I can do moral reasoning and have moral conversations with other spreadsheet people, without running into this "some bads cannot be offset" stuff.
My main issue here is a linguistic one. I've considered myself a utilitarian for years. I've never seen anything like this UC, though I think I agree with it, and with a stronger version of premise 4 that does insist on something like a mapping to the real numbers. You are essentially constructing an ethical theory, which very intentionally insists that there is no amount of good that can offset certain bads, and trying to shove it under the label "utilitarian". Why? What is your motivation? I don't get that. We already have a label for such ethical theories: deontology. The usefulness of having the label "utilitarian" is precisely to pick out those ethical theories that do, at least in principle, allow offsetting any bad with a sufficient good. That is a very central question on which people's ethical intuitions and judgments differ, and which this language of utilitarianism and deontology has been created to describe. This is where one of reality's joints is.
For myself, I do not share your view that some bads cannot be offset. When you talk of 70 years of the worst suffering in exchange for extreme happiness until the heat death of the universe, I would jump on that deal in a heartbeat. There is no part of me that questions whether that is a worthwhile trade. I cannot connect with your stated rejection of it. And I want to have labels like "utilitarian" and "effective altruist" to allow me to find and cooperate with others who are like me in this regard. Your attempt to get your view under these labels seems both destructive of my ability to do that, and likely unproductive for you as well. Why don't you want to just use other, more natural labels like "deontology" to find and cooperate with others like you?
For instance, if someone is interested in AI safety, we want them to know that they could find a position or funding to work in that area.
But that isn't true, never has been, and never will be. Most people who are interested in AI safety will never find paid work in the field, and we should not lead them to expect otherwise. There was a brief moment when FTX funding made it seem like everyone could get funding for anything, but that moment is gone, and it's never coming back. The economics of this are pretty similar to a church: yes, there are a few paid positions, but not many, and most members will never hold one. When there is a member who seems particularly well suited to the paid work, yes, it makes sense to suggest it to them. But we need to be realistic with newcomers that they will probably never get a check from EA, and the ones who leave because of that weren't really EAs to begin with. The point of a local EA org, whether university based or not, isn't to funnel people into careers at EA orgs; it's to teach them ideas that they can apply in their lives outside of EA orgs. Let's not lose sight of that.
And this, again, is just plain false, at least in the morally relevant senses of these words.
I will admit that my initial statement was imprecise, because I was not attempting to be philosophically rigorous. You seem to be focusing on the word "actual", which was a clumsy word choice on my part, because "actual" is not in the phrase "person-affecting views". Perhaps what I should have said is that Parfit seems to think that possible people are somehow not people with moral interests.
But at the end of the day, I'm not concerned with what academic philosophers think. I'm interested in morality and persuasion, not philosophy. It may be that his practical recommendations are similar to mine, but if his rhetorical choices undermine those recommendations, as I believe they do, that does not make him a friend, much less a godfather of longtermism. If he wasn't capable of thinking about the rhetorical implications of his linguistic choices, then he should not have started commenting on morality at all.
You seem to be making an implicit assumption that longtermism originated in philosophical literature, and that therefore whoever first put an idea in the philosophical literature is the originator of that idea. I call bullshit on that. These are not complicated ideas that first arose amongst philosophers. These are relatively simple ideas that I'm sure many people had thought of before anyone thought to write them down. One of the things I hate most about philosophers is their tendency to claim dominion over ideas just because they wrote long and pointless tomes about them.