River

759 karma · Joined

Posts: 2 · Comments: 70

Sorted by: New
I'm not especially familiar with the history - I came to EA after the term "longtermism" was coined, so that has just always been the vocabulary for me. But you seem to be equating an idea being chronologically old with it already being well studied and explored, its low-hanging fruit already picked. You seem to think that old implies not neglected, and that does not follow. I don't know how old the idea of longtermism is, and I don't particularly care; it is certainly older than the word. But it does seem to be almost completely neglected outside EA, as well as important and, at least with regard to x-risks, tractable. That makes it an important EA cause area.

Why on earth would you set 2017 as a cutoff? Language changes; there is nothing wrong with a word being coined for a concept and then applied to uses of the concept that predate the word. That is usually how it goes. So I think your exclusion of existential risk is just wrong. The various interventions for existential risks, of which there are many, are the answer to your question.

merely possible people are not people

And this, again, is just plainly false, at least in the morally relevant senses of these words.

I will admit that my initial statement was imprecise, because I was not attempting to be philosophically rigorous. You seem to be focusing on the word "actual", which was a clumsy word choice on my part, since "actual" does not appear in the phrase "person-affecting views". Perhaps what I should have said is that Parfit seems to think that possible people are somehow not people with moral interests.

But at the end of the day, I'm not concerned with what academic philosophers think. I'm interested in morality and persuasion, not philosophy. It may be that his practical recommendations are similar to mine, but if his rhetorical choices undermine those recommendations, as I believe they do, that does not make him a friend, much less a godfather of longtermism. If he wasn't capable of thinking through the rhetorical implications of his linguistic choices, then he should not have started commenting on morality at all.

You seem to be making an implicit assumption that longtermism originated in the philosophical literature, and that therefore whoever first put an idea into the philosophical literature is the originator of that idea. I call bullshit on that. These are not complicated ideas that first arose among philosophers. These are relatively simple ideas that I'm sure many people had thought of before anyone thought to write them down. One of the things I hate most about philosophers is their tendency to claim dominion over ideas just because they wrote long and pointless tomes about them.

Let's clarify this a bit then. Suppose there is a massive nuclear exchange tomorrow, which leads in short order to the extinction of humanity. I take it both proponents and opponents of person-affecting views will agree that this is bad for the people who are alive just before the nuclear detonations and die either from those detonations or shortly after because of them. Would it also be bad for a person who counterfactually would have been conceived the day after tomorrow, or in a thousand years, had there not been a nuclear exchange? I think the obviously correct answer is yes, and I think the longtermist has to answer yes, because that future person who exists in some timelines and not others is an actual person, with actual interests that any ethical person must account for. My understanding is that person-affecting views say no, because they have mislabeled that future person as not an actual person. Am I misunderstanding what is meant by person-affecting views? Because if I have understood the term correctly, I have to stand by the position that it is an obviously biased term.

Put another way, it sounds like the main point of a person-affecting view is to deny that preventing a person from existing causes them harm (or perhaps benefits them, if their life would not have been worth living). The view does this by labeling such a person as somehow not a person. That is obviously wrong and biased.

Ah. I mistakenly thought that Parfit coined the term "person-affecting view", which is such an obviously biased term that I assumed he must have been against longtermism. But I can't actually find confirmation of that, so maybe I'm just wrong about the origin of the term. I would be curious if anyone knows who did coin it.

How on earth is Derek Parfit the godfather of longtermism? If I recall correctly, this is the person who thinks future people are somehow not actual people, thereby applying the term "person-affecting views" to exactly the opposite of the set of views a longtermist would think that label applies to.

I would not frame the relationship that way, no. I would say EA is built on top of rationality. Rationality is about how to understand the world and achieve your goals; it defines itself as systematized winning. But it is agnostic as to what those goals are. EA takes those rationality skills and fills in particular goals. I think EA's mistake was in creating a pipeline that often brought people into the movement without fully inculcating the skills and norms of rationality.

EA, back in the day, refused to draw a boundary with the rationality movement in the Bay area

That's a hell of a framing. EA is an outgrowth of the rationality movement, which is centered in the Bay Area. EA wouldn't be EA without rationality.

I take it "any bad can be offset by a sufficient good" is what you are thinking of as being among the yellow-circle implications. My view is that it actually belongs in the red circle. It might even be how I would define utilitarianism, rather than your UC.

What I am still really curious about is your motivation. Why do you even want to call yourself a utilitarian or an effective altruist at all? If you are so committed to the idea that some bads cannot be offset, why not just call yourself a deontologist? I came to EA precisely to find a place where I can do moral reasoning and have moral conversations with other spreadsheet people, without running into this "some bads cannot be offset" stuff.
