JL

Jakob Lohmar

DPhil Student in Philosophy @ University of Oxford
162 karma · Joined · Pursuing a doctoral degree (e.g. PhD) · Oxford, United Kingdom

Bio

I'm currently writing a dissertation on Longtermism with a focus on non-consequentialist considerations and moral uncertainty. I'm generally interested in the philosophical aspects of global priorities research and plan to contribute to that research after my DPhil as well. Before moving to Oxford, I studied philosophy and a bit of economics in Germany, where I helped organize the local group EA Bonn for a couple of years. I also worked in Germany for a few semesters as a research assistant and taught some seminars on moral uncertainty and the epistemology of disagreement.

How I can help others

If you have a question about philosophy, I could try to help you with it :)

Comments (35)

Yeah, good points. I think for exactly these reasons it is important that each (sub-)cause is included in not just one but several rankings. However, those rankings needn't be total rankings; they can themselves be partial. E.g. one partial ranking is 'present people < farmed animals' and another is 'farmed animals < wild animals'. From these, we can infer (by transitivity of "<") that 'present people < wild animals', which already gets us closer to a total ranking. So I think one way that a partial ranking of (sub-)causes can help determine a total ranking - and hence the 'best cause' overall - is if there are several overlapping partial rankings.
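The inference above is just the transitive closure of a set of 'a < b' comparisons. A toy sketch (the helper function and the example pairs are mine, purely illustrative):

```python
def transitive_closure(pairs):
    """Given a set of (a, b) pairs meaning 'a < b', return all pairs
    implied by transitivity, i.e. the transitive closure."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        # Chain any (a, b) and (b, d) into (a, d).
        new_pairs = {(a, d) for (a, b) in closure
                            for (c, d) in closure if b == c}
        if not new_pairs <= closure:
            closure |= new_pairs
            changed = True
    return closure

# Two overlapping partial rankings, as in the example above:
rankings = {("present people", "farmed animals"),
            ("farmed animals", "wild animals")}

closure = transitive_closure(rankings)
# The inferred comparison 'present people < wild animals' now appears:
print(("present people", "wild animals") in closure)  # True
```

The more the input rankings overlap, the more comparisons the closure yields, and the closer the combined ranking gets to a total one.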

(By the way, just in case you didn't see it: I had written this other reply to your previous comment - no need to answer it, but I wanted to make sure you didn't overlook it since I wrote two separate replies.)

One thing this suggests is that you might think of this slightly differently when you are asking "What is this activity like?" at the individual level vs asking "What prioritisation are we doing?" at the movement level. A more narrowly focused individual project might be a contribution to wider cause prioritisation. But if, ultimately, no-one is considering anything outside of a single cause area, then we as a movement are not doing any broader cause prioritisation.

I also think that this is crucial for understanding the whole picture. Analogously to employees in a company (or indeed scientists) who work on some narrow task, members of EA could each work on prioritization in a narrow field while the output of the whole community is an unrestricted CP. But I agree that it is important to also have people who think more big-picture and prioritize across different cause areas.

Thanks, David!

It might be helpful to distinguish two related but distinct issues here: (a) there are edge cases of prio-work where it is (even) intuitively unclear whether they should be categorized as CP or WCP, and (b) my more theoretical point that this kind of categorization is fundamentally relative to cause individuations.

The second issue (b) seems to be the more damaging one to your results in principle, as it suggests that your findings may hold only relative to one of many possible individuations. But I think it's plausible (although not obvious to me) that in fact it doesn't make a big difference, for two reasons: (i) a lot of actual prio-work takes something like your cause individuation for orientation (i.e. there is not that much prio-work between your general causes and relatively specific interventions), and (ii) your analysis seems not to apply the specific cause individuation mentioned in the beginning very strictly in the end - you seem to think of causes as something like global health, animals, and catastrophic risks, but not necessarily these in particular. So I wonder if your results could be redescribed as holding relative to a cause individuation of roughly the generality/coarse-grainedness of the one you suggest, where the one you mention is only a proxy or example that could be replaced by similarly coarse-grained individuations. Then, for example, your result that only 8% of prio-work is CP would mean that 8% of prio-work operates at roughly the level of generality of causes like global health, animals, and catastrophic risks, although not all of that work compares these causes in particular.

So I think that your results are probably rather robust in the end. Still, it would be interesting to do the same exercise again based on a medium fine-grained cause individuation that distinguishes between, say, 15 causes (maybe similar to Will's) and see if anything changes significantly.

Many thanks for your reply! These are great points and I think there is some truth to them, but here is a bit to push back against them (or I guess just against your first point).

But, crucially, these researchers are still only making prioritisations within the super-ordinate Animal Welfare cause. They're not comparing e.g. PBMA to nuclear security initiatives. So I think you would need to say something like: these people are engaged in cause-level but within-cause prioritisation.

But I think you could say something analogous about other CP work? For example, there is a lot of discussion on whether we should focus on far future people or present people. This seems to be an instance of CP, but you could still say that the two causes fall within the super-ordinate cause of "Human Welfare". So it seems unnecessary for genuine CP that a cause is compared to causes that cannot be categorized under the same super-ordinate cause. This would be too demanding as a condition for CP, since you can (almost) always find a common super-ordinate cause for the compared (sub-)causes.

But if that is true, the fine-grainedness of the cause individuation does seem to make a difference to whether something counts as CP. For example, work on whether we should prioritize wild animals or farmed animals would then be genuine CP according to a cause individuation that includes 'wild animals' and 'farmed animals' but not according to your cause individuation which only includes 'animals' as a more general category. Maybe work that only compares 'wild animals' with 'farmed animals' but not with other causes seems strange, but the ultimate goal of this work could well be to find out what is the best cause overall. A conclusion on this could be reached by putting this work together with other work with a similar level of generality, such as work on whether to prioritize 'farmed animals' or 'global poverty'.

As a concrete example, maybe it's helpful to look at Will's recent suggestion that EA should acknowledge as cause areas: AI safety, AI character, AI welfare / digital minds, the economic and political rights of AIs, AI-driven persuasion and epistemic disruption, AI for better reasoning, decision-making and coordination, and the risk of (AI-enabled) human coups. Now imagine someone does research on the comparative effectiveness of Will's AI causes. Should we consider this CP or WCP? It seems it is CP relative to Will's cause individuation but WCP relative to the cause individuation that summarizes all of these under 'AI'.

This is a classic case of a surprising and suspicious convergence

Not sure if this distinction is made in the original post, but I'd say that the convergence in this case is not surprising since there is a fairly obvious explanation for it, but it is all the more suspicious since the alternative explanation for doing “entertainment for EAs” is the immediate fun and recognition one gets from it (rather than that one merely has an emotional connection to the cause).

Overall, I guess it's good to have an "entertainment for EAs" detector that is not very sensitive, so that it only goes off when large amounts of resources are at stake. E.g. not when it's about writing a fun post or buying pizza for attendees, but when it's about... buying an abbey.

I'm late to the party but would still be interested in what you think of this: cause areas can be individuated in more or less fine-grained ways. For example, we could consider 'animal welfare' one cause area, or 'wild animal welfare' and 'farmed animal welfare' two cause areas; and we could individuate more finely still, between 'wild invertebrate welfare' and 'wild vertebrate welfare', and so on. I think you might even end up with (what is intuitively thought of as) interventions at some point by making causes more and more fine-grained. If so, there is no fundamental difference between 'causes' and 'interventions'.

Now, that is not to say that distinguishing between causes and interventions is not useful, and some cause/intervention individuations are certainly more intuitive than others. But if there are several permissible/useful/intuitive ways of individuating them, you might get a different picture of the resource allocation between CP and WCP (and indeed also CCP). Generally, I think that the more fine-grainedly causes are individuated, the more work will count as CP rather than WCP. Conversely, if you individuate causes in a very coarse-grained way, it is unsurprising that most prioritization work will count as 'within a cause'. In the extreme case where you only consider a single all-encompassing cause, all prioritization will necessarily be within that cause. If you distinguish only between two causes (say, human and non-human welfare), there can be genuine CP - namely between these two causes - but it still wouldn't be surprising if most prio-work fell within one of these two causes and therefore counted as WCP. Now, you distinguish between three causes. That is not unusual in EA, but it is still very coarse-grained, and I think you could sensibly distinguish instead between, say, 10 cause areas or so. Would this affect the result of your analysis such that more prio-work would count as CP?

If there are so many new promising causes that EA should plausibly focus on (going from something like 5 to something like 10 or 20), cause-prio between these causes (and ideas for related but distinct causes) should be especially valuable as well. I think Will agrees with this - after all, his post is based on exactly this kind of work! - but the emphasis in this post seemed to be that EA should invest more into these new cause areas rather than investigate further which of them are the most promising - and which aren't that promising after all. It would be surprising if we couldn't still learn much more about their expected impact.

Hey Kritika, great work! I must admit that I haven't yet read all passages carefully, but here are some high-level thoughts that immediately came to mind.

  1. The requirements for Institutional Longtermism that you suggest seem to me like desiderata from a (purely) longtermist perspective, but I don't see why they should be considered requirements. For example, you suggest that it is a requirement that core long-term policies can only be modified by a supermajority of e.g. 90%. This may be desirable from a longtermist perspective, but long-term policies that can be modified by a smaller supermajority, or even just a majority vote, would still be valuable from a longtermist perspective.
  2. This seems analogous to other causes, such as animal welfare. From a purely animal welfare perspective, it may be desirable to have animal welfare policies that cannot be modified by even 90% of voters, and so on. But that doesn't mean that animal welfare is incompatible with democracy?
  3. I guess you see the difference between the longtermist cause and other causes as lying in longtermism's demands: we should design institutions, without exception, such that they are optimized for the long-term future, because the long-term future matters that incredibly much. But that would be a very extreme form of longtermism. Even Greaves and MacAskill's Strong Longtermism only makes claims about what we should do / what is best to do on the margin. It doesn't say that we should spend all (or even 50% of) our resources on the long-term future. Similarly, Institutional (even Strong) Longtermism could merely claim that a fraction of public resources should be spent on the long term. Let's say that's 10%. Then decisions about the remaining 90% of public resources could be made based on democratic procedures.
  4. Finally, I think it's even desirable from a longtermist perspective to leave important political decisions in the hands of future people: they probably know better how to improve the long term (e.g. because of improved forecasting). 

That seems like a strange combination indeed! I will need to think more about this...

and perhaps under-rewarded given it is less exciting.

...especially so in academia! I'd say that in philosophy mediocre new ideas are more publishable than good objections.
