David_Moss

Principal Research Director @ Rethink Priorities
9989 karma · Working (6-15 years)

Bio

I am the Principal Research Director at Rethink Priorities. I lead our Surveys and Data Analysis department and our Worldview Investigation Team. 

The Worldview Investigation Team previously completed the Moral Weight Project and the CURVE Sequence / Cross-Cause Model. We're currently working on tools to help EAs decide how to allocate resources within portfolios of different causes, and on how to use a moral parliament approach to allocate resources given metanormative uncertainty.

The Surveys and Data Analysis Team primarily works on private commissions for core EA movement and longtermist orgs, where we provide:

  • Private polling to assess public attitudes
  • Message testing / framing experiments, testing online ads
  • Expert surveys
  • Private data analyses and survey / analysis consultation
  • Impact assessments of orgs/programs

I formerly managed our Wild Animal Welfare department, previously worked for Charity Science, and have been a trustee at Charity Entrepreneurship and EA London.

My academic interests are in moral psychology and methodology at the intersection of psychology and philosophy.

How I can help others

Survey methodology and data analysis.

Sequences (4)

EA Survey 2024
RP US Public AI Attitudes Surveys
EA Survey 2022
EA Survey 2020

Comments (649)

there’s a limit to how much you can learn in a structured interview, because you can’t adapt your questioning on the fly if you notice some particular strength or weakness of a candidate. 

 

I agree. Very often I think that semi-structured interviews (which have a more or less closely planned structure, with the capacity to deviate) will be the best compromise between fully structured and fully unstructured interviews. I think it's relatively rare that the benefits of being completely structured outweigh the benefits of at least potentially asking a relevant follow-up question, and rare that the benefits of being completely unstructured outweigh the benefits of having at least a fairly well-developed plan, with key questions to ask going in.

Thanks Jakob!

For example, there is a lot of discussion on whether we should focus on far future people or present people. This seems to be an instance of CP, but still you could say that the two causes are within the super-ordinate cause of "Human Welfare".

This is a great example! I think there is a real tension here. 

On the one hand, typically we would say that someone comparing near-term human work vs long-term human work (as broad areas) is engaged in cause prioritisation. And the character of the assessment of areas this broad will likely be very much like the character we ascribed to cause prioritisation (i.e. concerning abstract assessment of general characteristics of very broad areas). On the other hand, if we're classifying the allocation of the movement as a whole across different types of prioritisation, it's clear that prioritisation that was only focused on near-term vs long-term human comparisons would be lacking something important in terms of actually trying to identify the best cause (across cause areas). To give a different example, if the movement only compared invertebrate non-humans vs vertebrate non-humans, I think it's clear that we'd have essentially given up on cause prioritisation, in an important sense.[1]

I think what I would say here is something like: the term "cause (prioritisation)" is typically associated with multiple different features, which typically go together, but which in edge cases can come apart. And in those cases, it's non-obvious how we should best describe the case, and there are probably multiple equally reasonable terminological descriptions. In our system, using just the main top-level EA cause areas, classification may be relatively straightforward, but if you divide things differently or introduce subordinate or superordinate causes, then you need to introduce some more complex distinctions like sub-cause-level within-cause prioritisation.

That aside, I think even if you descriptively divide up the field somewhat differently, the same normative points about the relative strengths and weaknesses of prioritisation focused on larger or smaller objects of analysis (more cause-like vs more intervention-like) and narrower or wider in scope (within a single area vs across more or all areas) can still be applied in the same way. And, descriptively, it still seems like the movement has relatively little prioritisation that is more broadly cross-area.

 

  1. ^

    One thing this suggests is that you might think of this slightly differently when you are asking "What is this activity like?" at the individual level vs asking "What prioritisation are we doing?" at the movement level. A more narrowly focused individual project might be a contribution to wider cause prioritisation. But if, ultimately, no-one is considering anything outside of a single cause area, then we as a movement are not doing any broader cause prioritisation.

Thanks David.

I certainly agree that we should be careful to make sure that we don't over-optimise short-term appeal at the cost of other things that matter (e.g. long-term engagement, accuracy and fidelity of the message, etc.). I don't think we're calling for people to only consider this dimension: we explicitly say that we "think that people should assess particular cases on the basis of all the details relevant to the particular case in question."

That said, I think that there are many cases where those other dimensions won't, in fact, be diminished by selecting messages which are more appealing.[1] For example:

  • In some cases, like this one, we're selecting within taglines that had already been selected as suitable candidates based on other factors (such as accuracy). We're then additionally considering data on how people actually respond.
  • Relatedly, we might have messages available which seem equally good on the other key dimensions which we care about, but which we know to have higher appeal. For example, in this case, I think "the most good you can do" is at least as accurate and likely to encourage long-term engagement as "doing good better". So, if this phrasing performs better in terms of initially appealing to people, this is a pro tanto consideration in its favour.
  • In many contexts, we are only considering the question of which ~5 word tagline to include on a website. So the common tradeoff between shorter, more initially appealing messages and longer but higher-fidelity ones may not apply.
  • In some cases, like short website taglines, most of the effect of the messages may be whether the person continues reading at all (in which case they read the rest of the website and learn a lot more content) or whether they are instantly turned off. We might not expect the short taglines themselves to have a long-term effect on people's understanding of EA (dominating all the later content they read).
  • While initial appeal and long-term engagement could diverge, in many cases there's no particular reason to think that they do. This framing suggests a tradeoff between what is merely superficially appealing vs what promotes long-term engagement. But, often, one message might just appeal less to people simpliciter, e.g. because people find it confusing or off-putting, without promoting any long-term benefits.
  • More generally, I think that we can often think of plausible ways that a message might appear more appealing, but actually be sub-optimal, when taking into account long-term second order effects or divergent effects across different subgroups and so on. But in such cases I think we typically need more investigation of how people respond, not less.

All that said, I certainly think that we should be careful not to over-optimise any single dimension, but instead carefully weigh all the relevant factors.

 

  1. ^

    Though note that these are not arguments that we should assume that other considerations don't matter. We should still assess each case on its merits and weigh all the considerations directly.

Thanks for your comment Jakob! A few thoughts:

  • I think if we individuated "causes" in a more fine-grained way, e.g. "Animal Welfare" -> "Plant-based meat alternatives", "Corporate Campaigns" etc., this might not actually change our analysis that much. Why? Prima facie, there are some more people who are working on questions like PBMA vs corporate campaigns, who would otherwise be counted as within-cause prioritisation in our current framework. But, crucially, these researchers are still only making prioritisations within the super-ordinate Animal Welfare cause. They're not comparing e.g. PBMA to nuclear security initiatives. So I think you would need to say something like: these people are engaged in cause-level but within-cause prioritisation. This is technically a kind of (sub-)cause-level prioritisation, but it lacks the cross-cause comparison that our CP and CCP have, due to still being constrained within a single cause.
  • The other thing that I'd note is that we also draw attention to the characteristic styles, and strengths and weaknesses, of cause prioritisation and intervention-level prioritisation. So, we argue, cause prioritisation is characterised more by abstract consideration of general features of the cause, whereas intervention-level prioritisation can increasingly attend to, more closely evaluate, and potentially empirically study the specific details of the particular intervention in question. For example, it's not possible to do a meaningful cost-effectiveness analysis of 'Animals' writ large,[1] but it is possible to do so for a particular animal intervention. I would speculate that as you individuated causes in an increasingly fine-grained way, their evaluation and prioritisation might become more intervention-like and less cause-like, as their evaluation becomes more tightly defined and more empirically tractable. My guess, though, is that a lot of even these more fine-grained sub-causes might still be much more like causes than interventions in our analysis, insofar as they will still contain heterogeneous groups of interventions and so need to be evaluated more in terms of general characteristics of the set.
  • I agree that if you individuated cause areas in an increasingly fine-grained way, so that each "cause" under consideration was an intervention (e.g. malaria nets in Uganda) or even a specific charity, then the cause/intervention distinction would collapse, in practice.
  1. ^

    Although you could do so for the best single intervention within the cause.

An alternative hypothesis is that less time is being devoted to these kinds of questions (see here and here). 

This potentially has somewhat complex effects, i.e. it's not just that you get fewer novel insights with 100 hours spent thinking than 200 hours spent thinking, but that you get more novel insights from 100 hours spent thinking when doing so against a backdrop of lots of other people thinking and generating ideas in an active intellectual culture.

To be clear, I don't think this totally explains the observation. I also think that it's true, to some extent, that the lowest hanging fruit has been picked, and that this kind of volume probably isn't optimising for weird new ideas. 

Perhaps related to the second point, I also think it may be the case that relatively more recent work in this area has been 'paradigmatic' rather than 'pre-paradigmatic' or 'crisis stage', which likely generates fewer exciting new insights.

A striking finding is that the area where people expect the greatest positive impact from AI is biosecurity and pandemic prevention

 

It seems like this might simply be explained by "biosecurity and pandemic prevention" containing two very different things: 'novel biosecurity risks' (of the kind EAs are concerned about) and 'helping with the next covid-19' (likely more salient to the general public and potentially involving broader healthcare improvements, which AI was also predicted to improve to a similar extent).

Perhaps relatedly, biosecurity and pandemic prevention was rated as the least of a problem today (below everything other than AI itself).

Post-FTX, I think core EA adopted a “PR mentality” that (i) has been a failure on its own terms and (ii) is corrosive to EA’s soul. 

 

I find it helpful to distinguish two things, one which I think EA is doing too much of and one which EA is doing too little of:

  • Suppressing (the discussion of) certain ideas (e.g. concern for animals of uncertain sentience): I agree this seems deeply corrosive. Even if an individual could theoretically hold onto the fact that x matters, and even act to advance the cause of x, while not talking publicly about it, obviously the collective second-order effects mean that not publicly discussing x prevents many other people forming true beliefs about or acting in service of x (often with many other downstream effects on their beliefs and actions regarding y, z...).
  • Attending carefully to the effect of communicating ideas in different ways: how an idea is communicated can make a big difference to how it is understood and received (even if all the expressions of the idea are equally accurate). For example, if you talk about "extinction from AI", will people even understand this to refer to extinction and not the metaphorical extinction of job losses? Or, per your recent useful example, if you talk about "AI Safety", will people understand this to mean "stop all AI development"? I think this kind of focus on clear and compelling communication is typically not corrosive, but often neglected by EAs (and often undertaken only at the level of intuitive vibes, rather than testing how people receive differently framed communications).

Currently, the online EA ecosystem doesn’t feel like a place full of exciting new ideas, in a way that’s attractive to smart and ambitious people

 

This may be partly related to the fact that EA is doing relatively little cause and cross-cause prioritisation these days (though, since we posted this, GPI has wound down and Forethought has spun up). 

People may still be doing within-cause, intervention-level prioritisation (which is important), but this may be unlikely to generate new, exciting ideas, since it assumes causes and works only within them, is often narrow and technical (e.g. comparing slaughter methods), and is often fundamentally unsystematic or inaccessible (e.g. how do I, a grantmaker, feel about these founders?).

Thanks for the post! It's great to see analysis of the LEAF data and engagement with existing EA Survey data.

much of the current research suggests these events [conferences, local groups, and educational programs] to be largely ineffective in encouraging participants to engage further with EA communities

That is not my impression of the existing data.

For example, you cite the 2019 cause prioritization report to say that:

Rethink Priorities’ analysis of the 2019 EA Survey found that 42% of respondents reported changing their primary cause area after becoming involved with an EA community. However, relatively few respondents had actually made career or behavioural changes to align with EA priorities.

I'm afraid I don't understand the reason why you think this post suggests that claim. That post addressed cause prioritization, not behavioural changes, and I don't think whether people changed their cause prioritization since joining EA is a good proxy for them making changes to align with EA priorities. Most respondents already supported EA causes at the time of joining EA (though many switch between causes or change their relative prioritizations over time). 

In the report on Engagement from that same year, we find that large numbers of EAs are taking actions aligned with EA priorities (e.g. making EA donations, changing their career plans, volunteering or working in EA jobs, etc.). 

a 2023 study of EAGx conferences, which compared the attitudes and behaviours of attendees with non-attendees, found no statistically significant differences between the two groups

I couldn't find a post with the title you gave, but perhaps you are referring to this one? While I was very glad that they did the study, as I commented at the time, it was extremely under-powered, so finding non-significant effects was not surprising.
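For intuition on the power point, here is a rough sketch of a power calculation using a normal approximation to a two-sample test. The effect size and group sizes are hypothetical illustrations, not the actual numbers from that study; the point is just how likely small samples are to miss even a moderate true effect:

```python
from math import sqrt, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

def approx_power(d, n_per_group, alpha_z=1.959964):
    """Approximate power of a two-sided, two-sample z-test (normal
    approximation) for standardised effect size d, n_per_group per arm."""
    shift = d * sqrt(n_per_group / 2)
    return norm_cdf(shift - alpha_z) + norm_cdf(-shift - alpha_z)

# Hypothetical numbers: a moderate effect (d = 0.3) with 40 people per arm
print(round(approx_power(0.3, 40), 2))  # ~0.27: the effect is missed ~73% of the time
```

On these (made-up) numbers, a real, moderate difference between attendees and non-attendees would fail to reach significance nearly three times out of four, so a null result carries little evidential weight.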

Participants' responses were tracked before and after the Leaf 2025 course to evaluate belief changes. Overall, the dataset revealed very little net change in views towards the statement ‘we should prioritise what is evidenced as best over what we emotionally prefer’.

I've not dug into the LEAF data in detail (and thank you again for analyzing it). But it looks like the main reason there was very little increase in people's agreement with this statement is that respondents overwhelmingly agreed with it even in the pre condition. Mean ratings were 6.05 out of 7 at the start of the course, leaving almost no room for the score to go up.
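A toy simulation of this ceiling effect (all numbers hypothetical, not the LEAF data itself): when pre-course ratings already cluster near the top of a 1-7 scale, even a genuine shift in underlying attitudes produces a smaller change in the observed means, because scores near 7 have nowhere to go.

```python
import random
from statistics import mean

def clip(x, lo=1, hi=7):
    """Clamp a rating onto the 1-7 scale."""
    return max(lo, min(hi, x))

random.seed(0)
# Hypothetical latent attitudes already clustered near the top of the scale
latent_pre = [random.gauss(6.0, 1.0) for _ in range(300)]
pre = [clip(round(x)) for x in latent_pre]
# Apply a genuine +0.5 shift in underlying attitude, then re-round and re-clip
post = [clip(round(x + 0.5)) for x in latent_pre]

observed_change = mean(post) - mean(pre)
# observed_change comes out smaller than the true +0.5 shift
print(round(mean(pre), 2), round(observed_change, 2))
```

The attenuation grows the closer the pre-condition mean sits to the scale maximum, which is why a 6.05/7 baseline leaves so little room to detect improvement.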

Events like EAGx are rising in influence (15% of respondents now cite them as important in their EA journey), pushing conferences, meetups, social or other events is a high-leverage way to connect new people to the community.

As EA Survey from 2024 suggests, personal connections are one of the strongest channels through which people first hear about EA (17.9%) and go on to get involved (45%) — so your invites & referrals really move the needle and act in the similar way.

 

I agree. I would also add that personal contacts and EAGx are commonly cited as the largest positive influences on people's ability to have an impact: personal contact with EAs is the most commonly cited (42.3% of respondents), while EAGx is cited by 13.1% of respondents (which should be interpreted in light of the fact that only a minority of EAs have ever attended an EAGx). These factors are both particularly influential for the most highly engaged EAs.

They are also both close to the top of the most commonly mentioned sources of interesting and valuable new connections: EAG/EAGx combined is top (31.6%), followed by personal contacts (30.8%), with EAGx specifically cited by 19.2% of respondents.
