This is a point that came up recently in conversations about democracy protection, and I thought it merited a brief post.

Impact, tractability, and neglectedness is still the go-to framework for evaluating potential EA cause areas. But impact is fundamentally different from the other two.

Impact is more of a cause-level variable. Whether AI, animal welfare, or nuclear risk is really important is a judgment that tends to carry over to any intervention you might pursue within that field. Every intervention within AI safety affects p(doom) by a different amount, but each of those amounts is multiplied by the same term, E[utility(doom)], which is massive. The structure isn't as cleanly multiplicative in animal welfare, since each intervention affects some number of animals rather than some percentage of all animals. Still, animal welfare interventions tend to land at high orders of magnitude simply because of the scale of the problem.
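
To make the multiplicative structure concrete, here's a minimal sketch in Python. Every number is invented purely for illustration, not an actual estimate:

```python
# Toy illustration (all numbers made up): in an x-risk cause area,
# every intervention's expected impact shares one enormous cause-level
# factor, E[utility(doom)], while the intervention-level term
# (the reduction in p(doom)) varies by comparatively modest amounts.

E_UTILITY_DOOM = -1e15  # cause-level constant: vast (negative) utility of doom

interventions = {
    "alignment research": 1e-4,  # hypothetical reduction in p(doom)
    "policy advocacy":    5e-5,
    "field building":     2e-5,
}

for name, delta_p in interventions.items():
    # expected utility gained ≈ Δp(doom) × |E[utility(doom)]|
    impact = delta_p * -E_UTILITY_DOOM
    print(f"{name}: {impact:.2e}")
```

The interventions differ from each other by factors of a few, but all of them inherit the same astronomical cause-level multiplier, which is the sense in which impact lives at the level of the cause.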

By contrast, tractability and neglectedness are more intervention-level variables. Some interventions within a cause area are super neglected and some are not; some are super tractable and some are not. Some cause areas are likely to contain more tractable or neglected interventions than others, but that is only a prior to bring to the evidence, not a key part of the impact calculation. If you find good evidence of a super tractable, super neglected intervention within a less-neglected, less-tractable cause area, you should still go for it. Moreover, if you keep finding tractable and neglected interventions within a cause area, you should update your prior and stop assuming that the low-hanging fruit is already taken.
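
One way to picture that updating is a toy Beta-Bernoulli model. The numbers here are made up and the model is only a sketch, not a claim about how to actually do cause prioritization:

```python
# Toy Beta-Bernoulli sketch (illustrative only): treat "this candidate
# intervention turned out to be tractable and neglected" as a Bernoulli
# draw, with a Beta prior on the underlying rate for the cause area.

alpha, beta = 1, 9  # pessimistic prior: expect ~10% of candidates to pan out

# 1 = found a tractable, neglected intervention; 0 = didn't
observations = [1, 1, 1, 0, 1]

for hit in observations:
    alpha += hit
    beta += 1 - hit

posterior_mean = alpha / (alpha + beta)
print(f"Posterior rate of good finds: {posterior_mean:.0%}")  # ~33%, up from 10%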
