mal_graham🔸

Strategy Director @ Wild Animal Initiative
327 karma · Joined · Working (6–15 years) · Philadelphia, PA, USA

Comments (18)
Thanks, that's helpful. I agree that the former feels more natural but am not sure where that comes from. 
 

Not relevant to the main text here, but based on this I suspect at least part of the reason white folks in the UK have lower life expectancy is rates of alcohol consumption. See figure 1, for example. I haven't dug into the report methodology so my confidence is low, but it at least tracks with my experience living there. These data on cause of death are interesting as well. 

This might not be the place for a discussion of this, but I personally don't feel that the "robustness" of the Tomasikian chain of reasoning you note here is similar to the "robustness" of the idea that factory farms contain a crazy amount of suffering.

In the first instance, the specific chain of arrows above seems quite speculative, since we really have no idea how land use would change in a world with no factory farming. Are we that confident net primary productivity will increase? I'm aware there are good arguments for it, but I'd be surprised if someone couldn't come up with good arguments against if they tried.

More importantly, I don't think that's a sufficient reasoning chain to demonstrate that wild animal effects dominate? You'd need to show that wild+farmed animal welfare on post-factory farmed land uses is lower than wild+farmed animal welfare on current land uses, and that seems very sensitive to specific claims about moral weights, weights between types of suffering, empirical information about wild animal quality of life, what it means for a life to be net-negative, etc.

Or am I misunderstanding what you mean by robustness? I've just finished reading your unawareness sequence and mostly feel clueless about everything, including what it could mean for a reasoning chain to be robust.

 

I'm also very interested in this question, because it isn't obvious to me where to draw the line in fields like wild animal welfare. I think I know as little about nematode sentience + welfare, for example, as I do about possible far future beings. 

Maybe one difference is that it at least feels possible in theory to get more information about nematodes, but not really possible to get more information about far future beings? Although I'm skeptical of my intuitions here, since maybe it's easier than I think to get information about far future beings and harder than I think to get information about nematode sentience. 

Could you say a little more to translate the objections to option 3 into this case, and why you think they're crazy? 

Is your intuition strongly that Emily should stand down for option 3 reasons, or merely that Emily should stand down? Personally, my intuition that Emily should definitely stand down is probably not grounded in option 3 being compelling. I, like most people, am really intensely risk averse about harming children, and in your example it seems like you could maybe stand down Emily but still find another way to kill the terrorist (as happens in every movie about things like this). 

I'm pretty sympathetic to option 3, but it doesn't feel emotionally satisfying here. It kind of feels like the stakes of Emily's shoulder pain just can't matter enough for me to have a new attitude toward the situation, given the stakes of the stuff I'm clueless about. It feels like a bad reason to act. I think a reasonable response is that it's still the best reason I've got, but I'm at least sympathetic to feeling like it's unsatisfying.

Wow, was not expecting such a thorough answer, I really appreciate it! I will try to do justice to the existing literature in the talk :)

As a random aside, I hired Anthony to do an internship at WAI in ~2019 so it's very funny to me for him to have gone off and done longtermism things, that nevertheless have ended up relevant to WAI.

Cool to see someone writing about this. I have a few miscellaneous thoughts. For context: I'm the strategy director at Wild Animal Initiative and I was recently on the review panel for Morris Animal Foundation's call for proposals on amphibian and reptile health & welfare. 

  • Wild Animal Initiative has some unpublished, but hopefully forthcoming work from a former staff member on chytrid as a welfare issue.
  • Chytrid is basically all anyone is talking about in the amphibian space these days. That has made us (WAI) less interested in funding grants on it, because compared to other wild animal welfare issues, it's less neglected. I can't disclose non-public details but if you look at Morris' database of funded studies you can get a sense of how many amphibian proposals connect to chytrid: https://www.morrisanimalfoundation.org/studies?animal_type=466&study_category=All
  • That said, I still think this issue is wildly neglected given its scale. The fact that WAI doesn't really have enough resources to work on this is an illustration of how funding-constrained the WAW ecosystem is, not a sign that it doesn't matter.
  • I suspect that the most cost-effective approach to working on this problem is lobbying to get governments to pay for large-scale treatment & mitigation, rather than trying to intervene at the level of providing care, but I haven't done any cost-effectiveness analyses on this or anything.
  • For the sake of pointing out why someone might not prioritize this, though, I should note that folks concerned about net-negative lives might think amphibians are particularly likely to be in that position, because of the high numbers of juveniles that don't survive to adulthood; and those who worry about insect suffering might worry that high frog populations contribute to high degrees of insect suffering. So the scale of suffering is much higher than the farming issue raised in the other frog post, but the externalities are perhaps more complicated, if your ethics considers those an issue.

I think there are extremely obvious disanalogies along several axes here: (1) the agentic difference between humans going to space by choice and wild animals being taken or bred there without agency; (2) the difference in our general ability to understand non-human animal welfare versus human welfare; (3) the general set point of best guesses as to whether human lives are net-positive or net-negative. Without more context on your premises or underlying moral values, it's hard to know where to begin a response.

I suspect we aren't reasoning from the same principles and therefore there isn't much point in continuing the conversation, but feel free to expand on what your views would be about bringing wild animals to space (or humans) if you'd like. 

Thank you for your comment! It's actually a topic of quite a lot of discussion for us, so I would love to connect on it. I'll send you a DM soon. 

Just for context, the main reason I've felt a little constrained to the US/UK context is due to comparative advantage considerations, such as having staff who are primarily based in those countries/speaking English as our organizational common tongue/being most familiar with those academic communities, etc. 

I definitely think the WAW community, in general, should be investing much more outside of just US/UK/EU -- but am less sure whether it makes sense for WAI to do so, given our existing investments/strengths. But I could be convinced otherwise! 

Even if we keep our main focus in the US/UK, I'd be very interested in hearing more about how WAI might be able to support the "people hungry to build the field" in other countries, so that could be another thing to discuss. 

Hi Max, thanks for the positive feedback and for the question. 

I will ask our research team if they are aware of any specific papers I could point to; several of them are more familiar with this landscape than I am. My general idea that AI-enabled modeling would be beneficial comes more from the very basic guess that, since AI is pretty good at coding, stuff that relies on coding might get a lot better if we had TAI. If that's right, then even if we don't see great examples of modeling work being useful now, it could nevertheless get a lot better sooner than we think.

Thanks for bringing up the usefulness sentence, I think I could have been a lot clearer there and will revise it in future versions. I think I mainly meant that I was less confident about what TAI would mean for infrastructure and academic influence, and so any possible implications for WAW strategy would be more tentative. However, thinking about it a bit more now, I think the two cases are a bit different.

For infrastructure: In part, I down-weighted this issue because I find the idea that the manufacturing explosion will allow every scientist to have a lab in their house less probable, at least on short timelines, than software-based takeoffs. But also, and perhaps more importantly, I generally think that on my list of reasons to do science within academia, 1 and 3 are stronger reasons than 2. Infrastructure can be solved with more money, while the others can't. So even if thinking about TAI caused me to throw out the infrastructure consideration, I might still choose to focus on growing WAWS inside academia, and that makes figuring out exactly what TAI means for infrastructure less useful for strategy.

For "academic stamp of approval": I think I probably just shouldn't have mentioned this here, because I do end up talking about legitimacy in the piece quite a bit. But here's an attempt at articulating more clearly what I was getting at: 

  • Assume TAI makes academic legitimacy less important after TAI arrives.
  • You still want decision-makers to care about wild animal welfare before TAI arrives, so that they use it well etc.
  • Most decision-makers don't know much about WAW now, and one of the main pathways now that wildlife decision-makers become familiar with a new issue is through academia.
  • So, academic legitimacy is still useful in the interim.
  • And, if academic legitimacy is still important after TAI arrives, you also want to work on academic legitimacy now.
  • So, it isn't worth spending too much time thinking about how TAI will influence academic legitimacy, because you'd do the same thing either way. 

That said, I find this argument suspiciously convenient, given that as an academic, of course I'm inclined to think academic legitimacy is important. This is definitely an area where I'm interested in getting more perspectives. At minimum, taking TAI seriously suggests to me that you should diversify the types of legitimacy you try to build, to better prepare for uncertainty. 
