
abrahamrowe

5441 karma · Joined · Working (6-15 years)

Bio

Director of Operations at GovAI.

I previously co-founded and served as Executive Director at Wild Animal Initiative, was the COO of Rethink Priorities from 2020 to 2024, and ran an operations consultancy, Good Structures, from 2024 to 2025.

Comments

Yeah, I agree with the standardization issue and all the downsides you outline, which for me would be the main appeal of someone creating a standard, and might resolve most of the concerns (since then there would be consistent practices on when organizations use cash vs. accruals). I think that generally, organizations that do modified cash accrue things on a timed basis (e.g. liabilities that will exist for longer than a month get accrued) and a size basis (e.g. major multi-year grants might be accrued), and just codifying that as a standard would help.

I think the primary advantage is that cash accounting has way less room for error. It uses half the general ledger lines, so roughly half as many places to make mistakes. And since a journal entry that only touches P&L and liability/receivable accounts isn't reconcilable against a bank statement, in practice transactions that only touch those accounts seem to generate more errors than ones touching cash accounts.
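To illustrate (a toy sketch with made-up account names and amounts, not anyone's real books): the same $1,000 expense, invoiced in March and paid in April, takes two ledger lines on a cash basis and four on an accrual basis, and only the cash lines can be checked against a bank statement.

```python
# Toy example: a $1,000 expense invoiced in March, paid in April.
# Account names and amounts are made up for illustration.

# Cash basis: one entry, recorded when the cash moves.
# Both lines can be checked against the bank statement.
cash_basis = [
    {"date": "2024-04-05", "account": "Office Supplies Expense", "debit": 1000},
    {"date": "2024-04-05", "account": "Cash", "credit": 1000},
]

# Accrual basis: two entries. The March entry touches only a P&L account
# and a liability account, so nothing in it reconciles to a bank
# statement, and errors there can sit undetected for a long time.
accrual_basis = [
    # When the invoice is received:
    {"date": "2024-03-20", "account": "Office Supplies Expense", "debit": 1000},
    {"date": "2024-03-20", "account": "Accounts Payable", "credit": 1000},
    # When the invoice is paid:
    {"date": "2024-04-05", "account": "Accounts Payable", "debit": 1000},
    {"date": "2024-04-05", "account": "Cash", "credit": 1000},
]

print(len(cash_basis), "ledger lines vs.", len(accrual_basis))  # 2 vs. 4
```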

And I think I regularly encounter organizations doing accrual whose liability accounts are just really messed up (e.g. I'm pretty sure every organization on earth that accrues payroll taxes has some payroll tax account with a messed-up value it has to correct).

I do think that for EA organizations, INPAS seems like a big improvement on GAAP. One issue for adoption in the US: since statements need to be prepared according to GAAP for charitable solicitation registration audits in most states, some state-level policy change would be needed, as organizations might be hesitant to pay for two audits.

Nice! Thanks for sharing.

I only read the implementation guidance, so these comments are not super in the weeds. Also, I'm only comparing to GAAP, not FRS/IFRS:


  • Restricted net assets seem to be handled way better than under GAAP, and I'm very much in favor of getting rid of release transactions, though in practice this seems like something organizations mostly don't actually do on their own books anyway; releases are mostly added by auditors.
    • It also gets rid of the problems caused by people trying to track restricted assets on the balance sheet, which is clearly an intuition lots of people in nonprofits have / want to act on, so that seems good.
  • It doesn't seem like it can handle endowments/permanently restricted assets as cleanly as GAAP does - this seems like a genuine upside of GAAP's restriction handling, but it also maybe isn't super relevant to many EA orgs?
  • The rest of the standard seems basically fine, but I wouldn't expect EA organizations to see major changes in their books if they adopted it. I suspect it basically wouldn't change how EA orgs record transactions (at least relative to GAAP), and would just impact preparation during an audit.
    • Since almost no EA funders ask for financial reporting (especially in a standardized format), I don't know if it would impact organizations' engagement with funders.
  • I could see this being really nice for anyone who takes US government grants, though that would require a substantial policy change.


My controversial accounting take will forever remain that the vast majority of EA nonprofits and funders would be better served by organizations preparing financial statements on a modified cash basis rather than under any accrual standard. I suspect this is true for basically any nonprofit that isn't a service provider (i.e., not a hospital, food pantry, etc.), and I'd be way more excited to see a standard that supported modified cash accounting for audit purposes.

I think this is plausibly among the top two most promising immediate funding opportunities in the wild animal welfare space (besides general support for WAI, where I have giant conflicts of interest). CXL is really good at fundraising from non-EA donors, and if this works (which it seems to have a decent chance of doing), it effectively channels conservation dollars and for-profit investment into a promising WAW intervention. I'd be excited to chat in more detail with anyone considering funding it about why I think it's so promising.

I think this is true as a response in certain cases, but many philanthropic interventions probably aren't tried enough times to get the sample size, and lots of communities are small. It's pretty easy to imagine a situation like:

  • You and a handful of other people make some positive EV bets.
  • The median outcome from doing this is that the world is worse, and all of the attempts at these bets end up neutral or negative.
  • The positive EV is never realized and the world is worse on average, despite both the individuals and the ecosystem being +EV.

It seems like this response would imply you should only do EV maximization if your movement is large (or that its impact is reliably predictable if the movement is large).

But I do think this is a fair point overall — though you could imagine a large system of interventions with the same features I describe that would have the same issues as a whole.

Probably, but not sure! Yeah, the above is definitely ignoring cluelessness considerations, on which I don't have any particularly strong opinion.

I don't think this is quite what I'm referring to, but I can't quite tell! My quick read is that we're talking about different things (I think because I used the word "utility" very casually). I'm not talking about my own utility function with regard to some action, but about the potential outcomes of that action for others, and I don't know if I'm embracing risk-aversion views so much as relating to their appeal.

Or maybe I'm misunderstanding, and you're just rejecting the conclusion that there's a moral difference between taking, say, an action with +1 EV and a 20% chance of causing harm and an action with +1 EV and a 0% chance of causing harm - i.e., you think I just shouldn't care about that difference?

I think I mean something slightly different from difference-making risk aversion, but I see what you're saying. I don't even know if I'm arguing against EV maximization - I'm more just trying to point out that EV alone doesn't feel like it fully captures the value I care about (e.g. the likelihood of causing harm relative to doing nothing feels like another important consideration). Specifically, it feels concerning that there are plausible circumstances where I'm more likely than not to cause additional harm, yet the action has positive EV. I imagine lots of AI risk work could be like this: doing some research project has a strong chance of advancing capabilities a bit (a high probability of a little negative value), but maybe a very small chance of massively reducing risk (a low probability of tons of positive value). The EV looks good, but my median outcome is a world that's worse than if I hadn't done anything.

Expected value maximization hides a lot of important details.  

I think a pretty underrated and forgotten part of Rethink Priorities' CURVE sequence is the risk aversion work. The defenses of EV maximization against more risk-aware models often seem to boil down to EV's simplicity. But EV actually just hides a lot of important detail, including, most importantly, that if you only care about EV maximization, you might be forced to conclude that worlds where you're more likely to cause harm than not are preferable.

As an example, imagine that you're considering a choice that leads to one of 10 equally likely outcomes. In 6 of them, you create -1 utility. In 3 of them, your impact is neutral. In 1 of them, you create +7 utility. The EV of taking the action is (-6 + 0 + 7)/10 = 0.1. This is a positive number! Your expected value is positive, even though you have a 60% chance of causing harm. In expectation you're more likely than not to cause harm, but also in expectation you should expect to increase utility a bit. This is weird.
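Spelling out the arithmetic (the same example, in code):

```python
# The ten equally likely outcomes from the example above:
# six harmful (-1), three neutral (0), one very good (+7).
outcomes = [-1] * 6 + [0] * 3 + [7]

ev = sum(outcomes) / len(outcomes)
p_harm = sum(1 for o in outcomes if o < 0) / len(outcomes)

print(f"EV = {ev:+.1f}")          # EV = +0.1 (positive!)
print(f"P(harm) = {p_harm:.0%}")  # P(harm) = 60%
```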


Scenario 1

More concretely, consider the following choices, which are equivalent from an EV perspective:

Option A. A 0% chance of causing a harmful outcome, but in expectation will cause +10 utility

Option B. A 20% chance of causing a harmful outcome, but in expectation will cause +10 utility

It seems really bizarre to not prefer Option A. But if I prefer Option A, I'm just accepting risk aversion to at least some extent. But what if the numbers slip a little more?


Scenario 2

Option A. A 0% chance of causing a harmful outcome, but in expectation will cause +9.9999 utility

Option B. A 20% chance of causing a harmful outcome, but in expectation will cause +10 utility

Do I really want to take a 20% chance of causing harm in exchange for a 0.001% gain in expected utility?


Scenario 3

Option A. A 0% chance of causing a harmful outcome, but in expectation will cause +5 utility

Option B. A 99.99999% chance of causing a harmful outcome, but in expectation will cause +10 utility

Do I really want to be exceedingly likely to cause harm in exchange for a 100% gain in expected utility?
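To make the contrast concrete, here's a toy scoring sketch. The EVs and harm probabilities come straight from the scenarios above, but the penalty weight lam is an arbitrary number I picked, not a principled risk-aversion model; the point is only that any scoring rule that sees downside probability at all can flip the rankings.

```python
# Each option summarized as (expected value, probability of causing harm),
# taken from the three scenarios above.
scenarios = {
    "Scenario 1": {"A": (10.0, 0.0), "B": (10.0, 0.20)},
    "Scenario 2": {"A": (9.9999, 0.0), "B": (10.0, 0.20)},
    "Scenario 3": {"A": (5.0, 0.0), "B": (10.0, 0.9999999)},
}

def risk_adjusted(ev: float, p_harm: float, lam: float = 6.0) -> float:
    """EV docked by an (arbitrary) penalty per unit of harm probability."""
    return ev - lam * p_harm

for name, options in scenarios.items():
    by_ev = max(options, key=lambda k: options[k][0])  # ties go to A
    by_risk = max(options, key=lambda k: risk_adjusted(*options[k]))
    print(f"{name}: EV alone picks {by_ev}, risk-adjusted picks {by_risk}")

# EV alone is indifferent in Scenario 1 and picks B in Scenarios 2 and 3;
# with this penalty, the risk-adjusted score picks A every time.
```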


I don't know the answers to the above scenarios, but just saying "the EV is X" with no reference to the downside risk misses a massive part of the picture. It seems much better to say "the expected range of outcomes is a 20% chance of something really bad, a 70% chance of nothing happening, and a 10% chance of a really, really great outcome, which averages out to >0" - that's meaningfully different from saying "no downside risk, and a 10% chance of a pretty good outcome, so >0 on average".

I think that risk aversion is pretty important, but even if it isn't incorporated into people's thinking at all, it really doesn't feel like EV produces a number I can take at face value, and that makes me feel like EV isn't actually that simple.

The place where I currently see this happening the most is naive expected value maximization in reasoning about animal welfare - I feel like I've seen an uptick in "I think there is a 52% chance these animals live net-negative lives, so we should do major irreversible things to reduce their population". But it's pretty easy to imagine those irreversible things turning out to be harmful, or your efforts backfiring, in ways that cause net harm.

How did you get to 58%? That seems pretty precise, so I'm interested in the reasoning there.

This isn't an answer to your question, but I think the underlying assumption is way too strong given available evidence.

Taking for granted that bad experiences outweigh good ones in the wild (something I'm sympathetic to as well, but which definitely has not been clearly demonstrated), I think it's pretty much impossible to hold any kind of position on whether climate change increases or decreases wild animal welfare.

  • Why do you think insects will end up dominating the calculus of animals impacted by climate change? What if most animals impacted by climate change are aquatic rather than terrestrial? That seems entirely plausible, and I don't think we have any idea how climate change will impact aquatic animal populations in the very long run.
  • It might in principle be true that warmer climates = more insects, but what actually ends up driving insect populations is going to be a lot more complicated: the pace and nature of human development (e.g. changes in habitat destruction), weather variance within and across years, etc. Maybe species that are especially good at navigating high weather variance will do especially well for the next few centuries, producing local maxima that look very different from the theoretical effects.
  • It wouldn't surprise me if total land area by biome type is way more relevant to insect populations than overall temperature. This again seems like a question where we know basically nothing about the long-term impacts of climate change.

I guess my overall view is that having any kind of reasonable opinion on the long-run impact of climate change on insect or other animal populations, beyond extremely weak priors, is basically impossible right now, and most assumptions we can make will end up being wrong in various ways.

I also think it doesn't follow that if we believe suffering in nature outweighs positive experience, we should try to minimize the number of animals. What if it's more cost-effective to improve the lives of those animals? Especially given that we're at best deeply uncertain whether suffering outweighs positive experience, it seems clearly better to explore cost-effective ways to improve welfare rather than to reduce populations, since welfare interventions are robust regardless of whether negative or positive experiences dominate in the wild.
