lilly

I think you're right that some of the abundance ideas aren't exactly new to EA folks, but I also think it's true that: (1) packaging a diverse set of ideas/policies (re: housing, science, transportation) under the heading of abundance is smart and innovative, (2) there is newfound momentum around designing and implementing an abundance-related agenda (e.g.), and (3) the implementation of this agenda will create opportunities for further academic research (enabling people to, for instance, study some of those cruxes). All of this is to say: if I were a smart, ambitious, EA-oriented grad student, I think I would find the intellectual opportunities in this space exciting and appealing to work on.

"Currently, the online EA ecosystem doesn't feel like a place full of exciting new ideas, in a way that's attractive to smart and ambitious people."

I think one thing that has happened is that as EA has grown and professionalized, an increasing share of EA writing/discourse is occurring in more formal outlets (e.g., Works in Progress, Asterisk, the Ezra Klein podcast, academic journals, and so on). As an academic, it's a better use of my time (both in terms of direct impact and my own professional advancement) to publish something in one of these venues than to write on the Forum. Practically speaking, this means that some of the people thinking most seriously about EA are spending less of their time engaging with online communities. While there are certainly tradeoffs here, I'm inclined to think this is overall a good thing: it subjects EA ideas to a higher level of scrutiny (since we now have editors, in addition to people weighing in on Twitter/the Forum/etc. about the merits of various articles) and it broadens exposure to EA ideas.

I also don't really buy that the ideas being discussed in these more formal venues aren't exciting or new; as just two recent examples, I think (1) the discourse/opportunities around abundance are exciting and new, as is (2) much of the discourse happening in The Argument. (While neither of these examples is explicitly EA-branded, they are both pretty EA-coded, and lots of EAs are working on/funding/engaging with them.) 

Thanks for writing this. It feels like the implicit messaging, ideas, and infrastructure of the EA community have historically been targeted towards people in their 20s (i.e., people who can focus primarily on maximizing their impact). A lot of the EA writing (and EAs) I first encountered pushed for a level of commitment to EA that made more sense for people who had few competing obligations (like kids or aging parents). This resonated with me a decade ago, when it made EA feel like an urgent mission, but today it feels more unrealistic, and sometimes even alienating.

Given that the average age of the EA community is increasing, I wonder if it’d be good to rethink this messaging/set of ideas/infrastructure; to create a gentler, less hardheaded EA—one that takes more seriously the non-EA commitments we take on as we age, and provides us with a framework for reconciling them with our commitment to EA. (I get the sense that some orgs—like OP, which seems to employ older EAs on average—do a great job of this through, e.g., their generous parental leave policies, but I’d like to see the implicit philosophy connoted by these policies become part of EA’s explicit belief system and messaging to a greater extent.)

I watched this with a non-EA (omnivore) friend, and we both found it compelling, informative, and not preachy. Nice job!! With respect to the advocacy ask you make at the end: we would benefit from further guidance on how exactly to do this, and on what practical steps (aside from changing diets) people who care about these issues should take. For instance, I don't have a great sense of how to talk about factory farming, because it's hard to broach these issues without implicitly condemning someone's behavior (which often feels both socially inappropriate and counterproductive). It would be easier to broach this, I think, if there were specific positive actions I could recommend, or at least some concrete guidance on what to say versus not say when the topic comes up. I have been a vegetarian for many years, so this kind of conversation has come up organically many times (often over meals). My natural inclination is always to quickly explain why I don't eat meat and then change the subject, so I don't make whoever asked uncomfortable (since they're often eating meat). But presumably there's a better way to approach this, so I'm curious whether you or others have thoughts, or whether there's research on this.

One question: why are viewer minutes a metric we should care about? QAVMs seem importantly different from QALYs/DALYs, in that the latter matter intrinsically (i.e., they correspond to suffering associated with disease). But viewer minutes only seem to matter if they're associated with some other, downstream outcome (advocacy? donating to AI safety causes? pivoting to work on this?). By analogy, QAVMs seem akin to "number of bednets distributed" rather than to something like "cases of malaria averted" or "QALYs."

The fact that you adjust for quality of audience seems to suggest a theory of change in the vein of advocacy or pivoting, but I think this is actually pretty important to specify, because I would guess the theory of change for these different types of media (e.g., TikToks vs. long-form content) is quite different, and one unit of QAVM might accordingly translate into impact quite differently.
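To make that worry concrete, here's a toy model (my own sketch, not something from the post; $r_m$ is a hypothetical, medium-specific conversion rate):

$$\text{Impact} \approx \sum_{m \in \text{media}} \text{QAVM}_m \times r_m$$

Here $r_m$ converts a quality-adjusted viewer minute on medium $m$ into the downstream outcome we actually care about (advocacy actions, donations, career pivots). If $r_{\text{TikTok}}$ and $r_{\text{long-form}}$ plausibly differ by an order of magnitude, then a single QAVM total that aggregates across media will obscure most of the variation in impact.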


I would also guess that the overwhelming majority (>95%) of highly impactful jobs are not at explicitly EA-aligned organizations, just because only a minuscule fraction of all jobs are at EA orgs. It can be harder to identify highly impactful roles outside of these specific orgs, but it's worth trying to do this, especially if you've faced a lot of rejection from EA orgs.

Okay, so a simple gloss might be something like "better futures work is GHW for longtermists"?

In other words, I take it there's an assumption that people doing standard EA GHW work are not acting in accordance with longtermist principles. But fwiw, I get the sense that plenty of people who work on GHW are sympathetic to longtermism, and perhaps think—rightly or wrongly—that doing things like facilitating the development of meat alternatives will, in expectation, do more to promote the flourishing of sentient creatures far into the future than, say, working on space governance.

I apologize because I'm a bit late to the party, haven't read all the essays in the series yet, and haven't read all the comments here. But with those caveats, I have a basic question about the project:

Why does better futures work look so different from traditional, short-termist EA work (i.e., GHW work)?  

I take it that one of the things we've been trying to do by investing in egg-sexing technology, strep A vaccines, and so on is make the future as good as possible; plenty of these projects have long time horizons, and presumably the goal of investing in them today is to ensure that—contingent on making it to 2050—chickens live better lives and people no longer die of rheumatic heart disease. But the interventions recommended in the essay on how to make the future better look quite different from the ongoing GHW work.

Is there some premise baked into better futures work that explains this discrepancy, or is this project in some way a disavowal of current GHW priorities as a mechanism for creating a better future? Thanks, and I look forward to reading the rest of the essays in the series.

I'm not saying something in this realm is what's happening here, but in terms of common causes of people identifying as "EA adjacent," I think there are two potential kinds of brand confusion one may want to avoid:

  1. Associations with a particular brand (what you describe)
  2. Associations with brands in general:

I think EAs often want to be seen as relatively objective evaluators of the world, and this is especially true of the issues they care about. The second you identify as being part of a team/movement/brand, people stop seeing you as an objective arbiter of issues associated with that team/movement/brand; in other words, they discount your view because they see you as more biased. If you tell someone you're a fan of the New York Yankees and then predict they're going to win the World Series, they'll discount your view relative to if you had just said you follow baseball but aren't on the Yankees bandwagon in particular. I suspect some people identify as politically independent for this same reason: they want to appraise issues objectively, and/or want to seem like they do.

My guess is that this second kind of brand confusion concern is the primary thing leading many EAs to identify as EA adjacent; whether or not that's reasonable is a separate question, but I think you could definitely make the case that it is.


It's a tractability issue. For these interventions to be worth funding, they would need to reduce our chance of extinction not just now, but over the long term. And I just haven't seen many examples of projects that seem likely to do that.
