Bio

I currently lead EA Funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.


Quick non-exhaustive list of areas where I think a few strategic, dedicated, and ambitious altruists could make a significant dent within a year, because right now EA is significantly dropping the ball.

Improving the media, China stuff, increasing altruism, moral circle expansion, AI mass movement stuff, frontier AI lab insider coordination (within and among labs), politics in and outside the US, building up compute infrastructure outside the US, security stuff, EA/longtermist/SMA/other field building, getting more HNW people into EA, etc.

(List originally shared with me by a friend)

I suggested the following question for Carl Shulman a few years ago:

I'd like to hear his advice for smart undergrads who want to build their own similarly deep models in important areas which haven't been thought about very much, e.g. take-off speeds, the influence of pre-AGI systems on the economy, the moral value of insects, and preparing for digital minds (ideally including specific exercises/topics/reading/etc.).

I'm particularly interested in how he formed good economic intuitions, as they seem to come up a lot in his thinking/writing.

https://forum.effectivealtruism.org/posts/ytBxJpQsdEEmPAv9F/i-m-interviewing-carl-shulman-what-should-i-ask-him?commentId=vqrxdfNEnYioDEt4N

@Saul Munn recently asked me what my current take on this was and then suggested I share it online - it's very lightly edited.

I think if I were to speak to my undergrad self I'd prioritise:
    •    reading a lot of SOTA background material (e.g. Dwarkesh podcasts on AI stuff)
    •    becoming literate in a bunch of seemingly useful fields (e.g. CS, math, econ, philosophy), with some bias towards things that are fun and motivating
    •    trying to contribute to the intellectual frontier as quickly as possible (e.g. via LessWrong) on topics that seem very important
    •    finding a community of people who care about similar stuff, ideally in person - consider starting a "better futures" or "what to do about AGI" group.

This is pretty general advice. I might try to focus on Holden's list "Important, actionable research questions for the most important century" or similar.

Yeah, I think we have a substantive disagreement. My impression before and after reading your list above is that you think that being convinced of longtermism is not very important for doing work that is stellar according to "longtermism", and that it's relatively easy to convince people that x-risk/AIS/whatever is important.

I agree with the literal claim, but think that empirically longtermists represent the bulk of people who concern themselves with thinking clearly about how wild the future could be. I don't think all longtermists do this, but longtermism empirically seems to provide a strong motivation for trying to think about how wild the future could be at all.[1]

I also believe that thinking clearly about how wild the future could be is an important and often counterfactual trait for doing AIS work that I expect to actually be useful (though it's obviously not necessary in every case). Lots of work in the name of AIS is done by non-longtermists (which is great), but at the object level, I often feel their work could have been much more impactful if they had tried to think more concretely about wild AI scenarios. I know that longtermism is not about AI, and most longtermists are not actually working on AI.

So, for me the dominant question is whether more longtermist writing increases or decreases the supply of people trying to think clearly about the future. Overall, I'm like ... weakly increases (?), and there aren't many other leveraged interventions for getting people to think about the future.

I would be much more excited about competitions like:
1. Write branches of the AI 2027 forecast from wherever you disagree (which could be at the start).
2. Argue for features of a pre-IE (pre-intelligence-explosion) society that could navigate the IE well, and roadmap how we might get more of each feature, or think about critical R&D challenges for navigating an IE well.

etc. 


Also, somewhat unrelated to the above, but I suspect that where "philosophy" starts for me might be at a lower level of abstraction than where it starts for you. I would include things like Paul writing about what a good successor would look like, Ryan writing about why rogue AI may not kill literally everyone, etc., as "philosophy", though I'm not arguing that either of those specific discussions is particularly important.

P.S. fwiw, I don't think the writing style in this post was particularly poor, or that you came across as grumpy.
 

  1. ^

    I guess there are some non-longtermist bay area people trying to do this, but I feel like most of them don't then take very thoughtful or altruistic actions.

A few scattered points that make me think this post is directionally wrong, whilst I also feel meh about the forum competition and essays:

  • I agree that the essay competition doesn't seem to have surfaced many takes that I thought were particularly interesting or action-guiding, but I don't think that this is good evidence for "talking about longtermism not being important".
  • There are a lot of things that I would describe as "talking about longtermism" that seem important and massively underdiscussed (e.g. acausal trade and better futures-y things). I think you also think this.
  • The claim in the title seems about as valid as "talking about AI safety is not important" or "talking about global health is not important", because most AI safety and global health work is relatively unimportant. That said, the mean piece of GH, AIS, and longtermist work is very important. I think that pushing the "academic writing about longtermism" button increases the amount of "good" longtermist writing at least a bit - though I'd be like 50x more excited about Forethought + Redwood running a similar competition on things they think are important that are still very philosophy-ish/high level.
  • The track record of talking about longtermism seems very strong. For example, I think it's hard to tell a story for Open Phil's work in AIS and biosecurity that doesn't significantly route through "writing about longtermism".
  • I feel like this post is more about "is convincing people to be longtermists important?" or "should we just care about x-risk/AI/bio/etc.?". I strongly believe that most of the influential technical AIS contributors have been significantly influenced by longtermist writing, including the more philosophical aspects - though I wouldn't be surprised if, by their lights, longtermist writing isn't useful for the people they want to hire.
     

Yeah, I also think hanging out in a no-1:1s area is weirdly low-status/unexciting. I’d be a bit more excited about cause- or interest-specific areas like “talk about ambitious project ideas”.


(Weakly) Against 1:1 Fests

I just returned from EAG NYC, which exceeded my expectations - it might have been the most useful and enjoyable EAG for me so far.

Of course, it wouldn’t be an EAG without inexperienced event organisers complaining about features of the conference (without mentioning it in the feedback form), so, to continue that long tradition, here is an anti-1:1s take.

EAGs are focused on 1:1s to a pretty extreme degree. It’s common for my friends to have 10-15 thirty-minute 1:1s per day; at other conferences I’ve been to, it’s generally more like 0-5. I would prefer a culture of closer to five 1:1s per day, with half of them organised after the conference starts.

Some upsides of my imaginary system relative to the current system:

  • Far less tiring for attendees
  • Far more opportunity to use earlier conversations to inform later ones (e.g. you could have a new project idea, then talk to a collaborator about it, then secure funding all at the conference)
  • More opportunities for small group conversations, which are extremely hard to organise in Swapcard and, in my opinion, are much more valuable than 1:1s.
  • Less planning overhead: currently you need to start booking meetings very early so that people still have time in their calendars (and make regular pre-conference visits to Swapcard to see who has recently joined the platform).

Some concrete recommendations to try at the next EAG could be:
  • Figure out how to make group conversations easier to organise (maybe ditch Swapcard and have the Online team build their own platform?)
  • Block out every 2nd or 3rd session by default.
  • Create nice zones for spontaneous conversations (not sure how to do this well), or set up the space with more nooks for organic conversations (or maybe have high-effort afterparties with more of this vibe).
  • Encourage attendees to keep at least half of their schedule free until after the first day.


I’m not sure what actions I plan to take at an individual level; it feels hard for me to realise something like the above vision just for myself. Some options that I feel pretty good about trying include:

  • budget 3x more time for scheduling and make more small-group conversations happen
  • block out lots of time in Swapcard (though it’s not super useful if others don’t do this too)
  • think of lower-downside interventions and lobby the EAG team to try them out - one issue is that I’m not sure I have any interventions that would result in better feedback form scores in the short term, even if the change is better in the long term.

 

P.S. Thanks again to the EAG team for another excellent conference!

 

I’m not sure I understood the last sentence. I personally think that a bunch of the areas Will mentioned (democracy, persuasion, human + AI coups) are extremely important, and likely more useful on the margin than additional alignment/control/safety work for navigating the intelligence explosion. I’m probably a bit less “aligned ASI is literally all that matters for making the future go well”-pilled than you, but it’s definitely a big part of it.

I also don’t think that having higher odds of AI x-risk is a crux, though different “shapes” of intelligence explosion could be, e.g. if you think we’ll never get useful work on coordination/alignment/defense/AI strategy pre-foom, then I’d be more compelled by the totalising alignment view - but I do think that’s misguided.

the AI safety group was just way more exciting and serious and intellectually alive than the EA group — this is caricatured,


Was the AIS group led by people who had EA values or who were significantly involved with EA?

I’m sure it was a misunderstanding, but fwiw, in the first paragraph I do say “positive contributors”, by which I meant people having a positive impact.

I agree with some parts of your comment, though it’s not particularly relevant to the thesis that most people with significant responsibility for most of the top-tier work (according to my view on top-tier areas for making AGI go well) have values that are much more EA-like than would naively be expected.
