I currently lead EA Funds.
Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.
Unless explicitly stated otherwise, opinions are my own, not my employer's.
You can give me positive and negative feedback here.
Yeah, I think we have a substantive disagreement. My impression, before and after reading your list above, is that you think being convinced of longtermism is not very important for doing work that is stellar according to "longtermism", and that it's relatively easy to convince people that x-risk/AIS/whatever is important.
I agree with the literal claim, but think that empirically longtermists represent the bulk of people who concern themselves with thinking clearly about how wild the future could be. I don't think all longtermists do this, but longtermism empirically seems to provide a strong motivation for trying to think about how wild the future could be at all.[1]
I also believe that thinking clearly about how wild the future could be is an important and often counterfactual trait for doing AIS work that I expect to actually be useful (though it's obviously not necessary in every case). Lots of work in the name of AIS is done by non-longtermists (which is great), but at the object level, I often feel their work could have been much more impactful if they tried to think more concretely about wild AI scenarios. I know that longtermism is not about AI, and most longtermists are not actually working on AI.
So, for me, the dominant question is whether more longtermism writing increases or decreases the supply of people trying to think clearly about the future. Overall, I'm like ... weakly increases (?), and there aren't many other leveraged interventions for getting people to think about the future.
I would be much more excited about competitions like:
1. Write branches of the AI 2027 forecast from wherever you disagree (which could be at the start).
2. Argue for features of a pre-IE society that can navigate the IE well, and roadmap how we might get more of those features or think about critical R&D challenges for navigating an IE well.
etc. 
Also, somewhat unrelated to the above, but I suspect that where "philosophy" starts for me might be lower abstraction than where it starts for you. I would include things like Paul writing about what a good successor would look like, Ryan writing about why rogue AI may not kill literally everyone, etc., as "philosophy", though I'm not arguing that either of those specific discussions is particularly important.
P.S. fwiw I don't think the writing style in this post was particularly poor, or that you came across as grumpy
 
[1] I guess there are some non-longtermist bay area people trying to do this, but I feel like most of them don't then take very thoughtful or altruistic actions.
A few scattered points that make me think this post is directionally wrong, whilst I also feel meh about the forum competition and essays:
Yeah, I also think hanging out in a no-1:1s area is weirdly low status/unexciting. I’d be a bit more excited about cause- or interest-specific areas like “talk about ambitious project ideas”.
I just returned from EAG NYC, which exceeded my expectations - it might have been the most useful and enjoyable EAG for me so far.
Ofc, it wouldn’t be an EAG without inexperienced event organisers complaining about features of the conference (without mentioning it in the feedback form), so, to continue that long tradition, here is an anti-1:1s take.
EAGs are focused on 1:1s to a pretty extreme degree. It’s common for my friends to have 10-15 thirty-minute 1:1s per day; at other conferences I’ve been to, it’s generally more like 0-5. I would prefer a culture of closer to 5 1:1s per day, with half of them organised after the conference starts.
Some upsides of my imaginary system relative to the current system:
I’m not sure what actions I plan to take at an individual level; it feels hard for me to realise something like the above vision just for myself. Some options that I feel pretty good about trying include:
P.S. Thanks again to the EAG team for another excellent conference!
 
I’m not sure I understood the last sentence. I personally think that a bunch of areas Will mentioned (democracy, persuasion, human + AI coups) are extremely important, and likely more useful on the margin than additional alignment/control/safety work for navigating the intelligence explosion. I’m probably a bit less “aligned ASI is literally all that matters for making the future go well” pilled than you, but it’s definitely a big part of it. 
I also don’t think that having higher odds of AI x-risk is a crux, though different “shapes” of intelligence explosion could be, e.g. if you think we’ll never get useful work on coordination/alignment/defense/AI strategy pre-foom, then I’d be more compelled by the totalising alignment view - but I do think that’s misguided.
I’m sure it was a misunderstanding, but fwiw, in the first paragraph, I do say “positive contributors” by which I meant people having a positive impact.
I agree with some parts of your comment, though it’s not particularly relevant to the thesis that most people with significant responsibility for most of the top-tier areas (according to my view on the top-tier areas for making AGI go well) have values that are much more EA-like than would naively be expected.
I don’t think the opposite of (i) is true.
Imagine a strong fruit loopist, who believes there’s an imperative to maximise total fruit loops.
If you are not a strong fruit loopist, there’s no need to minimise total fruit loops; you can just have preferences that don’t have much of an opinion on how many fruit loops should exist (i.e. everyone’s position).
I suggested the following question for Carl Shulman a few years ago:
I'd like to hear his advice for smart undergrads who want to build their own similarly deep models in important areas which haven't been thought about very much, e.g. take-off speeds, the influence of pre-AGI systems on the economy, the moral value of insects, preparing for digital minds (ideally including specific exercises/topics/reading/etc.).
I'm particularly interested in how he formed good economic intuitions, as they seem to come up a lot in his thinking/writing.
https://forum.effectivealtruism.org/posts/ytBxJpQsdEEmPAv9F/i-m-interviewing-carl-shulman-what-should-i-ask-him?commentId=vqrxdfNEnYioDEt4N
@Saul Munn recently asked me what my current take on this was and then suggested I share it online - it's very lightly edited.
I think if I were to speak to my undergrad self I'd prioritise:
• reading a lot of SOTA background material (e.g. Dwarkesh podcasts on AI stuff)
• becoming literate in a bunch of seemingly useful fields (e.g. CS, math, econ, philosophy) with some bias towards things that are fun and motivating
• trying to contribute to the intellectual frontier as quickly as possible (e.g. via LessWrong) on topics that seem very important
• finding a community of people that care about similar stuff, ideally in person - consider starting a "better futures" or "what to do about AGI" group.
This is pretty general advice. I might try to focus on Holden's list "Important, actionable research questions for the most important century" or similar.