Studying behaviour and interactions of boundedly rational agents, AI alignment and complex systems.
Research fellow at Future of Humanity Institute, Oxford. Other projects: European Summer Program on Rationality. Human-aligned AI Summer School. Epistea Lab.
My impression is EAGx Prague 22 managed to balance 1:1s with other content simply by not offering SwapCard 1:1 slots for part of the time, having plenty of spaces for small-group conversations, and suggesting to attendees that they aim for something like a balanced diet. (Turning off SwapCard slots does not prevent people from scheduling 1:1s, it just adds a little friction; empirically that seems enough to prevent the mode where people fill all their time with 1:1s.)
As far as I understand, this will most likely not happen, because of the weight given to / Goodharting on the idea that people reporting 1:1s is the most valuable use of time, metrics tracking "connections formed", and the weird psychological effect of 1:1 fests. (People feel stimulated, connected, energized... Part of the effect is superficial.)  Also, the counterfactual value lost from the lack of conversational energy at scales of ~3 to 12 people is not visible and likely not tracked in feedback. (I think this has predictable effects on which types of collaborations do start and which do not, and the effect is on the margin bad.) The whole thing is downstream of problems like Don't Over-Optimize Things / We can do better than argmax.
Btw I think you are too apologetic / self-deprecating ("inexperienced event organisers complaining about features of the conference"). I have decent experience running events, and everything you wrote is spot on.
Thanks for the explanation. My guess is this decision should not be delegated to LLMs but mostly to authors (possibly with some emphasis on correct classification in the UI).
I think "the post concerns an ongoing conversation, scandal or discourse that would not be relevant to someone who doesn't care about the EA community" should not be interpreted expansively, otherwise it can easily come to mean "any controversy or criticism". I will repost it without the links to current discussions; these are non-central, and similar points have been raised repeatedly over the years, so it is easy to find dozens of texts making them.
I wrote a post on “Charity” as a conflationary alliance term. You can read it on LessWrong, but I'm also happy to discuss it here.
If you're wondering why I didn't post it here: I originally posted it here with a LW cross-post. It was immediately slapped with the "Community" tag, despite not being about the community, but about different ways people try to do good, talk about charity, and the ensuing confusions. It is about the space of ideas, not about the actual people or orgs.
With posts like OP announcements about details of EA group funding, or about the EAG admissions bar, not being marked as Community, I find it increasingly hard to believe the "Community" tag is driven by the stated principle of marking "Posts about the EA community and projects that focus on the EA community", and not by other motives, e.g. forum mods expressing the view "we want people to think less about this / this may be controversial / we prefer newcomers not to read this".
 
My impression is this moves substantive debates about ideas to the side, which is a state I don't want to cooperate with by just leaving it as it is, so I moved the post to LessWrong and replaced it with this comment.
It seems plausible the impact of that single individual act is so negative that the aggregate impact of EA is negative.
I think people should reflect seriously on this possibility and not fall prey to wishful thinking ("let's hope speeding up the AI race and making it superpower-powered is the best intervention! it's better if everyone warning about this was wrong and Leopold is right!").
The broader story here is that EA prioritization methodology is really good at finding highly leveraged spots in the world, but there isn't a good methodology for figuring out what to do in such places, and there also isn't a robust pipeline for promoting virtues and virtuous actors into such places.
I don't think so. I think in practice:
1. Some people don't like the big-R community very much.
AND
2a. Some people don't think improving the EA community's small-r rationality/epistemics should be one of the top ~3-5 EA priorities.
OR
2b. Some people do agree this is important, but don't clearly see the extent to which the EA community imported healthy epistemic vigilance and norms from Rationalist or Rationality-adjacent circles.
=>
As a consequence, they are at risk of distancing from small-r rationality as collateral damage / by neglect.
Also, I think many people in the EA community don't think it's important to try hard at being small-r rational at the level of aliefs. Whatever the actual situation revealed by actual decisions, I would expect the EA community to at least pay lip service to epistemics and reason, so I don't think stated preferences are strong evidence.
"Being against small-r rationality is like being against kindness or virtue; no one thinks of themselves as taking that stand." 
Yes, I do agree almost no one thinks of themselves that way. It is maybe somewhat similar to "being against effective charity": I would be surprised if people thought of themselves that way either.
Reducing rationality to "understand most of Kahneman and Tversky's work" and cognitive psychology would be extremely narrow and miss most of the topic.
To quickly get an independent perspective, I recommend reading the "Overview of the Handbook" part of The Handbook of Rationality (2021, MIT Press, open access). For an extremely crude calibration: the Handbook has 65 chapters. I'm happy to argue at least half of them cover topics relevant to the EA project. About ~3 are directly about Kahneman and Tversky's work. So, by this proxy, you would miss about 90% of what's relevant.
 
Sorry for the sarcasm, but what about returning to the same level of non-involvement and non-interaction between EA and Rationality as you describe happening in Sydney? I.e., EA events are just co-hosted with LW Rationality and Transhumanism, and the level of non-influence of Rationality ideas is kept on par with Transhumanism?
It would be indeed very strange if people made the distinction, thought about the problem carefully, and advocated for distancing from 'small r' rationality in particular.
I would expect real cases to look like:
- someone is deciding on an EAGx conference program; a talk on prediction markets sounds subtly Rationality-coded, and is not put on the schedule
- someone applies to OP for funding to create a rationality training website; this is not funded because making the distinction between Rationality and rationality would require too much nuance
- someone is deciding which intro-level materials to link to; some links to LessWrong are not included
The crux is really what's at the end of my text: if people take steps like the above, and nothing else, they are distancing also from the small-r thing.
Obviously, part of the problem for the separation plan is that the Rationality and Rationality-adjacent community actually made meaningful progress on rationality and rationality education; a funny example here in the comments ... Radical Empath Ismam advocates for the split and suggests EAs should draw from the "scientific skepticism" tradition instead of Bay Rationality. Well, if I take that suggestion seriously and start looking for what could be good intro materials relevant to the EA project (which "debunking claims about telekinesis" advocacy content probably isn't) ... I'll find the New York City Skeptics and their podcast, Rationally Speaking. Run by Julia Galef, who also later wrote Scout Mindset. Excellent. And also co-founded CFAR.
(crossposted from twitter) Main thoughts:
1. Maps pull the territory.
2. Beware what maps you summon.
Leopold Aschenbrenner's series of essays is a fascinating read: there are a ton of locally valid observations and arguments. A lot of the content is the type of stuff mostly discussed in private. Many of the high-level observations are correct.
At the same time, my overall impression is that the set of maps sketched pulls toward existential catastrophe, and this is true not only of the "this is how things can go wrong" part, but also of the "this is how we solve things" part.
Leopold is likely aware of this angle of criticism, and deflects it with "this is just realism" and "I don't wish things were like this, but they most likely are". I basically don't buy that claim.
Travel: mostly planned (conferences, some research retreats).
We expect closely coordinated teamwork on the LLM psychology direction, with somewhat looser connections to the gradual disempowerment / macrostrategy work. Broadly, ACS is small enough that anyone is welcome to participate in anything they are interested in, and generally everyone has an idea of what others work on.