I'm currently a co-director at EA Netherlands (with Marieke de Visscher). We're working to build and strengthen the EA community here.
Before this, I worked as a consultant on urban socioeconomic development projects and programmes funded by the EU. Before that, I studied liberal arts (in the UK) and then philosophy (in the Netherlands).
Hit me up if you wanna find out about the Dutch EA community! :)
I don't have a super strong view on which set of guiding principles is better - I just thought it was odd for them to be changed in this way.
If pushed, I prefer the old set, and a significant part of that preference stems from the amount of jargon in the new set. My ideal would perhaps be a combination of the old set and the 2017 set.
We work to overcome our natural tendency to care most about those closest to us. This means taking seriously the interests of distant strangers, future generations, and nonhuman animals - anyone whose wellbeing we can affect through our choices. We continuously question the boundaries we place around moral consideration, and we're willing to help wherever we can do the most good, not just where helping feels most natural or comfortable.
We do the hard work of choosing where to focus our limited time, money, and attention. This means being willing to say "this is good, but not the best use of marginal resources" - and actually following through, even when it means disappointing people or turning down appealing opportunities. We resist scope creep and don't let personal preferences override our considered judgments about where we can have the most impact.
We treat our beliefs as hypotheses to be tested rather than conclusions to be defended. This means actively seeking disconfirming evidence, updating based on data, and maintaining genuine uncertainty about what we don't yet know. We acknowledge the limits of our evidence, don't oversell our findings, and follow arguments wherever they lead - even when the conclusions are uncomfortable or threaten projects we care about.
We take unusual ideas seriously and are willing to consider approaches that seem weird or unconventional if the reasoning is sound. We default to transparency about our reasoning, funding, mistakes, and internal debates. We make our work easy to scrutinise and critique, remain accessible to people from different backgrounds, and share knowledge rather than hoarding it. We normalise admitting when we get things wrong and create cultures where people can acknowledge mistakes without fear, while still maintaining accountability.
We align our behaviour with our stated values. This means being honest even when it's costly, keeping our commitments, and treating people ethically regardless of their status or usefulness to our goals. How we conduct ourselves - especially toward those with less power - reflects our actual values more than our stated principles. We hold ourselves and our institutions to high standards of personal and professional conduct, recognising that being trustworthy is foundational to everything else.
This is a useful write-up, thanks for sharing!
As I said in my last comment, I'd go to Zurich simply for the friends thing.
Thoughts on Amsterdam:
There are a few Dutch EAs who have worked at quant firms and done E2G - let me know if you'd like an introduction. @Imma🔸 might also be interesting to chat with. She's a software engineer who moved from NL to CH for E2G reasons (IIRC) but then moved back.
Random suggestion: Dunno if you've already got a master's degree, but the UK has just expanded the list of universities whose graduates qualify for its 'High Potential Individual' visa. AFAIK this is easier than getting sponsorship. Unfortunately, Ghent didn't quite make the cut, but lots of other EU universities did (including Amsterdam, which has a solid AI safety scene). So you could sample a new city, get a one-year master's, and gain the option of an easier move to London.
P.S. Great to see you're coming to EAGxAmsterdam - hope you have a great time!
Thanks for clarifying!
But at what level should that standardised set of outcome-related indicators operate?
As you mention, we already have indicators for ultimate impact (QALYs, etc). And the indicators at the opposite end of the spectrum are pretty simple (completion rates, NPS, etc.).
It feels like you're looking for indicators that occupy the space in between? Something like 80k's old DIPY metric or AAC's ICAP?
I thiiiiink both organisations tried these metrics and then discontinued them because they weren't that useful?
This is a good take. 80k are good at it, bluedot too, and GWWC have started doing good things as well.
I think national orgs like EA Netherlands are well-positioned to do more, but we're only just waking up to this and are learning how best to allocate a portion of the EUR 30-40k in unrestricted funding we get from CEA. At EAN we've started working with Amplify and a marketing agency and have had great results (we 3x'd our intro programme completions and increased our EAGx attendance by 35%). We'd like to do more of this in the future if we can find the money or re-allocate more of our existing funds.