Community
Posts about the EA community and projects that focus on the EA community

Quick takes

-40 · 8d · 21
The context for what I'm discussing is explained in two Reflective Altruism posts: part 1 here and part 2 here. Warning: This is a polemic that uses harsh language. I still completely, sincerely mean everything I say here and I consciously endorse it.[1]

It has never stopped shocking and disgusting me that the EA Forum is a place where someone can write a post arguing that Black Africans need Western-funded programs to edit their genomes to increase their intelligence in order to overcome global poverty, and can cite overtly racist and white supremacist sources to support this argument (even a source with significant connections to the 1930s and 1940s Nazi Party in Germany and the American Nazi Party, a neo-Nazi party), and that post can receive a significant amount of approval and defense from people in EA, even after the thin disguise over top of the racism is removed by perceptive readers. That is such a bonkers thing and such a morally repugnant thing that I keep struggling to find words to express my exasperation and disbelief. Effective altruism as a movement probably deserves to fail for that, if it can't correct it.[2]

My loose, general impression is that people who got involved in EA because of global poverty and animal welfare tend to be broadly liberal or centre-left and tend to be at least sympathetic toward arguments about social justice and anti-racism. Conversely, my impression of LessWrong and the online/Bay Area rationalist community is that they don't like social justice, anti-racism, or socially/culturally progressive views. One of the most bewildering things I ever read on LessWrong was one of the site admins (an employee of Lightcone Infrastructure) arguing that closeted gay people probably tend to have low moral integrity because being closeted is a form of deception. I mean, what?! This is the "rationalist" community?? What are you talking about?! As I recall based on votes, a majority of forum users who
-1 · 11d
Just calling yourself rational doesn't make you more rational. In fact, hyping yourself up about how you and your in-group are more rational than other people is a recipe for being overconfidently wrong. Getting ideas right takes humility and curiosity about what other people think. Some people pay lip service to the idea of being open to changing their mind, but then, in practice, it feels like they would rather die than admit they were wrong.

This is tied to the idea of humiliation. If disagreement is a humiliation contest, changing one's mind can feel emotionally unbearable, because it feels as if to change your mind is to accept that you deserve to be humiliated, that it's morally appropriate. Conversely, if you humiliated others (or attempted to), to admit you were wrong about the idea is to admit you wronged these people, and did something immoral. That too can feel unbearable.

So, a few practical recommendations:
- Don't call yourself rational or anything similar
- Try to practice humility when people disagree with you
- Try to be curious about what other people think
- Be kind to people when you disagree so it's easier to admit if they were right
- Avoid people who aren't kind to you when you disagree so it's easier to admit if you were wrong
-4 · 17d · 1
I deleted my original comment about the first DDS attack because it was called a 'crackpot theory and shouldn't be on the forum'. I didn't phrase it well, but I was asking whether any catastrophic-risk groups have research or estimates on the probability of attacks like this increasing (especially with the global heat around the AGI race), and for any recommendations for how regular citizens can prepare for them.

A second attack hit this morning, less than a week after the first, and it's now picking up in the press as a potential threat. So I'm going to trust my gut on this one and say I'm not wrong in forecasting that this is an immediate emerging threat. I'm going to start compiling some work on this; let me know if you're interested.
10 · 1mo
TL;DR: $100,000 for insights into an EA's unsolved medical mystery (sharing on behalf of the patient to preserve their anonymity).

The Medical Mystery Prize is a patient-funded initiative offering a $100,000 grand prize (plus smaller awards) for ideas that help advance a difficult, unresolved medical case. The patient works in AI safety; the goal is to solve his health issue so that he can do his best work. All patient records are fully anonymized and HIPAA-compliant. Submissions for the prize will be reviewed by a licensed healthcare provider before reaching the patient. Even if you don't have a complete solution, it's worth taking a look; sometimes a fresh perspective or small hypothesis can make a real difference! Partial contributions will also be awarded smaller prize amounts. Check out the case details and submission info at themedicalmysteryprize.com.
1 · 1mo
In talking with OWA groups in Africa and Asia, I'm learning about a culture of dictatorship at OWA.

1. OWA holds 15 to 20+ meetings annually with grantees, excluding campaign meetings, mentorship, and trainings, in addition to 2 narrative reports each year. That has to be unacceptable, even if it's branded as collaboration.
2. OWA grantees in these regions are now required to submit "regular written updates regarding engagement with the companies".
3. Over 30 groups from Asia and Africa are in the alliance, serving 78% of the world population and over 60% of farmed chickens, yet OWA has only three staff to support groups in these regions. The job titles of some of these staff are "regional leads". I think that is insufficient if they're building a movement in these regions, but sufficient if they're passing on requests from the West.
4. OWA seeks to control which specific companies groups campaign against. In a recent webinar to OWA members on "Focus Local, Impact Global," they pitched to groups to leave Western companies operating in their countries and target local competitors instead.

I discovered these facts while researching OWA and attending their recent global summit. I haven't shared this feedback with the OWA team before this post, as they don't have a public anonymous feedback form.
16 · 2mo
Running EA Oxford Socials: What Worked (and What Didn't)

After someone reached out to me about my experience running EA socials for the Oxford group, I shared my experience and was encouraged to share what I sent him more widely. As such, here's a brief summary of what I found from a few terms of hosting EA Oxford socials.

The Power of Consistency

Every week at the same time, we would host an event. I strongly recommend this, or having some kind of strong schedule, as it lets people form a routine around your events and can help create EA-aligned friend groups. Regardless of the event we were hosting, we had a solid 5-ish person core who were there basically every week, which was very helpful. We tended to have 15 to 20 people per event, with fewer at the end of the term as people got busy with finishing tutorials.

Board Game Socials

Board game socials tended to work the best of the types of socials I tried. No real structure was necessary; just have a few strong EAs to set the tone, so it really feels like "EA boardgames," and then just let people play. Having the games acts as a natural conversation starter. Casual games especially are recommended; "Codenames" and "Coup" were particular favorites at my socials, but I can imagine many others working too. Deeper games have a place too, but they generally weren't primary. In the first two terms, we would just hold one of these every week. They felt like ways for people to just talk about EA stuff in a more casual environment than the discussion groups or fellowships.

"Lightning Talks"

We also pretty effectively did "Lightning Talks," basically EA powerpoint nights. As this was in Oxford, we could typically get at least one EA-aligned researcher or worker there every week we did it (which was every other week), and the rest of the time would be filled with community member presentations (typically between 5-10 minutes). These seemed to be best at re-engaging people who signed up once but had lost contact wi
13 · 3mo · 6
The Global Priorities Institute at Oxford University has shut down as of July. More information, a publication list, and additional groups are on the website. I'm surprised this hasn't been brought up, given how important GPI was in establishing EA as a legitimate academic research space. By my count, barring Trajan House, it now appears that EA has officially been severed from Oxford University. This feels like a significant change post-FTX; I see pros and cons to not being tied to one university. Thoughts?

Edited: to clarify, I meant the university, not the city.
1 · 3mo
Curious if there is any addictiveness benchmark for new technologies.

* How would it be measured? Would it be similar to training a preference model on rankings of multiple responses?
* I am aware that many people know technology can be addictive. I've seen people say the best way to avoid it is not to use it at all, or that it's the fault of the person who is addicted. Instead of people completely avoiding certain technologies like social media, is there any effort to make these technologies at the very least less addictive?
* I feel this could also be worthwhile, since we don't want a future of people simply looking at screens, like the humans in the movie WALL-E.

Is anyone working on making less addictive alternatives to these technologies, or on tools to reduce addiction?
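As a loose illustration of the preference-model idea raised above (my own sketch, not anything from the quick take): pairwise judgments of which technology feels "more addictive" could be fit with a simple Bradley-Terry-style model to produce benchmark scores. All app names, comparison data, and numbers below are hypothetical assumptions.

```python
import math

# Hypothetical pairwise judgments: (judged_more_addictive, judged_less_addictive).
# In practice these might come from user surveys or compulsive-use metrics.
comparisons = [
    ("app_a", "app_b"),
    ("app_a", "app_c"),
    ("app_b", "app_c"),
    ("app_a", "app_b"),
]

apps = sorted({a for pair in comparisons for a in pair})
scores = {a: 0.0 for a in apps}  # latent "addictiveness" score per app

# Fit by gradient ascent on the Bradley-Terry log-likelihood:
# P(i beats j) = sigmoid(score_i - score_j)
lr = 0.1
for _ in range(500):
    grads = {a: 0.0 for a in apps}
    for winner, loser in comparisons:
        p = 1.0 / (1.0 + math.exp(-(scores[winner] - scores[loser])))
        grads[winner] += 1.0 - p
        grads[loser] -= 1.0 - p
    for a in apps:
        scores[a] += lr * grads[a]

# Higher score = judged more addictive more often in the comparisons.
for app, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{app}: {s:.2f}")
```

How the pairwise comparisons would actually be collected (self-report, usage telemetry, or something else) is itself the hard, open part of the question.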

Posts in this space are about: Community · Effective altruism lifestyle