Quick takes

Bella

EAs are trying to win the "attention arms race" by not playing. I think this could be a mistake.

  • The founding ideas and culture of EA were created and “grew up” in the early 2010s, when online content consumption looked very different.
    • We’ve overall underreacted to shifts in the landscape of where people get ideas and how they engage with them.
    • As a result, we’ve fallen behind, and should consider making a push to bring our messaging and content delivery mechanisms in line with 2020s consumption.
  • Also, EA culture is dispositionally calm, rational, and dry.
    • Th
... (read more)
Showing 3 of 14 replies

I see the extent of badness of social media platforms (for mental health or society at large) as orthogonal to the question of whether they can be used to cost-effectively attract talent to EA.

gergo
Short-form content doesn't have to sacrifice fidelity! It will contain less information, but the way to use it is to attract people to then engage with long-form content.
gergo
There are ad formats where people don't have to leave the platform, just quickly share their contact information in an in-built form and then continue the mindless scrolling :-) Once they are in a better place mentally, they can read our follow-up email!  There is also a whole "science" behind landing page optimisation, where if people click on your ad, you take them elsewhere but make it as low-friction as possible to sign up to your thing afterwards.

Hey y'all,

My TikTok algorithm recently presented me with this video about effective altruism, with over 100k likes and (TikTok claims) almost 1 million views. This isn't a ridiculous amount, but it's a pretty broad audience to reach with one video, and it's not a particularly kind framing of EA. As far as criticisms go, it's not the worst: it starts with Peter Singer's thought experiment and takes the moral imperative seriously as a concept, but it also frames several EA and EA-adjacent activities negatively, saying EA, quote, "is spending millions on hos... (read more)

huw
My hobby horse around these parts has been that EA should be less scared about reaching out to the left (where I’m politically rooted), and thinking about what commonalities we have. This is something I have already seen in the animal welfare movement, where EAs are unafraid to work with existing vegan activism, and have done a good job of selling philanthropic funding to them, despite having large differences in opinion on the margins. As you note, it’s not unreasonable that EA looks very far left from some perspectives. GiveDirectly is about direct empowerment, and I would argue that a lot of global development work, especially economic development, can be anti-imperial and generally concord with Marxist ideas of the internationale. Some better outreach and PR management in these communities would go a long way in the same way that it has for the political centre-left, who seem to get lots more attention from EA.
Yarrow Bouchard 🔸
The number I’ve seen people throw out a few times to estimate the number of people who identify with the effective altruism movement is 10,000, although I don’t know where that comes from. In one survey/poll I read (I think it was Pew or Gallup), 5% of Americans identify as being on the far left. 5% of the American population is 17 million. If the American far left is going to change ideologically or culturally, it probably won’t be because of anything the effective altruism movement does. It’s just too big in comparison. I think there’s a sense in which you’ve just gotta resign yourself to the idea that many people on the far left will dislike effective altruism, insofar as they know anything about it, indefinitely into the future.

I think you have some interesting thoughts about messaging and outreach. For people who are concerned with paternalism or neocolonialism, or who are distrustful of charities, GiveDirectly is a great option. So, promoting GiveDirectly to people with these concerns seems like a good idea. I wonder also if explaining charities that do simple things, like the Against Malaria Foundation giving out bednets, might be appealing to people, too. I feel like that’s so simple, it’s hard to imagine it somehow being secretly evil.

I’m personally fairly worn out and discouraged from trying, over many years, to talk to far leftist friends, acquaintances, and members of various communities (online and local). Despite voting for a social democratic party and having many strongly socially progressive and economically progressive/social democratic views, I’ve often had a hard time finding common ground with many people on the far left, to the extent that I’ve ended relationships with friends and acquaintances and left certain communities. Some of the views I hold that I was in several cases not able to find common ground on:

  • Governments should be democratic rather than authoritarian
  • It is morally unacceptable to commit terrorist attacks against civili...

5% of Americans identify as being on the far left

However, I would strongly wager that the majority of this sample does not believe in the three ideological points you outlined around authoritarianism, terrorist attacks, and Stalin & Mao (I think it is also quite unlikely that the people viewing the TikTok in question would believe these things either). Those latter beliefs are extremely fringe.

The Ezra Klein Show (one of my favourite podcasts) just released an episode with GiveWell CEO Elie Hassenfeld!

Idea for someone with a bit of free time: 

While I don't have the bandwidth for this atm, someone should make a public (or private for, say, policy/reputation reasons) list of people working in (one or multiple of) the very neglected cause areas — e.g., digital minds (this is a good start), insect welfare, space governance, AI-enabled coups, and even AI safety (more for the second reason than others). Optional but nice-to-have(s): notes on what they’re working on, time contributed, background, sub-area, and the rough rate of growth in the field (you pr... (read more)

Started something sorta similar about a month ago: https://saul-munn.notion.site/A-collection-of-content-resources-on-digital-minds-AI-welfare-29f667c7aef380949e4efec04b3637e9?pvs=74

Eryn Van Wijk
What a wonderful idea! Mayank referred me over to this post, and I think EA at UIUC might have to hop on this project. I'll see about starting something in the next month or so and sharing a link to where I'm compiling things in case anyone else is interested in collaborating on this. Or, it's possible an initiative like it already exists that I'll stumble upon while investigating (though such a thing may well be outdated).

I’ve seen a few people in the LessWrong community congratulate the community on predicting or preparing for covid-19 earlier than others, but I haven’t actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it. I looked into this, and as far as I can tell, this self-congratulatory narrative is a complete myth.

Many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.... (read more)

Showing 3 of 25 replies

Following up a bit on this, @parconley. The second post in Zvi's covid-19 series is from 6pm Eastern on March 13, 2020. Let's remember where this is in the timeline. From my quick take above:

On March 8, 2020, Italy put a quarter of its population under lockdown, then put the whole country on lockdown on March 10. On March 11, the World Health Organization declared covid-19 a global pandemic. (The same day, the NBA suspended the season and Tom Hanks publicly disclosed he had covid.) On March 12, Ohio closed its schools statewide. The U.S. declared a nationa

... (read more)
Yarrow Bouchard 🔸
I spun this quick take out as a full post here. When I submitted the full post, there was no/almost no engagement on this quick take. In the future, I'll try to make sure to publish things only as a quick take or only as a full post, but not both. This was a fluke under unusual circumstances. Feel free to continue commenting here, cross-post comments from here onto the full post, make new comments on the post, or do whatever you want. Thanks to everyone who engaged and left interesting comments.
Jason
I like this comment. This topic is always at risk of devolving into a generalized debate between rationalists and their opponents, creating a lot of heat but not light. So it's helpful to keep a fairly tight focus on potentially action-relevant questions (of which the comment identifies one).

Reading Will's post about the future of EA (here), I think there is also an option to "hang around and see what happens". It seems valuable to have multiple similar communities. For a while I was more involved in EA, then more in rationalism. I can imagine being more involved in EA again.

A better Earth would build a second Suez Canal, to ensure that we don't suffer trillions in damage if the first one gets blocked. Likewise, having two "think carefully about things" movements seems fine.

It hasn't always felt like this "two is better than one" feeling... (read more)

Showing 3 of 9 replies
Nathan Young
Sure, and do you want to stand by any of those accusations? I am not going to argue the point with two blog posts. What is the point you think is the strongest? As for Moskovitz, he can do as he wishes, but I think it was an error. I do think that ugly or difficult topics should be discussed, and I don't fear that. LessWrong and Manifest have cut okay lines through these topics, in my view. But it's probably too early to judge.

Well, the evidence is there if you're ever curious. You asked for it, and I gave it.

David Thorstad, who writes the Reflective Altruism blog, is a professional academic philosopher and, until recently, was a researcher at the Global Priorities Institute at Oxford. He was an editor of the recent Essays on Longtermism anthology published by Oxford University Press, which includes an essay co-authored by Will MacAskill, as well as essays by a few other people well-known in the effective altruism community and the LessWrong community. He has a number of publish... (read more)

Nathan Young
I often don't respond to people who write far more than I do.  I may not respond to this. 

Hi, does anyone from the US want to donation-swap with me to a German tax-deductible organization? I want to donate $2410 to the Berkeley Genomics Project via Manifund.

Is there currently an effective altruism merch/apparel store? If not, do people think there is demand? I'd be happy to run it or help someone set it up. (A quick search shows previous attempts that are now closed; if anyone knows why, that would be cool to know too.)

Joseph
I'm curious how easy or hard it is to set up some drop shipping. A few items (t-shirts, hoodies, mugs, caps) with a few choices of designs might be feasible, much like the Shrimp Welfare Project Shop, or the DFTBA shop.

It's quite easy; I actually already did it with Printful + Shopify. I stalled out because (1) I realized it's much more confusing to deal with all the copyright stuff and stepping on toes (I don't want to be competing with EA itself or EA orgs, and didn't feel like coordinating with a bunch of people), and (2) you kind of get raked using an easy, fully automated stack. Not a big deal, but with shipping, hoodies end up being like 35-40 and t-shirts almost 20. I felt like given the size of EA we should probably just buy a heat press or embroidery machine since w... (read more)

James Herbert
Not one I know of. It's on my long-term to-do list for EA Netherlands.

A quick OpenAI o1-preview BOTEC for additional emissions from a sort of Leopold scenario ~2030, assuming energy is mostly provided by natural gas, since I was kinda curious. Not much time was spent on this, and I took the results at face value. I (of course?) buy that emissions don't matter in the short term, in a world where R&D is increasingly automated and scaled.

Phib: Say an additional 20% of US electricity was added to our power usage (e.g. for AI) over the next 6 years, and it was mostly natural gas. Also, that AI inference is used at an increasing rate, sa... (read more)
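For concreteness, here is a minimal sketch of the kind of arithmetic this BOTEC involves. The round numbers below (US electricity use, the emissions intensity of gas generation, total US emissions) are my own rough assumptions, not figures from the o1 output quoted above:

```python
# Rough, illustrative BOTEC only. All figures are order-of-magnitude assumptions,
# not numbers taken from the (truncated) o1 output above.

US_ELECTRICITY_TWH_PER_YEAR = 4_000   # roughly recent US annual electricity use
ADDED_FRACTION = 0.20                 # the quick take's assumed +20% for AI by ~2030
GAS_KG_CO2_PER_KWH = 0.4              # rough emissions intensity of gas-fired power
US_TOTAL_CO2_MT_PER_YEAR = 5_000      # rough current US energy-related CO2 (Mt/yr)

added_twh = US_ELECTRICITY_TWH_PER_YEAR * ADDED_FRACTION   # ~800 TWh/yr
added_kwh = added_twh * 1e9                                # TWh -> kWh
added_co2_mt = added_kwh * GAS_KG_CO2_PER_KWH / 1e9        # kg -> Mt

print(f"Added generation: {added_twh:.0f} TWh/yr")
print(f"Added emissions:  {added_co2_mt:.0f} Mt CO2/yr "
      f"(~{added_co2_mt / US_TOTAL_CO2_MT_PER_YEAR:.0%} of current US emissions)")
```

Under these assumptions, the extra ~800 TWh/yr of gas-fired generation works out to roughly 320 Mt of CO2 per year, on the order of 6% of current US energy-related emissions.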

I live in Australia, and am interested in donating to the fundraising efforts of MIRI and Lightcone Infrastructure, to the tune of $2,000 USD for MIRI and $1,000 USD for Lightcone. Neither of these is tax-advantaged for me. Lightcone is tax-advantaged in the US, and MIRI is tax-advantaged in a few countries according to their website.

Anyone want to make a trade, where I donate the money to a tax-advantaged charity in Australia that you would otherwise donate to, and you make these donations? As I understand it, anything in Effective Altruism Austral... (read more)
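For anyone unfamiliar with how donation swaps work, here is a toy sketch of the tax logic. The 37% marginal rate is a hypothetical example, not a claim about anyone's actual tax situation, and real deductibility rules are more complicated than a flat deduction:

```python
# Toy illustration of the tax logic behind a donation swap.
# The 37% marginal rate is a hypothetical example; real rules are more complex.

AMOUNT = 3_000           # combined MIRI + Lightcone figure from the quick take (USD)
AU_MARGINAL_RATE = 0.37  # hypothetical Australian marginal tax rate

# Without a swap: the Australian donor gives directly to MIRI/Lightcone,
# which are not tax-deductible in Australia, so no tax is recovered.
recovered_without_swap = 0.0

# With a swap: the Australian donor instead gives AMOUNT to an Australian
# tax-deductible charity that the counterparty would otherwise have supported,
# and the counterparty gives AMOUNT to MIRI/Lightcone. Both charities still
# receive the same totals, but the Australian gift is now deductible.
recovered_with_swap = AMOUNT * AU_MARGINAL_RATE

print(f"Extra tax recovered by swapping: ${recovered_with_swap - recovered_without_swap:,.0f}")
```

The charities receive the same totals either way; the swap just routes each donor's money through the organisation that is deductible in their own country.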

This is now covered for Lightcone, but MIRI is still open.

Mitchell Laughlin🔸
Can confirm, and happy to vouch. Tax-effective Australian charities and funds:

* Against Malaria Foundation
* Deworm the World Initiative (led by Evidence Action)
* Effective Altruism Australia
* GiveDirectly
* Giving What We Can
* Helen Keller International
* Malaria Consortium
* New Incentives
* One Acre Fund
* StrongMinds
* Unlimit Health (formerly SCI)
* All Grants Fund by GiveWell
* Top Charities Fund by GiveWell
* Environment Fund by Giving Green

Londoners!
@Gemma 🔸 is hosting a co-writing session this Sunday, for people who would like to write "Why I Donate" posts. The plan is to work in poms (pomodoro sessions) and publish something during the session.

I can’t join this Sunday (finals season whoo!), but this is a really good idea. I’d love to see more initiatives like this to encourage writing on the Forum—especially during themed weeks.

Also, I’m always down to do (probably remote) co-working sessions with people who want to write Forum posts.

A semi-regular reminder that if anybody wants to join EA (or EA-adjacent) online book clubs, I'm your guy.

Copying from a previous post:

I run some online book clubs, some of which are explicitly EA and some of which are EA-adjacent: one on China as it relates to EA, one on professional development for EAs, and one on animal rights/welfare/advocacy. I don't like self-promoting, but I figure I should post this at least once on the EA Forum so that people can find it if they search for "book club" or "reading group." Details, including links for joining each

... (read more)

I came to one, it was great! Thanks Joseph for your tireless organizing. 

In Development, a global development-focused magazine founded by Lauren Gilbert, has just opened their first call for pitches. They are looking for 2-4k word stories about things happening in the developing world. They're especially excited about pitches from people living in low- and middle-income countries. They pay 2k USD per article; submissions close Jan 12. More info here.

The mental health EA cause space should explore more experimental, scalable interventions, such as promoting anti-inflammatory diets at school/college cafeterias to reduce depression in young people, or using lighting design to reduce seasonal depression. What I've seen of this cause area so far seems focused on psychotherapy in low-income countries. I feel like we're missing some more out-of-the-box interventions here. Does anyone know of any relevant work along these lines? 

A few points:

  1. There is still a lot of progress to be made in low-income country psychotherapy, which I think many EAs find counterintuitive. StrongMinds and Friendship Bench could both be about 5× cheaper, and have found ways to get substantially cheaper every year for the past half decade or so. At Kaya Guides, we’re exploring further improvements and should share more soon.
  2. Plausibly, you could double cost-effectiveness again if it were possible to replace human counsellors with AI in a way that maintained retention (the jury is still out here).
  3. The Hap
... (read more)
Yarrow Bouchard 🔸
My gut feeling based on knowledge, reasoning, and experience is that the low-hanging fruit like diet and lighting is quite low-impact and probably has low to middling cost-effectiveness — but I haven’t done any math, nor any experiments. If I had research bucks to spend on experimental larks, I would try to push the psychotherapeutic frontier. For example, I might fund grounded theory research into depression. Or I might do a clinical trial on the efficacy of schema therapy for depression — there have been some promising results, but not many studies.

I think Johann Hari’s core point is correct — or at least a core point can be extracted from what he’s saying that is correct. Anti-depressants are very helpful for some people and moderately helpful for most people. Medical clinics that give ketamine to patients with treatment-resistant depression are helpful. Treatments that stimulate the brain with magnets and electricity are helpful. Neurofeedback may be helpful. But what all these approaches have in common is that they’re trying to treat the brain like the engine in a car.

This kind of argument often gets mixed in with people who say that anti-depressants don’t work or are against them for some reason, or with people who advocate for non-evidence-based, woo-woo "treatments". But that’s not what I’m saying. Everyone who’s depressed should talk to a doctor about anti-depressants because the evidence for their efficacy is good and, even better, the side-effects for most people most of the time are fairly minor (provided they don’t mix them with the wrong drugs or substances), so the risk of trying them is low. And if one anti-depressant doesn’t work, the standard approach doctors will take is to try 3-5 (over time, not all at once), to maximize the chance of one of them working. Other treatments like medical ketamine may be helpful or even life-changing for some people.

But I also think pharmacological and other biologistic approaches only take us so far. Depression is...
NickLaing
I think this is a good idea, but perhaps even better executed by "non mental health" people. If your expertise is in psychotherapy, why ditch that enormous competitive advantage? I also think the evidence base on this stuff isn't yet quite there? But I'm not up to date...

What are some resources for doing your own global priorities research (GPR) that is longer than the couple of months recommended in this 80k article but shorter than a lifetime's worth of work as a global priorities researcher?

This is an annoying feature of search: (this is the wrong Will MacAskill)

Rate limiting on the EA Forum is too strict. Given that people karma downvote because of disagreement, rather than because of quality or civility — or they judge quality and/or civility largely on the basis of what they agree or disagree with — there is a huge disincentive against expressing unpopular or controversial opinions (relative to the views of active EA Forum users, not necessarily relative to the general public or relevant expert communities) on certain topics.

This is a message I saw recently:

You aren't just rate limited for 24 hours once you fal... (read more)

Showing 3 of 15 replies
Thomas Kwa
Claude thinks possible outgroups include the following, which is similar to what I had in mind
Yarrow Bouchard 🔸
a) I’m not sure all of those count as someone who would necessarily be an outsider to EA (e.g. Will MacAskill only assigns a 50% probability to consequentialism being correct, and he and others in EA have long emphasized pluralism about normative ethical theories; there’s been an EA system change group on Facebook since 2015 and discourse around systemic change has been happening in EA since before then).

b) Even if you do consider people in all those categories to be outsiders to EA or part of "the out-group", us/them or in-group/out-group thinking seems like a bad idea, possibly leading to insularity, incuriosity, and overconfidence in wrong views.

c) It’s especially a bad idea to not only think in in-group/out-group terms and seek to shut down perspectives of "the out-group" but also to cast suspicion on the in-group/out-group status of anyone in an EA context who you happen to disagree with about something, even something minor — that seems like a morally, subculturally, and epistemically bankrupt approach.
  • You're shooting the messenger. I'm not advocating for downvoting posts that smell of "the outgroup", just saying that this happens in most communities that are centered around an ideological or even methodological framework. It's a way you can be downvoted while still being correct, especially by the LEAST thoughtful 25% of EA Forum voters.
  • Please read the quote from Claude more carefully. MacAskill is not an "anti-utilitarian" who thinks consequentialism is "fundamentally misguided", he's the moral uncertainty guy. The moral parliament usually recommends actions similar to consequentialism with side constraints in practice.

I probably won't engage more with this conversation.

Here are some quick takes on what you can do if you want to contribute to AI safety or governance (they may generalise, but no guarantees). Paraphrased from a longer talk I gave; transcript here.

  • First, there’s still tons of alpha left in having good takes.
    • (Matt Reardon originally said this to me and I was like, “what, no way”, but now I think he was right and this is still true – thanks Matt!)
    • You might be surprised, because there are many people doing AI safety and governance work, but I think there's still plenty of demand for good takes, and
... (read more)

EA Connect 2025: Personal Takeaways

Background

I'm Ondřej Kubů, a postdoctoral researcher in mathematical physics at ICMAT Madrid, working on integrable Hamiltonian systems. I've engaged with EA ideas since around 2020—initially through reading and podcasts, then ACX meetups, and from 2023 more regularly with Prague EA (now EA Madrid after moving here). I took the GWWC 10% pledge during the event.

My EA focus is longtermist, primarily AI risk. My mathematical background has led me to take seriously arguments that alignment of superintelligent AI may face fund... (read more)

Mo Putera
Aside: wow, the slide presentation you linked to above is both really aesthetically pleasing and full of great content; thanks for sharing :)

Remark: the slides are from Joey's presentation, not my own, but I don't think he would mind me sharing them.

Most of the talks had slides available.

Joris 🔸
Congrats on taking the pledge Ondřej!