Lorenzo Buonanno🔸

Software Developer @ Giving What We Can
5214 karma · Joined · Working (0-5 years) · 20025 Legnano, Metropolitan City of Milan, Italy

Bio

Software Developer at Giving What We Can, trying to make giving significantly and effectively a social norm.

Posts
11

Comments
657

Topic contributions
7

Hopefully this is auspicious for things to come?

My understanding is that they already raise and donate millions of dollars per year to effective projects in global health (especially tuberculosis).
For what it's worth, their subreddit seems a bit ambivalent about explicit "effective altruism" connections (see here or here).


Btw, I would be surprised if the ITN framework was independently developed from first principles:

  • He says exactly the same 3 things in the same order
  • They have known about effective altruism for at least 11 years (see the top comment here)
  • There have been many effective altruism-themed videos in their "Project for Awesome" campaigns over several years
  • They have collaborated several times with 80,000 Hours and Giving What We Can
  • If you were deriving a framework from first principles, there are many other reasonable factors you could come up with (e.g. urgency)

DoneThat is also significantly cheaper (at least for now), and Christoph is very responsive to feedback/requests (he once replied to an email within 6 minutes).

I used DoneThat for a while and also highly recommend it, especially given the low cost ($5/month).

As a piece of feedback, I think you should have included this video in the post: https://www.loom.com/share/53d45343051846ca8328ccd91fa4c3a8, and that people should watch it before deciding whether to download the app. It made me feel much more confident about the privacy aspects (especially when using one's own Gemini API key).

If you upload it to YouTube you can also easily embed it in a bunch of places (including this forum)

I personally found it a very refreshing change of language/thinking/style from the usual EA Forum/LessWrong post, and found the extra effort needed to (hopefully) understand it both worthwhile and highly enjoyable.

My one-sentence summary/translation would be that advocating for longtermism would likely benefit on the margin from more of a virtue-ethics approach (e.g. using saints and heroes as examples) than from a rationalist/utilitarian one, since most people feel even less of an obligation towards future beings than towards the global poor, and many of the most altruistic people act altruistically for emotional/spiritual reasons rather than rational ones.

I could definitely have misunderstood the post, though, so please correct me if I misinterpreted it. There are also many more valuable points in it: e.g. that most people already agree on an abstract level that future people matter, and that actively causing them harm is bad, so longtermists should focus less on strengthening that case and more on other things. Another interesting point is that to "mitigate hazards we create for ourselves" we could take advantage of the fact that, for most people, "causing harm is intuitively worse than not producing benefit".

I think SummaryBot below also did a good job at translating.

Reposting this comment, made 12 days ago by the CEO of Open Philanthropy, as I think some people missed it:

A quick update on this: Good Ventures is now open to supporting work that Open Phil recommends on digital minds/AI moral patienthood. We're still figuring out where that work should slot in (including whether we’d open a public call for applications) and will update people working in the field when we do. Additionally, Good Ventures are now open to considering a wider range of recommendations in right-of-center AI policy and a couple other smaller areas (e.g. in macrostrategy/futurism), though those will be evaluated on a case-by-case basis for now. We’ll hopefully develop clearer parameters for GV interest over time (and share more when we have those). In practice, given our increasing work with other donors, we don’t think any of this is a huge update; we’d like to continue to hear about and expect to be able to direct funding to the most promising opportunities whether or not they are a fit for Good Ventures.

(More info on the film's creation in the FLI interview: Suzy Shepherd on Imagining Superintelligence and "Writing Doom")

Correct link: https://www.youtube.com/watch?v=McnNjFgQzyc 


Another FLI-funded YouTube channel is https://www.youtube.com/@Siliconversations, which has ~2M views on its AI safety videos

Posts on this topic that I liked:


I fairly strongly disagree with "be honest about your counterfactual impact—most people overestimate it.", and with the advice to only work at a nonprofit you consider effective if you think you're ~10x better than the counterfactual hire or "irreplaceable."

As an example, I'm confident that there are software developers who would have been significantly more impactful than me in my role at GWWC but didn't apply, and the extra ~$/year that they are donating (if they are actually donating more in practice than they otherwise would have) does not compensate for that.
I also think there's a good chance that I would have done other vaguely impactful work, or donated more myself, had they been hired instead of me, largely compensating for their missed donations.

I remember wondering the same a few years ago, and I came to the opposite conclusion. I think the biggest differences in my reasoning were:

  1. I think in practice it takes much more than 30 minutes on average to write a will, even more so if it's a significant amount of wealth (like $100k)
  2. I think the annualized chance of death for someone worth $100k at 25 is significantly lower than the population average
  3. People with no risk factors (e.g. heart disease, cancer) have a significantly lower chance of death, and if someone discovers a risk factor they can think about a will after that discovery

Also quickly noting that you're using the annualized chance of death for males in the US, but a significant percentage of EA Forum readers are women, and so have less than half the mortality rate between ages 15 and 37, and/or live in countries with a much lower youth mortality risk (e.g. in the UK it's 0.6 per 1,000 for 25 y/o males, in Italy 0.4, in the Netherlands 0.4 if I'm interpreting this correctly, and I expect Germany and other European countries to be similar; Canada is 0.97, Australia 0.6).
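To make the disagreement concrete, here's a rough back-of-envelope sketch. All inputs are my own illustrative assumptions (the US rate, the estate value, the time cost, the hourly opportunity cost), not figures from the original post, except the UK/Italy rates cited above; it also treats the whole estate as redirected by the will, which is an upper bound on the benefit:

```python
# Expected-value sketch: annual benefit of having a will vs. the one-off
# time cost of writing one. All numbers are illustrative assumptions.

wealth = 100_000             # assumed estate value, USD
hours_to_write_will = 5      # per point 1 above: a realistic time cost, not 0.5h
value_per_hour = 100         # assumed opportunity cost of time, USD/hour

mortality_per_1000 = {       # annualized deaths per 1,000 25 y/o males
    "US": 1.5,               # assumption, for comparison
    "UK": 0.6,               # figures cited above
    "Italy": 0.4,
}

one_off_cost = hours_to_write_will * value_per_hour
for country, rate in mortality_per_1000.items():
    # Expected dollars redirected per year, treating the whole estate as
    # the difference a will makes (an upper bound).
    expected_benefit = (rate / 1000) * wealth
    years_to_break_even = one_off_cost / expected_benefit
    print(f"{country}: ~${expected_benefit:.0f}/year, "
          f"break-even after ~{years_to_break_even:.1f} years")
```

On these made-up inputs, moving from the assumed US-male rate to the Italian one roughly quadruples the break-even time (from ~3 to ~12 years), which is the direction of the argument above.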

The community tag was originally introduced as a way to separate out FTX-scandal-related posts.


I don't think that's true, based on what CEA staff were posting publicly and some conversations I had at the time.

Some relevant posts and comment threads:
1. https://forum.effectivealtruism.org/posts/wvBfYnNeRvfEXvezP/moving-community-discussion-to-a-separate-tab-a-test-we#Why_consider_doing_this_at_all_
2. https://forum.effectivealtruism.org/posts/dDudLPHv7AgPLrzef/karma-overrates-some-topics-resulting-issues-and-potential
3. https://forum.effectivealtruism.org/posts/2jYDXwqSj87ZjLtwy/follow-and-filter-topics-and-an-update-to-the-community#3__The__Community__tag_and_topic
4. https://forum.effectivealtruism.org/posts/irhgjSgvocfrwnzRz/should-the-forum-be-structured-such-that-the-drama-of-the#GNLKxKvcijjcxSiRG


When I was a moderator, my understanding was that the community tag was more about separating posts related to EA as in "doing good" from posts related to EA as in "a specific community of people". E.g. people uninterested in the community but still interested in AI safety would still be the target audience of a post on "AI safety talent development".

That said, there were plenty of ambiguous cases, and users can tag any of their own posts as community when posting, so I agree that it's somewhat inconsistently applied.
