
I'm posting this in preparation for Draft Amnesty Week (October 13-19), but please also use this thread for posts you don't plan to write for Draft Amnesty. The last time I posted this question, there were some great responses. 

If you have multiple ideas, I'd recommend putting them in different answers, so that people can respond to them separately.

It would be great to see:

  • Both spur-of-the-moment vague ideas and more fully considered ones. If you're in the latter camp, you could even share a Google Doc for feedback on an outline.
  • Commenters signalling, with Reactions and upvotes, which content they'd like to see written.
  • Commenters responding with helpful resources or suggestions.

Draft Amnesty Week

If the responses here encourage you to develop one of your ideas, Draft Amnesty Week might be a great time to post it. Posts tagged "Draft Amnesty Week" don't have to be thoroughly thought through or even fully drafted. Bullet points and missing sections are allowed. You can have a lower bar for posting. 

Comments

I'm considering writing a post on how I think governments are likely to respond to the tax, spending, and inequality pressures that transformative AI will bring. 

Others have already pointed out that if TAI displaces a lot of workers, this will reduce tax revenue (as most countries' tax bases rely heavily on labour income) while increasing government spending (to pay for benefits to support people out of work). I've also read some articles suggesting we'll need a high degree of global coordination in order to make sure AI's benefits are widely distributed. 

I agree that global coordination on tax would be the first-best solution, but I also think it is highly unlikely to happen. However:

  • I think there are things individual countries can (and hopefully will) do to mitigate national inequality;
  • I'm not sure TAI will worsen global inequality (I think it probably will, but not by as much as I initially thought); and
  • I don't think governments are going to go broke everywhere (I don't think this is actually possible). We will likely see significant economic disruptions and maybe a few defaults, but the fiscal situation may not be as bad as some people seem to think.

I'm not sure exactly where I'll end up with it, but my hope is to outline a few realistic pathways that will help people (including myself) decide where best to focus their efforts.  

Please let me know if you'd be willing to read drafts or act as a sounding board — I would very much appreciate the help. 

Hi! I'd be happy to help with proofreading/editing for flow and finesse. Let me DM you. 

A list of ideas:

  • We need breadth-first AI safety plans
  • Is it possible to persuade AI companies to slow down?
  • AI companies owe the public an explanation of what they will do with ASI once they have it
  • A frontier AI company should shut down as a costly signal that x-risk is a big deal
  • Some AI safety trolley problems
  • Pausing AI is the best general solution to every non-alignment problem with ASI (?)
  • You only get one shot at making ASI safe, even in a gradual takeoff
  • AI safety regulations would not be that onerous and I don't understand why people believe otherwise
  • A compilation of evidence that AI companies can't be trusted to abide by voluntary commitments
  • Literature review on the effectiveness of disruptive or violent protests
  • Protest cost-effectiveness BOTEC
  • What evidence would convince us that LLMs are conscious?
  • Pascal's Mugging is rarely relevant

I wrote a response on your shortform.

I'd really like to see an analysis of how necessary (or not!) academic degrees are for making an impact.

My ideas for posts (I'll try to write at least one):

  • I recently learned that malaria causes about as many miscarriages and stillbirths as it causes live infant deaths, but we only count neonatal deaths in most cost-effectiveness estimates. Intermittent preventive treatment with sulphadoxine-pyrimethamine (IPTp-SP) for pregnant women seems to be more cost-effective than bed nets for preventing malaria-related stillbirths and miscarriages. I'm unsure whether to write a narrow post on that, or a deeper post on "What are the most effective charities, given worldviews where unborn children have similar value to newborns?"
  • Some Europeans have been asking me a lot about what people in smaller countries can do to make AI go better (or slow it down) - especially with regard to China. I think we've got a lot of lessons from (especially Cold War) history about third countries using their relations with superpowers to increase existential safety, but I don't think anyone's written an EA Forum post about it.
  • I wrote a blog post on what I call "The Great Happiness Stagnation" - looking at the flattening of happiness in many rich countries since they became rich. I've been thinking about converting it to a forum post, but it currently seems insufficiently rigorous to be worthy of the forum! 

I am considering writing a brief post on how the EU AI Office (where I will likely be starting a new position in one month) can address some AI issues differently from other actors. The EU AI Office might complement the work of traditional actors in addressing loss-of-control issues, but it could play a significant role in mitigating power-concentration issues, especially in the geopolitical sense. This is a bit of a personal theory of change too.

I'd like to write a post about gruntworkers: people who do low-expertise but essential and non-replaceable work. For example, an unpaid volunteer position that requires general skills like planning, fundraising, outreach, etc.

How political groups aren't "causes", but networks of trust with different causes inside them.

False dichotomies and tradeoffs and their relevance to cause area reasoning.

Why many altruistic causes with different theories spiral into pessimism.

I am considering writing a post about the ways my perspective on AI Safety has changed over the past few years.

Here are my drafts: 

  1. Personal reflections on leading a national EA group for 3+ years. I'll aim to publish this during Draft Amnesty Week, but I might also chicken out.
  2. A post encouraging people to donate to effective giving organisations and comparing their multiplier effects, their need for funding/ability to use the money for growth, and which charities/cause areas get the boost if you donate to different meta charities. E.g., I expect donations to effektiv Spenden to result mainly in more donations for climate, but if you donate to Gi Effektivt, it only boosts GiveWell charities (not counting second-order effects of people getting into EA through these orgs). Anyone wanna work on this with me? Or steal the idea from me? :)


[Edit: I won't share my reflections yet.]

I want to write about:

  • How founders and agency can be the real bottleneck in Effective Altruism
  • How funders can improve coordination with minimal effort
  • Why the field of potential digital sentience is highly neglected
  • 10 community-building project ideas

Best practices and pitfalls in EA-aligned campaigns for public office

I have an idea for a post on best practices for how student group leaders should interact with their Students' Associations/Unions/Guilds/etc.

Before I started working on making AI go well, I worked in the Student Opportunities department of my Students' Association and during this time, I interacted with a lot of student group leaders. However, I was never part of an EA Student group, so I don't know what group leaders want to know, and I am not sure how well my knowledge transfers outside of my university.

If you're a group leader and there's anything you're curious about, let me know. Worst case, I'll send you a private answer; best case, I'll write a post.

Why is there so little remote work available in Africa, especially West Africa, where it accounts for less than 5%, even though remote jobs keep being announced worldwide?

Considering writing a post on how EAG(x) Effective Giving meetups MUST drive a call to action (pledge, pledge advocacy, learning more about EG, running an EG-related project)

IMO intellectual discussion about EG without action does not meet the bar for engagement or expected impact for an EAG(x). 

Example meetup discussion topics I personally did not like at past EAG(x)s: talking about how cost of living/mortgages get in the way of giving, talking about giving now vs. accruing interest and giving later, and back-and-forth about how improving farmed animal welfare results in higher wild animal suffering. These are valid personal concerns and intellectually stimulating, but I'd reserve them for local meetups. Such discussions neither yield impact nor meaningfully increase community bonding.
