Existential risk
Discussions of risks which threaten the destruction of the long-term potential of life

Quick takes

23
19d
PSA: If you're doing evals work, every now and then you should look back at Open Phil's page on capabilities evals and check your work against the desiderata and questions in sections 2.1-2.2, 3.1-3.4, and 4.1-4.3, as a way to critically appraise what you're doing.
17
16d
1
If the people arguing that there is an AI bubble turn out to be correct and the bubble pops, to what extent would that change people's minds about near-term AGI?

I strongly suspect there is an AI bubble, because the financial expectations around AI seem to be based on AI significantly enhancing productivity, and the evidence seems to show it doesn't do that yet. This could change — and I think that's what a lot of people in the business world are thinking and hoping. But my view is that a) LLMs have fundamental weaknesses that make this unlikely, and b) scaling is running out of steam. Scaling running out of steam actually means three things:

1. Each new 10x increase in compute is less practically or qualitatively valuable than previous 10x increases in compute.
2. Each new 10x increase in compute is getting harder to pull off because the amount of money involved is getting unwieldy.
3. There is an absolute ceiling to the amount of data LLMs can train on, which they are probably approaching.

So, AI investment depends on financial expectations that in turn depend on LLMs enhancing productivity, which isn't happening and probably won't happen, due to fundamental problems with LLMs and due to scaling becoming less valuable and less feasible. This implies an AI bubble, which implies the bubble will eventually pop.

So, if the bubble pops, will that lead people who currently have a much higher estimation than I do of LLMs' current capabilities and near-term prospects to lower that estimation? If AI investment turns out to be a bubble, and it pops, would you change your mind about near-term AGI? Would you think it's much less likely? Would you think AGI is probably much farther away?
17
24d
Ajeya Cotra writes: Like Ajeya, I haven't thought about this a ton. But I do feel quite confident in recommending that generalist EAs — especially the "get shit done" kind — at least strongly consider working on biosecurity if they're looking for their next thing.
10
15d
8
I have a question I would like some thoughts on: As a utilitarian I personally believe alignment to be the most important cause area - though weirdly enough, even though I believe x-risk reduction to be positive in expectation, I believe the future is most likely to be net negative. I personally believe, without a high level of certainty, that the current utilities on earth are net negative due to wild animal suffering. If we therefore give the current world a utility value of -1, I would describe my beliefs about future scenarios like this:

1. ~5% likelihood: ~10^1000 (very good future, e.g. hedonium shockwave)
2. ~90% likelihood: -1 (the future is likely good for humans, but this is still negligible compared to wild animal suffering, which will remain)
3. ~5% likelihood: ~-10^100 (s-risk-like scenarios)

My reasoning for thinking "scenario 2" is more likely than "scenario 1" is based on what seem to be the values of the general public currently. Most people seem to care about nature conservation, but no one seems interested in mass-producing (artificial) happiness. While the earth is only expected to remain habitable for about two billion years (and humans, assuming we avoid any x-risks, are likely to remain for much longer), I think, when it comes to it, we'll find a way to keep the earth habitable, and thus to prolong wild animal suffering.

Based on these three scenarios, you don't have to be a great mathematician to realize that the future is most likely to be net negative, yet positive in expectation. While I still, based on this, find alignment to be the most important cause area, I find it quite demotivating that I spend so much of my time (donations) to preserve a future that I find unlikely to be positive. But the thing is, most longtermists (and EAs in general) don't seem to have similar beliefs to me. Even someone like Brian Tomasik has said that if you're a classical utilitarian, the future is likely to be good.

So now I'm asking: what am I getting wrong?
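A minimal sketch of the expected-value arithmetic behind this claim, using the illustrative probabilities and (hypothetical) utility magnitudes from the scenarios above; the exact numbers are the post's placeholders, not real estimates:

```python
from fractions import Fraction

# Hypothetical scenario table from the post: (probability, utility), with
# today's world normalized to a utility of -1. Magnitudes are illustrative.
scenarios = [
    (Fraction(5, 100), 10**1000),     # very good future (e.g. hedonium shockwave)
    (Fraction(90, 100), -1),          # wild animal suffering persists
    (Fraction(5, 100), -(10**100)),   # s-risk-like scenarios
]

# Exact rational arithmetic avoids float overflow for numbers like 10**1000.
expected_value = sum(p * u for p, u in scenarios)
p_net_negative = sum(p for p, u in scenarios if u < 0)

print(f"P(future is net negative) = {float(p_net_negative):.2f}")  # 0.95
print(f"Expected value of the future is positive: {expected_value > 0}")  # True
```

With these numbers, the ~5% chance of an astronomically good outcome dominates the expectation even though 95% of the probability mass sits on a net-negative future, which is exactly the tension the post describes.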
48
3mo
2
AI governance could be much more relevant in the EU if the EU were willing to regulate ASML. Tell ASML it can only service compliant semiconductor foundries, where a "compliant semiconductor foundry" is defined as a foundry which only allows its chips to be used by compliant AI companies. I think this is a really promising path to slower, more responsible AI development globally. The EU is known for its cautious approach to regulation. Many EAs believe that a cautious, risk-averse approach to AI development is appropriate. Yet EU regulations are often viewed as less important, since major AI firms are mostly outside the EU. However, ASML is located in the EU and serves as a chokepoint for the entire AI industry. Regulating ASML addresses the standard complaint that "AI firms will simply relocate to the most permissive jurisdiction." Advocating this path could be a high-leverage way to make global AI development more responsible without the need for an international treaty.
80
9mo
1
I recently created a simple workflow to allow people to write to the Attorneys General of California and Delaware to share thoughts + encourage scrutiny of the upcoming OpenAI nonprofit conversion attempt. Write a letter to the CA and DE Attorneys General. I think this might be a high-leverage opportunity for outreach. Both AG offices have already begun investigations, and AGs are elected officials who are primarily tasked with protecting the public interest, so they should care what the public thinks and prioritizes. Unlike e.g. congresspeople, I don't think AGs often receive grassroots outreach (I found ~0 examples of this in the past), and an influx of polite and thoughtful letters may have some influence — especially from CA and DE residents, although I think anyone impacted by their decision should feel comfortable contacting them. Personally I don't expect the conversion to be blocked, but I do think the value and nature of the eventual deal might be significantly influenced by the degree of scrutiny on the transaction. Please consider writing a short letter — even a few sentences is fine. Our partner handles the actual delivery, so all you need to do is submit the form. If you want to write one on your own and can't find contact info, feel free to DM me.
7
17d
3
Horizon Institute for Public Service is not x-risk-pilled

Someone saw my comment and reached out to say it would be useful for me to make a quick take/post highlighting this: many people in the space have not yet realized that Horizon people are not x-risk-pilled. (Edit: some people reached out to me to say that they've had different experiences with a minority of Horizon people.)
155
2y
20
Mildly against the Longtermism --> GCR shift

Epistemic status: Pretty uncertain, somewhat rambly

TL;DR: Replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics.

Over the last ~6 months I've noticed a general shift amongst EA orgs to focus less on reducing risks from AI, bio, nukes, etc. based on the logic of longtermism, and more based on Global Catastrophic Risks (GCRs) directly. Some data points on this:

* Open Phil renaming its EA Community Growth (Longtermism) Team to GCR Capacity Building
* This post from Claire Zabel (OP)
* Giving What We Can's new Cause Area Fund being named "Risk and Resilience," with the goal of "Reducing Global Catastrophic Risks"
* Longview-GWWC's Longtermism Fund being renamed the "Emerging Challenges Fund"
* Anecdotal data from conversations with people working on GCRs / x-risk / longtermist causes

My guess is these changes are (almost entirely) driven by PR concerns about longtermism. I would also guess these changes increase the number of people donating to / working on GCRs, which is (by longtermist lights) a positive thing. After all, no one wants a GCR, even if only thinking about people alive today. Yet I can't help but feel something is off about this framing. Some concerns (no particular ordering):

1. From a longtermist (~totalist classical utilitarian) perspective, there's a huge difference between ~99% and 100% of the population dying, if humanity recovers in the former case but not the latter. Just looking at GCRs on their own mostly misses this nuance.
   * (See Parfit, Reasons and Persons, for the full thought experiment.)
2. From a longtermist (~totalist classical utilitarian) perspective, preventing a GCR doesn't differentiate between "humanity prevents GCRs and realises 1% of its potential" and "humanity prevents GCRs and realises 99% of its potential."
   * Preventing an extinction-level GCR might move u