FWIW, CNAS (where Paul and I work) is continuing to put out work on drone warfare:
https://www.cnas.org/publications/reports/evolution-not-revolution
https://www.cnas.org/publications/reports/swarms-over-the-strait
https://www.cnas.org/publications/reports/countering-the-swarm
Nice one!
A nitpick (h/t @Agustín Covarrubias): the English translation of the US-China cooperation question ('How much do you agree with this statement: "AI will be developed safely without cooperation between China and the US"?') reads as ambiguous.
ChatGPT and Gemini suggest the original can be translated as 'Do you agree that the safe development of artificial intelligence does not require cooperation between China and the United States?', which would strike me as less ambiguous.
From the discord: "Manifold can provide medium-term loans to users with larger invested balances to donate to charity now provided they agree to not exit their markets in a disorderly fashion or engage in any other financial shenanigans (interpreted very broadly). Feel free to DM for more details on your particular case."
I DM'd yesterday; today I received a mana loan for my invested amount, for immediate donation, due for repayment Jan 2, 2025, with a requirement to not sell out of large positions before May.
There's now a Google form: https://forms.gle/XjegTMHf7oZVdLZF7
A stray observation from reading Scott Alexander's post on his 2023 forecasting competition:
Scott singles out some forecasters who performed particularly strongly both this year and last year (he notes that being near the very top in a single year seems noisy, with a significant role for luck), or who otherwise show strong signals of genuine predictive outperformance. These are:
- Samotsvety
- Metaculus
- possibly Peter Wildeford
- possibly Ezra Karger (Research Director at FRI)
I note that the first three above all have higher AI catastrophic/extinction risk estimates than the average superforecaster (I include Ezra given his relevance to the topic at hand, but don't know his personal estimates).
Obviously, this is a low-n sample, heavily confounded by community effects and by who happened to catch Scott's eye (and by confirmation bias in my noticing it, insofar as I also have higher risk estimates). But I'd guess there's at least a decent chance both that (a) there are groups and aggregation methods that reliably outperform superforecasters and (b) these give higher estimates of AI risk.
I haven't thought about this a lot, but I don't see big tech companies working with existing frontier AI players as necessarily a bad thing for race dynamics (compared to the counterfactual). It seems better than their funding or poaching talent to create a viable competitor that may not care as much about risk. I'd guess the question is how likely they'd be to succeed at that (given that Amazon is not exactly at the frontier now).
Agree this seems bad. Without commenting on whether this would still be bad, here's one possible series of events/framing that strikes me as less bad:
- Org: We're hiring a temporary contractor and opening this up to international applicants
- Applicant: Gets the contract
- Applicant: Can I use your office as a working space during periods when I'm in the States?
- Org: Sure
This then maybe just seems like the sort of thing the org and the applicant would want good legal advice on (I presume the applicant would in fact look for a B1/B2 visa that allows business activities during their trip, rather than just tourism).
For completeness, here's what OpenAI says in its "Governance of superintelligence" post:
Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.
"Is Horizon x-risk pilled?" feels like a misguided question. The organization doesn't claim to be, and it would also be problematic if the organization were acting in an x-risk-pilled-way but but deceitful about it. I'm certainly confident that some Horizon people/fellows are personally x-risk-pilled, and some are not.
For x-risk-focused donors, I think the more reasonable question is: how much should we expect 'expertise and aptitude around emerging tech policy' (as Horizon interprets it) to correlate with the outcomes those donors care about? One could reasonably conclude that the correlation is low or even negative. But it's also not as if there's a viable counterfactual 'X-risk-pilled Institute for Public Service' that would achieve a similar level of success at placing fellows.
(I'd guess you might directionally agree with this and just think the correlation isn't that high, but figured I'd comment to at least add the nuance).