Matrice Jacobine

Student in fundamental and applied mathematics
639 karma · Pursuing a graduate degree (e.g. Master's) · France

Bio

Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist

Comments: 95

Topic contributions: 1

So, to be clear, you think that if LLMs continue to complete software engineering tasks of exponentially increasing length at an exponentially decreasing risk of failure, that tells us nothing about whether LLMs will reach AGI?

I expect most EAs who have enough money to consider investing it are already investing it in index funds, which, by design, are already long the Magnificent Seven.

You could bet on shorter-term indicators, e.g. whether the METR trend will stop or accelerate.

You’re talking about research rather than scaling here, right? Do you think there is more funding for fundamental AI research now than in 2020? What about for non-LLM fundamental AI research?

Most of OpenAI’s 2024 compute went to experiments

anti-LLM arguments from people like Yann LeCun and François Chollet

François Chollet has since adjusted his AGI timelines to 5 years.

While this is ostensibly called "strong longtermism", the precision of saying "near-best" instead of "best" makes (i) hard to deny (its negation would be "one ought to choose an option that is significantly far from the best for the far future"). The strongest cruxes against (ii) would be epistemic ones, i.e. whether benefits rapidly diminish or wash out over time.

I agree with you on the meta case of suspicion about Open Philanthropy leadership but in this case AFAICT the Center for AI Policy was funded by the Survival and Flourishing Fund, which is aligned with the rationalist cluster and also funds PauseAI.

There's a decent amount of French-language ~AI safety content on YouTube:

I added a bunch of relevant tags to your post that might help you search the forum better.

Do you think work on AI welfare can count as part of Cooperative AI (i.e. as fostering cooperation between biological minds and digital minds)?

It strikes me as very unlikely that a rudimentary Pong-playing AI running on biological wetware is more sentient than a modern LLM running on digital hardware.
