Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
You’re talking about research rather than scaling here, right? Do you think there is more funding for fundamental AI research now than in 2020? What about for non-LLM fundamental AI research?
anti-LLM arguments from people like Yann LeCun and François Chollet
François Chollet has since adjusted his AGI timelines to 5 years.
Although this is called "strong longtermism", the precision of saying "near-best" instead of "best" makes (i) hard to deny (denying it would mean holding that it can be permissible to choose an option significantly far from the best for the far future). The best cruxes against (ii) are epistemic ones, i.e., whether far-future benefits rapidly diminish or wash out over time.
There's a decent amount of French-speaking ~AI safety content on YouTube:
So, to be clear, you think that if LLMs continue to complete software engineering tasks of exponentially increasing length at an exponentially decreasing risk of failure, that tells us nothing about whether LLMs will reach AGI?
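To make the extrapolation concrete, here is a minimal sketch; the starting horizon and doubling time below are illustrative assumptions, not measured values:

```python
# Minimal sketch (hypothetical numbers): extrapolating an exponential
# trend in the length of software engineering tasks LLMs can complete.
base_horizon_minutes = 30.0    # assumed task length completable today
doubling_time_months = 7.0     # assumed doubling time of that horizon

def horizon_after(months: float) -> float:
    """Task horizon in minutes after `months`, if the trend holds."""
    return base_horizon_minutes * 2 ** (months / doubling_time_months)

# Five more years of the same trend:
print(f"{horizon_after(60) / 60:.0f} hours")  # -> 190 hours, roughly a month of full-time work
```

Under these assumed numbers, the horizon grows from half an hour to about a month of full-time work in five years; the disagreement is over whether that trend, if it continues, is evidence about AGI.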
I expect most EAs who have enough money to consider investing it to already be invested in index funds, which, being market-cap weighted by design, are already long the Magnificent Seven.
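For intuition on "by design": in a cap-weighted index, each holding's weight is its market cap divided by the total, so a handful of megacaps dominate the portfolio automatically. A minimal sketch with made-up numbers:

```python
# Minimal sketch (illustrative, made-up market caps): weights in a
# cap-weighted index fund are proportional to market capitalization.
market_caps_trillions = {      # hypothetical values, for illustration only
    "Magnificent Seven (combined)": 17.0,
    "rest of the index": 33.0,
}
total = sum(market_caps_trillions.values())
for name, cap in market_caps_trillions.items():
    # weight = this holding's cap / total cap of the index
    print(f"{name}: {cap / total:.0%}")
# -> Magnificent Seven (combined): 34%
# -> rest of the index: 66%
```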