Steven Byrnes

Research Fellow @ Astera
1614 karma · Working (6–15 years) · Boston, MA, USA
sjbyrnes.com/agi.html

Bio

Hi, I’m Steve Byrnes, an AGI safety / AI alignment researcher in Boston, MA, USA, with a particular focus on brain algorithms. See https://sjbyrnes.com/agi.html for a summary of my research and a sorted list of writing. Physicist by training. Email: steven.byrnes@gmail.com. Leave me anonymous feedback here. I’m also at: RSS feed, Twitter, Mastodon, Threads, Bluesky, GitHub, Wikipedia, Physics-StackExchange, LinkedIn

Comments

OK, here’s the big picture of this discussion as I see it.

As someone who doesn’t think LLMs will scale to AGI, I skipped over pretty much all of your OP as off-topic from my perspective, until I got to the sentences:

Eventually, there will be some AI paradigm beyond LLMs that is better at generality or generalization. However, we don't know what that paradigm is yet and there's no telling how long it will take to be discovered. Even if, by chance, it were discovered soon, it's extremely unlikely it would make it all the way from conception to working AGI system within 7 years.

(Plus the subsequent couple paragraphs about brain computation, which I responded to briefly in my top-level comment.)

So that excerpt is what I was responding to originally, and that’s what we’ve been discussing pretty much this whole time. Right?

My claim is that, in the context of this paragraph, “extremely unlikely” (as in “<0.1%”) is way way too confident. Technological forecasting is hard, a lot can happen in seven years … I think there’s just no way to justify such an extraordinarily high confidence [conditioned on LLMs not scaling to AGI as always].

If you had said “<20%” instead of “<0.1%”, then OK sure, I would have been in close-enough agreement with you that I wouldn’t have bothered replying.

Does that help? Sorry if I’m misunderstanding.

 

Hmm, reading what you wrote again, I think part of your mistake is saying “…conception to working AGI system”. Who’s to say that this “AI paradigm beyond LLMs” wasn’t already discovered ten years ago or more? There are a zillion speculative non-LLM AI paradigms that have been under development for years or decades. Nobody has heard of them because they’re not doing impressive things yet. That doesn’t mean that there hasn’t already been a lot of development progress.

OK, sorry for getting off track.

  • (…But I still think your post has a connotation in context that “AGI by 2032 is extremely unlikely [therefore AGI x-risk work is not an urgent priority]”, and that it would be worth clarifying that you are just arguing the narrow point.)
  • Wilbur Wright overestimated how long it would take him to fly by a factor of 25—he said 50 years, it was actually 2. This is an example of how even researchers estimating their own very-near-term progress on their own R&D pathway can absolutely suck at timelines, including in the over-pessimistic direction.
    • If someone in 1900 had looked at everyone before the Wright brothers saying that they’d get heavier-than-air flight soon, all those predictions would have been falsified, and they might have generalized to “We have good reason to be skeptical if we look at predictions from people in [inventing airplanes] that have so far come out false”. But that generalization would have then failed when the Wright brothers came along.
  • Sutton does not seem to believe that “AGI by 2032 is extremely unlikely” so I’m not sure how that’s evidence on your side. You’re saying that he’s over-optimistic, and maybe he is, but we don’t know that. If you want examples of AI researchers and experts being over-pessimistic about the speed of progress, they are very easy to find (e.g.).
  • You’ve heard of Sutton & LeCun. There are a great many other research programs that you haven’t heard of, toiling away and writing obscure arxiv papers. Some of those people have been writing obscure arxiv papers for many years already, even decades. We both agree that it takes >>7 years for an R&D pathway to get from its first obscure arxiv paper to ASI. What I’m pushing back on is the claim that it takes >>7 years to get from the final obscure arxiv paper (after which point the R&D pathway is impressive enough to stop being obscure) to ASI.

In a 2024 interview, Yann LeCun said he thought it would take "at least a decade and probably much more" to get to AGI or human-level AI by executing his research roadmap. Trying to pinpoint when ideas first started is a fraught exercise. If we say the start time is the 2022 publication of LeCun's position paper "A Path Towards Autonomous Machine Intelligence", then by LeCun's own estimate, the time from publication to human-level AI is at least 12 years and "probably much more".

Here’s why I don’t think “start time for LeCun’s research program is 2022” is true in any sense relevant to this conversation.

IIUC, the subtext of your OP and this whole conversation is that you think people shouldn’t be urgently trying to prepare for AGI / ASI right now.

In that context, one could say that the two relevant numbers are “(A) how far in advance should we be preparing for AGI / ASI?” and “(B) how far away is AGI / ASI?”. And you should start preparing when (A)=(B).

I think that’s a terrible model, because we don’t and won’t know either (A) or (B) until it’s too late, and there’s plenty of work we can be doing right now, so it’s nuts not to be doing that work ASAP. Indeed, I think it’s nuts that we weren’t doing more work on AGI x-risk in 2015, and 2005, and 1995 etc.

As bad as I think that “start when (A)=(B)” model is, I’m concerned that your implicit model is even worse. You seem to be acting as if (A) is less than 7 years, but you haven’t justified that, and I don’t think you can. I am concerned that what you’re actually thinking is more like: “AGI doesn’t feel imminent, therefore (B)<(A)”.

Does the clock start in 2022 when LeCun published A Path Towards Autonomous Machine Intelligence (APTAMI)? That was 3 years ago. Yet you still, right now, don’t seem to feel like we should be urgently preparing for AGI. If LeCun et al. keep making progress, maybe someday you will start feeling that sense of urgency about imminent LeCun-style AGI. And when that day comes, that’s when the relevant clock starts. And I think that clock will leave very little time indeed until AGI and ASI. (My own guess would be 0–2 years, if your sense of urgency would be triggered by obvious signals of impressiveness like using language and solving problems beyond current LLMs. If you have some other trigger that you’re looking for, what is it?)

What would it look like to feel a sense of urgency starting from the moment that APTAMI was published? It would look like what I did, which was write the response: LeCun’s “A Path Towards Autonomous Machine Intelligence” has an unsolved technical alignment problem. I’m pretty sure LeCun knows that this post exists, but he has not responded, and to this day he continues to insist that he has a great plan for AI alignment. Anyway, here I am, arguably the only person on Earth who is working on solving the technical alignment problem for APTAMI. LeCun and his collaborators have not shown the slightest interest in helping, and I don’t expect that situation to change as they get ever closer to AGI / ASI (on the off-chance that their research program is headed towards AGI / ASI).

(If you think we should be urgently preparing for AGI / ASI x-risk right now, despite AGI being extremely unlikely by 2032, then great, we would be in much more agreement than I assumed. If that’s the situation, then I think your post does not convey that mood, and I think that almost all readers will interpret it as having that subtext unless you explicitly say otherwise.)

Presumably a lot of these are all optimised for the current gen-AI paradigm, though. But we're talking about what happens if the current paradigm fails. I'm sure some of it would carry over to a different AI paradigm, but also it's pretty likely there would be other bottlenecks we would have to tune to get things working.

Yup, some stuff will be useful and other stuff won’t. The subset of useful stuff will make future researchers’ lives easier and allow them to work faster. For example, here are people using JAX for lots of computations that are not deep learning at all.
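
To illustrate the kind of non-deep-learning use I have in mind, here’s a toy sketch of my own (not taken from any of the linked projects): JAX’s jit and vmap applied to a plain numerical simulation, with no neural network anywhere in it.

```python
# Toy sketch (my own illustration): JAX used for ordinary numerical computing,
# here a batch of damped-harmonic-oscillator simulations -- no deep learning involved.
import jax
import jax.numpy as jnp

def simulate(x0, v0, k=1.0, damping=0.1, dt=0.01, steps=1000):
    """Integrate a damped harmonic oscillator with simple Euler steps."""
    def step(state, _):
        x, v = state
        a = -k * x - damping * v
        return (x + v * dt, v + a * dt), None
    (x_final, _), _ = jax.lax.scan(step, (x0, v0), None, length=steps)
    return x_final

# vmap vectorizes over a whole batch of initial conditions; jit compiles it.
batched = jax.jit(jax.vmap(simulate))
x0s = jnp.linspace(-1.0, 1.0, 10_000)
final_positions = batched(x0s, jnp.zeros_like(x0s))
print(final_positions.shape)  # (10000,)
```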

I feel like what you're saying is the equivalent of pointing out in 2020 that we have had so many optimisations and computing resources that went into, say, google searches, and then using that as evidence that surely the big data that goes into LLMs should be instantaneous as well.

In like 2010–2015, “big data” and “the cloud” were still pretty hot new things, and people developed a bunch of storage formats, software tools, etc. for distributed data, distributed computing, parallelization, and cloud computing. And yes I do think that stuff turned out to be useful when deep learning started blowing up (and then LLMs after that), in the sense that ML researchers would have made slower progress (on the margin) if not for all that development. I think Docker and Kubernetes are good examples here. I’m not sure exactly how different the counterfactual would have been, but I do think it made more than zero difference.

Maybe you simply intended to say that PyTorch and JAX are better today than they were in 2018.

Yup! E.g. torch.compile “makes code run up to 2x faster” and came out in PyTorch 2.0 in 2023.
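
For anyone unfamiliar, here’s a minimal sketch of what that looks like in practice (the tiny model and random data below are placeholders of my own, not anything from this discussion):

```python
# Minimal sketch of torch.compile (available since PyTorch 2.0). The model and
# data are placeholders; the point is the one-line opt-in to compilation.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.GELU(),
    nn.Linear(4096, 1024),
)

# torch.compile traces the model and generates optimized kernels for it;
# the original eager-mode model is left unchanged.
compiled_model = torch.compile(model)

x = torch.randn(64, 1024)
with torch.no_grad():
    out = compiled_model(x)  # first call triggers compilation; later calls reuse it
print(out.shape)  # torch.Size([64, 1024])
```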

More broadly, what I had in mind was: open-source software for everything to do with large-scale ML training—containerization, distributed training, storing checkpoints, hyperparameter tuning, training data and training environments, orchestration and pipelines, dashboards for monitoring training runs, on and on—is much more developed now compared to 2018, and even compared to 2022, if I understand correctly (I’m not a practitioner). Sorry for poor wording. :)

Thanks!

Out of curiosity, what do you think of my argument that LLMs can't pass a rigorous Turing test because a rigorous Turing test could include ARC-AGI 2 as a subset (and, indeed, any competent panel of judges should include it) and LLMs can't pass that? Do you agree? Do you think that's a higher level of rigour than a Turing test should have and that's shifting the goal posts?

I think we both agree that there are ways to tell apart a human from an LLM of 2025, including handing ARC-AGI-2 to each.

Whether or not that fact means “LLMs of 2025 cannot pass the Turing Test” seems to be purely an argument about the definition / rules of “Turing Test”. Since that’s a pointless argument over definitions, I don’t really care to hash it out further. You can have the last word on that. Shrug  :-P

I don't think I'm retreating into a weaker claim. I'm just explaining why, from my point of view, your analogy doesn't seem to make sense as an argument against my post and why I don't find it persuasive at all (and why I don't think anyone in my shoes would or should find it persuasive). I don't understand why you would interpret this as me retreating into a weaker claim.

If you’re making the claim:

The probability that a new future AI paradigm would take as little as 7 years to go from obscure arxiv papers to AGI, is extremely low (say, <10%).

…then presumably you should have some reason to believe that. If your position is “nobody can possibly know how long it will take”, then that obviously is not a reason to believe that claim above. Indeed, your OP didn’t give any reason whatsoever, it just said “extremely unlikely” (“Even if, by chance, it were discovered soon, it's extremely unlikely it would make it all the way from conception to working AGI system within 7 years.”)

Then my top comment was like:

Gee, a lot can happen in 7 years in AI, including challenges transitioning from ‘this seems wildly beyond SOTA and nobody has any clue where to even start’ to ‘this is so utterly trivial that we take it for granted and collectively forget it was ever hard’, and including transitioning from ‘kinda the first setup of this basic technique that anyone thought to try’ to ‘a zillion iterations and variations of the technique have been exhaustively tested and explored by researchers around the world’, etc. That seems like a reason to start somewhere like, I dunno, 50-50 on ≤7 years, as opposed to <10%. 50-50 is like saying ‘some things in AI take less than 7 years, and other things take more than 7 years, who knows, shrug’.

Then you replied here that “your analogy is not persuasive”. I kinda took that to mean: my example of LLM development does not prove that a future “obscure arxiv papers to AGI” transition will take ≤7 years. Indeed it does not! I didn’t think I was offering proof of anything. But you are still making a quite confident claim of <10%, and I am still waiting to see any reason at all explaining where that confidence is coming from. I think the LLM example above is suggestive evidence that 7 years is not some crazy number wildly outside the range of reasonable guesses for “obscure arxiv papers to AGI”, whereas you are saying that 7 years is in fact a pretty crazy number, and that sane numbers would be way bigger than 7 years. How much bigger? You didn’t say. Why? You didn’t say.

So that’s my evidence, and yes it’s suggestive not definitive evidence, but OTOH you have offered no evidence whatsoever, AFAICT.

I don’t think that LLMs are a path to AGI.

~~

Based on your OP, you ought to be trying to defend the claim:

STRONG CLAIM: The probability that a new future AI paradigm would take as little as 7 years to go from obscure arxiv papers to AGI, is extremely low (say, <10%).

But your response seems to have retreated to a much weaker claim:

WEAK CLAIM: The probability that an AI paradigm would take as little as 7 years to go from obscure arxiv papers to AGI, is not overwhelmingly high (say, it’s <90%). Rather, it’s plausible that it would take longer than that.

See what I mean? I think the weak claim is fine. As extremist as I am, I’m not sure even I would go above 90% on that.

Whereas I think the strong claim is implausible, and I don’t think your comment even purports to defend it.

~~

Maybe I shouldn’t have brought up the Turing Test, since it’s a distraction. For what it’s worth, my take is: for any reasonable operationalization of the Turing Test (where “reasonable” means “in the spirit of what Turing might have had in mind”, or “what someone in 2010 might have had in mind”, as opposed to moving the goalposts after knowing the particular profile of strengths and weaknesses of LLMs), a researcher could build a system that passes that Turing Test today with at most a modest amount of work and money. I think this fact is so obvious to everyone that it’s not really worth anyone’s time to even think about the Turing Test anymore in the first place. I do think this is a valid example of how things can be a pipe dream wildly beyond the AI frontier in Year X, and totally routine in Year X+7.

I do not think the Turing Test (as described above) is sufficient to establish AGI, and again, I don’t think AGI exists right now, and I don’t think LLMs will ever become AGI, as I use the term.

In principle, anything's possible and no one knows what's going to happen with science and technology (as David Deutsch cleverly points out, to know future science/technology is intellectually equivalent to discovering/inventing it), so it's hard to argue against hypothetical scenarios involving speculative future science/technology. But to plan your entire life around your conviction in such hypothetical scenarios seems extremely imprudent and unwise.

I don’t “plan [my] entire life around [a] conviction” that AGI will definitely arrive before 2032 (my median guess is that it will be somewhat later than that, and my own technical alignment research is basically agnostic to timelines).

…But I do want to defend the reasonableness of people contingency-planning for AGI very soon. Copying from my comment here:

Pascal’s wager is a scenario where people prepare for a possible risk because there’s even a slight chance that it will actualize. I sometimes talk about “the insane bizarro-world reversal of Pascal’s wager”, in which people don’t prepare for a possible risk because there’s even a slight chance that it won’t actualize. Pascal’s wager is dumb, but “the insane bizarro-world reversal of Pascal’s wager” is much, much dumber still. :) “Oh yeah, it’s fine to put the space heater next to the curtains—there’s no guarantee that it will burn your house down.” :-P

If a potential threat is less than 100% likely to happen, that’s not an argument against working to mitigate it. A more reasonable threshold would be 10%, even 1%, and in some circumstances even less than that. For example, it is not 100% guaranteed that there is any terrorist in the world right now who is even trying to get a nuclear weapon, let alone who has a chance of success, but it sure makes sense for people to be working right now to prevent that “hypothetical scenario” from happening.

Speaking of which, I also want to push back on your use of the term “hypothetical”. Superintelligent AI takeover is a “hypothetical future risk”. What does that mean? It means there’s a HYPOTHESIS that there’s a future risk. Some hypotheses are false. Some hypotheses are true. I think this one is true.

I find it disappointing that people treat “hypothetical” as a mocking dismissal, and I think that usage is a red flag for sloppy thinking. If you think something is unlikely, just call it “unlikely”! That’s a great word! Or if you think it’s overwhelmingly unlikely, you can say that too! When you use words like “unlikely” or “overwhelmingly unlikely”, you’re making it clear that you are stating a belief, perhaps a quite strong belief, and then other people may argue about whether that belief is reasonable. This is all very good and productive. Whereas the term “hypothetical” is kinda just throwing shade in a misleading way, I claim.

Eventually, there will be some AI paradigm beyond LLMs that is better at generality or generalization. However, we don't know what that paradigm is yet and there's no telling how long it will take to be discovered. Even if, by chance, it were discovered soon, it's extremely unlikely it would make it all the way from conception to working AGI system within 7 years.

Suppose someone said to you in 2018:

There’s an AI paradigm that almost nobody today has heard of or takes seriously. In fact, it’s little more than an arxiv paper or two. But in seven years, people will have already put hundreds of billions of dollars and who knows how many gazillions of hours into optimizing and running the algorithms; indeed, there will be literally 40,000 papers about this paradigm already posted on arxiv. Oh and y’know how right now world experts deploying bleeding-edge AI technology cannot make an AI that can pass an 8th grade science test? Well y’know, in seven years, this new paradigm will lead to AIs that can nail not only PhD qualifying exams in every field at once, but basically every other written test too, including even the international math olympiad with never-before-seen essay-proof math questions. And in seven years, people won’t even be talking about the Turing test anymore, because it’s so obviously surpassed. And… [etc. etc.]

I think you would have read that paragraph in 2018, and described it as “extremely unlikely”, right? It just sounds completely absurd. How could all that happen in a mere seven years? No way.

But that’s what happened!

So I think you should have wider error bars around how long it takes to develop a new AI paradigm from obscurity to AGI. It can be long, it can be short, who knows.

(My actual opinion is that this kind of historical comparison understates how quickly a new AI paradigm could develop, because right now we have lots of resources that did not exist in 2018, like dramatically more compute, better tooling and frameworks like PyTorch and JAX, armies of experts on parallelization, and on and on. These were bottlenecks in 2018, without which we presumably would have gotten the LLMs of today years earlier.)

(My actual actual opinion is that superintelligence will seem to come almost out of nowhere, i.e. it will be just lots of obscure arxiv papers until superintelligence is imminent. See here. But if you don’t buy that strong take, fine, go with the weaker argument above.)

This is particularly true if running an instance of AGI requires a comparable amount of computation to a human brain.

My own controversial opinion is that the human brain requires much less compute than the LLMs of today. Details here. You don’t have to believe me, but you should at least have wide error bars around this parameter, which makes it harder to argue for a bottom line of “extremely unlikely”. See also Joe Carlsmith’s report which gives a super wide range.

Here’s the PDF.

I haven’t read it, but I feel like there’s something missing from the summary here, which is like “how much AI risk reduction you get per dollar”. That has to be modeled somehow, right? What did the author assume for that?

If we step outside the economic model into reality, I think reducing AI x-risk is hard, and as evidence we can look around the field and notice that many people trying to reduce AI x-risk are pointing their fingers at many other people trying to reduce AI x-risk, with the former saying that the latter have been making AI x-risk worse rather than better via their poorly-thought-through interventions.

If some institution or government decided to spend $100B per year on AI x-risk (haha), I would be very concerned that this tsunami of money would wind up net negative, leaving us in a worse situation than if the institution / government had spent $0 instead. But of course it would depend a lot on the decisionmakers and processes etc.
