I think it would be useful to get a feel for Forum users' AI timelines. There are three questions, two of which are designed to align with questions on a LessWrong survey from 2023. They are, roughly: the year of artificial general intelligence, the year of the singularity (variously defined: a point beyond which we cannot predict, superexponential or explosive growth of the economy, etc.), and the year the world gets "crazy." Feel free to define "crazy" as you wish, but some possibilities could be greater than 20% unemployment in most countries, widespread political unrest, widespread loss of confidence in what is true, widespread economic growth exceeding 10% per year[1], your personal plans being disrupted by something related to AI, etc. It would be interesting to see in the comments how people define this. Please use the median year of your distribution (an even chance of the event happening before or after that year).
There are 21 locations on each poll, corresponding to the years below (if you comment, it would be helpful to include the year, as the automatic description is not very helpful; a small code lookup table follows the list):

  1. 2026
  2. 2027
  3. 2028
  4. 2029
  5. 2030
  6. 2032
  7. 2035
  8. 2037
  9. 2040
  10. 2045
  11. 2050 (the middle of the poll range)
  12. 2060
  13. 2070
  14. 2080
  15. 2100
  16. 2125
  17. 2150
  18. 2200
  19. 2300
  20. later
  21. never
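
For anyone cross-referencing votes against positions, here is a minimal Python sketch of the position-to-year mapping above (the dict and function names are just illustrations, not part of the Forum's poll software):

```python
# Mapping from poll slider position (1-21) to the year it represents,
# as listed above. "later" and "never" are kept as strings.
POLL_YEARS = {
    1: 2026, 2: 2027, 3: 2028, 4: 2029, 5: 2030,
    6: 2032, 7: 2035, 8: 2037, 9: 2040, 10: 2045,
    11: 2050,  # the middle of the poll range
    12: 2060, 13: 2070, 14: 2080, 15: 2100,
    16: 2125, 17: 2150, 18: 2200, 19: 2300,
    20: "later", 21: "never",
}

def position_to_year(position: int):
    """Translate a poll slider position (counted from the left) into its year."""
    return POLL_YEARS[position]

print(position_to_year(4))  # -> 2029
```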

By what year do you think AI will be able to do intellectual tasks that expert humans currently do?

[Poll: Year of AGI (slider from 2026 to never; 13 votes shown)]

By what year do you think the singularity will occur?

[Poll: Year of Singularity (slider from 2026 to never; 5 votes shown)]

By what year do you think the world will get crazy?

[Poll: Year of Crazy (slider from 2026 to never; 8 votes shown)]

[1] Not recovering from a recession.

Comments (17)

This is a cool post, though I think it's kind of annoying not to be able to see the specific year one is voting on without cross-referencing the chart.

Yeah, perhaps this is a feature for polls v3 (v2 is almost done).

I genuinely don't know how to answer these polls. I find it much easier to think about a shorter timeframe like the next 20 years — although even just that is hard enough — rather than try to predict the future over a timespan of 275+ years.

I find it much easier to say that the creation of AGI (specifically as I define it here, since some people even call o3 "AGI") is extremely unlikely by 2035 (i.e., much less than a 0.01% or 1 in 10,000 chance), let alone the Singularity. 

("Crazy" seems hazy, to the point that it probably needs to be decomposed into multiple different questions to make it a true forecast — although I can respect just asking people for a vague vibe just as a casual exercise, even though it won't resolve unambiguously in retrospect.) 

My problem with trying to put a median year on AGI is that I have absolutely no idea how to do that. If science and technology continue, indefinitely, to make progress in the sort of way they have for the last 100-300 years, then it seems inevitable humans will eventually invent AGI. Maybe there's a chance it's unattainable for reasons most academics and researchers interested in the subject don't currently anticipate. 

For instance, one estimate is that to rival the computation of a single human brain, a computer would need to consume somewhere between 300 times and 300 billion times as much electricity as the entire United States does currently. If that estimate is accurate, and if building AGI requires that much computation and that much energy, then, at the very least, that makes AGI far less attainable than even some relatively pessimistic and cautious academics and researchers might have guessed. Imagine the amount of scientific and technological progress required to produce that much energy, or to make that much computation commensurately more energy efficient.
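
To put rough absolute numbers on that range (a back-of-the-envelope sketch; the ~4,000 TWh/year figure for US consumption and the ~30,000 TWh/year figure for world generation are my own approximations, not from the estimate above):

```python
# Back-of-the-envelope: what the cited 300x-300,000,000,000x range would
# imply in absolute terms.
# ASSUMPTION: US electricity consumption of roughly 4,000 TWh/year.
US_ELECTRICITY_TWH_PER_YEAR = 4_000

low_end = 300 * US_ELECTRICITY_TWH_PER_YEAR              # 1.2 million TWh/year
high_end = 300_000_000_000 * US_ELECTRICITY_TWH_PER_YEAR

print(f"Low end:  {low_end:,} TWh/year")
print(f"High end: {high_end:,} TWh/year")
# For scale, world electricity generation is on the order of 30,000 TWh/year,
# so even the low end is roughly 40x current global generation.
```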

Let's assume, for the sake of argument, computation and energy are not issues, and it's just about solving the research problems. I just went on random.org and randomly generated a number between 1 and 275, to represent the range of years asked in this polling question. The result I got was 133 years. 133 years from now is 2158. So, can I do better than that? Can I guess a median year that's more likely to be accurate, or at least more likely to be close, than a random number generator? Do I have a better methodology than using random.org? Why should I think so?

This is a fundamental question, and it underlies this whole polling exercise, as well as most if not all forecasting related to AGI. For instance, is there any scientific evidence or historical evidence that anyone has ever been able to predict when scientific research problems would be solved, or when fundamentally new technologies would be developed, with any sort of accuracy at all? If so, where's the evidence? Let's cite it to motivate these exercises. If not, why should we think we're in a different situation now where we are better able to tell how the future will unfold?
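
For what it's worth, the random.org exercise is easy to reproduce locally (a minimal sketch, assuming a uniform draw over the 275-year span from 2026 to 2300):

```python
import random

# Reproduce the random.org exercise: draw a "median AGI year" uniformly
# at random from the 275-year span the poll covers (2026 through 2300).
BASE_YEAR = 2025
SPAN_YEARS = 275

def random_agi_year(rng: random.Random) -> int:
    """Return BASE_YEAR plus a uniform draw from 1..SPAN_YEARS."""
    return BASE_YEAR + rng.randint(1, SPAN_YEARS)

rng = random.Random()
print(random_agi_year(rng))  # a draw of 133 gives 2158; a draw of 58 gives 2083
```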

The mental picture I have of the long-term future when I think about forecasting when the fundamental science and technology problems prerequisite to building AGI will be solved is of a thick fog, where I can see clearly only a little distance in front of me, can see foggily a little bit further, and then after that everything descends into completely opaque gray-white mist. Is 2158 the median year? I tried random.org again. I got 58, which would be 2083. Which year is more likely to be the median, 2083 or 2158? Are they equally likely to be the median? I have no idea how to answer these questions. For all I know, they might be fundamentally impossible to answer. The physicist David Deutsch makes the argument (e.g., in this video at 6:25) that we can't predict the content of scientific knowledge we don't yet know, since predicting the content would be equivalent to knowing it now, and we don't yet know it. This makes sense to me.

We don't yet know what the correct theory of intelligence is. We don't know the content of that theory. The theory that human-like intelligence is just current-generation deep neural networks scaled up 1,000x would imply a close median year for AGI. Other theories of intelligence would imply something else. If the specific micro-architecture of the whole human brain is what's required for human-like intelligence (or general intelligence), then that implies AGI is probably quite far away, since we don't yet know that micro-architecture and don't yet have the tools in neuroscience to find it out. Even if we did know it, reconstructing it in a simulation would pose its own set of scientific and technological challenges. Since we don't know what the correct theory of intelligence is, we don't know how hard it will be to build an intelligence like our own using computers, and therefore we can't predict when it will happen.

My personal view that AGI is extremely unlikely (much less than 0.01% likely) before the end of 2035 comes from my beliefs that 1) human-like intelligence is definitely not current-gen deep neural networks scaled up 1,000x, 2) the correct theory of intelligence is not something nearly so simple or easy (e.g., if AGI could have been solved by symbolic AI, it probably would have been solved a long time ago), and 3) it's extremely unlikely that all the necessary scientific discoveries and technological breakthroughs required to solve everything from fundamental theory to practical implementation will happen within the next ~9 years. Scientists, philosophers, and AI researchers have been trying to understand the fundamental nature of intelligence for a long time. The foundational research for deep learning goes back around 40 years, and it built on research that's even older than that. Today, if you listen to ambitious AI researchers like Yann LeCun, Richard Sutton, François Chollet, and Jeff Hawkins, they are each confident in a research roadmap to AGI, but they are four completely different roadmaps based on completely different ideas. So, it's not as if the science and philosophy of intelligence is converging toward any particular theory or solution.

That's a long, philosophical answer to this quick poll question, but I believe that's the crux of the whole matter. 

To clarify, does our "crazy" vote consider all possible causes of crazy, or just crazy that is caused by / significantly associated with AI?

Good question. For personal planning purposes, I think all causes would make sense. But the title is AI, so maybe just significantly associated with AI? I think these polls are about how the future is different because of AI.

Year of AGI

25 years seems about right to me, but with huge uncertainty. 

Year of Singularity (2040)

Though I think we could get explosive economic growth with AGI or even before, I'm going to interpret this as explosive physical growth, i.e., that we could double physical resources every year or faster. I think that will take years after AGI, e.g., to crack robotics/molecular manufacturing.

NickLaing (poll vote: 10% disagree)

Year of AGI

First, I have zero expertise here and am rubbish at prediction.

I don't think LLMs will get there, but something else probably will, though maybe not in the very near future. I have a strong (perhaps too strong) feeling that the complexities of the human brain in forward planning/task stacking and truly creative thought might be further away than we think.

I also think there's likely to be a warning shot and then the kind of political backlash that could even slow things down 10 years or so.

Slow things down 10 to how many years?

Sorry, edited.

Year of Crazy (2029)

I'm using a combination of scenarios in the post - one or more of these happen significantly before AGI.

Help me make sure I’m understanding this right. You’re at position #4 from left to right, so this means 2029 according to your list. So, this means you think there’s a 50% chance of a combination of the "crazy" scenarios happening by 2029, right?

Unfortunately, the EA Forum polls software makes it hard to ask certain kinds of questions. Your prediction is listed as "70% 2026", but that’s just an artifact of the poll software.

To make it clear to readers what people are actually predicting, and to make sure people giving predictions understand the system properly, you might want to add instructions for people to say something like '50% chance the Year of Crazy happens by 2029’ at the top of their comments. That would at least save readers the trouble of cross-referencing the list for every single prediction.

I tried to do a poll on people’s AI bubble predictions and I ran into a similar issue with the poll software displaying the results confusingly.

Yes, one or more of the "crazy" things happening by 2029. Good suggestion: I have edited the post and my comments to include the year. 

Rebecca Frank (poll vote: 40% disagree)

2033

For which event? I'm not seeing you on the poll above.

Rebecca Frank (poll vote: 60% disagree)

2031

Year of AGI (2035)

Extrapolating the METR graph here <https://www.lesswrong.com/posts/6KcP7tEe5hgvHbrSF/metr-how-does-time-horizon-vary-across-domains> implies a superhuman coder soon, but I think it's going to take years after that for the tasks that progress more slowly on that graph, and many tasks are not on that graph at all (despite the speedup from having a superhuman coder).
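
To make the shape of that extrapolation concrete, here's a minimal sketch of the arithmetic (the current-horizon, doubling-time, and threshold values are illustrative assumptions, not figures from the METR post):

```python
import math

# Illustrative extrapolation of an exponentially growing task time horizon.
# ASSUMPTIONS (placeholders, not METR's published numbers): a current
# horizon of 2 task-hours, doubling every 7 months, and a "superhuman
# coder" threshold of one work-month (~167 hours).
current_horizon_hours = 2.0
doubling_time_months = 7.0
target_horizon_hours = 167.0

doublings_needed = math.log2(target_horizon_hours / current_horizon_hours)
months_needed = doublings_needed * doubling_time_months

print(f"{doublings_needed:.1f} doublings, about {months_needed:.0f} months")
# With these assumptions: log2(167/2) is ~6.4 doublings, ~45 months (~4 years).
```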
