Surveys on fractional progress towards HLAI

Given simplistic assumptions, extrapolating fractional progress estimates suggests a median time from 2020 to human-level AI of:

  • 372 years (2392), based on responses collected in Robin Hanson’s informal 2012-2017 survey.
  • 36 years (2056), based on all responses collected in the 2016 Expert Survey on Progress in AI.
  • 142 years (2162), based on the subset of respondents to the 2016 Expert Survey on Progress in AI who had been in their subfield for at least 20 years.
  • 32 years (2052), based on the subset of respondents to the 2016 Expert Survey on Progress in AI who reported on progress in deep learning or machine learning as a whole rather than a narrow subfield.

67% of respondents to the 2016 Expert Survey on Progress in AI, and 44% of those in Hanson’s informal survey who answered the question, said that progress was accelerating.

Details

One way of estimating how many years something will take is to estimate what fraction of progress toward it has been made over a fixed number of years, then to extrapolate the number of years needed for full progress. As suggested by Robin Hanson,1 this method can provide an estimate for when human-level AI will be developed, if we have data on what fraction of progress toward human-level AI has been made and whether it is proceeding at a constant rate. 

We know of two surveys that ask about fractional progress and acceleration in specific AI subfields: an informal survey conducted by Robin Hanson in 2012 – 2017, and our 2016 Expert Survey on Progress in AI. We use them to extrapolate progress to human-level AI, assuming that:

  1. AI progresses at the average rate that people have observed so far.
  2. Human-level AI will be achieved when the median subfield reaches human-level.
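Under these assumptions, the naive extrapolation amounts to a small calculation (spelled out in the notes below): divide the years observed by the fraction of progress reported to get the total years needed, then subtract the years already elapsed. A minimal sketch in Python, using illustrative numbers rather than actual survey responses:

```python
def implied_years_to_human_level(years_in_field, fraction_traversed,
                                 year_asked, now=2020):
    """Naively extrapolate years remaining until human-level subfield performance.

    Total time needed = years observed / fraction of progress made; we then
    subtract the years already observed, and the gap between the year the
    question was asked and `now`. Assumes a constant rate of progress.
    """
    total_years = years_in_field / fraction_traversed
    return total_years - years_in_field - (now - year_asked)

# Illustrative example (not an actual survey response): a researcher who saw
# 10% of the path traversed over 20 years, asked in 2016.
print(implied_years_to_human_level(20, 0.10, 2016))  # 176.0 years from 2020
```

A respondent reporting a smaller fraction over the same period yields a proportionally longer estimate, which is why individual answers vary by orders of magnitude.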

Assumptions

AI progresses at the average rate that people have observed so far

The naive extrapolation method described above assumes that AI progresses at the average rate that people have observed so far, but some respondents perceived acceleration or deceleration. If this change in the rate of progress continues into the future, a truer extrapolation of each person’s observations would place human-level performance in their subfield either before (if accelerating) or after (if decelerating) the naively extrapolated date.

Human-level AI will be achieved when the median subfield reaches human-level

Both surveys asked respondents about fractional progress in their subfields. Extrapolating these estimates out to human-level performance gives some evidence about when AGI may arrive, but is not a perfect proxy. It may turn out that we get human-level performance in a small number of subfields much earlier than in others, such that we count the resulting AI as ‘AGI’, or it may be that certain subfields important to AGI do not exist yet.

Hanson AI Expert Survey

Hanson’s survey informally asked ~15 AI experts to estimate how far we’ve come in their own subfield of AI research in the last twenty years, compared to how far we have to go to reach human level abilities. The subfields represented were analogical reasoning, knowledge representation, computer-assisted training, natural language processing, constraint satisfaction, robotic grasping manipulation, early-human vision processing, constraint reasoning, and “no particular subfield”. Three respondents said the rate of progress was staying the same, four said it was getting faster, two said it was slowing down, and six did not answer (or may not have been asked). 

The naive extrapolations2 of the answers from Hanson’s survey give a median time from 2020 to human-level AI (HLAI) of 372 years (2392). See the survey data and our calculations here.

2016 Expert Survey on Progress in AI

The 2016 Expert Survey on Progress in AI (2016 ESPAI) asked machine learning researchers which subfield they were in, how long they had been in their subfield, and what fraction of the path to human-level performance (in their subfield) they thought had been traversed in that time.3 107 out of 111 responses were used in our calculation.4 42 subfields were reported, including “Machine learning”, “Graphical models”, “Speech recognition”, “Optimization”, “Bayesian Learning”, and “Robotics”.5 Notably, Hanson’s survey included subfields that weren’t represented in 2016 ESPAI, including analogical reasoning and knowledge representation. Since 2016 ESPAI was restricted to machine learning researchers, it may exclude non-machine-learning subfields that turn out to be important to fully human-level capabilities.

Acceleration

67% of all respondents said progress in their subfield was accelerating (see Figure 1). Most respondents said progress in their subfield was accelerating in each of the subsets we look at below (ML vs narrow subfield, and time in field).

Figure 1: Number of respondents reporting that progress was faster during the first half of their time in the field, faster during the second half, or about the same in both halves.

Most respondents think progress is accelerating. If this acceleration continues, our naively extrapolated estimates below may be overestimates for time to human-level performance.

Time to HLAI

We calculated estimated years from 2020 until human-level subfield performance by naively extrapolating the reported fractions of the subfield already traversed.6 Figure 2 below shows the implied estimates for time until human-level performance for all respondents’ answers. These estimates give a median time from 2020 until HLAI of 36 years (2056).

Figure 2: Extrapolated estimated time until human-level subfield performance for each respondent, arranged by length of time. The last four responses are above 1000 but have been cut off.

Machine learning vs subfield progress

Some respondents reported broad ‘subfields’, which encompassed all of machine learning, in particular “Machine learning” or “Deep learning”, while others reported narrow subfields, e.g. “Natural language processing” or “Robotics”. We split the survey data based on this subfield narrowness, guessing that progress on machine learning overall may be a better proxy for AGI overall. Among the 69 respondents who gave answers corresponding to the entire field of machine learning, the median implied time was 32 years (2052). Among the 70 respondents who gave narrow answers, the median implied time was 44 years (2064). Figures 3 and 4 show these estimates.

Figure 3: Implied estimates for human-level performance based on respondents who specified broad answers, e.g. “Machine learning” when asked about their subfield. The last three responses are above 1000 but have been cut off.

Figure 4: Implied estimates for human-level performance based on respondents who specified narrow answers, e.g. “Natural language processing” when asked about their subfield. The last response is above 1000 but has been cut off.

The median implied estimate until human-level performance for machine learning broadly was 12 years sooner than the median estimate for specific subfields. This is the opposite of what we might expect: if human-level performance in machine learning broadly implies human-level performance in each individual subfield, the broad estimates should be at least as long as the subfield estimates.

Time spent in field

Robin Hanson has suggested that his survey may get longer implied forecasts than 2016 ESPAI because he asks exclusively people who have spent at least 20 years in their field.7 Filtering for people who have spent at least 20 years in their field, we have eight responses, and get a median implied time until HLAI of 142 years from 2020 (2162). Filtering for people who have spent at least 10 years in their field, we have 38 responses, and get a median implied time of 86 years (2106). Filtering for people who have spent less than 10 years in their field, we have 69 responses, and get a median implied time of 24 years (2044). Figures 5, 6 and 7 show estimates for each respondent, for each of these classes of time in field.
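The experience-bracket medians above come from the same per-respondent extrapolation, grouped by time in field. A minimal sketch with made-up (years in field, fraction traversed) pairs rather than the actual survey data, assuming all questions were asked in 2016:

```python
from statistics import median

# Hypothetical responses (years_in_field, fraction_traversed); these are
# NOT the actual survey responses, which are linked above.
responses = [(25, 0.05), (22, 0.10), (12, 0.15), (5, 0.20), (4, 0.10)]

def implied_years(t, f, year_asked=2016, now=2020):
    # Naive extrapolation: total years minus years elapsed so far.
    return t / f - t - (now - year_asked)

# Split by time in field, then take the median within each bracket.
veterans = [implied_years(t, f) for t, f in responses if t >= 20]
newer = [implied_years(t, f) for t, f in responses if t < 20]
print(median(veterans))  # 332.5 — longer-tenured respondents
print(median(newer))     # 32.0 — shorter-tenured respondents
```

In this toy example, as in the survey data, the longer-tenured bracket implies a much later arrival date, since its members report smaller fractions per year of observed progress.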

Figure 5: Implied estimates for human-level performance based on respondents who had worked in their subfield for at least 20 years. The last response is above 1000 but has been cut off.

Figure 6: Implied estimates for human-level performance based on respondents who had worked in their subfield for at least 10 years. The last three responses are above 1000 but have been cut off.

Figure 7: Implied estimates for human-level performance based on respondents who had worked in their subfield for less than 10 years. The last response is above 1000 but has been cut off.

Comparison of the two surveys

The median implied estimate from 2020 until human-level performance suggested by responses from 2016 ESPAI (36 years) is an order of magnitude smaller than the one suggested by the Hanson survey (372 years). This appears to be at least partly explained by more experienced researchers giving responses that imply longer estimates. Hanson surveyed exclusively people who had spent at least 20 years in their subfield, whereas the 2016 survey did not filter based on experience. If we filter 2016 survey respondents for researchers who have spent at least 20 years in their subfield, we instead get a median estimate of 142 years.

More experienced researchers may generate longer implied estimates because the majority of progress has happened recently; many respondents said progress has accelerated, which is some evidence of this. It could also be that less experienced researchers overestimate how significant recent progress is.

If AI research is accelerating and is going to continue accelerating until we get to human-level AI, the time to HLAI may be sooner than these estimates. If AI research is accelerating now but is not representative of what progress will look like in the future, longer naive estimates by more experienced researchers may be more appropriate.

Comparison to estimates reached by other survey methods

2016 ESPAI also asked people to estimate time until human-level machine intelligence (HLMI) by asking them how many years they would give until a 50% chance of HLMI. The median answer for this question in 2016 was 40 years, or 36 years from 2020 (2056), exactly the same as the median answer of 36 years implied by extrapolating fractional progress. The survey also asked about time to HLMI in other ways, which yielded less consistent answers.

Primary author: Asya Bergal

Notes

  1. From this Overcoming Bias post:

    “I’d guess that relative to the starting point of our abilities of twenty years ago, we’ve come about 5-10% of the distance toward human level abilities. At least in probability-related areas, which I’ve known best. I’d also say there hasn’t been noticeable acceleration over that time. … If this 5-10% estimate is typical, as I suspect it is, then an outside view calculation suggests we probably have at least a century to go, and maybe a great many centuries, at current rates of progress.”

    — Hanson, Robin. “AI Progress Estimate.” Overcoming Bias. Accessed April 14, 2020. http://www.overcomingbias.com/2012/08/ai-progress-estimate.html.

  2. Naively, we simply divide twenty years by the fraction of progress made to get an estimate of total years necessary, not accounting for possible acceleration. To get the time from now to human-level performance, we subtract the twenty years of progress already made and subtract the difference between the year the question was asked and now (2020).
  3. The exact questions were:

    • Which AI research area have you worked in for the longest time?
    • How long have you worked in this area?
    • Consider three levels of progress or advancement in this area:
      A. Where the area was when you started working in it
      B. Where it is now
      C. Where it would need to be for AI software to have roughly human level abilities at the tasks studied in this area
      What fraction of the distance between where progress was when you started working in the area (A) and where it would need to be to attain human level abilities in the area (C) have we come so far (B)?

    — From the printout of the 2016 ESPAI questions.

  4. We excluded responses which said a subfield had seen 100% or more progress, since we’re interested in the remaining progress required in the subfields that haven’t gotten to human-level yet.
  5. The complete list is: “Image Processing”, “Machine learning”, “Deep learning”, “Graphical models”, “Speech recognition”, “Optimization”, “Deep neural networks”, “Computer vision”, “Learning theory”, “Classifiers and statistical learning”, “Natural language processing”, “Sequential decision-making”, “Online learning”, “Visual perception”, “Bayesian learning”, “Manifold learning”, “Reinforcement learning”, “Probabilistic modeling”, “Robotics”, “Active learning”, “Graph-based pattern recognition”, “Image processing”, “Continuous control”, “Planning algorithms”, and “Network analysis”.
  6. As with the Hanson survey, we divided time in the field by the fraction of the path traversed, then subtracted the number of years worked in the subfield, then subtracted an additional four years to account for the difference between when these questions were asked (2016) and now (2020).
  7. “One obvious difference is that I limited my sample to people who’d been in a field for at least 20 years. Can you try limiting your sample in that way, or at least looking at the correlation between time in field and their rate estimates?”

    — From an email chain with Robin Hanson on February 15, 2020.

We welcome suggestions for this page or anything on the site via our feedback box, though will not address all of them.