The 2016 Expert Survey on Progress in AI is a survey of machine learning researchers that AI Impacts ran in collaboration with others in 2016.

Details

Some results are reported in When Will AI Exceed Human Performance? Evidence from AI Experts (Grace et al 2017); others have not yet been published. As of June 2017, we expect to add more results to this page soon.

The full list of questions is available here. Participants received randomized subsets of these questions.

Results

Human-level intelligence

Questions

We sought forecasts for something like human-level AI in three different ways, to reduce noise from unknown framing biases:

  • Directly, using a question much like Müller and Bostrom’s, though with a refined definition of High-Level Machine Intelligence (HLMI).
  • At the end of a sequence of questions about the automation of specific human occupations.
  • Indirectly, with an ‘outside view’ approximation: by asking each person how long it has taken to make the progress to date in their subfield, and what fraction of the ground has been covered. This is Robin Hanson’s approach, which he found suggested much longer timelines than those reached directly (a rough version of this extrapolation is sketched just after this list).
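
As a rough illustration of the outside-view extrapolation (our gloss on the calculation, not wording from the survey): if a subfield is judged to have covered a fraction $f$ of the path to human-level performance over $t$ years so far, a straight-line extrapolation puts the remaining time at

\[ t_{\text{remaining}} \approx t \cdot \frac{1 - f}{f}, \]

so, for example, 5% of the path covered in 20 years implies roughly 380 more years, which is why this framing tends to yield much longer timelines.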

For the first two of these, we split respondents into two groups: one group was asked how many years it would take for there to be a given chance of the event, and the other was asked what the chance was of the event occurring by given dates. We call these the ‘fixed probabilities’ and ‘fixed years’ framings, respectively, throughout.

For the (somewhat long and detailed) specifics of these questions, see here or here (pdf).
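
Both framings yield three (year, probability) points per respondent, which can be summarized by fitting a cumulative distribution through them; the Grace et al 2017 paper aggregates answers by fitting distributions of this kind (see its supplement for the actual method). The sketch below is only our illustration of such a fit, assuming a gamma CDF fitted by least squares; the parameter guesses and example numbers are also assumptions, not survey data for any individual.

```python
# Illustrative only: fit a gamma CDF through one respondent's three answers.
# Both framings produce three (years, probability) points:
#   fixed probabilities -> years until a 10% / 50% / 90% chance
#   fixed years         -> chance within 10 / 20 / 50 years
# Fitting a smooth CDF lets answers from either framing be compared directly.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma


def gamma_cdf(years, shape, scale):
    """Cumulative probability that the event has happened by `years`."""
    return gamma.cdf(years, a=shape, scale=scale)


# Example: a fixed-probabilities style answer of 15 / 40 / 100 years.
years = np.array([15.0, 40.0, 100.0])
probs = np.array([0.10, 0.50, 0.90])

(shape, scale), _ = curve_fit(
    gamma_cdf, years, probs, p0=[2.0, 30.0], bounds=(1e-6, np.inf)
)

# Implied probability that the event happens within 20 years.
print(round(gamma_cdf(20.0, shape, scale), 3))
```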

Answers

The table and figure below show the median dates and probabilities given for the direct ‘HLMI’ question, and in the ‘via occupations’ questions, under both the fixed probabilities and fixed years framings.

                                                 Fixed probabilities:       Fixed years:
                                                 years until chance of      chance within
                                                 10%    50%    90%          10 years  20 years  50 years
Truck Driver                                     5      10     20           50%       75%       95%
Surgeon                                          10     30     50           5%        20%       50%
Retail Salesperson                               5      13.5   20           30%       60%       91.5%
AI Researcher                                    25     50     100          0%        1%        10%
Existing occupation among final to be automated  50     100    200          0%        0%        3.5%
Full Automation of labor                         50     90     200          0%        0.01%     3%
HLMI                                             15     40     100          1%        10%       >30%* (30% in 40 years)

*Due to a typo, this question asked about 40 years rather than 50 years, so doesn’t match the others.

Figure 1: Median answers to questions about probabilities by dates (‘fixed years’) and dates for probabilities (‘fixed probabilities’), for different occupations, all current occupations, and all tasks (HLMI).

Interesting things to note:

  • Fixed years framings (the ‘Fyears’ lines, marked with stars) consistently produce later timelines. For example, the fixed probabilities HLMI medians put a 50% chance at 40 years, while the fixed years HLMI medians give only around a 30% chance by 40 years.
  • HLMI (thick blue lines) is logically required to come no earlier than full automation of labor (‘Occ’), yet it is forecast much earlier, and even earlier than automation of the specific occupation ‘AI researcher’.
  • Even the more pessimistic Fyears estimates suggest retail salespeople have a good chance of being automated within twenty years, and are very likely to be within fifty.

Intelligence Explosion

Probability of dramatic technological speedup
Question

Assume that HLMI will exist at some point. How likely do you then think it is that the rate of global technological improvement will dramatically increase (e.g. by a factor of ten) as a result of machine intelligence:

Within two years of that point?       ___% chance

Within thirty years of that point?    ___% chance

[NB. If I understand correctly, a small number of respondents answered a slightly different version of this question in an initial round, and we changed it (probably to make it easier to understand), and those first answers aren’t included here.]

Answers

Median P(…within two years) = 20%

Median P(…within thirty years) = 80%

Probability of superintelligence
Question

Assume that HLMI will exist at some point. How likely do you think it is that there will be machine intelligence that is vastly better than humans at all professions (i.e. that is vastly more capable or vastly cheaper):

Within two years of that point?       ___% chance

Within thirty years of that point?    ___% chance

Answers

Median P(…within two years) = 10%

Median P(…within thirty years) = 50%

This is the distribution of answers to the former (the two-year question):

Chance that the intelligence explosion argument is about right
Question

Some people have argued the following:

If AI systems do nearly all research and development, improvements in AI will accelerate the pace of technological progress, including further progress in AI.


Over a short period (less than 5 years), this feedback loop could cause technological progress to become more than an order of magnitude faster.

How likely do you find this argument to be broadly correct?

  • Quite unlikely (0-20%)
  • Unlikely (21-40%)
  • About even chance (41-60%)
  • Likely (61-80%)
  • Quite likely (81-100%)

Answers

These are the Pearson product-moment correlation coefficients between answers to the different questions, computed among respondents who received both questions of a given pair:
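
Since each respondent received only a random subset of the questions, each coefficient is computed over just the respondents who happened to answer both questions in a pair. A minimal sketch of that computation (the column names and values here are invented for illustration):

```python
# Illustrative only: pairwise Pearson correlations when every respondent
# answered only a random subset of questions (NaN = question not shown).
import numpy as np
import pandas as pd

answers = pd.DataFrame({
    "hlmi_years_until_50pct": [30, 45, np.nan, 60, 25, np.nan],
    "p_explosion_within_2yr": [0.30, 0.10, 0.20, np.nan, 0.40, 0.05],
    "p_speedup_within_30yr":  [0.90, 0.60, 0.70, 0.80, np.nan, 0.50],
})

# pandas computes each coefficient over pairwise-complete observations,
# i.e. only the rows where both columns in the pair are non-missing.
corr = answers.corr(method="pearson", min_periods=3)
print(corr)
```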

Impacts of HLMI

Question

Assume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%:

______ Extremely good (e.g. rapid growth in human flourishing) (1)

______ On balance good (2)

______ More or less neutral (3)

______ On balance bad (4)

______ Extremely bad (e.g. human extinction) (5)

Answers

More summary results

We are in the process of adding more results to this page. For now, the table below, from the Grace et al 2017 paper, contains some results not covered above.

Table S4 in Grace et al 2017.