Guide to pages on AI timeline predictions

This page is an informal outline of the other pages on this site about AI timeline predictions made by others. Headings link to higher-level pages, which are intended to summarize the evidence from the pages below them. This list was complete as of 7 April 2017 (here is a category that may contain newer entries, though not conveniently organized).

Guide

Topic synthesis: AI timeline predictions as evidence (page)

The predictions themselves:

—from surveys (page):
  1. 2016 Expert survey on progress in AI: our own survey.
  2. Müller and Bostrom AI Progress Poll: the most recent survey with available results, including 29 of the most cited AI researchers as participants.
  3. Hanson AI Expert Survey: in which researchers judge fractional progress toward human-level performance over their careers, in a series of informal conversations.
  4. Kruel AI survey: in which experts give forecasts and detailed thoughts, interview style.
  5. FHI Winter Intelligence Survey: in which impacts-concerned AGI conference attendees forecast AI in 2011.
  6. AGI-09 Survey: in which AGI conference attendees forecast various human-levels of AI in 2009.
  7. Klein AGI survey: in which a blogger polls his readers.
  8. AI@50 survey: in which miscellaneous conference goers are polled informally.
  9. Bainbridge Survey: in which 26 expert technologists expect human-level AI in 2085 and give it a 5.6/10 rating on benefit to humanity.
  10. Michie Survey: in which 67 AI and CS researchers are not especially optimistic in the ‘70s.
—from public statements:
  1. MIRI AI predictions dataset: a big collection of public predictions gathered from the internet.
—from written analyses (page), for example:
  1. The Singularity is Near: in which a technological singularity is predicted in 2045, based on an extrapolation of when hardware will compute radically more than all human minds combined.
  2. The Singularity Isn’t Near: in which it is countered that human-level AI requires software as well as hardware, and none of the routes to producing software will get there by 2045.
  3. (Several others are listed in the analyses page above, but do not have their own summary pages.)

On what to infer from the predictions

Some considerations regarding accuracy and bias (page):
  1. Contra a common view that past AI forecasts were unreasonably optimistic, AI predictions look fairly similar over time, except for a handful of very early, somewhat optimistic ones.
  2. The Maes-Garreau Law claims that people tend to predict AI near the end of their own expected lifetime. It is not true.
  3. We expect publication biases to favor earlier forecasts.
  4. Predictions made in surveys seem to be overall a bit later than those made in public statements (maybe because surveys prevent some publication biases).
  5. People who are inclined toward optimism about AI are more likely to become AI researchers, leading to a selection bias from optimistic experts.
  6. We know of some differences in forecasts made by different groups.

We welcome suggestions for this page or anything on the site via our feedback box, though we will not address all of them.