This page is an informal outline of the other pages on this site about AI timeline predictions made by others. Headings link to higher-level pages, which are intended to summarize the evidence from the pages below them. This list was complete as of 7 April 2017 (here is a category that may contain newer entries, though not conveniently organized).
Guide
Topic synthesis: AI timeline predictions as evidence (page)
The predictions themselves:
—from surveys (page):
- 2016 Expert survey on progress in AI: our own survey.
- (Concrete tasks that we asked for forecasts on)
- Müller and Bostrom AI Progress Poll: the most recent survey with available results, including 29 of the most cited AI researchers as participants.
- Hanson AI Expert Survey: in which researchers judge fractional progress toward human-level performance over their careers, in a series of informal conversations.
- Kruel AI survey: in which experts give forecasts and detailed thoughts, interview style.
- FHI Winter Intelligence Survey: in which impacts-concerned AGI conference attendees forecast AI in 2011.
- AGI-09 Survey: in which AGI conference attendees forecast several varieties of human-level AI in 2009.
- Klein AGI survey: in which a guy with a blog polls his readers.
- AI@50 survey: in which miscellaneous conference-goers are polled informally.
- Bainbridge Survey: in which 26 expert technologists expect human-level AI in 2085 and give it a 5.6/10 rating on benefit to humanity.
- Michie Survey: in which 67 AI and CS researchers are not especially optimistic in the 1970s.
—from public statements:
- MIRI AI predictions dataset: a big collection of public predictions gathered from the internet.
—from written analyses (page), for example:
- The Singularity is Near: in which a technological singularity is predicted in 2045, based on extrapolating when hardware will compute radically more than all human minds combined.
- The Singularity Isn’t Near: in which it is countered that human-level AI requires software as well as hardware, and that none of the routes to producing such software will get there by 2045.
- (Several others are listed in the analyses page above, but do not have their own summary pages.)
On what to infer from the predictions
Some considerations regarding accuracy and bias (page):
- Contra a common view that past AI forecasts were unreasonably optimistic, AI predictions look fairly similar over time, except for a handful of very early, somewhat optimistic ones.
- The Maes-Garreau Law claims that people tend to predict AI near the end of their own expected lifetime. It is not true.
- We expect publication biases to favor earlier forecasts.
- Predictions made in surveys seem to be overall a bit later than those made in public statements (maybe because surveys prevent some publication biases).
- People who are inclined toward optimism about AI are more likely to become AI researchers, leading to a selection bias from optimistic experts.
- We know of some differences in forecasts made by different groups.
Blog posts on these topics: