Blog

Event: Exercises in Economic Futurism

By Katja Grace, 15 July 2015 On Thursday July 30th Robin Hanson is visiting again, and this time we will be holding an informal workshop on how to usefully answer questions about the future, with an emphasis on economic approaches.

Blog

Steve Potter on neuroscience and AI

By Katja Grace, 13 July 2015 Prof. Steve Potter works at the Laboratory of Neuroengineering in Atlanta, Georgia. I wrote to him after coming across his old article, ‘What can AI get from Neuroscience?’ I wanted to know how neuroscience might contribute to AI …

AI Timelines

Conversation with Steve Potter

Posted 13 July 2015 Participants: Professor Steve Potter – Associate Professor, Laboratory of NeuroEngineering, Coulter Department of Biomedical Engineering, Georgia Institute of Technology; Katja Grace – Machine Intelligence Research Institute (MIRI). Note: These notes were …

Blog

New funding for AI Impacts

By Katja Grace, 4 July 2015 AI Impacts has received two grants! We are grateful to the Future of Humanity Institute (FHI) for $8,700 to support work on the project until September 2015, and the Future of Life Institute (FLI) …

Blog

Update on all the AI predictions

By Katja Grace, 5 June 2015 For the last little while, we’ve been looking into a dataset of individual AI predictions, collected by MIRI a couple of years ago. We also previously gathered all the surveys about AI predictions that we …

AI Timelines

Predictions of Human-Level AI Timelines

Note: This page is out of date. See an up-to-date version of this page on our wiki. Updated 5 June 2015 We know of around 1,300 public predictions of when human-level AI will arrive, of …

Accuracy of AI Predictions

Accuracy of AI Predictions

Updated 4 June 2015 It is unclear how informative we should expect expert predictions about AI timelines to be. Individual predictions are undoubtedly often off by many decades, since they disagree with one another. However, their aggregate may still be quite informative. …

Accuracy of AI Predictions

Publication biases toward shorter predictions

We expect predictions that human-level AI will come sooner to be recorded publicly more often, for a few reasons. Public statements are probably more optimistic than surveys because of such effects. The difference appears to be less than …

Accuracy of AI Predictions

Selection bias from optimistic experts

Experts on AI probably systematically underestimate the time to human-level AI, due to a selection bias. The same is more strongly true of AGI experts. The scale of such biases appears to be decades. Most public AI predictions …