Surveys seem to produce median estimates of time to human-level AI which are roughly a decade later than those produced from voluntary public statements.

Details

We compared several surveys to predictions made by similar groups of people in the MIRI AI predictions dataset, and found that predictions made in surveys were roughly 0–2 decades later. This was a rough, non-rigorous comparison, and we made no effort to control for most variables.
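For concreteness, the comparison described above amounts to something like the following sketch. The prediction years here are hypothetical placeholders, not values from the MIRI dataset or any actual survey:

```python
# Sketch of a median comparison between elicitation methods.
# The year lists below are hypothetical, purely for illustration.
from statistics import median

survey_predictions = [2040, 2050, 2060, 2075, 2100]     # from survey responses
statement_predictions = [2030, 2040, 2045, 2060, 2080]  # from public statements

gap = median(survey_predictions) - median(statement_predictions)
print(f"Median survey prediction is {gap} years later than median statement")
```

A real version of this comparison would also need to match the groups on time period, expertise, and question wording, which the rough comparison above did not attempt.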

Stuart Armstrong and Kaj Sotala make a similar comparison here, and also find that survey data gives later predictions. However, they compare non-survey data drawn largely from recent decades with survey data entirely from 1973, which we think makes the groups too different in circumstance to infer much about surveys versus statements in particular. Moreover, in the MIRI dataset (which they used), very early predictions tend if anything to be more optimistic than later predictions, so if they had limited themselves to predictions from similar times the difference would have been larger (though based on a very small sample of statements).

Relevance

Accuracy of AI predictions: some biases which probably exist in public statements about AI predictions are likely to be smaller or absent in survey data. For instance, public statements are probably more likely to be made by people who believe they have surprising or interesting views, whereas this should have much less influence on answers to a survey question once someone is already taking the survey. Comparing data from surveys and voluntary statements can therefore tell us about the strength of such biases. Given that median survey predictions are rarely more than a decade later than similar statements, and survey predictions seem unlikely to be strongly biased in this direction, median statements are probably less than a decade early as a result of this bias.