What do ML researchers think about AI in 2022?

Katja Grace, 4 August 2022

AI Impacts just finished collecting data from a new survey of ML researchers, as similar to the 2016 one as practical, aside from a couple of new questions that seemed too interesting not to add.

The survey page reports on it preliminarily, and we’ll be adding more details there. But so far, here are some things that might interest you:

  • 37 years until a 50% chance of HLMI according to a complicated aggregate forecast (and biasedly not including data from questions about the conceptually similar Full Automation of Labor, which in 2016 prompted strikingly later estimates). This 2059 aggregate HLMI timeline has become about eight years shorter in the six years since 2016, when the aggregate prediction was 2061, or 45 years out; that is, the forecast date has moved about two years earlier, while the time remaining has shrunk by about eight years. Note that all of these estimates are conditional on “human scientific activity continu[ing] without major negative disruption.”
  • P(extremely bad outcome) = 5%: The median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%. This is the same as it was in 2016 (though Zhang et al 2022 found 2% in a similar but non-identical question). Many respondents put the chance substantially higher: 48% of respondents gave at least a 10% chance of an extremely bad outcome, though another 25% put it at 0%.
  • Explicit P(doom) = 5-10%: The levels of badness involved in that last question seemed ambiguous in retrospect, so I added two new questions about human extinction explicitly. The median respondent’s probability of x-risk from humans failing to control AI [1] was 10%, weirdly higher than the median chance of human extinction from AI in general [2], at 5%. This might just be because different people got these questions and the median is quite near the divide between 5% and 10%. The most interesting thing here is probably that these are both very high: it seems the ‘extremely bad outcome’ numbers in the old question were not just catastrophizing merely disastrous AI outcomes.
  • Support for AI safety research is up: 69% of respondents believe society should prioritize AI safety research “more” or “much more” than it is currently prioritized, up from 49% in 2016.
  • The median respondent thinks there is an “about even chance” that an argument given for an intelligence explosion is broadly correct. The median respondent also believes machine intelligence will probably (60%) be “vastly better than humans at all professions” within 30 years of HLMI, and that the rate of global technological improvement will probably (80%) dramatically increase (e.g., by a factor of ten) as a result of machine intelligence within 30 years of HLMI.
  • Years/probabilities framing effect persists: if you ask people for probabilities of things occurring in a fixed number of years, you get later estimates than if you ask for the number of years until a fixed probability will obtain. This looked very robust in 2016, and shows up again in the 2022 HLMI data. Looking at just the people we asked for years, the aggregate forecast is 29 years, whereas it is 46 years for those asked for probabilities. (We haven’t checked in other data or for the bigger framing effect yet.)
  • Predictions vary a lot. Pictured below: the attempted reconstructions of people’s probabilities of HLMI over time, which feed into the aggregate number above (a sketch of this kind of reconstruction and aggregation follows the figure). There are few combinations of times and probabilities that nobody basically endorses.
  • You can download the data here (slightly cleaned and anonymized) and do your own analysis. (If you do, I encourage you to share it! A rough sketch of this kind of analysis appears just below this list.)
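For anyone who does download the data, here is a minimal sketch of the kind of analysis behind the “extremely bad outcome” numbers above. This is not AI Impacts’ analysis code: the filename and column names are hypothetical placeholders, so check the real file before running anything.

```python
# Minimal sketch, not AI Impacts' own code. The filename and column names are
# hypothetical placeholders -- inspect the downloaded file for the real ones.
import pandas as pd

df = pd.read_csv("2022_ai_survey_anonymized.csv")     # hypothetical filename

# Median probability of an "extremely bad (e.g., human extinction)" outcome,
# plus the shares of respondents at >= 10% and at exactly 0%.
p_bad = df["p_extremely_bad"].dropna()                 # hypothetical column name
print("median:", p_bad.median())
print("share giving at least 10%:", (p_bad >= 0.10).mean())
print("share giving 0%:", (p_bad == 0.0).mean())

# A similar split on a framing column (years-until-fixed-probability vs
# probability-within-fixed-years) would let you check the framing effect too.
```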
[Figure: Individual inferred gamma distributions]
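The exact fitting and aggregation procedure isn’t spelled out in this post, but a minimal sketch of the general idea behind the figure and the 2059 number might look like the following: fit a gamma CDF to each respondent’s (years, probability) answers, average the individual CDFs, and read off where the aggregate crosses 50%. The fixed horizons, toy responses, and least-squares fit below are illustrative assumptions, not the survey’s actual method.

```python
# Illustrative sketch: fit a gamma CDF per respondent, then aggregate.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

def gamma_cdf(t, shape, scale):
    return gamma.cdf(t, a=shape, scale=scale)

# Toy responses: (years from now, subjective probability of HLMI by then).
# The real survey's horizons and answers will differ.
respondents = [
    [(10, 0.10), (20, 0.30), (40, 0.60)],
    [(10, 0.02), (20, 0.05), (40, 0.20)],
    [(10, 0.50), (20, 0.80), (40, 0.95)],
]

years = np.linspace(1, 200, 2000)
cdfs = []
for answers in respondents:
    t, p = map(np.array, zip(*answers))
    (shape, scale), _ = curve_fit(gamma_cdf, t, p, p0=[2.0, 30.0],
                                  bounds=([1e-3, 1e-3], [np.inf, np.inf]))
    cdfs.append(gamma_cdf(years, shape, scale))

aggregate = np.mean(cdfs, axis=0)    # simple mean of the individual CDFs
median_year = years[np.searchsorted(aggregate, 0.5)]
print(f"aggregate 50% threshold crossed about {median_year:.0f} years out")
```

The actual analysis may weight, truncate, or aggregate the fitted distributions differently; the point is only to show the shape of the computation that a forecast like “37 years until a 50% chance” comes out of.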

The survey had a lot of questions (randomized between participants to make it a reasonable length for any given person), so this blog post doesn’t cover much of it. A bit more is on the survey page, and more will be added.

Thanks to many people for help and support with this project! (Many but probably not all listed on the survey page.)


Cover image: Probably a bootstrap confidence interval around an aggregate of the above forest of inferred gamma distributions, but honestly everyone who can be sure about that sort of thing went to bed a while ago. So, one for a future update. I have more confidently held views on whether one should let uncertainty be the enemy of putting things up.
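For what it’s worth, a pointwise bootstrap band of the sort the cover image probably shows can be sketched by resampling respondents with replacement and re-averaging their fitted CDFs. This reuses the cdfs and years arrays from the sketch above and is again a guess at the general shape of the computation, not the actual analysis.

```python
# Bootstrap a pointwise 95% band around the aggregate CDF (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
cdf_matrix = np.asarray(cdfs)          # shape: (n_respondents, n_years)
boot_aggregates = []
for _ in range(1000):
    idx = rng.integers(0, len(cdf_matrix), size=len(cdf_matrix))  # resample respondents
    boot_aggregates.append(cdf_matrix[idx].mean(axis=0))
lo, hi = np.percentile(boot_aggregates, [2.5, 97.5], axis=0)
# (lo, hi) now bound the aggregate probability of HLMI at each entry of `years`.
```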


  1. Or, ‘human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species’
  2. That is, ‘future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species’


1 Comment

  1. > The median respondent’s probability of x-risk from humans failing to control AI [1] was 10%, weirdly higher than the median chance of human extinction from AI in general [2], at 5%.

    Rather than use two distinct terms (“x-risk” and “extinction”) in the same sentence to refer to the same exact thing (“human extinction or similarly permanent and severe disempowerment of the human species”), I suggest just using one of them.

    Before checking the exact wording of the questions, my first thought was that getting medians of 5% and 10% wasn’t surprising given that x-risk is a broader category than extinction.

