Evidence against current methods leading to human-level artificial intelligence

This is a list of published arguments that we are aware of for the claim that current methods in artificial intelligence will not lead to human-level AI.

Details

Clarifications

We take ‘current methods’ to mean techniques for engineering artificial intelligence that are already known, involving no “qualitatively new ideas”.1 We have not precisely defined ‘current methods’. Many of the works we cite refer to currently dominant methods such as machine learning (especially deep learning) and reinforcement learning.

By human-level AI, we mean AI with a level of performance comparable to humans. We have in mind the operationalization of ‘high-level machine intelligence’ from our 2016 expert survey on progress in AI: “Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers.”2

Because we are considering intelligent performance, we have deliberately excluded arguments that AI might lack certain ‘internal’ features, even if it manifests human-level performance.3 4 We assume, concurring with Chalmers (2010), that “If there are systems that produce apparently [human-level intelligent] outputs, then whether or not these systems are truly conscious or intelligent, they will have a transformative impact.”5

Methods

We read well-known criticisms of current AI approaches that we were already aware of. Using these as a starting point, we searched for further sources and solicited recommendations from colleagues familiar with artificial intelligence.

We include arguments that sound plausible to us, or that we believe other researchers take seriously. Beyond that, we take no stance on the relative strengths and weaknesses of these arguments.

We cite works that plausibly support pessimism about current methods, regardless of whether the works in question (or their authors) actually claim that current methods will not lead to human-level artificial intelligence. 

We do not include arguments that serve primarily as undercutting defeaters of positive arguments that current methods will lead to human-level intelligence. For example, we do not include arguments that recent progress in machine learning has been overstated.

These arguments might overlap in various ways, depending on how one understands them. For example, some of the challenges for current methods might be special instances of more general challenges. 

List of arguments

Inside view arguments

These arguments are ‘inside view’ in that they look at the specifics of current methods.

  • Innate knowledge: Intelligence relies on prior knowledge which it is currently not feasible to embed via learning techniques, recapitulate via artificial evolution, or hand-specify. — Marcus (2018)6
  • Data hunger: Training a system to human level using current methods will require more data than we will be able to generate or acquire. — Marcus (2018)7 (see the sketch below)
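
To make the data hunger argument concrete, here is a toy sketch of our own (the task and “learner” are our hypothetical constructions, not drawn from Marcus 2018): a learner that merely memorizes observed patterns, applied to a 10-bit parity task. Because it has no inductive bias toward the underlying rule, its accuracy is limited by how much of the input space the training data happens to cover.

```python
# Toy illustration (our own construction, not from the cited works):
# a memorization-based learner on 10-bit parity. Accuracy tracks the
# fraction of the input space covered by training data, so driving the
# error down requires training sets comparable to the whole input space.
import random

N_BITS = 10

def parity(bits):
    """The target concept: 1 if an odd number of bits are set, else 0."""
    return sum(bits) % 2

def sample(n):
    return [tuple(random.randint(0, 1) for _ in range(N_BITS)) for _ in range(n)]

for n_train in (64, 256, 1024, 4096):
    train = {x: parity(x) for x in sample(n_train)}  # pure memorization
    test = sample(2000)
    # Answer from memory when possible; otherwise the learner can only guess.
    correct = sum(
        (train[x] if x in train else random.randint(0, 1)) == parity(x)
        for x in test
    )
    print(f"{n_train:5d} training examples -> accuracy {correct / len(test):.2f}")
```

A learner that starts with the rule needs no examples at all; the memorizer needs a training set comparable in size to the whole input space, which for realistic tasks is astronomically large.
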
Capacities

Some researchers claim that there are capacities that are required for human-level intelligence but are difficult or impossible to engineer with current methods.8 Some commonly cited capacities are:

  • Causal models: Building causal models of the world that are rich, flexible, and explanatory — Lake et al. (2016)9, Marcus (2018)10, Pearl (2018)11
  • Compositionality: Exploiting systematic, compositional relations between entities of meaning, both linguistic and conceptual — Fodor and Pylyshyn (1988)12, Marcus (2001)13, Lake and Baroni (2017)14 (see the sketch after this list)
  • Symbolic rules: Learning abstract rules rather than extracting statistical patterns — Marcus (2018)15
  • Hierarchical structure: Dealing with hierarchical structure, e.g. that of language — Marcus (2018)16
  • Transfer learning: Learning lessons from one task that transfer to other tasks that are similar, or that differ in systematic ways — Marcus (2018)17, Lake et al. (2016)18
  • Common sense understanding: Using common sense to understand language and reason about new situations — Brooks (2018)19, Davis and Marcus (2015)20
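
As a concrete illustration of the compositionality challenge, here is a toy sketch of our own, loosely in the spirit of Lake and Baroni’s (2017) SCAN experiments (the command set and both “learners” are our hypothetical simplifications, not their actual models). A learner that only memorizes seen input-output pairs fails on a novel combination that a compositional interpreter handles trivially.

```python
# Toy illustration (our own simplification, not Lake and Baroni's setup):
# commands such as "jump twice" map to action sequences such as "JUMP JUMP".

TRAIN = {
    "walk": "WALK",
    "run": "RUN",
    "jump": "JUMP",
    "walk twice": "WALK WALK",
    "run twice": "RUN RUN",
    # "jump twice" is deliberately held out, as in SCAN-style splits.
}

def memorizer(command):
    """Stand-in for a purely pattern-matching learner: it succeeds only
    on inputs it has already seen verbatim."""
    return TRAIN.get(command)  # None for anything outside the training set

PRIMITIVES = {"walk": "WALK", "run": "RUN", "jump": "JUMP"}

def compositional(command):
    """Exploits the systematic rule 'X twice -> X X', so it can interpret
    combinations never seen during training."""
    words = command.split()
    if len(words) == 2 and words[1] == "twice":
        return " ".join([PRIMITIVES[words[0]]] * 2)
    return PRIMITIVES.get(command)

print(memorizer("jump twice"))      # None: no generalization to the new combination
print(compositional("jump twice"))  # JUMP JUMP
```

Lake and Baroni report an analogous failure for sequence-to-sequence networks, which generalize poorly to combinations whose parts, but not whose composition, appear in training.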

Outside view arguments

These arguments are ‘outside view’ in that they look at “a class of cases chosen to be similar in relevant respects”21 to current artificial intelligence research, without looking at the specifics of current methods.

  • Lack of progress: There are many tasks specified several decades ago that have not been solved, e.g. effectively manipulating a robot arm or answering open-ended questions. — Brooks (2018)22, Jordan (2018)23
  • Past predictions: Past researchers have incorrectly predicted that we would get to human-level AI with then-current methods. — Chalmers (2010)24
  • Other fields: Several fields have taken centuries or more to crack; AI could well be one of them. — Brooks (2018)25

Contributions

Robert Long and Asya Bergal contributed research and writing.

Notes

  1. “It now seems possible that we could build ‘prosaic’ AGI, which can replicate human behavior but doesn’t involve qualitatively new ideas about ‘how intelligence works’”. — Christiano, Paul. “Prosaic AI Alignment”. 2017. Medium. Accessed August 13 2019. https://ai-alignment.com/prosaic-ai-control-b959644d79c2.
  2. Grace, Katja, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans. “When will AI exceed human performance? Evidence from AI experts.” Journal of Artificial Intelligence Research 62 (2018): 729-754.
  3. Block, Ned. “Psychologism and behaviorism.” The Philosophical Review 90, no. 1 (1981): 5-43.
  4. Searle, John. “Minds, brains, and programs.” Behavioral and Brain Sciences 3 (1980): 417-457.
  5. Chalmers, David. “The Singularity: A Philosophical Analysis”. 2010. David Chalmers. Accessed August 12 2019. http://consc.net/papers/singularity.pdf.
  6. Section 3.1, “Deep learning thus far is data hungry” — Marcus, Gary. 2018. “Deep Learning: A Critical Appraisal”. arXiv. Accessed August 12 2019. https://arxiv.org/abs/1801.00631.
  7. Section 3.1, “Deep learning thus far is data hungry” — Marcus, Gary. 2018. “Deep Learning: A Critical Appraisal”. arXiv. Accessed August 12 2019. https://arxiv.org/abs/1801.00631.
  8. One could disagree with the claim that a given capacity is in fact required, or with the claim that current methods cannot engineer it.
  9. Section 4.2.2, “Causality” — Lake, Brenden M., Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. 2016. “Building Machines That Learn And Think Like People”. arXiv. Accessed August 12 2019. https://arxiv.org/abs/1604.00289.
  10. Section 3.7, “Deep learning thus far cannot inherently distinguish causation from correlation ” — Marcus, Gary. 2018. “Deep Learning: A Critical Appraisal”. arXiv. Accessed August 12 2019. https://arxiv.org/abs/1801.00631.
  11. Pearl, Judea. 2018. “Theoretical Impediments To Machine Learning With Seven Sparks From The Causal Revolution”. arXiv. Accessed August 12 2019. https://arxiv.org/abs/1801.04016.
  12. Part III: The need for Symbol Systems: Productivity, Systematicity, Compositionality and Inferential Coherence, in particular Sections “Systematicity of cognitive representation” and “Compositionality of representations” — Fodor, Jerry A., and Zenon W. Pylyshyn. 1988. “Connectionism and Cognitive Architecture: A Critical Analysis.” Rutgers Center for Cognitive Science. Accessed August 12 2019. http://ruccs.rutgers.edu/images/personal-zenon-pylyshyn/proseminars/Proseminar13/ConnectionistArchitecture.pdf.
  13. Marcus, G.F., 2001. The algebraic mind: Integrating connectionism and cognitive science. MIT press.
  14. Lake, Brenden M., and Marco Baroni. 2017. “Generalization Without Systematicity: On The Compositional Skills Of Sequence-To-Sequence Recurrent Networks”. arXiv. Accessed August 13 2019. https://arxiv.org/abs/1711.00350.
  15. Section 5.2, “Symbol-manipulation, and the need for hybrid models” — Marcus, Gary. 2018. “Deep Learning: A Critical Appraisal”. arXiv. Accessed August 12 2019. https://arxiv.org/abs/1801.00631.
  16. Section 3.3, “Deep learning thus far has no natural way to deal with hierarchical structure” — Marcus, Gary. 2018. “Deep Learning: A Critical Appraisal”. arXiv. Accessed August 12 2019. https://arxiv.org/abs/1801.00631.
  17. Section 3.2, “Deep learning thus far is shallow and has limited capacity for transfer” — Marcus, Gary. 2018. “Deep Learning: A Critical Appraisal”. arXiv. Accessed August 12 2019. https://arxiv.org/abs/1801.00631.
  18. Section 4.2.3, “Learning-to-learn” — Lake, Brenden M., Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. 2016. “Building Machines That Learn And Think Like People”. arXiv. Accessed August 12 2019. https://arxiv.org/abs/1604.00289.
  19. Section 3, “Read a Book” — Brooks, Rodney. “[For&AI] Steps Toward Super Intelligence III, Hard Things Today”. 2018. Rodney Brooks. Accessed August 12 2019. http://rodneybrooks.com/forai-steps-toward-super-intelligence-iii-hard-things-today/.
  20. Davis, Ernest, and Gary Marcus. “Commonsense reasoning and commonsense knowledge in artificial intelligence”. Communications of the ACM 58, no. 9 (2015): 92-103.
  21. Kahneman, Daniel and Lovallo, Dan. “Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking”. 1993. Warrington College of Business. Accessed August 13 2019. http://bear.warrington.ufl.edu/brenner/mar7588/Papers/kahneman-lovallo-mansci1993.pdf.
  22. Section 2, “Real Manipulation” — Brooks, Rodney. “[For&AI] Steps Toward Super Intelligence III, Hard Things Today”. 2018. Rodney Brooks. Accessed August 12 2019. http://rodneybrooks.com/forai-steps-toward-super-intelligence-iii-hard-things-today/.
  23. Jordan, Michael. “Artificial Intelligence — The Revolution Hasn’t Happened Yet”. 2018. Medium. Accessed August 12 2019.
  24. “It must be acknowledged that every path to AI has proved surprisingly difficult to date. The history of AI involves a long series of optimistic predictions by those who pioneer a method, followed by periods of disappointment and reassessment. This is true for a variety of methods involving direct programming, machine learning, and artificial evolution, for example. Many of the optimistic predictions were not obviously unreasonable at the time, so their failure should lead us to reassess our prior beliefs in significant ways. It is not obvious just what moral should be drawn: Alan Perlis has suggested ‘A year spent in artificial intelligence is enough to make one believe in God’. So optimism here should be leavened with caution.” — Chalmers, David. “The Singularity: A Philosophical Analysis”. 2010. David Chalmers. Accessed August 12 2019. http://consc.net/papers/singularity.pdf.
  25. “Einstein predicted gravitational waves in 1916. It took ninety nine years of people looking before we first saw them in 2015. Rainer Weiss, who won the Nobel prize for it, sketched out the successful method after fifty one years in 1967. And by then the key technologies needed, laser and computers, were in widespread commercial use. It just took a long time. Controlled nuclear fusion has been forty years away for well over sixty years now. Chemistry took millennia, despite the economic incentive of turning lead into gold (and it turns out we still can’t do that in any meaningful way). P=NP? has been around in its current form for forty seven years and its solution would guarantee whoever did it to be feted as the greatest computer scientist in a generation, at least. No one in theoretical computer science is willing to guess when we might figure that one out. And it doesn’t require any engineering or production. Just thinking. Some things just take a long time, and require lots of new technology, lots of time for ideas to ferment, and lots of Einstein and Weiss level contributors along the way. I suspect that human level AI falls into this class. But that it is much more complex than detecting gravity waves, controlled fusion, or even chemistry, and that it will take hundreds of years.” — Brooks, Rodney. “[For&AI] Steps Toward Super Intelligence I, How We Got Here”. 2018. Rodney Brooks. Accessed August 13 2019. https://rodneybrooks.com/forai-steps-toward-super-intelligence-i-how-we-got-here/.

We welcome suggestions for this page or anything on the site via our feedback box, though we will not be able to address all of them.