The Singularity Isn’t Near is an article in MIT Technology Review by Paul Allen, which argues that a singularity brought about by superhuman-level AI will not arrive by 2045 (as Ray Kurzweil predicts).

The summarized argument

We will not have human-level AI by 2045:

1. To reach human-level AI, we need software as well as hardware.

2. To get this software, we need one of the following:

  • a detailed scientific understanding of the brain
  • a way to ‘duplicate’ brains
  • creation of something equivalent to a brain from scratch

3. A detailed scientific understanding of the brain is unlikely by 2045:

  1. To have enough understanding by 2045, we would need a massive acceleration of scientific progress:
    1. We are just scraping the surface of understanding the foundations of human cognition.
  2. A massive acceleration of progress in brain science is unlikely
    1. Science progresses irregularly:
      1. e.g. the discoveries of long-term potentiation, the columnar organization of cortical areas, and neuroplasticity
    2. Science doesn’t seem to be exponentially accelerating
    3. There is a ‘complexity brake’: the more we understand, the more complicated the next level of understanding becomes

4. ‘Duplicating’ brains is unlikely by 2045:

  1. Even if we have good scans of brains, we need good understanding of how the parts behave to complete the model
  2. We have little such understanding
  3. Such understanding is not exponentially increasing

5. Creation of something equivalent to a brain from scratch is unlikely by 2045:

  1. Artificial intelligence research appears to be far from providing this
  2. Artificial intelligence research is unlikely to improve quickly:
    1. Artificial intelligence research does not appear to be exponentially improving
    2. The ‘complexity brake’ (see above) also operates here
    3. This is the kind of area where progress is not a reliable exponential


The controversial parts of this argument appear to be the parallel claims that progress in brain science and in AI research is neither fast enough nor accelerating enough to yield an adequate understanding of the brain, or adequate AI algorithms, by 2045. Allen’s article does not offer enough support to evaluate these claims on its own, and others with at least as much expertise disagree with them, so they appear to be open questions.

To evaluate them, it appears we would need more comparable measures of accomplishments and rates of progress in brain science and AI. Given only qualitative claims of the kind Allen makes, it is hard to tell whether progress that is slow and has far to go implies that it will not reach a specific destination by a specific date.