Making or breaking a thinking machine

By Katja Grace, 18 January 2015

Here is a superficially plausible argument: the brains of the slowest humans are almost identical to those of the smartest humans. And thus—in the great space of possible intelligence—the ‘human-level’ band must be very narrow. Since all humans are basically identical in design—since you can move from the least intelligent human to the sharpest human with imperceptible changes—artificial intelligence development will probably cross this band of human capability in a blink. It won’t stop on the way to spend years being employable but cognitively limited, or proficient but not promotion material. It will be superhuman before you notice it’s nearly human. And from our anthropomorphic viewpoint, from which the hop separating the village idiot from Einstein looks like most of the spectrum, this might seem like shockingly sudden progress.

This whole line of reasoning is wrong.

It is true that human brains are very similar. However, this implies very little about the design difficulty of moving from the intelligence of one to the intelligence of the other artificially. The basic problem is that the smartest humans need not be better-designed — they could be better instantiations of the same design.

What’s the difference? Consider an analogy. Suppose you have a yard full of rocket cars. They all look basically the same, but you notice that their peak speeds are very different. Some of the cars can drive at a few hundred miles per hour, while others can barely accelerate above a crawl. You are excited to see this wide range of speeds, because you are a motor enthusiast and have been building your own vehicle. Your car is not quite up to the pace of the slowest cars in your yard yet, but you figure that since all those cars are so similar, once you get it to two miles per hour, it will soon be rocketing along.

If a car is slow because it is a rocket car with a broken fuel tank, that car will be radically easier to improve than the first car you build that can go over two miles per hour. The difference is something like an afternoon of tinkering versus two centuries of engineering. Intuitively, this is because the broken rocket car already contains almost all of the design effort that goes into making a fast rocket car. That design isn’t currently being used, but you know it’s there and how to put it to use.

Similarly, if you have a population of humans, and some of them are severely cognitively impaired, you shouldn’t get too excited about the prospects for your severely cognitively impaired robot.

Another way to see there must be something wrong with the argument is to note that humans can actually be arbitrarily cognitively impaired. Some of them are even dead. And the brain of a dead person can closely resemble the brain of a live person. Yet while these brains are again very similar in design, AI passed dead-human-level years ago, and this did not suggest that it was about to zip on past live-human-level.

Here is a different way to think about the issue. Recall that the argument tries to infer, from the fact that nearly identical brain designs span the whole range of human intelligence, that AI progress will be rapid across that range. However, we can predict that human intelligence is likely to vary significantly using only evolutionary considerations, ones that are orthogonal to the ease of AI development.

In particular, if much of the variation in intelligence comes from deleterious mutations, then the distribution of intelligence is more or less set by the equilibrium between selection pressure for intelligence and the appearance of new mutations. Regardless of how hard it was to design improvements to humans, we would always see this spectrum of cognitive capacities, so the spectrum cannot tell us how hard it is to improve intelligence by design. (Though this would be different if the harm inflicted by a single mutation were likely to be closely related to the difficulty of designing an incrementally more intelligent human.)
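To make the equilibrium point concrete, here is a minimal sketch of mutation-selection balance. All of the numbers in it are invented for illustration, not estimates: the mutation rate U, the selection coefficient s, the per-mutation cost, and the mutation-free baseline are assumptions. The only point it illustrates is that the equilibrium mutation load, roughly Poisson-distributed with mean U/s, produces a spread of measured intelligence whose width is set by the equilibrium rather than by how hard the underlying design would be to improve.

```python
import numpy as np

# Toy mutation-selection balance (illustrative parameters only).
# At equilibrium, the number of deleterious mutations an individual
# carries is roughly Poisson with mean U / s, whatever the difficulty
# of designing a smarter brain.
rng = np.random.default_rng(0)

U = 0.05        # hypothetical rate of new cognition-affecting mutations per genome
s = 0.002       # hypothetical selection coefficient against each mutation
cost = 3.0      # hypothetical IQ-point cost per mutation
baseline = 175  # hypothetical mutation-free score, chosen so the mean lands near 100

load = rng.poisson(U / s, size=100_000)  # mean load = U/s = 25 mutations
iq = baseline - cost * load

print(f"mean: {iq.mean():.0f}, standard deviation: {iq.std():.0f}")
print(f"1st to 99th percentile: {np.percentile(iq, 1):.0f} to {np.percentile(iq, 99):.0f}")
```

With these made-up numbers the simulated population spans a spread of tens of points, and exactly the same spread would appear whether improving on the mutation-free design took an afternoon or a century.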

If we knew more about the sources of the variation in human intelligence, we might be able to draw a stronger conclusion. And if we entertain several possible explanations for the variation in human intelligence, we can still infer something; but the strength of our inference is limited by the prior probability that deleterious mutations on their own can lead to significant variation in intelligence. Without learning more, this probability shouldn’t be very low.
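As a rough illustration of that limit, consider a toy Bayesian update from observing wide variation among near-identical human brains. Suppose the ‘narrow band in design space’ hypothesis predicts the observation with certainty, while a ‘wide band’ can still produce it via deleterious mutations alone with some probability; then, starting from even odds, the observation moves us very little unless that probability is small. (The numbers below are purely illustrative.)

```python
# Toy Bayesian update (illustrative only).
# H_narrow: the human band is narrow in design space, so wide observed
#           variation among near-identical brains is guaranteed.
# H_wide:   the band is wide, so the observed variation must come from
#           deleterious mutations (or similar) alone, with probability p_mut.

def posterior_narrow(prior_narrow, p_mut):
    """P(H_narrow | wide observed variation), given P(variation | H_wide) = p_mut."""
    numerator = prior_narrow * 1.0
    return numerator / (numerator + (1 - prior_narrow) * p_mut)

for p_mut in (1.0, 0.8, 0.5, 0.1):
    print(f"P(mutations alone suffice) = {p_mut:.1f} -> "
          f"posterior on narrow band = {posterior_narrow(0.5, p_mut):.2f}")
```

Unless the chance that mutations alone produce the observed spread is low, the posterior barely moves, which is the sense in which the spectrum of human intelligence is weak evidence about the difficulty of designing improvements.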

In sum, while the brain of an idiot is designed much like that of a genius, this does not imply that designing a genius is about as easy as designing an idiot.

We are still thinking about this, so now is a good time to tell us if you disagree. I even turned on commenting, to make it easier for you. It should work on all of the blog posts now.

(Image: Rocket car, photographed by Jon ‘ShakataGaNai’ Davis)

(Top image: One of the first cars, 1769)



6 Comments

  1. Hi Katja,

    Thanks for your work on this site – I can see that you’ve been fleshing it out. I remember a conversation not so long ago in which I expressed a position with some overlap with the one that you’ve argued against here, so I wanted to clarify.

    I think that an argument similar to the argument in the first paragraph shows that (greatly) superhuman intelligence is physically possible: that there exists (in the mathematical sense of existence) a collection of computer algorithms that could be patched together and run on a computer (perhaps with robotic supplements) to recognize patterns far better than humans can, across all contexts in which humans recognize patterns.

    This doesn’t have any immediate implications concerning whether superhuman AI will emerge suddenly before we recognize that the AI is nearly human.

    But separately, I have the intuition that it is in fact the case that if one can develop an AI with, e.g., mammal-level visual perception, one would be most of the way toward superhuman AI.

    I don’t have high confidence in this.

    The intuition is supported by the one learning algorithm hypothesis as well as the opinions of some researchers who I know. On an object level:

    (i) I would guess that there was no stage in evolutionary history between the emergence of mammals and the emergence of humans when there were very strong marginal selective pressures in the direction of greater intelligence, and there would have been times when there were marginal selective pressures against intelligence. (Here I’m leaving ‘very strong’ unquantified, and of course the devil is in the details.) If this is true, then human intelligence is something that a stochastic process generated over the span of ~100 million years… maybe I’m scope insensitive with respect to the time elapsed, or the size of earth, but it seems this could only have occurred if humans’ cognitive algorithms were (initially) not very different from those of mammals generically, e.g. up until the point at which humans began to diverge from other great apes. And my impression is that chimpanzees are awfully close to humans in intelligence… that when it comes to abstract (nonverbal) pattern recognition, there’s overlap between humans and chimpanzees. I would not be surprised if, out of a population of 10,000 chimpanzees, at least one could do as well as a human of IQ 70 on a Raven’s Matrices test.

    (ii) It seems to me that AI research is missing a major piece to the puzzle of general intelligence – that there’s some sense in which it’s not in the right ballpark. This is not a negative statement about any of the researchers involved, who generally aren’t explicitly working with a view toward general intelligence. If I recall correctly, Andrew Ng has said that he doesn’t think that refinements of deep learning as it presently exists will suffice. So my sense is not that there’s a historical track record of slow and steady progress toward creating progressively more intelligent systems. As such, it seems to me that there’s no reason to think that one will need more than a single major insight (together with a huge amount of work by brilliant people, but on the order of magnitude of what went into the development of modern physics, rather than something requiring millennia of research).

    • Thanks for your thoughts! I agree that AI far beyond human abilities is likely; however, I don’t immediately see how something like the argument I present in the first paragraph suggests that.

      I think your considerations are good.

      On (i), even if chimps were quite close to humans, if the human range is wide, going from chimp to Einstein may take a while. But still, if you think evolutionary pressure was small, the distance from the best chimp to the best human should also be small (roughly), since that distance would appear to be from design improvements, rather than aberrations from the design. I do think that without quantification though, it’s hard to say anything quantitative! Also, I’m not sure what makes you say there were not very strong marginal selective pressures for intelligence since mammals emerged (I don’t mean to disagree necessarily — I just don’t know much about this evidence).

      On (ii), I know of very few areas where one big insight produced fairly sharp progress in anything important, which makes me doubt that it would happen here. If you do know of such cases, I would be interested in adding them to our page on the topic. I don’t mean to suggest that big insights are not important – I’d just guess that their impact is spread out over time, and so they don’t imply especially fast progress.

      • >Also, I’m not sure what makes you say there were not very strong marginal selective pressures for intelligence since mammals emerged (I don’t mean to disagree necessarily — I just don’t know much about this evidence).

        I think that the main thing driving my intuition here is a sense that general intelligence is not in fact useful until one reaches near human levels. One test of this would be to look at the evolutionary success (perhaps as measured by population after controlling for physical size or something) of different kinds of mammals as a function of their level of general intelligence and see whether one has a strong positive correlation.

        >On (ii), I know of very few areas where one big insight produced fairly sharp progress in anything important, which makes me doubt that it would happen here. If you do know of such cases, I would be interested in adding them to our page on the topic. I don’t mean to suggest that big insights are not important – I’d just guess that their impact is spread out over time, and so they don’t imply especially fast progress.

        The time horizon that I have in mind is ~15 years. Maybe this is what you yourself have in mind.

        Also, maybe the “one big insight” framing is distracting – think about the development of the Internet – it’s hard to isolate a single big insight, or attribute it to a single person or group of people, but at some point an understanding of the broad range of things that it could be used for became apparent… It may not have been until 2000 that people could appreciate the impact it would have by now. (I don’t know the history.)

        • >I think that the main thing driving my intuition here is a sense that general intelligence is not in fact useful until one reaches near human levels. One test of this would be to look at the evolutionary success (perhaps as measured by population after controlling for physical size or something) of different kinds of mammals as a function of their level of general intelligence and see whether one has a strong positive correlation.

          I’m not sure this test works well, because selective pressures often operate in relatively narrow niches. Lots of organisms do well without being able to move much at all, but I suspect it is still pretty reproductively useful for a given rodent to be able to run fast.

          For a trait that is not useful until a certain level, I would expect to see a bimodal distribution of it – some creatures wouldn’t have any, and some would have a lot (like flying). Instead it seems that animals range across a spectrum of intelligence, though I may be wrong about that.

          On abrupt progress, I am looking for discontinuities equivalent to at least about 15 years of progress, but hopefully more. I know of ~5. Sorry about the ‘one big insight’ framing – actually I’m just interested in any temporally brief progress, even if it was produced by a cluster of insights.

  2. “The brain of a dead person can closely resemble the brain of a live person. Yet while these brains are again very similar in design, AI passed dead-human-level years ago, and this did not suggest that it was about to zip on past live-human-level.” Not sure this argument holds. Dead brains don’t resemble live ones in the most important aspect, namely the actual movement of electrical impulses. And if you argue that it’s the structure which matters for similarity – well, a dead person’s brain could also be structurally very intelligent, if only somebody switched it on.

    In any case, couldn’t one argue that computers are likely to become better instantiations of the same design quite quickly, and that the band of intelligence possible for any given design may well be wider than the band of human intelligence?

    • I agree that dead brains don’t resemble live brains in an important way. My point is that surface structural similarity is not sufficient. Similarly, unintelligent brains and intelligent brains clearly differ in important ways, which aren’t evident at a structural level.

      How would you argue that computers are likely to become better instantiations of the same design quickly? In the case of broken objects, it’s easy to make a better instantiation quickly because you already have the good design. State of the art AI is by definition the best AI we have designed. If lots of effort is going into a project like AI, it would be strange to have a good design with only broken instantiations of it, which can be easily fixed.
