AI and the Big Nuclear Discontinuity

By Katja Grace, 9 January 2015

As we’ve discussed before, the advent of nuclear weapons was a striking technological discontinuity in the effectiveness of explosives. In 1940, no one had ever made an explosive twice as effective as TNT. By 1945 the best explosive was 4,500 times as effective as TNT, and by 1960 the best was 5 million times as effective.

Progress in nuclear weapons is sometimes offered as an analogy for possible rapid progress in AI (e.g. by Eliezer Yudkowsky here and here). It’s worth clarifying the details of this analogy, which has nothing to do with the discontinuous progress in weapon effectiveness just described. It is about a completely different discontinuity: a single nuclear pile’s quick transition from essentially inert to extremely reactive.

As you add more fissile material to a nuclear pile, little happens until it reaches a critical mass. After reaching critical mass, the chain reaction proceeds much faster than the human actions that assembled it. By analogy, perhaps as you add intelligence to a pile of intelligence, little will happen until it reaches a critical level which initiates a chain reaction of improvements (‘recursive self-improvement’) which proceeds much faster than the human actions that assembled it.
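
For concreteness, here is a minimal toy sketch of the threshold dynamic the analogy points at. It is illustrative arithmetic, not real reactor physics: each unit is assumed to produce k successors per generation, and the long-run behaviour flips qualitatively as k crosses 1.

```python
# Toy model of criticality: long-run output flips abruptly as the
# reproduction factor k crosses 1. All numbers are illustrative.

def chain_reaction(k, generations=1000, start=1.0, cap=1e12):
    """Population size if each unit yields k successors per generation."""
    n = start
    for _ in range(generations):
        n = min(n * k, cap)  # the cap stands in for the pile blowing itself apart
    return n

for k in (0.90, 0.99, 1.01, 1.10):
    print(f"k = {k:.2f}: population after 1000 generations ~ {chain_reaction(k):.3g}")
```

Below k = 1, every increment of material fizzles; just above it, the same small increment buys runaway growth. The analogy imagines a similar threshold for increments of intelligence.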

This discontinuity in individual nuclear explosions is not straightforwardly related to the technological discontinuity caused by their introduction. Older explosives were also based on chain reactions. The big jump seems to be the move from chemical chain reactions to nuclear chain reactions: two naturally occurring sources of energy with very different characteristic scales, and with no alternatives in between them. This jump has no obvious analog in AI.
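
To put rough numbers on ‘very different characteristic scales’, here is a back-of-envelope comparison. The figures are approximate textbook values I am supplying, not numbers from the post: a chemical explosive like TNT releases a few electronvolts per reacting molecule, while each fission of a U-235 nucleus releases about 200 MeV.

```python
# Rough comparison of the energy scales of chemical vs nuclear chain
# reactions, per kilogram of fuel. Approximate textbook values.

EV = 1.602e-19       # joules per electronvolt
AVOGADRO = 6.022e23  # particles per mole

tnt_j_per_kg = 4.2e6  # TNT: ~4.2 MJ/kg (the standard 'ton of TNT' convention)

# U-235: ~200 MeV per fission, molar mass ~235 g/mol
u235_j_per_kg = (1000 / 235) * AVOGADRO * 200e6 * EV

print(f"TNT:   ~{tnt_j_per_kg:.2g} J/kg")
print(f"U-235: ~{u235_j_per_kg:.2g} J/kg")              # ~8e13 J/kg
print(f"ratio: ~{u235_j_per_kg / tnt_j_per_kg:.1g}")    # ~2e7
```

A gap of roughly seven orders of magnitude per kilogram, with no known fuel in between, is what the ‘no alternatives in between’ clause points at.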

One might wonder whether the technological discontinuity was nevertheless connected to the discontinuous dynamics of individual nuclear piles. Perhaps the density and volume of fissile uranium required for any explosive was the reason that we did not see small, feeble nuclear weapons in between chemical weapons and powerful nuclear weapons. This doesn’t match the history, however. Nobody knew that concentrating fissile uranium was important until after fission was discovered in 1938, less than seven years before the first nuclear detonation. Even if nuclear weapons had grown in strength gradually over this period, that would still amount to around one thousand years of progress, at the historical rate, per year. The dynamics of individual piles can only explain a minuscule part of the discontinuity.
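
To make the arithmetic behind that comparison explicit, here is a back-of-envelope sketch using only figures already quoted in this post: roughly a 4,500-fold effectiveness jump over the roughly seven years between the discovery of fission and the first detonation.

```python
# Back-of-envelope: what annual growth rate would a 'gradual' version of the
# jump require? Uses only figures quoted in the post; both are rough.

effectiveness_jump = 4500  # 1945's best explosive vs TNT
window_years = 7           # fission discovered 1938, first detonation 1945

annual_factor = effectiveness_jump ** (1 / window_years)
print(f"required growth: ~{annual_factor:.1f}x per year")  # ~3.3x per year
```

So even the most gradual version of the story requires effectiveness to more than triple every year, in a field that had never previously managed a single doubling.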

There may be an important analogy between AI progress and nuclear weapons. And the development of nuclear weapons was in some sense a staggeringly abrupt technological change. But we probably shouldn’t conclude from this that the development of AI is much more likely to be comparably abrupt.

If you vaguely remember that AI progress and nuclear weapons are analogous, and that nuclear weapons were a staggeringly abrupt development in explosive technology, try not to infer from this that AI is especially likely to be a staggeringly abrupt development.

(Image: Trinity test after ten seconds, taken from Atomic Bomb Test Site Photographs, courtesy of U.S. Army White Sands Missile Range Public Affairs Office)

