Over thousands of years, humans became better at producing explosions. A weight of explosive that would have blown up a tree stump in the year 800 could have blown up more than three tree stumps in the 1930s. Then suddenly, a decade later, the figure became more like nine thousand tree stumps. The first nuclear weapons represented a massive leap – something like 6000 years of progress in one step*.
Though such jumps have been historical exceptions, some observers think a massive jump in AI capability is likely. Progress may be fast due to the apparent amenability of software to groundbreaking insights, the possibility of rapid applications (to deploy a new algorithm, you don’t have to build any factories), the plausibility of simple conceptual ingredients underlying intelligent behavior, and the potential for ‘recursive self-improvement’ to speed software development to rates characteristic of superhumanly-programmed computers rather than those of humans.
We think the question, ‘will AI progress be discontinuous?’ is a good one to investigate. Not just because advance notice of abrupt world-changing developments is sure to come in handy somehow–nor because of the exciting degree of disagreement it elicits. What makes this a particularly good topic to study now is that it helps us know what other information is most relevant to understanding AI progress.
One might hope to make predictions about how soon AI will reach human-level by extrapolating from how fast we are moving and how far we have to go, for instance.** Or we could monitor the rate at which automation replaces workers, or at which the performance of AI systems improves. These all provide valuable information if you think human-level AI will be reached gradually, by the continuation of existing processes. However, if you expect progress to be abrupt and uneven, these indicators are much less informative.
So whether AI will be reached abruptly or incrementally is an important question. But is it a tractable one to make progress on? My guess is yes. Plenty of evidence bears on this question: the historical patterns of progress in other technologies, instances of abnormally uneven progress, arguments suggesting abnormal degrees of abnormality in the AI case, theories explaining past continuity and discontinuities, cases that look relevantly analogous to AI…
We know some examples of very fast technological progress; simply understanding those cases better is likely to be an informative start.
So we have started a list of cases here. Each case appears to involve abrupt technological progress. We looked into each one a little, usually just enough to check that it really involved abrupt progress and to get approximate rates of progress before and during the discontinuity. We intend to do a more thorough job later for the cases that seem particularly important or interesting.
This list will hopefully help us understand what fast progress looks like historically (How fast is it? How far is it? How unexpected is it?), and when it happens (Does it usually flow from a huge intellectual insight? The discovery of a new natural phenomenon? Overcoming a large upfront investment?).
So far, we have a couple of really big jumps, a couple of smaller jumps, a bunch of potentially interesting but uncertain cases, and a rich assortment of purported discontinuities that we are yet to investigate.
After nuclear weapons, the second most interesting case we’ve found is high temperature superconductivity. The maximum temperature of superconduction appears to have made something like 150 years of progress in one jump in 1986, after the discovery of a new class of materials whose maximum superconducting temperatures exceeded what was thought possible.
Do you have thoughts on this line of research? Do you have ideas for how to investigate cases? Do you know of historical cases of abrupt technological progress? Do you want to see our list?
* Measured in doublings; you would get a much more extreme estimate if you expected linear progress. Relative effectiveness (RE) had doubled less than twice in 1100 years, then it doubled more than eleven times when the first nuclear weapons emerged. (For more on nuclear weapons, see our page on them).
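The footnote’s arithmetic can be sketched as a rough back-of-envelope calculation. A minimal sketch, treating the footnote’s approximate figures (roughly 2 doublings over 1100 years, then roughly 11 doublings in the nuclear jump) as exact:

```python
# Back-of-envelope: "years of progress" represented by the nuclear jump,
# measured in doublings of relative effectiveness (RE).
# Figures below are rounded approximations from the footnote.
doublings_before = 2      # ~doublings from the year 800 to the 1930s
years_before = 1100       # span over which those doublings occurred
doublings_in_jump = 11    # ~doublings when nuclear weapons arrived

years_per_doubling = years_before / doublings_before  # ~550 years
progress_years = doublings_in_jump * years_per_doubling
print(round(progress_years))  # ~6050, i.e. roughly 6000 years of progress
```

Since the footnote says “less than twice” and “more than eleven times”, the true figure is if anything larger; “something like 6000 years” is a conservative reading.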
** Interestingly, asking AI researchers about rates of progress gives much more pessimistic estimates than asking them about when human-level AI will arrive, based on some very preliminary research. This may mean that AI researchers expect human-level AI to arrive following abnormally fast progress, though the discrepancy could be explained in many other ways. It seems worth looking into.
(Image: The first nuclear chain reaction. Painting by Gary Sheehan, Atomic Energy Commission.)