Discontinuous progress in history: an update

Katja Grace
We’ve been looking for historic cases of discontinuously fast technological progress, to help with reasoning about the likelihood and consequences of abrupt progress in AI capabilities. We recently finished expanding this investigation to 37 technological trends. This blog post is a quick update on our findings. See the main page on the research and its outgoing links for more details.


Three kinds of competitiveness

By Daniel Kokotajlo

In this post, I distinguish between three different kinds of competitiveness — Performance, Cost, and Date — and explain why I think these distinctions are worth the brainspace they occupy. For example,

Continuity of progress

Historic trends in book production

The number of books produced in the previous hundred years, sampled every hundred or fifty years between 600 AD and 1800 AD, contains five discontinuities of more than ten years, four of them of more than one hundred years. The last

Continuity of progress

Penicillin and historic syphilis trends

Penicillin did not precipitate a discontinuity of more than ten years in deaths from syphilis in the US. Nor were there other discontinuities in that trend between 1916 and 2015. The number of syphilis cases

AI Inputs

AI conference attendance

Six of the seven largest AI conferences hosted a total of 27,396 attendees in 2018. Attendance at these conferences grew by an average of 21% per year from 2011 to 2018. These six conferences host around six


Examples of AI systems producing unconventional solutions

This page lists examples of AI systems producing solutions of an unexpected nature, whether due to goal misspecification or successful optimization. This list is highly incomplete. List: CoastRunners’ burning boat; incomprehensible evolved logic gates; AlphaGo’s

Accuracy of AI Predictions

Chance date bias

There is modest evidence that people consistently forecast events later when asked the probability that the event occurs by a certain year, rather than the year in which a certain probability of the event will


Changes in funding in the AI safety field

Guest post by Seb Farquhar, originally posted to the Center for Effective Altruism blog. The field of AI safety has been growing quickly over the last three years, since the publication of Superintelligence. One of


Joscha Bach on remaining steps to human-level AI

Last year John and I had an interesting discussion with Joscha Bach about what ingredients of human-level artificial intelligence we seem to be missing, and how to improve AI forecasts more generally. Thanks to Connor Flexman’s summarizing efforts, you can now learn about