Blog

Discontinuous progress in history: an update

Katja Grace

We’ve been looking for historical cases of discontinuously fast technological progress, to help with reasoning about the likelihood and consequences of abrupt progress in AI capabilities. We recently finished expanding this investigation to 37 technological trends. This blog post is a quick update on our findings. See the main page on the research and its outgoing links for more details.

Blog

Takeaways from safety by default interviews

Asya Bergal

Last year, several researchers at AI Impacts (primarily Robert Long and I) interviewed prominent researchers inside and outside of the AI safety field who are relatively optimistic about advanced AI being developed safely. These interviews were originally intended to focus narrowly on reasons for optimism, but we ended up covering a variety of topics, including AGI timelines, the likelihood of current techniques leading to AGI, and what the right priorities in AI safety are right now. (…)

Blog

Atari early

By Katja Grace

DeepMind announced that their Agent57 beats the ‘human baseline’ at all 57 Atari games usually used as a benchmark. I think this is probably enough to resolve one of the predictions we (…)

Blog

Three kinds of competitiveness

By Daniel Kokotajlo

In this post, I distinguish between three different kinds of competitiveness — Performance, Cost, and Date — and explain why I think these distinctions are worth the brainspace they occupy. For example, (…)

Blog

AGI in a vulnerable world

By Asya Bergal

I’ve been thinking about a class of AI-takeoff scenarios where a very large number of people can build dangerous, unsafe AGI before anyone can build safe AGI. This seems particularly likely if: (…)