Continuity of progress

Historic trends in the maximum superconducting temperature

The maximum superconducting temperature of any material up to 1993 contained four greater than 10-year discontinuities: a 14-year discontinuity with NbN in 1941, a 26-year discontinuity with LaBaCuO4 in 1986, a 140-year discontinuity with YBa2Cu3O7 …

Continuity of progress

Historic trends in chess AI

The Elo rating of the best chess program measured by the Swedish Chess Computer Association did not contain any greater than 10-year discontinuities between 1984 and 2018. A four-year discontinuity in 2008 was notable …

Continuity of progress

Historic trends in flight airspeed records

Flight airspeed records between 1903 and 1976 contained one greater than 10-year discontinuity: a 19-year discontinuity corresponding to the Fairey Delta 2 flight in 1956. The average annual growth in flight airspeed markedly increased with …

AI Timeline Surveys

Walsh 2017 survey

Toby Walsh surveyed hundreds of experts and non-experts in 2016 and found their median estimates for ‘when a computer might be able to carry out most human professions at least as well as a typical …

Conversation notes

Conversation with Adam Gleave

AI Impacts talked to AI safety researcher Adam Gleave about his views on AI risk. With his permission, we have transcribed this interview. Participants: Adam Gleave — PhD student at the Center for Human-Compatible AI, …

Blog

Robin Hanson on the futurist focus on AI

Robert Long and I recently talked to Robin Hanson—GMU economist, prolific blogger, and longtime thinker on the future of AI—about the amount of futurist effort going into thinking about AI risk. It was noteworthy to …

Conversation notes

Conversation with Robin Hanson

AI Impacts talked to economist Robin Hanson about his views on AI risk and timelines. With his permission, we have posted and transcribed this interview. Participants: Robin Hanson — Associate Professor of Economics, George Mason …

Blog

Rohin Shah on reasons for AI optimism

I, along with several AI Impacts researchers, recently talked to Rohin Shah about why he is relatively optimistic about AI systems being developed safely. Rohin Shah is a 5th-year PhD student at the Center …

Conversation notes

Conversation with Rohin Shah

AI Impacts talked to AI safety researcher Rohin Shah about his views on AI risk. With his permission, we have transcribed this interview. Participants: Rohin Shah — PhD student at the Center for Human-Compatible AI, …

Conversation notes

Conversation with Paul Christiano

AI Impacts talked to AI safety researcher Paul Christiano about his views on AI risk. With his permission, we have transcribed this interview. Participants: Paul Christiano — OpenAI safety team; Asya Bergal — AI Impacts …