The maximum superconducting temperature of any material up to 1993 contained four greater-than-10-year discontinuities: a 14-year discontinuity with NbN in 1941, a 26-year discontinuity with LaBaCuO4 in 1986, a 140-year discontinuity with YBa2Cu3O7 in 1987…
The Elo rating of the best chess program measured by the Swedish Chess Computer Association did not contain any greater-than-10-year discontinuities between 1984 and 2018. A four-year discontinuity in 2008 was notable…
Flight airspeed records between 1903 and 1976 contained one greater-than-10-year discontinuity: a 19-year discontinuity corresponding to the Fairey Delta 2 flight in 1956. The average annual growth in flight airspeed markedly increased with…
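The three excerpts above all score jumps in "years of discontinuity." As a minimal sketch, assuming AI Impacts' usual convention that a jump's size is the number of years of progress it represents at the previous rate of improvement, the arithmetic behind a "greater-than-10-year" label looks like the following (the function name and sample numbers are illustrative, not data from the pages excerpted here):

```python
def discontinuity_years(prev_value, new_value, prev_annual_rate):
    """Return a jump's size in years of progress at the previous rate.

    Assumes the metric had been improving roughly linearly at
    prev_annual_rate (units per year) before the jump.
    """
    if prev_annual_rate <= 0:
        raise ValueError("previous rate of progress must be positive")
    return (new_value - prev_value) / prev_annual_rate

# Hypothetical example: a metric improving by 0.5 units/year jumps
# by 7 units at once. That equals 14 years of prior progress, so it
# would count as a greater-than-10-year discontinuity.
print(discontinuity_years(prev_value=10.0, new_value=17.0,
                          prev_annual_rate=0.5))  # -> 14.0
```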
Toby Walsh surveyed hundreds of experts and non-experts in 2016 and found their median estimates for ‘when a computer might be able to carry out most human professions at least as well as a typical human’…
AI Impacts talked to AI safety researcher Adam Gleave about his views on AI risk. With his permission, we have transcribed this interview. Participants: Adam Gleave, PhD student at the Center for Human-Compatible AI…
By Asya Bergal, 13 November 2019. Robert Long and I recently talked to Robin Hanson (GMU economist, prolific blogger, and longtime thinker on the future of AI) about the amount of futurist effort going into thinking about…
AI Impacts talked to economist Robin Hanson about his views on AI risk and timelines. With his permission, we have posted and transcribed this interview. Participants: Robin Hanson, Associate Professor of Economics, George Mason University…
By Asya Bergal, 31 October 2019. I, along with several AI Impacts researchers, recently talked to Rohin Shah about why he is relatively optimistic about AI systems being developed safely. Rohin Shah is a 5th-year PhD student at the Center for Human-Compatible AI…
AI Impacts talked to AI safety researcher Rohin Shah about his views on AI risk. With his permission, we have transcribed this interview. Participants: Rohin Shah, PhD student at the Center for Human-Compatible AI…
AI Impacts talked to AI safety researcher Paul Christiano about his views on AI risk. With his permission, we have transcribed this interview. Participants: Paul Christiano, OpenAI safety team; Asya Bergal, AI Impacts…