Blog

Takeaways from safety by default interviews

By Asya Bergal

Last year, several researchers at AI Impacts (primarily Robert Long and I) interviewed prominent researchers inside and outside of the AI safety field who are relatively optimistic about advanced AI being developed safely. These interviews were originally intended to focus narrowly on reasons for optimism, but we ended up covering a variety of topics, including AGI timelines, the likelihood of current techniques leading to AGI, and what the right things to do in AI safety are right now. (…)

Atari early

By Katja Grace, 1 April 2020

DeepMind announced that their Agent57 beats the ‘human baseline’ at all 57 Atari games usually used as a benchmark. I think this is probably enough to resolve one of (…)

Three kinds of competitiveness

By Daniel Kokotajlo, 30 March 2020

In this post, I distinguish between three different kinds of competitiveness (Performance, Cost, and Date) and explain why I think these distinctions are worth the brainspace they (…)

AGI in a vulnerable world

By Asya Bergal, 25 March 2020

I’ve been thinking about a class of AI-takeoff scenarios where a very large number of people can build dangerous, unsafe AGI before anyone can build safe AGI. This seems (…)

Robin Hanson on the futurist focus on AI

By Asya Bergal, 13 November 2019

Robert Long and I recently talked to Robin Hanson (GMU economist, prolific blogger, and longtime thinker on the future of AI) about the amount of futurist effort going into thinking about (…)

Rohin Shah on reasons for AI optimism

By Asya Bergal, 31 October 2019

Several other AI Impacts researchers and I recently talked to Rohin Shah about why he is relatively optimistic about AI systems being developed safely. Rohin Shah is a 5th (…)