Interviews on plausibility of AI safety by default

This is a list of interviews on the plausibility of AI safety by default.

Background

AI Impacts conducted interviews with several thinkers on AI safety in 2019 as part of a project exploring arguments for expecting advanced AI to be safe by default. The interviews also covered other AI safety topics, such as timelines to advanced AI, the likelihood of current techniques leading to AGI, and currently promising AI safety interventions.

List


We welcome suggestions for this page or anything on the site via our feedback box, though we will not be able to address all of them.