By Asya Bergal, 13 November 2019 Robert Long and I recently talked to Robin Hanson—GMU economist, prolific blogger, and longtime thinker on the future of AI—about the amount of futurist effort going into thinking about
AI Impacts talked to economist Robin Hanson about his views on AI risk and timelines. With his permission, we have posted and transcribed this interview. Participants: Robin Hanson — Associate Professor of Economics,
Oren Etzioni surveyed 193 AAAI fellows in 2016 and found that 67% of them expected that ‘we will achieve Superintelligence’ someday, but only after more than 25 years. Details Oren Etzioni, CEO of the Allen Institute
By Asya Bergal, 31 October 2019 I, along with several AI Impacts researchers, recently talked to Rohin Shah about why he is relatively optimistic about AI systems being developed safely. Rohin Shah is a 5th
AI Impacts talked to AI safety researcher Rohin Shah about his views on AI risk. With his permission, we have transcribed this interview. Participants: Rohin Shah — PhD student at the Center for
By Rick Korzekwa, 17 September 2019 Artificial intelligence defeated a pair of professional StarCraft II players for the first time in December 2018. Although this was generally regarded as an impressive achievement, it quickly became
AI Impacts talked to AI safety researcher Paul Christiano about his views on AI risk. With his permission, we have transcribed this interview. Participants: Paul Christiano — OpenAI safety team Asya Bergal
By Asya Bergal, 11 September 2019 As part of our AI optimism project, we talked to Paul Christiano about why he is relatively hopeful about the arrival of advanced AI going well. Paul Christiano works
By Daniel Kokotajlo, 11 September 2019 Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. [Epistemic status: Argument by analogy to historical cases. Best case scenario, it’s just one argument among many.
By Robert Long, 23 August 2019 Earlier this month, I spoke with Ernie Davis about why he is skeptical that risks from superintelligent AI are substantial and tractable enough to merit dedicated work. This was