Blog

Robin Hanson on the futurist focus on AI

By Asya Bergal, 13 November 2019. Robert Long and I recently talked to Robin Hanson (GMU economist, prolific blogger, and longtime thinker on the future of AI) about the amount of futurist effort going into thinking about …

Conversation notes

Conversation with Robin Hanson

AI Impacts talked to economist Robin Hanson about his views on AI risk and timelines. With his permission, we have posted and transcribed this interview. Contents: Participants, Summary, Audio, Transcript. Participants: Robin Hanson, Associate Professor of Economics, …

AI Timeline Surveys

Etzioni 2016 survey

Oren Etzioni surveyed 193 AAAI fellows in 2016 and found that 67% of them expected that ‘we will achieve Superintelligence’ someday, but in more than 25 years. Details: Oren Etzioni, CEO of the Allen Institute …

Blog

Rohin Shah on reasons for AI optimism

By Asya Bergal, 31 October 2019. I, along with several AI Impacts researchers, recently talked to Rohin Shah about why he is relatively optimistic about AI systems being developed safely. Rohin Shah is a 5th …

Conversation notes

Conversation with Rohin Shah

AI Impacts talked to AI safety researcher Rohin Shah about his views on AI risk. With his permission, we have transcribed this interview. Contents: Participants, Summary, Transcript. Participants: Rohin Shah, PhD student at the Center for …

Conversation notes

Conversation with Paul Christiano

AI Impacts talked to AI safety researcher Paul Christiano about his views on AI risk. With his permission, we have transcribed this interview. Contents: Participants, Summary, Transcript. Participants: Paul Christiano, OpenAI safety team; Asya Bergal …

Blog

Ernie Davis on the landscape of AI risks

By Robert Long, 23 August 2019. Earlier this month, I spoke with Ernie Davis about why he is skeptical that risks from superintelligent AI are substantial and tractable enough to merit dedicated work. This was …