Ernie Davis on the landscape of AI risks

By Robert Long, 23 August 2019

Ernie Davis (NYU)

Earlier this month, I spoke with Ernie Davis about why he is skeptical that risks from superintelligent AI are substantial and tractable enough to merit dedicated work. This was part of a larger project that we’ve been working on at AI Impacts, documenting arguments from people who are relatively optimistic about risks from advanced AI. 

Davis is a professor of computer science at NYU, and works on the representation of commonsense knowledge in computer programs. He wrote Representations of Commonsense Knowledge (1990) and, with Gary Marcus, is the author of the forthcoming Rebooting AI (2019). We reached out to him because of his expertise in artificial intelligence and because he wrote a critical review of Nick Bostrom’s Superintelligence.

Davis told me, “the probability that autonomous AI is going to be one of our major problems within the next two hundred years, I think, is less than one in a hundred.” We spoke about why he thinks that, what problems in AI he thinks are more urgent, and what his key points of disagreement with Nick Bostrom are. A full transcript of our conversation, lightly edited for concision and clarity, can be found here.
