By Robert Long, 23 August 2019 Earlier this month, I spoke with Ernie Davis about why he is skeptical that risks from superintelligent AI are substantial and tractable enough to merit dedicated work. This was
By Tegan McCaslin, 28 February 2019 The boring answer to that question is, “Yes, birds.” But that’s only because birds can pack more neurons into a walnut-sized brain than a monkey with a brain four
By Daniel Kokotajlo, 2 July 2019 Figure 0: The “four main determinants of forecasting accuracy.” Experience and data from the Good Judgment Project (GJP) provide important evidence about how to make accurate predictions. For a
This is a guest post by Ben Garfinkel. We revised it slightly, at his request, on February 9, 2019. A recent OpenAI blog post, “AI and Compute,” showed that the amount of computing power consumed
This is a guest cross-post by Cullen O’Keefe, 28 September 2018 High-Level Takeaway: The extension of rights to corporations likely does not provide a useful analogy to the potential extension of rights to digital minds. Introduction: Examining
This is a guest post by Ryan Carey, 10 July 2018. Over the last few years, AI experiments have used much more computation than previously. But just last month, an investigation by
By Katja Grace, 5 July 2018 Before I get to substantive points, there has been some confusion over the distinction between blog posts and pages on AI Impacts. To make it clearer, this blog post
By Michael Wulfsohn, 6 April 2018 I was convinced. An intelligence explosion would result in the sudden arrival of a superintelligent machine. Its abilities would far exceed those of humans in ways we can’t imagine
By Tegan McCaslin, 30 March 2018 When I took on the task of counting up all the brain’s fibers and figuratively laying them end-to-end, I had a sense that it would be relatively easy: do a
By Katja Grace, 24 February 2018 Will advanced AI let some small group of people or AI systems take over the world? AI X-risk folks and others have accrued lots of arguments about this over