According to experience and data from the Good Judgment Project, the following are associated with successful forecasting, in rough decreasing order of combined importance and confidence: past performance in the same broad domain; making more
This is a guest post by Ben Garfinkel. We revised it slightly, at his request, on February 9, 2019. A recent OpenAI blog post, “AI and Compute,” showed that the amount of computing power consumed
In diabetic retinopathy, automated systems started out just below expert human-level performance, and took around ten years to reach it. Details Diabetic retinopathy is a complication of diabetes in
The AGI-11 survey polled 60 participants at the AGI-11 conference. In it: nearly half of respondents believed that AGI would appear before 2030; nearly 90% believed that AGI would appear
This is a guest cross-post by Cullen O’Keefe, 28 September 2018 High-Level Takeaway The extension of rights to corporations likely does not provide a useful analogy to the potential extension of rights to digital minds. Introduction Examining
Hardware overhang refers to a situation where large quantities of computing hardware can be diverted to running powerful AI systems as soon as the software is developed. Details Definition In the context of AI forecasting, hardware overhang refers
Trends for tallest ever structure heights, tallest ever freestanding structure heights, tallest existing freestanding structure heights, and tallest ever building heights have each seen 5-8 discontinuities of more than ten years. These are: Djoser and
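For readers unfamiliar with how such discontinuities are sized, here is a minimal sketch of the idea of measuring a jump in "years of progress" at the prior rate. It assumes a simple linear fit to earlier data points and uses purely hypothetical numbers; it may differ in detail from the methodology applied to these trends.

```python
# Minimal sketch: measure a discontinuity as the number of years the previous
# trend would have needed to reach the new value. Assumes a linear fit to the
# prior data; the methodology used for the structure-height trends may differ.

import numpy as np

def discontinuity_years(years, values, new_year, new_value):
    """Years of progress (at the prior linear rate) by which new_value jumps ahead."""
    slope, intercept = np.polyfit(years, values, 1)   # prior linear trend
    predicted = slope * new_year + intercept          # value the trend expected
    return (new_value - predicted) / slope            # excess, in years of progress

# Hypothetical numbers purely for illustration:
past_years = [1900, 1910, 1920, 1930]
past_heights = [100, 110, 120, 130]                   # metres, a 1 m/year trend
print(discontinuity_years(past_years, past_heights, 1931, 300))  # ~169 years
```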
This is a guest post by Ryan Carey, 10 July 2018. Over the last few years, it has become clear that AI experiments have been using much more computation than before. But just last month, an investigation by
By Katja Grace, 5 July 2018 Before I get to substantive points, there has been some confusion over the distinction between blog posts and pages on AI Impacts. To make it clearer, this blog post
Compute used in the largest AI training runs appears to have roughly doubled every 3.5 months between 2012 and 2018. Details According to Amodei and Hernandez, on the OpenAI Blog: …since 2012, the amount of compute
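As a rough back-of-the-envelope illustration of what a 3.5-month doubling time implies (the six-year span below is an assumption for illustration, not a figure quoted from the post):

```python
# Back-of-the-envelope sketch: how a 3.5-month doubling time compounds.
# The six-year span below (roughly 2012-2018) is an illustrative assumption,
# not a figure taken from the OpenAI post.

def growth_factor(elapsed_months, doubling_months=3.5):
    """Total multiplicative growth after elapsed_months at the given doubling time."""
    return 2 ** (elapsed_months / doubling_months)

months = 6 * 12  # roughly 2012-2018
print(f"Growth over {months} months: ~{growth_factor(months):,.0f}x")
# Roughly 1.5 million-fold; the exact figure depends on the endpoints and
# doubling time one assumes.
```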