By Katja Grace, 27 November 2017 Here is a video summary of some highlights from AI Impacts research over the past few years, from the GoCAS Existential Risk workshop in Göteborg in September. Thanks to the folks there
By Katja Grace, 26 November 2017 When people make predictions about AI, they often assume that computing hardware will carry on getting cheaper for the foreseeable future, at about the same rate that it usually has
The price of the cheapest available hardware (in dollars per single-precision FLOPS) appears to be falling by around an order of magnitude every 10-16 years. This rate is slower than the trend of FLOPS/$ observed over the past quarter century,
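To make the quoted rate concrete, here is a minimal sketch (not from the original post) converting "an order of magnitude every 10-16 years" into the implied multiplicative improvement in price-performance per year:

```python
# Annual improvement factor in FLOPS/$ implied by a 10x price drop
# spread over a given number of years (10-16 years, per the post).
def annual_factor(years_per_10x: float) -> float:
    """Multiplicative yearly gain in FLOPS per dollar."""
    return 10 ** (1.0 / years_per_10x)

for years in (10, 16):
    print(f"10x every {years} years -> ~{annual_factor(years):.2f}x per year")
```

So the range corresponds to roughly 15-26% improvement in price-performance per year, noticeably slower than the roughly annual doubling often associated with historical Moore's-law-era trends.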
A top supercomputer can deliver a GFLOPS (a billion floating-point operations per second) for around $3, as of 2017. The price of performance in top supercomputers continues to fall, as of 2016. Details TOP500.org maintains a list of top supercomputers and their
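A figure like "$3 per GFLOPS" comes from dividing a machine's total cost by its benchmarked performance. The sketch below illustrates the arithmetic; the system cost and performance numbers are hypothetical, chosen only to reproduce the ballpark figure above:

```python
# Price-performance of a supercomputer: total system cost divided by
# benchmarked performance. Inputs below are illustrative, not real data.
def price_per_gflops(system_cost_usd: float, perf_gflops: float) -> float:
    """Dollars per GFLOPS of delivered performance."""
    return system_cost_usd / perf_gflops

# e.g. a hypothetical $300M machine delivering 100 PFLOPS (1e8 GFLOPS):
print(price_per_gflops(300e6, 1e8))  # 3.0 dollars per GFLOPS
```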
This is a list of public datasets that we know of containing either measured or theoretical performance numbers for computer processors. List TOP500 maintains a list of the top 500 supercomputers, updated every six months
This is an interactive timeline we made, illustrating the median dates when respondents said they expected a 10%, 50%, and 90% chance of different tasks being automatable, in the 2016 Expert Survey on Progress in AI
By Katja Grace, 26 September 2017 We asked the ML researchers in our survey when they thought 32 narrow, relatively well-defined tasks would be feasible for AI. Eighteen of them were included in our paper
By Katja Grace, 25 September 2017 So, maybe you are concerned about AI risk. And maybe you are concerned that many people making AI are not concerned enough about it. Or not concerned about the
Most machine learning researchers expect machines will be able to create top quality music by 2036. Details Evidence from survey data In the 2016
Stuart Russell has argued that advanced AI poses a risk, because it will have the ability to make high quality decisions, yet may not share human values perfectly. Details Stuart Russell describes a risk from