Compute used in the largest AI training runs appears to have roughly doubled every 3.5 months between 2012 and 2018. Details: According to Amodei and Hernandez on the OpenAI Blog: …since 2012, the amount of compute
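As a back-of-envelope restatement (my arithmetic, not from the post), a 3.5-month doubling time is equivalent to training compute growing by roughly an order of magnitude per year:

```python
# What a 3.5-month doubling time implies per year (back-of-envelope,
# derived from the headline figure above, not taken from the post).
doubling_time_months = 3.5
doublings_per_year = 12 / doubling_time_months   # ≈ 3.43 doublings/year
yearly_growth = 2 ** doublings_per_year          # ≈ 10.8x per year

print(f"{yearly_growth:.1f}x growth in compute per year")
```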
By Michael Wulfsohn, 6 April 2018 I was convinced. An intelligence explosion would result in the sudden arrival of a superintelligent machine. Its abilities would far exceed those of humans in ways we can’t imagine
This is an incomplete list of concrete projects that we think are tractable and important. We may do any of them ourselves, but many also seem feasible to work on independently. Those we consider especially
By Tegan McCaslin, 30 March 2018 When I took on the task of counting up all the brain’s fibers and figuratively laying them end-to-end, I had a sense that it would be relatively easy: do a
The human brain’s approximately 86 billion neurons are probably connected by something like 850,000 km of axons and dendrites. Of this total, roughly 80% consists of short-range, local connections (averaging 680 microns in length), and approximately
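As a quick consistency check on these figures (my arithmetic, not from the post), the quoted totals imply on the order of a trillion short-range fibers:

```python
# Consistency check on the wiring figures quoted above (my arithmetic):
# how many local fibers do these totals imply?
total_length_m = 850_000 * 1000     # 850,000 km of axons and dendrites
local_share = 0.80                  # ~80% is short-range, local wiring
mean_local_length_m = 680e-6        # ~680 microns per local fiber

implied_local_fibers = local_share * total_length_m / mean_local_length_m
print(f"≈ {implied_local_fibers:.1e} short-range fibers")
```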
By Katja Grace, 24 February 2018 Will advanced AI let some small group of people or AI systems take over the world? AI X-risk folks and others have accrued lots of arguments about this over
We aren’t convinced by any of the arguments we’ve seen to expect a large discontinuity in AI progress above the extremely low base rate for all technologies. However, this topic is controversial, and many thinkers on
Computer performance per watt probably doubled every 1.5 years between 1945 and 2000. Since then, the trend has slowed: by 2015, performance per watt appeared to be doubling only every 2.5 years. Details: In 2011 Jon
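To make the slowdown concrete (my arithmetic, not from the post), here is what each doubling time implies over a decade:

```python
# What the two doubling-time regimes quoted above imply per decade
# (my arithmetic, derived from the excerpt's figures).
def growth_over(years, doubling_time_years):
    """Total multiplicative growth over `years` at the given doubling time."""
    return 2 ** (years / doubling_time_years)

early = growth_over(10, 1.5)   # 1945-2000 regime: ≈ 100x per decade
late = growth_over(10, 2.5)    # post-2000 regime: ≈ 16x per decade

print(f"≈ {early:.0f}x vs ≈ {late:.0f}x per decade")
```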
This page contains the data from Appendix 2 of William Nordhaus’ The progress of computing in usable formats. Notes: This data was collected from Appendix 2 of The progress of computing, using Tabula (a program for turning
Tensor Processing Units (TPUs) perform around 1 GFLOPS/$, when purchased as cloud computing. Details: In February 2018, the Google Cloud Platform blog said their TPUs can perform up to 180 TFLOPS and currently cost $6.50/hour. This
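The quoted hourly price doesn't translate directly into GFLOPS/$; one way to reconcile the two figures (the roughly three-year amortization period is my assumption, not stated in the excerpt) is:

```python
# How the ~1 GFLOPS/$ figure might follow from the quoted numbers.
# The ~3-year continuous-rental amortization is my assumption, not
# something stated in the excerpt above.
peak_gflops = 180e3        # 180 TFLOPS quoted for a Cloud TPU
price_per_hour = 6.50      # USD per hour, February 2018 pricing

hours = 3 * 365 * 24                           # ~3 years of continuous rental
effective_price = price_per_hour * hours       # ≈ $170,820 total spend
gflops_per_dollar = peak_gflops / effective_price

print(f"≈ {gflops_per_dollar:.2f} GFLOPS/$")
```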