Welcome to the AI Impacts blog.
AI Impacts is premised on two ideas (at least!):
- The details of the arrival of human-level artificial intelligence matter
Seven years to prepare is very different from seventy years to prepare. A weeklong transition is very different from a decade-long transition. Brain emulations require different preparations than do synthetic AI minds. Etc.
- Available data and reasoning can substantially educate our guesses about these details
We can track progress in AI subfields. We can estimate the hardware represented by the human brain. We can detect the effect of additional labor on software progress. Etc.
Our goal is to assemble relevant evidence and considerations, and to synthesize reasonable views on questions such as when AI will surpass human-level capabilities, how rapid development will be at that point, what advance notice we might expect, and what kinds of AI are likely to reach human-level capabilities first.
We are doing this recursively, first addressing much smaller questions, like:
- Is AI likely to surpass human level in a discontinuous spurt, or through incremental progress?
- Does AI software undergo discontinuous progress often?
- Is technological progress of any sort discontinuous often?
- When is technological progress discontinuous?
- Why did explosives undergo discontinuous progress in the form of nuclear weapons?
In this way, we hope to inform decisions about how to prepare for advanced AI, and about whether it is worth prioritizing over other pressing issues in the world. Researchers, funders, and other thinkers and doers are choosing how to spend their efforts on the future impacts of AI, and we want to help them choose well.
AI Impacts is currently something like a (brief) encyclopedia of semi-original AI forecasting research. That is, it is a growing collection of pages addressing particular questions or bodies of evidence relating to the future of AI. We intend to revise these in an ongoing fashion, in light of new investigations and debates.
Alongside producing reasonable views, we are interested in exploring and bettering humanity’s machinery for producing reasonable views. To this end, we have chosen this unusual – but we think promising – format, and may experiment with novel methods of organizing information and resolving questions and disagreements.
This blog exists to show you the most interesting findings of the AI Impacts project as we find them, and before they get lost in what we hope becomes a dense network of research pages. We might also write about other things, such as our thoughts on methodology, speculative opinions, news about the project itself, and anything else that seems like a good idea at the time.
If you like the sound of any of these things, consider signing up for one of our RSS feeds (blog, articles). If you don’t, or if you think you could (cheaply) like it more, we welcome your thoughts or suggestions.