The AI Impacts mission
In the coming decades, the burgeoning field of artificial intelligence could radically change every aspect of life in ways that we are only beginning to grasp. AI Impacts believes that understanding the details of this transition is critical to navigating it well, which could in turn make the difference between long-term human thriving and catastrophe. Despite the existence of many feasible projects that could shed light on this transition, research into it remains radically neglected.
Our past projects include a large survey of machine learning researchers, conducted in collaboration with researchers at Yale and Oxford Universities (the 16th most-discussed journal article of 2017), an estimate of brain-equivalent hardware, a mapping of hardware trajectories, and an investigation of historical technological discontinuities.
We aim to collect and organize existing knowledge, drawn from both the public literature and discussions with domain experts, as well as to conduct original research into underexplored issues. Most of AI Impacts’ output is made publicly accessible online in an effort to aid individual and organizational decision-making.
Future research projects will be in areas such as:
- Patterns in technological progress
- Brains and evolution
- Current trends in AI, hardware, and related social factors
- Good forecasting practices
- Opinions of AI practitioners and other relevant thinkers
Roles
We are not currently hiring.
How to apply
Fill out our application. Please note that although we accept applications at any time, we evaluate them on a rolling basis and may not get back to you for a while.
(If you prefer a format with all the questions on one page, you can find that here.)
About our organization
Started in 2014 by Katja Grace (current lead researcher) and Paul Christiano (now at the Alignment Research Center), AI Impacts was born out of an experimental project to assess the priority of different philanthropic causes using structured arguments and iterative discussion. We refocused on AI risk upon realizing how little the relevant considerations had been researched or documented, despite their importance to so many decisions (including our own) and the apparent wealth of useful projects. At the same time, we replaced the fragile structured-argument format with a more forgiving ‘knowledge web’ format.
As part of the broader Effective Altruism community, we prioritize inquiry into high-impact areas like existential risk, but we are also interested in other applications of AI forecasting.
AI Impacts is based at the Machine Intelligence Research Institute in Berkeley, California, and currently has three regular staff. We have been supported by grants from multiple institutions and individuals, including the Future of Humanity Institute, the Future of Life Institute, and the Open Philanthropy Project.