Researcher positions

AI Impacts is currently seeking part- and full-time researchers on contract to work on important questions relevant to AI forecasting.

The AI Impacts mission

In the coming decades, the burgeoning field of artificial intelligence could radically change every aspect of life in ways that we are only beginning to grasp. AI Impacts believes that understanding the details of this transition is critical to navigating it well, which in turn could be the difference between long-term human thriving and catastrophe. Research into this space is radically neglected at present, despite the existence of many feasible projects that could shed light on it.

Our past projects have included a large survey of machine learning researchers done in collaboration with researchers at Yale and Oxford Universities (16th most discussed journal article of 2017), an estimate of brain-equivalent hardware, mapping of hardware trajectories, and an investigation into historical technological discontinuities.

We aim to collect and organize existing knowledge, drawn from both public literature and discussions with domain experts, and to conduct original research into underexplored issues. Most of AI Impacts’ output is made publicly accessible online in an effort to aid individual and organizational decision-making.

Future research projects will be in areas such as:

  • Patterns in technological progress
  • Brains and evolution
  • Current trends in AI, hardware, and related social factors
  • Good forecasting practices
  • Opinions of AI practitioners and other relevant thinkers

The role

As a researcher for AI Impacts, you’ll be designing projects, conducting generalist research, and writing up your findings for an audience that may include AI researchers, policy makers, and philanthropic organizations. Depending on the project, you could be working independently, with one or more members of our small team, or with similarly aligned organizations and individuals. You may also have the opportunity to present AI Impacts research at conferences, workshops, and other events.

Since we’re currently a small organization, hires have the potential to play a large role in shaping AI Impacts as it grows, as well as influencing the space of AI forecasting.

More details:
  • This is a contract position
  • 20–40 hours per week (but feel free to get in touch if you’d like to contribute on a more sporadic basis)
  • Located in Berkeley, CA, but parts of the work can be done remotely
  • Candidates will begin with a trial period lasting between one and three months
  • Compensation is negotiable, with a lower bound at $20 per hour

Necessary skills
  • Generalist research experience
  • Ability to engage critically with academic literature
  • Excellent written communication skills
  • Ability to employ creative approaches to answer difficult or open-ended questions
  • Enthusiasm for accuracy

Desirable skills
  • Statistics or data analysis skill
  • Familiarity with current AI methods
  • Strong self-direction
  • Familiarity with Effective Altruism

How to apply

Fill out this form.

About our organization

Started in 2014 by Katja Grace (current director) and Paul Christiano (now at OpenAI), AI Impacts was born out of an experimental project to assess the priority of different philanthropic causes using structured arguments and iterative discussion. We refocused on AI risk upon realizing how little the relevant considerations had been researched or documented, in spite of their importance to so many decisions (including our own) and the apparent wealth of useful projects. At the same time, we replaced the fragile structured-argument format with a more forgiving ‘knowledge web’ format.

As part of the broader Effective Altruism community, we prioritize inquiry into high impact areas like existential risk, but are also interested in other applications of AI forecasting.

AI Impacts is based at the Machine Intelligence Research Institute in Berkeley, California, and currently has two regular staff. We’re supported by grants from multiple institutions and individuals, including the Future of Humanity Institute, the Future of Life Institute, EA Grants, and the Open Philanthropy Project.