Thanks for visiting!

This project aims to improve our understanding of the likely impacts of human-level artificial intelligence.

The intended audience includes researchers doing work related to artificial intelligence, philanthropists involved in funding research related to artificial intelligence, and policy-makers whose decisions may be influenced by their expectations about artificial intelligence.

The focus is particularly on the long-term impacts of sophisticated artificial intelligence. Although human-level AI may be far in the future, there are a number of important questions which we can try to address today and which may have implications for contemporary decisions. For example:

  • What should we believe about timelines for AI development?
  • How rapid is AI development likely to be as it approaches human level? How much advance notice should we expect of disruptive change?
  • What are the likely economic impacts of human-level AI?
  • Which paths to AI should be considered plausible or likely?
  • Will human-level AI tend to pursue particular goals, and if so, what kinds of goals?
  • Can we say anything meaningful about the impact of contemporary choices on long-term outcomes?

Today, public discussion on these issues appears to be highly fragmented and of limited credibility. More credible and clearly communicated views on these issues might help improve estimates of the social returns to AI investment, identify neglected research areas, improve policy, or productively channel public interest in AI.

The goal of the project is to clearly present and organize the considerations which inform contemporary views on these and related issues, to identify and explore disagreements, and to assemble whatever empirical evidence is relevant.

The project is provisionally organized as a collection of posts, each concerning a particular issue or body of evidence, describing what is known and attempting to synthesize a reasonable view in light of available evidence. These posts are intended to be continuously revised as disagreements emerge and to make explicit reference to those disagreements.

AI Impacts contributors

Katja Grace

Katja runs AI Impacts. She started doing this because she wanted to know what would happen with AI, and thought other people might too. She continues because she thinks it’s one of the most important research projects at the moment. Her background is in philosophy, economics, and human ecology, with particular interests in anthropic reasoning, artificial intelligence risk, and game theory. She blogs at meteuphoric.wordpress.com.

Rick Korzekwa

Rick works at AI Impacts because he thinks it is working on important problems that he can help with. His interests include game theory, forecasting, and empirical investigations that can shed light on the future of AI. Outside of work, he enjoys competitive cycling, cooking, and tabletop gaming. He holds a PhD in physics from the University of Texas at Austin.

Ronja Lutz

Ronja studied Philosophy and Linguistics at the University of Oxford. She is excited about research that will (hopefully) prevent human extinction, and is currently trying to find out how to best contribute. Some things she finds particularly interesting, besides forecasting the development of AI, are how to deal with moral uncertainty and how machines might acquire human-friendly values. You can email her at info [at] aiimpacts [dot] org.

Daniel Kokotajlo

Daniel works at AI Impacts while getting his PhD in Philosophy from UNC Chapel Hill. His background is in decision theory, formal epistemology, consciousness, and philosophy of science. At AI Impacts he has done a variety of things, but right now his primary project is an investigation into the Agent AI vs. Tool AI debate.

Justis Mills

Justis Mills studied Philosophy at New College of Florida. His main interests are Effective Altruism, Homestuck, Super Smash Bros, and writing fiction. You can find some of the last of these, and other miscellanea, here.

Jimmy Rintjema

Jimmy Rintjema is a freelance contractor who specializes in providing support for startups and small companies. He is especially interested in helping organizations that study existential risk and artificial intelligence safety. When he is not glued to his computer, Jimmy enjoys playing soccer and car camping. He resides in Ontario, Canada.

Michael Wulfsohn

Michael is a Fellow of the Actuaries Institute in Australia (FIAA) and holds a master’s degree in International and Development Economics. He has previously worked as an investment analyst for a consulting firm, a researcher for a development policy think-tank, an economist at a developing-country central bank, and a consultant to multilateral development banks. His research interests include regional and global integration, global public goods, and global catastrophes.

Past contributors

Paul Christiano

John Salvatier

Ben Hoffman

Stephanie Zolayvar

Tegan McCaslin

Connor Flexman

Finan Adamson