Thanks for visiting!

This project aims to improve our understanding of the likely impacts of human-level artificial intelligence.

The intended audience includes researchers working on artificial intelligence, philanthropists funding AI-related research, and policy-makers whose decisions may be influenced by their expectations about artificial intelligence.

The focus is particularly on the long-term impacts of sophisticated artificial intelligence. Although human-level AI may be far in the future, there are a number of important questions which we can try to address today and which may have implications for contemporary decisions. For example:

  • What should we believe about timelines for AI development?
  • How rapid is AI development likely to be near the human level? How much advance notice should we expect of disruptive change?
  • What are the likely economic impacts of human-level AI?
  • Which paths to AI should be considered plausible or likely?
  • Will human-level AI tend to pursue particular goals, and if so what kinds of goals?
  • Can we say anything meaningful about the impact of contemporary choices on long-term outcomes?

Today, public discussion on these issues appears to be highly fragmented and of limited credibility. More credible and clearly communicated views on these issues might help improve estimates of the social returns to AI investment, identify neglected research areas, improve policy, or productively channel public interest in AI.

The goal of the project is to clearly present and organize the considerations which inform contemporary views on these and related issues, to identify and explore disagreements, and to assemble whatever empirical evidence is relevant.

The project is provisionally organized as a collection of posts concerning particular issues or bodies of evidence, describing what is known and attempting to synthesize a reasonable view in light of available evidence. These posts are intended to be continuously revised in light of outstanding disagreements and to make explicit reference to those disagreements.

AI Impacts contributors

Research

Katja Grace

Katja co-founded AI Impacts, where she is the Lead Researcher. She started doing this because she wanted to know what would happen with AI, and thought other people might too. She continues because she thinks it’s one of the most important research projects at the moment. Her background is in philosophy, economics, and human ecology, with particular interests in anthropic reasoning, artificial intelligence risk, and game theory. She blogs at world spirit sock puppet.

Rick Korzekwa

Rick is the Director at AI Impacts, where he started as a researcher in 2019. He splits his time between running the organization and doing research. His current research interests are analyzing AI progress in the context of human performance, learning from the past about how we can confront big problems that require foresight, and figuring out how technological progress works, especially around very large advances. Areas where he has more enthusiasm than expertise include game theory, ethics, and AI alignment. Outside of work, he enjoys competitive cycling, tinkering, and tabletop gaming. He holds a PhD in physics from the University of Texas at Austin.

Zach Stein-Perlman

Zach’s research interests include narrow AI capabilities and their strategic implications, timelines forecasting, how relevant actors think about AI and how they could better coordinate, and macrostrategy. His background is mostly in philosophy, math, and political science. He also likes cats, Shakespeare, chocolate, fantasy novels, and social deduction games.

Jeffrey Heninger

Jeffrey is a research analyst at AI Impacts. His research interests include understanding broad trends of technological progress. He has a background in physics, especially chaos theory, turbulence, and fusion, with a PhD from the University of Texas at Austin. He enjoys running, hiking, church, and reading varied nonfiction.

Harlan Stewart

Harlan is a research assistant at AI Impacts. He is curious about perceptions of AI, the inputs that go into making AI, and progress towards functionally simulating biological brains. In his free time, he likes to play board games and read science fiction. Harlan has a background in mathematics and education.

Support

Justis Mills

Justis Mills studied Philosophy at New College of Florida. His main interests are Effective Altruism, Homestuck, Super Smash Bros, and writing fiction. You can find some of the last of these, and other miscellanea, here.

Jimmy Rintjema

Jimmy Rintjema is a freelance contractor who specializes in supporting startups and small companies. He is especially interested in helping organizations that study existential risk and artificial intelligence safety. When he is not glued to his computer, Jimmy enjoys playing soccer and car camping. He resides in Ontario, Canada.

Past contributors

Paul Christiano

John Salvatier

Ben Hoffman

Stephanie Zolayvar

Tegan McCaslin

Connor Flexman

Finan Adamson

Michael Wulfsohn

Ronja Lutz

Ronny Fernandez

Daniel Kokotajlo

Asya Bergal

Aysja Johnson
