Against a General Factor of Doom
If you ask people a bunch of specific doomy questions and their answers are suspiciously correlated, they may be expressing a single overall p(Doom) rather than answering each question on its own merits. But a general factor of doom is unlikely to be an accurate depiction of reality: the future will probably be surprisingly doomy in some ways and surprisingly tractable in others.
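To make the statistical point concrete, here is a minimal simulation (an illustrative sketch, not code from the post; it assumes only numpy). It compares survey answers driven by one latent doom level against answers given independently per question, and shows that the latent-factor case produces exactly the kind of suspicious cross-question correlation described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_questions = 1000, 8

# Scenario A: each person answers every question from one latent doom level,
# plus some question-specific noise.
latent_doom = rng.normal(size=(n_people, 1))
answers_general = latent_doom + 0.5 * rng.normal(size=(n_people, n_questions))

# Scenario B: each question is judged independently on its own merits.
answers_independent = rng.normal(size=(n_people, n_questions))

def mean_offdiag_corr(answers):
    """Average pairwise correlation between distinct questions."""
    c = np.corrcoef(answers, rowvar=False)
    off_diagonal = c[~np.eye(n_questions, dtype=bool)]
    return off_diagonal.mean()

print(f"general factor: mean r = {mean_offdiag_corr(answers_general):.2f}")    # ~0.8
print(f"independent:    mean r = {mean_offdiag_corr(answers_independent):.2f}")  # ~0.0
```

A high average off-diagonal correlation across logically unrelated questions is the signature of a general factor; answers formed question-by-question show no such pattern.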