Against a General Factor of Doom

If you ask people a bunch of specific doomy questions and their answers are suspiciously correlated, they may be expressing their overall p(Doom) on each question rather than answering each question on its own merits. A general factor of doom is unlikely to be an accurate depiction of reality: the future is likely to be surprisingly doomy in some ways and surprisingly tractable in others.

[Image: A check written to Charles Lindbergh for winning the Orteig Prize]

Outcomes of inducement prizes

This is a dataset of all the prizes we could find that were offered to incentivize progress toward a specific technical or intellectual goal.