A TAI that kills all humans without first ensuring it is capable of everything required to sustain its own supply chain risks destroying itself. This would be a form of potential murder-suicide, not the convergent route to gaining long-term power.
Originally published 8 March 2023. In our survey last year, we asked publishing machine learning researchers how they would divide probability over the future impacts of high-level machine intelligence among five buckets ranging from ‘extremely good (e.g. rapid growth in human flourishing)’ to ‘extremely bad (e.g. human extinction)’.
AI Safety researchers and AI Capabilities researchers are part of the same community because they share the belief that the long-term goal of humanity should be a technological utopia, and that the most useful tool for building Our Glorious Future is AGI.