Blog

The tyranny of the god scenario

Michael Wulfsohn is an AI Impacts researcher/contributor.

I was convinced. An intelligence explosion would result in the sudden arrival of a superintelligent machine. Its abilities would far exceed those of humans in ways we can’t…


Will AI see sudden progress?

Will advanced AI let some small group of people or AI systems take over the world? AI X-risk folks and others have accumulated many arguments about this over the years, but I think this…


GoCAS talk on AI Impacts findings

Here is a video summary of some highlights from AI Impacts research over the past few years, from the GoCAS Existential Risk workshop in Göteborg in September. Thanks to the folks there for recording it.


AI hopes and fears in numbers

People often wonder what AI researchers think about AI risk. A good collection of quotes can tell us that worry about AI is no longer a fringe view: many big names are concerned. But without a great sense of how many…


Some survey results!

We recently put the main results of our survey of machine learning researchers on AI timelines online—see here for the paper. Apologies for the delay—we are trying to avoid spoiling the newsworthiness of the results for potential academic publishers, lest…