Blog

AI hopes and fears in numbers

People often wonder what AI researchers think about AI risk. A good collection of quotes can tell us that worry about AI is no longer a fringe view: many big names are concerned. But without a great sense of how many…

Some survey results!

We put the main results of our survey of machine learning researchers on AI timelines online recently—see here for the paper. Apologies for the delay—we are trying to avoid spoiling the newsworthiness of the results for potential academic publishers, lest…

Joscha Bach on remaining steps to human-level AI

Last year John and I had an interesting discussion with Joscha Bach about what ingredients of human-level artificial intelligence we seem to be missing, and how to improve AI forecasts more generally. Thanks to Connor Flexman’s summarizing efforts, you can now learn about…

What if you turned the world’s hardware into AI minds?

In a classic ‘AI takes over the world’ scenario, one of the first things an emerging superintelligence wants to do is steal most of the world’s computing hardware and repurpose it to running the AI’s own software. This step takes one from ‘super-proficient hacker’…

Friendly AI as a global public good

A public good, in the economic sense, can be (roughly) characterized as a desirable good that is likely to be undersupplied, or not supplied at all, by private companies. It generally falls to the government…

Error in Armstrong and Sotala 2012

Can AI researchers say anything useful about when strong AI will arrive? Back in 2012, Stuart Armstrong and Kaj Sotala weighed in on this question in a paper called ‘How We’re Predicting AI—or Failing To’. They looked…