When do ML Researchers Think Specific Tasks will be Automated?

We asked the ML researchers in our survey when they thought 32 narrow, relatively well-defined tasks would be feasible for AI. Eighteen of them were included in our paper earlier, but the other fourteen results

What do ML researchers think you are wrong about?

So, maybe you are concerned about AI risk. And maybe you are concerned that many people making AI are not concerned enough about it. Or not concerned about the right things. But if so, do

AI hopes and fears in numbers

People often wonder what AI researchers think about AI risk. A good collection of quotes can tell us that worry about AI is no longer a fringe view: many big names are concerned. But without a great sense of how many

Some survey results!

We put the main results of our survey of machine learning researchers on AI timelines online recently—see here for the paper. Apologies for the delay—we are trying to avoid spoiling the newsworthiness of the results for potential academic publishers, lest

Changes in funding in the AI safety field

Guest post by Seb Farquhar, originally posted to the Center for Effective Altruism blog. The field of AI Safety has been growing quickly over the last three years, since the publication of “Superintelligence”. One of

Joscha Bach on remaining steps to human-level AI

Last year John and I had an interesting discussion with Joscha Bach about what ingredients of human-level artificial intelligence we seem to be missing, and how to improve AI forecasts more generally. Thanks to Connor Flexman’s summarizing efforts, you can now learn about

Tom Griffiths on Cognitive Science and AI

This is a guest post by Finan Adamson. Prof. Tom Griffiths is the director of the Computational Cognitive Science Lab and the Institute of Cognitive and Brain Sciences at UC Berkeley. He studies human cognition

What if you turned the world’s hardware into AI minds?

In a classic ‘AI takes over the world’ scenario, one of the first things an emerging superintelligence wants to do is steal most of the world’s computing hardware and repurpose it to run the AI’s own software. This step takes one from ‘super-proficient hacker’

Friendly AI as a global public good

A public good, in the economic sense, can be (roughly) characterized as a desirable good that is likely to be undersupplied, or not supplied at all, by private companies. It generally falls to the government

Error in Armstrong and Sotala 2012

Can AI researchers say anything useful about when strong AI will arrive? Back in 2012, Stuart Armstrong and Kaj Sotala weighed in on this question in a paper called ‘How We’re Predicting AI—or Failing To’. They looked

Metasurvey: predict the predictors

As I mentioned earlier, we’ve been making a survey for AI researchers. The survey asks when AI will be able to do things like build a Lego kit according to the instructions, be a surgeon, or radically accelerate global technological development. It also asks

Concrete AI tasks bleg

We’re making a survey. I hope to write soon about our general methods and plans, so anyone kind enough to criticize them has the chance. Before that, though, we have a different request: we want a list of concrete tasks that AI can’t do yet,

Mysteries of global hardware

This blog post summarizes recent research on our Global Computing Capacity page. See that page for full citations and detailed reasoning. We recently investigated this intriguing puzzle: FLOPS (then) apparently performed by all of the world’s computing hardware: 3 × 10^22 – 3

Recently at AI Impacts

We’ve been working on a few longer-term projects lately, so here’s an update in the absence of regular page additions. New researchers Stephanie Zolayvar and John Salvatier have recently joined us to try out research here. Stephanie

AI timelines and strategies

AI Impacts sometimes invites guest posts from fellow thinkers on the future of AI. These are not intended to relate closely to our current research, nor to necessarily reflect our views. However, we think they are worthy contributions to the discussion of AI forecasting and strategy. This

Introducing research bounties

Sometimes we like to experiment with novel research methods and formats. Today we are introducing ‘AI Impacts Research Bounties’, in which you get money if you send us inputs to some of our research. To start, we have two bounties: one for showing us instances

Time flies when robots rule the earth

This week Robin Hanson is finishing off his much-anticipated book, The Age of Em: Work, Love and Life When Robots Rule the Earth. He recently told me that it would be helpful to include rough numbers for the brain’s memory and computing capacity in

Event: Exercises in Economic Futurism

On Thursday, July 30th, Robin Hanson is visiting again, and this time we will be holding an informal workshop on how to usefully answer questions about the future, with an emphasis on economic approaches. We will pick roughly three concrete

Steve Potter on neuroscience and AI

Prof. Steve Potter works at the Laboratory of Neuroengineering in Atlanta, Georgia. I wrote to him after coming across his old article, ‘What can AI get from Neuroscience?’ I wanted to know how neuroscience might contribute to AI in the future: for instance will

New funding for AI Impacts

AI Impacts has received two grants! We are grateful to the Future of Humanity Institute (FHI) for $8,700 to support work on the project until September 2015, and the Future of Life Institute (FLI) for $49,310 for another year of

Update on all the AI predictions

For the last little while, we’ve been looking into a dataset of individual AI predictions, collected by MIRI a couple of years ago. We also previously gathered all the surveys about AI predictions that we could find. Together, these are all the public predictions

Why do AGI researchers expect AI so soon?

For many decades, people have been predicting when human-level AI will appear. A few years ago, MIRI made a big, organized collection of such predictions, along with helpful metadata. We are grateful, and just put up a page about this dataset, including some analysis. Some of you saw

Supporting AI Impacts

We now have a donations page. If you like what we are doing as much as anything else you can think of to spend marginal dollars on, I encourage you to support this project! Money will go to more of the kind of thing you

A new approach to predicting brain-computer parity

How large does a computer need to be before it is ‘as powerful’ as the human brain? This is a difficult question, which people have answered before, with much uncertainty. We have a new answer! (Longer description here;

Preliminary prices for human-level hardware

Computer hardware has been getting cheaper for about seventy-five years. Relatedly, large computing projects can afford to be increasingly large. If you think the human brain is something like a really impressive computer, then a

What’s up with nuclear weapons?

When nuclear weapons were first built, the explosive power you could extract from a tonne of explosive skyrocketed. But why? Here’s a guess. Until nuclear weapons, explosives were based on chemical reactions. Whereas nuclear weapons are based on nuclear

Multipolar research questions

The Multipolar AI workshop we ran a fortnight ago went well, and we just put up a list of research projects from it. I hope this is helpful inspiration to those of you thinking about applying to the new FLI grants in the

How AI timelines are estimated

A natural approach to informing oneself about when human-level AI will arrive is to check what experts who have already investigated the question say about it. So we made this list of analyses that we could find. It’s a short list, though the bar for ‘analysis’ was

At-least-human-level-at-human-cost AI

Often, when people are asked ‘when will human-level AI arrive?’ they suggest that ‘human-level AI’ is a meaningless or misleading term. I think they have a point. Or several, though probably not as many as they think

Penicillin and syphilis

Penicillin was a hugely important discovery. But was it a discontinuity in the normal progression of research, or just an excellent discovery which followed a slightly less excellent discovery, and so on? There are several

The slow traversal of ‘human-level’

Once you have normal-human-level AI, how long does it take to get Einstein-level AI? We have seen that a common argument for ‘not long at all’ based on brain size does not work in a straightforward way, though a

Making or breaking a thinking machine

Here is a superficially plausible argument: the brains of the slowest humans are almost identical to those of the smartest humans. And thus—in the great space of possible intelligence—the ‘human-level’ band must be very narrow. Since all humans are basically identical in

Are AI surveys seeing the inside view?

An interesting thing about the survey data on timelines to human-level AI is the apparent incongruity between answers to ‘when will human-level AI arrive?’ and answers to ‘how much of the way to human-level AI have we come recently?’ In particular, human-level AI

Event: Multipolar AI workshop with Robin Hanson

On Monday 26 January we will be holding a discussion on promising research projects relating to ‘multipolar’ AI scenarios. That is, future scenarios where society persists in containing a large number of similarly influential agents, rather than a single winner who takes all. The

Michie and overoptimism

We recently wrote about Donald Michie’s survey on timelines to human-level AI. Michie’s survey is especially interesting because it was taken in 1972, three decades earlier than any other survey we know of that asks about human-level AI.

Were nuclear weapons cost-effective explosives?

Nuclear weapons were radically more powerful per pound than any previous bomb. Their appearance was a massive discontinuity in the long-run path of explosive progress, which we have lately discussed. But why do we measure energy

A summary of AI surveys

If you want to know when human-level AI will be developed, a natural approach is to ask someone who works on developing AI. You might, however, be put off by such predictions being regularly criticized as inaccurate and biased. While they do seem

AI and the Big Nuclear Discontinuity

As we’ve discussed before, the advent of nuclear weapons was a striking technological discontinuity in the effectiveness of explosives. In 1940, no one had ever made an explosive twice as effective as TNT. By 1945 the best

The Biggest Technological Leaps

Over thousands of years, humans became better at producing explosions. A weight of explosive that would have blown up a tree stump in the year 800 could have blown up more than three tree stumps in

The AI Impacts Blog

Welcome to the AI Impacts blog. AI Impacts is premised on two ideas (at least!): the details of the arrival of human-level artificial intelligence matter. Seven years to prepare is very different from seventy years to prepare. A weeklong
