BLOG

AI hopes and fears in numbers

People often wonder what AI researchers think about AI risk. A good collection of quotes can tell us that it is no longer a fringe view: many big names are concerned. But without a good sense of how many total names there are, how big they are, and what publication biases come between us and them, it has been hard (for me at least) to get a clear view of the distribution of opinion.

Our survey offers some new evidence on these questions. Here are the views of 355 machine learning researchers on how good or bad they expect the results of ‘high-level machine intelligence’ to be for humanity:

[Figure: one column per respondent, showing how they divide probability between outcomes]

Each column is one person’s opinion of how our chances are divided between outcomes. I put them roughly in order of optimism, to make the figure easier to take in.

If you are wondering how many machine learning researchers didn’t answer, and what their views looked like: nearly four times as many, and we don’t know. But we did try to make it hard for people to decide whether to answer based on their opinions on our questions, by being uninformative in our invitations. I think we went with saying we wanted to ask about ‘progress in the field’ and offering money for responding.

So it was only when people got inside that they would have discovered that we wanted to know how likely progress in the field is to lead to human extinction, rather than how useful improved datasets are for progress in the field (and actually, we did want to know about that too, and asked—more results to come!). Of the people who got as far as agreeing to take the survey at all, three quarters got as far as this question. So my guess is that this is a reasonable slice of machine learning researchers publishing in these good venues.

Note that expecting the outcome to be ‘extremely bad’ with high probability doesn’t necessarily indicate support for safety research—for instance, you may think it is hopeless. (We did ask several questions about that too.)

(I’ve been putting up a bunch of survey results; this one struck me as particularly interesting to people not involved in AI forecasting.)

Some survey results!

We put the main results of our survey of machine learning researchers on AI timelines online recently—see here for the paper.

Apologies for the delay—we are trying to avoid spoiling the newsworthiness of the results for potential academic publishers, lest they stop being potential academic publishers. But some places are ok with preprints on arXiv, so now we have one. And that means we can probably share some other things too.

There is actually a lot of stuff that isn’t in the paper, and it might take a little while for everything to be released. (The spreadsheet I’m looking at has 344 columns, and more than half of them represent boxes that people could write things in. And for many combinations of boxes, there are interesting questions to be asked.) We hope to share the whole dataset sometime soon, minus a few de-anonymizing bits. As we release more results, I’ll add them to our page about the survey.

The main interesting results so far, as I see them:

  • Comparable forecasts seem to be later than in past surveys. In the other surveys we know of, the median dates for a 50% chance of something like High-Level Machine Intelligence (HLMI) range from 2035 to 2050. Here the median answer to the most similar question puts a 50% chance of HLMI in 2057 (this isn’t in the paper—it is just the median response to the HLMI question asked using the ‘fixed probabilities framing’, i.e. the way it has been asked before). This seems surprising to me given the progress machine learning has seen since the last survey, but less surprising given that we changed the definition of HLMI, in part fearing it had previously been interpreted to mean a relatively low level of performance.
  • Asking people about specific jobs massively changes HLMI forecasts. When we asked some people when AI would be able to do several specific human occupations, and then all human occupations (presumably a subset of all tasks), they gave very much later timelines than when we just asked about HLMI straight out. For people asked to give probabilities for certain years, the difference was a factor of a thousand twenty years out! (10% vs. 0.01%) For people asked to give years for certain probabilities, the normal way of asking put 50% chance 40 years out, while the ‘occupations framing’ put it 90 years out. (These are all based on straightforward medians, not the complicated stuff in the paper.)
  • People consistently give later forecasts if you ask them for the probability in N years instead of the year that the probability is M. We saw this in the straightforward HLMI question, and most of the tasks and occupations, and also in most of these things when we tested them on Mechanical Turk workers earlier. For HLMI for instance, if you ask when there will be a 50% chance of HLMI you get a median answer of 40 years, yet if you ask what the probability of HLMI is in 40 years, you get a median answer of 30%. (A toy illustration of this comparison, with made-up numbers, appears just after this list.)
  • Lots of ‘narrow’ AI milestones are forecast to be as likely as not within the next decade. These are interesting, because most of them haven’t been forecast before to my knowledge, and many of them have social implications. For instance, if in a decade machines can not only write pop hits as well as Taylor Swift can, but can write pop hits that sound like Taylor Swift as well as Taylor Swift can—and perhaps faster, more cheaply, and on Spotify—then will that be the end of the era of superstar musicians? This perhaps doesn’t rival human extinction risks for importance, but human extinction risks do not happen in a vacuum (except one) and there is something to be said for paying attention to big changes in the world other than the one that matters most.
  • There is broad support among ML researchers for the premises and conclusions of AI safety arguments. Two thirds of them say the AI risk problem described by Stuart Russell is at least moderately important, and a third say it is at least as valuable to work on as other problems in the field. The median researcher thinks AI has a one in twenty chance of being extremely bad on net. Nearly half of researchers want to see more safety research than we currently have (compared to only 11% who think we are already prioritizing safety too much). There has been a perception lately that AI risk has moved to being a mainstream concern among AI researchers, but it is hard to tell from voiced opinion whether one is hearing from a loud minority or the vocal tip of an opinion iceberg. So it is interesting to see this perception confirmed with survey data.
  • Researchers’ predictions vary a lot. That is pretty much what I expected, but it is still important to know. Interestingly (and not in the paper), researchers don’t seem to be aware that their predictions vary a lot. More than half of respondents guess that they disagree ‘not much’ with the typical AI researcher about when HLMI will exist (vs. a moderate amount, or a lot).
  • Researchers who studied in Asia have much shorter timelines than those who studied in North America. In terms of the survey’s ‘aggregate prediction’ thing, which is basically a mean, the difference is 30 years (Asia) vs. 74 years (North America). (See p. 5.)
  • I feel like any circumstance where a group of scientists guesses that the project they are familiar with has a 5% chance of outcomes near ‘human extinction’ levels of bad is worthy of special note, though maybe it is not actually that surprising, and could easily turn out to be misuse of small probabilities or something.
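
A toy illustration of the framing comparison in the third bullet above: the sketch below shows roughly how such ‘simple medians’ are computed under the two framings. The answer lists are made up for illustration; they are not the survey data.

```python
# Hedged sketch: comparing simple medians under the two question framings.
# The answer lists below are invented for illustration, not survey data.
from statistics import median

# Fixed-probabilities framing: each respondent names the year (from now)
# at which they think there is a 50% chance of HLMI.
years_at_50_percent = [25, 30, 35, 40, 60, 80, 120]          # hypothetical

# Fixed-years framing: each respondent names the probability of HLMI
# arriving within 40 years.
prob_within_40_years = [0.05, 0.15, 0.30, 0.30, 0.50, 0.70]  # hypothetical

print("Median year for a 50% chance:", median(years_at_50_percent), "years out")
print("Median probability within 40 years:", median(prob_within_40_years))

# If the two framings elicited the same beliefs, a 40-year median in the first
# list would go with a median near 0.5 in the second. The survey instead found
# roughly 40 years vs. 0.3, i.e. later forecasts under the fixed-years framing.
```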

Some notes on interpreting the paper:

  1. The milestones in the timeline and in the abstract are from three different sets of questions. There seems to be a large framing effect between two of them—full automation of labor is logically required to be before HLMI, and yet it is predicted much later—and it is unclear whether people answer the third set of questions (about narrow tasks) more like the one about HLMI or more like the one about occupations. Plus even if there were no framing effect to worry about, we should expect milestones about narrow tasks to be much earlier than milestones about very similar sounding occupations. For instance, if there were an occupation ‘math researcher’, it should be later than the narrow task summarized here as ‘math research’. So there is a risk of interpreting the figure as saying AI research is harder than math research, when really the ‘-er’ is all-important. So to help avoid confusion, here is the timeline colored in by which set of questions each milestone came from. The blue one was asked on its own. The orange ones were always asked together: first all four occupations, then they were asked for an occupation they expected to be very late, and when they expected it, then full automation of labor. The pink milestones were randomized, so that each person got four. There are a lot more pink milestones not included here, but included in the long table at the end of the paper.
  2. In Figure 2 and Table S5, I believe ‘median’ refers to the ‘50% chance of occurring’ point: the dates given are the dates at which an aggregate distribution reaches a 50% chance, where the aggregate was made by averaging together all of the different people’s distributions (or what we guess their distributions look like from their three data points). (A toy version of this aggregation appears just after these notes.)
  3. Here are the simple median numbers for human-level AI, for each combination of framings. I am sorry that the table is not simple. There are two binary framing choices—whether to ask about probabilities or years, and whether to ask about a bunch of occupations and then all occupations, or just to ask about HLMI directly. Each framing gets its own column, and the numbers in the body of the table are the medians of the probabilities or years given by respondents, matched to the fixed years or probabilities in the leftmost column. For example, the first data row says that the median person seeing the fixed-years, occupations framing said that in ten years there was a 0% chance, and the median person seeing the fixed-years, HLMI framing said that in ten years there would be a 1% chance of HLMI.
                  Fixed years, Occupations   Fixed years, HLMI
    10 years      0%                         1%
    20 years      0.01%                      10%
    40 years      –                          30%
    50 years      3%                         –

                  Fixed probs, Occupations   Fixed probs, HLMI
    10%           50 years                   15 years
    50%           90 years                   40 years
    90%           200 years                  100 years

    (A dash indicates no median for that combination.)
  4. All of the questions are here. This is the specific statement of Stuart Russell’s quote that many questions referred to:

Stuart Russell summarizes an argument for why highly advanced AI might pose a risk as follows: 

The primary concern [with highly advanced AI] is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken […]. Now we have a problem:

1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.

2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want.
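
To make the aggregation described in note 2 concrete, here is a minimal sketch. It uses linear interpolation between each respondent’s three elicited points, whereas the paper fits a parametric distribution to each respondent before averaging, and all the numbers are invented for illustration.

```python
# Sketch of the aggregation in note 2: average all respondents' cumulative
# probability curves, then read off the year at which the mean curve reaches
# 50%. This toy version interpolates linearly between each person's three
# elicited points; the paper instead fits a distribution to each respondent.
# All numbers below are made up for illustration.
import numpy as np

# Each respondent: three (years-from-now, cumulative probability) points,
# e.g. from the fixed-years framing.
respondents = [
    [(10, 0.02), (20, 0.15), (40, 0.60)],
    [(10, 0.10), (20, 0.40), (40, 0.85)],
    [(10, 0.00), (20, 0.05), (40, 0.20)],
]

grid = np.linspace(0, 200, 2001)  # years from now, in steps of 0.1

def cdf_on_grid(points, grid):
    """Piecewise-linear CDF through the elicited points (flat outside them)."""
    years, probs = zip(*points)
    return np.interp(grid, years, probs, left=0.0, right=probs[-1])

mean_cdf = np.mean([cdf_on_grid(r, grid) for r in respondents], axis=0)

# 'Median' date in the Figure 2 / Table S5 sense: the first year at which the
# aggregate curve reaches 50% (it may never, if the curve stays below 0.5).
crossing = np.nonzero(mean_cdf >= 0.5)[0]
print("Aggregate 50% year:", grid[crossing[0]] if crossing.size else "not reached")
```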

 

Changes in funding in the AI safety field

Guest post, originally posted to the Center for Effective Altruism blog.

The field of AI Safety has been growing quickly over the last three years, since the publication of “Superintelligence”. One of the things that shapes what the community invests in is an impression of what the composition of the field currently is, and how it has changed. Here, I give an overview of the composition of the field as measured by its funding.

Measures other than funding also matter, and may matter more, like types of outputs, distribution of employed/active people, or impact-adjusted distributions of either. Funding, however, is a little more objective and easier to assess. It gives us some sense of how the AI Safety community is prioritising, and where it might have blind spots. For a fuller discussion of the shortcomings of this type of analysis, and of this data, see section four.

Throughout, I include the budgets of organisations that are explicitly working to reduce existential risk from machine superintelligence. This does not include work from outside the AI Safety community, on areas like verification and control, that might prove relevant. Such work, which happens in mainstream computer science research, is much harder to assess for relevance and to get budget data for. I am trying as much as possible to count money spent at the time of the work, rather than the time at which a grant is announced or money is set aside.

Thanks to Niel Bowerman, Ryan Carey, Andrew Critch, Daniel Dewey, Viktoriya Krakovna, Peter McIntyre, and Michael Page for their comments or help with content or data gathering in preparing this document (though nothing here should be taken as a statement of their views, and any errors are mine).

The post is organised as follows:

  1. Narrative of growth in AI Safety funding
  2. Distribution of spending
  3. Soft conclusions from overview
  4. Caveats and assumptions

Narrative of growth in AI Safety funding

The AI Safety community grew significantly in the last three years. In 2014, AI Safety work was almost entirely done at the Future of Humanity Institute (FHI) and the Machine Intelligence Research Institute (MIRI), which between them were spending $1.75m. In 2016, more than 50 organisations had explicit AI Safety related programs, spending perhaps $6.6m. Note the caveats to all numbers in this document described in section 4.

[chart]

In 2015, AI Safety spending roughly doubled to $3.3m. Most of this came from growth at MIRI and the beginnings of involvement by industry researchers.

In 2016, grants from the Future of Life Institute (FLI) triggered growth in smaller-scale technical AI safety work.1 Industry invested more over 2016, especially at Google DeepMind and potentially at OpenAI.2 Because of their high salary costs, the monetary growth in spending at these firms may overstate the actual growth of the field. For example, several key researchers moved from non-profits/academic orgs (MIRI, FLI, FHI) to Google DeepMind and OpenAI. This increased spending significantly, but may have had a smaller effect on output.3 AI Strategy budgets grew more slowly, at about 20%.

In 2017, multiple center grants are emerging (such as the Center for Human-Compatible AI (CHCAI) and Center for the Future of Intelligence (CFI)), but if their hiring is slow it will restrain overall spending. FLI grantee projects will be coming to a close over the year, which may mean that technical hires trained through those projects become available to join larger centers. The next round of FLI grants may be out in time to bridge existing grant holders onto new projects. Industry teams may keep growing, but there are no existing public commitments to do so. If technical research consolidates into a handful of major teams, it might become easier to keep an open dialogue between research groups, but individual incentives to do so might decrease, because researchers would have enough collaboration opportunities locally.

Although little can be said about 2018 at this point, the current round of academic grants, which support FLI grantees as well as FHI, ends in 2018, potentially creating a funding cliff. (Though FLI has just announced a second funding round, and MIT Media Lab has just announced a $27m center, whose exact plans remain unspecified.)4

Estimated spending in AI Safety broken down by field of work

[chart]

Distribution of spending

In 2014, the field of research was not very diverse. It was roughly evenly split between work at FHI on macrostrategy, with limited technical work, and work at MIRI following a relatively focused technical research agenda which placed little emphasis on deep learning.

Since then, the field has diversified significantly.

The academic technical research field is very diverse, though most of the funding comes via FLI. MIRI remains the only non-profit doing technical research and continues to be the largest research group, with 7 research fellows at the end of 2016 and a budget of $1.75m. Google DeepMind probably has the second largest technical safety research group, with between 3 and 4 full-time-equivalent (FTE) researchers at the end of 2016 (most of whom joined at the end of the year), while OpenAI and Google Brain probably have 0.5-1.5 FTEs.5

FHI and SAIRC together remain the only large-scale AI strategy center. The Global Catastrophic Risk Institute is the main long-standing strategy center working on AI, but is much smaller. Some much smaller groups (FLI grantees and the Global Politics of AI team at Yale) are starting to form, but are mostly low- or no-salary for the time being.

A range of functions are now being filled which did not exist in the AI Safety community before. These include outreach, ethics research, and rationality training. Although explicitly outreach focused projects remain small, organisations like FHI and MIRI do significant outreach work (arguably, Nick Bostrom’s Superintelligence falls into this category, for example).

2017 (forecast) – total = $10.5m

[pie chart]

2016 – total = $6.56m

[pie chart]

2015 – total = $3.28m

[pie chart]

2014 – total = $1.75m

[pie chart]

Possible implications and tentative suggestions

Technical safety research

  • The MIRI technical agenda remains the largest coherent research project, despite the emergence of several other agendas. For the sake of diversity of approach, more work needs to be done to develop PIs within the AI community to take the “Concrete Problems” research agenda and others forwards.
  • The community should go out of its way to help the emerging academic technical research centers (CHCAI and Yoshua Bengio’s forthcoming center) to recruit and retain fantastic people.

Strategy, outreach, and policy

  • A lot of people outside the AI Safety community have been moving towards near-term policy work, though output remains relatively low. There is even less work on the medium-term implications of AI.
  • Non-technical funding has not kept up with the growth of the AI safety field as a whole. This is likely to be because the pipeline for non-technical work is less easily specified and improved than it is for technical work. This could create gaps in the future, for example in:
  • Communication channels between AI Safety research teams.
  • Communication between the AI Safety research community and the rest of the AI community.
  • Guidance for policy-makers and researchers on long-run strategy.
  • It might be helpful to establish or identify a pipeline for AI strategy/policy work, perhaps by building a PhD or Masters course at an existing institution for the purpose.
  • There is not a lot of focused AI Safety outreach work. This is largely because all organisations are stepping carefully to avoid messaging that has the potential to frame the issues unconstructively, but it might be worthwhile to step into this gap over the next year or two.

Caveats and assumptions

  • Scope: I selected projects that either self-identify or were identified to me by people in the field as focused on AI Safety. Where organisations had only a partial focus on AI Safety, I estimated the proportion of their work that was related based on the distribution of their projects. The data probably represent the community of people who explicitly think they are working on AI Safety moderately well. But they don’t include anyone working on verification/control, auditing, transparency, etc. for other reasons. They also exclude people working on near-term AI policy.
  • Forecasting: Data for 2017 are a very loose guess. In particular, they make very rough guesses for the ability of centers to scale up, which have not been validated by interviews with centers. CFAR financial estimates for 2017 are also still not publicly available, and may be more than 10% of all AI Safety spending. I have assumed, in the pie charts of distribution only, that they will spend $1m next year (they spent $920k in 2015). That estimate is probably too low, but will probably not dramatically alter the overall picture. Forecasts also do not include funding for Yoshua Bengio’s new center or the next round of FLI grants.
  • FLI grant distribution: I have assumed that all FLI grantees spent according to the following schedule: nothing in 2015, 37% in 2016, 31% in 2017, 32% later. This is based on aggregate data, but will not be right for individual grants, which might mean the distribution of funding over time between fields is slightly wrong. The values are lagged slightly in order to account for the fact that money usually takes several months to make its way through university bureaucracies. In some cases, work happens at a different time from funding being received (earlier or later).
  • Industry spending: Estimates of industry spending are very rough. I approximated the amount of time spent by individual researchers on AI Safety based on conversations with some of them and with non-industry researchers. I (very) loosely approximated the per-researcher cost to firms at $300k each, inclusive of overheads and compute. (A toy version of this calculation, together with the FLI spending schedule above, appears after this list.)
  • Categorisation: I used the abstracts of the FLI grants, and the websites of other projects, to categorise their work roughly. Some may be miscategorised, but the major chunks of funding are likely to be right.
  • Funding is not a perfect proxy for what matters: There are many ways of describing change in the field usefully, which include how funding is distributed. Funding is a moderate proxy for the amount of effort going into different approaches, but not a perfect one. For example, if a researcher were to move from being lightly funded at a non-profit to being employed by OpenAI, their ‘cost’ in this model would increase by roughly an order of magnitude, which might be quite different from the change in their impact. The funding picture may therefore come apart from ‘effort’, especially when comparing DeepMind/OpenAI/Google Brain to non-profits like MIRI.
  • Re-granting: I’ve tried to avoid double-counting (e.g., SAIRC is listed as an FHI project rather than FLI despite being funded by Elon Musk and OpenPhil via FLI), but there is enough regranting going on that I might not have succeeded.
  • Inclusion: I might have missed out organisations that should arguably be in there, or have incorrect information about their spending.
  • Corrections: If you have corrections or extra information I should incorporate, please email me at seb@prioritisation.org.
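
As a small worked illustration of two of the assumptions above, the sketch below spreads an FLI grant over years using the 0%/37%/31%/32% schedule and costs industry researchers at roughly $300k per FTE. The grant total and FTE count are invented; only the schedule and the per-FTE figure come from the text.

```python
# Toy illustration of two estimates described in the caveats:
#  (a) spreading an FLI grant over years with the assumed 0/37/31/32% schedule,
#  (b) costing industry safety researchers at roughly $300k per FTE.
# The grant total and FTE count below are invented for illustration.

FLI_SCHEDULE = {"2015": 0.00, "2016": 0.37, "2017": 0.31, "later": 0.32}

def fli_spending_by_year(total_grant):
    """Allocate a grant total across years using the assumed lagged schedule."""
    return {year: round(total_grant * share) for year, share in FLI_SCHEDULE.items()}

def industry_spending(fte_researchers, cost_per_fte=300_000):
    """Very rough estimate: FTE researchers times an all-in per-researcher cost."""
    return fte_researchers * cost_per_fte

print(fli_spending_by_year(250_000))   # hypothetical $250k grant
print(industry_spending(3.5))          # e.g. roughly 3-4 FTEs at one firm
```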

Footnotes

  1. Although grants were awarded in 2015, there is a lag between grants being awarded and work taking place. This is a significant assumption discussed in the caveats.
  2. Although note that most of the new hires at DeepMind arrived right at the end of the year.
  3. Although it is also conceivable that a researcher at DeepMind may be ten times more valuable than that same researcher elsewhere.
  4. This will depend on personal circumstance as well as giving opportunities. It would probably be a mistake to forgo time-bounded giving opportunities to cover this cliff, since other sources of funding might be found between now and then.
  5. This is based on anecdotal hiring information, and not a confirmed number from Google DeepMind.

Joscha Bach on remaining steps to human-level AI

Joscha Bach (photo from Wikimedia Commons)

Last year John and I had an interesting discussion with Joscha Bach about what ingredients of human-level artificial intelligence we seem to be missing, and how to improve AI forecasts more generally.

Thanks to Connor Flexman’s summarizing efforts, you can now learn about Joscha’s views on these questions without the effort of organizing an interview or reading a long and messy transcript.

(It’s been a while since the conversation, but I checked with Joscha that this is not an objectionably obsolete account of his views.)

Here are the notes.

Here is Connor’s shorter summary:

  • Before we can implement human-level artificial intelligence (HLAI), we need to understand both mental representations and the overall architecture of a mind
  • There are around 12-200 regularities like backpropagation that we need to understand, based on known unknowns and genome complexity
  • We are more than reinforcement learning on computronium: our primate heritage provides most interesting facets of mind and motivation
  • AI funding is now permanently colossal, which should update our predictions
  • AI practitioners learn the constraints on which elements of science fiction are plausible, but constant practice can lead to erosion of long-term perspective
  • Experience in real AI development can lead to both over- and underestimates of the difficulty of new AI projects in non-obvious ways

 
