Information storage in the brain

The brain probably stores around 10-100TB of data.

Support

According to Forrest Wickman, computational neuroscientists generally believe the brain stores 10-100 terabytes of data.1 He suggests that these estimates are produced by assuming that information is largely stored in synapses, and that each synapse stores around 1 byte. The number of bytes is then simply the number of synapses.

These assumptions are simplistic (as he points out). In particular:

  • synapses may store more or less than one byte of information on average
  • some information may be stored outside of synapses
  • not all synapses appear to store information
  • synapses do not appear to be entirely independent

We estimate that there are 1.8-3.2 x 10¹⁴ synapses in the human brain, so the procedure Wickman outlines suggests that the brain stores around 180-320TB of data. It is unclear from his article whether the variation in the views of computational neuroscientists is due to differing opinions on the assumptions stated above or on the number of synapses in the brain. This makes it hard to adjust our estimate well, so our best guess for now is that the brain can store around 10-100TB of data, based on this being the common view among computational neuroscientists.
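
As a rough check on this arithmetic, the calculation can be written out directly (a minimal sketch; the synapse range and the one-byte-per-synapse assumption are just the figures discussed above):

  # Rough storage estimate: number of synapses x 1 byte per synapse (assumptions above)
  synapse_low, synapse_high = 1.8e14, 3.2e14   # estimated synapse count range
  bytes_per_synapse = 1                        # simplistic assumption

  low_tb = synapse_low * bytes_per_synapse / 1e12    # 1 TB = 10^12 bytes
  high_tb = synapse_high * bytes_per_synapse / 1e12

  print(f"{low_tb:.0f}-{high_tb:.0f} TB")  # -> 180-320 TB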


 

Conversation with Steve Potter

Participants

Figure 1: Professor Steve Potter

  • Professor Steve Potter – Associate Professor, Laboratory of NeuroEngineering, Coulter Department of Biomedical Engineering, Georgia Institute of Technology
  • Katja Grace – Machine Intelligence Research Institute (MIRI)

Note: These notes were compiled by MIRI and give an overview of the major points made by Professor Steve Potter.

Summary

Katja Grace spoke with Professor Steve Potter of Georgia Institute of Technology as part of AI Impacts’ investigation into the implications of neuroscience for artificial intelligence (AI). Conversation topics included how neuroscience now contributes to AI and how it might contribute in the future.

How has neuroscience helped AI in the past?

Professor Potter found it difficult to think of examples where neuroscience has helped with higher level ideas in AI. Some elements of cognitive science have been implemented in AI, but these may not be biologically based. He described two broad instances of neuroscience-inspired projects.

Subsumption architecture

Past work in AI has focused on disembodied computers with little work in robotics. Researchers now understand that AI does not need to be centralized; it can also take on physical form. Subsumption architecture is one way that robotics has advanced. This involves the coupling of sensory information to action selection. For example, Professor Rodney Brooks at MIT has developed robotic legs that respond to certain sensory signals. These legs also send messages to one another to control their movement. Professor Potter believes that this work could have been based on neuroscience, but it is not clear how much Professor Brooks was inspired by neuroscience while working on this project; the idea may have come to him independently.

Neuromorphic engineering

This type of engineering applies properties of biological nervous systems, such as perception and motor control, to artificial neural systems. One aspect of brain function can be imitated in silicon chips through pulse-coding, in which analog signals are sent and received as streams of tiny pulses. One application is in camera development, mimicking the pulse-coded signals sent between the retina and the brain.
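
As a loose illustration of the pulse-coding idea, and not a description of Professor Potter's work or of any particular chip, an analog signal can be converted into a sparse stream of up/down pulses whenever it drifts past a threshold, which is roughly how event-based 'silicon retina' cameras report brightness changes (the function and threshold below are invented for the example):

  import numpy as np

  def pulse_encode(signal, threshold=0.1):
      """Toy delta-style pulse coder: emit +1/-1 pulses whenever the signal
      drifts more than `threshold` away from the last transmitted level."""
      pulses = np.zeros_like(signal)
      level = signal[0]
      for i, x in enumerate(signal):
          while x - level > threshold:    # signal rose: emit an 'up' pulse
              pulses[i] += 1
              level += threshold
          while level - x > threshold:    # signal fell: emit a 'down' pulse
              pulses[i] -= 1
              level -= threshold
      return pulses

  t = np.linspace(0, 1, 200)
  analog = np.sin(2 * np.pi * 3 * t)      # example analog input
  spikes = pulse_encode(analog)
  print(int(np.count_nonzero(spikes)), "of", len(t), "time steps carry pulses")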

How is neuroscience contributing to AI today?

Although neuroscience has not assisted AI development much in the past, Professor Potter has confidence that this intersection has considerable potential. This is because the brain works well in areas where AI falls short. For example, AI needs to improve how it works in real time in the real world. Self-driving cars may be improved through examining how a model organism, such as a bee, would respond to an analogous situation. Professor Potter believes it would be worthwhile research to record how humans use their brains while driving. Brain algorithms developed from this could be implemented into car design.

Current work at the intersection of neuroscience and AI includes the following:

Artificial neural networks

Most researchers at the intersection of AI and neuroscience are examining artificial neural networks, and might describe their work as ‘neural simulations’. These networks are a family of statistical learning models that are inspired by biological neural networks. Hardware in this discipline includes neuromorphic chips, while software includes work in pattern recognition. This includes handwriting recognition and finding military tanks in aerial photographs. The translation of these networks into useful products for both hardware and software applications has been slow.
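
For concreteness, the kind of pattern-recognition software referred to here is typified by small artificial neural networks trained on labeled examples. Below is a minimal sketch using scikit-learn's bundled digits data as a stand-in for handwriting; it is an illustration only, not one of the systems mentioned above:

  from sklearn.datasets import load_digits
  from sklearn.model_selection import train_test_split
  from sklearn.neural_network import MLPClassifier

  # 8x8 images of handwritten digits, flattened to 64-element feature vectors
  X, y = load_digits(return_X_y=True)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  # A small feed-forward network with one hidden layer of 64 units
  clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
  clf.fit(X_train, y_train)

  print("held-out accuracy:", round(clf.score(X_test, y_test), 3))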

Hybrots

Professor Potter has helped develop hybrots, which are hybrids of living tissue interfaced with robotic machines: robots controlled by neurons. Silent Barrage was an early hybrot that drew on paper attached to pillars. Video was taken of people viewing the Silent Barrage hybrots, and this data was transmitted back to Prof. Potter’s lab, where it was used to trigger electrical stimulation in the living ‘brain’ of the system: a culture of rat cortical neurons in a petri dish interfaced to electrodes. This work is currently being expanded to include more types of hybrots: one will be controlled by living neurons, while another will be controlled by a simulated neural network.

Meart (MultiElectrode Array Art) was an earlier hybrot. Controlled by a brain composed of rat neuron cells, it used robotic arms to draw on paper. It never progressed past the toddler stage of scribbling.

How is neuroscience likely to help AI in the future?

One line of research in neuroscience that is likely to help with AI concerns delays. Computer design is often optimized to reduce the amount of time between command and execution, whereas the brain may take milliseconds longer to respond. However, delays in the brain evolved to match the timing of the real world, and they are a useful part of the brain’s learning process.

Neuroscience probably also has potential to help AI in searching databases. It appears that the brain has methods for this that are completely unlike those used in computers, though we do not yet know what the brain’s methods are. One example given of the brain’s impressive abilities here is that Professor Potter can meet a new person and instantly be confident that he has never seen that person before.

How long will it take to duplicate human intelligence?

It will be hard to say when this has been achieved; success is happening at different rates for different applications. The future of neuroscience in AI will most likely involve taking elements of neuroscience and applying them to AI; it is unlikely that researchers will wait until we have a good understanding of the brain and then export that knowledge wholesale to AI.

Professor Potter greatly respects Ray Kurzweil, but does not think that he has an in-depth knowledge of neuroscience. Professor Potter thinks the brain is much more complex than Kurzweil appears to believe, and that ‘duplicating’ human intelligence will take far longer than Kurzweil predicts. In Professor Potter’s estimation, it will take over a hundred years to develop a robot butler that can convince you it is human.

Challenges to progress

Lack of collaboration

Neuroscience-inspired AI progress has been hampered because researchers across neuroscience and AI seldom collaborate with one another. This may stem from disinterest or from limited understanding of each other’s fields. Neuroscientists are not generally interested in the goal of creating human-level artificial intelligence: Professor Potter believes that of the roughly 30,000 people who attend the Society for Neuroscience conference, approximately 20 want this. Most neuroscientists want to learn how something works rather than how it can be applied (e.g. learning how the auditory system works instead of developing a new hearing aid). If more people saw benefits in applying neuroscience to AI, and in particular to human-level AI, there would be greater progress, though the scale of the speedup is hard to predict; there is potential for very much more rapid progress. For researchers to move their projects in this direction, the priorities of funding agencies would first have to move, as these effectively dictate which projects go forward.

Funding

Funding for work at the intersection of neuroscience and AI may be hard to find. The National Institutes of Health (NIH) funds only health-related work and has not funded AI projects. The National Science Foundation (NSF) may not think the work fits its requirement of being basic science research; it may be too applied. The NSF, though, is more open-minded about funding research on AI than the NIH is. The military is also interested in AI research. Outside the U.S., the European Union (EU) funds cross-disciplinary work in neuroscience and AI.

National Science Foundation (NSF) funding

NSF had a call for radical proposals, from which Professor Potter received a four-year-long grant to apply neuroscience to electrical grid systems. Collaborators included a power engineer and people studying neural networks. The group was interested in addressing the U.S.’s large and uneven power supply and usage. The electrical grid has become increasingly difficult to control because of geographically varying differences in input and output.

Professor Potter believes that if people in neuroscience, AI, neural networks, and computer design talked more, this would bring progress. However, the collaborative electrical grid project faced some challenges that would need to be addressed. For example, the researchers needed to spend considerable time educating one another about their respective fields. It was also difficult to communicate with collaborators across the country; NSF paid for only one meeting per year, and the nuances of in-person interaction seem important for bringing together such diverse groups of people and reaping the benefits of their creative communication.

Other people working in this field

  • Henry Markram – Professor, École Polytechnique Fédérale de Lausanne, Laboratory of Neural Microcircuitry. Using EU funding, he creates realistic computer models of the brain, one piece at a time.
  • Rodney Douglas – Professor Emeritus, University of Zurich, Institute of Neuroinformatics. He is a neuromorphic engineer who worked on emulating brain function.
  • Carver Mead – Gordon and Betty Moore Professor of Engineering and Applied Science Emeritus, California Institute of Technology. He was a founding father of neuromorphic engineering.
  • Rodney Brooks – Panasonic Professor of Robotics Emeritus, Massachusetts Institute of Technology (MIT). He was a pioneer in studying distributed intelligence and developed subsumption architecture.
  • Andy Clark – Professor of Logic and Metaphysics, University of Edinburgh. He does work on embodiment, artificial intelligence, and philosophy.
  • Jose Carmena – Associate Professor of Electrical Engineering and Neuroscience, University of California-Berkeley. Co-Director of the Center of Neural Engineering and Prostheses, University of California-Berkeley, University of California-San Francisco. He has researched the impact of electrical stimulation on sensorimotor learning and control in rats.
  • Guy Ben-Ary – Manager, University of Western Australia, CELLCentral in the School of Anatomy and Human Biology. He is an artist and researcher who uses biologically related technology in his work. He worked in collaboration with Professor Potter on Silent Barrage.
  • Wolfgang Maass – Professor of Computer Science, Graz University of Technology. He is doing research on artificial neural networks.
  • Thad Starner – Assistant Professor, Georgia Institute of Technology, College of Computing. He applies biological concepts to the development of wearable computing devices.
  • Jennifer Hasler – Professor, Georgia Institute of Technology, Bioengineering and Electronic Design and Applications. She has studied neuromorphic hardware.

 

Predictions of Human-Level AI Timelines

We know of around 1,300 public predictions of when human-level AI will arrive, of varying levels of quality. These include predictions from individual statements and larger surveys. Median predictions tend to be between 2030 and 2055 for predictions made since 2000, across different subgroups of predictors.

Details

The landscape of AI predictions

Predictions of when human-level AI will be achieved exist in the form of surveys and public statements (e.g. in articles, books or interviews). Some statements backed by analysis are discussed here. Many more statements have been collected by MIRI. Figure 1 illustrates almost all of the predictions we know about, though most are aggregated there into survey medians. Altogether, we know of around 1,300 public predictions of when human-level AI will arrive, though 888 are from a single informal online poll. We know of ten surveys that address this question directly (plus a set of interviews which we sometimes treat as a survey but count here as individual statements, and a survey which asks about progress so far as a fraction of what is required for human-level AI). Only 65 predictions that we know of are not part of surveys.

Summary of findings

Figure 1: Predictions from the MIRI dataset (red = maxIY ≈ ‘AI more likely than not after …’, and green = minPY ≈ ‘AI less likely than not before …’) and surveys. This figure excludes one prediction of 3012 made in 2012, and the Hanson survey, which doesn’t ask directly about prediction dates.

Recent surveys tend to have median dates between 2040 and 2050. All six of the surveys which ask for the year in which human-level AI will have arrived with 50% probability produce medians in this range (not including Kruel’s interviews, which have a median of 2035, and are counted in the statements here). The median prediction in statements is 2042, though predictions of AGI researchers and futurists have medians in the early 2030s. Surveys give median estimates for a 10% chance of human-level AI in the 2020s. We have not attempted to adjust these figures for biases.

Implications

Expert predictions about AI timelines are often considered uninformative. Evidence that such predictions are less informative than those in other messy fields appears to be weak, though we have not evaluated baseline prediction accuracy in such fields. We expect survey results and predictions from those further from AGI to be more accurate than other sources, due to selection biases. The differences between these sources appear to be a small number of decades.

Accuracy of AI Predictions

It is unclear how informative we should expect expert predictions about AI timelines to be. Individual predictions are undoubtedly often off by many decades, since they disagree with each other. However, their aggregate may still be quite informative. The main reason we know of to doubt the accuracy of expert predictions is that experts are generally poor predictors in many areas, and AI looks likely to be one of them. However, we have not investigated how inaccurate 'poor' predictions are in practice, or whether AI really is such a case.

Predictions of AI timelines are likely to be biased toward optimism by roughly decades, especially if they are voluntary statements rather than surveys, and especially if they come from populations selected for optimism. We expect these two factors to account for less than a decade and around two decades of difference in median predictions, respectively.

Support

Considerations regarding accuracy

A number of reasons have been suggested for distrusting predictions about AI timelines:

  • Models of areas where people predict well
    Research has produced a characterization of situations where experts predict well and where they do not. See table 1 here. AI appears to fall into several classes that go with worse predictions. However we have not investigated this evidence in depth, or the extent to which these factors purportedly influence prediction quality.
  • Expert predictions are generally poor
    Experts are notoriously poor predictors. However our impression is that this is because of their disappointing inability to predict some things well, rather than across the board failure. For instance, experts can predict the Higgs boson’s existence, outcomes of chemical reactions, and astronomical phenomena. So the question falls back to where AI falls in the spectrum of expert predictability, discussed in the last point.
  • Disparate predictions
    One sign that AI predictions are not very accurate is that they differ over a range of a century or so. This strongly suggests that many individual predictions are inaccurate, though not that the aggregate distribution is uninformative.
  • Similarity of old and new predictions
    Older predictions seem to form a fairly similar distribution to more recent predictions, except for very old predictions. This is weak evidence that new predictions are not strongly affected by evidence, and are therefore more likely to be inaccurate.
  • Similarity of expert and lay opinions
    Armstrong and Sotala found that expert and non-expert predictions look very similar.1 This finding is in doubt at the time of writing, due to errors in the analysis. If it were true, it would be weak evidence against experts having relevant expertise, since such expertise might be expected to produce a difference from lay opinion. Note that it might not, however, if laypeople take their views from experts.
  • Predictions are about different things and often misinterpreted
    Comments made around predictions of human-level AI suggest that predictors are sometimes thinking about different events as ‘AI arriving’.2 Even when they are predictions about the same event, ‘prediction’ can mean different things. One person might ‘predict’ the year when they think human-level AI is more likely than not, while another ‘predicts’ the year by which AI seems almost certain.

This list is not necessarily complete.

Purported biases

A number of biases have been posited to affect predictions of human-level AI:

  • Selection biases from optimistic experts
    Becoming an expert is probably correlated with independent optimism about the field, and experts make most of the credible predictions. We expect this to push median estimates earlier by less than a few decades.
  • Biases from short-term predictions being recorded
    There are a few reasons to expect recorded public predictions to be biased toward shorter timescales. Overall these probably make public statements less than a decade more optimistic.
  • Maes-Garreau law
    The Maes-Garreau law is a posited tendency for people to predict important technologies not long before their own likely death. It probably doesn’t afflict predictions of human-level AI substantially.
  • Fixed period bias
    There is a stereotype that people tend to predict AI in 20-30 years. There is weak evidence of such a tendency around 20 years, though little evidence that this is due to a bias (that we know of).

Conclusions

AI appears to exhibit several qualities characteristic of areas that people are not good at predicting. Individual AI predictions appear to be inaccurate by many decades in virtue of their disagreement. Other grounds for particularly distrusting AI predictions seem to offer weak evidence against them, if any. Our current guess is that AI predictions are less reliable than many kinds of prediction, though still potentially fairly informative.

Biases toward early estimates appear to exist, as a result of optimistic people becoming experts, and optimistic predictions being more likely to be published for various reasons. These are the only plausible substantial biases we know of.

Publication biases toward shorter predictions

We expect predictions that human-level AI will come sooner to be recorded publicly more often, for a few reasons. Public statements are probably more optimistic than surveys because of such effects. The difference appears to be less than a decade, for median predictions.

Support

Plausible biases

Below we outline five reasons for expecting earlier predictions to be stated and publicized more than later ones. We do not know of compelling reasons to expect longer term predictions to be publicized more, unless they are so distant as to also fit under the first bias discussed below.

Bias from not stating the obvious

In many circumstances, people are disproportionately likely to state beliefs that they think others do not hold. For example, “homeopathy works” gets more Google hits than “homeopathy doesn’t work”, though this probably doesn’t reflect popular beliefs on the matter. Making public predictions seems likely to be a circumstance with this character. Predictions are often made in books and articles which are intended to be interesting and surprising, rather than by people whose job it is to report on AI forecasts regardless of how far away they are. Thus we expect people with unusual positions on AI timelines to be more likely to state them. This should produce a bias toward both very short and very long predictions being published.

Bias from the near future being more concerning

Artificial intelligence will arguably be hugely important, whether as a positive or negative influence on the world. Consequently, people are motivated to talk about its social implications. The degree of concern motivated by impending events tends to increase sharply with proximity to the event. Thus people who expect human-level AI in a decade will tend to be more concerned about it than people who expect human-level AI to take a century, and so will talk about it more. Similarly, publishers are probably more interested in producing books and articles making more concerning claims.

Bias from ignoring reverse predictions

If you search for people predicting AI by a given date, you can get downwardly biased estimates by taking predictions from sources where people are asked about certain specific dates, and respond that AI will or will not have arrived by that date. If people respond ‘AI will arrive by X’ or ‘AI will not arrive by X’ as appropriate, the former can look like ‘predictions’ while the latter are ignored.

This bias affected some data in the MIRI dataset, though we have tried to minimize it now. For example, this bet (“By 2029 no computer – or “machine intelligence” – will have passed the Turing Test.”) is interpreted in the above collection as Kurzweil making a prediction, but not as Kapor making a prediction. The dataset also contained several estimates of 70 years, taken from a group who appear to have been asked whether AI would come within 70 years, much later, or never. The ‘within 70 years’ responses are recorded as predictions, while the others are ignored, producing ’70 years’ estimates almost regardless of the overall opinions of the group surveyed. In a population of people with a range of beliefs, this method of recording predictions would produce ‘predictions’ largely determined by which year was asked about.
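
A toy simulation, with an entirely made-up population of beliefs, illustrates the recording artifact described above: if only the 'AI will arrive by X' responses are logged as predictions, the recorded median sits at the year asked about, whatever the group actually believes.

  import numpy as np

  rng = np.random.default_rng(0)
  # Hypothetical population: each person's expected year of human-level AI
  true_beliefs = rng.uniform(2020, 2220, size=1000)
  asked_year = 2084   # e.g. asked in 2014 whether AI will arrive 'within 70 years'

  # Biased recording: only the 'yes, by asked_year' answers are kept as "predictions"
  recorded = np.full(int(np.sum(true_beliefs <= asked_year)), asked_year)

  print("median of true beliefs:    ", int(np.median(true_beliefs)))   # around 2120
  print("median of recorded 'preds':", int(np.median(recorded)))       # 2084, by construction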

Bias from unavoidably ignoring reverse predictions

The aforementioned bias arises from an error that can be avoided when recording data, where predictions and reverse predictions are both available. However, similar types of bias may exist more subtly. Such bias could arise where people informally volunteer opinions in a discussion about some period in the future. People with shorter estimates, who can make a positive statement, might feel more as though they have something to say, while those who believe there will not be AI at that time do not. For instance, suppose ten people write books about the year 2050, and each predicts AI in a different decade in the 21st century. Those who predict it prior to 2050 will mention it, and be registered as predicting AI before 2050. Those who predict it after 2050 will not mention it, and will not be registered as making a prediction. This bias could also be hard to avoid if predictions reach you through a filter of others registering them as predictions.

Selection bias from optimistic experts

Main article: Selection bias from optimistic experts

Some factors that cause people to make predictions about AI are likely to correlate with expectations of human-level AI arriving sooner. Experts are better positioned to make credible predictions about their field of expertise than more distant observers are. However since people are more likely to join a field if they are more optimistic about progress there, we might expect their testimony to be biased toward optimism.

Measuring these biases

These forms of bias (except the last) seem to us likely to be much weaker in survey data than in voluntary statements, for the following reasons:

  • Surveys come with a default of answering questions, so one does not need a strong reason or social justification for doing so (e.g. having a surprising claim, or wanting to elicit concern).
  • One can assess whether a survey ignores reverse predictions, and there appears to be little risk of invisible reverse predictions.
  • Participation in surveys is mostly determined before the questions are viewed, for a large number of questions at once. This allows less opportunity for views on the question to affect participation.
  • Participation in surveys is relatively cheap, so people who care little about expressing any particular view are still likely to participate, whereas costly communications (such as writing a book) are likely to be worthwhile only for those with a strong interest in promoting a specific message.
  • Participation in surveys is usually anonymous, so relatively unsatisfactory for people who particularly want to associate with a specific view, further aligning the incentives of those who want to communicate with those who don’t care.
  • Much larger fractions of people participate in surveys when requested than volunteer predictions in highly publicized arenas, which lessens the possibility for selection bias.

We think publication biases such as those described here are reasonably likely on theoretical grounds. We are also not aware of other reasons to expect surveys and statements to differ in their optimism about AI timelines. Thus we can compare the predictions from statements and surveys to estimate the size of these biases. Survey data appears to produce median predictions of human-level AI somewhat later than similar public statements do: less than a decade later, at a very rough estimate. Thus we think some combination of these biases probably exists, introducing less than a decade of error into median estimates.

Implications

Accuracy of AI predictions: AI predictions made in statements are probably biased toward being early, by less than a decade. This suggests both that predictions overall are probably slightly earlier than they would otherwise be, and that surveys should be trusted more relative to statements (though there may be other considerations there).
Collecting data: When collecting data about AI predictions, it is important to avoid introducing bias by recording opinions that AI will arrive before some date while ignoring opinions that it will arrive after that date.
MIRI dataset: The earlier version of the MIRI dataset is somewhat biased due to ignoring reverse predictions; however, this has been at least partially resolved.

Selection bias from optimistic experts

Experts on AI probably systematically underestimate time to human-level AI, due to a selection bias. The same is more strongly true of AGI experts. The scale of such biases appears to be decades. Most public AI predictions are from AI and AGI researchers, so this bias is relevant to interpreting these predictions.

Details

Why we expect bias

We can model a person’s views on AI timelines as being influenced both by their knowledge of AI and by other somewhat independent factors, such as their general optimism and their understanding of technological history. People who are initially more optimistic about progress in AI seem more likely to enter the field of AI than those who are less so. Thus we might expect experts in AI to be selected for being optimistic, for reasons independent of their expertise. Similarly, AI researchers are presumably more likely to enter the subfield of AGI if they are optimistic about human-level intelligence being feasible soon.

This means expert predictions should tend to be more optimistic than they would if they were made by random people who became well informed, and thus are probably overall too optimistic (setting aside any other biases we haven’t considered).

This reason to expect bias only applies to the extent that predictions are made based on personal judgments, rather than explicit procedures that can be verified to avoid such biases. However predictions in AI appear to be very dependent on such judgments. Thus we expect some bias toward earlier predictions from AI experts, and more so from AGI experts. How large such biases might be is unclear however.

Empirical evidence for bias

Analysis of the MIRI dataset supports a selection bias existing. Median people working in AGI are around two decades more optimistic than median AI researchers from outside AGI. Those in AI are more optimistic again than ‘others’, and futurists are slightly more optimistic than even AGI researchers, though these are less clear due to small and ambiguous samples. In sum, the groups do make different predictions in the directions that we would expect as a result of such bias.

However it is hard to exclude expertise as an explanation for these differences, so this does not strongly imply that there are biases. There could also be biases that are not caused by selection effects, such as wishful thinking, planning fallacy, or self-serving bias. There may also be other plausible explanations we haven’t considered.

Since there are several plausible reasons for the differences we see here, and few salient reasons to expect effects in the opposite direction (expertise could go either way), the size of the selection biases in question is probably at most as large as the gaps between the predictions of the groups: roughly two decades between AI and AGI researchers, and another several decades between AI researchers and others. Part of this span should be a bias of the remaining group toward being too pessimistic, but in both cases the remaining groups are much larger than the selected group, so most of the bias should be in the selected group.

Effects of group biases on predictions

People being selected into groups such as ‘AGI researchers’ based on their optimism does not in itself introduce a bias. The problem arises when people from different groups start making different numbers of predictions. In practice, they do. Among the predictions we know of, most are from AI researchers, and a large fraction of those are from AGI researchers. Of surveys we have recorded, 80% target AI or AGI researchers, and around half of them target AGI researchers in particular. Statements in the MIRI dataset since 2000 include 13 from AGI researchers, 16 from AI researchers, 6 from futurists, and 6 from others. This suggests we should expect aggregated predictions from surveys and statements to be optimistic, by roughly decades.
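
A small sketch illustrates the effect: if we take the statement counts above together with the group medians reported later in this document, and assume (purely for illustration) a 15-year spread of opinion within each group, the median of the recorded statements sits near the optimistic groups' medians, whereas a pool dominated by 'others' would have a median decades later.

  import numpy as np

  rng = np.random.default_rng(1)
  # group -> (median predicted year, number of recorded statements since 2000);
  # medians come from the group-medians table later in this document, while the
  # 15-year spread within each group is an assumption made purely for illustration
  groups = {"AGI": (2033, 13), "AI": (2051, 16), "Futurist": (2031, 6), "Other": (2101, 6)}

  def pooled_median(counts):
      draws = np.concatenate([rng.normal(groups[g][0], 15, n) for g, n in counts.items()])
      return int(np.median(draws))

  recorded = pooled_median({g: n for g, (_, n) in groups.items()})
  # Hypothetical comparison: same views per group, but 'others' dominate numerically,
  # as they do in the general population
  dominated_by_others = pooled_median({"AGI": 13, "AI": 16, "Futurist": 6, "Other": 200})

  print("median of recorded statements:", recorded)            # pulled toward AI/AGI medians
  print("median if 'others' dominated: ", dominated_by_others) # decades later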

Conclusions

It seems likely that AI and AGI researchers’ predictions exhibit a selection bias toward being early, based on reasons to expect such a bias, the large disparity between AI and AGI researchers’ predictions (while AI researchers seem likely to be optimistic if anything), and the consistency between the distributions we see and those we would expect under the selection bias explanation for disagreement. Since AI and AGI researchers are heavily represented in prediction data, predictions are likely to be biased toward optimism, by roughly decades.

 

Relevance

Accuracy of AI predictions: many AI timeline predictions come from AI researchers and AGI researchers, and people interested in futurism. If we want to use these predictions to estimate AI timelines, it is valuable to know how biased they are, so we can correct for such biases.

Detecting relevant expertise: if the difference between AI and AGI researcher predictions is not due to bias, then it suggests one group had additional information. Such information would be worth investigating.

Group Differences in AI Predictions

AGI researchers appear to expect human-level AI substantially sooner than other AI researchers. The difference ranges from about five years to at least about sixty years as we move from highest percentiles of optimism to the lowest. Futurists appear to be around as optimistic as AGI researchers. Other people appear to be substantially more pessimistic than AI researchers.

Details

MIRI dataset

We categorized predictors in the MIRI dataset as AI researchers, AGI researchers, Futurists and Other. We also interpreted their statements into a common format, roughly corresponding to the first year in which the person appeared to be suggesting that human-level AI was more likely than not (see ‘minPY’ described here).

Recent (since 2000) predictions are shown in the figure below. Those made by people from the subfield of AGI tend to be decades more optimistic than those at the same percentile of optimism in AI. The difference ranges from about five years to at least about sixty years as we move from highest percentiles of optimism to the lowest. Those who work in AI tend to be at least a decade more optimistic than ‘others’, at any percentile of optimism within their group. Futurists are about as optimistic as AGI researchers.

Note that these predictions were made over a period of at least 12 years, rather than at the same time.


Figure 1: Cumulative probability of AI being predicted (minPY), for various groups, for predictions made after 2000. See here.

Median predictions are shown below (these are also minPY predictions as defined on the MIRI dataset page, calculated from the ‘cumulative distributions’ sheet in the updated dataset spreadsheet, also available there).

 Median AI predictions              AGI    AI     Futurist  Other  All
 Early (pre-2000) (warning: noisy)  1988   2031   2036      -      2025
 Late (since 2000)                  2033   2051   2031      2101   2042

FHI survey data

The FHI survey results suggest that people’s views are not very different if they work in computer science or other parts of academia. We have not investigated this evidence in more detail.

Implications

Biases from optimistic predictors and information asymmetries: Differences of opinion among groups who predict AI suggest that either some groups have more information, or that biases exist in some of the groups. Either of these is valuable to know about, so that we can either look into the additional information, or try to correct for the biases.

The Maes-Garreau Law

The Maes-Garreau law posits that people tend to predict exciting future technologies toward the end of their lifetimes. It probably does not hold for predictions of human-level AI.

Clarification

From Wikipedia:

The Maes–Garreau law is the statement that “most favorable predictions about future technology will fall within the Maes–Garreau point”, defined as “the latest possible date a prediction can come true and still remain in the lifetime of the person making it”. Specifically, it relates to predictions of a technological singularity or other radical future technologies.

The law was posited by Kevin Kelly, here.

Evidence

In the MIRI dataset, age and predicted time to AI are very weakly anti-correlated, with a correlation of -0.017. That is, older people expect AI very slightly sooner than others. This suggests that if the Maes-Garreau law applies to human-level AI predictions, it is very weak, or is being masked by some other effect. Armstrong and Sotala also interpret an earlier version of the same dataset as evidence against the Maes-Garreau law substantially applying, using a different method of analysis.
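
The correlation itself is straightforward to compute from the dataset's columns; below is a minimal sketch with placeholder data (the column contents and sizes here are invented; only the reported figure of about -0.017 comes from the actual data):

  import numpy as np

  rng = np.random.default_rng(0)
  # Placeholder columns standing in for the real dataset: each predictor's age when
  # the prediction was made, and the number of years from then to their predicted AI date
  age_at_prediction = rng.uniform(25, 75, size=60)
  years_until_predicted_ai = rng.uniform(5, 100, size=60)

  r = np.corrcoef(age_at_prediction, years_until_predicted_ai)[0, 1]
  print(f"Pearson correlation: {r:.3f}")
  # With the actual MIRI data, this figure comes out near zero (about -0.017)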

Earlier, smaller, informal analyses find evidence of the law, but in different settings. According to Rodney Brooks (according to Kevin Kelly), Pattie Maes observed this effect strongly in a survey of public predictions of human uploading:

[Maes] took as many people as she could find who had publicly predicted downloading of consciousness into silicon, and plotted the dates of their predictions, along with when they themselves would turn seventy years old. Not too surprisingly, the years matched up for each of them. Three score and ten years from their individual births, technology would be ripe for them to download their consciousnesses into a computer. Just in the nick of time! They were each, in their own minds, going to be remarkably lucky, to be in just the right place at the right time.

However, according to Kelly, the data was not kept.

Kelly did another small search for predictions of the singularity, which appears to only support a very weakened version of the law: many people predict AI within their lifetime.

The hypothesized reason for this relationship is that people would like to believe they will personally avoid death. If this is true, we might expect the relation to apply much more strongly to predictions of events which might fairly directly save a person from death. Human uploading and the singularity are such events, while human-level AI does not appear to be. Thus it is plausible that this law does apply to some technological predictions, but not human-level AI.

Implications

Evidence about wishful thinking: the Maes-Garreau law is a relatively easy-to-check instance of a larger class of hypotheses about AI predictions being driven by wishful thinking. If wishful thinking were a large factor in AI predictions, this would undermine their accuracy, because wishful thinking is not related to when human-level AI will actually appear. That the Maes-Garreau law doesn’t seem to hold is evidence against wishful thinking being a strong determinant of AI predictions. Further evidence might be obtained by observing the correlation between belief that human-level AI will be positive for society and belief that it will come soon.

  1. ‘Using a database of 95 AI timeline predictions, it will show that these expectations are borne out in practice: expert predictions contradict each other considerably, and are indistinguishable from non-expert predictions and past failed predictions.’ – Armstrong and Sotala 2012, p1
  2. For instance, in an interview with Alexander Kruel, Pei Wang says ‘Here by “roughly as good as humans” I mean the AI will follow roughly the same principles as human in information processing, though it does not mean that the system will have the same behavior or capability as human, due to the difference in body, experience, motivation, etc.’ Nils Nilsson interprets the question differently: ‘Because human intelligence is so multi-faceted, your question really should be divided into each of the many components of intelligence… A while back I wrote an essay about a replacement for the Turing test. It was called the “Employment Test.” (See: http://ai.stanford.edu/~nilsson/OnlinePubs-Nils/General_Essays/AIMag26-04-HLAI.pdf) How many of the many, many jobs that humans do can be done by machines? I’ll rephrase your question to be: When will AI be able to perform around 80% of these jobs as well or better than humans perform?’ These researchers were asked for their predictions in a context conducive to elaboration. Had they been surveyed more briefly (as in most surveys), or chosen not to elaborate, at least one would have been misunderstood. It is an open question whether 80% of jobs being automated will roughly coincide with artificial minds using similar information processing principles to humans.
