Will Superhuman AI be created?

Published 6 Aug 2022

This page may represent little of what is known on the topic. It is incomplete, under active work and may be updated soon.

Superhuman AI appears very likely to be created at some point.

Details

Let ‘superhuman AI’ refer to a set of AI systems that together achieve human-level performance across virtually all tasks (i.e. HLMI, human-level machine intelligence) and substantially surpass human-level performance on some tasks.

Arguments

A. Superhuman AI is very likely to be physically possible

1. Human brains prove that it is physically possible to create human-level intelligence

A single human brain is not an existence proof of human-level intelligence according to our definition, because no single human brain can perform every task at the level of the most proficient human brain at that task. Given only the observation that human brains exist, it could conceivably be impossible to build a machine that performs any task at the level of any chosen human brain, at the total cost of a single human brain. For instance, if each brain could only hold the information required to specialize in one career, then a machine that could do any career would need to store much more information than a brain, and thus could be more expensive.

The entire population of human brains together could be said to perform at ‘human-level’, if the cost of doing a single task is considered to be the cost of the single person’s labor used for that task, rather than the cost of maintaining the entire human race. This seems like a reasonable accounting for the present purposes. Thus, the entire collection of human brains demonstrates that it is physically possible to have a system which can do any task as well as the most proficient human, and can do marginal tasks at the cost of human labor (even if the cost of maintaining the entire system would be much higher, were it not spread between many tasks).

2. We know of no reason to expect that human brains are near the limits of possible intelligence

Human brains do appear to be near the limits of performance for some specific tasks. For instance, humans can play tic-tac-toe perfectly. Also, for many tasks, human performance already captures most of the value potentially available, so it is impossible to perform much better in terms of value (e.g. selecting lunch from a menu, making a booking, recording a phone number).

However, many tasks do not appear to be like this (e.g. winning at Go), and even for the above-mentioned tasks, there is room to carry them out substantially faster or more cheaply than a human does. Thus there appears to be room for substantially better-than-human performance on a wide range of tasks, though we have not seen a careful accounting of this.

3. Artificial minds appear to have some intrinsic advantages over human minds

a) Human brains developed under constraints that would not apply to artificial brains. In particular, energy use was a more important cost for brains than it would be for machines, and reproduction placed constraints on human head size.

b) Machines appear to have huge potential performance advantages over biological systems on some fronts. Carlsmith summarizes Bostrom1:

Thus, as Bostrom (2014, Chapter 3) discusses, where neurons can fire a maximum of hundreds of Hz, the clock speed of modern computers can reach some 2 GHz — ~ten million times faster. Where action potentials travel at some hundreds of m/s, optical communication can take place at 300,000,000 m/s — ~ a million times faster. Where brain size and neuron count are limited by cranial volume, metabolic constraints, and other factors, supercomputers can be the size of warehouses. And artificial systems need not suffer, either, from the brain’s constraints with respect to memory and component reliability, input/output bandwidth, tiring after hours, degrading after decades, storing its own repair mechanisms and blueprints inside itself, and so forth. Artificial systems can also be edited and duplicated much more easily than brains, and information can be more easily transferred between them.
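As a quick sanity check on the quoted ratios, here is a minimal arithmetic sketch in Python. The specific values (200 Hz for neuron firing, 100 m/s for action-potential speed) are our own illustrative stand-ins for the quote's "hundreds"; only the orders of magnitude matter.

```python
# Rough check of the speed comparisons quoted above (illustrative values only).
neuron_firing_hz = 200        # "hundreds of Hz" (assumed representative value)
cpu_clock_hz = 2e9            # ~2 GHz
action_potential_m_s = 100    # "hundreds of m/s" (assumed representative value)
optical_signal_m_s = 3e8      # ~300,000,000 m/s

print(cpu_clock_hz / neuron_firing_hz)            # 1e7: ~ten million times faster
print(optical_signal_m_s / action_potential_m_s)  # 3e6: order of a million times faster
```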

4. Superhuman AI is very likely to be physically possible (from 1-3)

Human-level performance is physically possible, since the human species collectively achieves it (1). There seems little reason to think that no system could perform tasks substantially better (2), and there are multiple moderately strong reasons to think that more capable systems are possible (3). Thus it seems very likely that systems substantially more capable than humans are physically possible.

5. The likely physical possibility of superhuman AI minds strongly suggests the physical feasibility of creating such minds

A superhuman mind is a physically possible object (4). However, that a physical configuration is possible does not imply that intentionally bringing about such a configuration is feasible in practice. For an example of the difference, a waterfall whose water is in exactly the same configuration as that of the Niagara Falls over the last ten minutes is physically possible (the Niagara Falls just did it), yet bringing this about again intentionally may remain intractable forever.

In fact, we know that human brains specifically are not only physically possible, but feasible for humans to create. However, this creation takes the form of biological reproduction, and does not appear to straightforwardly imply that humans can create arbitrary other systems with at least the intelligence of humans. That is, human creation of human brains does not obviously imply that it is possible for humans to intentionally create human-level intelligence that isn’t a human brain.

However, it seems strongly suggestive. If a physical configuration is possible, the natural reasons it might still be intractable to bring about are a) that it is computationally difficult to transform the given reference into an actionable description of the physical state required (e.g. ‘Niagara Falls ten minutes ago’ points at a particular configuration of water, but not in a way that is easily convertible to a detailed specification of the locations of that water2), and b) that the actionable description is hard to bring about. For instance, it might require minute manipulation of particles beyond what is feasible today, either to achieve the required degree of specificity, or to avoid chaotic dynamics taking the system away from the desired state even at a macroscopic level (e.g. even if your description of the starting state of the waterfall is fairly detailed, the recreated waterfall will quickly diverge from the real one).
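The second difficulty, sensitivity to small errors in the initial description, can be made concrete with a minimal sketch. The logistic map below is our own stand-in for a chaotic system (it is not from the source); the point is only that a tiny discrepancy between the described state and the real one grows until the two bear no resemblance.

```python
# Two nearly identical starting states under a chaotic map (logistic map, r = 4).
# A description that is off by one part in a billion soon produces a trajectory
# unrelated to the real one.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x_real, x_described = 0.600000000, 0.600000001  # initial description off by 1e-9
for step in range(1, 61):
    x_real, x_described = logistic(x_real), logistic(x_described)
    if step % 20 == 0:
        print(step, abs(x_real - x_described))
# The gap grows from ~1e-9 to order 1 within a few dozen iterations.
```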

These issues don’t appear to apply to creating superhuman intelligence, so in the absence of other evident defeaters, its physical possibility seems to strongly suggest its physical feasibility.

6. Contemporary AI systems exhibit qualitatively similar capabilities to human minds, suggesting that modified versions of similar processes would give rise to capabilities matching human minds

That is, given that current AI techniques create systems that perform many human-like tasks at some level of performance, e.g. recognizing images and writing human-like language, it would be somewhat surprising if getting to human-level performance on these tasks required such starkly different methods as to be impossible.

7. Superhuman AI systems will very likely be physically feasible to create (from 4-6)

Superhuman AI systems are probably physically possible (4), and this strongly suggests that they are feasible to create (5). Separately, presently feasible AI systems exhibit qualitatively similar behavior to human minds (6), weakly suggesting that systems exhibiting similar behavior at a higher level of performance will also be feasible to create.

B. If feasible, superhuman AI will very likely be created

8. Superhuman AI would appear to be very economically valuable, given its potential to do most human work better or more cheaply

Whenever such minds become feasible, by stipulation they will be superior in some ways to existing sources of cognitive labor; otherwise those sources would already have constituted superhuman AI. Unless the gap between human-level AI and the best feasible superhuman AI is quite small (which seems unlikely, given the large potential room for improvement over human minds), the economic value from superhuman AI should be at least at the scale of the global labor market.

9. It appears there will be large incentives to create such systems (from 8)

That a situation would make large amounts of economic value available does not imply that any individual has an incentive to make that situation happen, because the value may not accrue to the possible decision-maker. In this case, however, substantial parts of the economic value created by superhuman AI systems appear likely to be captured by their creators, by analogy to other commercial software.

These economic incentives may not be the only substantial incentives in play. Creating such systems could incur social or legal consequences that could negate the positive incentives. Thus, this step is relatively uncertain.

10. Superhuman AI seems likely to be created

Given that a type of machine is physically feasible to create (7) and that there are strong incentives to create it (9), it seems likely that it will be created.

Notes

  1. Carlsmith, Joseph. “Is Power-Seeking AI an Existential Risk? [Draft].” Open Philanthropy Project, April 2021. https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit?usp=embed_facebook.

    Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. 1st edition. Oxford: Oxford University Press, 2014.
  2. Another example of evident physical possibility diverging from tractability is being given the output of a hash function and wanting to create an input that produces that output (illustrated in the sketch below).
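To make the asymmetry in note 2 concrete, here is a minimal Python sketch (the input string and attempt budget are arbitrary illustrations, not from the source). Computing a hash output is trivial, while finding an input that reproduces a given output is believed to require astronomically many guesses, even though such an input demonstrably exists.

```python
import hashlib

# Forward direction: easy. Hashing an input takes microseconds.
target = hashlib.sha256(b"Niagara Falls, ten minutes ago").hexdigest()
print(target)

# Reverse direction: intractable. Finding any input that hashes to 'target'
# is believed to require on the order of 2**256 brute-force guesses,
# even though we know a suitable input exists (we just hashed one).
def find_preimage(target_hex, attempts=1_000_000):
    for i in range(attempts):
        candidate = str(i).encode()
        if hashlib.sha256(candidate).hexdigest() == target_hex:
            return candidate
    return None  # the near-certain outcome for any feasible number of attempts

print(find_preimage(target))  # None, in any realistic run
```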

We welcome suggestions for this page or anything on the site via our feedback box, though we will not address all of them.