Coordinated human action as an example of superhuman intelligence

Collections of humans organized into groups and institutions provide many historical examples of the creation and attempted control of intelligences that routinely outperform individual humans. A preliminary look at the available evidence suggests that individuals are often cognitively outperformed in head-to-head competition with groups of similar average intelligence. This article surveys considerations relevant to the topic and lays out what a plausible research agenda in this area might look like.

Background

Humans are often organized into groups in order to perform tasks beyond the abilities of any single human in the group. Many such groups perform cognitive tasks. The history of forming such groups is long and varied, and provides some evidence about what new forms of superhuman intelligence might be like.

Some examples of humans cooperating on a cognitive task that no one member could perform include:

  • Ten therapists can see ten times as many patients as one therapist can.
  • A hospital can perform many more kinds of medical procedure and treat many more kinds of illness than any one person in the hospital.
  • A team of friends on trivia night might be able to answer more questions than any one of them could individually.

How such institutions are formed, and the sensitivity of their behavior to starting conditions, may help us predict the behavior of similarly constituted AIs or systems of AIs. This information would be especially useful if control or value alignment problems have been solved in some cases, or to the extent that existing human institutions resemble superintelligences or constitute an intelligence explosion.

There are several reasons these kinds of groups may present only a limited analogy to digital artificial intelligence. For instance, humans have no software-hardware distinction, so physical measures such as fences that can control the spread of humans are not likely to be as reliable at controlling the spread of digital intelligences. An individual human cannot easily be separated into different cognitive modules, which limits the design flexibility of intelligences constructed from humans. More generally, AIs may be programmed in ways very different from the heuristics and algorithms executed by the human brain, so while human organizations may be a kind of superhuman intelligence, they are not necessarily representative of the broader space of possible superintelligences.

Questions for further investigation:

  • Do any human organizations have the characteristics of superintelligences that some AI researchers and futurists expect to cause an intelligence explosion with catastrophic consequences? If so, do we expect catastrophe from human organizations? If not, what distinguishes them from other potential artificial intelligences?
  • How similar is the problem of controlling institutional behavior to the value alignment problem with respect to powerful digital AIs? Are the expected consequences similar?
  • Do control mechanisms require limiting the cognitive performance of groups, or are there control mechanisms that do not appear to degrade in effectiveness as the intelligence of the group increases?
  • How relevant are the differences between human collective intelligence and digital AI?

Group vs individual performance

Institutions are mainly relevant as an example of constructed intelligence if their intelligence is higher than that of humans, in some sense. This section examines reasons to believe this might be the case.

Mechanisms for cognitive superiority of groups

We can think of several mechanisms by which a group might outperform individual humans on cognitive tasks, although this list is not comprehensive:

  • Aggregation – A large number of people can often perform cognitive tasks at a higher rate than a single person performing the same tasks. For example, a large accounting firm ought to be able to perform more audits, or prepare more tax returns, than a single accountant. In practice, there are often impediments to work scaling linearly with the number of people involved, as noted in observations such as Parkinson’s Law.
  • Cognitive economies of scale
    • It is often less costly to teach someone how to perform a task than for them to figure it out on their own. Knowledge transfer between members of a group may therefore accelerate the learning process.
    • Individuals with different skills can cooperate to produce things, or quantities of things, that no one person could have produced, through specialization and gains from trade. For example, the essay I, Pencil describes the large number of processes required to produce a single pencil, each demanding a very different set of skills and procedures that any one person would take a long time to learn.
  • Model combination and adjustment
    • In groups solving problems, people can make different suggestions and identify one another’s incorrect suggestions, which may help the group avoid wasting time on blind alleys or adopting premature, incorrect solutions.
    • The average of the individual estimates from a group of people is typically more reliably accurate than the estimate of any individual in the group, because random errors tend to cancel each other out. This is often called the “wisdom of crowds”.
    • Groups of people can also coordinate by comparing predictions and accepting the claim the group finds most credible. Trivia teams typically use this strategy. Groups of people have also been pitted against individuals in chess games.
    • Markets can be used to combine information from many individuals.
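As a toy illustration of the averaging mechanism described above, the following sketch simulates a group estimating a quantity (the error model and all numbers here are illustrative assumptions, not drawn from any of the studies discussed below):

```python
import random

random.seed(0)

TRUE_WEIGHT = 100.0  # the quantity being estimated
GROUP_SIZE = 50
TRIALS = 1000

individual_errors = []
group_errors = []
for _ in range(TRIALS):
    # each member's estimate is the true value plus independent random noise
    estimates = [TRUE_WEIGHT + random.gauss(0, 20) for _ in range(GROUP_SIZE)]
    # track one member's error vs. the error of the group's mean estimate
    individual_errors.append(abs(estimates[0] - TRUE_WEIGHT))
    group_errors.append(abs(sum(estimates) / GROUP_SIZE - TRUE_WEIGHT))

avg_individual = sum(individual_errors) / TRIALS
avg_group = sum(group_errors) / TRIALS
print(f"average individual error: {avg_individual:.1f}")
print(f"average group-mean error: {avg_group:.1f}")
```

With independent, unbiased noise, the group mean's error shrinks roughly as 1/√n, which is the statistical core of the "wisdom of crowds" effect; correlated errors across members weaken the effect.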

Further investigation on this topic could include:

  • Generating a more comprehensive list of potential mechanisms by which institutions and groups may have a cognitive advantage, by examining the historical record, arguments, and experimental and case studies of individual vs group performance.
  • Assessing which mechanisms can be shown to work, and how much group intelligence can exceed individual intelligence, by evaluating historical examples, case studies, and experimental studies.
  • Assessing in which aspects of intelligence, if any, groups have not outperformed individuals.

Evidence of cognitive superiority of groups

An incomplete survey of literature on collective intelligence found several measures where group performance, distinct from individual performance, has been explicitly evaluated:

  • Woolley et al. 2010 examined the performance of groups on tasks such as solving visual puzzles, brainstorming, making collective moral judgments, negotiating over limited resources, and playing checkers against a standardized computer opponent. The study found that performance across these tasks was correlated, and that this shared factor was related more to the ability of members to coordinate than to the average or maximum intelligence of group members.
  • Shaw 1932 compared the timed performance of individuals and four-person groups on simple spatial and logical reasoning problems and on verbal tasks (arranging a set of words to form the end of a passage). On the problems that anyone was able to solve, groups substantially outperformed individuals, mostly by succeeding much more often. No one solved the last two problems, but on those problems the suggestions rejected during group problem-solving were predominantly incorrect suggestions rejected by someone other than the person who proposed them, which suggests that error-correction may be an important part of the advantage of group cognition.
  • Thorndike 1938 compared group and individual performance on vocabulary completion, limerick completion, and solving and constructing crossword puzzles. Groups outperformed individuals on everything except constructing crossword puzzles.
  • Taylor and Faust 1952 tested the ability of individuals, groups of two, and groups of four, to solve “twenty questions” style problems. Groups outperformed individuals, but larger groups did not outperform smaller groups.
  • Gurnee 1936 compared individual and group performance at maze learning. Groups completed mazes faster and with fewer false moves.
  • Gordon 1924 compared individual estimates of an object’s weight with the average of members of a group. The study found that group averages outperformed individual estimates, and that larger groups performed better than smaller groups.
  • McHaney et al. 2015 compared the performance of individuals, ad hoc groups, and groups with a prior history of working together, at detecting deception. The study found that groups with a prior history of working together outperform ad hoc groups, and refers to earlier literature that found no difference between the performance of individuals and that of ad hoc groups.

Most of these studies appear to show groups outperforming individuals. We also found review articles referencing dozens of other studies. We may follow up with a more comprehensive review of the evidence in this area in the future.

Questions for further investigation:

  • Which of the possible mechanisms for cognitive superiority of groups do human institutions demonstrate in practice? Do they have important advantages other than the ones enumerated?
  • In what contexts has the difference between group and individual performance been measured? Are there measures on which large organizations do much better than a single human? On what kinds of tasks does group performance most exceed that of individuals? How are these groups constituted?
  • Are there measures on which large organizations cannot be arbitrarily better than a single human? (These might still be things that an AI could do much better, and so where organizations are not a good analogue.) Are there measures for which large organizations have not yet even reached human level intelligence? (It is deprecatory to say something was “written by a committee.”)

We welcome suggestions for this page or anything on the site via our feedback box, though we will not address all of them.

5 Comments

  1. Model combination and adjustment has been heavily studied in Machine Learning, with excellent results. These are referred to as “ensemble methods.” The core idea is to take several different models whose results were achieved somewhat independently, and take a vote weighted by the confidence each model has in its answer.

    Ensemble methods achieve outsized results in practice, for example:

    • The Netflix Prize, which drew intense competition from top ML teams, was won using ensembles (https://en.wikipedia.org/wiki/Netflix_Prize#2009)
    • Most Kaggle contests are won using ensemble methods (http://www.overkillanalytics.net/more-is-always-better-the-power-of-simple-ensembles/)
    • Random Forests, themselves an ensemble method, are widely used in business because they are very simple and achieve great results
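    The confidence-weighted vote described above can be sketched in a few lines. The "models" here are hypothetical stand-in callables with hand-picked outputs, purely for illustration; a real ensemble would combine trained classifiers:

    ```python
    from collections import defaultdict

    def ensemble_predict(models, x):
        """Combine predictions by a vote weighted by each model's confidence.

        `models` is a list of callables returning (label, confidence) pairs,
        with confidence in [0, 1].
        """
        votes = defaultdict(float)
        for model in models:
            label, confidence = model(x)
            votes[label] += confidence
        # return the label with the highest total weighted vote
        return max(votes, key=votes.get)

    # three toy "models" with fixed outputs, for illustration only
    models = [
        lambda x: ("cat", 0.9),
        lambda x: ("dog", 0.4),
        lambda x: ("dog", 0.3),
    ]
    print(ensemble_predict(models, None))  # "cat": 0.9 beats dog's 0.4 + 0.3 = 0.7
    ```

    Note that a confident minority can outvote an unconfident majority, which is one way ensembles differ from a simple majority vote.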

    • I think that we should compare AI not only with small groups but also with large ones, namely:
      1) A large scientific institute: to become self-improving, an AI should probably be at least at this level; one human's level is not enough.
      2) A national state. States may be seen as large computers regulated by laws. Their friendliness is supposed to be guaranteed by a constitution (though this often fails, and the comparison may be useful for friendly AI research). In the case of a slow takeoff, integration between AI and existing states may be possible.
      3) All of human science over the last 100 years. To really influence human life, an AI should outperform all of human science in speed, perhaps by 10 to 100 times.

  2. Instrumentally efficient agents are presently unknown.

    There are perfect tic-tac-toe players. But even modern chess-playing programs, with ability far in advance of any human player, are not yet so advanced that every move that looks to us like a mistake must therefore be secretly clever. We don’t dismiss out of hand the notion that a human has thought of a better move than the chess-playing algorithm, the way we dismiss out of hand a supposed secret to the stock market that predicts 10% price changes of S&P 500 companies using public information.

    There is no analogue of ‘instrumental efficiency’ in asset markets, since market prices do not directly select among strategic options. Nobody has yet formulated a use of the EMH such that we could spend a hundred million dollars to guarantee liquidity, and get a well-traded asset market to directly design a liquid fluoride thorium nuclear plant, such that if anyone said before the start of trading, “Here is a design X that achieves expected value M”, we would feel confident that either the asset market’s final selected design would achieve at least expected value M or that the original assertion about X’s expected value was wrong.

    By restricting the concept of ‘instrumental efficiency’ even further, we get something like a valid metaphor in chess. Even if you’re an International Grandmaster with hours to think about the game, you should regard a modern chess program as instrumentally efficient relative to you. The chess program will not make any mistake that you can understand as a mistake. You should expect the reason why the chess program moves anywhere to be understandable only as ‘because that move had the greatest probability of winning the game’ and not in any other terms like ‘it likes to move its pawn’. If you see the chess program move somewhere unexpected, you conclude that it is about to do exceptionally well or that the move you expected was surprisingly bad. There’s no way for you to find any better path to the chess program’s goals by thinking about the board yourself. An instrumentally efficient agent would have this property for humans in general and the real world in general, not just you and a chess game.

    For any reasonable attempt to define a corporation’s utility function (e.g. discounted future cash flows), it is not the case that we can confidently dismiss any assertion by a human that a corporation could achieve 10% more utility under its utility function by doing something differently. It is common for a corporation’s stock price to rise immediately after it fires a CEO or renounces some other mistake that many market actors knew was a mistake but had been going on for years – the market actors are not able to make a profit on correcting that error, so the error persists.

    Standard economic theory does not predict that any currently known economic actor will be instrumentally efficient under any particular utility function, including corporations. If it did, we could solve any other strategic problem by making that actor’s utility function conditional on it, e.g., reliably obtaining the best humanly imaginable nuclear plant design by paying a corporation for it via a sufficiently well-designed contract.

    Sometimes people try to label corporations as superintelligences, with the implication that corporations are the real threat, and as severe a threat as machine superintelligences. But epistemic or instrumental decision-making efficiency of individual corporations is just not predicted by standard economic theory. Most corporations do not even use internal prediction markets, or try to run conditional stock-price markets to select among known courses of action. Standard economic history includes many accounts of corporations making ‘obvious mistakes’, and these accounts are not questioned in the way that e.g. a persistent large predictable error in short-run asset prices would be questioned.

    Since corporations are not instrumentally efficient (or epistemically efficient), they are not superintelligences.

    • I agree that it’s unlikely that corporations as currently constituted represent anything near an upper bound for how dangerous superintelligence can get, and that they are not instrumentally efficient relative to the best other human institutions. However, they may still provide historical examples of attempts to constrain in advance the behavior of intelligent agents with considerable power and autonomy.

      Instrumental efficiency looks to me like a relative attribute, not an absolute one. While it’s likely true that no existing corporation is instrumentally efficient relative to the smartest relevant other human institution (e.g. markets, which aggregate the intelligence of many more humans than a corporation does), we might still hope to find and learn from historical examples where a person or a group of people attempts to constrain the behavior of a future institution that will become instrumentally efficient relative to them.
