Friendly AI as a global public good

By Katja Grace, 8 August 2016

A public good, in the economic sense, can be (roughly) characterized as a desirable good that is likely to be undersupplied, or not supplied at all, by private actors. It generally falls to governments to supply such goods; examples include infrastructure networks and national militaries. See here for a more detailed explanation of public goods.

The provision of public goods by governments can work quite well at the national level. At the international level, however, there is no global government with the power to impose legislation on countries and enforce it. As a result, many global public goods, such as carbon emission abatement, disease eradication, and existential risk mitigation, are only partially provided, or not provided at all.

Scott Barrett, in his excellent book Why Cooperate? The Incentive to Supply Global Public Goods, explains that not all global public goods are created equal. He develops a categorization scheme (Table 1) that identifies the characteristics influencing whether a given good is likely to be provided, and the tools that can improve its likelihood of provision.

For example:

  • Climate change mitigation is classified as an “aggregate effort” global public good, since its provision depends on the aggregate of all countries’ CO2eq emissions. Provision is difficult, as countries each individually face strong incentives to pollute.
  • Defense against large Earth-bound asteroids is classified as a “single best effort” global public good, since provision requires actions by only one country (or coalition of countries). Providing this global public good unilaterally is likely to be in the interests and within the capabilities of at least one individual country, and so it is likely to be provided.
  • Nuclear non-proliferation is classified as a “mutual restraint” public good, since it is provided by countries refraining from doing something. Provision is difficult as many countries individually face strong incentives to maintain a nuclear deterrent (despite the associated economic cost).
| | Single best effort | Weakest link | Aggregate effort | Mutual restraint | Coordination |
|---|---|---|---|---|---|
| Supply depends on… | The single best (unilateral or collective) effort | The weakest individual effort | The total effort of all countries | Countries not doing something | Countries doing the same thing |
| Examples | Asteroid defense, knowledge, peacekeeping, suppressing an infectious disease outbreak at its source, geoengineering | Disease eradication, preventing emergence of resistance and new diseases, securing nuclear materials, vessel reflagging | Climate change mitigation, ozone layer protection | Non-use of nuclear weapons, non-proliferation, bans on nuclear testing and biotechnology research | Standards for the measurement of time, for oil tankers, and for automobiles |
| International cooperation needed? | Yes, in many cases, to determine what should be done, and which countries should pay | Yes, to establish universal minimum standards | Yes, to determine the individual actions needed to achieve an overall outcome | Yes, to agree on what countries should not do | Yes, to choose a common standard |
| Financing and cost sharing needed? | Yes, when the good is provided collectively | Yes, in some cases | Yes, with industrialized countries helping developing countries | No | No |
| Enforcement of agreement challenging? | Not normally | Yes, except when provision requires only coordination | Yes | Yes | No, though participation will need to pass a threshold |
| International institutions for provision | Treaties in some cases; international organizations, such as the UN, in other cases | Consensus (World Health Assembly) or Security Council resolutions, customary law | Treaties | Treaties, norms, customary law | Non-binding resolutions; treaties in some cases |

Table 1: Simple Taxonomy of Global Public Goods
Source: Scott Barrett (2010), Why Cooperate? The Incentive to Supply Global Public Goods (location 520 of Kindle edition)
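The incentive problem behind an “aggregate effort” good such as climate change mitigation can be made concrete with a toy two-country game. The payoff numbers below are illustrative assumptions, not figures from Barrett: abating costs the abater 3, and each abater confers a benefit of 2 on every country.

```python
# Toy two-country "aggregate effort" game. Payoff numbers are
# illustrative assumptions: abating costs the abater 3, and each
# abater confers a benefit of 2 on every country.

ACTIONS = ("abate", "pollute")

def payoff(own_action: str, other_action: str) -> int:
    """Payoff to one country, given its own action and the other's."""
    abaters = [own_action, other_action].count("abate")
    benefit = 2 * abaters                     # everyone gains from each abater
    cost = 3 if own_action == "abate" else 0  # only the abater pays
    return benefit - cost

for own in ACTIONS:
    for other in ACTIONS:
        print(f"own={own:7} other={other:7} -> payoff {payoff(own, other)}")

# Whichever action the other country takes, polluting pays 1 more than
# abating, so both countries pollute (payoff 0 each) even though mutual
# abatement would pay 1 each: the good is underprovided.
```

The same structure scales to many countries: each country’s abatement cost falls entirely on itself while the benefit is spread across everyone, so the dominant strategy remains to pollute.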

Applying the Barrett framework to friendly AI

Artificial Intelligence (AI) technology is likely to progress until the eventual creation of AI that vastly surpasses human cognitive capabilities—artificial superintelligence (ASI). The possibility of an intelligence explosion means that the first ASI system, or those that control it, might possess an unprecedented ability to shape the world according to their preferences. This event could define our entire species, leading rapidly to the full realization of humanity’s potential or causing our extinction. Since “friendly AI”—safe ASI deployed for the benefit of humanity—is a global public good, it may be informative to apply Barrett’s global public good classification scheme to analyse the different facets of this challenge.

Since this framework focuses on the incentives faced by national governments, it is most relevant to situations where ASI development is largely driven by governments, which will therefore be the focus of this article. This government-led scenario is distinct from the current situation of technology industry-led development of AI. Governments might achieve this high level of control through large-scale state-sponsored projects and regulation of private activities.

As with many global public goods, the development of friendly AI can be broken down into many components, each of which may fall into a different category within Barrett’s taxonomy. Here I will focus on those that I believe are most important for long-term safety.

Arguably, one of the most concerning problems in the government-led scenario is the potential for the benefits of ASI to be captured by some subset of humanity. Humans are unfortunately much more strongly motivated by self-interest than by the common good, and this is reflected in national and international politics. This means that, given the chance, leaders whose governments control the development of ASI might seek to capture the benefits for their country alone, or for a narrower group such as their political allies. This could be achieved by instilling values in the ASI system that favor such groups, or through the direct exertion of control over the ASI system. Protection against this possibility constitutes a “mutual restraint” public good, since its provision relies upon countries refraining from such capture. Failing to prevent this possibility may, depending on the preferences of those who control ASI, cause an existential catastrophe, for example in the form of “flawed realization” or “shriek”.

Because of this, and given the current anarchic state of international relations, any ASI-developing country is likely to be perceived as a significant security threat by other countries. Fear that a country succeeding in creating ASI would gain a large strategic advantage over the others could readily trigger an ASI development race. In such a race, speed may be prioritized at the expense of safety measures, for example those necessary to solve the value-loading problem (Ch. 12) and the control problem (Ch. 9). This would compound the risks of misuse of ASI explored in the previous paragraph by increasing the possibility of humanity losing control of its creation. The likelihood of an ASI development race is somewhat supported by Chalmers 2010 (footnote, p. 29).
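To see how the number of competitors might erode safety investment, here is a toy best-response sketch. All functional forms are assumptions made for illustration (a team’s chance of winning is its share of total development speed, and the prize is realized only if the winning team’s own system works, with probability equal to its chosen safety level); this is not a model taken from Barrett, Bostrom, or Chalmers.

```python
# Toy ASI race model (all functional forms are illustrative assumptions).
# Each of n teams picks a safety level s in [0, 1]; development speed is
# 1 - s; a team wins with probability equal to its share of total speed,
# and the prize pays off only if the winner's system works (probability s).

GRID = [i / 100 for i in range(101)]  # candidate safety levels

def expected_payoff(s, rival_s, n):
    """Expected payoff of choosing safety s when all n-1 rivals choose rival_s."""
    speed, rival_speed = 1 - s, 1 - rival_s
    total = speed + (n - 1) * rival_speed
    win = speed / total if total > 0 else 1.0 / n  # share of total speed
    return win * s  # must win the race AND have the system work

def equilibrium_safety(n, iterations=200):
    """Approximate symmetric equilibrium via best-response iteration."""
    s = 0.5
    for _ in range(iterations):
        s = max(GRID, key=lambda c: expected_payoff(c, s, n))
    return s

for n in (1, 2, 5, 10):
    print(f"{n} competing team(s) -> equilibrium safety ~ {equilibrium_safety(n):.2f}")
```

Under these assumptions a single unrivalled developer chooses full safety, while the symmetric equilibrium safety level falls toward one half as the number of competing teams grows, illustrating how race dynamics alone can degrade safety even when every team values it.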

Further, given that ASI may only be achievable on a timescale of decades, the global order prevailing when ASI is within reach may be truly multi-polar. For example, this timescale may allow both China and India to far surpass the USA in terms of economic weight, and may allow countries such as Brazil and Russia to rival the influence of the USA. With a diverse mix of world powers with differing national values, attempts at coordination and restraint could easily be undermined by mistrust.

Another facet of the global public good of friendly AI is the aforementioned set of technical challenges, including the value-loading problem and the control problem, which currently receive much attention in discussions of long-term AI safety. In isolation, these technical challenges can be considered a “single best effort” global public good in Barrett’s taxonomy, similar to asteroid defense or geoengineering, where it is often in the interests of some countries to provide the good unilaterally. Therefore, in the absence of race dynamics, a substantial attempt would probably be made to solve these challenges in the government-led scenario. In reality, race dynamics may curtail the attention given to safety, making any additional advance work on this technical front likely to be highly beneficial.

What can be done?

Without aiming to present a robust solution, this section briefly explores some of the available options, informed by insights presented by Barrett regarding mutual restraint global public goods.

A “silver bullet” solution to these institutional challenges could be achieved through the emergence of a world government capable of providing global public goods. Although this may eventually be possible, it seems unlikely within the timeframe in which ASI may be developed. Supporting progression towards this outcome may help to provide the global public goods identified above, but such action is probably insufficient alone.

In relation to mutual restraint public goods generally, Barrett identifies treaties, norms and customary law as institutional tools for provision. If a treaty requiring the necessary restraint could be enforced—Shulman mentions (p. 3) some ways in which one might be—it could be effective. However, this would still rely on countries’ willingness to voluntarily join the agreement.

Norms and custom can also help achieve mutual restraint. In his book, Barrett analyses (location 2506 of Kindle edition) an important example: the taboo on the use of nuclear weapons. Thanks to a strong aversion towards any destructive use of nuclear weapons, no such use has occurred since 1945. This restraint has held despite numerous situations in which using a nuclear weapon would have been militarily advantageous, e.g. when a nuclear power was at war with a non-nuclear state. In the presence of such attitudes, any benefits to a country from using nuclear weaponry must be weighed against the costs: severe loss of international reputation or, in the extreme, the end of the taboo and consequent nuclear war.

The taboo on the use of nuclear weapons was not inevitable, but arose partly because of mutual understanding of the seriousness of the threat of nuclear war. If the potential effects of ASI are similarly well understood by all powers seeking to develop it, it is possible that a similar taboo could be created, perhaps with the help of a carefully designed treaty between those countries with meaningful ASI development capabilities. The purpose of such an arrangement would be not only to mandate the adoption of proper safety measures, but also to ensure that the benefits of ASI would be spread fairly amongst all of humanity.


To achieve positions of power, all political leaders depend heavily on their ability to amass resources and influence. Upon learning of the huge potential of ASI, such individuals may instinctively attempt to capture control of its power. They will also expect their rivals to do the same, and will strategize accordingly. Therefore, in the event of government-led ASI development, mutual restraint by ASI-developing nations would be needed to avoid attempts to capture the vast benefits of ASI for a small subset of humanity, and to avoid the harmful effects of a race to develop ASI.

