List of multipolar research projects

This list currently consists of research projects suggested at the Multipolar AI workshop we held on January 26, 2015.

Relatively concrete projects are marked [concrete]. These are more likely to already include specific questions to answer and feasible methods to answer them with. Other ‘projects’ are more like open questions, or broad directions for inquiry.

Projects are divided into three sections:

  1. Paths to multipolar scenarios
  2. What would happen in a multipolar scenario?
  3. Safety in a multipolar scenario

Order is not otherwise relevant. The list is an inclusive collection of the topics suggested at the workshop, rather than a prioritized selection from a larger list.

Luke Muehlhauser’s list of ‘superintelligence strategy’ research questions contains further suggestions.

List

Paths to multipolar scenarios

1.1 If we assume that AI software is similar to other software, what can we infer from observing contemporary software development? [concrete] For instance, is progress in software performance generally smooth or jumpy? What is the distribution? What are typical degrees of concentration among developers? What are typical modes of competition? How far ahead of its competitors does the leading team tend to be? How often does the lead change? To what extent does a lead in a subsystem produce a lead overall? How much do non-software factors influence who has the lead? How likely is a large player like Google, with its pre-existing infrastructure, to be the frontrunner in a random new area that it decides to compete in?

A large part of this project would be collecting what is known about contemporary software development. This information would provide one view on how AI progress might plausibly unfold. Combined with several other such views, it might inform predictions on issues like abruptness, competition, and involved players.

1.2 If the military were largely responsible for AI development, how would that affect our predictions? [concrete] This is a variation on 1.1, and would similarly involve a large component of reviewing the nature of contemporary military projects.

1.3 If industry were to be largely responsible for AI development, how would that affect our predictions? [concrete] This is a variation on 1.2, and would similarly involve a large component of reviewing the nature of contemporary industrial projects.

1.4 If academia were to be largely responsible for AI development, how would that affect our predictions? [concrete] This is a variation on 1.2, and would similarly involve a large component of reviewing the nature of contemporary academic projects.

1.5 Survey AI experts on the likelihood of AI emerging in the military, business, or academia, and on the likely size of a successful AI project. [concrete]

1.6 Identify considerations that might tip us between multipolar and unipolar scenarios. 

1.7 To what extent will AGI progress be driven by the development of substantially new ideas? 1.1 may bear on this. It could also be approached in other ways, for instance by asking AI researchers what they expect.

1.8 Run prediction markets on near-term questions, such as rates of AI progress, which inform our long-run expectations. [concrete] 

1.9 Collect past records of ‘lumpiness’ of AI success. [concrete] That is, variation in progress over time. This would inform expectations of future lumpiness, and thus the potential for single projects to gain a substantial advantage.
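
As a toy illustration of one possible measure (our choice, not a workshop suggestion): given a series of benchmark scores over time, ask what share of the total improvement is contributed by the single largest jump. A minimal Python sketch, with made-up data:

    import numpy as np

    # Hypothetical benchmark scores over successive years; real data would
    # come from the collected records of past AI progress.
    scores = np.array([10.0, 10.5, 11.0, 14.0, 14.2, 14.5, 20.0, 20.3])

    jumps = np.diff(scores)  # year-over-year improvements

    # One crude lumpiness measure: the share of total improvement
    # contributed by the single largest jump.
    lumpiness = jumps.max() / jumps.sum()
    print(f"largest single jump: {lumpiness:.0%} of all progress")

Other measures, such as the variance of jump sizes or fits to heavy-tailed distributions, could serve the same purpose.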

What would happen in a multipolar scenario?

2.1 To what extent do values prevalent in the near term affect the long run, in a competitive scenario? One could consider the role of values over history so far, or examine the ways in which the role of values may change in the future. One could consider the degree of instrumental convergence between actors (e.g. firms) today, and ask how that affects long-term outcomes. One might also consider whether non-values mental features might become locked in, in a way that produces outcomes similar to those of particular values being influential, e.g. priors or epistemological methods that make a particular religion more likely to take hold.

2.2 What other factors in an initial scenario are likely to have long-lasting effects? For instance, social institutions, standards, and the locations of cities.

2.3 What would AIs value in a multipolar scenario? We can consider a range of factors that might influence AI values:

  1. The nature of the transition to AI
  2. Prevailing institutions
  3. The extent to which AI values become static, as compared to changing human values
  4. The values humans want AIs to have
  5. Competitive dynamics

There is a common view that a multipolar scenario would be better in the long run than a hegemonic ‘unfriendly AI’. This project would inform that comparison.

2.4 What are the prospects for humans who hold capital? In a simple model, humans who own capital might become very wealthy during a transition to AI. On a classical economic picture, this would be a critical way for humans to influence the future. Is this picture plausible? Evaluate the considerations.

  1. What are the implications of capital holders doing no intellectual work themselves?
  2. [concrete] What does the existing literature on principal-agent problems suggest about multipolar AI scenarios?
  3. [concrete] Could humans maintain investments for significant periods of their lives, if during that time aeons of subjective time pass for faster-moving populations? (i.e. is it plausible to expect to hold assets through millions of years of human history?) Investigate this via data on past expropriations; see the sketch below.
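
As a minimal sketch of the arithmetic involved (illustrative only; historical data would supply real hazard rates), suppose assets face a constant annual probability p of expropriation. The chance of holding them through N years is then (1 - p)^N, which decays rapidly over the timescales in question:

    # Toy survival model: constant annual expropriation hazard.
    def survival_probability(annual_hazard: float, years: int) -> float:
        """Probability of holding an asset for `years` years without expropriation."""
        return (1 - annual_hazard) ** years

    # Even a 0.5% annual hazard leaves almost nothing over the long
    # (possibly subjective) periods contemplated above.
    for years in (100, 1_000, 10_000):
        print(years, survival_probability(0.005, years))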

2.5 Identify risks that are distinctive to a multipolar scenario, or that are much more serious in one.

For instance:

  • Evolutionary dynamics bring an outcome that nobody desired initially
  • The AIs are not well integrated into human society, and consequently cause or allow destruction of human society
  • The AIs—integrated or not—have different values, and most of the resources end up being devoted to those values

2.6 Choose a specific multipolar scenario and try to predict its features in detail. [concrete] Ground the predictions in the basic changes we know would occur (e.g. minds could be copied like software) and in our best understanding of social science.

Specific instances:

  1. Brain emulations (Robin Hanson is working on this in an upcoming book)
  2. Brain emulations, without the assumption that software minds are opaque
  3. One can buy maximally efficient software for anything one wants; everything else is the same
  4. AI is much like contemporary software (see 1.1).

2.7 How would multipolar AI change the nature and severity of violent conflict? For instance, conflict between states.

2.8 Investigate the potential for AI-enforced rights. Think about how to enforce property rights in a multipolar scenario, given advanced artificial intelligence to do it with and the opportunity to prepare ahead of time. Can you create programs that just enforce deals between two parties, but do nothing else? If AI with this stable motivational structure were possessed by many parties, how would this change the way that agents interact? How could such a system be designed?
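
To make the question concrete, here is a deliberately minimal Python sketch (our illustration, not a proposed design) of a program whose only behavior is to settle an escrowed deal once every party has performed:

    from dataclasses import dataclass, field

    @dataclass
    class DealEnforcer:
        """Toy two-party escrow: does nothing except settle a deal."""
        terms: dict                      # party -> obligation (description only)
        fulfilled: set = field(default_factory=set)

        def confirm(self, party: str) -> None:
            # The unsolved part: verifying fulfillment in the world without
            # granting the enforcer broader capabilities or goals.
            if party in self.terms:
                self.fulfilled.add(party)

        def settle(self) -> bool:
            # Release the escrow only when every party has performed.
            return self.fulfilled == set(self.terms)

    deal = DealEnforcer(terms={"alice": "deliver code", "bob": "pay 100"})
    deal.confirm("alice")
    deal.confirm("bob")
    print(deal.settle())  # True once both obligations are confirmed

The research problem is whether something with this shape can be made narrow, stable, and trustworthy when the stakes are high and the parties are sophisticated.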

2.9 What is the future of democracy in such a scenario? In a world where resources can rapidly and cheaply be turned into agents, the existing assignment of a vote per person may be destructive and unstable.

2.10 How does the lumpiness of economic outcomes vary as a function of the lumpiness of origins? For instance, if one team creates brain emulations years before others, would that group have and retain extreme influence?

2.11 What externalities can we foresee in computer security? That is, will people invest less (or more) in security than is socially optimal?

2.12 What externalities can we foresee in AI safety generally?

2.13 To what extent can artificial agents make more effective commitments, or more effectively monitor commitments, than humans? How does this change competitive dynamics? What proofs of properties of one’s source code may be available in the future?
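
One existing formal handle on this is Tennenholtz’s notion of ‘program equilibrium’, in which players submit programs that can read one another’s source code. A minimal Python sketch (illustrative; the names are ours): a ‘clique bot’ cooperates exactly when its opponent’s source is identical to its own, so mutual cooperation becomes a checkable commitment.

    import inspect

    def clique_bot(opponent_source: str) -> str:
        """Cooperate iff the opponent is running this exact program."""
        my_source = inspect.getsource(clique_bot)
        return "C" if opponent_source == my_source else "D"

    my_source = inspect.getsource(clique_bot)
    print(clique_bot(my_source))                        # "C": commitment verified
    print(clique_bot("def defect_bot(_): return 'D'"))  # "D" against anything else

More flexible schemes would verify provable properties of the opponent’s code rather than literal equality; how far such proofs can scale is part of the question above.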

Safety in a multipolar scenario

3.1 Assess the applicability of general AI safety insights to multipolar scenarios. [concrete] How useful are capability control methods, such as boxing, stunting, incentives, or tripwires, in a multipolar scenario? How useful are motivation selection methods, such as direct specification, domesticity, indirect normativity, or augmentation?

3.2 Would selective pressures strongly favor the existence of goal-directed agents, in a multipolar scenario where a variety of AI designs are feasible?

3.3 Develop a good model for the existing computer security phenomenon whereby almost nobody builds secure systems, even though they could. [concrete] Model the long-run costs of secure and insecure systems, given distributions of attacker sophistication and the possibility of incremental system improvement. Determine the likely situation in various future scenarios, especially ones where computer security is particularly important.
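
A minimal sketch of the kind of model intended (all distributions and costs here are placeholder assumptions): draw attacker sophistication from a heavy-tailed distribution, count a system as breached when an attacker exceeds its security level, and compare expected long-run costs across investment levels.

    import numpy as np

    rng = np.random.default_rng(0)

    def expected_cost(security_level: float, cost_per_unit_security: float,
                      breach_cost: float, n_attackers: int = 100_000) -> float:
        # Placeholder assumption: lognormally distributed attacker sophistication.
        attackers = rng.lognormal(mean=0.0, sigma=1.0, size=n_attackers)
        breach_prob = np.mean(attackers > security_level)
        return cost_per_unit_security * security_level + breach_cost * breach_prob

    # Compare expected long-run costs across security investment levels.
    for level in (0.5, 1.0, 2.0, 4.0, 8.0):
        print(level, round(expected_cost(level, 1.0, breach_cost=20.0), 2))

The interesting outputs are where privately optimal investment sits relative to the social optimum, under different assumptions about who bears the breach costs.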

3.4 Do paradigms developed for nuclear security and biological weapons apply to AI in a multipolar scenario? [concrete] For instance, could similar control and detection systems be used?

3.5 What do the features of computer security systems tell us about how multipolar agents might compete?

3.6 What policies could help create more secure computer systems? For instance, placing the onus on owners of systems to secure them, rather than on potential attackers to avoid attacking.

3.7 What innovations (either in AI or coinciding technologies) might reduce principal-agent problems?

3.8 Apply ‘reliability theory’ to the problem of manufacturing trustworthy hardware.
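
As one illustration of what reliability theory offers (a standard k-out-of-n calculation, not a specific workshop proposal): if each hardware unit is independently uncorrupted with probability r, and the system trusts a majority vote of n units, the probability of a sound majority is easy to compute.

    from math import comb

    def majority_reliability(r: float, n: int) -> float:
        """Probability that a strict majority of n independent units is sound."""
        k = n // 2 + 1
        return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

    # Redundancy helps quickly, but only while failures are independent.
    for n in (1, 3, 5, 9):
        print(n, round(majority_reliability(0.95, n), 6))

The caveat, central for trustworthy hardware, is that corruption introduced through a shared supply chain is correlated across units, which breaks the independence assumption.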

3.9 How can we transition, in an economically viable way, to hardware that we can trust to be uncorrupted? At present we must assume that hardware is uncorrupted upon purchase, but this may not be sufficient in the long run.

We welcome suggestions for this page or anything on the site via our feedback box, though we will not address all of them.