By Daniel Kokotajlo1, 18 June 2020.
Epistemic status: I started this as an AI Impacts research project, but given that it’s fundamentally a fun speculative brainstorm, it worked better as a blog post.
The default, when reasoning about advanced artificial general intelligence (AGI), is to imagine it appearing in a world that is basically like the present. Yet almost everyone agrees the world will likely be importantly different by the time advanced AGI arrives.
One way to address this problem is to reason in abstract, general ways that are hopefully robust to whatever unforeseen developments lie ahead. Another is to brainstorm particular changes that might happen, and check our reasoning against the resulting list.
This is an attempt to begin the second approach.2 I sought things that might happen that seemed both (a) within the realm of plausibility, and (b) probably strategically relevant to AI safety or AI policy.
I collected potential list entries via brainstorming, asking others for ideas, googling, and reading lists that seemed relevant (e.g. Wikipedia’s list of emerging technologies,3 a list of Ray Kurzweil’s predictions,4 and DARPA’s list of projects5).
I then shortened the list based on my guesses about the plausibility and relevance of these possibilities. I did not put much time into evaluating any particular possibility, so my guesses should not be treated as anything more than guesses. I erred on the side of inclusion, so the entries in this list vary greatly in plausibility and relevance. I made some attempt to categorize these entries and merge similar ones, but this document is fundamentally a brainstorm, not a taxonomy, so keep your expectations low.
I hope to update this post as new ideas find me and old ideas are refined or refuted. I welcome suggestions and criticisms; email me (gmail kokotajlod) or leave a comment.
Interactive “Generate Future” button
Asya Bergal and I made an interactive button to go with the list. The button randomly generates a possible future according to probabilities that you choose. It is very crude, but it has been fun to play with, and perhaps even slightly useful. For example, once I decided that my credences were probably systematically too high because the futures generated with them were too crazy. Another time I used the alternate method (described below) to recursively generate a detailed future trajectory, written up here. I hope to make more trajectories like this in the future, since I think this method is less biased than the usual method for imagining detailed futures.6
To choose probabilities, scroll down to the list below and fill each box with a number representing how likely you think the entry is to occur in a strategically relevant way prior to the advent of advanced AI. (1 means certainly, 0 means certainly not. The boxes are all 0 by default.) Once you are done, scroll back up and click the button.
A major limitation is that the button doesn’t take correlations between possibilities into account. The user needs to do this themselves, e.g. by redoing any generated future that seems silly, or by flipping a coin to choose between two generated possibilities that seem contradictory, or by choosing between them based on what else was generated.
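In essence, the button performs one independent weighted coin flip per entry. Here is a minimal sketch in Python; the entry names and probabilities are hypothetical stand-ins for whatever the reader fills into the boxes:

```python
import random

# Hypothetical entries and credences -- stand-ins for the real list below
# and whatever probabilities the reader enters into the boxes.
credences = {
    "Moore's Law continues": 0.5,
    "Hardware overhang": 0.3,
    "Persuasion tools": 0.4,
    "Deterioration of collective epistemology": 0.45,
}

def generate_future(credences, rng=random.random):
    """Include each possibility independently with its given probability."""
    return [name for name, p in credences.items() if rng() < p]

print(generate_future(credences))
```

The independent flips are exactly where the correlation limitation comes from: neither the sketch nor the button has any way to express that two entries tend to occur together, or that they exclude one another.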
Here is an alternate way to use this button that mostly avoids this limitation:
1. Fill all the boxes with the probability of the entry happening in the next 5 years (instead of happening before advanced AGI, as in the default method).
2. Click the “Generate Future” button and record the results, interpreted as what happens in the next 5 years.
3. Update the probabilities accordingly to represent the upcoming 5-year period, in light of what has happened so far.
4. Repeat steps 2 and 3 until satisfied. (I used a random number generator to determine whether AGI arrived each year.)
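The loop above can be sketched the same way. In this sketch the `update` callable is a hypothetical stand-in for step 3, the human judgment that revises the credences after each 5-year roll:

```python
import random

def generate_trajectory(initial_credences, update, periods=4, seed=None):
    """Generate one 5-year chunk per period, revising credences between rolls.

    `update(credences, happened)` stands in for the human step of adjusting
    probabilities in light of what has already occurred.
    """
    rng = random.Random(seed)
    credences = dict(initial_credences)
    history = []
    for _ in range(periods):
        # One weighted coin flip per entry, as in the basic method.
        happened = sorted(name for name, p in credences.items()
                          if rng.random() < p)
        history.append(happened)
        credences = update(credences, happened)
    return history

# Example update rule: drop any possibility once it has occurred.
drop_occurred = lambda cred, happened: {
    k: v for k, v in cred.items() if k not in happened
}
```

With `drop_occurred`, each possibility can fire at most once across the trajectory; a real run would instead revise the numbers by hand between clicks, which is what lets this method capture correlations that the one-shot method misses.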
If you don’t want to choose probabilities yourself, click “fill with pre-set values” to populate the fields with my non-expert, hasty guesses.7
Letters after list titles indicate that I think the change might be relevant to:
- TML: Timelines—how long it takes for advanced AI to be developed
- TAS: Technical AI safety—how easy it is (on a technical level) to make advanced AI safe, or what sort of technical research needs to be done
- POL: Policy—how easy it is to coordinate relevant actors to mitigate risks from AI, and what policies are relevant to this.
- CHA: Chaos—how chaotic the world is.8
- MIS: Miscellaneous
Each possibility is followed by some explanation or justification where necessary, and a non-exhaustive list of ways the possibility may be relevant to AI outcomes in particular (which is not guaranteed to cover the most important ones). Possibilities are organized into loose categories created after the list was generated.
List of strategically relevant possibilities
Inputs to AI
Narrow research and development tools might speed up technological progress in general or in specific domains. For example, several of the other technologies on this list might be achieved with the help of narrow research and development tools.
Computing hardware might continue to improve at least as fast as Moore’s Law. Computing hardware has historically become steadily cheaper, though it is unclear whether this trend will continue. Some example pathways by which hardware might improve at least moderately include:
- Ordinary scale economies9
- Improved data locality10
- Increased specialization for specific AI applications11
- Optical computing12
- Neuromorphic chips13
- 3D integrated circuits14
- Wafer-scale chips15
- Quantum computing16
- Carbon nanotube field-effect transistors17
Dramatically improved computing hardware may:
- Cause any given AI capability to arrive earlier
- Increase the probability of hardware overhang.
- Affect which kinds of AI are developed first (e.g. those which are more compute-intensive.)
- Affect AI policy, e.g. by changing the relative importance of hardware vs. research talent
Many forecasters think Moore’s Law will be ending soon (as of 2020).18 In the absence of successful new technologies, computing hardware could progress substantially more slowly than Moore’s Law would predict.
Stagnation in computing hardware progress may:
- Cause any given AI capability to arrive later
- Decrease the probability of hardware overhang.
- Affect which kinds of AI are developed first (e.g. those which are less compute-intensive.)
- Influence the relative strategic importance of hardware compared to researchers
- Make energy and raw materials a greater part of the cost of computing
Chip fabrication has become more specialized and consolidated over time, to the point where all of the hardware relevant to AI research depends on production from a handful of locations.19 Perhaps this trend will continue.
One country (or a small number working together) could control or restrict AI research by controlling the production and distribution of necessary hardware.
Advanced additive manufacturing could lead to various materials, products and forms of capital being cheaper and more broadly accessible, as well as to new varieties of them becoming feasible and quicker to develop. For example, sufficiently advanced 3D printing could destabilize the world by allowing almost anyone to secretly produce terror weapons. If nanotechnology advances rapidly, so that nanofactories can be created, the consequences could be dramatic:20
- Greatly reduced cost of most manufactured products
- Greatly faster growth of capital formation
- Lower energy costs
- New kinds of materials, such as stronger, lighter spaceship hulls
- Medical nanorobots
- New kinds of weaponry and other disruptive technologies
There could be a glut of some important resource. By “glut” I don’t necessarily mean that there is too much of it; rather, I mean that its real price falls dramatically. Rapid decreases in the price of important resources have happened before.21 It could happen again via:
- Cheap energy (e.g. fusion power, He-3 extracted from lunar regolith,22 methane hydrate extracted from the seafloor,23 cheap solar energy24)
- A source of abundant cheap raw materials (e.g. asteroid mining,25 undersea mining26)
- Automation of relevant human labor. Where human labor is an important part of the cost of manufacturing, resource extraction, or energy production, automating that labor would be relevantly similar to a price drop, even if technically the price doesn’t fall.27 It might also substantially increase economic growth, resulting in more resources being devoted to strategically relevant things such as AI research.
My impression is that energy, raw materials, and unskilled labor combined are less than half the cost of computing, so a decrease in the price of one of these (and possibly even all three) would probably not have large direct consequences on the price of computing.28 But a resource glut might lead to general economic prosperity, with many subsequent effects on society, and moreover the cost structure of computing may change in the future, creating a situation where a resource glut could dramatically lower the cost of computing.29
Hardware overhang refers to a situation where large quantities of computing hardware can be diverted to running powerful AI systems as soon as the AI software is developed.
If advanced AGI (or some other powerful software) appears during a period of hardware overhang, its capabilities and prominence in the world could grow very quickly.
The opposite of hardware overhang might happen. Researchers may understand how to build advanced AGI at a time when the requisite hardware is not yet available. For example, perhaps the relevant AI research will involve expensive chips custom-built for the particular AI architecture being trained.
A successful AI project during a period of hardware underhang would not be able to instantly copy its AI to many other devices, nor would it be able to iterate quickly and make an architecturally improved version.
Tools may be developed that are dramatically better at predicting some important aspect of the world; for example, technological progress, cultural shifts, or the outcomes of elections, military clashes, or research projects. Such tools could for instance be based on advances in AI or other algorithms, prediction markets, or improved scientific understanding of forecasting (e.g. lessons from the Good Judgment Project).
Such tools might conceivably increase stability by promoting accurate beliefs and reducing surprises, errors, and unnecessary conflicts. However, they could also promote instability, for instance by encouraging conflict when a powerful new tool is available to only a subset of actors. Such tools might also help with forecasting the arrival and effects of advanced AGI, thereby helping guide policy and AI safety work. They might also accelerate timelines, for instance by assisting project management in general and notifying potential investors when advanced AGI is within reach.
Present technology for influencing a person’s beliefs and behavior is crude and weak, relative to what one can imagine. Tools may be developed that more reliably steer a person’s opinion and are not so vulnerable to the victim’s reasoning and possession of evidence. These could involve:
- Advanced understanding of how humans respond to stimuli depending on context, based on massive amounts of data
- Coaching for the user on how to convince the target of something
- Software that interacts directly with other people, e.g. via text or email
Strong persuasion tools could:
- Allow a group in a conflict to quickly recruit spies and then infiltrate an enemy group
- Allow governments to control their populations
- Allow corporations to control their employees
- Lead to a breakdown of collective epistemology30
Powerful theorem provers might help with the kinds of AI alignment research that involve proofs or help solve computational choice problems.
Researchers may develop narrow AI that understands human language well, including concepts such as “moral” and “honest.”
Natural language processing tools could help with many kinds of technology, including AI and various AI safety projects. They could also help enable AI arbitration systems. If researchers develop software that can autocomplete code—much as it currently autocompletes text messages—it could multiply software engineering productivity.
Tools for understanding what a given AI system is thinking, what it wants, and what it is planning would be useful for AI safety.31
There are significant restrictions on which contracts governments are willing and able to enforce; for example, they can’t enforce a contract to try hard to achieve a goal, and won’t enforce a contract to commit a crime. Perhaps some technology (e.g. lie detectors, narrow AI, or blockchain) could significantly expand the space of possible credible commitments for some relevant actors: corporations, decentralized autonomous organizations, crowds of ordinary people using assurance contracts, terrorist cells, rogue AGIs, or even individuals.
This might destabilize the world by making threats of various kinds more credible, for various actors. It might stabilize the world in other ways, e.g. by making it easier for some parties to enforce agreements.
Technology for allowing groups of people to coordinate effectively could improve, potentially avoiding losses from collective choice problems, helping existing large groups (e.g. nations and companies) to make choices in their own interests, and producing new forms of coordinated social behavior (e.g. the 2010s saw the rise of the Facebook group). Dominant assurance contracts,32 improved voting systems,33 AI arbitration systems, lie detectors, and similar things not yet imagined might significantly improve the effectiveness of some groups of people.
If only a few groups use this technology, they might have outsized influence. If most groups do, there could be a general reduction in conflict and increase in good judgment.
Society has mechanisms and processes that allow it to identify new problems, discuss them, and arrive at the truth and/or coordinate a solution. These processes might deteriorate. Some examples of things which might contribute to this:
- Increased investment in online propaganda by more powerful actors, perhaps assisted by chatbots, deepfakes and persuasion tools
- Echo chambers, filter bubbles, and online polarization, perhaps driven in part by recommendation algorithms
- Memetic evolution in general might intensify, increasing the spreadability of ideas/topics at the expense of their truth/importance34
- Trends towards political polarization and radicalization might exist and continue
- Trends towards general institutional dysfunction might exist and continue
This could cause chaos in the world in general, and lead to many hard-to-predict effects. It would likely make the market for influencing the course of AI development less efficient (see section on “Landscape of…” below) and present epistemic hazards for anyone trying to participate effectively.
Technology that wastes time and ruins lives could become more effective. The average person spends 144 minutes per day on social media, and there is a clear upward trend in this metric.35 The average time spent watching TV is even greater.36 Perhaps this time is not wasted but rather serves some important recuperative, educational, or other function. Or perhaps not; perhaps instead the effect of social media on society is like the effect of a new addictive drug — opium, heroin, cocaine, etc. — which causes serious damage until society adapts. Maybe there will be more things like this: extremely addictive video games, or newly invented drugs, or wireheading (directly stimulating the reward circuitry of the brain).37
This could lead to economic and scientific slowdown. It could also concentrate power and influence in fewer people—those who for whatever reason remain relatively unaffected by the various productivity-draining technologies. Depending on how these practices spread, they might affect some communities more or sooner than others.
To my knowledge, existing “study drugs” such as modafinil don’t seem to have substantially sped up the rate of scientific progress in any field. However, new drugs (or other treatments) might be more effective. Moreover, in some fields, researchers typically do their best work at a certain age. Medicine which extends this period of peak mental ability might have a similar effect.
Separately, there may be substantial room for improvement in education due to big data, online classes, and tutor software.38
This could speed up the rate of scientific progress in some fields, among other effects.
Changes in human capabilities or other human traits via genetic interventions39 could affect many areas of life. If the changes were dramatic, they might have a large impact even if only a small fraction of humanity were altered by them.
Such changes might:
- Accelerate research in general
- Differentially accelerate research projects that depend more on “genius” and less on money or experience
- Influence politics and ideology
- Cause social upheaval
- Increase the number of people capable of causing great harm
- Have a huge variety of effects not considered here, given the ubiquitous relevance of human nature to events
- Shift the landscape of effective strategies for influencing AI development (see below)
For a person at a time, there is a landscape of strategies for influencing the world, and in particular for influencing AI development and the effects of advanced AGI. The landscape could change such that the most effective strategies for influencing AI development are:
- More or less reliably helpful (e.g. working for an hour on a major unsolved technical problem might have a low chance of a very high payoff, and so not be very reliable)
- More or less “outside the box” (e.g. being an employee, publishing academic papers, and signing petitions are normal strategies, whereas writing Harry Potter fanfiction to illustrate rationality concepts and inspire teenagers to work on AI safety is not)40
- Easier or harder to find, such that marginal returns to investment in strategy research change
Here is a non-exhaustive list of reasons to think these features might change systematically over time:
- As more people devote more effort to achieving some goal, one might expect that effective strategies become common, and it becomes harder to find novel strategies that perform better than common strategies. As advanced AI becomes closer, one might expect more effort to flow into influencing the situation. Currently some ‘markets’ are more efficient than others; in some the orthodox strategies are best or close to the best, whereas in others clever and careful reasoning can find strategies vastly better than what most people do. How efficient a market is depends on how many people are genuinely trying to compete in it, and how accurate their beliefs are. For example, the stock market and the market for political influence are fairly efficient, because many highly-knowledgeable actors are competing. As more people take interest, the ‘market’ for influencing the course of AI may become more efficient. (This would also decrease the marginal returns to investment in strategy research, by making orthodox strategies closer to optimal.) If there is a deterioration of collective epistemology (see above), the market might instead become less efficient.
- Currently there are some tasks at which the most skilled people are not much better than the average person (e.g. manual labor, voting) and others in which the distribution of effectiveness is heavy-tailed, such that a large fraction of the total influence comes from a small fraction of individuals (e.g. theoretical math, donating to politicians). The types of activity that are most useful for influencing the course of AI development may change over time in this regard, which in turn might affect the strategy landscape in all three ways described above.
- Transformative technologies can lead to new opportunities and windfalls for people who recognize them early. As more people take interest, opportunities for easy success disappear. Perhaps there will be a burst of new technologies prior to advanced AGI, creating opportunities for unorthodox or risky strategies to be very successful.
A shift in the landscape of effective strategies for influencing the course of AI is relevant to anyone who wants to have an effective strategy for influencing the course of AI.41 If it is part of a more general shift in the landscape of effective strategies for other goals — e.g. winning wars, making money, influencing politics — the world could be significantly disrupted in ways that may be hard to predict.
This might slow down research or precipitate other relevant events, such as war.
There is some evidence that scientific progress in general might be slowing down. For example, the millennia-long trend of decreasing economic doubling time seems to have stopped around 1960.42 Meanwhile, scientific progress has arguably come from increased investment in research. Since research investment has been growing faster than the economy, it might eventually saturate and grow only as fast as the economy.43
This might slow down AI research, making the events on this list (but not the technologies) more likely to happen before advanced AGI.
Here are some examples of potential global catastrophes:
- Climate change tail risks, e.g. feedback loop of melting permafrost releasing methane44
- Major nuclear exchange
- Global pandemic
- Volcanic eruption that leads to a 10% reduction in global agricultural production45
- Exceptionally bad solar storm knocks out world electrical grid46
- Geoengineering project backfires or has major negative side-effects47
A global catastrophe might be expected to cause conflict and a slowing of projects such as research, though it could also conceivably increase attention on projects that are useful for dealing with the problem. It seems likely to have other hard-to-predict effects.
Attitudes toward AGI
The level of attention paid to AGI by the public, governments, and other relevant actors might increase (e.g. due to an impressive demonstration or a bad accident) or decrease (e.g. due to other issues drawing more attention, or evidence that AI is less dangerous or imminent).
Changes in the level of attention could affect the amount of work on AI and AI safety. More attention could also lead to changes in public opinion such as panic or an AI rights movement.
If the level of attention increases but AGI does not arrive soon thereafter, there might be a subsequent period of disillusionment.
There could be a rush for AGI, for instance if major nations begin megaprojects to build it. Or there could be a rush away from AGI, for instance if it comes to be seen as immoral or dangerous like human cloning or nuclear rocketry.
Increased investment in AGI might make advanced AGI happen sooner, with less hardware overhang and potentially less proportional investment in safety. Decreased investment might have the opposite effects.
The communities that build and regulate AI could undergo a substantial ideological shift. Historically, entire nations have been swept by radical ideologies within about a decade or so, e.g. Communism, Fascism, the Cultural Revolution, and the First Great Awakening.48 Major ideological shifts within communities smaller than nations (or within nations, but on specific topics) presumably happen more often. There might even appear powerful social movements explicitly focused on AI, for instance in opposition to it or attempting to secure legal rights and moral status for AI agents.49 Finally, there could be a general rise in extremist movements, for instance due to a symbiotic feedback effect hypothesized by some,50 which might have strategically relevant implications even if mainstream opinions do not change.
Changes in public opinion on AI might change the speed of AI research, change who is doing it, change which types of AI are developed or used, and limit or alter discussion. For example, attempts to limit an AI system’s effects on the world by containing it might be seen as inhumane, as might adversarial and population-based training methods. Broader ideological change or a rise in extremisms might increase the probability of a massive crisis, revolution, civil war, or world war.
Events could occur that provide compelling evidence, to at least a relevant minority of people, that advanced AGI is near.
This could increase the amount of technical AI safety work and AI policy work being done, to the extent that people are sufficiently well-informed and good at forecasting. It could also enable people already doing such work to more efficiently focus their efforts on the true scenario.
A convincing real-world example of AI alignment failure could occur.
This could motivate more effort into mitigating AI risk and perhaps also provide useful evidence about some kinds of risks and how to avoid them.
Precursors to AGI
An accurate way to scan human brains at a very high resolution could be developed.
Combined with a good low-level understanding of the brain (see below) and sufficient computational resources, this might enable brain emulations, a form of AGI in which the AGI is similar, mentally, to some original human. This would change the kind of technical AI safety work that would be relevant, as well as introducing new AI policy questions. It would also likely make AGI timelines easier to predict. It might influence takeoff speeds.
To my knowledge, as of April 2020, humanity does not understand how neurons work well enough to accurately simulate the behavior of a C. elegans worm, even though all connections between its neurons have been mapped.51 Ongoing progress in modeling individual neurons could change this, and perhaps ultimately allow accurate simulation of entire human brains.
Combined with brain scanning (see above) and sufficient computational resources, this may enable brain emulations, a form of AGI in which the AI system is similar, mentally, to some original human. This would change the kind of AI safety work that would be relevant, as well as introducing new AI policy questions. It would also likely make the time until AGI is developed more predictable. It might influence takeoff speeds. Even if brain scanning is not possible, a good low-level understanding of the brain might speed AI development, especially of systems that are more similar to human brains.
Better, safer, and cheaper methods to control computers directly with our brains may be developed. At least one project is explicitly working towards this goal.52
Strong brain-machine interfaces might:
- Accelerate research, including on AI and AI safety53
- Accelerate in vitro brain technology
- Accelerate mind-reading, lie detection, and persuasion tools
- Deteriorate collective epistemology (e.g. by contributing to wireheading or short attention spans)
- Improve collective epistemology (e.g. by improving communication abilities)
- Increase inequality in influence among people
Neural tissue can be grown in a dish (or in an animal and transplanted) and connected to computers, sensors, and even actuators.54 If this tissue can be trained to perform important tasks, and the technology develops enough, it might function as a sort of artificial intelligence. Its components would not be faster than humans, but it might be cheaper or more intelligent. Meanwhile, this technology might also allow fresh neural tissue to be grafted onto existing humans, potentially serving as a cognitive enhancer.55
This might change the sorts of systems AI safety efforts should focus on. It might also automate much human labor, inspire changes in public opinion about AI research (e.g. promoting concern about the rights of AI systems), and have other effects which are hard to predict.
Researchers may develop something which is a true artificial general intelligence—able to learn and perform competently all the tasks humans do—but just isn’t very good at them, at least, not as good as a skilled human.
If weak AGI is faster or cheaper than humans, it might still replace humans in many jobs, potentially speeding economic or technological progress. Separately, weak AGI might provide testing opportunities for technical AI safety research. It might also change public opinion about AI, for instance inspiring a “robot rights” movement, or an anti-AI movement.
Researchers may develop something which is a true artificial general intelligence, and moreover is qualitatively more intelligent than any human, but is vastly more expensive, so that there is some substantial period of time before cheap AGI is developed.
An expensive AGI might contribute to endeavors that are sufficiently valuable, such as some science and technology, and so may have a large effect on society. It might also prompt increased effort on AI or AI safety, or inspire public thought about AI that produces changes in public opinion and thus policy, e.g. regarding the rights of machines. It might also allow opportunities for trialing AI safety plans prior to very widespread use.
Researchers may develop something which is a true artificial general intelligence, and moreover is qualitatively as intelligent as the smartest humans, but takes a lot longer to train and learn than today’s AI systems.
Slow AGI might be easier to understand and control than other kinds of AGI, because it would train and learn more slowly, giving humans more time to react and understand it. It might produce changes in public opinion about AI.
If the pace of automation substantially increases prior to advanced AGI, there could be social upheaval and also dramatic economic growth. This might affect investment in AI.
Shifts in the balance of power
Edward Snowden defected from the NSA and made public a vast trove of information. Perhaps something similar could happen to a leading tech company or AI project.
In a world where much AI progress is hoarded, such an event could accelerate timelines and make the political situation more multipolar and chaotic.
Espionage techniques might become more effective relative to counterespionage techniques. In particular:
- Quantum computing could break current encryption protocols.56
- Automated vulnerability detection57 could turn out to have an advantage over automated cyberdefense systems, at least in the years leading up to advanced AGI.
More successful espionage techniques might make it impossible for any AI project to maintain a lead over other projects for any substantial period of time. Other disruptions may become more likely, such as hacking into nuclear launch facilities, or large scale cyberwarfare.
Counterespionage techniques might become more effective relative to espionage techniques than they are now. In particular:
- Post-quantum encryption might be secure against attack by quantum computers.58
- Automated cyberdefense systems could turn out to have an advantage over automated vulnerability detection. Ben Garfinkel and Allan Dafoe59 give reason to think the balance will ultimately shift to favor defense.
Stronger counterespionage techniques might make it easier for an AI project to maintain a technological lead over the rest of the world. Cyber wars and other disruptive events could become less likely.
More extensive or more sophisticated surveillance could allow strong and selective policing of technological development. It would also have other social effects, such as making totalitarianism easier and making terrorism harder.
Autonomous weapons could shift the balance of power between nations, or shift the offense-defense balances resulting in more or fewer wars or terrorist attacks, or help to make totalitarian governments more stable. As a potentially early, visible and controversial use of AI, they may also especially influence public opinion on AI more broadly, e.g. prompting anti-AI sentiment.
Currently, both governments and corporations are strategically relevant actors in determining the course of AI development. Perhaps governments will become more important, e.g. by nationalizing and merging AI companies. Or perhaps governments will become less important, e.g. by not paying attention to AI issues at all, or by becoming less powerful and competent generally. Perhaps some third kind of actor (such as a religious movement, an insurgency, an organized crime network, or an exceptional individual) will become more important, e.g. due to persuasion tools, countermeasures to surveillance, or new weapons of guerilla warfare.60
This influences AI policy by affecting which actors are relevant to how AI is developed and deployed.
Perhaps some strategically important location (e.g. tech hub, seat of government, or chip fab) will be suddenly destroyed. Here is a non-exhaustive list of ways this could happen:
- Terrorist attack with weapon of mass destruction
- Major earthquake, flood, tsunami, etc. (e.g. this research claims a 2% chance of magnitude 8.0 or greater earthquake in San Francisco by 2044.)61
If it happens, it might be strategically disruptive, causing e.g. the dissolution and diaspora of the front-runner AI project, or making it more likely that some government makes a radical move of some sort.
For instance, a new major national hub of AI research could arise, rivalling the USA and China in research output. Or either the USA or China could cease to be relevant to AI research.
This might make coordinating AI policy more difficult. It might make a rush for AGI more or less likely.
This might cause short-term, militarily relevant AI capabilities research to be prioritized over AI safety and foundational research. It could also make global coordination on AI policy difficult.
This might be very dangerous for people living in those countries. It might change who the strategically relevant actors are for shaping AI development. It might result in increased instability, or cause a new social movement or ideological shift.
This would make coordinating AI policy easier in some ways (e.g. there would be no need for multiple governing bodies to coordinate their policy at the highest level), but perhaps harder in others (e.g. the overall regulatory system might be more complicated).
- Many thanks to Katja Grace, Asya Bergal, Rick Korzekwa, Charlie Giattino, Carl Shulman, Max Daniel, Tobias Baumann, and Greg Lewis for comments on drafts.
- Both approaches seem valuable to me. Most of the time I do the first approach, so it seemed there might be low-hanging fruit to pick by trying the second.
- “List of Emerging Technologies.” In Wikipedia, March 2, 2020. https://en.wikipedia.org/w/index.php?title=List_of_emerging_technologies&oldid=943560081
- “Ray Kurzweil’s Mind-Boggling Predictions for the Next 25 Years.” Accessed March 24, 2020. https://singularityhub.com/2015/01/26/ray-kurzweils-mind-boggling-predictions-for-the-next-25-years/
- “Our Research.” Accessed April 26, 2020. https://www.darpa.mil/our-research?ppl=viewall.
- In my experience at least, the usual method is to think about some important feature of the scenario — maybe it’s a slow takeoff, maybe it involves an AI takeover, maybe it involves population-based training — and gradually add details that seem to fit. This often means I fill in the past of my story to justify the present, whereas in the real world the future flows from the past. Moreover, I worry that the brain is not good at simulating randomness; if I go down a list of possibilities and mark my credences that each will be realized, and then ask myself to imagine a random future, I doubt very much my ability to randomly sample from the list according to my credences. So I made an app to do it for me.
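The sampling step this footnote describes can be sketched in a few lines of Python. The possibilities and credences below are illustrative placeholders, not the post’s actual entries or numbers:

```python
import random

def generate_future(credences, seed=None):
    """Independently sample each possibility with its assigned credence,
    returning the subset that is realized in this generated future."""
    rng = random.Random(seed)
    return [event for event, p in credences.items() if rng.random() < p]

# Placeholder credences, for illustration only.
credences = {
    "major leak from a leading AI project": 0.2,
    "post-quantum cryptography widely deployed": 0.5,
    "chip fab destroyed by a natural disaster": 0.05,
}
future = generate_future(credences, seed=0)
```

Sampling each entry independently is of course crude, since real possibilities are correlated with one another; that crudeness is part of why the button’s outputs can look too crazy.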
- This document is a brainstorm of possibilities potentially worth thinking about, not an attempt to quantify how likely they are. I spent only a few seconds per question estimating these probabilities, and the resolution criteria are ambiguous anyway, so don’t take them seriously. They are there so that people who don’t have time to make their own estimates can at least have fun clicking the button.
- This is relevant for several reasons. For example, it might make flexible long-term strategies more valuable relative to strategies that depend on specific predictions. It also might make wars, new ideologies, and shifts in the balance of power more likely.
- Because computing hardware has been improving, it has been designed not to last more than a few years, and the fixed costs of designing and manufacturing are amortized over supply runs of only a few years. Therefore, even if technology stops improving, costs will continue to improve for a while as hardware is redesigned to last longer and fixed costs are amortized over many years. Credit to Carl Shulman for pointing this out.
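The amortization argument in this footnote can be made concrete with a toy calculation (all numbers here are hypothetical, chosen only to illustrate the mechanism):

```python
def cost_per_chip_year(fixed_cost, units, unit_cost, lifetime_years):
    """Total cost attributable to one chip, spread over its service life.
    fixed_cost: one-time design and fab-setup cost for the supply run;
    units: chips produced in the run; unit_cost: marginal cost per chip."""
    return (fixed_cost / units + unit_cost) / lifetime_years

# Hypothetical: identical technology, but chips engineered to last longer
# and fixed costs amortized over a longer supply run.
short_run = cost_per_chip_year(1e9, units=1e6, unit_cost=500, lifetime_years=3)
long_run = cost_per_chip_year(1e9, units=5e6, unit_cost=500, lifetime_years=10)
# short_run is $500 per chip-year; long_run is $70 per chip-year.
```

Even with the underlying technology frozen, stretching the lifetime and the supply run cuts the effective cost per chip-year several-fold in this sketch.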
- By putting memory closer to computing hardware, time and energy can be saved. Cerebras describes a chip under development: “The WSE has 18 GB of on-chip memory, all accessible within a single clock cycle, and provides 9 PB/s memory bandwidth. This is 3000x more capacity and 10,000x greater bandwidth than the leading competitor. More cores with more local memory enables fast, flexible computation, at lower latency and with less energy.” Cerebras. “Product.” Accessed April 26, 2020. https://www.cerebras.net/product/.
- “When it comes to the compute-intensive field of AI, hardware vendors are reviving the performance gains we enjoyed at the height of Moore’s Law. The gains come from a new generation of specialized chips for AI applications like deep learning.” IEEE Spectrum: Technology, Engineering, and Science News. “Specialized AI Chips Hold Both Promise and Peril for Developers – IEEE Spectrum.” Accessed April 26, 2020. https://spectrum.ieee.org/tech-talk/semiconductors/processors/specialized-ai-chips-hold-both-promise-and-peril-for-developers. “Because of their unique features, AI chips are tens or even thousands of times faster and more efficient than CPUs for training and inference of AI algorithms. State-of-the-art AI chips are also dramatically more cost-effective than state-of-the-art CPUs as a result of their greater efficiency for AI algorithms.” Center for Security and Emerging Technology. “AI Chips: What They Are and Why They Matter.” Accessed June 8, 2020. https://cset.georgetown.edu/research/ai-chips-what-they-are-and-why-they-matter/.
- “Fathom Computing is developing high-performance machine learning computers, built to run both training and inference workflows for very-large-scale artificial neural networks. Data movements, not math or logic operations, are the bottleneck in computing. Fathom’s all digital electro-optical architecture focuses innovation precisely in this area, enabling orders of magnitude more data bandwidth at the chip, rack, and warehouse-scale. This high-bandwidth architecture brings ML performance improvements significantly beyond what is possible in electronics-only systems. One of our long-term goals is to build the hardware to train neural networks with the same number of parameters as the human brain has synapses (>100 trillion).” Fathom Computing. “Fathom Computing.” Accessed April 16, 2020. https://www.fathomcomputing.com/.
- “Members of the neuromorphics research community soon discovered that they could take a deep-learning network and run it on their new style of hardware. And they could take advantage of the technology’s power efficiency: The TrueNorth chip, which is the size of a postage stamp and holds a million “neurons,” is designed to use a tiny fraction of the power of a standard processor.” “Neuromorphic Chips Are Destined for Deep Learning—or Obscurity.” IEEE Spectrum: Technology, Engineering, and Science News. Accessed April 26, 2020. https://spectrum.ieee.org/semiconductors/design/neuromorphic-chips-are-destined-for-deep-learningor-obscurity.
- “While traditional CMOS scaling processes improves signal propagation speed, scaling from current manufacturing and chip-design technologies is becoming more difficult and costly, in part because of power-density constraints, and in part because interconnects do not become faster while transistors do. 3D ICs address the scaling challenge by stacking 2D dies and connecting them in the 3rd dimension. This promises to speed up communication between layered chips, compared to planar layout.” “Three-Dimensional Integrated Circuit.” In Wikipedia, April 14, 2020. https://en.wikipedia.org/w/index.php?title=Three-dimensional_integrated_circuit&oldid=950952374.
- According to Cerebras Systems, “A single CS-1 delivers orders of magnitude greater deep learning performance than a graphics processor. As such, far fewer CS-1 systems are needed to achieve the same effective compute as large-scale cluster deployments of traditional machines.” Cerebras. “Product.” Accessed April 26, 2020. https://www.cerebras.net/product/.
- “We think quantum computing will help us develop the innovations of tomorrow, including AI.” From Google Research page, accessed online April 16, 2020. https://research.google/teams/applied-science/quantum/
- “According to Moore’s law, the dimensions of individual devices in an integrated circuit have been decreased by a factor of approximately two every two years. This scaling down of devices has been the driving force in technological advances since the late 20th century. However, as noted by ITRS 2009 edition, further scaling down has faced serious limits related to fabrication technology and device performances as the critical dimension shrunk down to sub-22 nm range. The limits involve electron tunneling through short channels and thin insulator films, the associated leakage currents, passive power dissipation, short channel effects, and variations in device structure and doping. These limits can be overcome to some extent and facilitate further scaling down of device dimensions by modifying the channel material in the traditional bulk MOSFET structure with a single carbon nanotube or an array of carbon nanotubes.” “Carbon Nanotube Field-Effect Transistor.” In Wikipedia, March 30, 2020. https://en.wikipedia.org/w/index.php?title=Carbon_nanotube_field-effect_transistor&oldid=948242190
- “Most forecasters, including Gordon Moore, expect Moore’s law will end by around 2025.” “Moore’s Law.” In Wikipedia, April 16, 2020. https://en.wikipedia.org/w/index.php?title=Moore%27s_law&oldid=951366634.
- “Fixed costs increasing faster than variable costs has created higher barriers of entry, squeezing fab profits and shrinking the number of chipmakers operating fabs at the leading nodes.” “AI Chips: What They Are and Why They Matter.” Accessed April 17, 2020. https://cset.georgetown.edu/ai-chips/. “The fact that the complex supply chains needed to produce leading-edge AI chips are concentrated in the United States and a small number of allied democracies provides an opportunity for export control policies.” Center for Security and Emerging Technology. “AI Chips: What They Are and Why They Matter.” Accessed June 8, 2020. https://cset.georgetown.edu/research/ai-chips-what-they-are-and-why-they-matter/.
- “Nanofactories.” Accessed April 16, 2020. https://foresight.org/nano/nanofactories.html.
- Many times in the past, resources which were once expensive became cheap, sometimes very quickly. For example, aluminum went from $12/lb to $0.78/lb in 13 years. “And the price of aluminum began to drop, from $12 a pound in 1880, to $4.86 in 1888, to 78 cents in 1893 to, by the 1930s, just 20 cents a pound.” Laskow, Sarah. “Aluminum Was Once One of the Most Expensive Metals in the World.” The Atlantic, November 7, 2014. https://www.theatlantic.com/technology/archive/2014/11/aluminum-was-once-one-of-the-most-expensive-metals-in-the-world/382447/.
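The aluminum figures above imply a compound annual price decline of roughly 19% over that 13-year stretch, which is easy to check:

```python
# Price fell from $12/lb (1880) to $0.78/lb (1893), a span of 13 years.
annual_factor = (0.78 / 12) ** (1 / 13)
annual_decline = 1 - annual_factor  # roughly 0.19, i.e. about 19% per year
```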
- “Cosmochemist and geochemist Ouyang Ziyuan from the Chinese Academy of Sciences who is now in charge of the Chinese Lunar Exploration Program has already stated on many occasions that one of the main goals of the program would be the mining of helium-3, from which operation “each year, three space shuttle missions could bring enough fuel for all human beings across the world.”” “Helium-3.” In Wikipedia, April 18, 2020. https://en.wikipedia.org/w/index.php?title=Helium-3&oldid=951699417.
- “This source of carbon, the most abundant in the world, may be one of the last new forms of fossil fuel to be extracted on a commercial scale.” Henriques, Martha. “Why ‘Flammable Ice’ Could Be the Future of Energy.” Accessed April 26, 2020. https://www.bbc.com/future/article/20181119-why-flammable-ice-could-be-the-future-of-energy.
- Roberts, David. “The Falling Costs of US Solar Power, in 7 Charts.” Vox, August 24, 2016. https://www.vox.com/2016/8/24/12620920/us-solar-power-costs-falling.
- “The “gold at the end of the rainbow,” he added, is the extraction and exploitation of platinum-group metals, which are rare here on Earth but are extremely important in the manufacture of electronics and other high-tech goods.” Wall, Mike. “Asteroid Mining May Be a Reality by 2025.” Space.com, August 11, 2015. Accessed April 26, 2020. https://www.space.com/30213-asteroid-mining-planetary-resources-2025.html.
- Hylton, Wil S. “History’s Largest Mining Operation Is About to Begin.” The Atlantic. Accessed April 26, 2020. https://www.theatlantic.com/magazine/archive/2020/01/20000-feet-under-the-sea/603040/.
- “These models also suggest that wholesale use of machine intelligence could increase economic growth rates by an order of magnitude or more. These increased growth rates are due to our assumptions that computer technology improves faster than general technology, and that the labor population of machine intelligences could grow as fast as desired to meet labor demand.” Hanson, Robin (2001), “Economic growth given machine intelligence,” Technical Report, University of California, Berkeley. https://www.economicsofai.com/economic-growth
- An unofficial analysis claims that computing hardware makes up 57% of the cost of running a data center, with the remainder being energy, cooling, networking equipment, and other infrastructure. This was based on using the hardware for three years, and the infrastructure for ten years. “Overall Data Center Costs – Perspectives.” Accessed April 30, 2020. https://perspectives.mvdirona.com/2010/09/overall-data-center-costs/. I do not know what the cost breakdown is for computing hardware but I imagine it involves lots of skilled labor to design the chips and chip fabs.
- For example, energy efficiency improvements may fail to keep up with other improvements, or progress in computing hardware technology might stagnate. In either scenario, skilled labor and capital could become less important relative to energy, unskilled labor, and materials. Thus a resource glut could potentially have a large effect on computing costs.
- For instance, if people’s opinions become divorced from reality at a large scale, it might become hard for public discourse and institutions to support good policy making. See below.
- Even weak versions of these tools might be useful. Olah et al describe a weak yet promising version of this sort of tool. Olah, Chris, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. “Zoom In: An Introduction to Circuits.” Distill 5, no. 3 (March 10, 2020): e00024.001. https://doi.org/10.23915/distill.00024.001
- An explanation and real-life example can be found in “A Test of Dominant Assurance Contracts.” Marginal REVOLUTION, August 29, 2013. https://marginalrevolution.com/marginalrevolution/2013/08/a-test-of-dominant-assurance-contracts.html.
- For a comparison of different voting systems, see Wikipedia. “Comparison of Electoral Systems.” In Wikipedia, March 31, 2020. https://en.wikipedia.org/w/index.php?title=Comparison_of_electoral_systems&oldid=948343913.
- See: Extremal Goodhart, from “Goodhart Taxonomy – LessWrong 2.0.” Accessed April 17, 2020. https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy.
- BroadbandSearch.net. “Average Time Spent Daily on Social Media (Latest 2020 Data).” Accessed April 28, 2020. https://cdn.broadbandsearch.net/blog/average-daily-time-on-social-media.
- BroadbandSearch.net. “Average Time Spent Daily on Social Media (Latest 2020 Data).” Accessed April 28, 2020. https://cdn.broadbandsearch.net/blog/average-daily-time-on-social-media.
- “Wirehead (Science Fiction).” In Wikipedia, April 9, 2020. https://en.wikipedia.org/w/index.php?title=Wirehead_(science_fiction)&oldid=949912810.
- For example, there is at least some evidence that two standard deviations of performance improvement results from good one-on-one instruction, and at least one experiment suggests that good tutor software can quickly train people to be better than the average expert! “The Digital Tutor students outperformed traditionally taught students and field experts in solving IT problems on the final assessment. They did not merely meet the goal of being as good after 16 weeks as experts in the field, but they actually outperformed them.” “DARPA Digital Tutor: Four Months to Total Technical Expertise? – LessWrong 2.0.” Accessed July 9, 2020. https://www.lesswrong.com/posts/vbWBJGWyWyKyoxLBe/darpa-digital-tutor-four-months-to-total-technical-expertise. “Bloom’s 2 Sigma Problem.” In Wikipedia, June 25, 2020. https://en.wikipedia.org/w/index.php?title=Bloom%27s_2_sigma_problem&oldid=964499386.
- Genetic engineering and large-mammal cloning have already been demonstrated, so may plausibly be applied to humans at some point. For a scholarly argument for the feasibility of iterated embryo selection, see Bostrom, Nick and Shulman, Carl. Embryo Selection for Cognitive Enhancement: Curiosity or Game Changer? Global Policy, Vol. 5, Iss. 1 (2014): 85–92 http://www.nickbostrom.com/papers/embryo.pdf and Branwen, Gwern. “Embryo Selection For Intelligence,” January 22, 2016. https://www.gwern.net/Embryo-selection.
- Harry Potter and the Methods of Rationality, written by Eliezer Yudkowsky in the early 2010s, anecdotally brought many people into the AI risk community. As of 2020 it is the most popular Harry Potter fanfiction of all time according to fanfiction.net. “Harry Potter FanFiction Archive | FanFiction.” Accessed April 27, 2020. https://www.fanfiction.net/book/Harry-Potter/?&srt=3&r=103.
- This talk and this post discuss some dimensions of this landscape and explain why it is important to think about. “Some Cruxes on Impactful Alternatives to AI Policy Work – EA Forum.” Accessed April 16, 2020. https://forum.effectivealtruism.org/posts/DW4FyzRTfBfNDWm6J/some-cruxes-on-impactful-alternatives-to-ai-policy-work. Effective Altruism. “Prospecting for Gold.” Accessed April 16, 2020. https://www.effectivealtruism.org/articles/prospecting-for-gold-owen-cotton-barratt/.
- See the first graph in Alexander, Scott. “1960: The Year The Singularity Was Cancelled.” Slate Star Codex, April 23, 2019. https://slatestarcodex.com/2019/04/22/1960-the-year-the-singularity-was-cancelled/.
- “We present a wide range of evidence from various industries, products, and firms showing that research effort is rising substantially while research productivity is declining sharply.” Bloom, Nicholas, Charles I Jones, John Van Reenen, and Michael Webb. “Are Ideas Getting Harder to Find?” Working Paper. Working Paper Series. National Bureau of Economic Research, September 2017. https://doi.org/10.3386/w23782.
- See here for an overview of such risks. GiveWell. “Extreme Risks from Climate Change.” Accessed April 17, 2020. https://www.givewell.org/shallow/climate-change/extreme-risks.
- See here for some arguments and sources about the likelihood of 10% reductions in global agricultural production, e.g. from volcanos. “Should We Be Spending No Less on Alternate Foods than AI Now? – EA Forum.” Accessed April 17, 2020. https://forum.effectivealtruism.org/posts/7XRjb3Tx8j36AcBpb/should-we-be-spending-no-less-on-alternate-foods-than-ai-now.
- From Wikipedia: “In June 2013, a joint venture from researchers at Lloyd’s of London and Atmospheric and Environmental Research (AER) in the United States used data from the Carrington Event to estimate the current cost of a similar event to the U.S. alone at $0.6–2.6 trillion.” “Solar Storm of 1859.” In Wikipedia, April 16, 2020. https://en.wikipedia.org/w/index.php?title=Solar_storm_of_1859&oldid=951346360.
- From Wikipedia: “There may be unintended climatic consequences of solar radiation management, such as significant changes to the hydrological cycle that might not be predicted by the models used to plan them. Such effects may be cumulative or chaotic in nature. Ozone depletion is a risk of techniques involving sulfur delivery into the stratosphere.” “Solar Radiation Management.” In Wikipedia, April 16, 2020. https://en.wikipedia.org/w/index.php?title=Solar_radiation_management&oldid=951302439.
- “Cultural Revolution.” In Wikipedia, April 23, 2020. https://en.wikipedia.org/w/index.php?title=Cultural_Revolution&oldid=952680930. “First Great Awakening.” In Wikipedia, April 8, 2020. https://en.wikipedia.org/w/index.php?title=First_Great_Awakening&oldid=949753276.
- For an example of someone trying to secure legal rights for artificial intelligences, see this document in which the US patent office denies someone’s request to have the patent granted to their AI, stating that “Lastly, petitioner has outlined numerous policy considerations to support the position that a patent application can name a machine as inventor … These policy considerations notwithstanding, they do not overcome the plain language of the patent laws as passed by the Congress and interpreted by the courts.” From the first PDF linked in this news article: Willingham, AJ. “Artificial Intelligence Can’t Technically Invent Things, Says Patent Office.” CNN. Accessed April 30, 2020. https://www.cnn.com/2020/04/30/us/artificial-intelligence-inventing-patent-office-trnd/index.html.
- Polar opposite political movements can arguably sometimes be symbiotes; each one gains power and followers by claiming to be a proportional and necessary response to the dire threat posed by the other. For some arguments to this effect, see here and here. Illing, Sean. “Reciprocal Rage: Why Islamist Extremists and the Far Right Need Each Other.” Vox, December 19, 2017. https://www.vox.com/world/2017/12/19/16764046/islam-terrorism-far-right-extremism-isis. “The Toxoplasma Of Rage | Slate Star Codex.” Accessed April 17, 2020. https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/.
- Cook, Steven J., Travis A. Jarrell, Christopher A. Brittin, Yi Wang, Adam E. Bloniarz, Maksim A. Yakovlev, Ken C. Q. Nguyen, et al. “Whole-Animal Connectomes of Both Caenorhabditis Elegans Sexes.” Nature 571, no. 7763 (July 2019): 63–71. https://doi.org/10.1038/s41586-019-1352-7.
- “Neuralink.” Accessed April 17, 2020. https://neuralink.com/.
- Elon Musk, founder of Neuralink, claims that eventually their interfaces will multiply the user’s economic productivity tenfold. See timestamp 46:40 of this video: “Joe Rogan Experience #1470 – Elon Musk.” YouTube. Accessed May 8, 2020. https://www.youtube.com/watch?v=RcYjXbSJBN8.
- For example, in 2004 some rat brain cells in a dish were trained to partially fly a simulated fighter jet. Biever, Celeste. “Brain Cells in a Dish Fly Fighter Plane.” New Scientist. Accessed May 6, 2020. https://www.newscientist.com/article/dn6573-brain-cells-in-a-dish-fly-fighter-plane/.
- “Since our abilities are the product of neuronal development and activity, augmenting brain function with IVB is not beyond the realms of possibility, especially if used at the same time as treating a brain disease or injury to render the person “better than well.”” “Brain in a Vat: 5 Challenges for the In Vitro Brain | Practical Ethics.” Accessed May 6, 2020. http://blog.practicalethics.ox.ac.uk/2015/08/brain-in-a-vat-5-challenges-for-the-in-vitro-brain/.
- “The cryptography underpinning modern internet communications and e-commerce could someday succumb to a quantum attack.” Denning, Dorothy. “Is Quantum Computing a Cybersecurity Threat?” The Conversation. Accessed April 16, 2020. http://theconversation.com/is-quantum-computing-a-cybersecurity-threat-107411.
- “The need for automated, scalable, machine-speed vulnerability detection and patching is large and growing fast as more and more systems—from household appliances to major military platforms—get connected to and become dependent upon the internet. … To help overcome these challenges, DARPA launched the Cyber Grand Challenge, a competition to create automatic defensive systems capable of reasoning about flaws, formulating patches and deploying them on a network in real time.” Fraze, Dustin. “Cyber Grand Challenge (CGC) (Archived).” Defense Advanced Research Projects Agency. Accessed April 30, 2020. https://www.darpa.mil/program/cyber-grand-challenge.
- “Post-quantum cryptography (sometimes referred to as quantum-proof, quantum-safe or quantum-resistant) refers to cryptographic algorithms (usually public-key algorithms) that are thought to be secure against an attack by a quantum computer.” “Post-Quantum Cryptography.” In Wikipedia, March 29, 2020. https://en.wikipedia.org/w/index.php?title=Post-quantum_cryptography&oldid=948037928.
- Garfinkel, Ben, and Allan Dafoe. “How Does the Offense-Defense Balance Scale?” Journal of Strategic Studies 42, no. 6 (September 19, 2019): 736–63. https://www.tandfonline.com/doi/full/10.1080/01402390.2019.1631810
- For example, 3D printers might make it easier for underground organizations to secretly procure weapons and equipment, and cheap AI-guided drones might make terror attacks and assassinations both more effective and harder to trace.
- See Figure 4 in the UCERF3 report, available here. “USGS Open-File Report 2013–1165: Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3)—The Time-Independent Model.” Accessed April 17, 2020. https://pubs.usgs.gov/of/2013/1165/.