The tyranny of the god scenario

By Michael Wulfsohn, 6 April 2018

I was convinced. An intelligence explosion would result in the sudden arrival of a superintelligent machine. Its abilities would far exceed those of humans in ways we can’t imagine or counter. It would likely arrive within a few decades, and would wield complete power over humanity. Our species’ most important challenge would be to solve the value alignment problem. The impending singularity would lead either to our salvation, our extinction, or worse.

Intellectually, I knew that it was not certain that this “god scenario” would come to pass. If asked, I would even have assigned it a relatively low probability, certainly much less than 50%. Nevertheless, it dominated my thinking. Other possibilities felt much less real: that humans might achieve direct control over their superintelligent invention, that reaching human-level intelligence might take hundreds of years, that there might be a slow progression from human-level intelligence to superintelligence, and many others. I paid lip service to these alternatives, but I didn’t want them to be valid, and I didn’t think about them much. My mind would always drift back to the god scenario.

I don’t know how likely the god scenario really is. With currently available information, nobody can know for sure. But whether or not it’s likely, the idea definitely has powerful intuitive appeal. For example, it led me to change my beliefs about the world more quickly and radically than I ever had before. I doubt that I’m the only one.

Why did I find the god scenario so captivating? I like science fiction, and the idea of an intelligence explosion certainly has science-fictional appeal. That made the scenario easy to relate to, and perhaps easier to think through. But the transition from science fiction to reality in my mind wasn’t immediate. I remember repeatedly thinking “nahhh, surely this can’t be right!” My mind was trying to put the scenario in its science-fictional place. But each time the thought occurred, I remember being surprised at the scenario’s plausibility, and at my inability to rule out any of its key components.

I also tend to place high value on intelligence itself. I don’t mean that I’ve assessed various qualities against some measure of value and concluded that intelligence ranks highly. I mean it in a personal-values sense. For example, the level of intelligence I have is a big factor in my level of self-esteem. This is probably more emotional than logical.

This emotional effect was an important part of the god scenario’s impact on me. At first, it terrified me. I felt like my whole view of the world had been upset, and almost everything people do day to day seemed to no longer matter. I would see a funny video of a dog barking at its reflection, and instead of enjoying it, I’d notice the grim analogy of the intellectual powerlessness humanity might one day experience. But apart from the fear, I was also tremendously excited by the thought of something so sublimely intelligent. Having not previously thought much about the limits of intelligence itself, the concept was both consuming and eye-opening, and the possibilities were inspiring. The notion of a superintelligent being appealed to me similarly to the way Superman’s abilities have enthralled audiences.

Other factors also contributed. One was highly engaging prose: I first learned about superintelligence by reading this excellent waitbutwhy.com blog post. Another was my professional background; I was accustomed to worrying about improbable but significant threats, and to arguments based on expected value. The concern of prominent people such as Bill Gates, Elon Musk, and Stephen Hawking also helped. And since I get a lot of satisfaction from working on whatever I think is humanity’s most important problem, I really couldn’t ignore the idea.

But there were also countervailing effects in my mind, leading away from the god scenario. The strongest was the outlandishness of it all. I had always been dismissive of ideas that seem like doomsday theories, so I wasn’t automatically comfortable giving the god scenario credence in my mind. I was hesitant to introduce the idea to people who I thought might draw negative conclusions about my judgement.

I still believe the god scenario is a real possibility. We should assiduously prepare for it and proceed with caution. However, I believe I have gradually escaped its intuitive capture. I can now consider other possibilities without my mind constantly drifting back to the god scenario.

I believe a major factor behind my shift in mindset was my research interest in analyzing AI safety as a global public good. Such research led me to think concretely about other scenarios, which increased their prominence in my mind. Relatedly, I began to think I might be better equipped to contribute to outcomes in those other scenarios. This led me to want to believe that the other scenarios were more likely, a desire compounded by the danger of the god scenario. My personal desires may or may not have influenced my objective opinion of the probabilities. But they definitely helped counteract the emotional and intuitive appeal of the god scenario.

Exposure to mainstream views on the subject also moderated my thinking. In one instance, reading an Economist special report on artificial intelligence helped counteract the effects I’ve described, even though I actually disagreed with most of its arguments against the importance of existential risk from AI.

Exposure to work done by the Effective Altruism community on different future possibilities also helped, as did my discussions with Katja Grace, Robin Hanson, and others during my work for AI Impacts. The exposure and discussions increased my knowledge and the sophistication of my views, so that I could better imagine the range of AI scenarios. Similarly, listening to Elon Musk’s views on the importance of developing brain-computer interfaces, and seeing OpenAI pursue goals that may not squarely confront the god scenario, also helped. They gave me a choice: decide without further ado that Elon Musk and OpenAI are misguided, or think more carefully about other potential scenarios.

Relevance to the cause of AI safety

I believe the AI safety community probably includes many people who experience the god scenario’s strong intuitive appeal, or have previously experienced it. This tendency may be having some effects on the field.

Starting with the obvious, such a systemic effect could cause pervasive errors in decision-making. However, I want to make clear that I have no basis to conclude that it has done so among the Effective Altruism community. For me, the influence of the god scenario was subtle, and driven by its emotional facet. I could override it when asked for a rational assessment of probabilities. But its influence was pervasive, affecting the thoughts to which my mind would gravitate, the topics on which I would tend to generate ideas, and what I would feel like doing with my time. It shaped my thought processes when I wasn’t looking.

Preoccupation with the god scenario may also entail a public relations risk. Since the god scenario’s strong appeal is not universal, it may polarize public opinion, as it can seem bizarre or off-putting to many. At worst, a rift may develop between the AI safety community and the rest of society. This matters. For example, policymakers throughout the world have the ability to promote the cause of AI safety through funding and regulation. Their involvement is probably an essential component of efforts to prevent an AI arms race through international coordination. But it is easier for them to support a cause that resonates with the public.

Conversely, the enthusiasm created by the intuitive appeal of the god scenario can be quite positive, since it attracts attention to related issues in AI safety and existential risk. For example, others’ enthusiasm and work in these areas led me to get involved.

I hope readers will share their own experience of the intuitive appeal of the god scenario, or lack thereof, in the comments. A few more data points and insights might help to shed light.



10 Comments

  1. I think I focused almost exclusively on the god scenario for quite a while. This was probably because I was introduced to the AI alignment problem through reading Eliezer Yudkowsky’s writings online.

    It still seems almost inevitable that if we continue improving AI systems then godlike agents will eventually emerge, but (as I think you’re saying here?) in some possible scenarios AI would lead to an existential catastrophe before this point. This could be because of, say, robot weapons negating nuclear deterrence, or society falling apart in the face of such rapid change, or countries going to war due to an AI arms race.

    Am I understanding you correctly? Which non-godlike scenarios are you most worried about?

    • Well, my point is more about the effects on my thinking than about my assessment of the risks. The main thing that’s changed is that I can more cleanly think about other possibilities, without them all leading me to think of a superintelligent AI singleton.

      A couple of examples of the sort of scenarios I think about more now:
      – There could easily be a take-off slow enough that no singleton would form. This starts to seem likely when you imagine the extent of problems that usually get in the way of the development of new technology. Think of the difference between the outside and inside views.
      – There might be little more to intelligence than what is implemented in the human brain, with the main improvements available from a “scaling up” (improved memory, thought speed, etc). If so, there would still be huge capability improvements, but no concepts that are permanently beyond human reach. The human-animal comparison would be less appropriate.

      To answer your question, the scenario that I worry about most, and that seems most likely to me, is that humans maintain control over AI, with multiple world powers gaining advanced AI capabilities at effectively the same time. Then our existing institutions and power structures will largely remain, along with all their problems.

  2. “There might be little more to intelligence than what is implemented in the human brain, with the main improvements available from a “scaling up” (improved memory, thought speed, etc). If so, there would still be huge capability improvements, but no concepts that are permanently beyond human reach. The human-animal comparison would be less appropriate.”

    This is a very interesting possibility – do you know of any good writing on this?

    My first impression is that the prior on this should be low, considering the extreme difference in capability between an average homo sapiens and a genius homo sapiens, who use nearly identical hardware. If it were true that we’re already near the apex of capability, then we’d also have to believe that if we were to (hypothetically of course!) selectively breed humans for a million generations, constantly optimising intelligence, we’d get something not much more impressive than a smart person today.

    “To answer your question, the scenario that I worry about most, and that seems most likely to me, is that humans maintain control over AI, with multiple world powers gaining advanced AI capabilities at effectively the same time. Then our existing institutions and power structures will largely remain, along with all their problems.”

    This is also a super interesting possibility, and I’d be very grateful if you’re able to point me towards any discussions others have had about this!

    Again, my first impression (without having any expertise) is that most powerful people/institutions today would probably do pretty good things given significantly higher capabilities. Would you mind sharing an example of a particular negative quality of the current status quo which you think might be worsened by AI?

    • I’m sorry, I accidentally commented twice 🙁

      My reply below is basically the same but slightly better; please read that one.

  3. “There might be little more to intelligence than what is implemented in the human brain, with the main improvements available from a “scaling up” (improved memory, thought speed, etc). If so, there would still be huge capability improvements, but no concepts that are permanently beyond human reach. The human-animal comparison would be less appropriate.”

    This is a very interesting possibility – do you know of any good writing on this?

    My first impression is that the prior on this should be low, considering the extreme difference in capability between an average homo sapiens and a genius homo sapiens, who use nearly identical hardware. If it were true that we’re already near the apex of capability, then we’d also have to believe that if we were to (leaving aside the obvious moral problems!) selectively breed humans for a million generations, constantly optimising intelligence, we’d get something not much more impressive than a smart person today.

    “To answer your question, the scenario that I worry about most, and that seems most likely to me, is that humans maintain control over AI, with multiple world powers gaining advanced AI capabilities at effectively the same time. Then our existing institutions and power structures will largely remain, along with all their problems.”

    This is also a super interesting possibility, and I’d be very grateful if you’re able to point me towards any discussions others have had about this!

    Would you mind sharing an example of a particular negative quality of the current status quo which you think might be worsened by AI? If macro-historical trends are generally extremely positive and AI improves gradually like other technologies, what makes artificial intelligence more likely to disrupt these positive trends compared to other new technologies?

    • Thanks for your questions! I’m not aware of others’ discussions re the first one. But let me clarify what I’m saying. I’m not ruling out significant improvements in intelligence. I’d agree that the human mind is far from being the most powerful mind possible.

      Rather, I’m referring to the idea that a higher quality of intelligence might open up a new “level” of concepts that a human mind can never understand, regardless of how much time we spend. (This is similar to how animals can never understand mathematics.) Call that “situation A”, and call situation B the one where, given enough time, the human brain can understand anything. At this point, we can’t know whether we are in A or B. My point in my earlier comment is that, after first learning about superintelligence, my thoughts all lived in situation A. Now, situation B gets a lot more airtime in my mind.

      Re your second question, I’m not saying that the status quo would be worsened by AI. I’m saying that, without an AI singleton taking over the world, the key problems now might remain as the key problems.

      I’ll give as an example a problem I think about a lot: we have no powerful decision-making structures at the global level. When you think about it, it’s a bit funny that our species has such a fragmented approach to governance. About 200 independent decision-makers (countries) exist, and no-one is able to really enforce rules worldwide. I believe a lot of gains would be available from robust international coordination. War could be more reliably prevented. Global public goods, like AI safety research and carbon emission abatement, would be more reliably provided. Global safety nets might be introduced, eliminating the worst of global poverty. We’d be better equipped to react to currently unknown threats. In fact, I’d be willing to argue that this is one of the greatest challenges humanity faces: settling our differences enough to be able to coordinate on species-level problems.

      In that vein, there’s a gradual trend towards better international coordination. For example, 100 years ago there was no United Nations or European Union. But I believe progress isn’t inevitable, so outside of the god scenario, progress on international coordination is really important.

      It’s also important for the progress to be in the right direction. A benevolent, democratic world government might be a good eventual result. An authoritarian world government would be horrific.

      Re your question about others’ discussions on this, I don’t know of any discussion of the specific point I made. But on the more general point, you might be interested in the reading list from Allan Dafoe’s course on the global politics of AI http://www.allandafoe.com/aiclass (though I don’t claim to have read everything there). For analysis of the incentives to supply global public goods, and how international law and treaties can help, I’d recommend Scott Barrett’s “Why Cooperate?” book: https://global.oup.com/academic/product/why-cooperate-9780199211890?cc=gb&lang=en&. There is also a book chapter by Bryan Caplan on totalitarianism as an x-risk: https://global.oup.com/academic/product/global-catastrophic-risks-9780199606504?cc=gb&lang=en&.

  4. Genetic analysis of intelligence suggests a maximum IQ for humans in the 500 range,
    i.e. assuming one human had all the discovered genes responsible for high IQ.
    This may still be significantly less than any “god-like” AI.
    My take is that humans will become the AI: if you had the choice of immortality, with superintelligence, athletic ability, the ability to live in space, etc., why wouldn’t you choose it?
    I know there will always be some who prefer “organic” humans, but my bet is that the “synthetic” human will be the norm, and this will encompass god-like AI.
    Having said that, it is becoming increasingly difficult to look into the future as technology progresses faster, so a Matrix scenario is also possible. I like to be optimistic and hope that this never eventuates, and that AI and technological progress in manufacturing allow us to transcend our organic deficiencies.

  5. There seems to be a bit of a common-knowledge problem here; most people who are actually working on AI-Xrisk at this point don’t believe in an extreme version of the god scenario; those that do are mostly associated with MIRI, but not everyone at MIRI believes in it either. Yet many people who are on the outskirts of the AI-Xrisk community and especially people from the EA/rationalist community don’t realize that.

    I don’t think people have made enough effort to outline credible alternative scenarios. Furthermore, most credible alternative scenarios also seem to engender a large Xrisk, and to have similar technical and societal solution paths.

    An example of an extreme alternative development scenario that I don’t think has been well analyzed would be a massively multi-polar situation… basically what Elon seems to want, where AI is freely shared. I think there’s fairly widespread agreement that if alignment is not solved, or comes with a significant performance hit, then this situation is dangerous. But what if alignment is actually solved? If everyone suddenly had aligned superintelligent assistants, would our social institutions hold up? Or would we be able to construct new ones fast enough?

  6. Your basic premise makes a lot of sense to me: there is an emotional/psychological component that distorts rational thinking, and I also think that, generally, it is almost impossible to counter. The proof is in the pudding: the “rationalist community” appears to have a disproportionate number of folks who seem to be just out of their minds. That is, Yudkowsky/rationalist-community types 1) think that their brand of rationality is superior to all or most others, 2) think that they are the only ones who can determine rational public policy, 3) think that what they are doing is super important (!!!!!) when, in fact, they are irrelevant (they are generally just a manifestation of the same boring (but important) cultural phenomenon where people in a “disconnected” society want to feel that they are part of a meaningful community), and 4) very few of them are involved in the cutting-edge research being done by outfits like Google Brain / DeepMind / OpenAI. Anyway, I could go on. As for myself, yes, I found Yudkowsky, Bostrom and company really fascinating at first. Then, with further exploration, I came across John Searle, who was great in disabusing me of any belief in conscious machines taking over anytime in the near future. I became even more convinced that Searle has the superior argument, seeing that almost all those from the “rationalist community” fail to respond to his arguments with anything other than emotionally based argumentation (meant to preserve their own pathetic self-importance and to cast Searle as some idiot). Anyway, anyone who doesn’t take Searle seriously had best have a really good argument as to why not. Searle is correct, that’s the bottom line: we may very well build a machine that can pass the Turing test, but such a machine will still be a zombie. This does not disprove doomsday scenarios, but it puts things in perspective. Alignment issues may well be super important, but I am skeptical as to the theoretical work done by MIRI and the pressing importance attributed to it.

    p.s. As an aside, your yearning for a one-world government is also reflective of a psychological need to have perfect order, to achieve utopian ideals. Such regimes are generally a real disaster, and anyone who has even the most basic nationalist ideals and who is exposed to the workings of the UN and EU would grimace at any form of world government that is rooted in these severely flawed institutions. A rational person can see clearly that the EU is an authoritarian institution that is working to limit democratic rights, and that the UN is often controlled in a gangster-like fashion by a whole host of backward countries. And kindly avoid the irrational propensity of intellectuals to state that all cultures and religions are equally valid. If you live in a Western country, count your blessings and don’t let authoritarians try to dismantle the good you have with bad forms of immigration (i.e. the importation of the most backward types and those with the most poisonous of ideologies).

