By Katja Grace, 4 September 2016
In a classic ‘AI takes over the world’ scenario, one of the first things an emerging superintelligence wants to do is steal most of the world’s computing hardware and repurpose it to run the AI’s own software. This step takes one from ‘super-proficient hacker’ levels of smart to ‘my brain is one of the main things happening on Planet Earth’ levels of smart. There is quite a bit of hardware in the world, so this step in the takeover plan is kind of terrifying.
How terrifying exactly depends on A) how much computing hardware there is in the world at the time, and B) how efficiently hardware can be turned into AI at the time. We have some tentative answers to A)—probably at least a couple of hundred exaFLOPS now, growing somewhere between not at all and very fast. However B) is harder, in the absence of any idea how to get the efficiency of hardware-to-general-AI conversions above zero. Nonetheless, I think there are a couple of interesting reference points we can look at.
The one I’ll discuss now is the efficiency of the human brain. What if we could use about as much hardware as the human brain represents (in some sense) to run AI about as smart as a human brain? This is an interesting point to look at for a few reasons. We know brains are somewhere in the range of efficiency with which hardware can produce intelligent behavior, because they are an instance of that. And looking at one data point in the desired range is better than none. Also, for some means of building artificial intelligence—most obviously, brain emulation—we might expect to get something roughly as efficient as a human brain, give or take some.
So, we can think of the human brain as representing a pile of (fairly application-specific) computing hardware. And we can estimate its computing power, in terms of FLOPS. People have done this (very inaccurately: their estimates are twelve orders of magnitude apart, but running through this calculation with such an uncertain number still seems informative). According to different sources, a human brain seems to be worth between about 3 x 10^13 FLOPS and 10^25 FLOPS. The median estimate is 10^18 FLOPS.
So we can ask, if you turned all of the world’s two hundred exaFLOPS or more of computing hardware into brains, how many brains would you get?
This graph shows the answers over time, for a variety of assumptions about brain FLOPS, world FLOPS, and global computing hardware growth rates. Probably the most plausible line is the lower green one (brains median, world hardware high).
The basic answer is, if you turned all of the world’s computing hardware into AI as efficient as human brains right now, you would get less than a hundred million extra brains, or 1% of the population of the world. Probably a whole lot less. For the median estimates of brain computing power, you would get about a hundred or a thousand extra brains’ worth of AI.
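The arithmetic behind this is just division, but it is easy to lose track of the exponents, so here is a minimal sketch. All figures are the rough estimates quoted above (a couple of hundred exaFLOPS of world hardware; brain estimates from 3 x 10^13 to 10^25 FLOPS), not measurements:

```python
# Rough brain-equivalents calculation, using the post's estimates.
# WORLD_FLOPS: "a couple of hundred exaFLOPS" (1 exaFLOPS = 1e18 FLOPS).
WORLD_FLOPS = 2e20

# Low, median, and high estimates of the human brain's computing power.
BRAIN_FLOPS = {
    "low":    3e13,   # cheapest-brain estimate: the most brain-equivalents
    "median": 1e18,
    "high":   1e25,   # priciest-brain estimate: far less than one brain
}

for label, flops in BRAIN_FLOPS.items():
    brains = WORLD_FLOPS / flops
    print(f"{label:>6} brain estimate: ~{brains:,.0f} brain-equivalents")
```

Under the median estimate this gives a couple of hundred brain-equivalents, and even the low (most generous) estimate gives only a few million, well under a hundred million, which is the calculation behind the claims above.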
That means, for instance, that if we figured out how to make uploads right now, and they were roughly as efficient as the median brains estimate, and then someone acquired all of the hardware in the world for them, they would only have about as many additional minds as a project willing to spend a few hundred million dollars per year on wages, e.g. Facebook. Which would really be something. But not something overwhelmingly outscaling everything else going on in the world.
If you trust projections of hardware growth fifty years into the future (which you shouldn’t, but suppose you did), the most plausible lines (median brain estimate, low growth) don’t even reach the world population line by then, though they would certainly make for an incredible AI research project, if that was the direction to which the additional mental effort was directed.
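To see why the median-brain lines take so long to reach the world population line, here is an illustrative sketch. The starting point (roughly 200 brain-equivalents, from the median estimate) comes from the figures above; the growth rates and the 7.4 billion population figure are my assumptions for illustration, not numbers from the post:

```python
import math

# Assumed figures (the brain and hardware numbers follow the post's
# estimates; the growth rates and population are illustrative).
WORLD_FLOPS_2016 = 2e20       # "a couple of hundred exaFLOPS"
MEDIAN_BRAIN_FLOPS = 1e18
WORLD_POPULATION = 7.4e9      # roughly the 2016 figure

def years_to_population_parity(annual_growth):
    """Years until brain-equivalents reach the world population,
    assuming hardware grows at a constant annual rate."""
    start_brains = WORLD_FLOPS_2016 / MEDIAN_BRAIN_FLOPS  # ~200
    needed_factor = WORLD_POPULATION / start_brains       # ~3.7e7
    return math.log(needed_factor) / math.log(1 + annual_growth)

for rate in (0.3, 1.0):  # 30% per year vs. doubling every year (assumed)
    print(f"{rate:.0%} growth: ~{years_to_population_parity(rate):.0f} years")
```

Even at 30% annual growth, closing a gap of tens of millions takes more than fifty years, which is consistent with the median-brain, low-growth lines falling short of the world population line on that horizon.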
Remember, all of this is very sketchy and probably inaccurate and you should maybe think about it a bit more if your decisions depend on it much (or ask us nicely to). But I strongly favor sketchy projections over none.
Image: Planetary Brain, Adrian Kenyon, some rights reserved.