A new approach to predicting brain-computer parity

By Katja Grace, 7 May 2015

How large does a computer need to be before it is ‘as powerful’ as the human brain?

This is a difficult question, which people have answered before, with much uncertainty.

We have a new answer! (Longer description here; summary in the rest of this post.) This answer is based on ‘traversed edges per second’ (TEPS), a metric which emphasizes communication within a computer, instead of computing operations (like FLOPS). That is, TEPS measures how fast information can move around.

Communication can be a substantial bottleneck for big computers, slowing them down in spite of their powerful computing capacity. It seems plausible that communication is also a bottleneck for the brain, which is both a big computer, and one that spends lots of resources on communication. This is one reason to measure the brain in terms of TEPS: if communication is a bottleneck, then it is especially important to know when computers will achieve similar performance to the brain there, not just on easier aspects of being a successful computer.

The TEPS benchmark asks the computer to simulate a graph, and then to search through it. The question is how many edges in the graph the computer can follow per second. We can’t ask the brain to run the TEPS benchmark, but the brain is already a graph of neurons, and we can measure edges being traversed in it (action potentials communicating between neurons). So we can count how many edges are traversed in the brain per second, and compare this to existing computer hardware.
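The measurement at the heart of the benchmark can be sketched as a breadth-first search that counts edges followed and divides by elapsed time. This is only a toy illustration (the real benchmark, Graph500, uses enormous generated graphs and stricter rules; the graph here is an invented example):

```python
import time
from collections import deque

def measure_teps(adjacency, source):
    """Breadth-first search over an adjacency-list graph,
    counting edges traversed per second (a toy analogue of
    the TEPS benchmark's measurement)."""
    visited = {source}
    queue = deque([source])
    edges_traversed = 0
    start = time.perf_counter()
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            edges_traversed += 1  # every edge followed counts, visited or not
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    elapsed = time.perf_counter() - start
    return edges_traversed / elapsed

# A small ring graph: each node connects to its two neighbors
graph = {i: [(i - 1) % 1000, (i + 1) % 1000] for i in range(1000)}
rate = measure_teps(graph, 0)  # edges per second; depends on the hardware
```

The point of the metric is that the result depends on how fast the machine can chase pointers through memory, not on how fast it can do arithmetic.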

The brain seems to have around 1.8–3.2 × 10^14 synapses. We’d like to know how often these synapses convey spikes, but this has been too hard to discover. So we use neuron firing frequency as a proxy. We previously calculated that each neuron spikes around 0.1–2 times per second. Together with the number of synapses, this suggests the brain performs at around 0.18–6.4 × 10^14 TEPS. This assumes many things, and is hazy in many ways, some of which are detailed in our longer page on the topic. The estimate could be tightened on many fronts with more work.
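The arithmetic behind that estimate is just synapse count times firing rate, using firing rate as a proxy for how often each synapse carries a spike (figures taken from the ranges quoted above):

```python
# Figures from the post (both are hedged ranges, not precise values)
synapses_low, synapses_high = 1.8e14, 3.2e14  # synapse count
rate_low, rate_high = 0.1, 2.0                # spikes per neuron per second (proxy)

teps_low = synapses_low * rate_low     # ~1.8e13 TEPS
teps_high = synapses_high * rate_high  # ~6.4e14 TEPS
print(f"Brain estimate: {teps_low:.2e} - {teps_high:.2e} TEPS")
```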

The Sequoia supercomputer is currently the best computer in the world on the TEPS benchmark. Its record is 2.3 × 10^13 TEPS. So the human brain seems to be somewhere between as powerful and thirty times as powerful as the best supercomputer, in terms of TEPS.
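The "between as powerful and thirty times as powerful" range falls out of dividing the brain estimate by Sequoia's record:

```python
brain_low, brain_high = 1.8e13, 6.4e14  # brain TEPS estimate from above
sequoia = 2.3e13                        # Sequoia's TEPS record

print(brain_low / sequoia)   # roughly 0.8: about as powerful
print(brain_high / sequoia)  # roughly 28: about thirty times as powerful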

At current prices for TEPS, the brain’s performance should cost roughly $4,700–$170,000/hour. Our previous fairly wild guess was that TEPS prices should improve by a factor of ten every four years. If this is true, it should take seven to fourteen years before a computer costing $100/hour can match the human brain in TEPS. At that point, if having human-level hardware in terms of TEPS were enough for human-level AI, human-level AI should be replacing well-paid humans.
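Under the guessed improvement rate of tenfold per four years, the time to reach the $100/hour threshold is four years per order of magnitude of price gap:

```python
import math

cost_low, cost_high = 4_700, 170_000  # $/hour for brain-scale TEPS today
target = 100                          # $/hour threshold from the post
years_per_10x = 4                     # guessed rate of price improvement

def years_to_target(cost):
    """Years until `cost` falls to `target`, at 10x improvement per 4 years."""
    return years_per_10x * math.log10(cost / target)

print(years_to_target(cost_low))   # roughly 7 years
print(years_to_target(cost_high))  # roughly 13 years
```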

Moravec’s and Kurzweil’s estimates of computation in the brain suggest that human-equivalent hardware should have cost $100/hour some time in the past (Moravec) or should reach that price in about four years (Kurzweil), so our TEPS estimate is actually late relative to those. However, they are all pretty close together. Sandberg and Bostrom’s estimates of the hardware required to emulate a brain span from around then to around thirty years later, though note that emulating a brain is different from replicating its function. Altogether, ‘human-level’ hardware seems likely to be upon us soon, if it isn’t already. The estimate from TEPS points to the near future even more strongly.

(Featured image by MartinGrandjean)

We welcome suggestions for this page or anything on the site via our feedback box, though we will not address all of them.


  1. Nice article! It might also be interesting to think about the role of myelin in the brain, which can be thought of as providing long-range connections between disparate regions of the brain, and has been found to play roles in brain plasticity and development.

    Aside from providing long-term connections, its main three functions are a) increasing signal speed, b) increasing signal fidelity, and c) nutrient support to axons (which is probably not relevant for your considerations). (It was previously thought to save energy, but more recent estimates have gone against that.)

    I’d be very curious to hear your thoughts on this.

    • Thanks!

      I thought a bit about myelin as a thing that might be measured which is used largely for communication, so an indicator of how many of the brain’s resources are spent on communication. I couldn’t find good figures for how much there is, so I gave up on that line of reasoning for now. Is this the kind of thing you mean? Do you have other potential inferences in mind?

  2. How do you compare human vs supercomputer TEPS when the problem scale is so different?
    e.g. problem scale for Sequoia = 41
    What is the comparable problem scale for a human?

