Five credible estimates by others of brain computing capacity in terms of FLOPS span the range from 3 x 10^{13} to 10^{25} FLOPS. The median estimate is 10^{18} FLOPS.

Our own estimate of brain communication capacity in terms of TEPS suggests a figure of roughly 1-30 * 10^{16} FLOPS, with high uncertainty.

## Details

### Notes

We have not investigated the brain’s performance in FLOPS in detail. This page summarizes others’ estimates that we are aware of, as well as the implications of our investigation into brain performance in TEPS. Text on this page was heavily borrowed from a blog post, Preliminary prices for human-level hardware.

### Estimates

#### Sandberg and Bostrom 2008

Sandberg and Bostrom project the processing power required to emulate a human brain at different levels of detail.^{1} For the three levels that their workshop participants considered most plausible, the estimates are 10^{18}, 10^{22}, and 10^{25} FLOPS. At 2015 prices, computing at these rates would cost roughly $100K/hour, $1bn/hour, and $1T/hour respectively.

#### Moravec 2009

Moravec (2009) estimates that the brain performs around 100 million MIPS.^{2} MIPS are not directly comparable to MFLOPS (millions of FLOPS), and have deficiencies as a measure, but the empirical relationship in computers is roughly MFLOPS = 2.3 x MIPS^{0.89}, according to Sandberg and Bostrom.^{3} This suggests Moravec’s estimate corresponds to around 3.0 x 10^{13} FLOPS. Given that an order of magnitude of computing power per dollar corresponds to about four years of hardware progress, treating MFLOPS and MIPS as roughly comparable is plenty of precision for our purposes.
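The conversion above can be checked directly. This is a minimal sketch applying Sandberg and Bostrom's empirical fit to Moravec's figure:

```python
# Convert Moravec's estimate of ~100 million MIPS to FLOPS using the
# empirical fit MFLOPS = 2.3 * MIPS^0.89 reported by Sandberg and Bostrom.
mips = 1e8  # Moravec (2009): ~100 million MIPS

mflops = 2.3 * mips**0.89   # empirical relationship across benchmarked computers
flops = mflops * 1e6        # MFLOPS -> FLOPS

print(f"{flops:.1e} FLOPS")  # ~3.0e13 FLOPS
```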

#### Kurzweil 2005

In The Singularity is Near, Kurzweil claims that 10^{16} calculations per second would be sufficient for functional emulation of a human brain, which appears to be roughly equivalent to 10^{16} FLOPS.^{4}

#### Conversion from brain performance in TEPS

Among a small number of computers we compared,^{5} FLOPS and TEPS seem to vary proportionally, with a median ratio of around 1.9 GTEPS/TFLOPS. We also estimate that the human brain performs around 0.18 – 6.4 * 10^{14} TEPS. Together these give us 0.9 – 33.7 * 10^{16} FLOPS.^{6}
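The unit conversion is mechanical but easy to slip on; a short sketch reproducing it, using the median ratio of 1.9 GTEPS per TFLOPS from the supercomputers compared:

```python
# Reproduce the TEPS -> FLOPS conversion, using the median ratio of
# 1.9 GTEPS per TFLOPS observed across the measured supercomputers.
teps_low, teps_high = 0.18e14, 6.4e14   # estimated brain performance in TEPS
gteps_per_tflops = 1.9                  # median ratio among measured machines

def teps_to_flops(teps):
    gteps = teps / 1e9                  # TEPS -> GTEPS
    tflops = gteps / gteps_per_tflops   # apply the empirical ratio
    return tflops * 1e12                # TFLOPS -> FLOPS

low, high = teps_to_flops(teps_low), teps_to_flops(teps_high)
print(f"{low:.1e} - {high:.1e} FLOPS")  # roughly 0.9e16 - 33.7e16 FLOPS
```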

- From Sandberg and Bostrom, table 9: Processing demands (emulation only, human brain) (p80):
  - spiking neural network: 10^{18} FLOPS (earliest year at $1 million: commodity computer estimate 2042, supercomputer estimate 2019)
  - electrophysiology: 10^{22} FLOPS (earliest year at $1 million: commodity computer estimate 2068, supercomputer estimate 2033)
  - metabolome: 10^{25} FLOPS (earliest year at $1 million: commodity computer estimate 2087, supercomputer estimate 2044)
- “it would take, in round numbers, 100 million MIPS (100 trillion instructions per second) to emulate the 1,500-gram human brain.” – Moravec, 2009
- See p89. It actually says FLOPS, not MFLOPS, but this appears to be an error, given the graph.
- “If we use the figure of 10^{16} cps that I believe will be sufficient for functional emulation of human intelligence…” – Kurzweil, The Singularity is Near, p121
- “The [eight] supercomputers measured here consistently achieve around 1-2 GTEPS per scaled TFLOPS (see Figure 3). The median ratio is 1.9 GTEPS/TFLOPS, the mean is 1.7 GTEPS/TFLOP, and the variance 0.14 GTEPS/TFLOP.” See *Relationship between FLOPS and TEPS* for more details.
- 0.18 – 6.4 * 10^{14} TEPS = 0.18 – 6.4 * 10^{5} GTEPS = 0.18 – 6.4 * 10^{5} GTEPS * 1 TFLOPS/1.9 GTEPS = 9,000 – 337,000 TFLOPS = 0.9 – 33.7 * 10^{16} FLOPS

### Criticism of FLOPS as a measure

Measuring brain performance in FLOPS is like measuring intelligence with an IQ test that only probes working memory capacity. Are you measuring in single or double precision? How wide is the floating point register? Does every type of operation have an identical latency? Simply saying “FLOPS” leaves all of this unspecified.

FLOPS specifically measures floating point mathematical operations at a given precision. While this may matter for estimating the power required to *emulate* a brain (since the statistical computations involved use floating point numbers), it does not mean that a brain works in floating point values. In fact, it is easy to construct a system that does very little computation but requires an enormous number of floating point operations to emulate.
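As an illustration of that last point (a hypothetical sketch, not from the source): emulating even a trivially simple physical system in floating point can burn far more floating point operations than any "computation" the system itself performs.

```python
import random

# Hypothetical illustration: emulate N particles undergoing random drift.
# The emulation performs many floating point operations per step, yet the
# "system" computes nothing more than a slowly wandering average position.
N, STEPS = 1_000, 100
positions = [0.0] * N

flop_count = 0
for _ in range(STEPS):
    for i in range(N):
        positions[i] += random.gauss(0.0, 1.0)  # one add per particle
        flop_count += 1  # counting only the explicit adds; the RNG costs more

mean_position = sum(positions) / N
print(f"{flop_count} floating point adds emulated a system whose "
      f"'output' is a single number: {mean_position:.3f}")
```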

Emulating a biochemical system involves operating on floating point values, often at high precision. Emulating a CPU, on the other hand, may only need the ALU (arithmetic logic unit) for math. That does not mean an isolated system of 3 macromolecules and a couple of thousand surrounding molecules does more meaningful computation than an old Zilog Z80 at 3.5 MHz, even though it takes a powerful computer with a very high FLOPS rating to emulate the biochemical system in real time, whereas even an older embedded CPU from the early 2000s can perfectly emulate a Z80.

Furthermore, even for a computer, FLOPS is a poor measure of performance, as it exercises only a single subsystem of a CPU: the FPU (floating point unit). A program using 100% of a processor may spend very little of its time in the FPU. Many other subsystems make a difference: the ALU (arithmetic logic unit, itself split into parts such as the multiplier and the adder), the instruction decoder, the execution engine, cryptographic accelerators, the cache hierarchy, register renaming, the out-of-order execution machinery, and much more. Simply doubling the FPUs on a CPU may double the FLOPS it can produce while leaving a real-world workload no faster at all.

Not all floating point operations are the same. For a computer, two operations of the same type are (with a few exceptions) identical, and both will return in a predictable number of cycles. For a biological system (or indeed *any* system lacking a fast, periodic clock that sets the rate of instruction execution, where a given instruction takes a fixed number of cycles), some operations may be faster than others. What is 0.00000000000000000 divided by 1.000000000000000000 to 17 significant digits? A computer does this at the same speed as 0.000597012244189652 divided by 166892813.54003433, simply because the FPU’s divider takes a fixed number of cycles regardless of the operand values.
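A small sketch of the same idea: IEEE-754 doubles carry roughly 15–17 significant decimal digits, and the hardware takes the "easy" and "hard" divisions through the same divider path, so both produce a full-precision result.

```python
# IEEE-754 doubles carry about 15-17 significant decimal digits; the
# hardware divider treats trivial and awkward operands identically.
easy = 0.0 / 1.0
hard = 0.000597012244189652 / 166892813.54003433

print(easy)             # 0.0
print(f"{hard:.17g}")   # full-precision quotient, ~3.6e-12
```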

For a biological system, there are many shortcuts that make some operations easier than others. A computer does not necessarily use these shortcuts, because its limiting factor is the speed at which the fetch engine can read instructions from memory and the decoder can dispatch operations to the correct execution units. As the brain has no fetch, decode, or execution engines, FLOPS is a meaningless measure of its performance.