FLOPS is specifically a measure of floating-point mathematical operations at a certain precision. While this measure may matter when estimating the power required to *emulate* a brain (since statistical computations involve floating-point numbers), it does not follow that the brain itself works in floating-point values. In fact, it is easy to construct a system that does very little computation yet requires an enormous number of floating-point operations to emulate.

Emulating a biochemical system involves operating on floating-point values, often at high precision. Emulating a CPU, on the other hand, may only need the ALU for math. That does not mean an isolated system of 3 macromolecules and a couple thousand surrounding molecules does more meaningful computation than an old Zilog Z80 at 3.5 MHz, even though it takes a powerful computer with an extremely high FLOPS rating to emulate the biochemical system in real time, whereas even an older embedded CPU from the early 2000s can perfectly emulate a Z80.
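To make this concrete, here is a hedged sketch (the "force law" and constants are invented for illustration): a handful of particles with pairwise interactions. The system computes nothing useful, yet the floating-point cost of emulating it grows quadratically with particle count.

```python
# Toy sketch: emulating a computationally "dumb" system still costs many FLOPs.
# The interaction law here is made up purely for illustration.

def step(positions, velocities, dt=1e-3):
    """Advance a 1-D pairwise-interaction system one step, counting FLOPs."""
    n = len(positions)
    flops = 0
    forces = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = positions[j] - positions[i]       # 1 FLOP (subtract)
            forces[i] += 1.0 / (r * r + 1e-9)     # 4 FLOPs (mul, add, div, add)
            flops += 5
    for i in range(n):
        velocities[i] += forces[i] * dt           # 2 FLOPs
        positions[i] += velocities[i] * dt        # 2 FLOPs
        flops += 4
    return flops

# Cost per step scales as O(n^2) even though the system "computes" nothing:
print(step([0.0, 1.0, 2.0], [0.0, 0.0, 0.0]))    # → 42
```

Scale this to thousands of molecules at femtosecond timesteps and the FLOP budget explodes, while the meaningful computation performed stays essentially zero.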

Furthermore, even for a computer, FLOPS is a very poor measure of performance, as it examines only a single subsystem of a CPU: the FPU (floating-point unit). A program using 100% of a processor may spend very little of its time in the FPU. Many other subsystems make a difference: the ALU (arithmetic logic unit, itself split into multiple parts, such as the multiplier and the adder), the instruction decoder, the execution units, the crypto engine, the cache hierarchy, register renaming, the out-of-order execution engine, and much, much more. Simply doubling the FPUs on a CPU may double the FLOPS it can spit out, but a real-world workload may not be improved at all.
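As a hedged illustration (all numbers below are hypothetical), a quick roofline-style estimate shows why peak FLOPS can be irrelevant: a memory-bound workload is limited by bandwidth, not by the FPU, so doubling peak FLOPS leaves its throughput unchanged.

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Classic roofline model: throughput = min(peak, bandwidth * intensity)."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# Hypothetical machine: 100 GFLOPS peak, 10 GB/s memory bandwidth.
# A streaming kernel doing 0.25 FLOPs per byte (e.g. summing a big array):
before = attainable_gflops(100.0, 10.0, 0.25)   # 2.5 GFLOPS
after  = attainable_gflops(200.0, 10.0, 0.25)   # still 2.5 GFLOPS after
                                                # doubling the FPUs
print(before, after)
```

Only a compute-bound kernel (high FLOPs per byte) would ever see the doubled peak.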

Not all floating-point operations are the same. For a computer, two operations of the same kind (with a few exceptions) are identical, and both return in a predictable number of cycles. For a biological system (or literally *any* system that lacks a fast, periodic clock source determining the rate of instruction execution, where a given instruction takes a fixed number of cycles), some operations may be faster than others. What is 0.00000000000000000 divided by 1.000000000000000000 to 17 digits of precision? A computer answers this at the same speed as 0.000597012244189652 divided by 166892813.54003433 to 17 digits, simply because the division unit of the FPU (or the ALU) can answer *any* floating-point division in a fixed number of cycles, regardless of the operands.
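You can sanity-check the data-independence claim yourself with a hedged sketch like the one below (results vary by machine, and Python's interpreter overhead dwarfs the FPU, so at best this shows that neither operand pair is systematically slower):

```python
import timeit

# Time a "trivial" division against a "messy" one. Operands are passed in via
# setup so the interpreter cannot constant-fold the expression away.
t_simple = timeit.timeit("a / b", setup="a, b = 0.0, 1.0",
                         number=1_000_000)
t_messy = timeit.timeit("a / b",
                        setup="a, b = 0.000597012244189652, 166892813.54003433",
                        number=1_000_000)
print(f"simple: {t_simple:.4f}s  messy: {t_messy:.4f}s")
```

On typical hardware the two timings come out essentially identical, which is exactly the point: the silicon does not care what the values are.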

For a biological system, there are many shortcuts that make some operations easier than others. A computer does not necessarily use these shortcuts, because its limiting factor is the speed at which the fetch engine can read instructions from memory and the decoder can dispatch operations to the correct execution units. Since the brain has no fetch, decode, or execution engines at all, FLOPS is a meaningless measurement of it.

1) That a synapse can exist in 256 possible states. This is a problematic assumption because, unlike a byte, where every possible value has the same weight, a synapse can “prefer” to be in a given state. Synaptic facilitation or augmentation is far easier and faster than forming a lasting synaptic engram. Other factors can also be in play that affect the number of states a synapse can occupy.

2) That a synapse stores a single independent unit of information, i.e. that two synapses store exactly twice as much as one. This comes directly from our experience with computers, but it could hardly be further from the truth. The brain more likely stores information in the form of relationships, like a vastly more complicated version of a radix tree (a prefix-keyed lookup structure). If a single synapse acts like a single node in a radix tree, the “amount” of information it stores depends directly on every other synapse it is related to.

3) That every synapse, or even a large fraction of synapses, stores information (an issue you pointed out already). Many synapses cannot hold information for long periods, or cannot store information at all.

4) That synapses are atomic units of storage, with the number of synapses being the only factor determining capacity, discounting factors such as the state of the neuron and the proximity of one synapse to another. This amounts to assuming that one neuron with 100,000 synapses can exist in the same number of possible states as ten neurons with 10,000 synapses each. In reality, an action on one synapse affects nearby synapses, and collective actions on many synapses can change the behavior of the whole neuron.

5) That neurons are the only site of information storage. In reality, glial cells heavily influence the excitability of a neuron, and astrocyte support can change a neuron’s behavior and thus its “capacity”.

6) That all information has the same value. When quantifying storage capacity, any individual bit is no different from any other: sector 1234 on a hard drive is no more “important” than sector 4321, and either can be used to store any information you want. In the brain, a given synapse may be able to exist in 256 possible states, yet encode only the information needed to calibrate its own response so it is not overactive. Such a synapse cannot be used to store yet another digit of pi you are trying to memorize. The closest analogy in computer science is the Harvard architecture, where machine instructions and data are kept in separate memories. The brain is like this, but on steroids. Not all data is equal.
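The radix-tree analogy from point 2 can be sketched as follows (a deliberately minimal prefix tree; no claim that neural storage actually works this way, only that information can live in the *relationships* between nodes rather than in the nodes themselves):

```python
class Node:
    """One node of a minimal prefix (radix-style) tree."""
    def __init__(self):
        self.children = {}    # edge label (one character) -> child Node
        self.terminal = False # True if a stored word ends here

def insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, Node())
    node.terminal = True

def count_nodes(root):
    return 1 + sum(count_nodes(c) for c in root.children.values())

root = Node()
for w in ["cat", "car", "card"]:
    insert(root, w)

# 10 characters of input are held by only 5 non-root nodes, because the
# "meaning" of each node depends entirely on the path of nodes above it.
print(count_nodes(root) - 1)   # → 5
```

Remove or change one shared node and you alter what *every* word passing through it encodes; the nodes are nothing like independent bytes.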

In the end, applying information theory to neurons requires knowing how many bits each individual factor contributes to a given neuron. Saying that a single synapse can exist in 256 possible states (or rather, that a postsynaptic neuron can only resolve activity with 1/256th granularity) and therefore provides one byte of information is so simplified as to be incorrect.
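A hedged sketch of the information-theoretic point, tying back to point 1: a synapse with 256 *equally likely* states would carry 8 bits, but if it “prefers” some states (the skewed distribution below is invented for illustration), its Shannon entropy, and hence its capacity, drops well below a byte.

```python
import math

def shannon_entropy_bits(probs):
    """H(X) = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# 256 equally likely states: exactly 8 bits.
uniform = [1 / 256] * 256
print(shannon_entropy_bits(uniform))   # → 8.0

# Hypothetical "preferring" synapse: 90% of the probability mass on one
# favored state, the rest spread over the remaining 255 states.
skewed = [0.9] + [0.1 / 255] * 255
print(shannon_entropy_bits(skewed))    # well under 8 bits
```

The byte-per-synapse figure is thus an upper bound that only holds under a uniformity assumption the rest of the comment argues against.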

The only similarity between the brain and a computer is that both are capable of Turing-complete computation.

I hypothesize that people don’t really register that the current year is 2017; they implicitly round it off to either 2000 or 2010, which would explain this.

> For people asked to give probabilities for certain years, the difference was a factor of a thousand twenty years out! (10% vs. 0.01%)

Sounds like a great betting arbitrage opportunity! 😀

Some other comments:

* Naturally, “AI researcher” is the hardest job to replace, according to this.

* Quite a gap between the time to replace the hardest job (AI researcher) and the time to replace all jobs. I wonder which job the survey respondents expect to be automated last?

* The median prediction for writing a high-school essay is 10 years, whereas writing a bestseller is close to 30 years. Surely writing a high-school essay is almost all of the way to writing a bestseller?