Costs of human-level hardware

Computing hardware which is equivalent to the brain –

  • in terms of FLOPS probably costs between $1 x 10⁵ and $3 x 10¹⁶, or $2/hour – $700bn/hour.
  • in terms of TEPS probably costs $200M – $7B, or $4,700 – $170,000/hour (including energy costs in the hourly rate).
  • in terms of secondary memory probably costs $300 – $3,000, or $0.007 – $0.07/hour.

Details

Partial costs

Computation

Main articles: Brain performance in FLOPS, Current FLOPS prices, Trends in the costs of computing

Floating-point Operations Per Second (FLOPS) is a measure of computer performance that emphasizes computing capacity. The human brain is estimated to perform between roughly 3 x 10¹³ and 10²⁵ FLOPS. Hardware currently costs around $3 x 10⁻⁹/FLOPS, or $7 x 10⁻¹⁴/FLOPS-hour. This makes the current price of hardware with computing capacity equivalent to the human brain between $1 x 10⁵ and $3 x 10¹⁶, or $2/hour – $700bn/hour if the hardware is used for five years.
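For concreteness, here is a minimal sketch in Python of the arithmetic above. The figures are those stated on this page; the five-year amortization period (roughly 43,800 hours) is the assumption already mentioned, and the variable names are illustrative.

```python
# Sketch of the FLOPS cost arithmetic above (figures from this page).
PRICE_PER_FLOPS = 3e-9               # $/FLOPS of hardware (2015)
HOURS_IN_FIVE_YEARS = 5 * 365 * 24   # ~43,800 hours of use

for brain_flops in (3e13, 1e25):     # low and high brain estimates
    total = brain_flops * PRICE_PER_FLOPS
    hourly = total / HOURS_IN_FIVE_YEARS
    print(f"{brain_flops:.0e} FLOPS: ${total:.0e} total, ${hourly:.0e}/hour")

# Low end:  ~$1e5 total, ~$2/hour.
# High end: ~$3e16 total, ~$7e11/hour (~$700bn/hour).
```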

The price of FLOPS has probably decreased by a factor of ten roughly every four years in the last quarter of a century.
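Expressed as a formula, the trend says prices fall as 10^(−t/4), where t is years elapsed. The sketch below only restates that historical claim; it is not a forecast.

```python
# The "factor of ten every four years" FLOPS price trend as a formula.
def relative_price(years_elapsed, years_per_tenfold=4.0):
    """Price relative to the starting price, under the stated trend."""
    return 10 ** (-years_elapsed / years_per_tenfold)

# Implied total price drop over the last quarter century:
print(f"{1 / relative_price(25):.0e}")   # ~2e6-fold cheaper
```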

Communication

Main articles: Brain performance in TEPS, The cost of TEPS

Traversed Edges Per Second (TEPS) is a measure of computer performance that emphasizes communication capacity. The human brain is estimated to perform at 0.18 – 6.4 x 10⁵ GTEPS. Communication capacity costs around $11,000/GTEPS, or $0.26/GTEPS-hour in 2015, when amortized over five years and combined with energy costs. This makes the current price of hardware with communication capacity equivalent to the human brain around $200M – $7B in total, or $4,700 – $170,000/hour including energy costs.
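The corresponding arithmetic for TEPS, as a sketch (figures from this page; the $0.26/GTEPS-hour rate already folds in five-year amortization and energy costs):

```python
# Sketch of the TEPS cost arithmetic above (figures from this page).
PRICE_PER_GTEPS = 11_000    # $/GTEPS of hardware (2015)
HOURLY_PER_GTEPS = 0.26     # $/GTEPS-hour, amortized over 5 years + energy

for brain_gteps in (0.18e5, 6.4e5):   # low and high brain estimates
    print(f"{brain_gteps:.1e} GTEPS: "
          f"${brain_gteps * PRICE_PER_GTEPS:.0e} total, "
          f"${brain_gteps * HOURLY_PER_GTEPS:,.0f}/hour")

# Low end:  ~$2e8 ($200M) total, ~$4,700/hour.
# High end: ~$7e9 ($7B) total, ~$166,000 (~$170,000)/hour.
```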

We estimate that the price of TEPS falls by a factor of ten every four years, based on the relationship between TEPS and FLOPS.

Information storage

Main articles: Information storage in the brain, Costs of information storage, Costs of human-level information storage

Computer memory comes in primary and secondary forms. Primary memory (e.g. RAM) is intended to be accessed frequently, while secondary memory is slower to access but has higher capacity. Here we estimate the secondary memory requirements of the brain. The human brain is estimated to store around 10-100TB of data. Secondary storage costs around $30/TB in 2015. This means it costs $300 – $3,000 to buy enough storage for the contents of a human brain, or $0.007 – $0.07/hour if the hardware is used for five years.

In the long run, the price of secondary memory has declined by an order of magnitude roughly every 4.6 years. However, the decline has slowed so much that, as of 2015, prices had not dropped substantially since 2011.

Interpreting partial costs

Calculating the total cost of hardware that is relevantly equivalent to the brain is not as simple as adding the partial costs as listed. FLOPS and TEPS are measures of different capabilities of the same hardware, so if you pay for TEPS at the aforementioned prices, you will also receive FLOPS.

The above list is also not exhaustive: there may be substantial hardware costs that we haven’t included.

Brain performance in FLOPS

Five credible estimates of brain performance in terms of FLOPS that we are aware of are spread across the range from 3 x 10¹³ to 10²⁵. The median estimate is 10¹⁸.

Details

Notes

We have not investigated the brain’s performance in FLOPS in detail. This page summarizes others’ estimates that we are aware of. Text on this page was heavily borrowed from a blog post, Preliminary prices for human-level hardware.

Estimates

Sandberg and Bostrom 2008

Sandberg and Bostrom project the processing required to emulate a human brain at different levels of detail.1 For the three levels that their workshop participants considered most plausible, their estimates are 10¹⁸, 10²², and 10²⁵ FLOPS. These would cost around $100K/hour, $1bn/hour and $1T/hour in 2015.
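At the hardware price above (~$7 x 10⁻¹⁴/FLOPS-hour), these three levels translate to hourly costs as follows; a sketch, rounding to the nearest order of magnitude as the text does:

```python
# Hourly cost of Sandberg & Bostrom's three emulation levels (2015 prices).
PRICE_PER_FLOPS_HOUR = 7e-14    # $/FLOPS-hour, from this page

for flops in (1e18, 1e22, 1e25):
    print(f"{flops:.0e} FLOPS: ${flops * PRICE_PER_FLOPS_HOUR:.0e}/hour")

# ~$7e4, ~$7e8 and ~$7e11 per hour: roughly $100K, $1bn and $1T per hour
# to the nearest order of magnitude.
```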

Moravec 2009

Moravec (2009) estimates that the brain performs around 100 million MIPS.2 MIPS are not directly comparable to MFLOPS (millions of FLOPS), and have deficiencies as a measure, but the empirical relationship in computers is something like MFLOPS = 2.3 x MIPS^0.89, according to Sandberg and Bostrom.3 This suggests Moravec’s estimate coincides with around 3.0 x 10¹³ FLOPS. Given that an order of magnitude of computing power per dollar corresponds to about four years of hardware progress, treating MFLOPS and MIPS as roughly comparable is plenty of precision for our purposes.
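A quick check of this conversion (the fit MFLOPS = 2.3 x MIPS^0.89 is Sandberg and Bostrom's; the code merely reproduces the arithmetic):

```python
# Converting Moravec's 100 million MIPS to FLOPS via the empirical fit
# MFLOPS = 2.3 * MIPS**0.89 (Sandberg and Bostrom).
mips = 1e8                          # 100 million MIPS
mflops = 2.3 * mips ** 0.89         # empirical MIPS -> MFLOPS relationship
print(f"{mflops * 1e6:.1e} FLOPS")  # MFLOPS -> FLOPS: ~3.0e13
```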

Kurzweil 2005

In The Singularity is Near, Kurzweil claimed that a human brain required 10¹⁶ calculations per second, which appears to be roughly equivalent to 10¹⁶ FLOPS.4


 

Index of articles about hardware

Hardware in terms of computing capacity (FLOPS and MIPS)

Brain performance in FLOPS

Current FLOPS prices

Trends in the cost of computing

Wikipedia history of GFLOPS costs

Hardware in terms of communication capacity (TEPS)

Brain performance in TEPS (includes the cost of brain-level TEPS performance on current hardware)

The cost of TEPS (includes current costs, trends and relationship to other measures of hardware price)

Information storage

Information storage in the brain

Costs of information storage

Costs of human-level information storage

Other

Costs of human-level hardware

Research topic: hardware, software and AI

Index of articles about hardware

Related blog posts

Preliminary prices for human level hardware (4 April 2015)

A new approach to predicting brain-computer parity (7 May 2015)

Time flies when robots rule the earth (28 July 2015)

Costs of human-level information storage

It costs roughly $300 – $3,000 to buy enough storage space to store all the information contained in a human brain.

Support

The human brain probably stores around 10-100TB of data. Data storage costs around $30/TB. Thus it costs roughly $300 – $3,000 to buy enough storage space to store all the information contained in a human brain.

If we suppose that one wants to replace the hardware every five years, this is $0.007 – $0.07/hour.1

For reference, we have estimated that the computing hardware and electricity required to do the computation the brain does would cost around $4,700 – $170,000/hour at present (using an estimate based on TEPS, and assuming computers last for five years). Estimates based on computation rather than communication capabilities (like TEPS) appear to be spread between $3/hour and $1T/hour.2 On the TEPS-based estimate then, the cost of replicating the brain’s information storage using existing hardware would currently be between a twenty millionth and a seventy thousandth of the cost of replicating the brain’s computation using existing hardware.
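A sketch of the amortization and the comparison above (all figures are from this page; the "twenty millionth" and "seventy thousandth" in the text are roundings of the ratios printed here):

```python
# Hourly cost of brain-scale storage, and its ratio to TEPS-based compute.
HOURS_IN_FIVE_YEARS = 5 * 365 * 24                  # ~43,800 hours
low, high = 300 / HOURS_IN_FIVE_YEARS, 3000 / HOURS_IN_FIVE_YEARS
print(f"${low:.3f} - ${high:.2f}/hour")             # ~$0.007 - $0.07/hour

compute_low, compute_high = 4_700, 170_000          # TEPS-based $/hour range
print(f"1/{compute_high / low:,.0f}")               # ~1/25,000,000
print(f"1/{compute_low / high:,.0f}")               # ~1/69,000
```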

Costs of information storage

Cheap secondary memory appears to cost around $0.03/GB in 2015. In the long run, the price has declined by an order of magnitude roughly every 4.6 years. However, the decline has slowed so much that, as of 2015, prices had not dropped substantially since 2011.

Support

Cheap secondary memory appears to cost around $0.03/GB in 2015.1

The price appears to have declined at an average rate of around an order of magnitude every 4.6 years in the long run, as illustrated in Figures 1 and 2. Figure 1 shows roughly six and a half orders of magnitude over the thirty years between 1985 and 2015, or around an order of magnitude every 4.6 years. Figure 2 shows thirteen orders of magnitude over the sixty years between 1955 and 2015, for exactly the same rate. Both figures suggest the rate has been much slower in the past five years, seemingly as part of a longer-term flattening. It appears that, as of 2015, prices had not dropped substantially since 2011.
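The rate calculation itself is one line, as below (the orders of magnitude and year spans are read off the two figures):

```python
# Years per tenfold price drop implied by each figure.
for label, orders, years in (("Figure 1", 6.5, 30), ("Figure 2", 13, 60)):
    print(f"{label}: {years / orders:.1f} years per order of magnitude")
# Both give ~4.6 years.
```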


Figure 1: Historic prices of hard drive space, from Matt Komorowski

Figure 2: Historical prices of information storage in various formats, from Havard Blok, mostly drawing on John C. McCallum’s data.


 

Information storage in the brain

The brain probably stores around 10-100TB of data.

Support

According to Forrest Wickman, computational neuroscientists generally believe the brain stores 10-100 terabytes of data.1 He suggests that these estimates are produced by assuming that information is largely stored in synapses, and that each synapse stores around 1 byte. The number of bytes is then simply the number of synapses.

These assumptions are simplistic (as he points out). In particular:

  • synapses may store more or less than one byte of information on average
  • some information may be stored outside of synapses
  • not all synapses appear to store information
  • synapses do not appear to be entirely independent

We estimate that there are 1.8-3.2 x 10¹⁴ synapses in the human brain, so according to the procedure Wickman outlines, this suggests that the brain stores around 180-320TB of data. It is unclear from his article whether the variation in the views of computational neuroscientists is due to different opinions on the assumptions stated above, or on the number of synapses in the brain. This makes it hard to adjust our estimate well, so our best guess for now is that the brain can store around 10-100TB of data, based on this being the common view among computational neuroscientists.
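The procedure Wickman outlines reduces to multiplying a synapse count by one byte per synapse; a minimal sketch using our synapse estimate:

```python
# Storage implied by ~1 byte per synapse (the procedure Wickman outlines).
for synapses in (1.8e14, 3.2e14):    # our estimate of the brain's synapse count
    terabytes = synapses / 1e12      # 1 byte per synapse; 1e12 bytes per TB
    print(f"{terabytes:.0f} TB")     # prints 180 TB and 320 TB
```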


 

Conversation with Steve Potter

Participants

Figure 1: Professor Steve Potter

  • Professor Steve Potter – Associate Professor, Laboratory of NeuroEngineering, Coulter Department of Biomedical Engineering, Georgia Institute of Technology
  • Katja Grace – Machine Intelligence Research Institute (MIRI)

Note: These notes were compiled by MIRI and give an overview of the major points made by Professor Steve Potter.

Summary

Katja Grace spoke with Professor Steve Potter of Georgia Institute of Technology as part of AI Impacts’ investigation into the implications of neuroscience for artificial intelligence (AI). Conversation topics included how neuroscience now contributes to AI and how it might contribute in the future.

How has neuroscience helped AI in the past?

Professor Potter found it difficult to think of examples where neuroscience has helped with higher level ideas in AI. Some elements of cognitive science have been implemented in AI, but these may not be biologically based. He described two broad instances of neuroscience-inspired projects.

Subsumption architecture

Past work in AI focused on disembodied computers, with little work in robotics. Researchers now understand that AI does not need to be centralized; it can also take on physical form. Subsumption architecture is one way that robotics has advanced; it involves the coupling of sensory information to action selection. For example, Professor Rodney Brooks at MIT has developed robotic legs that respond to certain sensory signals. These legs also send messages to one another to control their movement. Professor Potter believes that this work could have been based on neuroscience, but it is not clear how much Professor Brooks was inspired by neuroscience while working on this project; the idea may have come to him independently.

Neuromorphic engineering

This type of engineering employs properties of biological nervous systems, such as perception and motor control, in artificial neural systems. One aspect of brain function can be imitated with silicon chips through pulse-coding, where analog signals are sent and received in tiny pulses. One application is in camera development, mimicking the pulse-coded signals between the retina and the brain.

How is neuroscience contributing to AI today?

Although neuroscience has not assisted AI development much in the past, Professor Potter has confidence that this intersection has considerable potential. This is because the brain works well in areas where AI falls short. For example, AI needs to improve how it works in real time in the real world. Self-driving cars may be improved through examining how a model organism, such as a bee, would respond to an analogous situation. Professor Potter believes it would be worthwhile research to record how humans use their brains while driving. Brain algorithms developed from this could be implemented into car design.

Current work at the intersection of neuroscience and AI includes the following:

Artificial neural networks

Most researchers at the intersection of AI and neuroscience are examining artificial neural networks, and might describe their work as ‘neural simulations’. These networks are a family of statistical learning models that are inspired by biological neural networks. Hardware in this discipline includes neuromorphic chips, while software includes work in pattern recognition. This includes handwriting recognition and finding military tanks in aerial photographs. The translation of these networks into useful products for both hardware and software applications has been slow.

Hybrots

Professor Potter has helped develop hybrots, hybrids of living tissue interfaced with robotic machines: robots controlled by neurons. Silent Barrage was an early hybrot that drew on paper attached to pillars. Video was taken of people viewing the Silent Barrage hybrots, and this data was transmitted back to Professor Potter’s lab, where it was used to trigger electrical stimulation in the living brain of the system: a petri dish interfaced to a culture of rat cortical neurons. This work is currently being expanded to include more types of hybrots: one will be controlled by living neurons, while another will be controlled by a simulated neural network.

Meart (MultiElectrode Array Art) was an earlier hybrot. Controlled by a brain composed of rat neuron cells, it used robotic arms to draw on paper. It never progressed past the toddler stage of scribbling.

How is neuroscience likely to help AI in the future?

A particular line of research in neuroscience that is likely to help with AI is the study of delays. Computer design is often optimized to reduce the time between command and execution. The brain, by contrast, may take milliseconds longer to respond. However, the brain’s delays evolved to match the timing of the real world, and they are a useful part of its learning process.

Neuroscience probably also has potential to help AI in searching databases. It appears that the brain has methods for this that are completely unlike those used in computers, though we do not yet know what the brain’s methods are. One example given of the brain’s impressive abilities here is that Professor Potter can meet a new person and instantly be confident that he has never seen that person before.

How long will it take to duplicate human intelligence?

It will be hard to say when this has been achieved; success is happening at different rates for different applications. The future of neuroscience in AI will most likely involve taking elements of neuroscience and applying them to AI piecemeal; it is unlikely that researchers will wait until we have a good understanding of the brain and then export that knowledge wholesale to AI.

Professor Potter greatly respects Ray Kurzweil, but does not think that he has an in-depth knowledge of neuroscience. Professor Potter thinks the brain is much more complex than Kurzweil appears to believe, and that ‘duplicating’ human intelligence will take far longer than Kurzweil predicts. In Professor Potter’s estimation, it will take over a hundred years to develop a robot butler that can convince you it is human.

Challenges to progress

Lack of collaboration

Neuroscience-inspired AI progress has been hampered because researchers across neuroscience and AI seldom collaborate with one another. This may stem from disinterest or limited understanding of each other’s fields. Neuroscientists are not generally interested in the goal of creating human-level artificial intelligence: of the roughly 30,000 people who attend the Society for Neuroscience conference, Professor Potter believes approximately 20 want this. Most neuroscientists want to learn how something works rather than how it can be applied (e.g. learning how the auditory system works instead of developing a new hearing aid). If more people saw benefits in applying neuroscience to AI, and in particular to human-level AI, there would be greater progress, though how much is hard to predict; there is potential for very much more rapid progress. For researchers to move their projects in this direction, the priorities of funding agencies would first have to move, as these effectively dictate which projects go forward.

Funding

Funding for work at the intersection of neuroscience and AI may be hard to find. The National Institutes of Health (NIH) funds only health-related work and has not funded AI projects. The National Science Foundation (NSF) may not think the work fits its requirement of being basic science research; it may be too applied. NSF, though, is more open-minded about funding research on AI than NIH is. The military is also interested in AI research. Outside the U.S., the European Union (EU) funds cross-disciplinary work in neuroscience and AI.

National Science Foundation (NSF) funding

NSF had a call for radical proposals, from which Professor Potter received a four-year grant to apply neuroscience to electrical grid systems. Collaborators included a power engineer and people studying neural networks. The group was interested in addressing the U.S.’s large and uneven power supply and usage. The electrical grid has become increasingly difficult to control because of geographically varying differences in input and output.

Professor Potter believes that if people in neuroscience, AI, neural networks, and computer design talked more, this would bring progress. However, there were some challenges with this collaborative electrical grid project that would need to be addressed. For example, the researchers needed to spend considerable time educating one another about their respective fields. It was also difficult to communicate with collaborators across the country: NSF paid for only one meeting per year, and the nuances of in-person interaction seem important for bringing together such diverse groups of people and reaping the benefits of their creative communication.

Other people working in this field

  • Henry Markram – Professor, École Polytechnique Fédérale de Lausanne, Laboratory of Neural Microcircuitry. Using EU funding, he creates realistic computer models of the brain, one piece at a time.
  • Rodney Douglas – Professor Emeritus, University of Zurich, Institute of Neuroinformatics. He is a neuromorphic engineer who worked on emulated brain function.
  • Carver Mead – Gordon and Betty Moore Professor of Engineering and Applied Science Emeritus, California Institute of Technology. He was a founding father of neuromorphic engineering.
  • Rodney Brooks – Panasonic Professor of Robotics Emeritus, Massachusetts Institute of Technology (MIT). He was a pioneer in studying distributed intelligence and developed subsumption architecture.
  • Andy Clark – Professor of Logic and Metaphysics, University of Edinburgh. He does work on embodiment, artificial intelligence, and philosophy.
  • Jose Carmena – Associate Professor of Electrical Engineering and Neuroscience, University of California-Berkeley. Co-Director of the Center of Neural Engineering and Prostheses, University of California-Berkeley, University of California-San Francisco. He has researched the impact of electrical stimulation on sensorimotor learning and control in rats.
  • Guy Ben-Ary – Manager, University of Western Australia, CELLCentral in the School of Anatomy and Human Biology. He is an artist and researcher who uses biologically related technology in his work. He worked in collaboration with Professor Potter on Silent Barrage.
  • Wolfgang Maass – Professor of Computer Science, Graz University of Technology. He is doing research on artificial neural networks.
  • Thad Starner – Assistant Professor, Georgia Institute of Technology, College of Computing. He applies biological concepts into developing wearable computing devices.
  • Jennifer Hasler – Professor, Georgia Institute of Technology, Bioengineering and Electronic Design and Applications. She has studied neuromorphic hardware.

 

Predictions of Human-Level AI Timelines

We know of around 1,300 public predictions of when human-level AI will arrive, of varying levels of quality. These include predictions from individual statements and larger surveys. Median predictions tend to be between 2030 and 2055 for predictions made since 2000, across different subgroups of predictors.

Details

The landscape of AI predictions

Predictions of when human-level AI will be achieved exist in the form of surveys and public statements (e.g. in articles, books or interviews). Some statements backed by analysis are discussed here. Many more statements have been collected by MIRI. Figure 1 illustrates almost all of the predictions we know about, though most are aggregated there into survey medians. Altogether, we know of around 1,300 public predictions of when human-level AI will arrive, though 888 are from a single informal online poll. We know of ten surveys that address this question directly (plus a set of interviews which we sometimes treat as a survey but count here as individual statements, and a survey which asks about progress so far as a fraction of what is required for human-level AI). Only 65 predictions that we know of are not part of surveys.

Summary of findings

Figure 1: Predictions from the MIRI dataset (red = maxIY ≈ ‘AI more likely than not after …’, and green = minPY ≈ ‘AI less likely than not before …’) and surveys. This figure excludes one prediction of 3012 made in 2012, and the Hanson survey, which doesn’t ask directly about prediction dates.

Recent surveys tend to have median dates between 2040 and 2050. All six of the surveys which ask for the year by which human-level AI will have arrived with 50% probability produce medians in this range (not including Kruel’s interviews, which have a median of 2035 and are counted among the statements here). The median prediction in statements is 2042, though the predictions of AGI researchers and futurists have medians in the early 2030s. Surveys give median estimates for a 10% chance of human-level AI in the 2020s. We have not attempted to adjust these figures for biases.

Implications

Expert predictions about AI timelines are often considered uninformative. However, the evidence that they are less informative than predictions in other messy fields appears to be weak (though we have not evaluated baseline prediction accuracy in such fields). Due to selection biases, we expect survey results, and predictions from people further from AGI, to be more accurate than other sources. The differences between these sources appear to amount to a small number of decades.
