We do not know how AGI will scale with marginal hardware. Several sources of evidence may shed light on this question.
Details
Background
Suppose that at some point in the future, general artificial intelligence can be run on some quantity of hardware, h, producing some measurable performance p. We would like to know how running approximately the same algorithms using additional hardware (increasing h) affects p.
This is important because, if performance scales superlinearly, then by the time we can run a human-level intelligence on hardware that costs as much as a human, we could run an intelligence that performs more than twice as well as a human on twice as much hardware, and so would already have superhuman efficiency at converting hardware into performance. We could perhaps go further, for instance producing an entity which is 64 times as costly as a human yet almost incomparably better at thinking.
This might mean that 'human-level' effectiveness at converting dollars of hardware into performance is first reached earlier, when perhaps a mass of hardware costing a thousand times as much as a human can be used to produce something which performs a thousand times as well as a human. However, it might instead be that the first software capable of producing something roughly human-like reaches human level on much smaller amounts of hardware, in which case immediately scaling up the hardware might produce a substantially superhuman intelligence. This is one reason some people expect fast progress from sub-human to superhuman intelligence.
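As a concrete illustration of the superlinear case, here is a toy sketch. It assumes performance follows a simple power law in hardware, p(h) = c·h^k, with the functional form and exponents chosen purely for illustration; nothing here is an estimate of how AGI would actually scale.

```python
# Toy illustration only: the power-law form p(h) = c * h**k and the exponents
# below are assumptions made for the sake of example, not estimates about AGI.

def performance(h, k, c=1.0):
    """Hypothetical performance from running roughly the same algorithms on hardware h."""
    return c * h ** k

for k in (0.5, 1.0, 1.5):            # sublinear, linear, and superlinear scaling
    base = performance(1, k)          # hardware costing as much as a human
    print(f"k = {k}: "
          f"2x hardware -> {performance(2, k) / base:.1f}x performance, "
          f"64x hardware -> {performance(64, k) / base:.1f}x performance")
```

With k = 1.5, for example, doubling the hardware gives about 2.8 times the performance, and 64 times the hardware gives 512 times the performance, whereas k = 0.5 gives only 1.4 times and 8 times respectively.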
Whether gains are superexponential or sublinear can depend on which metric of performance is used. For instance, imagine hypothetically that doubling hardware generally produces a twenty-point IQ increase (logarithmic gains in IQ), but that twenty IQ points above the smartest human is enough to conquer areas of science that thousands of scientists have puzzled over ineffectually for a very long time (much better than exponential gains on some metric of discovery or economic value). So the question must be how marginal hardware affects the metrics that matter to us.
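To see how the choice of metric changes the picture, here is a toy numeric version of that hypothetical. Every number and functional form in it (the 20-points-per-doubling rule, the IQ of the smartest human, the shape of the value function) is invented purely for illustration.

```python
import math

# Every number and functional form here is invented to illustrate the hypothetical
# above; none of it is an estimate about real intelligence or real hardware.

def iq_from_hardware(h, iq_at_human_cost=100, points_per_doubling=20):
    """Hypothetical: each doubling of hardware adds 20 IQ points (logarithmic in h)."""
    return iq_at_human_cost + points_per_doubling * math.log2(h)

def discovery_value(iq, smartest_human_iq=160):
    """Hypothetical convex metric: value explodes once IQ exceeds the smartest human."""
    return math.exp(max(0.0, iq - smartest_human_iq) / 10)

for h in (1, 2, 8, 64, 256):   # hardware as a multiple of the human-cost amount
    iq = iq_from_hardware(h)
    print(f"{h:4d}x hardware -> IQ {iq:.0f}, discovery value {discovery_value(iq):,.1f}")
```

On the IQ metric the gains from hardware look strongly sublinear, while on the invented discovery-value metric the same hardware increases eventually look explosive, which is the point of the hypothetical.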
Considerations
We have not investigated this question, but the following sources of evidence seem promising.
Evidence from existing algorithms
We do not yet have any kind of artificial general intelligence. However, we can look at how performance scales with hardware in other kinds of software, especially narrow AI.
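One way to extract such evidence, sketched below, is to fit a scaling exponent to measurements of a narrow-AI system's performance at several hardware budgets. The data points here are placeholders, not real measurements; real data would come from re-running a system at several compute budgets.

```python
import numpy as np

# Hypothetical (hardware, performance) measurements for some narrow-AI benchmark;
# the numbers are placeholders, not real data.
hardware    = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # relative compute budget
performance = np.array([1.0, 1.7, 2.9, 5.1, 8.6])    # relative benchmark score

# Fit p ~ c * h**k by linear regression in log-log space.
k, log_c = np.polyfit(np.log(hardware), np.log(performance), 1)
print(f"estimated scaling exponent k = {k:.2f} "
      "(k > 1 would suggest superlinear returns to hardware)")
```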
Evidence from human brain scaling
Among humans, brain size and intelligence are related. The exact relationship has been studied, but we have not reviewed the literature.
Evidence from between-animal brain scaling
Between animals, brain size relative to body size is related to intelligence. The exact relationship has probably been studied, but we have not reviewed the literature.
Evidence from new types of computation being possible with additional hardware
Some gains with hardware could come not from better performance on a particular task, but from being able to perform new tasks that were previously infeasible with so little hardware. We do not know how big an issue this is, but examining past experience with increasing hardware availability (e.g. by talking to researchers) seems promising.
One issue I have with this approach is the assumed correlation of IQ with performance.
Rough estimates suggest that each additional 30 IQ points corresponds to being roughly twice as effective at learning and processing information.
E.g. the smartest person on earth (IQ ~250) would be about 32 times as fast at processing information or looking at complex data (see the arithmetic check below).
This has nothing to do with originality!
Ideas to solve problems come from originality, not from processing speed. Processing speed only helps with solving logistics-type problems, i.e. problems that are not NP-hard.
The only caveat to this is the recent performance of AlphaZero, which found new approaches to playing Go. However, the processing power of this program is significantly higher than that of the best human player, as rated by Elo standards.
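For what it is worth, the 32x figure in the comment above does follow from the commenter's own rule of thumb; the sketch below checks the arithmetic. The baseline average IQ of 100 is an assumption on my part, as the comment does not state it.

```python
# Checking the 32x figure under the commenter's rule of thumb (a doubling of
# effectiveness per additional 30 IQ points). The baseline average IQ of 100
# is an assumption; the comment does not state it.
baseline_iq, top_iq, points_per_doubling = 100, 250, 30

doublings = (top_iq - baseline_iq) / points_per_doubling   # (250 - 100) / 30 = 5
speedup = 2 ** doublings                                    # 2**5 = 32
print(f"{doublings:.0f} doublings of effectiveness -> {speedup:.0f}x processing speed")
```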