Price performance Moore’s Law seems slow

By Katja Grace, 26 November 2017

When people make predictions about AI, they often assume that computing hardware will carry on getting cheaper for the foreseeable future, at about the same rate that it usually does. Since this is such a common premise, and whether reality has yet proved it false is checkable, it seems good to check sometimes. So we did.

Looking up the price and performance of some hardware turned out to be a real mess, with conflicting numbers everywhere and the resolution of each error or confusion mostly just leading to several more errors and confusions.

I suppose the way people usually make meaningful figures depicting computer performance changing over time is by covering long enough periods that even if each point is only accurate to within three orders of magnitude, it doesn't matter, because the whole trend traverses ten or fifteen orders of magnitude. But since I wanted to know what was happening in the last few years, this wouldn’t do—half an order of magnitude of progress could be entirely lost in that much noise.
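This point can be made concrete with a toy simulation (invented numbers, not the investigation's actual data): fit a log-linear trend to noisy yearly measurements over a long window and over a short one, and see how much noise each fit tolerates.

```python
import random

random.seed(0)

# Toy model: true price-performance improves 10x every 4 years,
# i.e. log10(performance) grows by 0.25 per year, but each measured
# point is off by up to a few orders of magnitude (noise in log space).
def simulate(years, noise_oom):
    return [(t, 0.25 * t + random.gauss(0, noise_oom)) for t in range(years)]

def fitted_slope(points):
    # ordinary least-squares slope of log10(performance) vs. year
    n = len(points)
    mx = sum(t for t, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((t - mx) * (y - my) for t, y in points)
    den = sum((t - mx) ** 2 for t, _ in points)
    return num / den

long_run = fitted_slope(simulate(40, 1.0))   # 40 years: trend spans ~10 OOM
short_run = fitted_slope(simulate(5, 1.0))   # 5 years: trend spans ~1 OOM

print("true slope: 0.25 OOM/year")
print(f"40-year fit: {long_run:.2f} OOM/year")
print(f"5-year fit:  {short_run:.2f} OOM/year")
```

With a standard deviation of one order of magnitude per point, the 40-year fit lands close to the true 0.25 OOM/year, while the 5-year fit can easily be off by a factor of two or more, or even negative.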

In the end, the two best-looking sources of data we could find are the theoretical performance of GPUs (via Wikipedia), and Passmark’s collection of performance records for their own benchmark. Neither is perfect, but both make it look like prices for computing are falling substantially more slowly than they were. Over the last couple of decades it had been taking about four years for computing to get ten times cheaper, and now (on these measures) it’s taking more like twelve years. That could in principle be because these measures differ from the usual ones, but I think probably not.
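The arithmetic behind those figures is easy to check. Assuming smooth exponential improvement (which the underlying data only roughly supports), "ten times cheaper every N years" translates into an annual improvement factor and a price-halving time:

```python
import math

# Convert "10x cheaper every N years" into an annual improvement
# factor and an effective price-halving time.
def annual_factor(years_per_10x):
    # solve f**N = 10 for the per-year factor f
    return 10 ** (1 / years_per_10x)

def halving_time(years_per_10x):
    # years for price to halve: solve 10**(t/N) = 2 for t
    return years_per_10x * math.log10(2)

old, new = 4, 12  # years per 10x: rough historical vs. recent estimate

print(f"historical: {annual_factor(old):.2f}x/year, "
      f"halving every {halving_time(old):.1f} years")
print(f"recent:     {annual_factor(new):.2f}x/year, "
      f"halving every {halving_time(new):.1f} years")
```

So the claimed slowdown is from roughly 1.78x cheaper per year (halving every 1.2 years) to roughly 1.21x per year (halving every 3.6 years).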

There are quite a few confusions still to resolve here. For instance, in spite of showing slower progress, these numbers look a lot cheaper than what would have been predicted by extrapolating past trends (or sometimes more expensive). Which might be because we are comparing performance using different metrics, and converting between them badly. Different records of past trends seem to disagree with one another too, which is perhaps a hint. Or it could be that there was faster growth somewhere in between that we didn’t see. Or we might not have caught all of the miscellaneous errors in this cursed investigation.

But before we get too bogged down trying to work these things out, I just wanted to say that price performance Moore’s Law tentatively looks slower than usual.

See full investigation at: Recent Trend in the Cost of Computing



12 Comments

  1. I wonder if the reason why Price performance Moore’s Law seems slow has to do with the new era of compute usage in training AI systems since 2012 (https://i.imgur.com/Iz26N6P.png) that this OpenAI article informed me about: https://openai.com/blog/ai-and-compute/#addendum

    Specifically, since the growth rate in compute usage increased in 2012 (to an unsustainable level), it makes sense that this is only possible if compute is purchased at a higher price.

    Related: https://www.lesswrong.com/posts/aNAFrGbzXddQBMDqh/moore-s-law-ai-and-the-pace-of-progress

  2. The point you made about how “half an order of magnitude of progress could be entirely lost in that much noise” really resonated with me. I’ve definitely felt that frustration trying to track more granular tech trends where the data just isn’t clean enough.

    It makes me wonder, did you consider looking at specialized hardware like ASICs for specific AI tasks, even if just to see if their price-performance trajectory differed wildly? And what kind of impact do you think this slowdown might have on smaller research labs or startups trying to commercialize AI?

  3. I really appreciated your point about how difficult it was to find reliable data for recent hardware price/performance – that “real mess” with conflicting numbers resonates. I’ve definitely hit similar walls trying to track component costs for personal builds, and it’s frustrating.

    Given the challenges in finding clean data, do you think industry benchmarks or manufacturing reports might offer a more consistent, albeit potentially biased, picture? And I’m curious, what do you see as the biggest implication if this slower trend truly holds for AI development?

  4. That’s a really interesting point about the difficulty of getting reliable data for recent hardware price/performance – I’ve definitely hit similar walls trying to track trends for specific components. It makes me wonder if the industry intentionally makes it opaque.

    Given the slowdown you’re seeing, do you think this might push AI development towards more efficient algorithms and software optimization, rather than relying as much on brute-force hardware improvements? And did you notice any significant differences when trying to source data for CPUs versus GPUs, or was it consistently messy across the board?
