This is a guest post by Ben Garfinkel. We revised it slightly, at his request, on February 9, 2019.
A recent OpenAI blog post, “AI and Compute,” showed that the amount of computing power consumed by the most computationally intensive machine learning projects has been doubling approximately every 3.4 months. The post presents this trend as a reason to better prepare for “systems far outside today’s capabilities.” Greg Brockman, the CTO of OpenAI, has also used the trend to argue for the plausibility of “near-term AGI.” Overall, it seems pretty common to interpret the OpenAI data as evidence that we should expect extremely capable systems sooner than we otherwise would.
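For a sense of how quickly that trend compounds, the short calculation below is purely illustrative and is mine rather than OpenAI’s; the only empirical input is the approximately 3.4-month doubling time reported in the post.

```python
# Illustrative arithmetic only: the growth implied by a ~3.4-month doubling time,
# the figure reported in "AI and Compute". Nothing else here is an empirical claim.

DOUBLING_TIME_MONTHS = 3.4

def growth_factor(months: float, doubling_time: float = DOUBLING_TIME_MONTHS) -> float:
    """Multiplicative increase in training compute over the given number of months."""
    return 2 ** (months / doubling_time)

print(f"Over one year:   ~{growth_factor(12):.0f}x")    # more than a 10x increase per year
print(f"Over five years: ~{growth_factor(60):,.0f}x")   # on the order of 10^5 over five years
```

By comparison, a Moore’s-law-style doubling time of 18 to 24 months would imply only about a 1.4x to 1.6x increase per year.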
However, I think it’s important to note that the data can also easily be interpreted in the opposite direction. A more pessimistic interpretation goes like this:
- If we were previously underestimating the rate at which computing power was increasing, then we were also overestimating the returns on it: the progress we have observed actually required more computing power than we realized, so each increase in computing power bought less progress than we thought.
- In addition, if we were previously underestimating the rate at which computing power was increasing, then we were also overestimating how sustainable its growth is, since faster growth runs up against physical and financial limits sooner.[1]
- Let’s suppose, as the original post does, that increasing computing power is currently one of the main drivers of progress in creating more capable systems. Then — barring any major changes to the status quo — we should expect progress to slow down fairly soon, and we should expect to be underwhelmed by how far along we are when the slowdown hits. (A rough numerical sketch of both points follows this list.)
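To make this concrete, here is a deliberately crude sketch. It is my own illustration, and every specific number in it is an assumption rather than a figure from the OpenAI post. It treats “returns” as progress per doubling of compute, then asks how much the implied return shrinks if compute actually doubled every 3.4 months rather than at a slower rate we might previously have assumed, and how soon an assumed spending ceiling would be reached if the trend is sustained by spending.

```python
# Toy model only: every specific number below is an assumption chosen for
# illustration, not a figure from the OpenAI post.
import math

ASSUMED_DOUBLING_MONTHS = 18.0   # e.g., a Moore's-law-like prior about compute growth
OBSERVED_DOUBLING_MONTHS = 3.4   # the trend reported in "AI and Compute"
PERIOD_MONTHS = 60.0             # window over which some fixed amount of progress was observed

# (1) Returns: the same observed progress spread over many more doublings of
# compute implies a smaller return per doubling.
doublings_assumed = PERIOD_MONTHS / ASSUMED_DOUBLING_MONTHS     # ~3.3 doublings
doublings_observed = PERIOD_MONTHS / OBSERVED_DOUBLING_MONTHS   # ~17.6 doublings
print(f"Implied return per doubling shrinks by ~{doublings_observed / doublings_assumed:.1f}x")

# (2) Sustainability: if the trend is ultimately driven by spending, how long
# until an assumed budget ceiling is reached? (Both dollar figures are made up.)
start_cost, ceiling = 1e7, 1e11  # a $10M project today, a $100B ceiling
years_to_ceiling = math.log2(ceiling / start_cost) * OBSERVED_DOUBLING_MONTHS / 12
print(f"Years until the assumed spending ceiling: ~{years_to_ceiling:.1f}")
```

Under these made-up numbers, the implied return per doubling is about five times smaller than we would previously have estimated, and a trend driven purely by spending hits the ceiling in under four years, which is consistent with the “within the decade” estimate in note 1 below.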
I actually think of this more pessimistic interpretation as something like the default one. There are many other scientific fields where R&D spending and other inputs are increasing rapidly, and, so far as I’m aware, these trends are nearly always interpreted as reasons for pessimism and concern about future research progress.[2] If we are going to treat the field of artificial intelligence differently, then we should want clearly articulated and compelling reasons for doing so.
These reasons certainly might exist.[3] Still, whatever the case may be, I think we should not be too quick to interpret the OpenAI data as evidence for dramatically more capable systems coming soon.[4]
Thank you to Danny Hernandez and Ryan Carey for comments on a draft of this post.
1. As Ryan Carey has argued, we should expect the trend to run up against physical and financial limitations within the decade.
2. See, for example, the pharmaceutical industry’s concern about “Eroom’s Law”: the observation that the number of new drugs approved per dollar of R&D spending has fallen steadily for decades, despite increasingly powerful drug discovery technologies. The recent paper “Are Ideas Getting Harder to Find?” (Bloom et al., 2018) also includes a pessimistic discussion of several other domains, including agriculture and semiconductor manufacturing.
3. One way to argue for the bullish interpretation is to draw on work (briefly surveyed in Ryan’s post) that attempts to estimate the minimum quantity of computing power required to produce a system with the same functionality as the human brain. We can then attempt to construct an argument in which we: (a) estimate this minimum quantity of computing power (using evidence unrelated to the present rate of return on computing power), (b) predict that this quantity will become available before growth trends hit their wall, and (c) argue that having it available would be nearly sufficient to rapidly train systems that can do a large portion of the things humans can do. In this case, the OpenAI data would be evidence that we should expect the computational “threshold” to be reached slightly earlier than we would otherwise have expected. For example, it might take only five years to reach the threshold rather than ten. However, my view is that it’s very difficult to construct an argument in which parts (a)-(c) are all sufficiently compelling. In any case, it still doesn’t seem like the OpenAI data alone should substantially increase the probability anyone assigns to “near-term AGI” (rather than just shifting forward their conditional probability estimates of how “near-term” “near-term AGI” would be). (A toy version of the threshold calculation, with purely assumed numbers, follows these notes.)
4. As a final complication, it’s useful to keep in mind that the OpenAI data only describes the growth rate for the most well-resourced research projects. If you think about progress in developing more “capable” AI systems (whatever you take “capable” to mean) as mostly a matter of what the single most computationally unconstrained team can do at a given time, then this data is obviously relevant. However, if you instead think that something like the typical amount of computing power available to talented researchers is what’s most important — or if you simply think that looking at the amount of computing power available to various groups can’t tell us much at all — then the OpenAI data seems to imply relatively little about future progress.
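As a companion to note 3, the toy calculation below shows only the mechanics of parts (a) and (b): given an assumed computational threshold, a faster doubling time pulls forward the date at which the threshold is crossed. Both doubling times and the threshold are assumptions chosen purely so the output roughly matches the five-versus-ten-year example in the note; none of them come from the OpenAI post.

```python
# Toy calculation for note 3. The threshold and both doubling times are
# assumptions chosen for illustration; none of them come from the OpenAI post.
import math

THRESHOLD_MULTIPLE = 1e5  # assumed threshold: 10^5 times today's largest training run

def years_to_threshold(doubling_months: float, threshold_multiple: float = THRESHOLD_MULTIPLE) -> float:
    """Years until compute grows by `threshold_multiple` at the given doubling time."""
    return math.log2(threshold_multiple) * doubling_months / 12

print(f"Previously assumed trend (7-month doubling): ~{years_to_threshold(7.0):.0f} years")
print(f"Observed trend (3.4-month doubling):         ~{years_to_threshold(3.4):.0f} years")
```

The calculation only shows how much the expected date shifts; whether any of this matters still depends on parts (a) through (c) of the note all being made compelling.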