Published 15 Oct 2020
Progress in computer chess performance took:
- ~0 years to go from playing chess at all to playing it at human beginner level
- ~49 years to go from human beginner level to superhuman level
- ~14 years to go from superhuman level to the current highest performance
Details
Human range performance milestones
We use the common Elo system for measuring chess performance. Human chess Elo ratings range from around 800 (beginner)1 to 2882 (highest recorded).2 The highest recorded human score is likely higher than it would have been without chess AI existing, since top players can learn from the AI.3
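The Elo system maps a rating difference to an expected score per game: a player rated 400 points above an opponent is expected to score about 91%. As a minimal sketch (the function name is ours; the formula is the standard Elo expected-score formula):

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (win = 1, draw = 0.5) of player A vs. player B
    under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# Using the ratings cited below: Rybka 1.2 (2902) vs. Kasparov (2851).
# A 51-point gap gives an expected score of roughly 0.57 per game.
print(round(expected_score(2902, 2851), 3))  # prints 0.573
```

This illustrates why a modest Elo gap at the top still translates to only a slight per-game edge.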
Times for machines to cross ranges
Beginner to superhuman range
We could not find transparent sources for low computer chess Elo records, but it seems common to place Elo scores of 800-1200 in the 1950s and 1960s. In his book Robot (1999)5, Moravec gives the diagram shown in Figure 1, which puts a machine with an Elo of around 800 in 1957, though he does not appear to provide a source for this. Figure 2 shows another figure without sources, from a 2002 article by L. Stephen Coles at Dr. Dobb's,4 which puts some machine at over 1000 in around 1950. To err on the side of assuming narrow human ranges, and because Moravec appears to be a more reliable source, we use his data here. This means that machine chess performance entered the human range in 1957 at the latest.
The chess computer Deep Blue famously beat the then world-champion Kasparov under tournament conditions in 1997.7 However, this does not imply that Deep Blue was at that point overall more capable than Kasparov, i.e. had a higher Elo rating.8
According to the Swedish Chess Computer Association records, 2006 is the year when the highest machine Elo rating surpassed the highest human Elo (both the highest at the time, and the highest in 2020). In particular, Rybka 1.2 was rated 2902. At the time, the highest human Elo rating was Garry Kasparov's 2851.9
Thus it took around 49 years for computers to progress from beginner-level human chess to superhuman chess.
Pre-human range
The Chess Programming Wiki says that the 1957 Bernstein Chess program was the first complete chess program.10 This seems likely to be the same Bernstein program noted by Moravec as having an 800 Elo in 1957 (see above). Thus if correct, this means that once machines could complete the task of playing chess at all, they could already do it at human beginner level. This may not be accurate (none of these sources appear to be very reliable), but it strongly suggests that the time between lowest possible performance and beginner human performance was not as long as decades.
Superhuman performance range
The Swedish Chess Computer Association has measured continued progress. As of July 2020, the best chess machine was rated 3558,11 whereas sometime in 2019 the highest rating was 3529.12 AlphaZero also appeared to have an Elo just below 3500 in 2017, according to its creators (from a small figure with unclear labels).13
We know of no particular upper bound to chess performance.
This suggests that so far the superhuman range in chess playing has permitted at least 14 years of further progress, and may permit much more.
Primary author: Katja Grace
Notes
- “In general, a beginner (non-scholastic) is 800, the average player is 1500, and professional level is 2200.” “Elo Rating System.” In Wikipedia, October 12, 2020. https://en.wikipedia.org/w/index.php?title=Elo_rating_system&oldid=983064897.
- “Table of top 20 rated players of all-time, with date their best ratings were first achieved…1 2882 Magnus Carlsen May 2014 23 years, 5 months” “Comparison of Top Chess Players throughout History.” In Wikipedia, July 27, 2020. https://en.wikipedia.org/w/index.php?title=Comparison_of_top_chess_players_throughout_history&oldid=969714742.
- For instance, the highest ranked player Magnus Carlsen has called AlphaZero his hero, and his strategy was noted by some to be reminiscent of it: “This original strategy drew comparisons with the neural network program Alphazero, which Carlsen called his ‘hero’ in a recent interview.” The Guardian. “Chess: Magnus Carlsen Scores in Alphazero Style in Hunt for Further Records,” June 28, 2019. http://www.theguardian.com/sport/2019/jun/28/chess-magnus-carlsen-scores-in-alphazero-style-hunts-new-record.
- “Computer Chess: The Drosophila of AI | Dr Dobb’s.” Accessed October 13, 2020. https://www.drdobbs.com/parallel/computer-chess-the-drosophila-of-ai/184405171?pgno=2. p2
- Moravec, Hans. Robot: Mere Machine to Transcendent Mind, n.d., p71, also at https://frc.ri.cmu.edu/~hpm/book97/ch3/index.html
- “Computer Chess: The Drosophila of AI | Dr Dobb’s.” Accessed October 13, 2020. https://www.drdobbs.com/parallel/computer-chess-the-drosophila-of-ai/184405171?pgno=2. p2
- “Deep Blue versus Garry Kasparov.” In Wikipedia, October 1, 2020. https://en.wikipedia.org/w/index.php?title=Deep_Blue_versus_Garry_Kasparov&oldid=981337286.
- A group of Redditors discuss:
reddit. “R/Chess – Deep Blue’s True Elo Rating?” Accessed October 13, 2020. https://www.reddit.com/r/chess/comments/7bm361/deep_blues_true_elo_rating/.
- “Comparison of Top Chess Players throughout History.” In Wikipedia, July 27, 2020. https://en.wikipedia.org/w/index.php?title=Comparison_of_top_chess_players_throughout_history&oldid=969714742.
- “The Bernstein Chess Program … was the first complete chess program, developed around 1957” “The Bernstein Chess Program.” In Chess Programming Wiki. Accessed October 13, 2020. https://www.chessprogramming.org/The_Bernstein_Chess_Program.
- “The SSDF Rating List.” Accessed October 15, 2020. https://ssdf.bosjo.net/list.htm.
- “Swedish Chess Computer Association.” In Wikipedia, September 19, 2020. https://en.wikipedia.org/w/index.php?title=Swedish_Chess_Computer_Association&oldid=979229545.
- See Figure 1:
Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, et al. “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.” ArXiv:1712.01815 [Cs], December 5, 2017. http://arxiv.org/abs/1712.01815.