AI Impacts Quarterly Newsletter, Jan-Mar 2023

Updates, research, and fundraising

0 comments

What we’ve learned so far from our technological temptations project

The history of geoengineering, nuclear power, and human challenge trials suggests that social norms and regulation exert powerful forces on the use of technology.

0 comments

Superintelligence Is Not Omniscience

Chaos theory allows us to rigorously show that there are ceilings on our ability to make some predictions. This post introduces an investigation exploring the relationship between chaos and intelligence in more detail.

1 comment

A policy guaranteed to increase AI timelines

A redefinition of the second is a foolproof way to increase the number of years between nearly any two events.

0 comments

You Can’t Predict a Game of Pinball

The uncertainty in the location of the pinball grows by a factor of about 5 every time the ball collides with one of the disks. After 12 bounces, an initial uncertainty in position the size of an atom grows to be as large as the disks themselves. Since you cannot measure the location of a pinball with more than atom-scale precision, it is in principle impossible to predict the motion of a pinball as it bounces between the disks for more than 12 bounces.
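
For concreteness, here is a minimal sketch of that error-growth arithmetic, assuming an atom-scale initial uncertainty of roughly 1e-10 m and disks a few centimetres across (both illustrative values); the factor-of-5 growth per bounce is the figure quoted above.

# Minimal sketch (illustrative values): exponential growth of position
# uncertainty in the pinball example described above.
initial_uncertainty_m = 1e-10   # roughly the size of an atom (assumed)
disk_radius_m = 0.02            # disks a few centimetres across (assumed)
growth_per_bounce = 5           # growth factor quoted in the post

uncertainty_m = initial_uncertainty_m
bounces = 0
while uncertainty_m < disk_radius_m:
    uncertainty_m *= growth_per_bounce
    bounces += 1

print(f"Uncertainty reaches disk size after {bounces} bounces")
# With these numbers the loop stops at 12 bounces, matching the post.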

1 comment

How bad a future do ML researchers expect?

Every survey respondent’s guess about the future, lined up by expectation of the worst.

6 comments

How popular is ChatGPT? Part 2: slower growth than Pokémon GO

ChatGPT had unusually fast user growth, but it did not set a record.

0 comments

Scoring forecasts from the 2016 “Expert Survey on Progress in AI”

Patrick Levermore, 1 March 2023. Summary: This document looks at the predictions made by AI experts in The 2016 Expert Survey on Progress in AI, analyses the predictions on ‘Narrow tasks’, and gives a Brier score to
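
For readers unfamiliar with the scoring rule mentioned here, a minimal sketch of the standard Brier score (the mean squared difference between forecast probabilities and binary outcomes); the forecasts and outcomes below are hypothetical, not taken from the survey analysis.

import numpy as np

def brier_score(forecast_probs, outcomes):
    """Standard Brier score: mean squared difference between forecast
    probabilities and binary outcomes (1 = happened, 0 = did not).
    0 is perfect; always guessing 50% scores 0.25."""
    forecast_probs = np.asarray(forecast_probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((forecast_probs - outcomes) ** 2))

# Hypothetical example: three forecasts, two of which came true.
print(brier_score([0.9, 0.2, 0.6], [1, 0, 1]))  # 0.07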

0 comments

How popular is ChatGPT? Part 1: more popular than Taylor Swift

What does search volume tell us about public attention on ChatGPT and AI in general?

0 comments

The public supports regulating AI for safety

Some survey data

0 comments

Whole Bird Emulation requires Quantum Mechanics

Birds see magnetic fields using quantum mechanical spin states in their retina. Whole Bird Emulation requires much higher resolution than you might expect.

2 comments

Framing AI strategy

Ten angles on AI strategy

0 comments

Product safety is a poor model for AI governance

Relying on safety-checking after development is not sufficient.

0 comments

We don’t trade with ants

AI’s relationship with us will not be like our relationship with ants

0 comments

Let’s think about slowing down AI

Eighteen or so reasons to reflexively dismiss slowing down AI, and why I think they fail, leaving the idea worth thinking about seriously.

0 comments

December 2022 updates and fundraising

What we’ve been up to and funding we think we could use well

0 comments

Against a General Factor of Doom

If you ask people a bunch of specific doomy questions, and their answers are suspiciously correlated, they might be expressing their p(Doom) for each question instead of answering the questions individually. Using a general factor of doom is unlikely to be an accurate depiction of reality. The future is likely to be surprisingly doomy in some ways and surprisingly tractable in others.
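
One way to make “suspiciously correlated” concrete, as a rough sketch: take a respondents-by-questions matrix of doom probabilities and check how much of its variance a single factor (here, the first principal component) explains. The answer matrix below is hypothetical.

import numpy as np

# Hypothetical matrix: rows are respondents, columns are distinct doomy
# questions, entries are the probabilities each respondent gave.
answers = np.array([
    [0.9, 0.8, 0.85, 0.9],
    [0.2, 0.1, 0.15, 0.2],
    [0.5, 0.6, 0.55, 0.5],
    [0.7, 0.75, 0.8, 0.7],
])

# Share of variance captured by the first principal component; a value
# near 1 is the "answers driven by one p(Doom)" pattern described above.
centered = answers - answers.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
share = singular_values[0] ** 2 / np.sum(singular_values ** 2)
print(f"First factor explains {share:.0%} of the variance")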

0 comments

Notes on an Experiment with Markets

AI Impacts decided to try using Manifold Markets to help us plan social events in the evenings of our work retreat. Here are some notes from that experiment.

0 comments

Counterarguments to the basic AI x-risk case

Sixteen weaknesses in the classic argument for AI risk.

4 comments

What do ML researchers think about AI in 2022?

Katja Grace Aug 4 2022

First findings from the new 2022 Expert Survey on Progress in AI.

51 comments

Why work at AI Impacts?

Katja Grace Mar 2022

My grounds for spending my time on this: a hand-wavy account.

0 comments

Observed patterns around major technological advancements

By Rick Korzekwa, 2 February 2022. Summary: In this post I outline apparent regularities in how major new technological capabilities and methods come about. I have not rigorously checked to see how broadly they hold,

0 comments

Beyond fire alarms: freeing the groupstruck

Katja Grace Sept 2021

Fire alarms are the wrong way to think about the public AGI conversation.

2 comments

Vignettes workshop

Daniel Kokotajlo June 2021

Write down how AI will go down!

0 comments

April files

Katja Grace April 2021

Internal drafts for feedback

0 comments

Coherence arguments imply a force for goal-directed behavior

Katja Grace Mar 2021

Behavior that is permitted by the ‘coherence arguments’ may still be discouraged by them.

1 comment

Misalignment and misuse: whose values are manifest?

Katja Grace Nov 2020

Are misalignment and misuse helpful catastrophe categories?

1 comment

Automated intelligence is not AI

Katja Grace Nov 2020

Sometimes we think of ‘artificial intelligence’ as whatever technology ultimately automates human cognitive labor…

1 comment

Relevant pre-AGI possibilities

Daniel Kokotajlo June 2020

Brainstorm of ways the world could be relevantly different by the time advanced AGI arrives

0 comments

Description vs simulated prediction

Rick Korzekwa April 2020

What are we trying to do when we look at history to inform forecasting?

0 comments

Discontinuous progress in history: an update

Katja Grace April 2020
We’ve been looking for historic cases of discontinuously fast technological progress, to help with reasoning about the likelihood and consequences of abrupt progress in AI capabilities. We recently finished expanding this investigation to 37 technological trends. This blog post is a quick update on our findings. See the main page on the research and its outgoing links for more details.

1 comment

Takeaways from safety by default interviews

Asya Bergal

Last year, several researchers at AI Impacts (primarily Robert Long and I) interviewed prominent researchers inside and outside of the AI safety field who are relatively optimistic about advanced AI being developed safely. These interviews were originally intended to focus narrowly on reasons for optimism, but we ended up covering a variety of topics, including AGI timelines, the likelihood of current techniques leading to AGI, and what the right things to do in AI safety are right now. (…)

0 comments

Atari early

By Katja Grace, 1 April 2020 DeepMind announced that their Agent57 beats the ‘human baseline’ at all 57 Atari games usually used as a benchmark. I think this is probably enough to resolve one of

0 comments

Three kinds of competitiveness

By Daniel Kokotajlo, 30 March 2020 In this post, I distinguish between three different kinds of competitiveness — Performance, Cost, and Date — and explain why I think these distinctions are worth the brainspace they

0 comments

AGI in a vulnerable world

By Asya Bergal, 25 March 2020 I’ve been thinking about a class of AI-takeoff scenarios where a very large number of people can build dangerous, unsafe AGI before anyone can build safe AGI. This seems

0 comments

Cortés, Pizarro, and Afonso as precedents for takeover

Daniel Kokotajlo, 29 February 2020 Epistemic status: I am not a historian, nor have I investigated these case studies in detail. I admit I am still uncertain about how the conquistadors were able to colonize

4 comments

Robin Hanson on the futurist focus on AI

By Asya Bergal, 13 November 2019 Robert Long and I recently talked to Robin Hanson—GMU economist, prolific blogger, and longtime thinker on the future of AI—about the amount of futurist effort going into thinking about

0 comments

Rohin Shah on reasons for AI optimism

By Asya Bergal, 31 October 2019 I along with several AI Impacts researchers recently talked to Rohin Shah about why he is relatively optimistic about AI systems being developed safely. Rohin Shah is a 5th

0 comments

The unexpected difficulty of comparing AlphaStar to humans

By Rick Korzekwa, 17 September 2019 Artificial intelligence defeated a pair of professional Starcraft II players for the first time in December 2018. Although this was generally regarded as an impressive achievement, it quickly became

3 comments

Paul Christiano on the safety of future AI systems

By Asya Bergal, 11 September 2019 As part of our AI optimism project, we talked to Paul Christiano about why he is relatively hopeful about the arrival of advanced AI going well. Paul Christiano works

0 comments

Soft takeoff can still lead to decisive strategic advantage

By Daniel Kokotajlo, 11 September 2019 Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. [Epistemic status: Argument by analogy to historical cases. Best case scenario it’s just one argument among many.

0 comments

Ernie Davis on the landscape of AI risks

By Robert Long, 23 August 2019 Earlier this month, I spoke with Ernie Davis about why he is skeptical that risks from superintelligent AI are substantial and tractable enough to merit dedicated work. This was

0 comments

Primates vs birds: Is one brain architecture better than the other?

By Tegan McCaslin, 28 February 2019 The boring answer to that question is, “Yes, birds.” But that’s only because birds can pack more neurons into a walnut-sized brain than a monkey with a brain four

1 comment

Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post

By Daniel Kokotajlo, 2 July 2019 (Figure: the “four main determinants of forecasting accuracy.”) Experience and data from the Good Judgment Project (GJP) provide important evidence about how to make accurate predictions. For a

15 comments

Reinterpreting “AI and Compute”

This is a guest post by Ben Garfinkel. We revised it slightly, at his request, on February 9, 2019. A recent OpenAI blog post, “AI and Compute,” showed that the amount of computing power consumed

2 comments

On the (in)applicability of corporate rights cases to digital minds

This is a guest cross-post by Cullen O’Keefe, 28 September 2018. High-Level Takeaway: The extension of rights to corporations likely does not provide a useful analogy to the potential extension of rights to digital minds. Introduction: Examining

0 comments

Interpreting AI compute trends

This is a guest post by Ryan Carey, 10 July 2018. We know that over the last few years, AI experiments have used much more computation than previously. But just last month, an investigation by

7 comments

Occasional update July 5 2018

By Katja Grace, 5 July 2018 Before I get to substantive points, there has been some confusion over the distinction between blog posts and pages on AI Impacts. To make it clearer, this blog post

0 comments

The tyranny of the god scenario

By Michael Wulfsohn, 6 April 2018 I was convinced. An intelligence explosion would result in the sudden arrival of a superintelligent machine. Its abilities would far exceed those of humans in ways we can’t imagine

12 comments

Brain wiring: The long and short of it

By Tegan McCaslin, 30 March 2018 When I took on the task of counting up all the brain’s fibers and figuratively laying them end-to-end, I had a sense that it would be relatively easy–do a

0 comments

Will AI see sudden progress?

By Katja Grace, 24 February 2018 Will advanced AI let some small group of people or AI systems take over the world? AI X-risk folks and others have accrued lots of arguments about this over

4 comments

GoCAS talk on AI Impacts findings

By Katja Grace, 27 November 2017 Here is a video summary of some highlights from AI Impacts research over the past years, from the GoCAS Existential Risk workshop in Göteborg in September. Thanks to the folks there

2 comments

Price performance Moore’s Law seems slow

By Katja Grace, 26 November 2017 When people make predictions about AI, they often assume that computing hardware will carry on getting cheaper for the foreseeable future, at about the same rate that it usually

1 comment

When do ML Researchers Think Specific Tasks will be Automated?

By Katja Grace, 26 September 2017 We asked the ML researchers in our survey when they thought 32 narrow, relatively well-defined tasks would be feasible for AI. Eighteen of them were included in our paper

0 comments

What do ML researchers think you are wrong about?

By Katja Grace, 25 September 2017 So, maybe you are concerned about AI risk. And maybe you are concerned that many people making AI are not concerned enough about it. Or not concerned about the

1 comment

AI hopes and fears in numbers

By Katja Grace, 28 June 2017 People often wonder what AI researchers think about AI risk. A good collection of quotes can tell us that worry about AI is no longer a fringe view: many big names are concerned. But

6 comments

Some survey results!

By Katja Grace, 8 June 2017 We put the main results of our survey of machine learning researchers on AI timelines online recently—see here for the paper. Apologies for the delay—we are trying to avoid spoiling the newsworthiness of the

17 comments

Changes in funding in the AI safety field

Guest post by Seb Farquhar, originally posted to the Center for Effective Altruism blog. 20 February 2017 The field of AI Safety has been growing quickly over the last three years, since the publication of

19 comments

Joscha Bach on remaining steps to human-level AI

By Katja Grace, 29 November 2016 Last year John and I had an interesting discussion with Joscha Bach about what ingredients of human-level artificial intelligence we seem to be missing, and how to improve AI forecasts more generally. Thanks

5 comments

Tom Griffiths on Cognitive Science and AI

This is a guest post by Finan Adamson, 8 September 2016 Prof. Tom Griffiths is the director of the Computational Cognitive Science Lab and the Institute of Cognitive and Brain Sciences at UC Berkeley. He

0 comments

What if you turned the world’s hardware into AI minds?

By Katja Grace, 4 September 2016 In a classic ‘AI takes over the world’ scenario, one of the first things an emerging superintelligence wants to do is steal most of the world’s computing hardware and repurpose it to running the AI’s

3 comments

Friendly AI as a global public good

By Katja Grace, 8 August 2016 A public good, in the economic sense, can be (roughly) characterized as a desirable good that is likely to be undersupplied, or not supplied at all, by private companies.

0 comments

Error in Armstrong and Sotala 2012

By Katja Grace, 17 May 2016 Can AI researchers say anything useful about when strong AI will arrive? Back in 2012, Stuart Armstrong and Kaj Sotala weighed in on this question in a paper called ‘How We’re

6 comments

Metasurvey: predict the predictors

By Katja Grace, 12 May 2016 As I mentioned earlier, we’ve been making a survey for AI researchers. The survey asks when AI will be able to do things like build a Lego kit according to the instructions, be a surgeon, or radically

0 comments

Concrete AI tasks bleg

By Katja Grace, 30 March 2016 We’re making a survey. I hope to write soon about our general methods and plans, so anyone kind enough to criticize them has the chance. Before that though, we have a different request: we want a list

23 comments

Mysteries of global hardware

By Katja Grace, 7 March 2016 This blog post summarizes recent research on our Global Computing Capacity page. See that page for full citations and detailed reasoning. We recently investigated this intriguing puzzle: FLOPS (then) apparently performed by all of the world’s computing

2 comments

Recently at AI Impacts

By Katja Grace, 24 November 2015 We’ve been working on a few longer term projects lately, so here’s an update in the absence of regular page additions. New researchers Stephanie Zolayvar and John Salvatier have recently joined us,

0 comments

AI timelines and strategies

AI Impacts sometimes invites guest posts from fellow thinkers on the future of AI. These are not intended to relate closely to our current research, nor to necessarily reflect our views. However, we think they are worthy contributions to the discussion of AI forecasting and strategy. This

3 comments

Introducing research bounties

By Katja Grace, 7 August 2015 Sometimes we like to experiment with novel research methods and formats. Today we are introducing ‘AI Impacts Research Bounties’, in which you get money if you send us inputs to some of our research. To start, we

2 comments

Time flies when robots rule the earth

By Katja Grace, 28 July 2015 This week Robin Hanson is finishing off his much anticipated book, The Age of Em: Work, Love and Life When Robots Rule the Earth. He recently told me that it would be helpful to include

1 comment

Event: Exercises in Economic Futurism

By Katja Grace, 15 July 2015 On Thursday July 30th Robin Hanson is visiting again, and this time we will be holding an informal workshop on how to usefully answer questions about the future, with an emphasis on economic approaches.

1 comment

Steve Potter on neuroscience and AI

By Katja Grace, 13 July 2015 Prof. Steve Potter works at the Laboratory of Neuroengineering in Atlanta, Georgia. I wrote to him after coming across his old article, ‘What can AI get from Neuroscience?’ I wanted to know how neuroscience might contribute to AI

0 comments

New funding for AI Impacts

By Katja Grace, 4 July 2015 AI Impacts has received two grants! We are grateful to the Future of Humanity Institute (FHI) for $8,700 to support work on the project until September 2015, and the Future of Life Institute (FLI)

0 comments

Update on all the AI predictions

By Katja Grace, 5 June 2015 For the last little while, we’ve been looking into a dataset of individual AI predictions, collected by MIRI a couple of years ago. We also previously gathered all the surveys about AI predictions that we

7 comments

Why do AGI researchers expect AI so soon?

By Katja Grace, 24 May 2015 People have been predicting when human-level AI will appear for many decades. A few years ago, MIRI made a big, organized collection of such predictions, along with helpful metadata. We are grateful, and just put up a page

0 comments

Supporting AI Impacts

By Katja Grace, 21 May 2015 We now have a donations page. If you like what we are doing as much as anything else you can think of to spend marginal dollars on, I encourage you to support this project! Money will go to more

1 comment

A new approach to predicting brain-computer parity

By Katja Grace, 7 May 2015 How large does a computer need to be before it is ‘as powerful’ as the human brain? This is a difficult question, which people have answered before, with much uncertainty. We have

7 comments

Preliminary prices for human-level hardware

By Katja Grace, 4 April 2015 Computer hardware has been getting cheaper for about seventy-five years now. Relatedly, large computing projects can afford to be increasingly large. If you think the human brain is something like

8 comments

What’s up with nuclear weapons?

By Katja Grace, 27 February 2015 When nuclear weapons were first built, the explosive power you could extract from a tonne of explosive skyrocketed. But why? Here’s a guess. Until nuclear weapons, explosives were based on chemical reactions. Whereas

6 comments

Multipolar research questions

By Katja Grace, 11 February 2015 The Multipolar AI workshop we ran a fortnight ago went well, and we just put up a list of research projects from it. I hope this is helpful inspiration to those of you thinking about applying

0 comments

How AI timelines are estimated

By Katja Grace, 9 February 2015 A natural approach to informing oneself about when human-level AI will arrive is to check what experts who have already investigated the question say about it. So we made this list of analyses that we could find. It’s

3 comments

At-least-human-level-at-human-cost AI

By Katja Grace, 7 February 2015 Often, when people are asked ‘when will human-level AI arrive?’ they suggest that it is a meaningless or misleading term. I think they have a point. Or several, though probably

1 comment

Penicillin and syphilis

By Katja Grace, 2 February 2015 Penicillin was a hugely important discovery. But was it a discontinuity in the normal progression of research, or just an excellent discovery which followed a slightly less excellent discovery,

3 comments

The slow traversal of ‘human-level’

By Katja Grace, 21 January 2015 Once you have normal-human-level AI, how long does it take to get Einstein-level AI? We have seen that a common argument for ‘not long at all’ based on brain size does not

7 comments

Making or breaking a thinking machine

By Katja Grace, 18 January 2015 Here is a superficially plausible argument: the brains of the slowest humans are almost identical to those of the smartest humans. And thus—in the great space of possible intelligence—the ‘human-level’ band must be very narrow. Since

7 comments

Are AI surveys seeing the inside view?

By Katja Grace, 15 January 2015 An interesting thing about the survey data on timelines to human-level AI is the apparent incongruity between answers to ‘when will human-level AI arrive?’ and answers to ‘how much of the way to human-level AI have

1 comment

Event: Multipolar AI workshop with Robin Hanson

By Katja Grace, 14 January 2015 On Monday 26 January we will be holding a discussion on promising research projects relating to ‘multipolar’ AI scenarios. That is, future scenarios where society persists in containing a large number of similarly

3 comments

Michie and overoptimism

By Katja Grace, 12 January 2015 We recently wrote about Donald Michie’s survey on timelines to human-level AI. Michie’s survey is especially interesting because it was taken in 1972, which is three decades earlier than any other surveys we

1 comment

Were nuclear weapons cost-effective explosives?

By Katja Grace, 11 January 2015 Nuclear weapons were radically more powerful per pound than any previous bomb. Their appearance was a massive discontinuity in the long-run path of explosive progress, which we have lately discussed.

1 comment

A summary of AI surveys

By Katja Grace, 10 January 2015 If you want to know when human-level AI will be developed, a natural approach is to ask someone who works on developing AI. You might however be put off by such predictions being regularly criticized

2 comments

AI and the Big Nuclear Discontinuity

By Katja Grace, 9 January 2015 As we’ve discussed before, the advent of nuclear weapons was a striking technological discontinuity in the effectiveness of explosives. In 1940, no one had ever made an explosive twice as

1 comment

The Biggest Technological Leaps

By Katja Grace, 9 January 2015 Over thousands of years, humans became better at producing explosions. A weight of explosive that would have blown up a tree stump in the year 800 could have blown

4 comments

The AI Impacts Blog

By Katja Grace, 9 January 2015 Welcome to the AI Impacts blog. AI Impacts is premised on two ideas (at least!): The details of the arrival of human-level artificial intelligence matter. Seven years to prepare is very different from

1 comment