Last year John and I had an interesting discussion with Joscha Bach about what ingredients of human-level artificial intelligence we seem to be missing, and how to improve AI forecasts more generally.
Thanks to Connor Flexman’s summarizing efforts, you can now learn about Joscha’s views on these questions without the effort of organizing an interview or reading a long and messy transcript.
(It’s been a while since the conversation, but I checked with Joscha that this is not an objectionably obsolete account of his views.)
Here is Connor’s shorter summary:
- Before we can implement human-level artificial intelligence (HLAI), we need to understand both mental representations and the overall architecture of a mind
- There are somewhere around 12–200 regularities like backpropagation that we still need to discover, an estimate based on known unknowns and on the complexity of the genome
- We are more than reinforcement learning running on computronium: our primate heritage supplies most of the interesting facets of mind and motivation
- AI funding is now permanently colossal, which should update our predictions
- AI practitioners learn which elements of science fiction are actually plausible, but constant hands-on practice can erode their long-term perspective
- Experience in real AI development can lead to both over- and underestimates of the difficulty of new AI projects in non-obvious ways