By Katja Grace, 29 November 2016
Last year John and I had an interesting discussion with Joscha Bach about what ingredients of human-level artificial intelligence we seem to be missing, and how to improve AI forecasts more generally.
Thanks to Connor Flexman’s summarizing efforts, you can now learn about Joscha’s views on these questions without the effort of organizing an interview or reading a long and messy transcript.
(It’s been a while since the conversation, but I checked with Joscha that this is not an objectionably obsolete account of his views.)
Here is Connor’s shorter summary:
- Before we can implement human-level artificial intelligence (HLAI), we need to understand both mental representations and the overall architecture of a mind
- There are around 12-200 regularities like backpropagation that we need to understand, based on known unknowns and genome complexity (a rough illustration of the genome argument is sketched after this list)
- We are more than reinforcement learning on computronium: our primate heritage provides most of the interesting facets of mind and motivation
- AI funding is now permanently colossal, which should update our predictions
- AI practitioners learn the constraints on which elements of science fiction are plausible, but constant practice can lead to erosion of long-term perspective
- Experience in real AI development can lead to both over- and underestimates of the difficulty of new AI projects in non-obvious ways
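For readers who want a feel for how a genome-complexity argument can bound the number of such regularities, here is a minimal back-of-envelope sketch. It is not Joscha’s actual calculation, and every parameter in it (the brain-relevant fraction, the non-redundant fraction, the bits needed to specify one regularity) is an assumption chosen purely for illustration.

```python
# Illustrative back-of-envelope only: not Joscha's actual calculation.
# Every parameter below is an assumption chosen purely for illustration.

GENOME_BASE_PAIRS = 3.2e9        # approximate size of the human genome
BITS_PER_BASE_PAIR = 2           # 4 possible bases -> 2 bits each
BRAIN_RELEVANT_FRACTION = 0.1    # assumed share of the genome that shapes brain architecture
NON_REDUNDANT_FRACTION = 0.1     # assumed share of that which is non-redundant information
BITS_PER_REGULARITY = 1e6        # assumed information needed to specify one regularity

genome_bits = GENOME_BASE_PAIRS * BITS_PER_BASE_PAIR
brain_spec_bits = genome_bits * BRAIN_RELEVANT_FRACTION * NON_REDUNDANT_FRACTION
max_regularities = brain_spec_bits / BITS_PER_REGULARITY

print(f"Genome information:           {genome_bits:.1e} bits")
print(f"Brain-specifying information: {brain_spec_bits:.1e} bits")
print(f"Rough cap on regularities:    ~{max_regularities:.0f}")
```

With these (entirely assumed) numbers the cap comes out around 64, comfortably inside the 12-200 range quoted above; the point of the exercise is only that the genome’s limited information content caps how many distinct principles a brain design can depend on.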
Human-level AI does not necessarily require knowing the exact architecture of the human mind, although that may be required for mind transfer. By analogy: we know the principles behind bird flight, yet we have constructed machines that outperform birds in virtually every facet. Yes, birds can reproduce and fuel themselves, but no one has yet combined all of our knowledge of AI into a single entity.
I propose that sentience is merely the algorithm which states, in simple terms, “I want to exist”. If you can program that statement into a self-replicating AI, even with today’s computational power and inefficiency, then you have all the ingredients required for human-level AI.
For example:
- a lidar/sonar/camera combination plus an autonomous driving algorithm is sufficient for “seeing” its environment
- a Watson-equivalent AI is sufficient for learning about an unknown environment
The creation will be large and inefficient, but so were the supercomputers of a few decades ago.
I predict it is not unreasonable for such creations to exist within the next 20-30 years, and that they will be significantly more capable than humans.
The discussion I’d like to have is: if you had a choice between becoming an artificial being or an enhanced human (using CRISPR/Cas9 technologies), which would you choose? Given current technology, I’d bet on the former.
Humanity’s evolution is about to take a massive leap over the next 100 years, slightly different from that predicted in The Time Machine, but necessary if we are to take our place among the stars.
It all depends on what kind of artificial intelligence you are making. If it is judged by behaviour, then there is no need to know everything about the brain. But if something acts like a human because of non-human mechanisms, would it be considered human-level intelligence?
The term “artificial intelligence” can be very ambiguous, despite the scientific approach.