Will AI see sudden progress?

By Katja Grace, 24 February 2018

Will advanced AI let some small group of people or AI systems take over the world?

AI X-risk folks and others have accumulated lots of arguments about this over the years, but I think the debate has been disappointing: few minds seem to have changed, and not much has been resolved. I still have hopes for sorting this out though, and I thought a written summary of the evidence we have so far (which often seems to live in personal conversations) would be a good start, for me at least.

To that end, I started a collection of reasons to expect discontinuous progress near the development of AGI.

I do think the world could be taken over without a step change in anything, but it seems less likely, and we can talk about the arguments around that another time.

Paul Christiano had basically the same idea at the same time, so for a slightly different take, here is his account of reasons to expect slow or fast take-off.

Please tell us in the comments or feedback box if your favorite argument for AI Foom is missing, or isn’t represented well. Or, if you want to represent it well yourself, write it up as a short essay and send it to me here, and we will gladly consider posting it as a guest blog post.

I’m also pretty curious to hear which arguments people actually find compelling, even if they are already listed. I don’t actually find any of the ones I have that compelling yet, and I think a lot of people who have thought about it do expect ‘local takeoff’ with at least substantial probability, so I am probably missing things.



4 Comments

  1. I’ve divided AI into 3 types. Type 1, after Snowden, I conjecture can amass a military win by hacking. Type 2 is able to enact the standard futurist conjecture of seed AI, where it makes better hardware and software. Type 3 is smart enough to attempt attacks like synthetic AI and mantle replicators that might be tough to defend against even next century. Much of the discussion is classified. For example, homes of the future with engineering books will require acoustic robot sensors. The easiest type 1 attack appears to be to launch a rocket to space. A similar one is to hack NASA space assets. The idea being to blot out the Sun, or hit us with a rock, or just come back from the Oort cloud with a superior fleet.

    For this reason, NASA’s interconnectivity should be cancelled. Fibre optics are harder to hack, especially with a new coating. The best optical computer appears to use a phase-change wafer as the switch; plastic holographic memory is cheaper, but glass is better. Eventually you’d have all PLCs made optical, but at first basic controls like ventilation fans and on-off engines would be easier.

    VTOL aircraft mitigate a first strike. Rail guns and lasers are both useful munitions. Bad weather makes GEO lasers, balloon lasers, captured-ice NEO lasers, L-point lasers and Lunar lasers necessary. A safe lunar bunker of spaceships can reinforce air forces; it may be necessary to keep NPP in reserve here.

    I see electric ships with a VTOL fighter jet travelling between dielectric elastomer wave-power floating islands. Entangled microwaves sent from aircraft can look for bunkers. Jeep’s Hurricane prototype is able to mount a rail gun and swivel to track a robot. First responders will need vehicles able to find and kill robots and climb over cars before military assistance arrives. Neuro imaging will be able to pick out and pick off leaders who aren’t rational and honourable.

    3D printing shouldn’t be in space. Neither should assembly wires or Robonaut 2. NASA will need a rotating (two craft tethered) space station at Earth gravity, with enough reality-programming interior content and neuro imaging to keep astronauts sane enough to staff Lunar lasers with the right stuff. Displays will need to be light pipe now, and soon head-mounted and directional holographic, to avoid fly-spy hacking. Neuro imaging will ensure the internet is used by good humans. Fibre optics or quantum ghost imaging need to be used around critical infrastructure. Robots should not have hands. Robotics, materials science and AI researchers may already be tracked.
