Before I get to substantive points, there has been some confusion over the distinction between blog posts and pages on AI Impacts. To make it clearer, this blog post shall proceed in a way that is silly, to distinguish it from the very serious and authoritative reference pages that comprise the bulk of AI Impacts.
Now for a picture of a duck, to remind you that this is silly, and also that we are all fragile biological organisms that evolved because apparently that’s what happens if you just leave a bunch of wet mud on a space rock for long enough, alone and vulnerable in a hostile and uncharted world.
And now to the exciting facts on the ground, as we try to marginally rectify that situation.
Tegan McCaslin is now working at AI Impacts, as far as I can tell about five hundred hours a week. It’s going well, except that the rate at which she sends me extensive, carefully researched articles about neuroanatomy and genetics and such to review is in slight tension with my preferred lifestyle.
We also welcome Carl Shulman as occasional consultant on everything, and reviewer of things (especially articles about neuroanatomy and genetics and such…).
Justis Mills joined us last year, to work on miscellany. He usually does one of those software related things, but in his spare time he has been making illustrative timelines of near-term AI predictions, checking that nothing on AI Impacts is obviously false, and fixing bits of it, and such.
We mostly-farewell Michael Wulfsohn—an Australian economist, called to us from the Central Bank of Lesotho by WaitButWhy—who is winding up his assessment of how great avoiding human extinction might be (having already estimated how much of a bother it might be). He has gone to get a PhD, the better to save the world.
Our implicit office has moved from a spare room in my house to Tegan’s house. This is good, because she has an absurdly nice rug, and an excellent snack drawer, and it is a minor ambition of mine to head an organization which has Bay Area start-up quality snack areas.
We have also been trying out co-working with other save-the-world-something-something-AI related folks around Berkeley, which seems promising, and with Oxford, which also seems promising, though not as Bay Area convenient as we would like.
Things we want
We want to hire more people. Relatedly, we would like money. We think these things would nicely complement our many brilliant and tractable research ideas and our ambition. We also want to have our own T-shirts, but that is on the back-burner.
Things we got
$100,000 from The Open Philanthropy Project, for the next two years.
$39,000 from another donor to support several specific research projects from our list of promising research projects.
You can mostly see what we are up to by watching various parts of our front page, so I shan’t go into it all, except to say that I for one am especially enjoying my investigation into reasons to (or not to) expect AI discontinuities. If you too are fascinated by this topic, and want to give especially many pointed comments on it, you can do so on this doc version.
Our survey became the 16th most discussed journal article of 2017, so that was neat. If I recall, I was at least relatively in favor of not writing a paper about it, so I was probably wrong there. (Good job, probably, everyone else!) I suspect this success is related to the journalists who have been writing to me endlessly, and to my being invited to give talks, and go to Chile, and be on the radio, and that kind of thing. Which has all been an unusual experience.
How you can get involved
If you want to do this kind of work, consider applying for a job with us, or just doing one of these projects anyway, and sending it to us. If you want to chat about this kind of research, or spy on it, or help a tiny bit noncommittally, ask us nicely and we might add you to our fairly open Slack. If you want to help in some other way, we especially welcome money and any good researchers you have hanging around, but are open to other ideas.