My post last month about the Google scientists who created a neural network capable of recognizing the content of images--the network learned to pick out cats on its own, without ever being told what a cat was--was, well, cat-biased. Annalee Newitz's io9 article is a much more thorough examination of the thought processes of what may be the most successful artificial intelligence-like entity yet created. How would it think?
I'm reminded of philosopher Thomas Nagel's famous thought experiment: What is it like to be a bat?
[T]his network isn't like a human brain, though they share some characteristics. It's a new kind of (semi) intelligent entity. Let's call it XNet. Most of the news stories covering XNet have focused on how it learned to recognize humans and kitties after seeing them thousands of times, which is just the kind of thing a little kid would do. Very cuddly and relatable.
But XNet recognized some other things, too. Over at Slate, Will Oremus reports:
Dean notes that the computers "learned" a slew of concepts that have little meaning to humans. For instance, they became intrigued by "tool-like objects oriented at 30 degrees," including spatulas and needle-nose pliers.
This is, to me, the most interesting part of the research. What are the patterns in human existence that jump out at non-human intelligences? Certainly 10 million videos from YouTube do not constitute the whole of human existence, but they are a pretty good start. They reveal a lot about us that we might not have realized, like a propensity to orient tools at 30 degrees. Why does this matter, you ask? It doesn't matter to you, because you're human. But it matters to XNet.
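A quick aside on how a learned "concept" like that gets spotted at all. The sketch below is a minimal, toy illustration (plain NumPy, made-up data, not the actual Google system) of the general trick: train a small autoencoder on unlabeled inputs, then ask which inputs most strongly activate a single hidden unit. At vastly larger scale, an analogous probe is how a unit ends up labeled a "face detector," a "cat detector," or, apparently, a "tools at 30 degrees" detector.

```python
# Toy sketch of unsupervised feature learning, NOT the Google system:
# train a tiny autoencoder on unlabeled data, then ask which inputs
# most strongly excite one hidden unit. All sizes and data here are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 64))          # 1,000 unlabeled 8x8 "images", flattened

n_hidden = 16
W1 = rng.normal(0, 0.1, (64, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 64))
b2 = np.zeros(64)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(50):
    H = sigmoid(X @ W1 + b1)        # hidden activations
    R = H @ W2 + b2                 # reconstruction of the input
    err = R - X                     # reconstruction error
    # Backpropagate the squared-error loss through both layers.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = err @ W2.T * H * (1 - H)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# "What does hidden unit 3 care about?" -- rank the unlabeled inputs
# by how strongly they activate it, then eyeball the top hits.
H = sigmoid(X @ W1 + b1)
unit = 3
top = np.argsort(H[:, unit])[::-1][:5]
print("inputs that most excite unit", unit, ":", top)
```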
What else will matter to XNet? Will it really discern a meaningful difference between cats and humans? What about the difference between a tool and a human body? This kind of question is a major concern for University of Oxford philosopher Nick Bostrom, who has written about the need to program AIs so that they don't display a "lethal indifference" to humanity. In other words, he's not as worried about a Skynet scenario where the AIs want to crush humans; he's worried that AIs won't recognize humans as being any more interesting than, say, a spatula. This becomes a problem if, as MIT roboticist Cynthia Breazeal has speculated, human-equivalent machine minds won't emerge until we put them into robot bodies. What if XNet exists in a thousand robots, and they all decide for some weird reason that humans should stand completely still at 30-degree angles? That's some lethal indifference right there.
I'm not terribly concerned about future AIs turning humans into spatulas. But I am fascinated by the idea that XNet and its next iterations will start noticing patterns we never would. Already, XNet is showing signs of being a truly alien intelligence. If it's true that we cobble together our identities out of what we recognize in the world around us, what exactly would a future XNet come to think of as "itself"? Would it imagine itself as a cat, or as something oddly abstract, like an angle? We just don't know.