[BRIEF NOTE] On artificial intelligence
Jan. 12th, 2010 11:47 pm
This Globe and Mail opinion piece, co-authored by Peter Singer and Agata Sagan, looks forward to the future of robots (for which, read "artificial intelligence"). Robots are becoming more and more capable, as they note.
What will happen, they wonder, if--perhaps when--robots start to evolve into complex entities, stop being objects and start developing interiority, consciousness?
Robots already perform many functions, including making cars, defusing bombs and, more menacingly, firing missiles. Children and adults play with toy robots, while vacuum-cleaning robots are sucking up dirt in a growing number of homes and – as evidenced by YouTube videos – entertaining cats. There is even a Robot World Cup, although judging by the standard of the event held in Graz, Austria, last summer, footballers have no need to feel threatened just yet. (Chess, of course, is a different matter.)
Most of the robots being developed for home use are functional in design: Gecko Systems' home-care robot looks rather like the Star Wars robot R2-D2. Honda and Sony are designing robots that look more like the “android” C-3PO. But there are already some robots with soft, flexible bodies, human-like faces and expressions, and a large repertoire of movement. Hanson Robotics has a demonstration model called Albert, whose face bears a striking resemblance to that of Albert Einstein.
At present, robots are mere items of property. But what if they become sufficiently complex to have feelings? After all, isn't the human brain just a very complex machine?
If machines can and do become conscious, will we take their feelings into account? The history of our relations with the only non-human sentient beings we have encountered so far, animals, gives no ground for confidence that we would recognize sentient robots not just as items of property, but as beings with moral standing and interests that deserve consideration.
Cognitive scientist Steve Torrance has pointed out that powerful new technologies, such as cars, computers and phones, tend to spread rapidly in an uncontrolled way. The development of a conscious robot that (who?) was not widely perceived as a member of our moral community could therefore lead to mistreatment on a large scale.
The hard question, of course, is how we could tell that a robot really was conscious and not just designed to mimic consciousness. Understanding how the robot had been programmed would provide a clue: Did the designers write the code to provide only the appearance of consciousness? If so, we would have no reason to believe that the robot was conscious.
But if the robot was designed to have human-like capacities that might incidentally give rise to consciousness, we would have a good reason to think that it really was conscious. At that point, the movement for robot rights would begin.
These last two paragraphs refer to the famous Turing test, a scenario proposed by Alan Turing wherein a computer that behaved in a manner indistinguishable from a human being would be deemed to be an intelligent entity. One problem with the Turing test is that it doesn't allow for the possibility that a computer might be able to simulate a human being simply by being a powerful machine lacking interiority; a bigger problem with the Turing test is that consciousness hasn't been strictly defined.
Thoughts?