Tuesday, February 17, 2009
A new Navy-funded report warns against hasty deployment of war robots and urges programmers to include ethics subroutines, a warrior code of sorts. The alternative, they say, is the possibility of a robotic atrocity akin to the Terminator or other sci-fi movies. (Source: Warner Brothers)
Robots must learn to obey a warrior code, but increasing intelligence may make keeping them from turning on their masters increasingly difficult.
Robots going rogue and killing their human masters is rich science-fiction fodder, but could it become reality? Some researchers are beginning to ask that question as advances in artificial intelligence continue and the world's high-tech nations begin to deploy war robots to the battlefield. Currently, the U.S. armed forces use many robots, but each ultimately has a human behind the trigger. However, there are plans to develop and deploy fully autonomous systems as the technology improves.
Some mistakenly believe that such robots would only be able to operate within a defined set of behaviors. Patrick Lin, the chief compiler of a new U.S. Navy-funded report, explains: “There is a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person.”
The new report points out that the sheer size of artificial intelligence projects will likely make their code impossible to fully analyze and dissect for possible dangers. With hundreds of programmers working on millions of lines of code for a single war robot, says Dr. Lin, no one has a clear understanding of what is going on, at a small scale, across the entire code base.
He says the key to avoiding robotic rebellion is to include “learning” logic that teaches the robot the rights and wrongs of ethical warfare. This logic would be mixed with traditional rules-based programming.
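The report itself gives no implementation details, but the hybrid approach described, a learned component whose decisions are constrained by hand-written rules, can be sketched roughly as follows. All names and the toy decision logic here are illustrative assumptions, not anything from the Navy report:

```python
# Hypothetical sketch: a learned policy proposes an action, but
# hard-coded rules of engagement get the final veto. This is an
# illustration of the general architecture only, not the report's design.

from dataclasses import dataclass

@dataclass
class Action:
    target_type: str   # e.g. "combatant" or "civilian" (invented labels)
    force_level: int   # 0 = observe only; higher values mean more force

def learned_policy(observation: str) -> Action:
    """Stand-in for a trained model; maps a few example inputs."""
    if observation == "armed_combatant":
        return Action("combatant", force_level=3)
    return Action("civilian", force_level=1)

def ethical_veto(action: Action) -> Action:
    """Rules-based layer: overrides the learned output when it
    violates a fixed constraint, regardless of what was learned."""
    if action.target_type != "combatant":
        return Action(action.target_type, force_level=0)  # never engage
    return action

def decide(observation: str) -> Action:
    # The learned suggestion always passes through the rule layer.
    return ethical_veto(learned_policy(observation))

print(decide("civilian_sighted").force_level)  # 0: rule overrides the model
print(decide("armed_combatant").force_level)   # 3: rule permits engagement
```

The design point is that the rule layer sits outside the learned component, so even if the learning logic misbehaves in ways no single programmer anticipated, the fixed constraints still apply.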
The new report looks at many issues surrounding the field of killer robots. In addition to code malfunction, another potential threat would be a terrorist attack that reprograms the robots, turning them against their owners. And one tricky issue discussed is the question of who would take the blame for a robotic atrocity: the robot, the programmers, the military, or the U.S. President.
Full story here.