Smart missiles, rolling robots, and flying drones, all currently controlled by humans, are being used on the battlefield more every day. But what happens when humans are taken out of the loop and robots are left to make decisions on their own, such as whom to kill or what to bomb?
Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an “ethical governor,” a package of software and hardware that tells robots when and what to fire. His book on the subject, “Governing Lethal Behavior in Autonomous Robots,” comes out this month.
He argues that robots not only can be programmed to behave more ethically on the battlefield, but may actually respond better than human soldiers do.
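The article describes the governor only at a high level, but the core idea, a software gate that suppresses lethal action unless every encoded constraint is satisfied, can be sketched. The sketch below is purely illustrative: the class names, the constraints, and the strict zero-civilian rule are assumptions for the example, not Arkin's actual design.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """A hypothetical lethal action awaiting release. Illustrative fields only."""
    target_is_combatant: bool       # has the target been positively identified?
    civilians_in_blast_radius: int  # estimated noncombatants at risk
    engagement_authorized: bool     # do current rules of engagement apply?

def governor_permits(action: ProposedAction) -> bool:
    """Release the action only if every encoded constraint holds.

    A single violated constraint suppresses weapon release, the
    conservative default this kind of governor is meant to enforce.
    """
    constraints = [
        action.target_is_combatant,             # discrimination principle
        action.civilians_in_blast_radius == 0,  # proportionality (strict form)
        action.engagement_authorized,           # valid authorization
    ]
    return all(constraints)

# A strike on a confirmed combatant with no civilians nearby passes;
# the same strike with civilians present is blocked.
print(governor_permits(ProposedAction(True, 0, True)))  # True
print(governor_permits(ProposedAction(True, 2, True)))  # False
```

The design choice worth noting is the direction of the default: the gate denies by default and permits only when all constraints are satisfied, rather than permitting unless a rule objects.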
“Ultimately these systems could have more information to make wiser decisions than a human could make,” said Arkin. “Some robots are already stronger, faster and smarter than humans. We want to do better than people, to ultimately save more lives.”
Lethal military robots are currently deployed in Iraq, Afghanistan and Pakistan. Ground-based robots like Foster-Miller's SWORDS or QinetiQ's MAARS are armed with weapons to shoot insurgents, appendages to disarm bombs, and surveillance equipment to search buildings. Flying drones can fire at insurgents on the ground. Patriot missile batteries can detect incoming missiles and launch interceptors to destroy them.