Arguments against the use of such machines by armed forces are blurred in the fog of war
Imagine this futuristic scenario: a US-led coalition is closing in on Raqqa, determined to eradicate Isis. The international forces unleash a deadly swarm of autonomous, flying robots that buzz around the city tracking down the enemy.
Using face recognition technology, the robots identify and kill top Isis commanders, decapitating the organisation. Dazed and demoralised, the Isis forces collapse with minimal loss of life to allied troops and civilians.
Who would not think that a good use of technology?
As it happens, quite a lot of people, including many experts in the field of artificial intelligence, who know most about the technology needed to develop such weapons.
In an open letter published last July, a group of AI researchers warned that technology had reached such a point that the deployment of Lethal Autonomous Weapons Systems (or Laws as they are incongruously known) was feasible within years, not decades. Unlike nuclear weapons, such systems could be mass produced on the cheap, becoming the “Kalashnikovs of tomorrow.”
“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing,” they said. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”
Already, the US has broadly forsworn the use of offensive autonomous weapons. Earlier this month, the United Nations held a further round of talks in Geneva between 94 military powers aiming to draw up an international agreement restricting their use.
The chief argument is a moral one: giving robots the agency to kill humans would trample over a red line that should never be crossed.
Jody Williams, who won a Nobel Peace Prize for campaigning against landmines and is a spokesperson for the Campaign To Stop Killer Robots, describes autonomous weapons as more terrifying than nuclear arms. “Where is humanity going if some people think it’s OK to cede the power of life and death of humans over to a machine?”
There are other concerns beyond the purely moral. Would the use of killer robots lower the human costs of war, thereby increasing the likelihood of conflict? How could the proliferation of such systems be stopped? Who would be accountable when they went wrong?
The moral case against killer robots is clear enough in a philosophy seminar. The trouble is that the closer you look at their likely use in the fog of war, the harder it is to discern the moral boundaries. Robots (with limited autonomy) are already deployed on the battlefield in areas such as bomb disposal, mine clearance and antimissile systems. Their use is set to expand dramatically.
The Center for a New American Security estimates that global spending on military robots will reach $7.5bn a year by 2018, compared with the $43bn forecast to be spent on commercial and industrial robots.
The Washington-based think-tank supports the further deployment of such systems, arguing they can significantly enhance “the ability of warfighters to gain a decisive advantage over their adversaries”.
In the antiseptic prose it so loves, the arms industry draws a distinction between different levels of autonomy. The first, described as humans-in-the-loop, includes predator drones, widely used by US and other forces. Even though a drone may identify a target, it still requires a human to press the button to attack. As vividly shown in the film Eye in the Sky, such decisions can be morally agonising, balancing the importance of hitting vital targets with the risks of civilian casualties.