Arguments against the use of such machines by armed forces are blurred in the fog of war

by John Thornhill via The Financial Times

Imagine this futuristic scenario: a US-led coalition is closing in on Raqqa, determined to eradicate Isis. The international forces unleash a deadly swarm of autonomous, flying robots that buzz around the city tracking down the enemy.

Using face recognition technology, the robots identify and kill top Isis commanders, decapitating the organisation. Dazed and demoralised, the Isis forces collapse with minimal loss of life to allied troops and civilians.

Who would not think that a good use of technology?

As it happens, quite a lot of people, including many experts in the field of artificial intelligence, who know most about the technology needed to develop such weapons.

In an open letter published last July, a group of AI researchers warned that technology had reached such a point that the deployment of Lethal Autonomous Weapons Systems (or Laws, as they are incongruously known) was feasible within years, not decades. Unlike nuclear weapons, such systems could be mass produced on the cheap, becoming the “Kalashnikovs of tomorrow.”

“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing,” they said. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Already, the US has broadly forsworn the use of offensive autonomous weapons. Earlier this month, the United Nations held a further round of talks in Geneva between 94 military powers aiming to draw up an international agreement restricting the use of such weapons.

The chief argument is a moral one: giving robots the agency to kill humans would trample over a red line that should never be crossed.

Jody Williams, who won a Nobel Peace Prize for campaigning against landmines and is a spokesperson for the Campaign to Stop Killer Robots, describes autonomous weapons as more terrifying than nuclear arms. “Where is humanity going if some people think it’s OK to cede the power of life and death of humans over to a machine?”

There are other concerns beyond the purely moral. Would the use of killer robots lower the human costs of war, thereby increasing the likelihood of conflict? How could proliferation of such systems be stopped? Who would be accountable when they went wrong?

This moral case against killer robots is clear enough in a philosophy seminar. The trouble is that the closer you look at their likely use in the fog of war, the harder it is to discern the moral boundaries. Robots (with limited autonomy) are already deployed on the battlefield in areas such as bomb disposal, mine clearance and antimissile systems. Their use is set to expand dramatically.

The Center for a New American Security estimates that global spending on military robots will reach $7.5bn a year by 2018, compared with the $43bn forecast to be spent on commercial and industrial robots.

The Washington-based think-tank supports the further deployment of such systems, arguing they can significantly enhance “the ability of warfighters to gain a decisive advantage over their adversaries”.

In the antiseptic prose it so loves, the arms industry draws a distinction between different levels of autonomy. The first, described as human-in-the-loop, includes Predator drones, widely used by US and other forces. Even though a drone may identify a target, it still requires a human to press the button to attack. As vividly shown in the film Eye in the Sky, such decisions can be morally agonising, balancing the importance of hitting vital targets with the risks of civilian casualties.

See full article here.