Major technological innovations in artificial intelligence are making worlds envisioned in sci-fi flicks like I, Robot a possibility. Humans and advanced, artificially intelligent technology are already working together to accomplish tasks, especially in the military sector.
The U.S. is a leader in military artificial intelligence (AI). In 1953, the U.S. launched “computer-guided missiles,” the precursor to the Talos missile system, which could correct its own trajectory and speed. Ten years later, the Defense Advanced Research Projects Agency invested $2 million in artificial intelligence research at the Massachusetts Institute of Technology. In 1994, the U.S. contracted its first drone – the RQ-1 Predator – and ushered in a new era of warfare.
Unmanned ground and aerial vehicles (UGVs, UAVs) are helping soldiers dismantle bombs, scout enemy territory and perform reconnaissance missions. Many argue they will make war and conflict more humane and efficient, with fewer casualties.
A variety of factors make drones and robots a sexy option for militaries. Though the initial costs of developing and building this type of technology can be rather high, the long-term savings pay off: drones and robots don’t require sleep, food, clothing or salaries. They don’t require medical supplies or assistance in combat zones – in short, they are more expendable than soldiers.
More technically speaking, AI is neither influenced by fear or emotion nor bound by the physical limitations of the human body. AI systems can share, interpret and act on information almost instantaneously and with more accuracy than humans, meaning their missions are theoretically less likely to fail.
The most fervent argument in favor of autonomous technology (AT): its potential to limit the number of military and civilian casualties in conflict.
(Besides all of that – drones and robots are just pretty damn cool).
However, the burgeoning of AI and the military use of drones and robots carry substantial implications.
The U.S. use of UAVs to target terrorists in places like Pakistan and Yemen has been met with massive criticism and protest from the Pakistani and Yemeni people. In December 2013, a U.S. drone mistakenly targeted a wedding party in Yemen, killing 12 people and injuring 14. This incident demonstrates two major shortcomings of using AT.
First, it reveals the inability to make value-based distinctions between friend and foe. The failure to make moral differentiations in combat zones – e.g., between a guerrilla fighter and a civilian, or an attacking soldier and a surrendering one – could mean bloodier wars with indiscriminate violence and killing.
Second, the case of the Yemeni wedding party illustrates the possibility of miscalculations. A human pilot might have recognized that the target was a wedding party and confirmed the information with superiors by radio, potentially saving civilian lives and avoiding political fallout.
However, drone miscalculations, or even technical breakdowns, could cause unprecedented death or accidents, spark political strife between states or lead to accidental wars.
There remains the possibility that as drone and robotic technologies develop, their level of intelligence and ability to evaluate and understand will surpass human capacities.
If future AI systems are able to act, think and evaluate in ways humans cannot understand, does that mean they will act in ways humans cannot justify?
Peter Asaro examines the justification of AT and its effects on the concept of war in his essay, “How Just Could a Robot War Be?”
He raises the issue of accidental wars caused by the purposeful manipulation of technology, technical breakdowns or even intentional actions by the technology itself. Misinterpretation of an action or intent involving AT could cause unintended wars – or at the very least, heightened conflict and tense diplomatic relations between states.
Using AT also gives nations more political cover in such situations. Nations can avoid political fallout over controversial interference by blaming technical malfunctions or miscalculations in a drone attack far more easily than they can recover from a soldier’s contentious actions.
Does this mean that states could infringe on each other’s sovereignty, escaping punishment with the excuse of “misbehaving robots”? No one can answer that question, but it begs to be asked.
This leads to another of Asaro’s points: how technology lowers the barrier of entry to war. The primary factor in deciding to go to war is weighing its costs and benefits – chief among the costs being human lives. Asaro concedes that minimizing the risk or threat of death is the goal of most military technology, but AT lowers the risk of death in war most dramatically.
It ultimately allows nations to conduct wars from thousands of miles away without sending soldiers into a combat zone. If a nation can go to war with few or no risks to its soldiers and civilians, the allure of war and its spoils could entice that nation to fight more readily.
This could promote what Lieutenant Colonel Douglas Pryer of the U.S. Army calls “forever war.” Easier wars with fewer costs and virtually no deaths could mean more wars, or never-ending war. The threat of death persuades nations to settle disputes through more peaceful means.
However, if a nation can achieve its goals without harming soldiers or civilians, why not use force? Pryer’s prudence toward AT stems from this fear of the infinite cycles of war that autonomous militaries could set in motion.
AT also violates the traditional convention of war in that “those doing the killing are not themselves willing to die.” This applies more specifically to asymmetric wars, in which one side has AT and the other does not. The fairness inherent in killing and being killed maintains a balance in war as a just means of settling disputes. If one side does not face or accept an equal probability of dying, it violates the norms of war and risks the further violation of other limits as well.
States facing asymmetric situations in wars against an autonomous military force might resort to terrorist tactics or intense guerrilla warfare to defend themselves from aggressor nations.
If one side can fight without fear of death, it calls into question the morality and justness of war.
However, as technology develops and the reality of an autonomous army complete with drone planes and robot soldiers nears, the conventions of war and our moral perspectives on it will evolve as well. Perhaps these issues will have no bearing on the justness or morality of war in the future.
A very interesting and thought-provoking piece. I would make one comment, though: I thought that drones were actually piloted remotely by human pilots and their missiles fired by remote human operators. In other words, the missile strike on the wedding party in Yemen was an error made by humans, perhaps based on bad intelligence. The history of war is littered with such errors, humans being far from infallible – and indeed the heightened emotions of soldiers in conflict zones can often lead them to commit atrocities.
But I agree with your general point that making war more remote, and especially if it’s asymmetric, could lead to it becoming at once more acceptable and less ‘just’. I would argue that has been going on for decades already with bombings and air strikes by manned aircraft.
Drones ARE remotely piloted by human pilots. I didn’t make a distinction between human “in the loop” technologies like the drone programs the U.S. is currently operating overseas, and the human “out of the loop” ones I end up talking about (autonomous technologies).
I completely agree that similar issues have already arisen from manned aircraft, and even in boots-on-the-ground situations (the Rape of Nanking). The concerns I wanted to focus on are potential future issues arising from completely autonomous technology (i.e. drone systems programmed to choose when/where/whom to strike).