Robot Wars? Autonomous Weapons and International Humanitarian Law

About The Author

Joseph Mahon (Former Regular Writer)

Originally a theology graduate from the University of Edinburgh, Joseph is currently undertaking an LLM in Public Law at UCL. His primary interests lie in three fields: public law, human rights, and international law. Outside of law, he plays football, tennis and cricket and is a die-hard supporter of Bath Rugby.

Technologies are morally neutral until we apply them.

William Gibson

Autonomous Lethal Weapons (ALWs) (also known as Autonomous Weapons Systems, Lethal Autonomous Weapons or – a personal favourite – Killer Robots) are weapons that operate independently of humans. They can move, navigate, identify targets, and choose to attack without human direction. Equipped with Artificial Intelligence (AI), they can learn and adapt to changing circumstances and can make ‘decisions’ by themselves. Where current technologies operate with humans ‘in the loop’ as part of the decision-making process, ALWs – in contrast – have humans ‘outside the loop’. Once deployed according to a set of rules, ALWs are on their own.

Killer robots roaming the earth is a spectre that has, naturally, led to some opposition. It raises questions of ethics and philosophy, asking humanity how it wants war to progress. It has led experts including Elon Musk, Stephen Hawking and Noam Chomsky to pen a letter to the UN calling for a pre-emptive ban on such weapons lest they usher in a ‘third revolution in warfare’. There is a Campaign to Stop Killer Robots. And, in case this all seemed too vague, the states parties to the UN Convention on Certain Conventional Weapons 1980 (CCW 1980) have met to discuss the issue three times, reportedly drawing record crowds.

It is therefore perhaps surprising that there is little in International Humanitarian Law (IHL) that prohibits elements of autonomy in weapon systems. While there are outright bans on certain weapons – lasers designed to cause blindness, for example, are prohibited by Article 1 of Protocol IV to the CCW 1980 – there are no such bans on autonomous weapons or weapons with autonomous characteristics, such as homing missiles. On the contrary, autonomy is often viewed as an improvement that complements IHL by making weapons more effective, in many cases minimising incidental harm to civilians.

It follows that there are two key questions to be answered in relation to ALWs, both of which this article seeks to explore. First, will the weapons ever be able to comply with IHL? Second, if it is established that they will be, should they be allowed to?

Can ALWs Make the Necessary Assessments?

Distinction

Distinction is the core principle of IHL. It provides that belligerent forces must distinguish between combatants and civilians. Enemy combatants can be targeted; civilians cannot. Certain civilian loss is permitted as unintentional collateral, but this depends on the necessity of the military objective sought and on a proportionality assessment that weighs the gain of that objective against the loss of civilian life.

As this author has written in a previous article for Keep Calm Talk Law, the characteristics of modern warfare are making this assessment more difficult. Modern wars are predominantly non-international: fought not on open battlefields but in civilian areas where the population is concentrated. Combatants regularly disguise themselves as civilians and hide among civilian surroundings. Against this backdrop, distinction becomes not only more problematic but more pressing.

Without exploring the intricacies of how distinction assessments are made (again, refer to this piece), it is sufficient to say that, in an attempt to provide clarity for troops, states adopt policies such as the Continuous Combat Function to identify combatants. These work up to a point, but are of little use in the heat of the moment. When a soldier is faced with an individual who may or may not be a combatant, they will not recall a policy. They will use human judgement. As it currently stands, therefore, the distinction assessment is subjective, informed at least in part by emotional intelligence.

As Human Rights Watch argues, the distinction assessment depends on the ability to gauge intention; this means interpreting subtle, human clues and nuances of behaviour that machines – programmed in advance – would be unable to pick up on. Does the injured soldier intend to pick up his rifle and begin firing, or has the injury taken its toll on him? Is he ‘hors de combat’ and therefore protected from attack under Article 41 of Additional Protocol I to the Geneva Conventions?

Indeed, while distinction is generally easier than proportionality, traditional IHL problems rear their heads. It can be assumed that the technology will develop to the point where an ALW can tell a combatant carrying a loaded gun from a civilian feeding a baby. However, concerns remain about combatants who are laying a trap or concealing themselves among civilians: the standard assessments, difficult enough already, potentially become harder without human judgement. As Dale Stephens argues, moral and ethical agency is needed in decisions applying IHL: a robot could not emulate this.

Proportionality

Proportionality poses still greater problems. There are two elements to it: the first – a quantitative judgement – works out how much civilian life will be lost as a result of an attack; the second – a qualitative one – asks whether that loss of life would be, in the words of Article 51(5)(b) of Additional Protocol I to the Geneva Conventions, “excessive in relation to the concrete and direct military advantage anticipated”. It is easy to concede that a machine could do the former, and could even do so far more effectively than a human, but it is questionable whether it could ever do the latter. After all, terms like ‘excessive’ have never reached consensus among humans, so it seems an impossible feat to programme the meaning of such a term into a robot.
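To make that contrast concrete, the sketch below (a purely illustrative Python fragment; the data structure, figures and function names are hypothetical and not drawn from any real targeting system) separates the two elements. The quantitative element reduces to an estimation task; the qualitative element has no agreed definition of ‘excessive’ to encode, so it is left deliberately unimplemented.

```python
from dataclasses import dataclass

@dataclass
class StrikeOption:
    """Hypothetical description of a proposed attack (illustrative only)."""
    civilians_within_blast_radius: int
    probability_of_civilian_harm: float     # 0.0-1.0, from sensor or model estimates
    anticipated_military_advantage: float   # no agreed unit exists for this quantity

def estimate_civilian_loss(option: StrikeOption) -> float:
    """Quantitative element: an estimation task a machine could plausibly perform,
    perhaps better than a human, given good sensor data."""
    return option.civilians_within_blast_radius * option.probability_of_civilian_harm

def is_excessive(civilian_loss: float, military_advantage: float) -> bool:
    """Qualitative element: Article 51(5)(b) asks whether the loss would be 'excessive
    in relation to the concrete and direct military advantage anticipated'. There is
    no consensus meaning of 'excessive' to translate into code; any threshold written
    here would be the programmer's value judgement, not a rule of law."""
    raise NotImplementedError("'excessive' has no agreed, machine-readable definition")

def proportionality_assessment(option: StrikeOption) -> bool:
    loss = estimate_civilian_loss(option)  # tractable for a machine
    return not is_excessive(loss, option.anticipated_military_advantage)  # not tractable
```

The sketch is not a proposal for how such a system should work; it simply makes visible where the value judgement demanded by Article 51(5)(b) would have to sit if the decision were handed to a machine.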

The International Committee of the Red Cross’s (ICRC) commentary on Article 57 of Additional Protocol I to the Geneva Conventions – an article which addresses precautions in attack and again uses the word ‘excessive’ – confirms that the test of proportionality is not clear-cut. It allows commanders a “fairly broad margin of judgment” and “must above all be a question of common sense and good faith for military commanders.” It is not clinical, but chaotic: heavily dependent on the permutations and facts at a specific moment in time. It is, as Peter Asaro argues, “abstract, not easily quantified, and highly relative to specific contexts and subjective estimates of value.” These are huge technological hurdles for scientists to overcome. As Asaro concludes, writing separately for the ICRC, it is doubtful that the technology will ever be able, legally and ethically, to handle “the fog and friction of war”.

But suppose a machine could handle this complexity and arrive at a verdict. Would it matter that its assessment was devoid of ‘human’ judgement?

Answering this, Dale Stephens draws strongly on the psychological elements of decision-making. Human cognition, he argues, places a greater emphasis on human loss than on military gain. In this framework, the proportionality tests work and can even result in humanitarian outcomes. In war, he contends, surrounded by a ‘mutuality of risk’, life takes on a greater value. Concepts such as mercy, compassion, forbearance and sympathy come to the fore, not necessarily manifesting themselves in soldiers’ actions, but at least informing the decision-making process. Yet, as can be seen with drones and their pilots, this mutuality of risk is eroded by distance. With separation from the event, the decision no longer requires an analysis of such value-laden concepts. By outsourcing the kill decision to robots, he suggests, we are in danger of doing the same thing: removing the emotional and ethical value inherent in the proportionality assessment and, by extension, making the decision to kill easier.

Numerous scholars have echoed Dale Stephens’ suggestion that humans do not innately want to kill others. As Armin Krishnan writes in his comprehensive review of the topic:

One of the greatest restraints for the cruelty in war has been the natural inhibition of humans not to kill or hurt fellow human beings…Taking away that inhibition to kill by using robots for the job could weaken the most powerful psychological and ethical restraint in war. War would be inhumanely efficient and would no longer be constrained by the urge of soldiers not to kill.

This assertion is not without critique. Marco Sassoli, a former head of the legal division of the ICRC, considers there to be no evidence that distance would make the killing any easier. Certain soldiers he met could kill during combat without hesitation and move swiftly on – in his view, it is an issue of personality rather than distance. Further still, with greater safeguards, any distant, ‘computer game’ mentality could be removed.

Sassoli’s argument, however, does not acknowledge the steps that will almost certainly have led to the creation of what he portrays as cold, hard killers. The discipline, the command structure, the relentless planning – these are all strategies employed by militaries to distance the individual from the moment. A soldier is part of a wider unit, a cog in a machine. The decision is not theirs but their superior’s, and that decision is born of an entrenched submission to a greater ideal. It is questionable how many soldiers are in fact, by nature, cold, hard killers.

There is no objective arbiter by which to measure which would be more catastrophic – letting humans continue to make proportionality decisions or allowing robots, devoid of human emotion, to make them. It is regularly assumed that the human factor improves the decision; but maybe it does not. What is available, however, is a history of catastrophic cruelties in war, the greatest of which have been achieved by mechanisation and separation, system and routine. Submitting humanity to this on a grand scale may be a dire decision.

If They Can Make The Assessments, Should They Be Allowed To?

There are clearly a number of question marks surrounding the ability of ALWs to comply with IHL. Since there is no clear understanding as to the real capability of AI, however, the next phase must be considered. Supposing the robots are capable of the distinction and proportionality assessments outlined above, and supposing their assessments are not diminished by the lack of human judgement, should they be allowed to make them?

Accountability

It is a condition of just war, and of criminal legal theory in general, that someone must be accountable. If there is no one to fasten a crime to, there is little value in having that law in the first place. If a commander tells a robot who to kill, it would follow that the commander would still be accountable. However, if an entirely autonomous robot uses AI to determine who to kill, who is accountable then?

With the introduction of computers, there will be a clear electronic trail of orders. Despite this trail, however, it is not a given that violations of IHL will be easily attributed to individuals. Military commanders were given the weapons; engineers were instructed to create them; programmers were told which laws to input; the robot itself is certainly not culpable. To which of these individuals should we attribute accountability?

If these robots are truly autonomous, and possess the ability to self-learn, their knowledge could quickly develop beyond what was initially envisaged: not necessarily breaking the rules programmed into them, but taking those rules to extremes their ‘creators’ may not have foreseen. It may not seem just, in that case, to hold any of those mentioned above responsible. Furthermore, since the robots make the critical assessments themselves, anyone held accountable would be accountable for creating the framework in which the assessment was made, not for the assessment itself. That form of culpability may be adequate in other areas of law, but not in the case-by-case analysis that IHL demands.

Faced with this accountability issue (and with a host of ethical ones – is it an affront to human dignity to give an autonomous robot control over human life or death?), a number of states have commented that they intend always to have a human ‘in the loop’. When the CCW states parties met for the third time, Germany stated that it “will not accept that the decision over life and death is taken solely by an autonomous system”. Japan echoed that statement. US Deputy Defence Secretary Robert Work said in 2016 that his department would "not delegate lethal authority to a machine to make a decision", but was considering the possibility that "authoritarian regimes" might do so. The UK has added, somewhat underwhelmingly: “the operation of weapons systems will always be under human control.”

For opponents of ALWs, these are encouraging affirmations. With the technology approaching fast, however, they must not be complacent. They must push for meaningful, not formulaic, control over critical decisions. Accountability must be upheld. In the words of Article 36, an organisation that helped to found the Campaign to Stop Killer Robots:

[I]t must be clearly acknowledged that the responsibility for legal judgements remains with the person or person(s) who plan or decide upon an attack.

The Robot on the Clapham Omnibus

Before reaching the accountability stage, however, it would need to be determined whether a robot’s decision violated IHL in the first place. In practice, international courts generally adopt the standard of the “reasonable military commander”, as seen in the ICTY’s Report into the NATO Bombing of Yugoslavia. This was expanded by the ICTY in Prosecutor v Galic [2003]:

[I]n determining whether an attack was proportionate it is necessary to examine whether a reasonably well-informed person in the circumstances of the actual perpetrator, making reasonable use of the information available to him or her, could have expected excessive civilian casualties to result from the attack.

It is not a particularly complex standard, and it is well entrenched in domestic law. Yet, despite best attempts to reach objectivity, it is still a human one. As written in the Max Planck Encyclopaedia of International Law, “reasonableness exhibits an important link with human reason…it is generally perceived as opening the door to several ethical or moral, rather than legal, considerations.” In short, it is a human sat on the Clapham omnibus, not a robot.

Geo-Political Considerations

There are wider issues to consider which, although not strictly within the bounds of IHL, still demand attention from military lawyers and academics. The evidence of drones, and of planes before them, suggests that people are more prepared to vote for conflict when there are no ‘boots on the ground’. ALWs, therefore, may further lower the threshold for war. There are other concerns: that ALWs will, like drones, become cloaked in secrecy; that they will drastically increase the asymmetry that is already causing such problems for modern warfare; or that they will create a situation of constant armed conflict around the world – a global police state where always, somewhere, there is an ALW buzzing around, hunting anyone who looks like a ‘terrorist’.

There is not space to address these concerns fully, other than to offer one defence of ALWs that rebuts many of them. As Joseph W. Dyer succinctly put it, “one of the great arguments for armed robots is they can fire second.” While a human may kill to avoid being killed, may panic and decide that force is needed, or may simply mistake a firework for a gunshot, a robot (unless so programmed) is unlikely to do any of these things. It could wait until the very last moment before deciding to engage. How many ‘accidental escalations’ could be avoided with that ability to withhold fire? Perhaps that delay could even be a window for diplomacy, a space in which both sides can reassess and take the decision not to go to war.

The Case for AI: A Brief Look at its Potential

AI nonetheless has great potential to transform our society for the better, and this article would be incomplete without a strong case for its use in autonomous weaponry. When reading the literature in this field, it is notable how many scholars base their opinions on an assumption, conscious or subconscious, that they somehow know what the AI in these machines will be like.

To an extent this is permissible. It can be assumed, for instance, that these machines will be able to identify combatants far more quickly than current machines can; that they will be able to strike targets with far greater precision than our current technology allows; even that the machines will have a far more in-depth knowledge of IHL than the most esteemed professors, able to recall this knowledge – down to the most understated obiter – in a matter of milliseconds. These are the basic assumptions. Yet what cannot be predicted is the upper limit, if there is one.

A recurring theme in this article has been the humanity of IHL. It is a set of rules built for humans, by humans. It should not be a great surprise, therefore, when autonomous weapons cannot be shoehorned into this framework. Functionally, they may never be able to make the assessments outlined above because, legally, those assessments work solely in a framework relying on human traits – intuition, ethics, value.

The capacities of AI could instead be used to pursue a complete reform of this system. That reform could arrive at novel solutions, not yet considered, that dispense with the imperfections of subjectivity and are all the better for it. IHL is far from perfect; human compliance with it has not been exemplary. If a new set of laws could be created in which autonomous robots could operate, this could pave the way for, ironically, a far more humanitarian kind of war.

Conclusion

In conclusion, it should again be emphasised that this is a vast debate and this article has only been able to cover a small portion of it. Legitimacy (or the lack of it) under IHL should not, alone, be viewed as sanction for such weapons. It has also been assumed that parties to conflicts actually want to comply with IHL – what happens when they do not? What if, as is the case with many non-state groups, killing civilians is a priority, not a war crime? The consequences, if ALWs made it into the hands of such groups, could be catastrophic.

In distinction and proportionality, IHL sets two rules that are inherently human. They require moral agency to succeed in their aims. And even if a robot could mimic them, there are doubts over whether we should allow it to. Yet there is cause for optimism. While robots may be inhuman, they are not inhumane. They can neither hate nor fear, and cannot choose to ignore orders. Unless programmed to do so, they would not rape, pillage or plunder, and would do no more damage than required to succeed in the job at hand. Arriving at a reformed legal system in which ALWs could facilitate a more humane type of warfare would be far from easy; lawyers, scientists and ethicists should be in no doubt as to the scale of this challenge. It would require grappling with war’s most challenging legal and moral dilemmas. Yet, with the potential of AI, it seems possible.
