Killer Robots: Autonomous Weapons Systems and the Law of Armed Conflict
Blog Post | 113 KY. L. J. ONLINE | January 28, 2025
By: Conor Washburn, Staff Editor, Vol. 113
Imagine a drone powered by artificial intelligence that can take off, land, and fly completely on its own. Now, imagine this drone is packed with explosives and can acquire and engage targets autonomously, without human input. This may seem like a scene out of the Terminator franchise, but the use of Autonomous Weapons Systems (“AWS”) is far from science fiction. In 2021, a United Nations report suggested that an autonomous weaponized drone in Libya attacked a human target completely on its own, without any human input.[1] Israel has deployed similar technology, carrying out combat strikes against Hamas fighters in Gaza with the world’s first autonomous drone swarm.[2] The increasing use of AWS raises several concerns under the Law of Armed Conflict (“LOAC”), namely concerns of distinction, proportionality, and accountability.
LOAC permits a weapons system to be used only where adequate distinctions have been made: between military and civilian infrastructure, between combatants and civilians, and between active combatants and those who are “hors de combat,” or “out of combat.”[3] Whether an individual is an active combatant is a highly nuanced determination that must be made on the battlefield. For example, an enemy could be rendered “hors de combat” by something as simple as a slight change in appearance.[4] If an enemy exhibits behavior that casts doubt on their intent or ability to fight, such as surrendering by raising their hands or showing signs of disorientation, they may no longer be considered a legitimate target for attack.[5] If an AWS were deployed in a combat environment, it would be responsible for making this legal and moral distinction, potentially without any human input.
While the principle of distinction mandates that AWS target only legitimate military objectives, the principle of proportionality adds another layer of complexity by requiring careful consideration of the broader consequences of a strike.[6] Evaluating the proportionality of a strike is a complex process involving analytical frameworks like the collateral damage estimation methodology (“CDEM”).[7] The United States’ CDEM involves a five-question analysis:

1. Can a positive identification (“PID”) be made of the target?[8]
2. Are there involuntary or unwitting human shields on target, or would the strike raise significant environmental concerns?[9]
3. Can collateral damage concerns be mitigated by using a different weapon?[10]
4. If damage cannot be mitigated, how many civilians or noncombatants will be killed or injured by the strike?[11]
5. Are the collateral damage expectations of the attack excessive in relation to the expected military gain?[12]

Once these five baseline questions have been answered, further analysis may be required. If a target serves both military and civilian purposes, for example, assessing that dual-use facility involves gathering additional intelligence to determine population density and conducting further collateral damage analysis.[13]
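To make the gated, sequential structure of this analysis concrete, the minimal sketch below models the five questions as an ordered decision flow in Python. It is purely illustrative: every field name, threshold, and return value is a hypothetical stand-in, since an actual collateral damage estimate is an intelligence-driven staff process conducted under CJCSI 3160.01A, not a piece of software.

    from dataclasses import dataclass

    @dataclass
    class StrikeAssessment:
        # All fields are hypothetical placeholders for illustration only.
        positive_id: bool                # Q1: positive identification ("PID") of the target
        shields_or_env_concerns: bool    # Q2: human shields on target or environmental concerns
        mitigable_by_other_weapon: bool  # Q3: would a different weapon reduce collateral damage?
        expected_civilian_harm: float    # Q4: estimated civilian deaths and injuries
        expected_military_gain: float    # Q5: anticipated concrete military advantage

    def cdem_flow(a: StrikeAssessment, excess_ratio: float = 1.0) -> str:
        # Each question gates the next; a failure halts or reroutes the analysis.
        if not a.positive_id:
            return "no strike: PID cannot be established (Q1)"
        if a.shields_or_env_concerns:
            return "escalate: human-shield or environmental concerns (Q2)"
        if a.mitigable_by_other_weapon:
            return "re-plan: substitute a weapon that mitigates collateral damage (Q3)"
        # Q4 feeds Q5: weigh expected civilian harm against expected military gain.
        if a.expected_civilian_harm > excess_ratio * a.expected_military_gain:
            return "no strike: expected collateral damage is excessive (Q5)"
        return "proceed to further analysis (e.g., dual-use facility review)"

Even this toy model illustrates the difficulty of delegation: the first three questions demand contextual human judgment that cannot honestly be reduced to boolean inputs, which is precisely the gap the remainder of this post addresses.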
This level of responsibility raises critical questions about accountability, particularly when AWS are tasked with making decisions requiring complex and nuanced analysis like the one above. Understanding who is responsible for errors or violations committed by a completely autonomous machine is crucial to its lawful use. Currently, it is unclear who would be responsible if an AWS violates LOAC by attacking a civilian target or causing disproportionate harm: the commander who deployed the system, the programmer, the manufacturer, or the State sponsoring the strike.[14] If a weapon is operating truly autonomously, the humans who programmed or activated it may not even possess the knowledge or intent required to be found liable for violations of LOAC.[15] This creates an accountability gap that calls into question whether AWS can ever be fully compliant with LOAC without some level of human oversight.
Although LOAC offers a framework for regulating the use of AWS through the established principles of distinction and proportionality, significant challenges persist in determining who bears responsibility for a strike and whether AWS can comply with LOAC without human oversight. Machines are not currently capable of conducting complex moral and legal analyses such as CDEM at a human level, and many experts believe machine reasoning may never reach that level.[16] It is therefore imperative that some level of human oversight exist in the deployment of AWS on the battlefield to ensure that nations remain in compliance with LOAC.
[1] Hitoshi Nasu, The Kargu-2 Autonomous Attack Drone: Legal and Ethical Dimensions, Lieber Institute West Point (June 10, 2021), https://lieber.westpoint.edu/kargu-2-autonomous-attack-drone-legal-ethical/.
[2] Zak Kallenborn, Israel’s Drone Swarm Over Gaza Should Worry Everyone, Defense One (July 7, 2021), https://www.defenseone.com/ideas/2021/07/israels-drone-swarm-over-gaza-should-worry-everyone/183156/.
[3] Neil Davison, A legal perspective: Autonomous weapon systems under international humanitarian law 7 (U.N. Office for Disarmament Affairs Occasional Papers, No. 30), https://www.icrc.org/sites/default/files/document/file_list/autonomous_weapon_systems_under_international_humanitarian_law.pdf.
[4] Elliot Winter, The Compatibility of Autonomous Weapons with the Principles of International Humanitarian Law, 27 J. Conflict & Sec. L. 1, 7 (2022).
[5] Id. at 7-8.
[6] Id. at 15; see also Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I) art. 58, June 8, 1977, 1125 U.N.T.S. 3.
[7] Winter, supra note 4, at 16.
[8] See Chairman of the Joint Chiefs of Staff Instruction 3160.01A, No-Strike and the Collateral Damage Estimation Methodology (Oct. 12, 2012).
[9] Id.
[10] Id.
[11] Id.
[12] Id.
[13] Id.
[14] Elliot Winter, The Accountability of Software Developers for War Crimes Involving Autonomous Weapons: The Role of the Joint Criminal Enterprise Doctrine, 83 U. Pitt. L. Rev. 51, 53-54 (2021).
[15] Davison, supra note 3, at 17.
[16] Vincent C. Müller & Nick Bostrom, Future Progress in Artificial Intelligence: A Survey of Expert Opinion, PhilPapers (2016), https://philpapers.org/rec/MLLFPI.