Autonomous Systems of Normative Control in Military Applications of AI

Research output: Contribution to specialist/vulgarizing publication › Article › Specialist

Abstract

This policy brief addresses military applications of AI, specifically partially autonomous lethal weapon systems (PALWS) and logistical AI units. The systems that I call ‘autonomous systems of normative control’ (ASNCs) are comparable to intelligent speed assistance (ISA) systems in cars. ISA systems alert or correct drivers when they exceed the speed limit, using road-sign recognition and speed-limit databases linked to geoposition data. Correspondingly, ASNCs should block the unlawful use of military applications of AI, for instance in the case of a war of aggression, or alert commanders if an action is disproportionate or a selected target is a civilian.

I promote a technology-centered approach, which is in line with the multilateral 2023 Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy and the technical recommendations in the report of the 2023 session of the UN Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. I argue that PALWS and logistical AI units in the military should be equipped with ASNCs to contribute to ensuring that they are used in compliance with international humanitarian law (IHL), most importantly the principles of proportionality and harm minimization.

Furthermore, ASNCs should include blocking mechanisms to contribute to ensuring that PALWS and logistical AI units are neither used in wars of aggression nor against domestic peaceful protesters. In a technological sense, ASNCs likely require a hybrid approach to AI systems, combining data-driven and rule-based elements with much simpler blocking mechanisms based on geolocation data. Whilst it is not possible to outsource moral or legal responsibility to machines, it is plausible that ASNCs contribute to making military decision-making on the battlefield more responsible in a legal and ethical sense. In parallel with this technology-centered approach, national and international attempts to regulate military applications of AI should be pursued further.
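The geolocation-based blocking mechanism mentioned above can be illustrated with a minimal sketch. All names, the zone definition, and the decision logic below are illustrative assumptions, not part of the policy brief: a prohibited area (e.g. a protected civilian zone or territory outside a lawful theater of operations) is modeled as a polygon of coordinates, and any action requested inside it is blocked with a stated reason.

```python
# Hypothetical sketch of a geolocation-based blocking mechanism for an ASNC.
# Zone names and decision logic are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    polygon: list  # [(lat, lon), ...] vertices of a prohibited area

def _inside(point, polygon):
    """Ray-casting point-in-polygon test."""
    lat, lon = point
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Count crossings of a ray extended from the point.
        if (lon1 > lon) != (lon2 > lon):
            t = (lon - lon1) / (lon2 - lon1)
            if lat < lat1 + t * (lat2 - lat1):
                inside = not inside
    return inside

def check_engagement(position, prohibited_zones):
    """Return (allowed, reason); block any action inside a prohibited zone."""
    for zone in prohibited_zones:
        if _inside(position, zone.polygon):
            return False, f"blocked: position in prohibited zone '{zone.name}'"
    return True, "allowed"

# Example: a square 'protected civilian area' between lat 10-11, lon 20-21.
zones = [Zone("protected civilian area",
              [(10, 20), (11, 20), (11, 21), (10, 21)])]
print(check_engagement((10.5, 20.5), zones))  # inside the zone: blocked
print(check_engagement((12.0, 22.0), zones))  # outside the zone: allowed
```

A deployed ASNC would of course need far more than this (authenticated zone databases, sensor fusion, and the data-driven target-classification components discussed above); the sketch only shows why the purely geolocation-based layer can remain simple and auditable.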

However, the development of ASNCs need not come only as a reaction to governmental regulation; it could also be advanced voluntarily as a de facto industry standard by producers of military technology. Rather than refraining from the production of PALWS and logistical AI units for the military, European producers of military technology should aim to lead in research and development and to establish a standard made in Europe, including ASNCs that contribute to guaranteeing use within the boundaries of legal and ethical principles. At the same time, two caveats are already warranted: these systems must not be abused for ‘ASNC-washing’ to justify arms exports to authoritarian regimes, and the establishment of a de facto standard can only constitute one element within a broader toolkit of measures to regulate the military use of AI.
Original language: English
Number of pages: 36
Specialist publication: SciencesPo Policy Briefs
Publication status: Published - 2023
