Human/Machine(-Learning) Interactions, Human Agency and the International Humanitarian Law Proportionality Standard

Open Access
Authors
Publication date 01-2024
Journal Global Society
Volume | Issue number 38 | 1
Pages (from-to) 100-121
Number of pages 22
Organisations
  • Faculty of Law (FdR) - T.M.C. Asser Instituut
  • Faculty of Law (FdR)
Abstract

Developments in machine learning prompt questions about algorithmic decision-support systems (DSS) in warfare. This article explores how the use of these technologies impacts practices of legal reasoning in military targeting. International Humanitarian Law (IHL) requires an assessment of the proportionality of attacks, namely whether the expected incidental harm to civilians and civilian objects is excessive compared to the anticipated military advantage. Situating human agency in this practice of legal reasoning, this article considers whether the interaction between commanders (and the teams that support them) and algorithmic DSS for proportionality assessments alters this practice and displaces the exercise of human agency. Because DSS that purport to provide recommendations on proportionality generate output in a manner substantively different from proportionality assessments, these systems are not fit for purpose. Moreover, legal reasoning may be shaped by DSS that provide intelligence information, owing to the limits of reliability, the biases and the opacity characteristic of machine learning.

Document type Article
Note In special issue: The Algorithmic Turn in Security and Warfare
Language English
Published at https://doi.org/10.1080/13600826.2023.2267592
Other links https://www.scopus.com/pages/publications/85176248308
Downloads
Human/Machine(-Learning) Interactions (Final published version)