Keywords: Artificial Intelligence – Armed Conflict – Responsibility – Judgement – Philosophy of International Law.
Technological progress in the field of Artificial Intelligence (AI) has led to the development of Autonomous Weapons Systems (AWS). Because such systems can conduct an entire military operation without any technical need for human control, their possible deployment in armed conflict raises significant ethical and legal questions. After briefly outlining the main arguments for and against the use of AWS, we argue that, for different reasons, both knowledge-based and machine-learning systems are intrinsically unable to respect international humanitarian law, insofar as its norms have been construed so as to be applied by subjects endowed with human faculties, which we think can be well identified by looking at the theory of human judgement developed by Hannah Arendt. Moreover, from the perspective of international criminal law, the deployment of AWS is at odds with the concepts of responsibility and criminal liability and could thus lead to a lack of accountability for military action.