Federica Merenda

Autonomous Weapons Systems (AWS): the ethical and legal challenges of having robots apply international humanitarian law

Abstract

Technological progress in the field of Artificial Intelligence (AI) has led to the development of Autonomous Weapons Systems (AWS). Because such systems can conduct an entire military operation without any technical need for human control, their possible deployment in armed conflict scenarios raises significant ethical issues and legal questions. After briefly outlining the main arguments for and against the use of AWS, we argue that, for different reasons, both knowledge-based and machine-learning systems are intrinsically unable to comply with international humanitarian law, insofar as its norms were framed to be applied by subjects endowed with human faculties, faculties that we believe are well captured by Hannah Arendt's theory of human judgement. Moreover, from an international criminal law perspective, the deployment of AWS is at odds with the concepts of responsibility and criminal liability and could thus lead to a lack of accountability for military action.

Keywords

  • Artificial Intelligence
  • Armed Conflict
  • Responsibility
  • Judgement
  • Philosophy of International Law
