Military Artificial Intelligence and the Accountability of States and Individuals for Crimes against Humanity in Ukraine
Tens of thousands of soldiers and civilians have died as a result of Russia’s invasion of Ukraine in February 2022 and the continuing armed conflict, and much critical infrastructure has been destroyed. Much of this devastation has been caused by weapons that utilise forms of artificial intelligence (AI). Whilst there have been many accusations of war crimes committed since the start of the war, and calls for accountability have intensified, less attention has been paid to crimes against humanity.
- 2022 - 2023
- Dan Saxon
This research will look into the challenges of proving that such crimes occurred in Ukraine, and the complexities of assigning responsibility to states and individuals.
Within the Global Challenges theme of Peace and Justice, this research project examines the use of military artificial intelligence and the occurrence of crimes against humanity since the start of the 2022 war in Ukraine. Crimes against humanity constitute serious violations of international law. They are crimes committed as part of a widespread or systematic attack directed against a civilian population. The term “widespread” reflects the large-scale nature of an attack and the number of victims, whereas the term “systematic” refers to the organized nature of the unlawful acts and the improbability of their random occurrence.
Proving the responsibility of States and individuals for such crimes in an environment where weapon systems rely on artificial intelligence presents new and complex evidentiary and legal challenges. Using Ukraine as a “case study,” the objective of this research is to describe the current state of the law of crimes against humanity, the evidentiary challenges of proving that such crimes have occurred in Ukraine, and the particular complexities of assigning responsibility to States and individuals when artificial intelligence “contributes” to violations of international law. For this project, the author will use primary sources pertaining to the law of armed conflict and international criminal law, relevant doctrinal secondary sources, and credible information pertaining to the use of military artificial intelligence in Ukraine. The outcome of this research will be a chapter to be published in the edited volume “Responsible Use of Military Artificial Intelligence,” forthcoming from CRC Press in 2024.
Dan Saxon, *Fighting Machines: Autonomous Weapons and Human Dignity*, University of Pennsylvania Press, 2022.