Lethal Autonomous Weapon Systems
Please use this identifier to cite or link to this item:
https://doi.org/10.48693/444
Title: Lethal Autonomous Weapon Systems
Authors: Saalbach, Klaus
Abstract: The development of autonomous weapons is in progress due to technical advances, decreasing production costs, the progress in Artificial Intelligence (AI), and the resulting degree of autonomy. It is expected that fully autonomous weapon systems will become operational in the next few years. Lethal autonomous weapon systems (LAWS), also known as autonomous weapon systems (AWS), robotic weapons, or killer robots, use sensors and algorithms to independently identify, engage, and destroy a target. In military practice, the development of unmanned drone swarms is the technology closest to full LAWS. This is accompanied by an intense ethical and legal discussion. While substantial progress has been made on the responsible use of AI for military purposes, a ban on LAWS has not yet been achieved. Additional technical risks include errors, reliability issues, hacking, data poisoning, spoofing, unintended engagement, and other scenarios. Among the approximately 800 AI-related projects and unmanned device (UxS) programs of the US Department of Defense (DoD), three programs in particular are steps towards LAWS: the Golden Horde program for collaboration between small bombs, the Replicator program for coordinated mass attacks by unmanned systems from seabed to satellites, and the ongoing development of the new inter-machine language Droidish. While currently human beings are directly part of the decision process (human-in-the-loop) or at least act as supervisors (human-on-the-loop), the speed and complexity of inter-machine communication between thousands of machines will make it difficult for humans to intervene (human-out-of-the-loop) and could reduce human supervision to a symbolic presence.
Another factor that may undermine human control is the massive expansion of AI capabilities, such as the logical reasoning discussed in the Q* debate, the difficulty of safeguarding strong AIs (superalignment), the uncertainty of future relations between humans and AI-enabled machines, and the new option that larger AIs can create small AIs and spread them, which could be used as a new kind of cyber attack. This paper briefly presents the status of LAWS development, of the US DoD programs Golden Horde, Replicator, and Droidish, and the legal, ethical, and technical challenges for LAWS and AI-enabled weapons.
URL: https://doi.org/10.48693/444
     https://osnadocs.ub.uni-osnabrueck.de/handle/ds-2023121910199
Subject Keywords: Lethal Autonomous Weapon Systems; Artificial Intelligence; drone swarms; superalignment
Issue Date: 19-Dec-2023
License name: Attribution 3.0 Germany
License url: http://creativecommons.org/licenses/by/3.0/de/
Type of publication: Working paper (Arbeitspapier)
Appears in Collections: FB01 - Hochschulschriften
Files in This Item:
File | Description | Size | Format
---|---|---|---
Lethal_Autonomous_Weapons_Systems_2023_Saalbach.pdf | | 418,6 kB | Adobe PDF
This item is licensed under a Creative Commons License