
Ban the Killer Bots


By Liv Bush-Moline

DOI: 10.57912/27089635

 

The development of AI weapons, also known as lethal autonomous weapons systems (LAWS), has skyrocketed worldwide, accelerating exponentially in the private sector and backed by nations pursuing a technological edge over their adversaries. From the battlefields of Russia and Ukraine to Israel and Palestine, these weapons have already demonstrated their capacity for mass destruction and, consequently, the dire need for regulation. But as technology continues to innovate rapidly, our international policy lags. Ultimately, AI weapons need to be restricted and regulated on the international scale, much like chemical, biological, radiological, and nuclear (CBRN) weapons, because of their comparably massive potential for destruction and threat to civilian life. 

 

One of the largest obstacles when discussing LAWS, ethics, and international action is the absence of a universally accepted definition of lethal autonomous weapons systems. Does the weapon have to operate with a fully human-out-of-the-loop system? How does one even define “autonomous”? Twelve different definitions of LAWS can be identified from various international actors and states, each with differing standards and stipulations. 

 

For this piece, I offer my own working definition of LAWS, one specific enough to regulate internationally yet broad enough to encompass different variations of AI implementation. My general definition of LAWS is as follows: a weapons system with lethal capacity that can operate autonomously and independently of human direction; this includes target identification, detection, tracking, and/or the decision to then engage said target with force. LAWS are complex and non-homogenous, with differences that will likely grow as more states and private actors continue to develop them. Thus, not all of this definition’s criteria need to be met for a weapon to be classified as a lethal autonomous weapons system. 

 

The key examples that come to mind of systems that autonomously identify targets but do not trigger attacks directly are the Gospel and Lavender AI systems, used by the Israel Defense Forces and contributing to the widespread bombardment and devastation in Gaza. Lavender autonomously generates “kill lists,” marking potential combatants as targets; it flagged 37,000 Palestinians in the first few weeks of the conflict. Similarly, the Gospel AI generates a target list of buildings and structures based on the surveillance and intelligence it’s fed, though there is no transparency at this time regarding what precisely those data sets are. In general, there is an overwhelming lack of human oversight of these systems, with no requirement to assess the decisions they make or to review the intelligence from which the target lists are generated before striking.

 

Furthermore, AI targeting programs have been implemented in drones with the capacity to fire upon a target independently. As both sides in the Russia-Ukraine War began using Electronic Warfare (EW) systems that disrupt the signals between remotely piloted drones and their pilots, demand for AI-piloted drones surged dramatically. Though in Ukrainian development for years, such drones were long seen as far too costly and experimental. Now, production costs have been cut by running AI tracking programs on Raspberry Pi computers. AI-programmed drones could coordinate in large swarm-like formations formerly impossible with human piloting, reaching new heights of warfare while lowering the cost of war. 

 

What’s particularly alarming about the implementation of AI targeting is its wholly unchecked and experimental nature and the catastrophic potential of systems coded with corrupt or faulty models. Generative AI models are known to fabricate content based on the data they were trained on, regardless of whether that data was accurate. These fabrications are so common that they necessitated their own term: AI “hallucinations.” Using AI programs for targeting in warfare is especially problematic when we consider the high probability of these “hallucinations,” unreliable training data sets, and potentially encoded biases in the models. 

 

The question of whether AI weapons conflict with International Humanitarian Law (IHL), and whether they are moral at all, has been contentiously debated in academia and international forums for over a decade. International officials, from UN Special Rapporteur Christof Heyns to the European Parliament’s own researchers, have maintained the long-standing claim that LAWS are both immoral and illegal. Human rights groups such as Amnesty International and Human Rights Watch have repeatedly called for restrictions on LAWS, to little avail.

 

Not only do LAWS violate the right to human dignity by autonomously deciding to kill or injure a human being, but they also conflict with Chapter III, Rule 11 of customary IHL, which prohibits indiscriminate attacks. According to Heidy Khlaaf, engineering director of machine learning at the cybersecurity firm Trail of Bits, “Given the track record of high error-rates of AI systems… imprecisely and biasedly automating targets is really not far from indiscriminate targeting.” Unless AI models can be coded without the biases of their creators and can target combatants precisely, with little margin of error, they break IHL regulations. Much like landmines, the original indiscriminate autonomous weapon, AI targeting has great potential to become a significant threat to innocent civilians. 

 

When it comes to international regulation, little tangible progress has been made. In September, more than 90 countries met in South Korea for the Responsible AI in the Military Domain (REAIM) summit to discuss and establish basic guidelines for the use of AI weaponry. The second of its kind, the summit concluded with 60 countries, including the United States, endorsing a non-legally binding “blueprint for action” to govern responsible military use of AI. China, notably, opted out of endorsing it. Additionally, the U.S. Department of Defense claims to lead the world in responsible AI practices, having published multiple policies on best practices for military AI use, though these, too, are not legally binding. 

 

Along with those best practices, the United States recently increased its spending on AI development enormously, with the bulk of it going to the Department of Defense. From 2022 to 2023, the potential value of AI-related federal contracts rose almost 1,200%, from $355 million to $4.6 billion, and U.S. military-specific spending on AI nearly tripled. Simultaneously, and not coincidentally, China continues to pour massive amounts of funding and manpower into AI development. While specific numbers are difficult to come by, its spending is estimated to be in the tens of billions of dollars. This Cold War-esque buildup is one of the leading reasons an international, legally binding instrument seems highly unlikely at this point, despite the best wishes and promises of the DoD. 

 

Unfortunately, until the international community is forced to reckon with the catastrophic potential of these weapons and the need for strict regulation, it’s unlikely that concrete action will happen. There may be more summits and more proclamations of responsibility, but the development of these weapons will only continue to grow and fester into ever more complex manifestations of lethal AI. Ideally, strict regulations modeled on chemical and nuclear weapons restrictions and bans would be achieved and implemented on the international scale. Until then, it’s essential to keep pushing for action, invoking IHL and arguing for the fundamental human right to dignity.
