Artificial Intelligence has taken the world by storm. The advantages it delivers are compelling: vast volumes of data processed at scale, enabling rapid decision-making. But in the fog of war, these advantages may carry a moral cost for any government willing to use them.

On April 3, the Guardian revealed that the IDF had developed ‘kill lists’ using an artificial intelligence tool called Lavender, with sources suggesting as many as 37,000 people were deemed valid targets.
To date, the IDF has released two separate statements claiming that ‘Information systems are merely tools for analysts in the target identification process.’
However, multiple sources have described the process of developing and approving strikes on human targets as accelerating as the IDF’s bombardment of Gaza intensified, with commanders demanding a continuous pipeline of targets.
“We were constantly being pressured: ‘Bring us more targets.’ They really shouted at us,” said one intelligence officer. “We were told: now we have to fuck up Hamas, no matter what the cost. Whatever you can, you bomb.”
The use of artificial intelligence to select battlefield targets and approve kinetic action allowed intelligence officers to keep up with this demand. But in doing so, the IDF has abdicated personal moral responsibility for target selection, and undermined the moral authority of Israeli Prime Minister Benjamin Netanyahu to govern.
Data Issues
Under Article 36 of Additional Protocol I to the Geneva Conventions, states are required to review any new ‘means or methods’ of warfare, including AI, to ensure these weapons comply with the international laws of war.
"Automation of killing potentially creates greater power imbalances, destabilizes our global order, and may dehumanize us further", writes Dr Helen Durham AO for famed American military academy West Point’s research arm, the Lieber Institute. "Past experiences and carefully listening to the warnings of experts, are the best data sets we have to navigate the future."
Dr Durham’s co-author, Dr Kobi Leins (GAICD), is a Melbourne-based international lawyer with experience in digital ethics, disarmament and human rights, and a Visiting Senior Research Fellow in the Department of War Studies at King's College London.
“There are ethical issues, but there are also enormous legal issues around the identification of individuals using AI, including biometric data”, said Dr Leins, referencing Lavender’s sister program of individual target tracking.
Known as ‘Where’s Daddy’, this system compiled data from mass surveillance of more than 2.7 million Gazans, including facial tracking, behavioural analysis and network interactions. Each person was then assigned a score as a potential Hamas operative, and tracked and targeted once certain thresholds were reached.
“There is also the question of the data sets that are being used for this system. So if this were a commercial system, we would have reviewed for compliance on a whole lot of levels and doing an AI impact assessment, including things like what kind of data set to use, what kind of drift is there in the model? How valid is it technically?”
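To make Dr Leins’ point concrete: a routine AI impact assessment for a commercial system would include automated checks for data drift, comparing the data a model sees in production against the data it was trained on. The sketch below is purely illustrative and assumes nothing about how Lavender or any IDF system actually works; the feature names and data are invented, and the check shown is a standard two-sample Kolmogorov-Smirnov test.

```python
# Illustrative only: a minimal data-drift check of the kind a routine AI
# impact assessment would include. Feature names and data are invented;
# this assumes nothing about how Lavender or any IDF system works.
import numpy as np
from scipy.stats import ks_2samp

def drift_report(train: np.ndarray, live: np.ndarray, names: list[str], alpha: float = 0.01) -> dict:
    """Flag features whose live distribution has drifted from the training distribution."""
    report = {}
    for i, name in enumerate(names):
        stat, p_value = ks_2samp(train[:, i], live[:, i])
        report[name] = {"ks_stat": round(stat, 3), "p_value": round(p_value, 4), "drifted": p_value < alpha}
    return report

# Hypothetical example: two features, training data vs. live data where one feature has shifted.
rng = np.random.default_rng(0)
train = rng.normal(size=(10_000, 2))
live = np.column_stack([rng.normal(0.5, 1.2, 5_000), rng.normal(0.0, 1.0, 5_000)])
print(drift_report(train, live, ["feature_a", "feature_b"]))
```

In a commercial setting, a model whose inputs had drifted this far would typically be pulled back for re-validation before its outputs were acted on.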
The IDF currently estimates Lavender’s error rate at 10 percent. That implies a terrifying number of false-positive killings, and it is only the tip of the iceberg. In the dreadful mathematics of war, IDF target selection also included ‘acceptable’ civilian casualties, with sources claiming that as many as 20 civilian lives were routinely treated as expendable to target a single Lavender-identified Hamas operative, and sometimes far more.
“In the bombing of the commander of the Shuja’iya Battalion, we knew that we would kill over 100 civilians,” said an IDF source identified as B. “For me, psychologically, it was unusual. Over 100 civilians — it crosses some red line.”
Constant Care
Article 57 of Additional Protocol I to the Geneva Conventions says, quite simply: ‘In the conduct of military operations, constant care shall be taken to spare the civilian population, civilians and civilian objects.’
Assessing whether an AI-selected strike is warranted and proportionate, and therefore compliant with Article 57, requires weighing the anticipated military advantage against the anticipated harm to civilians and civilian objects.
Turning this process over to AI carries considerable risk. In his book Machines Behaving Badly, AI ethics expert and Chief Scientist at UNSW.ai Professor Toby Walsh examines what he calls the ‘value alignment problem’. “Suppose we want to eliminate cancer”, writes Walsh. “A super-intelligence might decide: I simply need to get rid of all hosts of cancer.”
Lavender seems to have been given free rein, with one source claiming that human oversight served only as a “rubber stamp” for the machine’s decisions. The same source said he would normally spend about “20 seconds” evaluating each target before authorizing a bombing, just long enough to confirm that the Lavender-marked target was male.
Sovereignty & Moral Authority
The idea of sovereignty – the authority of a state to govern itself and its citizens – carries an implied responsibility. From Socrates to Thomas Hobbes, every serious analysis of political power includes a moral imperative for any system of government to act in the best interests of the citizens it claims power over.
It follows that if a state chooses not to act in the best interests of its own citizens, it can no longer be thought of as a functioning “sovereign” state.
From the early years of Nazi Germany to Chile under Pinochet or South Africa under apartheid, any government that systematically abrogates this responsibility stands on shaky political ground.
The Israeli government claims sovereignty over all of Israel, including Gaza, making it morally responsible for what happens there. The use of AI to select targets for execution in Gaza therefore adds a fresh dimension to this debate: is there a moral imperative for governments to leave this kind of targeting to humans? And when software is involved, how accurate must it be before its output is used to level an apartment block?
AI and Morality
Given the moral imperative for any state to act in the best interests of its citizens, which for Israel includes the population of Gaza, all of this raises two important questions:
- Is it morally sound to select bombing targets with potentially fatally flawed software?
- Is the IDF, as Benjamin Netanyahu claimed as recently as October, still “the most moral army in the world”?
"The bigger issue is that if we’re going to industrialize warfare, you need meaningful human oversight", said Professor Walsh.
"Either the AI didn’t work, or they didn’t care. It’s the only way to explain the vast number of collateral casualties. And it seems to be both."
Fatal Error
Combining Lavender’s 10 percent error rate (as acknowledged by the IDF, and which may be far higher) with the IDF’s acceptable collateral damage ratio of 20 to 1 makes for grim statistics. On the balance of numbers and probability, entire buildings of civilians have been levelled by data error and a lack of human oversight, whether by accident or by design.
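A rough back-of-envelope calculation, using only figures already cited in this article (37,000 people reportedly marked, the 10 percent error rate acknowledged by the IDF, and up to 20 civilians deemed acceptable per strike), shows the scale those numbers imply. This is an illustration of the arithmetic, not a casualty estimate.

```python
# Back-of-envelope arithmetic using only figures cited in this article.
# Illustrative inputs, not verified casualty data.
marked_targets = 37_000      # people reportedly marked by Lavender
error_rate = 0.10            # error rate acknowledged by the IDF
civilians_per_strike = 20    # 'acceptable' collateral ratio claimed by sources

false_positives = marked_targets * error_rate
civilians_at_risk = false_positives * civilians_per_strike

print(f"Misidentified targets implied by a 10% error rate: {false_positives:,.0f}")
print(f"Civilians 'acceptably' at risk in those strikes alone: {civilians_at_risk:,.0f}")
```

On those inputs, a 10 percent error rate implies roughly 3,700 misidentified people, and up to 74,000 civilians treated as acceptable losses in those mistaken strikes alone.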
What is especially damning is that this known error rate means the IDF knew in advance that civilians would be slaughtered at scale. The legality of this is a matter for the International Criminal Court.
There now seems to be no question that the IDF is no longer the most moral army in the world.
Mike Woodcock is a communications professional with experience across media, technology communications, PR and social issues. Connect with him on LinkedIn.