Ethical concerns dominate Israel’s expansive use of AI in Gaza

Israel's extensive and accelerated deployment of artificial intelligence (AI) in its Gaza operations has drawn increasing ethical scrutiny, even as employees at several multinational tech firms cooperating with Jerusalem become conscientious objectors. While Israeli officials emphasise the technology’s role in enhancing accuracy and reducing harm to civilians, investigations by The New York Times and the Associated Press reveal a more complex picture, one in which technological advances frequently outpace moral and legal safeguards.

One of the most high-profile operations in which AI tools were used was the assassination of Ibrahim Biari, one of the key senior Hamas commanders behind the October 2023 attacks on Israel. The operation was masterminded by Unit 8200, the Israel Defence Forces’ (IDF) elite intelligence division, which relied on an AI-powered system to locate and eliminate Biari in Gaza. The system, initially developed a decade ago, was deployed only after engineers integrated artificial intelligence capabilities into its core. Biari was targeted after the system identified his location through intercepted calls. The strike that killed him reportedly also led to the deaths of 50 others classified by the IDF as combatants.

Meanwhile, the potential for massive collateral damage from such strikes prompted American officials to seek detailed explanations from the IDF on the rationale behind them, raising serious concerns over the reliability, legality and ethics of AI-guided targeting, especially in densely populated urban enclaves like Gaza.

Sources from within the Israeli military and technology sector told The New York Times that many of the AI initiatives emerged from close collaboration between Unit 8200 and IDF reservists working at major tech firms, including Google, Microsoft, and Meta. These reservists set up a project named ‘The Studio’ to apply their expertise in data science, cloud computing and AI to Israeli military applications. While Google has officially distanced itself from the military activities of its employees who serve as reservists, critics argue that the lines between corporate innovation and military use are becoming increasingly blurred as the war in Gaza drags on.

Israel now uses AI for hostage recovery efforts as well. Audio analysis of intercepted conversations is being used to track hostages. AI is also integrated into drone systems, allowing them to track suspects from a distance, and is used to scour social media and messaging platforms in Arabic. On the battlefield, facial recognition systems are used to identify injured or partially obscured individuals.

Ethical concerns remain, however, over whether machines can be relied upon to make life-and-death decisions. Many observers believe that AI technology, as it is currently being used, poses profound ethical dilemmas, particularly around mistaken identity and disproportionate force.

These worries are echoed in an earlier Washington Post investigation, which revealed that the IDF had used AI to rapidly replenish its “target bank”—a dynamic list of individuals deemed threats based on behavioural patterns, intercepted data, and presumed affiliations. While the military claimed such efficiency was critical to responding swiftly in wartime, senior officials admitted that the quality of intelligence generated by AI was a point of contention. Some questioned whether an over-reliance on AI was degrading traditional human intelligence capabilities.

The AP investigation went further, highlighting how Israel has emerged as a global leader in the real-time application of AI on the battlefield. The report, drawing from internal documents and interviews with officials and tech industry insiders, found that Israeli use of Microsoft and OpenAI technologies spiked dramatically after the October attacks. Military activity on Microsoft’s Azure cloud platform surged nearly 200-fold, with stored data ballooning to over 13.6 petabytes—hundreds of times more than what’s needed to archive the entire Library of Congress. These systems, the report noted, were not only used for target selection but also to monitor behaviour, interpret intercepted communications, and predict militant movements.

Microsoft and OpenAI have both sought to distance themselves from these military applications. OpenAI reiterated that while its terms now permit national security-related use, it prohibits customers from using its models to inflict harm or develop weapons. Yet the company had, only a year earlier, quietly updated its policies to allow military usage under certain conditions. Google has since made a similar change, dropping its earlier pledge not to develop AI for surveillance or weapon systems.

At Microsoft’s 50th anniversary event in March, there were protests by employees who accused the company of enabling war crimes. Some of them quit and others were asked to leave, including two who interrupted a keynote address by AI executive Mustafa Suleyman. Similar unrest has been reported at Google, where staff walked out in protest of the company's ties to Israel’s military infrastructure.

European and American defence experts acknowledge that no other country has adopted AI on the battlefield with the speed and scope that Israel has. While the Israeli military views this as a strategic advantage, it also sets a precedent for the future of warfare—one that risks normalising algorithmic targeting without sufficient ethical oversight.

Israeli officials insist that AI is used only as one part of their decision-making process. While IDF analysts use AI tools to identify potential targets, they say there is an oversight mechanism of senior officers who assess each situation according to international humanitarian law, including whether the military advantage justifies the potential harm to civilians. “These AI tools make the intelligence process more accurate and more effective,” said an IDF statement. “They make more targets faster, but not at the expense of accuracy, and many times in this war they have been able to minimise civilian casualties.”
