The killing of seven World Central Kitchen employees in Gaza was as deplorable as it was avoidable. A substantial body of research suggests that technology can mitigate civilian harm in war, including the kinds of mistakes that result in the deaths of humanitarian workers, but unfortunately few of the solutions that research points to have been implemented.

World Central Kitchen has been a key player in assisting the populations of Israel and Gaza since the outbreak of hostilities on October 7, 2023. It had just delivered one hundred tons of food to Gaza when seven of its workers (from the United States, the United Kingdom, Australia, and Poland, along with one Palestinian staff member) were killed in an Israel Defense Forces strike. The Israel Defense Forces hit three vehicles—two armored (but unarmed) vehicles bearing the organization’s logo and a third passenger vehicle—as they drove south along al-Rasheed Road, heading to staging areas near the Egyptian border to collect more supplies. World Central Kitchen, like many humanitarian organizations operating in war zones, had notified the Israel Defense Forces of this activity and its planned route. The Israel Defense Forces gave the organization clearance to operate in the area, though it claims that the route the convoy ultimately took differed from the one that had been coordinated.

At 11:09 p.m. local time on April 1, 2024, one of the vehicles was struck, without warning, by a munition fired from a drone. Within a few minutes, and within a mile and a half of one another, all three vehicles had been hit, killing all seven aid workers.

A few days later, an investigation carried out by the Israel Defense Forces General Staff’s Fact-Finding and Assessment Mechanism and presented to the Israel Defense Forces chief of the General Staff concluded that the strike was “based on the misclassification of the event and misidentification of the vehicles as having Hamas operatives inside them” and that it was ordered in “serious violation of the commands and IDF [Israel Defense Forces] Standard Operating Procedures.” Specifically, it found that the Israel Defense Forces had made several fundamental mistakes.

First, the Israel Defense Forces misidentified the humanitarian workers as a threat after two individuals with guns boarded some of the vehicles, additional vehicles joined the convoy, and the convoy later split. Second, information regarding the identity and route of the World Central Kitchen convoy did not reach the personnel in charge of the operation. Third, the Israel Defense Forces failed to recognize the logos displayed on the roofs of some of the vehicles, which were designed to clearly indicate their protected status under the law of armed conflict.

Mistakes happen all the time in war. Despite their best efforts, even the most professional, responsible militaries can harm innocent civilians in their operations. In 2015, the US military mistakenly struck a building in Kunduz, Afghanistan, that turned out to be a hospital operated by Médecins Sans Frontières. Forty-two civilians, including Médecins Sans Frontières staff, were killed.

At the time, a comprehensive investigation directed by the commander of US Forces–Afghanistan concluded that “this tragic incident was caused by a combination of human errors, compounded by process and equipment failures. Fatigue and high operational tempo also contributed to the incident. These factors contributed to the ‘fog of war,’ which is the uncertainty often encountered during combat operations. The investigation found that this combination of factors caused both the ground force commander and the air crew to believe mistakenly that the air crew was firing on the intended target, an insurgent-controlled site approximately 400 meters away from the MSF [Médecins Sans Frontières] Trauma Center.”

The mistakes that happened then are similar to those emerging from the World Central Kitchen incident: a humanitarian entity was misidentified as a threat, a humanitarian organization notified military forces of its activities but that information was not subsequently shared with units operating in the area, and the military force did not recognize the distinctive signs and logos used by humanitarian organizations to identify themselves.

The tragic World Central Kitchen incident is therefore (and sadly) not unique, and the mistakes made by the Israel Defense Forces are mistakes that militaries make regularly in war. The need to enhance coordination within militaries and improve deconfliction processes is highlighted time and time again, whenever such incidents occur.

In this context, two questions must be asked. First, are strikes mistakenly directed at humanitarian sites and humanitarian personnel avoidable in war? And second, if they are, what is needed for militaries to implement policies to reduce their occurrence?

On the first question, there is no doubt: steps can be taken to reduce mistakes in identification in war. CNA research has shown that artificial intelligence can reduce misidentification, avoid miscorrelation (for example, by recognizing that a swap has occurred between a threat vehicle and a civilian vehicle), and enhance the recognition of protected symbols.
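
One way to picture the miscorrelation problem is as a track “swap”: a threat vehicle and a civilian vehicle pass close to each other, and the tracker silently exchanges their identities. The sketch below, a minimal illustration rather than a description of any fielded system, flags every such close encounter and marks both identities as unconfirmed until they are re-identified; the confusion radius and data shapes are assumptions made for the example.

```python
# Minimal, illustrative sketch of swap detection; all thresholds hypothetical.
from dataclasses import dataclass

CONFUSION_RADIUS_M = 15.0  # hypothetical: tracks closer than this may swap

@dataclass
class Track:
    track_id: str
    positions: list  # chronological (t, x, y) observations
    identity_confirmed: bool = True

def flag_possible_swaps(tracks):
    """Return (id_a, id_b, t) for every moment two tracks nearly coincide."""
    events = []
    for i, a in enumerate(tracks):
        for b in tracks[i + 1:]:
            for (ta, xa, ya), (tb, xb, yb) in zip(a.positions, b.positions):
                if ta != tb:
                    continue
                if ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 < CONFUSION_RADIUS_M:
                    # From here on, neither identity can be trusted without
                    # re-identification by a human or another sensor.
                    a.identity_confirmed = b.identity_confirmed = False
                    events.append((a.track_id, b.track_id, ta))
    return events

# Example: a suspect vehicle and a civilian vehicle pass within a few meters
# of each other at t=2; both tracks are flagged for re-identification.
suspect = Track("suspect-1", [(0, 0, 0), (1, 50, 0), (2, 100, 0)])
civilian = Track("civilian-1", [(0, 200, 5), (1, 150, 5), (2, 105, 5)])
print(flag_possible_swaps([suspect, civilian]))  # [('suspect-1', 'civilian-1', 2)]
print(suspect.identity_confirmed)  # False
```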

Both of us have written about how emerging technologies can help protect civilians during and in the wake of conflict. With Laurie Blank and Eric Jensen, we advocated for a greater use of technology to alleviate human suffering and uncertainty as wars come to an end. We have also discussed the need to shape a humanitarian role for technology in war, and promote “humanitarian AI” as part of the emerging norm of responsible artificial intelligence in the military domain. Though challenges naturally exist, we strongly believe that technology may be channeled to uphold humanitarian values and responsible artificial intelligence commitments as expressed in, inter alia, the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.

Here, we discuss how the use of emerging technologies can help address each of the three main mistakes that led to the World Central Kitchen attack, and what is needed to implement technological solutions that mitigate civilian harm.

Misidentification

The Israel Defense Forces inquiry noted that “those who approved the strike were convinced that they were targeting armed Hamas operatives and not [World Central Kitchen] employees.” This is a classic example of how misidentification—the mistaken belief that civilians or civilian objects are legitimate military targets—may result in civilian harm in war. The Israel Defense Forces engaged these vehicles in the belief that they were military targets when they were, in fact, civilian. This differs significantly from situations in which military forces engage a military target and civilians in the vicinity are maimed or killed.

The decision to strike was influenced by the mistaken belief that one or more armed individuals were in, on, or around the vehicles. The World Central Kitchen employees wore protective vests (standard practice after a World Central Kitchen vehicle was hit by an Israel Defense Forces sniper’s bullet days earlier) but did not carry weapons. While we emphasize that carrying a weapon does not by itself entail a loss of protection from attack, it seems that decision-makers failed to accurately assess the presence or absence of weapons.

This is something that technology can help with. The United States Department of Defense has developed automated image recognition tools to identify objects of interest within full-motion video sources, including weapons, structures, vehicles, and people. The purpose is not to classify threats but to cue the attention of humans to those objects of interest so that operators can focus on what matters most. This kind of functionality could have helped in this case by indicating that no weapons were present.
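
The cueing logic itself is simple to illustrate. In the sketch below, a stand-in `detect_objects` stub plays the role of a trained detection model, and only frames containing a confident weapon detection are surfaced for operator review; the labels and threshold are hypothetical assumptions, not a description of any Department of Defense tool.

```python
# Illustrative sketch of detector-based operator cueing; the detector is a stub.
from dataclasses import dataclass

CUE_LABELS = {"weapon"}   # hypothetical classes that warrant operator attention
MIN_CONFIDENCE = 0.6      # hypothetical score threshold

@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple  # (x, y, w, h) in pixels

def detect_objects(frame):
    """Stand-in for a trained detection model run on full-motion video."""
    return frame  # in this sketch, "frames" are just pre-baked detection lists

def cue_operator(frames):
    """Return (frame_index, detection) pairs an operator should review."""
    cues = []
    for i, frame in enumerate(frames):
        for det in detect_objects(frame):
            if det.label in CUE_LABELS and det.confidence >= MIN_CONFIDENCE:
                cues.append((i, det))
    return cues

# Example: three frames; only the last contains a confident weapon detection.
frames = [
    [Detection("person", 0.9, (10, 10, 40, 80))],
    [Detection("vehicle", 0.8, (5, 5, 120, 60))],
    [Detection("weapon", 0.7, (30, 40, 15, 5))],
]
cues = cue_operator(frames)
# An empty cue list is itself information: no weapons detected in the footage.
print(cues or "no weapons detected across frames")
```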

The Israel Defense Forces investigation also reveals some confusion regarding the general pattern followed by the convoy, the types of vehicles involved (trucks as opposed to individual cars), and the route chosen. Artificial intelligence can enhance operational awareness in such cases by building a pattern of life around the activity of humanitarian actors in general, and of specific vehicles in particular. As a 2022 CNA report points out, artificial intelligence could be used to detect whether a vehicle had stopped at a humanitarian organization’s location during its previous movements. This could support a more informed determination of whether the vehicle is a threat.
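
As a toy illustration of that idea, the sketch below screens a vehicle’s historical stop locations against a list of notified humanitarian sites. The coordinates, match radius, and site names are invented for the example; a real system would draw on curated deconfliction data.

```python
# Toy pattern-of-life check: has this vehicle stopped at notified sites before?
import math

HUMANITARIAN_SITES = {            # hypothetical (lat, lon) of notified sites
    "wck_warehouse": (31.416, 34.333),
    "field_hospital": (31.420, 34.340),
}
MATCH_RADIUS_KM = 0.5             # hypothetical matching tolerance

def haversine_km(p, q):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def humanitarian_stops(vehicle_stops):
    """Return the notified sites this vehicle has previously stopped at."""
    visited = set()
    for stop in vehicle_stops:
        for name, site in HUMANITARIAN_SITES.items():
            if haversine_km(stop, site) <= MATCH_RADIUS_KM:
                visited.add(name)
    return visited

# Example: a vehicle whose recent stops include the notified warehouse should
# prompt extra scrutiny before any threat determination is made.
recent_stops = [(31.4158, 34.3332), (31.500, 34.450)]
print(humanitarian_stops(recent_stops))  # {'wck_warehouse'}
```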

Humanitarian Deconfliction

The Israel Defense Forces noted that the events of that day did not fully align with the information World Central Kitchen had provided (see the illustration included in the Israel Defense Forces statement). The process through which humanitarian organizations share information about their planned activities and locations with parties to a conflict is known as humanitarian notification or deconfliction.

Formal humanitarian notification processes protect humanitarian organizations and humanitarian workers who operate in volatile and dangerous environments. Once the locations or routes of humanitarian organizations have been communicated, the organizations operate under a greater level of safety and protection.

Unfortunately, these processes remain largely ad hoc, vary from country to country, and are rarely foolproof. The deconfliction process—whereby humanitarian entities’ positions and activities are incorporated into the operational picture a military uses for decision-making—tends to be incomplete and limited in reach. Information about humanitarian organizations is often retained at higher headquarters and may inform some operations, but decision-makers at lower echelons may remain unaware that humanitarians are operating in proximity to a zone of combat or, as in the World Central Kitchen attack, have been identified for targeting.

Emerging technologies can help here too. The United States Agency for International Development supported the development of a prototype humanitarian notification system—implemented by the Massachusetts Institute of Technology Lincoln Laboratory and supported by CNA and Stanford University—that provides a streamlined, digital approach to humanitarian notification. The system uses blockchain to provide a permanent, inalterable record that can be consulted to determine the facts when tragedies result. Because the prototype uses APIs (application programming interfaces), standard interfaces that enable communication between computer systems, this information can easily be imported into military networks and data links to build a more complete operational picture. The same information can also be shared with lower echelons through the prototype. This can help avoid the common and tragic situation in which humanitarians share their information with militaries but are killed because militaries do not share it internally with operators.
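
The two properties described above, tamper evidence and machine readability, can be illustrated in a few lines of code. The sketch below uses a simple hash chain as a stand-in for a blockchain and produces JSON-serializable records of the kind an API could expose. It is emphatically not the USAID-supported prototype, whose internals are not public, and all field names are hypothetical.

```python
# Minimal sketch: a tamper-evident, append-only log of humanitarian notifications.
import hashlib, json, time

class NotificationLedger:
    def __init__(self):
        self.entries = []  # each entry stores the hash of its predecessor

    def notify(self, org, kind, route, window):
        record = {
            "org": org, "kind": kind, "route": route, "window": window,
            "timestamp": time.time(),
            "prev_hash": self.entries[-1]["hash"] if self.entries else "GENESIS",
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(record)
        return record  # JSON-serializable: easy to expose through an API

    def verify(self):
        """Recompute the chain; any retroactive edit breaks the hashes."""
        prev = "GENESIS"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

# Example: file a convoy notification, then confirm the record is intact.
ledger = NotificationLedger()
ledger.notify("World Central Kitchen", "convoy",
              [(31.52, 34.45), (31.30, 34.25)],
              "2024-04-01T22:00/2024-04-02T02:00")
print(ledger.verify())  # True; becomes False if any past entry is altered
```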

In the case of World Central Kitchen, it seems that the addition of the smaller vehicles to the humanitarian convoy had been coordinated with the Israel Defense Forces, but the division directing the operation (known as the “front division”) was not aware of this change and operated on the belief that only trucks were part of the convoy. The information simply did not make its way down to the decision-makers. This is precisely the type of problem that an advanced humanitarian notification system could mitigate.

In addition, the Israel Defense Forces assert that the route chosen by the convoy differed from the one that had been coordinated. Whether or not this is the case, it illustrates that current humanitarian notification processes tend to be slow and bureaucratic, making it difficult to provide the last-minute updates a dynamic and insecure environment demands. A more automated system for reporting and deconflicting humanitarian movements would help address this need for dynamic adjustment.
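
Here is a sketch of the automated check such a system could support: before an engagement is approved, the planned strike location is screened against every active humanitarian notification, including a route updated minutes earlier. The buffer distance and data shapes below are assumptions made for illustration.

```python
# Illustrative pre-engagement screen against active humanitarian notifications.
import math

ALERT_BUFFER_KM = 2.0  # hypothetical stand-off distance around notified routes

def haversine_km(p, q):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def deconfliction_alerts(strike_point, active_notifications):
    """Return the organizations whose notified routes pass near the strike point."""
    alerts = []
    for note in active_notifications:
        if any(haversine_km(strike_point, waypoint) <= ALERT_BUFFER_KM
               for waypoint in note["route"]):
            alerts.append(note["org"])
    return alerts

# Example: a convoy files a route update; a strike planned near one of its
# waypoints now produces an alert instead of proceeding silently.
notifications = [
    {"org": "World Central Kitchen",
     "route": [(31.52, 34.45), (31.40, 34.33), (31.30, 34.25)]},
]
print(deconfliction_alerts((31.405, 34.335), notifications))
```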

Identifying Protected Logos and Symbols

The Israel Defense Forces noted that although the smaller vehicles bore World Central Kitchen’s logos and distinctive signs, these were not visible in the dark to the drone’s infrared sensors.

It can be difficult for tactical units to identify humanitarian organizations on the battlefield. In conflict zones, humanitarian organizations are not always in clearly distinguishable structures such as established hospitals. And while the law of armed conflict is clear about their protected status, the only practical identification measure it provides is contained in the Geneva Conventions of 1949, which require humanitarian organizations to display recognized emblems to inform the parties to the conflict of their protected status. (We note that these entities have protected status, regardless of whether the identification measure is present.)

Unfortunately, recent advances in military sensor technology can make this measure less effective. A colored marking will not be evident to a pilot conducting an air strike with an infrared sensor. This was a contributing factor in several air-to-ground friendly-fire incidents during the US-led war in Iraq, in which ground forces or vehicles marked with orange panels were attacked: the pilots were using infrared sensors for targeting and did not observe the orange markings intended to identify friendly forces. In other cases, friendly forces were equipped with infrared markers but were attacked because pilots lacked infrared sensors. The conclusion was that markers combining visual and infrared signatures would reduce the risk of inadvertent attacks. Humanitarian organizations could employ similar solutions, with markings compatible with various kinds of sensors (e.g., a low-cost combination of color and an infrared beacon, or of color and infrared-reflective materials in physical markings on vehicles or structures).

Emerging technologies also offer an opportunity for strengthening the protection of humanitarian organizations by improving identification during attacks—for example, recognizing protected symbols or humanitarian organizations’ logos by using machine learning to identify a set of symbols and alerting the operator or the chain of command accordingly. The presence of protected symbols does not mean that the location is, in fact, protected from attack: the location may have lost its protection, or an unscrupulous party may be using the symbol to deter attacks, in violation of international humanitarian law. But this capability would provide a safety net in cases where the protected symbol is present but was overlooked by operating forces.
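
That safety-net behavior can be sketched as a gate in the engagement workflow: if a classifier reports a protected emblem or known humanitarian logo in the sensor feed, automation halts and the engagement is routed to human review. The classifier call below is a stub, and the emblem names and threshold are hypothetical; the output is deliberately an alert rather than a legal determination, because, as noted above, the presence of a symbol does not settle protected status.

```python
# Illustrative review gate triggered by recognized protected symbols.
from dataclasses import dataclass

PROTECTED_EMBLEMS = {"red_cross", "red_crescent", "red_crystal", "wck_logo"}
ALERT_THRESHOLD = 0.5  # deliberately low: false alarms are cheaper than misses

@dataclass
class SymbolHit:
    label: str
    confidence: float

def classify_symbols(sensor_frame):
    """Stand-in for a trained emblem/logo recognition model."""
    return sensor_frame  # this sketch feeds in pre-baked classifier output

def review_gate(sensor_frame):
    hits = [h for h in classify_symbols(sensor_frame)
            if h.label in PROTECTED_EMBLEMS and h.confidence >= ALERT_THRESHOLD]
    return {
        # A detected symbol does not decide the legal question (symbols can
        # be misused), so the system only halts automation and alerts humans.
        "human_review_required": bool(hits),
        "symbols": [h.label for h in hits],
    }

# Example: a logo overlooked by operators still trips the gate.
frame = [SymbolHit("wck_logo", 0.62), SymbolHit("vehicle", 0.91)]
print(review_gate(frame))
# {'human_review_required': True, 'symbols': ['wck_logo']}
```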

The increased use of networks in targeting underscores the importance of sharing information across military command echelons. Given that militaries increasingly rely on data links and networked systems for situational awareness, having those systems display humanitarian entities such as hospitals and convoys could make their presence visible to operators and avert unintended attacks even when the broader deconfliction process fails.
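
In practice, this could be as simple as publishing each notified entity as a distinct, protected track on the data link. The message schema below is entirely hypothetical; real data links, such as Link 16 or Cursor on Target, each have their own formats.

```python
# Illustrative sketch: wrap a notified humanitarian entity as a track message.
import json, time

def humanitarian_track_message(org, entity_type, lat, lon):
    """Build a 'neutral/protected' track update for a data link feed."""
    message = {
        "msg_type": "TRACK_UPDATE",          # hypothetical field names throughout
        "affiliation": "NEUTRAL_PROTECTED",  # rendered as never targetable
        "entity_type": entity_type,          # e.g., "convoy", "hospital"
        "source": "humanitarian_notification_system",
        "org": org,
        "position": {"lat": lat, "lon": lon},
        "time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    return json.dumps(message)

# Example: broadcast the convoy's last reported position so that any console
# subscribed to the feed renders it as a protected track.
print(humanitarian_track_message("World Central Kitchen", "convoy", 31.40, 34.33))
```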

The strike that killed seven World Central Kitchen humanitarian workers calls for a rethinking of military practices regarding the mitigation of civilian harm in war. All-too-common mistakes of misidentification and miscommunication can be minimized with the help of technology, raising hopes that the recent tragedy will trigger a much-needed shift in priorities.

We have pointed to ways in which technology can help—by more accurately assessing the nature of the target, the presence of weapons, and patterns of civilian life; by developing tools that protect information shared by humanitarian actors and share it across systems and military echelons; and by improving the ability of belligerents to identify protected logos and emblems.

This is just the tip of the iceberg, and more work is needed to understand how technology can help mitigate the type of errors witnessed on April 1, 2024. A broad range of technologies and capabilities is already used to protect friendly forces and identify threats, yet no corresponding investment has been made in identifying and protecting humanitarian and civilian entities. Military forces could benefit from additional practical guidance and protocols for improving the identification and deconfliction of humanitarian actors in war, and for implementing lessons learned from previous mistakes.

More broadly, this discussion underscores the positive role technology can play in war, particularly by reducing uncertainty and alleviating the effects of war on those most vulnerable. The protection of humanitarian action and humanitarian actors is one important aspect of this role. Using emerging technologies to reduce the cost of war on civilians is within the realm of the possible. Tragedies like the World Central Kitchen attack should lead states and militaries to ask how technology can be used to uphold legal obligations, to improve the practical protection of civilians, and to lessen the infliction of suffering, injury, and destruction overall.

Larry Lewis leads a program on civilian harm mitigation at CNA, a nonprofit research and analysis organization dedicated to the safety and security of the nation.

Daphné Richemond-Barak is an assistant professor at the Lauder School of Government, Diplomacy, and Strategy at Reichman University in Israel and an adjunct scholar at MWI.

The views expressed are those of the authors and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense, or the opinions or views of any organization the authors are affiliated with, including CNA and its sponsors.

Image credit: Esri, via USGS EarthExplorer (adapted by MWI)