As global leaders, policymakers and tech innovators convene at the AI Action Summit in Paris on Monday, a glaring omission in its agenda raises concerns: the lack of any meaningful dialogue on the militarisation of artificial intelligence (AI).
This oversight is particularly alarming given recent revelations about the involvement of both Microsoft and Google in supplying the Israeli military with AI technology, a role with serious implications for human rights and international law.
Reports from DropSite News, +972 Magazine and The Guardian reveal that Microsoft’s Azure platform has been used extensively by Israeli intelligence units to power surveillance systems, contributing to systematic human rights abuses.
Recent revelations have also highlighted Google’s deep involvement in supplying advanced AI tools to the Israeli military under the $1.2bn Project Nimbus contract. During the Gaza offensive that began in October 2023, Google’s Vertex AI platform was reportedly deployed to process vast datasets for “predictions”, with algorithms analysing behavioural patterns and metadata to identify potential threats.
Proprietary Israeli military systems such as Lavender, The Gospel, and Where’s Daddy? also played a central role in the Gaza war. Lavender, an AI-powered database, reportedly flagged more than 37,000 individuals as potential assassination targets during the first weeks of the war, with operators spending as little as 20 seconds reviewing each case.
Where’s Daddy? tracked individuals via their smartphones, enabling precise air strikes that often targeted entire families.
Such tools demonstrate how AI is being weaponised against civilian populations, raising urgent concerns about accountability and compliance with international humanitarian law.
Mission undermined
The Paris AI Action Summit, organised under the banner of ethical AI innovation, appears disconnected from the realities of how AI technologies are being weaponised. Civil society groups, particularly those from the Global South, have struggled to gain access to this event for various reasons, including financial constraints and a lack of clarity on how to secure an invitation.
Several organisations, including my own, report that they were not informed about the event or the criteria for participation, leading to confusion and frustration. Moreover, the high costs of attending the summit, including travel and accommodation, are prohibitive for many NGOs, particularly those operating in the Global South.
The result is a further marginalisation of voices that could highlight the devastating human costs of militarised AI.
Key actors, especially those who work directly with communities affected by AI-powered warfare, have been effectively shut out, as noted in a statement signed by more than 100 civil society organisations calling for human rights to be placed at the heart of AI regulation.
Such exclusion undermines the summit’s stated mission to ensure that AI benefits all populations. Summit organisers did not reply to our emails, even those requesting visa support or an official invitation.
Civil society groups play a crucial role in challenging the militarisation of AI and advocating for international legal frameworks that prioritise human rights. Without their voices, the summit risks reinforcing a narrow, top-down view of AI development that overlooks the potential human costs.
The militarisation of AI is not a distant issue confined to conflict zones. Many of the tools used in Gaza, such as biometric identification systems, were developed in the West and continue to be used around the world for “security purposes”.
Concerns are also being raised globally about how AI technologies, such as facial recognition and surveillance systems, are disproportionately used to target vulnerable groups, violating privacy and exacerbating existing biases. These tools often enable racial profiling, further marginalising individuals who are already at risk.
Civil liberties
Investigate Europe has also highlighted the potential for the new European Union AI Act to infringe on civil liberties and human rights, after some governments lobbied for exceptions that would allow AI-powered surveillance by police and border authorities, posing a particular risk to groups such as migrants and asylum seekers.
This could exacerbate existing biases and discriminatory practices, including predictive policing and the use of biometric systems for real-time surveillance in public spaces. Such practices raise alarms about the increasing erosion of privacy and rights, especially for marginalised groups.
As Europe races to compete with US investments in AI infrastructure, it must prioritise ethical guidelines over unchecked innovation. Ignoring such concerns risks normalising technologies that reduce human oversight and potentially violate international humanitarian law.
To address these critical challenges, a comprehensive and multi-stakeholder approach is essential. Policymakers must prioritise the integration of discussions on militarised AI into global governance agendas, ensuring meaningful participation from civil society groups, particularly those representing Global South countries.
We need binding international standards to prevent tech companies from enabling human rights abuses through military contracts. Transparency must become a fundamental principle, with mandatory disclosure requirements for companies engaging in military partnerships.
Moreover, future AI summits should create dedicated spaces for critical dialogue, moving beyond technological innovation to examine the profound ethical implications of AI in warfare.
France often portrays itself as the land of human rights. To truly uphold this legacy, it must take a leading role in regulating AI technologies, ensuring they are used responsibly and not as instruments of oppression.
The views expressed in this article belong to the author and do not necessarily reflect the editorial policy of Middle East Eye.