European governments are seeking to roll back legal restrictions on AI in law enforcement, despite the absence of significant public backing.
According to Sarah Chander, senior policy adviser at European Digital Rights (EDRi), the European Union is in the process of determining which regulations will govern police technology in what is considered the most comprehensive legislation on artificial intelligence globally.
Artificial intelligence used in law enforcement
Throughout Europe, police, migration, and security authorities are increasingly seeking to develop and utilize AI in various contexts. This includes plans for AI-based video surveillance at the 2024 Paris Olympics and the substantial investments of EU funds into AI-based surveillance at European borders, making AI systems an integral part of the state’s surveillance infrastructure.
Furthermore, AI is being employed with the specific aim of targeting particular communities. Technologies like predictive policing, although presented as impartial tools in the fight against crime, are fundamentally based on the presumption that certain groups, particularly racialized, migrant, and working-class individuals, are more likely to engage in criminal activities.
In the Netherlands, we have observed the significant impact of predictive policing systems on Black and Brown young people. For example, the Top-600 system, designed for the preemptive identification of potential violent offenders, was found, upon investigation, to disproportionately target suspects of Moroccan and Surinamese descent.
In the realm of migration, there is growing investment in AI tools for predicting migration patterns and evaluating migration claims in unconventional and concerning ways. EU agencies such as Frontex, which have faced allegations of assisting in the pushbacks of asylum-seekers from Europe, are exploring the use of AI to address the perceived “challenge” posed by increasing migration. There is a significant risk that these technologies will be used to anticipate and prevent people from seeking refuge in Europe, a clear and unlawful breach of the right to seek asylum.
The expanding use of AI in law enforcement and migration has profound implications for racial discrimination and violence. Such technologies are likely to exacerbate structural racism by providing law enforcement with more tools, increased legal powers, and reduced accountability.
Regulating law enforcement’s use of AI
A growing movement is calling for restrictions on how the government deploys technology to surveil, identify, and make decisions about individuals. While governments argue that the police need more tools to combat crime and maintain order, this raises pressing questions: who safeguards individuals from the police, who sets the boundaries of mass surveillance, and how do we determine the limits? These questions become all the more urgent as greater AI usage leads to more police stops, a higher risk of arrest, and a growing threat of violence in interactions with law enforcement and border authorities.
Checks and balances on state and police authority are fundamental to a secure and functional democracy. No institution should possess unchecked power or command unconditional trust, especially when it has access to tools that can monitor our every move. Moreover, the introduction of AI technologies brings the private sector and profit motives into state functions, intertwining commercial interests with public safety.
The call for regulating police AI is echoed in the European Parliament, where the need for legal constraints on the use of AI by law enforcement and migration control has been recognized. The European Parliament’s stance includes a complete prohibition on facial recognition in public spaces, predictive policing, and an expanded list of “high-risk” AI applications in migration control.
However, in the final stages of negotiations (“trilogues”) on the EU AI Act, European governments are planning a significant reduction in limitations on the use of AI by law enforcement.
Recently, 115 civil society organizations have urged the EU to prioritize safety and human rights over unchecked police authority. They have called for legal restrictions on the use of AI by law enforcement and migration authorities, including the banning of the most harmful systems, such as facial recognition in public spaces, predictive policing, and AI for predicting and preventing migration flows.
It is crucial for the public to be informed about when and where the state deploys AI for surveillance, assessment, and discrimination. Establishing limits on how law enforcement utilizes technology is essential. Without these safeguards, unchecked AI has the potential to lead to a state characterized by heavy-handed policing and surveillance.