Racist algorithms and AI can’t determine EU migration policy
Stranding people at sea and leaving them to drown instead of rescuing them. Decisions about people’s lives in the hands of unreliable lie detector tests. Major decisions about security in the hands of algorithms.
These are just a few examples of the path we are going down — that EU legislators now have a rare chance to prevent.
We have visited high-tech refugee camps in Greece, seen violent borders all over Europe, and spoken with hundreds of people who are at the sharp end of technologically assisted brutality. AI in migration is increasingly used to make predictions, assessments, and evaluations based on the racist assumptions programmed into it.
But with upcoming legislation to regulate Artificial Intelligence (the EU's "AI Act"), the EU has a chance to live up to its self-proclaimed values, set a global standard, and draw red lines around the most harmful technologies.
Politicians have turned migration into a political weapon and the EU’s policies are becoming increasingly violent: hardening of borders, increased deportation, empowering agencies like Frontex which have been repeatedly implicated in severe human rights abuses, and even condoning the arrest and incarceration of search-and-rescue volunteers, doctors, lawyers, and journalists.
Increasingly, surveillance and automated technologies are being tested out at borders and in migration procedures — with people seeking safety being treated as guinea pigs.
Biometric data collection
This technology often relies on the large-scale systematic collection of people’s personal and biometric data. Enormous resources are invested in IT tools to store and manage colossal amounts of data.
The EU’s privacy watchdog called out this machinery for side-stepping Europe’s commitments to fundamental rights in the service of Fortress Europe.
In negotiations stepping up this week, the European Parliament will have a choice over which technologies it prohibits. It can make sure that the AI Act adequately regulates all harmful uses of this technology, and make a major difference to the lives of people on the move and racialised people already living in Europe.
A coalition of civil society, academics, and international experts has been calling for amendments to the act for nearly a year, with nearly 200 signatories supporting much-needed changes and a new campaign led by EDRi, AccessNow, PICUM, and the Refugee Law Lab called #ProtectNotSurveil to shed light on these issues.
The AI Act’s blind spot on border violence undermines the entire act as a tool to regulate dangerous tech. Already, compromises are being made behind closed doors of the European Parliament that do not include the necessary bans in the migration context.
This is both harmful and shortsighted. In the absence of such bans, governments and institutions will develop and use invasive technologies that will put them at odds with regional and international laws.
Specifically, if MEPs allow AI to be used to facilitate violence against people trying to reach Europe, states will be fundamentally undermining the right to seek asylum.
To protect the rights of all people, the AI Act must prohibit the use of individual risk assessments and profiling that uses personal and sensitive data; ban AI lie detectors in the migration context; prohibit the use of predictive analytics to facilitate pushbacks; and ban remote biometric identification and categorisation in public spaces, including in border and migration control.
The category of ‘high-risk’ must also be strengthened to include several uses of AI in the migration context, including biometric identification systems, and AI for monitoring and surveillance at borders.
Finally, the act needs stronger oversight and accountability measures that recognise the risks of inappropriate data sharing for people's fundamental rights to mobility and asylum, and ensure that the EU's own migration databases are covered by the act.
Unless amended, the EU’s AI Act fails to prevent irreversible harms in migration and in so doing it undermines its very purpose — protecting the rights of all people affected by the use of AI.
Technology is always political. It reflects the society that creates it and so can speed up and automate racism, discrimination, and systemic violence.
And unless we take action now, the EU’s Artificial Intelligence Act will enable dangerous technology in migration and pave the way to a future where everyone’s rights are threatened.
With EU border forces expanding their use of surveillance technology and racial profiling, and with deaths and human rights abuses routine at EU borders, new AI systems can only supercharge current abuses and risk more lives.
Once it's in use, there's no going back, and we all risk being dragged into the experiment. The act is a once-in-a-generation chance to ensure AI cannot be used for ill, and the European Parliament must act to save it.