The end of street anonymity — is Europe ready for that?

Published: Tuesday, 26 December 2023 07:21
The facial recognition market in Europe is estimated to grow from $1.2bn [€1.09bn] in 2021 to $2.4bn by 2028 (Photo: EFF Photos)

In the wake of the November riots in Dublin, a simmering debate broke out in Ireland — and across Europe — about whether police use of facial-recognition technologies could prevent further chaos on the streets.

"Facial-recognition technology will dramatically save time, speed up investigations and free up Garda [Irish police] resources for the high-visibility policing we all want to see," said Irish justice minister Helen McEntee recently.

The use of this technology is widely accepted in cases where citizens expect to be identified (Photo: Delta News Hub)

While these benefits are being repeatedly tested in controlled programmes, privacy campaigners have raised concerns about the technology's chilling effect on democracies — as well as its inherent discriminatory risks.

The debate in Ireland resurfaced against the backdrop of intense negotiations in Brussels about the AI Act — the rulebook which will regulate AI-powered technologies such as facial recognition.

MEPs initially tried to push for a ban on the automated recognition of individuals in public spaces, but the final text includes several exceptions that would make the use of this technology legally-acceptable.

This includes, for example, the search for certain victims and crime suspects and the prevention of terror attacks.

And since Europe became the first in the world to establish rules governing AI, many cheered the agreement reached in early December.

But the EU’s failure to ban the use of this intrusive technology in public spaces is seen by campaigners such as Amnesty International as a "devastating precedent" since the EU law aims to set global standards.

The widespread adoption of these technologies by law-enforcement authorities over the past few years has sparked concerns about privacy and mass surveillance, with critics labelling all-seeing cameras backed up by a database as ‘Big Brother’ or an ‘Orwellian Nightmare’.

The European Court of Human Rights recently ruled for the first time on the use of facial recognition by law enforcement.

The Strasbourg court found Russia in breach of the European convention on human rights when using biometric technologies to find and arrest a peaceful demonstrator.

But the implications remain uncertain as the court left many other questions open.

"Certainly, it found a violation of the right to private life. Still, it may have availed the deployment of facial recognition in Europe, without restraining its "fair" applications clearly," argues Isadora Neroni Rezende, a researcher at the University of Bologna.

The sacrifice

The UK has been a pioneer in using facial-recognition technologies to identify people in real-time with street cameras. In just a few years, the country has deployed an estimated 7.2 million cameras — approximately one camera for every nine people.

From 2017 to 2019, the federal Belgian police utilised four facial-recognition cameras at Brussels Airport — scene of a deadly terrorist bomb attack in 2016 that killed 16 people — but the project had to stop as it did not comply with data protection laws.

And recently, the French government has fast-tracked legislation for the use of real-time cameras to spot suspicious behaviour during the 2024 Paris Olympic Games.

These are just a few examples of how this technology is reshaping the concept of security.

While the use of this technology is accepted in some cases, the real challenge arises when its use extends to wider public spaces where people are not expected to be identified, the EU’s data protection supervisor (EDPS) Wojciech Wiewiórowski told EUobserver in an interview.

This would de facto "remove the anonymity from the streets," he said. "I don’t think our culture is ready for that. I don’t think Europe is the place where we agree to this kind of sacrifice".

In 2021, Wiewiórowski called for a moratorium on the use of remote biometric identification systems, including facial recognition, in publicly-accessible spaces.

The EDPS also slammed the commission for not taking its recommendations into consideration when it first unveiled the AI Act proposal.

"I would not want to live in a society where privacy will be removed," he told EUobserver.

"Looking at the at some countries where there is much more openness for this kind of technology, we can see that it’s finally used to recognise the person wherever the person is, and to target and to track him or her," Wiewiórowski warned, pointing to China as the best example.

"The explanation that technology is used only against the bad people (…) is the same thing that I was told by the policemen in 1982 in totalitarian Poland, where telephone communication was also under surveillance," he also said.

Reinforce stereotypes

While these technologies can be seen as an effective modern tool for law enforcement, academics and experts have documented how AI-powered biometric technologies can reflect stereotypes and discrimination against certain ethnic groups.

How well this technology works depends largely on the quality of the data used to train the software and the quality of the data it processes once deployed.

For Ella Jakubowska, a campaigner at the digital rights group EDRi, there is a misconception about how effective this technology can be. "There is a basic statistical misunderstanding from governments."

"We’ve already seen around the world that biometric systems are disproportionately deployed against Black and brown communities, people on the move, and other minoritised people," she said, arguing that manufacturers are selling "lucrative false promise of security".

An independent study on the use of live facial recognition by the London police revealed that the actual success rate of these systems was below 40 percent.

And a 2018 report revealed that the South Wales police system saw 91 percent of matches labelled as false positives, with 2,451 incorrect identifications.
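The "basic statistical misunderstanding" Jakubowska describes is, in essence, the base-rate fallacy: when genuine watchlist matches are rare among the faces scanned, even a seemingly accurate system produces mostly false alarms. The sketch below illustrates the arithmetic with purely hypothetical numbers; the watchlist rate, true-positive rate and false-positive rate are illustrative assumptions, not figures from the London or South Wales deployments.

```python
# Illustration of the base-rate fallacy behind facial-recognition accuracy
# claims. All numbers below are hypothetical, chosen only for illustration.

def match_precision(watchlist_rate: float, tpr: float, fpr: float) -> float:
    """Return the probability that a flagged face is a genuine match.

    watchlist_rate: assumed fraction of scanned faces that are on a watchlist
    tpr: assumed true-positive rate (system flags a wanted face)
    fpr: assumed false-positive rate (system flags an innocent face)
    """
    true_alarms = watchlist_rate * tpr
    false_alarms = (1 - watchlist_rate) * fpr
    return true_alarms / (true_alarms + false_alarms)

# A system that sounds impressive on paper (99% sensitivity, 1% false
# positives) still yields mostly false alarms if only 1 in 10,000 scanned
# faces is actually on the watchlist:
precision = match_precision(watchlist_rate=0.0001, tpr=0.99, fpr=0.01)
print(f"Share of alerts that are genuine matches: {precision:.1%}")  # ~1.0%
```

Under these assumed numbers, roughly 99 percent of alerts would be false, which is consistent in spirit with the high false-positive rates the reports above describe.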

Tech companies have lobbied against any potential ban on the use of these technologies in public places (Photo: Tony Gonzalez)

The implications of algorithmic errors on human rights are often highlighted as one of the main concerns for the development and use of this technology.

And one of the main issues for potential victims of AI discrimination is the significant legal obstacle they face in proving (prima facie) such discrimination — given the ‘black box’ problem of these technologies.

The risk of error has led several companies to withdraw from these markets. They include Axon, a well-known US company providing police body cameras, as well as Microsoft and Amazon.

But many still defend it as a crucial tool for law enforcement in our times — lobbying against any potential ban and in favour of exceptions for law enforcement under the AI Act.

Lobbying efforts

Google urged caution against banning or restricting this technology, arguing that it would put at risk "a multitude of beneficial, desired and legally-required use cases" including "child safety".

"Due to a certain lack of understanding, such innovative technologies [such as facial recognition and biometric data] are increasingly mis-portrayed as a risk to fundamental rights," said the Chinese camera company Hikvision, which is blacklisted in the US.

Likewise, the tech industry lobby DigitalEurope also praised the benefits. "It is crucial to recognise the significant public safety and national security benefits".

Security and defence companies have also been lobbying in favour of exceptions.

But it seems the greatest pressure in favour came from interior ministries and law enforcement agencies, according to Corporate Europe Observatory.

Meanwhile, the facial recognition market in Europe is estimated to grow from $1.2bn [€1.09bn] in 2021 to $2.4bn by 2028.