The fight over which uses of artificial intelligence Europe should ban
In 2019, guards at the borders of Greece, Hungary and Latvia started testing an artificial intelligence-powered lie detector. The system, called iBorderCtrl, analyzed facial movements to try to spot signs that a person was lying to a border agent. The trial was backed by nearly $5 million in research funding from the European Union and nearly 20 years of research at Manchester Metropolitan University in the UK.
The trial sparked controversy. Polygraphs and other technologies designed to detect lies from physical attributes have been widely declared unreliable by psychologists. Errors were soon reported with iBorderCtrl, too. Media reports said its lie-prediction algorithm did not work, and the project’s own website acknowledged that the technology “may involve risks to basic human rights”.
This month, Silent Talker, a Manchester Met spin-off that made the technology behind iBorderCtrl, was dissolved. But that’s not the end of the story. Lawyers, activists and lawmakers are pushing for a European Union law to regulate AI that would ban systems that claim to detect human deception in migration, citing iBorderCtrl as an example of what can go wrong. Former Silent Talker executives could not be reached for comment.
Banning AI polygraphs at borders is one of thousands of amendments to the AI law being considered by officials from EU countries and members of the European Parliament. The legislation aims to protect EU citizens’ fundamental rights, such as the right to live without discrimination or to seek asylum. It labels some AI use cases as “high risk”, some as “low risk”, and prohibits others outright. Among those pushing to change the AI law are human rights groups, labor unions and companies like Google and Microsoft, which want the law to distinguish between those who build general-purpose AI systems and those who deploy them for specific purposes.
Last month, advocacy groups including European Digital Rights and the Platform for International Cooperation on Undocumented Migrants called for the law to ban the use of artificial intelligence polygraphs that measure things like eye movement, tone of voice or facial expression at borders. Statewatch, a civil liberties nonprofit, released an analysis warning that the AI law as written would allow the use of systems like iBorderCtrl, adding to the “ecosystem of state-funded border AI” already existing in Europe. The analysis calculated that over the past two decades, about half of the €341 million ($356 million) in funding for the use of AI at the border, such as migrant profiling, went to private companies.
The use of artificial intelligence lie detectors at borders effectively creates a new immigration policy through technology, says Petra Molnar, associate director of the nonprofit Refugee Law Lab, one that treats everyone as a suspect. “You have to prove you’re a refugee, and you’re assumed to be a liar unless proven otherwise,” she says. “This logic underlies everything. It underpins AI lie detectors, and it underpins more border surveillance and pushback.”
Molnar, an immigration lawyer, says people often avoid eye contact with border or immigration officials for innocuous reasons, such as culture, religion or trauma, but this is sometimes mistakenly interpreted as a sign that a person is hiding something. Humans often have trouble communicating across cultures or talking to people who have experienced trauma, she says, so why would people believe a machine can do any better?