In April 2019, The Guardian reported that couples accused of sham marriages were increasingly being subjected to invasive investigations by the Home Office, the UK government body responsible for immigration policy. Couples reported having their wedding ceremonies interrupted so they could be quizzed about their sex lives, being told they were not in a genuine relationship because they were wearing pajamas in bed, and being present while their intimate photos were shared among officials.
The reported tactics are worrying enough on their own, but it has since come to light, through the efforts of a legal charity (the Public Law Project) and investigative journalists, that an automated system largely determines who gets investigated in the first place. An algorithm, hidden from public view, sorts couples into “pass” and “fail” categories based on eight unknown criteria.
Couples who “fail” this covert algorithmic test are subjected to intrusive investigations. They must attend an interview and hand over extensive evidence about their relationship, a process which has been described as “insulting” and “grueling.” These investigations can also prevent couples from getting married altogether. If the Home Office decides that a couple has failed to “comply” with an investigation—even if they are in a genuine relationship—the couple is denied a marriage certificate and forced to start the process all over again. One couple was reportedly ruled non-compliant for failing to provide six months of bank statements for an account that had only been open for four months. This makes it difficult for people to plan their weddings and their lives. And the investigation can lead to other immigration enforcement actions, such as visa cancellation, detention, and deportation. In one case, a sham marriage dawn raid led to a man being detained for four months, until the Home Office finally accepted that his relationship was genuine.
We know little about how this automated system operates in practice or how effective it is at detecting sham marriages. The Home Office refuses to disclose or otherwise explain the eight criteria at the center of the system. There is, however, a real risk that the system is racially discriminatory. The criteria were derived from historical data, which may well be skewed against certain nationalities. The Home Office’s own analysis shows that some nationalities, including Bulgarian, Greek, Romanian, and Albanian people, receive “fail” ratings more frequently than others.
The sham marriages algorithm is, in many respects, a typical case of the deployment of automation in the UK immigration system. It is not difficult to understand why officials are seeking to automate immigration decision-making. Administering immigration policy is a tough job. Officials are often inexperienced and under pressure to process large volumes of decisions. Each decision has profound effects on the people subject to it. This is not helped by the dense complexity of, and frequent changes in, immigration law and policy, which can bamboozle even the most hardened administrative lawyer. All of this, of course, takes place in an environment where migration remains one of the most vexed issues on the political agenda. Automation’s promised benefits of greater efficiency, lower costs, and increased consistency are, from the government’s perspective, inevitably attractive.
But in reality, a familiar pattern of risky experimentation and failure is already emerging. It begins with the Home Office deploying a novel automated system with the goal of cheaper, quicker, and more accurate decision-making. There is often little evidence to support the system’s effectiveness in delivering those goals and scant consideration of the risks of harm. Such systems are generally intended to benefit the government or the general, non-migrant population, rather than the people subject to them. When the system goes wrong and harms individuals, the Home Office fails to take adequate steps to address those harms. The justice system—with its principles and procedures developed in response to more traditional forms of public administration—is left to muddle through in trying to provide some form of redress. That redress, even where best efforts are made, is often unsatisfactory.
This is the story we seek to tell in our new book, Experiments in Automating Immigration Systems, through an exploration of three automated immigration systems in the UK: a voice recognition system used to detect fraud in English language testing; an algorithm for identifying “risky” visa applications; and automated decision-making in the process for EU citizens to apply to remain in the UK after Brexit. It is, at its core, a story of risky bureaucratic experimentation that routinely exposes people, including some of the most vulnerable, to unacceptable risks of harm. For example, some of the students caught up in the English language testing scandal were detained and deported, while others had to abandon their studies and fight for years through the courts to prove their innocence. While we focus on the UK experience, this story will no doubt be increasingly familiar in many countries around the world.
It is important to remember, however, that this story is just beginning. While it would be naïve to think that the tensions inherent in public administration can ever be wholly overcome, the government must strive to reap the benefits of automation for all of society, in a way that is sensitive to and mitigates the attendant risks of injustice. That work is, of course, best led by the government itself.
But the collective work of journalists, charities, NGOs, lawyers, researchers, and others will continue to play a crucial role in ensuring, as far as possible, that automated administration is just and fair.