TECHNOLOGY & HUMAN RIGHTS

Marketizing the digital state: the failure of the ‘Verify’ model in the United Kingdom

Verify, the UK government’s digital identity program, sought to construct a market for identity verification in which companies would compete. But the assumption that companies should be positioned between government and individuals who are trying to access services has gone unquestioned.

The story of the UK government’s Verify service has been told as one of outright failure and a colossal waste of money. Intended as the single digital portal through which individuals accessing online government services would prove their identity, Verify underperformed for years and is now effectively being replaced. But accounts of its demise often focus on technical failures and inter-departmental politics, rather than evaluating the underlying political vision that Verify represents. This is a vision of market creation, whereby the government constructs a market for identity verification within which private companies can compete. As Verify is replaced and the UK government’s ‘digital transformation’ continues, the failings of this model must be examined.

Whether an individual wants to claim a tax refund from Her Majesty’s Revenue and Customs, renew her driver’s license through the Driver and Vehicle Licensing Agency, or receive her welfare payment from the Department for Work and Pensions, the government’s intention was that she could prove her identity to any of these bodies through a single online platform: Verify. This was a flagship project of the Government Digital Service (GDS), a unit working across departments to lead the government’s digital transformation. Much of GDS’s work was driven by the notion of ‘government as a platform’: government should design and build “supporting infrastructure” upon which others can build.

Squarely in line with this idea, Verify provides a “platform for identity.” GDS technologists wrote the software for the Verify platform, and the government accredits companies as ‘identity providers’ (IdPs), which ‘plug into’ the platform to compete. An individual seeking to access a government service online will see Verify on her screen and be prompted to choose an identity provider. She will be redirected to that IdP’s website and must enter information such as her passport number or bank details. The IdP then checks this information against public and private databases before confirming her identity to the government service being requested. The individual therefore leaves the government website to verify her identity with a separate, private entity.
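
To make the flow concrete, here is a minimal sketch in Python of the broker pattern just described. All names, data sources, and checks are hypothetical stand-ins; the real service’s federation protocol and accreditation rules were far more involved.

```python
# Minimal sketch of the Verify "hub" flow; all names and checks are
# hypothetical stand-ins for the real federation protocol.
from dataclasses import dataclass, field

@dataclass
class IdentityProvider:
    name: str
    passport_records: set = field(default_factory=set)  # stand-in databases
    credit_records: set = field(default_factory=set)

    def verify(self, passport_no: str, bank_ref: str) -> bool:
        """Check the user's evidence against the IdP's data sources."""
        return (passport_no in self.passport_records
                and bank_ref in self.credit_records)

def hub_flow(evidence: dict, chosen_idp: IdentityProvider) -> str:
    """The hub redirects the user to their chosen IdP, then relays the
    IdP's yes/no assertion back to the requesting government service."""
    ok = chosen_idp.verify(evidence["passport_no"], evidence["bank_ref"])
    return "identity confirmed" if ok else "verification failed"

# A user picks an accredited IdP and submits evidence on its website.
idp = IdentityProvider("ExampleIdP", {"P123"}, {"B456"})
print(hub_flow({"passport_no": "P123", "bank_ref": "B456"}, idp))
```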

As GDS “didn’t think there was a market,” it aimed to support “the development of a digital identity market that spans both public and private sectors” so that users could “use their verified identity accounts for private sector transactions as well as government services.” After Verify went live in 2016, the government accredited seven IdPs, including credit reporting agency Experian and Barclays bank. Government would pay IdPs per user, with the price per user decreasing as user volumes increased. GDS intended Verify to become self-funding: government funding would end in Spring 2020, at which point the companies would take over responsibility. GDS was confident that the IdPs would “keep investing in Verify” and would “ensure the success of the market.”
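
The volume-based pricing can be illustrated with a toy tariff function; the actual tier boundaries and rates were not disclosed, so the figures below are purely invented.

```python
# Invented tier table illustrating "price per user decreases as volumes
# increase"; the real boundaries and rates were not published.
PRICE_TIERS = [            # (cumulative users up to, price per user in GBP)
    (100_000, 20.00),
    (500_000, 10.00),
    (float("inf"), 5.00),
]

def price_per_user(total_users: int) -> float:
    """Unit price the government pays at a given cumulative volume."""
    for ceiling, price in PRICE_TIERS:
        if total_users <= ceiling:
            return price

for n in (50_000, 250_000, 1_000_000):
    print(f"{n:>9,} users -> £{price_per_user(n):.2f} per user")
```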

But a market failed to emerge. The government spent over £200 million on Verify and lowered its estimate of its financial benefits by 75%. Though IdPs were supposed to take over responsibility for Verify, almost every company withdrew. After April 2020, new users could register with either the (privatized) Post Office or Digidentity, the only two remaining IdPs. But the Post Office is “a ‘white-label’ version of Digidentity that runs off the same back-end identity engine.” Rather than creating a market, a monopoly effectively emerged.

This highlights the flaws of the underlying approach. Government paid to develop and maintain the software, and then paid companies to use that software. Government also bore most of the risk: companies could enter the scheme, be paid tens of millions, then withdraw if the service proved less profitable than expected, without having invested in building or maintaining the infrastructure. This is reminiscent of the UK government’s decision to bear the costs of maintaining railway tracks while having private companies profit from running trains on these tracks. Government effectively subsidizes profit.

GDS had been founded as a response to failings in outsourced government IT: instead of procuring overpriced technologies, GDS would write software itself. But this prioritization of in-house development was combined with an ideological notion that government technologists’ role is to “jump-start and encourage private sector investment” and to build digital infrastructure while relying on the market to deliver services using that infrastructure. This ideal of marketizing the digital state represents a new “orthodoxy” for digital government; the National Audit Office has highlighted the lack of “evidence underpinning GDS’s assumptions that a move to a private sector-led model [was] a viable option for Verify.”

These assumptions are particularly troubling here, as identity verification is an essential moment within state-to-individual interactions. Companies were positioned between government and individuals, and effectively became gatekeepers. An individual trying to access an online government service was disrupted, as she was redirected and required to go through a company. Equal access to services was splintered into a choice of corporate gateways.

This is significant as the rate of successful identity verifications through Verify hovered around 40-50%, meaning around half or more of attempts to access online government services failed. More worryingly, the verification rate depended on users’ demographic characteristics, with only 29% of Universal Credit (welfare benefits) claimants able to use Verify. If claimants were unable to prove their identity to the system, their benefits applications were often delayed. They had to wait longer to access payments to which they were entitled by right. Indeed, record numbers of claimants have been turning to food banks while they wait for their first payment. It is especially important to question the assumption that a company needed to be inserted between individuals and government services when the stakes – namely further deprivation, hunger, and devastating debt – are so high.

Verify’s replacement became inevitable, with only two IdPs remaining. Indeed, the government is now moving ahead with a new digital identity framework prototype. This arose from a consultation which focused on “enabling the use of digital identity in the private sector” and fostering and managing “the digital identity market.” A Cabinet Office spokesperson has stated that this framework is intended to work “for government and businesses.”

The government appears to be pushing on with the same model, despite recurrent warning signs throughout the Verify story. As the government’s digital transformation continues, it is vital that the assumptions underlying this marketization of the digital state are fundamentally questioned.

March 30, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

TECHNOLOGY & HUMAN RIGHTS

Locked In! How the South African Welfare State Came to Rely on a Digital Monopolist

The South African Social Security Agency provides “social grants” to 18 million citizens. In using a single private company with its own biometric payment system to deliver grants, the state became dependent on a monopolist and exposed recipients to debt and financial exploitation.

On February 24, 2021, the Digital Welfare State and Human Rights Project hosted the fifth event in their “Transformer States” conversation series, which focuses on the human rights implications of the emerging digital state. In this conversation, Christiaan Van Veen and Victoria Adelmant explored the impacts of outsourcing at the heart of South Africa’s social security system with Lynette Maart, the National Director of the South African human rights organization The Black Sash. This blog summarizes the conversation.

Delivering the right to social security

Section 27(1)(c) of the 1996 South African Constitution guarantees everyone the “right to have access” to social security. In the early years of the post-Apartheid era, the country’s nine provincial governments administered social security grants to fulfill this constitutional social right. In 2005, the South African Social Security Agency (SASSA) was established to consolidate these programs. The social grant system has expanded significantly since then, with about 18 million of South Africa’s roughly 60 million citizens receiving grants. The system’s growth and coverage has been a source of national pride. In 2017, the Constitutional Court remarked that the “establishment of an inclusive and effective program of social assistance” is “one of the signature achievements” of South Africa’s constitutional democracy.

Addressing logistical challenges through outsourcing

Despite SASSA’s progress in expanding the right to social security, its grant programs remain constrained by the country’s physical, digital, and financial infrastructure. Millions of impoverished South Africans live in rural areas lacking proper access to roads, telecommunications, internet connectivity, or banking, which makes the delivery of cash transfers difficult and expensive. Instead of investing in its own cash transfer delivery capabilities, SASSA awarded an exclusive contract in 2012 to Cash Paymaster Services (CPS), a subsidiary of the South African technology company Net1, to administer all of SASSA’s cash transfers nationwide. This made CPS a welfare delivery monopolist overnight.

SASSA selected CPS in large part because its payment system, which included a smart card with an embedded fingerprint-based chip, could reach the poorest and most remote parts of the country. To obtain a banking license, CPS partnered with Grindrod Bank and opened 10 million new bank accounts for SASSA recipients. Cash transfers could be made via the CPS payment system to smart cards without the need for internet or electricity. CPS rolled out a network of 10,000 places where social grant payments could be withdrawn, known as “paypoints,” nationwide. Recipients were never further than 5km from a paypoint.
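
One plausible way to read this architecture is as a “match-on-card” design, in which both the fingerprint template and the balance live on the card’s chip, so a paypoint device can authorize a withdrawal with no network connection. The sketch below is illustrative only, not a description of CPS’s actual system.

```python
# Illustrative "match-on-card" payment: the enrolled fingerprint template
# and grant balance are stored on the chip, so authorization needs no
# connectivity. Not CPS's actual design; byte equality stands in for
# real biometric matching.
from dataclasses import dataclass

@dataclass
class SmartCard:
    holder: str
    template: bytes   # fingerprint template enrolled on the chip
    balance: float    # grant balance carried on the card

    def authorize(self, scanned: bytes, amount: float) -> bool:
        """Match the live scan against the on-card template and debit
        the on-card balance; no central server is consulted."""
        if scanned != self.template or amount > self.balance:
            return False
        self.balance -= amount
        return True

card = SmartCard("Recipient", b"enrolled-print", balance=350.0)
print(card.authorize(b"enrolled-print", 100.0), card.balance)  # True 250.0
```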

Thanks to its position as sole deliverer of SASSA grants and its autonomous payment system, CPS also had unique access to the financial data of millions of the poorest South Africans. Other Net1 subsidiaries, including Moneyline (a lending group), Smartlife (a life insurance provider), and Manje Mobile (a mobile money service), were able to exploit this “customer base” to cross-sell services. Net1 subsidiaries were soon marketing loans, insurance, and airtime to SASSA recipients. These “customers” were particularly attractive because fees could be automatically deducted from the SASSA grants the very moment they were paid out on CPS’s infrastructure. Recipients became a lucrative, practically risk-free market for lenders and other service providers due to these immediate automatic deductions from government transfers. The Black Sash found that women were going to paypoints at 4:30am in their pajamas, trying to withdraw their grants before deductions left them with hardly anything.

Through its “Hands off Our Grants” advocacy campaign, the Black Sash showed that these deductions were often unauthorized and unlawful. Lynette told the story of Ma Grace, an elderly pensioner who was sold airtime even though she did not own a mobile phone, and whose avenues to recourse were all but blocked off. She explained that telephone helplines were not free but required airtime (which poor people often did not have), and that they “deflected calls” and exploited language barriers to ensure customers “never really got an answer in the language of their choice.”

“Lockin” and the hollowing out of state capacity

Net1’s exploitation of SASSA beneficiaries is only part of the story. This is also about multidimensional governmental failure stemming from SASSA’s outright dependence on CPS. As academic Keith Breckenridge has written, the Net1/SASSA relationship involves “vendor lockin,” a situation in which “the state must confront large, perhaps unsustainable, switching costs to break free of its dependence on the company for grant delivery and data processing.” There are at least three key dimensions of this lockin dynamic which were explored in the conversation:

  • SASSA outsourced both cash transfer delivery and program oversight to CPS. CPS’s “foot soldiers” wore several hats: the same person might deliver grant payments at paypoints, field complaints as a local SASSA representative, and sell loans or airtime. Commercial activity and benefits delivery were conflated.
  • The program’s structure resulted in acute regulatory failures. Because CPS (not Grindrod Bank) ultimately delivered SASSA funds to recipients via its payment infrastructure outside the National Payment System, the payments were exempt from normal oversight by banking regulators. Accordingly, the regulators were blind to unauthorized deductions by Net1 subsidiaries from recipients’ payments.
  • SASSA was entirely reliant on CPS and unable to reach its own beneficiaries itself. Though the Constitutional Court declared SASSA’s 2012 contract with CPS unconstitutional due to irregularities in the procurement process, it ruled that the contract should continue as SASSA could not yet deliver the grants without CPS. In 2017, Net1 co-founder and former CEO Serge Belamant boasted that SASSA would “need to use pigeons” to deliver social grants without CPS. While this was an exaggeration, when SASSA finally transitioned to a partnership with the South African Post Office in 2018, it had to reduce the number of paypoints from 10,000 to 1,740. As Lynette observed, SASSA now has a weaker footprint in rural areas. Therefore, rural recipients “bear the costs of transport and banking fees in order to withdraw their own money.”

This story of SASSA, CPS, and social security grants in South Africa shows not only how outsourced digital delivery of welfare can lead to corporate exploitation and stymied access to social rights, but also how reliance on private technologies can induce “lockin” that undermines the state’s ability to perform basic and vital functions. As the Constitutional Court stated in 2017, the exclusive contract between SASSA and CPS led to a situation in which “the executive arm of government admits that it is not able to fulfill its constitutional and statutory obligations to provide for the social assistance of its people.”

March 11, 2021. Adam Ray, JD program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2020. He holds a master’s degree from Yale University and previously worked as the CFO of Songkick.

TECHNOLOGY & HUMAN RIGHTS

I don’t see you, but you see me: asymmetric visibility in Brazil’s Bolsa Família Program

Brazil’s Bolsa Família Program, the world’s largest conditional cash transfer program, is indicative of broader shifts in data-driven social security. While its beneficiaries are becoming “transparent” as their data is made available, the way the State uses beneficiaries’ data is increasingly opaque.

“She asked a lot of questions and started filling out the form. When I asked her about when I was going to get paid, she said, ‘That’s up to the Federal Government.’” This experience of applying for Brazil’s Bolsa Família Program (“Programa Bolsa Família” in Portuguese, or PBF), the world’s largest conditional cash transfer program, hints at the informational asymmetries between individuals and the State. Such asymmetries have long existed, but information and communications technologies (ICTs) can exacerbate these imbalances. ICTs enable States to handle an increasing amount of personal data, and this is especially true in the PBF. In June 2020, 14.2 million Brazilian families living in poverty – 43.7 million individuals – were beneficiaries of the Bolsa Família program.

At the core of the PBF’s structure is a register called CadÚnico, which is used for more than 20 social policies. It includes detailed data on heads of households and less granular data on other family members. The law designates women as the heads of household and thereby the main PBF beneficiaries. Information is collected about income, number of people living together, level of education and literacy, housing conditions, access to work, disabilities, and ethnic groups. This data is used to select PBF beneficiaries and to monitor their compliance with the conditions on which the maintenance of the benefit depends, such as requirements that children attend school. The federal government also uses the CadÚnico to identify multidimensional vulnerabilities, grant other benefits, and enable research. Although different programs feed the CadÚnico, the PBF is its most important information provider due to its colossal size. In March 2021, the CadÚnico comprised 75.2 million individual entries from 28.9 million families; PBF beneficiaries make up about half.

The person responsible for the family unit within the PBF must answer all of the entries of the “main form,” which consists of 77 questions with varying degrees of detail and sensitivity. All these data points expose the sensitive personal information and vulnerabilities of low-income individuals.
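
As a rough illustration of what such a register entry looks like as a data structure, consider the sketch below; the field names are invented for illustration and are not the actual CadÚnico schema.

```python
# Invented sketch of a CadÚnico-style household entry; the real "main
# form" runs to 77 questions, so this is a small illustrative subset.
from dataclasses import dataclass, field

@dataclass
class HouseholdEntry:
    head_of_household: str        # by law, usually a woman
    monthly_income: float
    people_in_household: int
    education_level: str
    literate: bool
    housing_conditions: str
    employment: str
    disabilities: list = field(default_factory=list)
    ethnic_group: str = ""

entry = HouseholdEntry(
    head_of_household="Maria",
    monthly_income=180.0,
    people_in_household=4,
    education_level="primary",
    literate=True,
    housing_conditions="no piped water",
    employment="informal",
)
print(entry.head_of_household, entry.monthly_income)
```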

The scope of this large and comprehensive dataset is celebrated by social policy experts because it enables the State to target needs for other policies. Indeed, the CadÚnico has been used to identify the relevant beneficiaries for policies ranging from electricity tariff discounts to higher education subsidies. Holding huge amounts of information about low-income individuals can allow States to proactively target needs-based policies.

But when the State is not guided by the principle of data minimization (i.e. collecting only the necessary data and no more), this appetite for information increases and places the burden of risks on individuals. They are transparent to the State, while the State becomes increasingly opaque to them.

Upon registering for the PBF, citizens are not informed about what will happen to the information they provide. The training materials for officials registering beneficiaries, for example, only note that officials must warn potential beneficiaries of their liability for providing false or inaccurate information; they do not state that officials must tell beneficiaries how their data will be used, what their data rights are, or when or whether they might receive their cash transfer. The emphasis therefore lies on the responsibilities of the potential beneficiary instead of the State. The lack of transparency about how people’s data will be used reduces citizens’ ability to exercise their rights.

In addition to the increased visibility of recipients to the State, the PBF also releases the beneficiaries’ data to the public due to strict transparency requirements. Though CadÚnico data is generally confidential, PBF recipients’ personal data is publicly available through different paths:

  • The Federal Government’s Transparency Portal publishes a monthly list containing the beneficiary’s name, municipality, NIS (social security number) and the amounts paid.
  • The portal of Caixa Econômica Federal, the public bank that administers social benefits, allows anyone to check the status of the benefit by entering a name, NIS, and CPF (taxpayer identity number).
  • The NIS of any citizen can be queried at the Citizen’s Consultation Portal CadÚnico by providing name, mother’s name, and birth date.

Because a person’s status as a PBF beneficiary is so easily accessible, the (mostly female) beneficiaries suffer a lack of privacy from all sides and are stigmatized. Not only are they surveilled by the State as it closely monitors conditionalities for the PBF, but they are also monitored by fellow citizens. Citizens have made complaints to the PBF about beneficiaries they believe should not receive cash transfers. At InternetLab, we used the Brazilian Access to Information Law to gain access to some of these complaints. 60% of the complaints contained personal identifying information about the accused beneficiary, suggesting that citizens are monitoring and reporting their “undeserving” neighbors and using the above portals to check databases.

The availability of this data has further worrying consequences: at InternetLab, we have witnessed several instances of fraud and electoral propaganda directed at PBF beneficiaries’ phones, and it is not clear where this contact data came from. Different actors are profiling and targeting Brazilian citizens according to their socio-economic vulnerabilities.

The public availability of beneficiaries’ data is backed by law and arises from a desire to fight corruption in Brazil. This requires government spending, including on social programs, to be transparent. But spending on social programs has become more controversial in recent years amidst an economic crisis and the rise of conservative political majorities, and misplaced ideas of “corrupted beneficiaries” have mingled with anti-corruption sentiments. The emphasis has been placed on making beneficiaries “transparent,” rather than government.

Anti-corruption laws do not adequately differentiate between transparency practices that confront corruption and favor democracy, and those which disproportionately reinforce vulnerabilities and inequalities in focusing on recipients of social programs. Public contracts, public employees’ salaries, and beneficiaries of social benefits are all exposed under the same grounds. But these are substantially different uses of public resources, and exposure of these different kinds of data has very unequal impacts, with beneficiaries more likely to be harmed by this “transparency.”

The personal data of social program beneficiaries should be treated with more care, and we should question whether disclosing so much information about them is necessary. In the wake of Brazil’s General Data Protection Law which came into force last year, it is vital that the work to increase the transparency of the State continues while the privacy of the vulnerable is protected, not the other way around.

May 3, 2021. Nathalie Fragoso and Mariana Valente.
Nathalie Fragoso, Head of Research, Privacy and Surveillance, InternetLab.
Mariana Valente, Associate Director of InternetLab.

TECHNOLOGY & HUMAN RIGHTS

Fearing the future without romanticizing the past: the role for international human rights law(yers) in the digital welfare state to be

Universal Credit is one of the foremost examples of a digital welfare system and the UK’s approach to digital government is widely copied. What can we learn from this case study for the future of international human rights law in the digital welfare state?

Last week, Victoria Adelmant and I organized a two-day workshop on digital welfare and the international rule and role of law, which was part of a series curated by Edinburgh Law School. While zooming in on Universal Credit (UC) in the United Kingdom, arguably one of the most developed digital welfare systems in the world, our objective was broader: namely to imagine how and why law, especially international human rights law, does and should play a role when the state goes digital. Below are some initial and brief reflections on the rich discussions we had with close to 50 civil servants, legal scholars, computer scientists, digital designers, philosophers, welfare rights practitioners, and human rights lawyers.

What is “digital welfare?” There is no agreed-upon definition. At the end of a United Nations country visit to the UK in 2018, where I accompanied the UN Special Rapporteur on extreme poverty and human rights, we coined the term by writing that “a digital welfare state is emerging”. Since then, I have spent years researching and advocating around these developments in the UK and elsewhere. For me, the term digital welfare can be (imperfectly) defined as a welfare system in which interactions with beneficiaries and internal government operations rely on various digital technologies.

In UC, that means you apply for and maintain your benefits online, your identity is verified online, your monthly benefits calculation is automated in real-time, fraud detection happens with the help of algorithmic models, etc. Obviously, this does not mean there is no human interaction or decision-making in UC. And the digitalization of the welfare state did not start yesterday either; it is a process many decades in the making. For example, a 1967 book titled The Automated State mentions the Social Security Administration in the United States as having “among the most extensive second-generation computer systems.” Today, digitalization is no longer just about data centers or government websites, and systems like UC exemplify how digital technologies affect each part of the welfare state.

So, what are some implications of digital welfare for the role of law, especially for international human rights law?

First, as was pointed out repeatedly in the workshop, law has not disappeared from the digital welfare state altogether. Laws and regulations, government lawyers, welfare rights advisors, and courts are still relevant. As for international human rights law, it is no secret that its institutionalization by governments, especially where it comes to economic and social rights, has never been perfect. And neither should we romanticize the past by imagining a previous law and rules-based welfare state as a rule of law utopia. I was reminded of this recently when I watched a 1975 documentary by Frederick Wiseman about a welfare office in downtown Manhattan which was far from utopian. Applying law and rights to the welfare state has been a long and continuous battle.

Second, while there is much to fear about digitalization, we shouldn’t lose sight of its promises for the reimagination of a future welfare state. Several workshop participants emphasized the potential user-friendliness and rationality that digital systems can bring. For example, the UC system quickly responded to a rise in unemployment caused by the pandemic, while online application systems for unemployment benefits in the United States crashed. Welfare systems also have a long history of bureaucratic errors. Automation offers, at least in theory, a more rational approach to government. Such digital promises, however, are only as good as the political impetus that drives digital reform, which is often more focused on cost-savings, efficiency, and detecting supposedly ubiquitous benefit fraud than truly making welfare more user-friendly and less error-prone.

What role does law play in the future digital welfare state? Several speakers emphasized a previous approach to the delivery of welfare benefits as top-down (“waterfall”). Legislation would be passed, regulations would be written and then implemented by the welfare bureaucracy as a final step. Not only is delivery now taking place digitally, but such digital delivery follows a different logic. Digital delivery has become “agile,” “iterative,” and “user-centric,” creating a feedback loop between legislation, ministerial rules and lower-level policy-making, and implementation. Implementation changes fast and often (we are now at UC 167.0).

It is also an open question what role lawyers will play. Government lawyers are changing primary social security legislation to make it fit the needs of digital systems. The idea of ‘Rules as Code’ is gaining steam and aims to produce legislation while also making sure it is machine-readable to support digital delivery. But how influential are lawyers in the overall digital transformation? While digital designers are crucial actors in designing digital welfare, lawyers may increasingly be seen as “dinosaurs,” slightly out of place when wandering into technologist-dominated meetings with post-it notes, flowcharts, and bouncy balls. Another “dinosaur” may be the “street-level bureaucrat.” Such bureaucrats have played an important role in interpreting and individualizing general laws. Yet, they are also at risk of being side-lined by coders and digital designers who increasingly shape and form welfare delivery and thereby engage in their own form of legal interpretation.

Most importantly, from the perspective of human rights: what happens to humans who have to interact with the digital welfare state? In discussions about digital systems, they are all too easily forgotten. Yet, there is substantial evidence of the human harm that may be inflicted by digital welfare, including deaths. While many digital transformations in the welfare state are premised on the methodology of “user-centered design,” its promise is not matched by its practice. Maybe the problem starts with conceptualizing human beings as “users,” but the shortcomings go deeper and include a limited mandate for change and interacting only with “users” who are already digitally visible.

While there is every reason to fear the future of digital welfare states, especially if developments turn toward lawlessness, such fear does not have to lead to outright rejection. Like law, digital systems are human constructs, and humans can influence their shape and form. The challenge for human rights lawyers and others is to imagine not only how law can be injected into digital welfare systems, but how such systems can be built on and can embed the values of (human rights) law. Whether it is through expanding the concept and practice of “user-centered design” or being involved in designing rights-respecting digital welfare platforms, (human rights) lawyers need to be at the coalface of the digital welfare state.

March 23, 2021. Christiaan van Veen, Director of the Digital Welfare State and Human Rights Project (2019-2022) at the Center for Human Rights and Global Justice at NYU School of Law.

TECHNOLOGY & HUMAN RIGHTS

Experimental automation in the UK immigration system

The UK government is experimenting with automated immigration systems. The promised benefits of automation are inevitably attractive, but these experiments routinely expose people—including some of the most vulnerable—to unacceptable risks of harm.

In April 2019, The Guardian reported that couples accused of sham marriages were increasingly being subjected to invasive investigations by the Home Office, the UK government body responsible for immigration policy. Couples reported having their wedding ceremonies interrupted to be quizzed about their sex life, being told they were not in a genuine relationship because they were wearing pajamas in bed, and being present while their intimate photos were shared between officials.

The official tactics reported are worrying enough, but it has since come to light, through the efforts of a legal charity (the Public Law Project) and investigative journalists, that an automated system largely determines who gets investigated in the first place. An algorithm, hidden from public view, sorts couples into “pass” and “fail” categories based on eight unknown criteria.

Couples who “fail” this covert algorithmic test are subjected to intrusive investigations. They must attend an interview and hand over extensive evidence about their relationship, a process which has been described as “insulting” and “grueling.” These investigations can also prevent couples from getting married altogether. If the Home Office decides that a couple has failed to “comply” with an investigation—even if they are in a genuine relationship—the couple is denied a marriage certificate and forced to start the process all over again. One couple was reportedly ruled non-compliant for failing to provide six months of bank statements for an account that had only been open for four months. This makes it difficult for people to plan their weddings and their lives. And the investigation can lead to other immigration enforcement actions, such as visa cancellation, detention, and deportation. In one case, a sham marriage dawn raid led to a man being detained for four months, until the Home Office finally accepted that his relationship was genuine.
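
Because the eight criteria remain secret, any reconstruction is guesswork. The sketch below is a purely hypothetical rule-based triage, included only to show the shape of such a system: how a handful of rules sort couples into “pass” and “fail,” and how rules derived from skewed historical data can reproduce that skew.

```python
# Purely hypothetical triage: the Home Office's eight criteria are not
# public, so these rules are invented to show the mechanism, not the
# real system.
def triage(couple: dict, rules) -> str:
    """Any rule hit sends the couple into the 'fail' (investigate) pile."""
    return "fail" if any(rule(couple) for rule in rules) else "pass"

# If rules like these are derived from historical enforcement data,
# nationalities investigated more often in the past fail more often now.
HYPOTHETICAL_RULES = [
    lambda c: c["age_gap_years"] > 15,
    lambda c: c["weeks_known"] < 12,
    lambda c: c["nationality"] in {"X", "Y"},   # the feedback-loop risk
]

couple = {"age_gap_years": 20, "weeks_known": 8, "nationality": "X"}
print(triage(couple, HYPOTHETICAL_RULES))  # -> "fail": intrusive checks
```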

We know little about how this automated system operates in practice or its effectiveness in detecting sham marriages. The Home Office refuses to disclose or otherwise explain the eight criteria at the center of the system. There is a real risk that the system is racially discriminatory, however. The criteria were derived from historical data, which may well be skewed against certain nationalities. The Home Office’s own analysis shows that some nationalities, including Bulgarian, Greek, Romanian and Albanian people, receive “fail” ratings more frequently than others.

The sham marriages algorithm is, in many respects, a typical case of the deployment of automation in the UK immigration system. It is not difficult to understand why officials are seeking to automate immigration decision-making. Administering immigration policy is a tough job. Officials are often inexperienced and under pressure to process large volumes of decisions. Each decision will have profound effects for those subjected to it. This is not helped by the dense complexity of, and frequent changes in, immigration law and policy, which can bamboozle even the most hardened administrative lawyer. All of this, of course, takes place in an environment where migration remains one of the most vexed issues on the political agenda. Automation’s promised benefits of greater efficiency, lower costs, and increased consistency are, from the government’s perspective, inevitably attractive.

But in reality, a familiar pattern of risky experimentation and failure is already emerging. It begins with the Home Office deploying a novel automated system with the goal of cheaper, quicker, and more accurate decision-making. There is often little evidence to support the system’s effectiveness in delivering those goals and scant consideration of the risks of harm. Such systems are generally intended to benefit the government or the general, non-migrant population, rather than the people subject to them. When the system goes wrong and harms individuals, the Home Office fails to take adequate steps to address those harms. The justice system—with its principles and procedures developed in response to more traditional forms of public administration—is left to muddle through in trying to provide some form of redress. That redress, even where best efforts are made, is often unsatisfactory.

This is the story we seek to tell in our new book, Experiments in Automating Immigration Systems, through an exploration of three automated immigration systems in the UK: a voice recognition system used to detect fraud in English language testing; an algorithm for identifying “risky” visa applications; and automated decision-making in the process for EU citizens to apply to remain in the UK after Brexit. It is, at its core, a story of risky bureaucratic experimentation that routinely exposes people, including some of the most vulnerable, to unacceptable risks of harm. For example, some of the students caught up in the English language testing scandal were detained and deported, while others had to abandon their studies and fight for years through the courts to prove their innocence. While we focus on the UK experience, this story will no doubt be increasingly familiar in many countries around the world.

It is important to remember, however, that this story is just beginning. While it would be naïve to think that the tensions in public administration can ever be wholly overcome, the government must strive to reap the benefits of automation for all of society, in a way that is sensitive to and mitigates the attendant risks of injustice. That work is, of course, best led by the government itself.

But the collective work of journalists, charities, NGOs, lawyers, researchers, and others will continue to play a crucial role in ensuring, as far as possible, that automated administration is just and fair.

March 14, 2022. Joe Tomlinson and Jack Maxwell.
Dr. Joe Tomlinson is a Senior Lecturer in Public Law at the University of York.
Jack Maxwell is a barrister at the Victorian Bar.

TECHNOLOGY & HUMAN RIGHTS

Digital Paternalism: A Recap of our Conversation about Australia’s Cashless Debit Card with Eve Vincent

On November 23, 2020, the Center for Human Rights and Global Justice’s Digital Welfare State and Human Rights Project hosted the third virtual conversation in its “Transformer States: A Conversation Series on Digital Government and Human Rights” series. Christiaan van Veen and Victoria Adelmant interviewed Eve Vincent, senior lecturer in the Department of Anthropology at Macquarie University and author of a crucial report on the lived experiences of one of the first Cashless Debit Card trials in Ceduna, South Australia.

The Cashless Debit Card is a debit card which is currently used in parts of Australia to deliver benefit income to welfare recipients. Vitally, it is a tool of compulsory income management: the card “quarantines” 80% of a recipient’s payment, preventing this 80% from being withdrawn as cash and blocking attempted purchases of alcohol or gambling products. It is similar to, and intensifies, a previous scheme of debit card-based income management, known as the “Basics Card.” This earlier card was introduced after a 2007 report into child sexual abuse in indigenous communities in Australia’s Northern Territory which identified alcoholism, substance abuse, and gambling as major causes of such abuse. One of the measures taken was the requirement that indigenous communities’ benefit income be received on a Basics Card which quarantined 50% of benefit payments. The Basics Card was later extended to non-indigenous welfare recipients, but it remained disproportionately targeted at indigenous communities.

Following a 2014 report by mining magnate Andrew Forrest on inequality between indigenous and non-indigenous groups in Australia, the government launched the Cashless Debit Card to gradually replace the Basics Card. The Cashless Debit Card would quarantine 80% of benefit income on the card, and the card would block spending where alcohol is sold or where gambling takes place. Initial trials were targeted, again, in remote indigenous areas. The communities in the first trials were presented as parasitic on the welfare state and in crisis with regard to alcohol abuse, assault, and gambling. It was argued that drastic intervention was warranted: the government should step in to take care of these communities as they were unable to look after themselves. Income management would assist in this paternalistic intervention, fostering responsibility and curbing alcoholism and gambling through blocking their purchases. Many of Eve’s research participants found these justifications offensive and infantilizing. The Cashless Debit Card is now being trialed in more populous areas with more non-indigenous people, and the narrative has shifted. Justifications for cards for non-indigenous people have focused more on the need to teach financial literacy and budgeting skills.

Beyond the humiliating underlying stereotypes, the Cashless Debit Card itself leaves cardholders feeling stigmatized. While the non-acceptance of Basics Cards at certain shops had led to prominent “Basics Card not accepted here” signs, the Cashless Debit Card was intended to be more subtle. It is integrated with EFTPOS technology, meaning it can theoretically be used in any shop with one of these ubiquitous card-reading devices. EFTPOS terminals in casinos or pubs are blocked, but these establishments can arrange with the government to have some discretion. A pub can arrange to allow cardholders to pay for food but not alcohol, for example, thereby not excluding them entirely. Despite this purported subtlety, individuals reported feeling anxious about using the card as the technology proved unreliable and inconsistent, accepted one day but not the next. When the card was declined, sometimes seemingly at random, this was deeply humiliating. Cardholders would have to gather their shopping and return it to the shelves under the judging gaze of others, potentially people they know.
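
The blocking logic described here can be sketched as a simple authorization rule set: whole merchant categories are blocked by default, with negotiated per-merchant exceptions. The categories, merchant names, and rules below are illustrative assumptions, not Indue’s actual configuration.

```python
# Illustrative EFTPOS authorization: whole merchant categories blocked,
# with negotiated per-merchant exceptions. Everything here is invented.
BLOCKED_CATEGORIES = {"gambling", "liquor_store", "pub"}
MERCHANT_OVERRIDES = {"ExamplePub": {"food", "soft_drinks"}}  # partial access

def authorize(merchant: str, category: str, product: str) -> bool:
    """Approve or decline a Cashless Debit Card transaction."""
    if merchant in MERCHANT_OVERRIDES:             # negotiated discretion
        return product in MERCHANT_OVERRIDES[merchant]
    return category not in BLOCKED_CATEGORIES      # default category rule

print(authorize("ExamplePub", "pub", "food"))      # True: meal allowed
print(authorize("ExamplePub", "pub", "alcohol"))   # False: blocked
print(authorize("OtherPub", "pub", "food"))        # False: no arrangement
print(authorize("CornerShop", "grocery", "milk"))  # True: ordinary purchase
```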

Separately, some card-holders had to use public computers to log into their accounts to check their cards’ balance, highlighting the reliance of such schemes on strong digital infrastructure and on individuals’ access to connected devices. But some Cashless Debit Card-holders were quite positive about the card: there is, of course, a diversity of opinions and experiences. Some found that the card’s fortnightly cycle had helped them with budgeting and thought the app upon which they could check their balance was a user-friendly and effective budgeting tool.

The Cashless Debit Card scheme is run by a company named Indue, continuing decades-long trends of outsourcing welfare delivery. Many participants in Eve’s research spoke positively of their experience with Indue, finding staff on helplines to be helpful and efficient. But many objected to the principle that the card is privatized and that profits are being made on the basis of their poverty. The Cashless Debit Card costs AUD 10,000 per participant per year to administer: many card-holders were outraged that such an expense is outlaid to try to control how they spend their very meager income. Recently, the biggest four banks in Australia and government-owned Australia Post have been in talks about taking over the management of the scheme. This raises an interesting parallel with South Africa, where social grants were originally paid through a private provider but, following a scandal regarding the tender process and the financial exploitation of poor grant recipients, public providers stepped in again.

As an anthropologist, Eve’s research takes as a starting point the importance of listening to the people affected and foregrounding their lived experience, resonating with a common approach to human rights research. Interestingly, many Cashless Debit Card-holders used the language of human rights to express indignation about the scheme and what it represents. Reminiscent of Sally Engle Merry’s work on the ‘vernacularization’ of human rights, card-holders invoked human rights in a manner quite specific to the Aboriginal Australian context and history. Eve’s research participants often compared the Cashless Debit Card trials to the past, when the wages of indigenous peoples had been stolen and their access to money was tightly controlled. They referred to that time as the “time before rights”; before legislative equal citizen rights had been gained. Today, they argued, now that indigenous communities have rights, this kind of intervention and control of communities by the government is unacceptable. As one of Eve’s research participants put it, the government has through the Cashless Debit Card “taken away our rights.”

December 4, 2020. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

TECHNOLOGY & HUMAN RIGHTS

A GPS Tracker on Every “Boda Boda”: A Tale of Mass Surveillance in Uganda

The Ugandan government recently announced that GPS trackers would be placed on every vehicle in the country. This is just the latest example of the proliferation of technology-driven mass surveillance, spurred by a national security agenda and the desire to suppress political opposition.

Following the June 2021 assassination attempt on Uganda’s Transport Minister and former army commander, General Katumba Wamala, President Yoweri Museveni suggested mandatory Global Positioning System (GPS) tracking of all private and public vehicles. This includes motorcycle taxis (commonly known as boda bodas) and water vessels. Museveni also suggested collecting and storing the palm prints and DNA of every Ugandan.

Barely a month later, reports emerged that the government, through the Ministry of Security, had entered into a secretive 10-year contract with a Russian security firm to install GPS trackers in vehicles. The selection of the firm was never subjected to the procurement procedures required by Ugandan law, and a few days after this news broke, it emerged that the Russian firm was facing bankruptcy litigation. The line minister who endorsed the contract subsequently distanced himself from the deal, saying that he was merely enforcing a presidential directive. The government has confirmed that Ugandans will have to pay 20,000 UGX (approximately $6 USD) annually to the Russian firm for the installation of trackers on their vehicles. This controversial move means Ugandans are paying for their own surveillance.

According to 2020 statistics from the Uganda Bureau of Statistics, a total of 38,182 motor vehicles and 102,273 motorcycles are registered in Uganda. Most of these motorcycles function as boda bodas and are a de facto mode of public transport in Uganda, commonly used by people of all social classes. In the capital, Kampala, boda bodas are essential because of their ability to navigate heavy traffic jams. In remote locations where public transport is inaccessible, boda bodas are the only means of transportation for most people, except the elites. While a boda boda motorcycle was allegedly used in the assassination attempt on General Katumba Wamala, boda bodas also function as ambulances (one brought the General to a hospital after the attack) and serve many other essential purposes.

It should be emphasized that this latest attempt at boda boda mass surveillance is part of a broader effort by the government of Uganda to exert power and control via digital surveillance and thereby limit the full enjoyment of human rights offline and online. One example is the widespread use of indiscriminate drone surveillance. Another is the Cyber Crimes Unit in the Ugandan police which, since 2014, has had overly broad powers to monitor the social media activity of Ugandans. Unwanted Witness has raised concerns about the intrusive powers of this unit, which violate Article 27 of the 1995 Uganda Constitution that guarantees the right to privacy.

And that is not all. In 2018, the Ugandan government contracted the Chinese firm Huawei to install CCTV cameras in all major cities and on all highways, spending over $126 million USD on these cameras and related facial recognition technology. In the absence of any judicial oversight, there are also concerns about backdoor access to this system for illegal facial recognition surveillance on potential targets and the use of this system to stifle all opposition to the regime.

The fears about the use of this CCTV system to violate human rights and stifle dissent came true in November 2020. Following the arrest of two opposition presidential candidates, political protests erupted in Uganda, and this CCTV system was used to crack down on dissent after these protests. Long before these protests, the Wall Street Journal had already reported on how Huawei technicians assisted the Ugandan government to spy on political opponents.

This is taking place in a wider context of attacks on human rights defenders and NGOs. Under the guise of seeking to pre-empt terror threats, the state has instituted cumbersome regulations on nonprofits and granted authorities the power to monitor and interfere in their work. Last year, a number of well-known human rights groups were falsely accused of funding terrorism and had their bank accounts frozen. The latest government clampdown on NGOs resulted in the suspension of the operations of 54 organizations on allegations of non-compliance with registration laws. Uganda’s pervasive surveillance apparatus will be instrumental in these efforts at censoring and silencing human rights organizations, activists, and other forms of dissent.

The intrusive application of digital surveillance harms the right to privacy of Ugandans. Privacy is a fundamental right enshrined in the 1995 Constitution and numerous international human rights treaties and other legal instruments. The right to privacy is also a central pillar of a well-functioning democracy. But in its quest to surveil its population, the Ugandan government has either underplayed or ignored these violations of human rights.

What is especially problematic here is the partial privatization of government surveillance to individual corporations. There is a long and unfortunate track record in Uganda of private corporations evading all human rights accountability for their involvement in surveillance. In 2019, for example, Unwanted Witness wrote a report that faulted a transport hailing app—SafeBoda—for sharing customers’ data with third parties without their consent. With the planned GPS tracking, Ugandan boda boda users will have their privacy eroded further, with the help of the Russian security firm. Driven by a national security agenda and the desire to control and suppress any opposition to the long-running Museveni presidency, digital surveillance is proliferating as Ugandans’ rights to privacy, to freedom of expression, and to freedom of assembly are harmed.

October 13, 2021. Dorothy Mukasa is the Chief Executive Officer of Unwanted Witness, a leading digital rights organization in Uganda. 

TECHNOLOGY & HUMAN RIGHTS

“Killing two birds with one stone?” The Cashless COVID Welfare Payments Aimed at Boosting Consumption

In launching its COVID-19 relief payments scheme, the South Korean government had two goals: providing a safety net for its citizens and boosting consumption for the economy. It therefore provided cashless payments, issuing credit card points rather than cash. However, this had serious implications for the vulnerable.

In May 2020, South Korea’s government distributed its COVID-19 emergency relief payments to all households through cashless channels. Recipients predominantly received points on credit cards rather than cash transfers. From the outset, the government stated explicitly that this universal transfer scheme had two goals: mitigating the devastating impacts of the pandemic on people’s livelihoods while simultaneously boosting consumption in the South Korean economy. Providing cash would not necessarily boost consumption, as it could be placed in savings accounts; credit card points were offered instead to require recipients to spend the relief. But in trying to “kill two birds with one stone” by promoting consumption through the relief program, the government jeopardized the program’s welfare aim.

Once the payouts began, the government boasted that the delivery of the relief funds was timely and efficient. The relief program had been launched on the basis of business agreements with credit card companies for “rapid and smooth” payment, and the card-based channel indeed enabled far faster distribution than in other countries. Although “offline” applications for the relief program could be made in person at banks, the scheme was designed around the submission of applications through credit card companies’ websites or apps. The relief funds were then deposited onto recipients’ credit or debit cards in the form of points—kept separate from normal credit card points—within two days of applying. In September 2021, during the second round of universal relief payments, known as the “COVID-19 Win-Win National Relief Fund,” 90% of expected recipients received their payments within 12 days.

Restricting spending to boost spending

However, paying recipients in credit card points meant restricting their access to cash. While low-income households received the relief fund in cash during the first round of COVID-19 relief, in the second round they had to apply for the payment and could only choose among cashless methods, including credit cards and debit cards. To make matters worse, the policy placed constraints on where the points could be used, in the name of encouraging consumption and growing the local economy. The points could only be spent in designated places: they could not be used to pay utility bills, repay a mortgage, or shop online, and they could not be transferred to others’ bank accounts or withdrawn as cash. Recipients therefore had no choice but to use their relief funds in certain local restaurants, markets, clothing stores, and the like. Points that remained unused roughly three to four months after disbursement were returned to the national treasury. All of these conditions were the outcome of the policy’s specific aim of boosting consumption.
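
Taken together, these conditions amount to a restrictive spending policy that is easy to state in code; the sketch below encodes them with invented category names and an assumed expiry window.

```python
# Invented encoding of the restrictions: designated categories only,
# no transfers or cash, and expiry roughly 3-4 months after disbursement.
from datetime import date, timedelta

ALLOWED = {"local_restaurant", "local_market", "clothing_store"}

def can_spend(category: str, disbursed: date, today: date,
              expiry_days: int = 105) -> bool:  # ~3.5 months, assumed
    """Utility bills, mortgages, rent, online shopping, transfers, and
    cash withdrawals all fall outside the allowed categories."""
    if today > disbursed + timedelta(days=expiry_days):
        return False  # unused points revert to the national treasury
    return category in ALLOWED

d = date(2021, 9, 6)
print(can_spend("local_market", d, date(2021, 10, 1)))  # True
print(can_spend("rent", d, date(2021, 10, 1)))          # False
print(can_spend("local_market", d, date(2022, 2, 1)))   # False: expired
```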

Jeopardizing the welfare aim

These restrictions had significant repercussions for people in poverty, in two key ways. First, the relief fund failed to fulfill the right to social protection of vulnerable people at risk. As utility bills, telecommunication fees, and even health insurance fees could not be paid with the points, many were left unable to pay for the things they most needed, while much-needed funds remained effectively stranded on the card. What use is a card meant only for restaurants and shops when one is in arrears on utility bills and health insurance fees, and at risk of having electricity supply and health insurance benefits cut off? Those who needed cash immediately sometimes handed their credit cards to other people to use, requesting payment back in cash at below face value. It was also reported that a number of people bought products at stores where relief fund points could be used and then sold the products at a lower price on second-hand online markets to obtain cash. Although the government warned that it would crack down on such “illegal transactions,” the demand for cash could not be controlled.

Second, the right to housing of vulnerable populations was not sufficiently protected under this scheme. Homeless persons, who needed the most help, were severely affected because the cashless relief funds could not function as a payment method for monthly rent. In one survey, homeless people and slice-room dwellers were the group that most strongly agreed that “the COVID-19 relief fund should be distributed in cash.” Further, given that low-income people spend a higher proportion of their income on rent than other social classes, the fact that the relief funds could not be used on rent also significantly affected low-income households. A number of temporary or informal workers who lost their jobs due to the pandemic were on the verge of being pushed into poorer conditions because they could not afford their rent. The relief program could not help these groups cover some of their most urgent expenditures—housing costs—at all.

Boosting consumption can be expected as an indirect effect of government relief funds, but it must not be adopted as a specific goal of such programs. Attempting to achieve this consumption-oriented goal through the relief payments resulted in the scheme’s design imposing limitations on the use of funds, thereby undermining the scheme’s ability to help those in the most extreme need. As the government set boosting consumption as one of the aims of the program and seemingly prioritized it over the welfare aim, the delivery of the payments was devised in an inappropriate way that did not take the most vulnerable into account.

Killing two birds with one stone?

The Korea Development Institute (KDI) found that only about 30% of the first emergency relief funds led to an increase in consumption, while the remaining 70% went to household debt repayment or savings. In the end, the cashless relief stipend did not successfully increase consumption, even as it weakened the program’s social security function.

Such schemes aimed at “killing two birds with one stone” were doomed to fail from the beginning, because the two goals come into tension with one another in the program’s design. The consumption aim is likely to harm the welfare aim by pushing for cashless, controlled, and restricted use. The sole purpose of emergency relief funds in a crisis should be to provide assistance for the most vulnerable. Such schemes should be delivered in a way that best fulfills this aim: focused on providing a safety net and designed from the perspective of rights-holders, not consumers.

April 19, 2022. Bo Eun Kwon, LLM program, NYU School of Law; her interests include international human rights law, economic and social rights, and digital governance. She has worked at the National Human Rights Commission of Korea.