Paving a Digital Road to Hell? A Primer on the Role of the World Bank and Global Networks in Promoting Digital ID

TECHNOLOGY AND HUMAN RIGHTS

Paving a Digital Road to Hell? A Primer on the Role of the World Bank and Global Networks in Promoting Digital ID

Around the world, governments are enthusiastically adopting digital identification systems. In this 2022 report, we show how global actors, led by the World Bank, are energetically promoting such systems. They proclaim that digital ID will provide an indispensable foundation for an equitable, inclusive future. But a specific model of digital ID is being promoted—and a growing body of evidence links that model to large-scale human rights violations. In this report, we argue that, despite undoubted good intentions, this model of digital ID is failing to live up to its promises and may in fact be causing severe harm. As international development actors continue to promote and support digital ID rollouts, there is an urgent need to consider the full implications of these systems and to ensure that digital ID realizes rather than violates human rights.

In this report, we provide a carefully researched primer, as well as a call to action with practical recommendations. We first compile evidence from around the world, providing a rigorous overview of the impacts that digital ID systems have had on human rights across different contexts. We show that the implementation of the dominant model of digital ID is increasingly causing severe and large-scale human rights violations, especially since such systems may exacerbate pre-existing forms of exclusion from public and private services. The use of new technologies may also lead to new forms of harm, including biometric exclusion, discrimination along new cleavages, and the many harms associated with surveillance capitalism. Meanwhile, the promised benefits of such systems have not been convincingly proven. This primer draws on the work of experts and activists working across multiple fields to identify critical concerns and evidentiary gaps within this new development consensus on digital ID.

The report points specifically to the World Bank and its Identification for Development (ID4D) Initiative as playing a central role in the rapid proliferation of a particular model of digital ID, one that is heavily inspired by the Aadhaar system in India. Under this approach to digital ID, the aim is to provide individuals with a ‘transactional’ identity, rather than to engage with questions surrounding legal status and rights. We argue that a driving force behind the widespread and rapid adoption of such systems is a powerful new development consensus, which holds that digital ID can contribute to inclusive and sustainable development—and is even a prerequisite for the realization of human rights. This consensus is packaged and promoted by key global actors like the World Bank, as well as by governments, foundations, vendors and consulting firms. It is contributing to the proliferation of digital ID around the world, all while insufficient attention is paid to risks and necessary safeguards.

The report concludes by arguing for a shift in policy discussions around digital ID, including the need to open new critical conversations around the “Identification for Development Agenda” and to encourage greater discourse on the role of human rights in a digital age. We issue a call to action for civil society actors and human rights stakeholders, with practical suggestions for those in the human rights ecosystem to consider. The report sets out key questions that civil society can ask of governments and international development institutions, as well as specific demands that can be made—including slowing down rollout processes so that sufficient care is taken, and increasing transparency surrounding discussions about digital ID systems—to ensure that human rights are safeguarded in the implementation of these systems.

Sorting in Place of Solutions for Homeless Populations: How Federal Directives Prioritize Data Over Services

TECHNOLOGY & HUMAN RIGHTS

Sorting in Place of Solutions for Homeless Populations: How Federal Directives Prioritize Data Over Services

National data collection and service prioritization were supposed to make homeless services more equitable and efficient. Instead, they have created more risks and bureaucratic burdens for homeless individuals and homeless service organizations.

While serving as an AmeriCorps VISTA member supporting the IT and holistic defense teams at a California public defender’s office, I spent much of my time navigating the data bureaucracy that now weighs down social service providers across the country. In particular, I helped social workers and other staff members use tools like the Vulnerability Index – Service Prioritization Decision Assistance Tool (VI-SPDAT) and a Homeless Management Information System (HMIS). While these tools were ostensibly designed to improve care for homeless and housing-insecure people, all too often they did the opposite.

An HMIS is a localized information network and database used to collect client-level data and data on the provision of housing and services to homeless or at-risk persons. In 2009, Congress passed the HEARTH Act, mandating the use of HMIS by communities in order to receive federal funding. HMIS demands coordinated entry, a process by which certain types of data are cataloged and clients are ranked according to their perceived need. One of the most common tools for coordinated entry—and the one used by the social workers I worked with—is VI-SPDAT, effectively a questionnaire comprising a battery of highly invasive questions that seek to determine the level of need of the homeless or housing-insecure individual to whom it is administered.
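
To make the mechanics concrete, the sketch below illustrates how a coordinated-entry-style triage instrument works in principle: answers to a questionnaire are converted into a single vulnerability score, and that score routes the client toward, or away from, housing interventions. The questions, weights, and score bands here are invented for illustration; they are not the actual VI-SPDAT instrument.

```python
# A deliberately simplified sketch of coordinated-entry scoring; NOT the real
# VI-SPDAT. The questions, weights, and score bands are hypothetical.

QUESTIONS = {
    "slept_outside_past_week": 1,
    "recent_emergency_room_visits": 1,
    "history_of_substance_use": 2,
    "experience_of_violence": 2,
    "chronic_health_condition": 1,
}

def vulnerability_score(answers: dict) -> int:
    """Sum the weights of every question answered 'yes'."""
    return sum(weight for question, weight in QUESTIONS.items()
               if answers.get(question))

def recommendation(score: int) -> str:
    """Route a client into an intervention band based on the single score."""
    if score <= 2:
        return "no housing intervention"    # scores "too low": no assistance
    if score <= 4:
        return "rapid re-housing"
    return "permanent supportive housing"   # high scorers: intensive intervention

answers = {"slept_outside_past_week": True, "experience_of_violence": True}
score = vulnerability_score(answers)
print(score, "->", recommendation(score))   # 3 -> rapid re-housing
```

Even in this toy form, the core design choice is visible: a person’s circumstances are collapsed into a single number, and access to services turns on the band into which that number happens to fall.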

These tools have been touted as game-changers, but while homelessness across the country, and especially in California, continued to decrease modestly in the years immediately following the enactment of the HEARTH Act, it began to increase again in 2019 and sharply increased in 2020, even before the onset of the COVID-19 pandemic. This is not to suggest a causal link; indeed, the evidence suggests that factors such as rising housing costs and a worsening methamphetamine epidemic are at the heart of rising homelessness. But there is little evidence that intrusive tools like VI-SPDAT alleviate these problems.

Indeed, these tools have themselves been creating problems for homeless persons and social workers alike. There have been harsh criticisms from scholars like Virginia Eubanks about the accuracy and usefulness of VI-SPDAT. It has been found to produce unreliable and racially biased results. Rather than decreasing bias as it purports to do, VI-SPDAT has baked bias into its algorithms, providing a veneer of scientific objectivity for government officials to hide behind.

But, even if these tools were to be made more reliable and less biased, they would nonetheless cause harm and stigmatization. Homeless individuals and social workers alike report finding the assessment dehumanizing and distressing. For homeless individuals, it can also feel deeply risky. Those who don’t score high enough on the assessment are often denied housing and assistance altogether. Those who score too high run the risk of involuntary institutionalization.

Meanwhile, these tools place significant burdens on social workers. To receive federal funding, organizations must provide not only an immense amount of highly intimate information about homeless persons and their life histories, but also a minute accounting of every interaction between the social worker and the client. One social worker would frequently work with clients from 9 to 5, go home to make dinner for her children, and then work into the wee hours of the night attempting to meet all of her data-logging requirements.

I once sat through a 45-minute video call with a veteran social worker who broke down in tears, worried that the grant funding her position might be taken away if her record-keeping was less than perfect. Yet the design of the HMIS made it virtually impossible to be completely honest. The system anticipated that four-hour client interactions could easily be broken down into distinct chunks: discussed x problem from 4:15 to 4:30, y problem from 4:30 to 4:45, and so on. Of course, anyone who has ever had a conversation with another human being, let alone a human being with mental disabilities or substance use problems, knows that interactions are rarely so tidy and linear.

While this data is claimed to be kept very secure, in reality hundreds of people in dozens of organizations typically have access to any given HMIS. There are guidelines in place to protect the data, but there is minimal monitoring to ensure that these guidelines are being followed, and many users found them very difficult to follow while working from home during the pandemic. I heard multiple stories of police or prosecutors improperly accessing information from HMIS. Clients can request to have their information removed from the system, but the process for doing so is rarely made clear to them; indeed, it is often unclear even to the social workers processing the data.

After years of criticism, OrgCode—the group which develops VI-SPDAT—announced in 2021 that it would no longer be pushing VI-SPDAT updates, and as of 2022 it is no longer providing support for the current iteration of VI-SPDAT. While this is a commendable move from OrgCode, stakeholders in homeless services must acknowledge the larger failures of HMIS and coordinated entry more generally. Many of the other tools used to perform coordinated entry have similar problems to VI-SPDAT, in part because coordinated entry in effect requires this intrusive data collection about highly personal issues to determine needs and rank clients accordingly. The problems are baked into the data requirements of coordinated entry itself.

The answer to this problem cannot be to completely do away with any classification tools for housing insecure individuals, because understanding the scope and demographics of homelessness is important in tackling it. But clearly a drastic overhaul of these systems is needed to make sure that they are efficient, noninvasive, and accurate. Above all, it is crucial to remember that tools for sorting homeless individuals are only useful to the extent that they ultimately provide better access to the services that actually alleviate homelessness, like affordable housing, mental health treatment, and addiction support. Demanding that beleaguered social service providers prioritize data collection over services, all while using intrusive, racially biased, and dehumanizing tools, will only worsen an intensifying crisis.

May 17, 2022. Batya Kemper, J.D. program, NYU School of Law.

Risk Scoring Children in Chile

TECHNOLOGY & HUMAN RIGHTS

Risk Scoring Children in Chile

On March 30, 2022, Christiaan van Veen and Victoria Adelmant hosted the eleventh event in our “Transformer States” interview series on digital government and human rights. In conversation with human rights expert and activist Paz Peña, we examined the implications of Chile’s “Childhood Alert System,” an “early warning” mechanism which assigns risk scores to children based on their calculated probability of facing various harms. This blog picks up on the themes of the conversation.

The deaths of over a thousand children in privatized care homes in Chile between 2005 and 2016 have, in recent years, pushed the issue of child protection high onto the political agenda. The country’s limited legal and institutional protections for children have been consistently critiqued in the past decade, and calls for more state intervention, to reverse the legacies of Pinochet-era commitments to “hands-off” government, have been intensifying. On his first day in office in 2018, former president Sebastián Piñera promised to significantly strengthen and institutionalize state protections for children. He launched a National Agreement for Childhood and established local “childhood offices” and an Undersecretariat for Children; a law guaranteeing children’s rights was passed; and the Sistema Alerta Niñez (“Childhood Alert System”) was developed. This system uses predictive modelling software to calculate children’s likelihood of facing harm or abuse, dropping out of school, and other such risks.

Predictive modelling calculates the probabilities of certain outcomes by identifying patterns within datasets. It operates through a logic of correlation: where persons with certain characteristics experienced harm in the past, those with similar characteristics are likely to experience harm in the future. Developed jointly by researchers at Auckland University of Technology’s Centre for Social Data Analytics and the Universidad Adolfo Ibáñez’s GobLab, the Childhood Alert predictive modelling software analyzes existing government databases to identify combinations of individual and social factors which are correlated with harmful outcomes, and flags children accordingly. The aim is to “prioritize minors [and] achieve greater efficiency in the intervention.”
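
As a rough illustration of this correlational logic, the sketch below trains a simple classifier on historical administrative records and then ranks new cases by predicted risk. It is emphatically not the Childhood Alert model, whose actual code and variables are not public; the data here is synthetic and the feature names are invented.

```python
# Illustrative sketch of correlational risk prediction; NOT the Childhood Alert
# model. All data is synthetic and the feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Historical records: one row per child, with features such as
# [family_on_social_programs, neighborhood_unemployment, prior_cps_contact].
X_history = rng.random((1000, 3))
# Whether a harmful outcome was later recorded for that child (0 or 1).
y_history = (X_history @ np.array([1.5, 2.0, 2.5])
             + rng.normal(0, 1, 1000) > 3.0).astype(int)

# The model learns which past characteristics correlate with recorded harm...
model = LogisticRegression().fit(X_history, y_history)

# ...and children who resemble past cases receive higher risk scores.
X_current = rng.random((5, 3))
risk_scores = model.predict_proba(X_current)[:, 1]
ranking = np.argsort(risk_scores)[::-1]  # highest predicted "risk" first
print(risk_scores.round(2), ranking)
```

The sketch also makes the skew discussed below concrete: the model can only score children who appear in the training databases, and whatever correlates with appearing there, such as receipt of public assistance, becomes a driver of the score.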

A skewed picture of risk

But the Childhood Alert System is fundamentally skewed. The tool analyzes databases about the beneficiaries of public programs and services, such as Chile’s Social Information Registry. It thereby only examines a subset of the population of children—those whose families are accessing public programs. Families in higher socioeconomic brackets—who do not receive social assistance and thus do not appear in these databases—are already excluded from the picture, despite the fact that children from these groups can also face abuse. Indeed, the Childhood Alert system’s developers themselves acknowledged in their final report that the tool has “reduced capability for identifying children at high risk from a higher socioeconomic level” due to the nature of the databases analyzed. The tool, from its inception and by its very design, is limited in scope and completely ignores wealthier groups.

The analysis then proceeds on a problematic basis, whereby socioeconomic disadvantage is equated with risk. Selected variables include: the social programs of which the child’s family are beneficiaries; the family’s educational background; socioeconomic measures from Chile’s Social Registry of Households; and a whole host of geographical variables, including the number of burglaries, the percentage of single-parent households, and the unemployment rate in the child’s neighborhood. Each of these variables is a direct measure of poverty. By this design, children in poorer areas can be expected to receive higher risk scores, which is likely to perpetuate over-intervention in certain neighborhoods.

Economic and social inequalities, including significant regional disparities in living conditions, persist in Chile. As elsewhere, poverty and marginalization do not fall evenly. Women, migrants, those living in rural areas, and indigenous groups are more likely to live in poverty—those from indigenous groups have Chile’s highest poverty rates. As the Alert System is skewed towards low-income populations, it will likely disproportionately flag children from indigenous groups, thus raising issues of racial and ethnic bias. Furthermore, the datasets used will also reflect inequalities and biases. Public datasets about families’ previous interactions with child protective services, for example, are populated through social workers’ inputs. Biases against indigenous families, young mothers, or migrants—reflected through disproportionate investigations or stereotyped judgments about parenting—will be fed into the database.

The developers of this predictive tool wrote in their evaluation that concerns about racial disparities “have been expressed in the context of countries like the United States, where there are greater challenges related to racism,” but that “[i]n the local Chilean context, we frankly don’t see similar concerns about race.” As Paz Peña points out, this dismissal is “difficult to understand” in light of the evidence of racism and racialized poverty in Chile.

Predictive systems such as these are premised on linking individuals’ characteristics and circumstances with the incidence of harm. As Abeba Birhane puts it, such approaches by their nature “force determinability [and] create a world that resembles the past” through reinforcing stereotypes, because they attach risk factors to certain individual traits.

The global context

These issues of bias, disproportionality, and determinacy in predictive child welfare tools have already been raised in other countries. Public outcry, ethical concerns, and evidence that these tools simply do not work as intended, have led many such systems to be scrapped. In the United Kingdom, a local authority’s Early Help Profiling System which “translates data on families into risk profiles [of] the 20 families in most urgent need” was abandoned after it had “not realized the expected benefits.” The U.S. state of Illinois’ child welfare agency strongly criticized and scrapped its predictive tool which had flagged hundreds of children as 100% likely to be injured while failing to flag any of the children who did tragically die from mistreatment. And in New Zealand, the Social Development Minister prevented the deployment of a predictive tool on ethical grounds, purportedly noting: “These are children, not lab rats.”

But while predictive tools are being scrapped on grounds of ethics and ineffectiveness in certain contexts, these same systems are spreading across the Global South. Indeed, the Chilean case demonstrates this trend especially clearly. The team of researchers who developed Chile’s Childhood Alert System is the very same team whose modelling was halted by the New Zealand government due to ethical questions, and whose predictive tool for the U.S. state of Pennsylvania was the subject of high-profile and powerful critique by many actors including Virginia Eubanks in her 2018 book Automating Inequality.

As Paz Peña noted, it should come as no surprise that systems which are increasingly deemed too harmful in some Global North contexts are proliferating in the Global South. These spaces are often seen as an “easier target,” with lower chances of backlash than places like New Zealand or the United States. In Chile, weaker institutions resulting from the legacies of military dictatorship and the staunch commitment to a “subsidiary” (streamlined, outsourced, neoliberal) state may be deemed to provide more fertile ground for such systems. Indeed, the tool’s developers wrote in a report that achieving acceptance of the system in Chile would be “simpler as it is the citizens’ custom to have their data processed to stratify their socioeconomic status for the purpose of targeting social benefits.”

This highlights the indispensability of international comparison, cooperation, and solidarity. Those of us working in this space must pay close attention to developments around the world as these systems continue to be hawked at breakneck speed. Identifying parallels, sharing information, and collaborating across constituencies is vital to support the organizations and activists who are working to raise awareness of these systems.

April 20, 2022. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

“Killing two birds with one stone?” The Cashless COVID Welfare Payments Aimed at Boosting Consumption

TECHNOLOGY & HUMAN RIGHTS

“Killing two birds with one stone?” The Cashless COVID Welfare Payments Aimed at Boosting Consumption

In launching its COVID-19 relief payments scheme, the South Korean government had two goals: providing a safety net for its citizens and boosting consumption for the economy. It therefore provided cashless payments, issuing credit card points rather than cash. However, this had serious implications for the vulnerable.

In May 2020, South Korea’s government distributed its COVID-19 emergency relief payments to all households through cashless channels. Recipients predominantly received points on credit cards rather than cash transfers. From the outset, the government stated explicitly that this universal transfer scheme had two goals: it was not only intended to mitigate the devastating impacts of the pandemic on people’s livelihoods, but also explicitly aimed at simultaneously boosting consumption in the South Korean economy. Providing cash would not necessarily boost consumption as it could be placed in savings accounts. Therefore, credit card points were offered instead to require recipients to spend the relief. But in trying to “kill two birds with one stone” by promoting consumption through the relief program, the government jeopardized the welfare aim of this program.

Once the payouts began, the government boasted that the delivery of the relief funds was timely and efficient. The relief program had been launched on the basis of business agreements with credit card companies for “rapid and smooth” payment, and indeed the card-based channel enabled much faster distribution than in other countries. Although “offline” applications for the relief program could be made in person at banks, the scheme was designed around the submission of applications through credit card companies’ websites or apps. The relief funds were then deposited onto recipients’ credit or debit cards in the form of points—separated from normal credit card points—within two days of applying. In September 2021, during the second round of universal relief payments, known as the “COVID-19 Win-Win National Relief Fund,” 90% of expected recipients received their payments within 12 days.

Restricting spending to boost spending

However, paying recipients in credit card points meant restricting their access to cash. While low-income households received the relief fund in cash during the first round of COVID-19 relief, in the second round they had to apply for the payment and could only choose among cashless methods, including credit cards and debit cards. To make matters worse, the policy placed constraints on where the points could be used, in the name of encouraging consumption and growing the local economy. The points could only be used in designated places: they could not be used to pay utility bills, repay a mortgage, or shop online, and they could not be transferred to others’ bank accounts or withdrawn as cash. Recipients therefore had no choice but to spend their relief funds in certain local restaurants, markets, and clothing stores. Points that had not been used within approximately three to four months of disbursement were returned to the national treasury. All of these conditions were the outcome of the fact that the policy specifically aimed at boosting consumption.
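
The combined effect of these conditions can be thought of as a set of authorization rules attached to the points. The sketch below is a simplified reconstruction of the restrictions described above; the category names, the locality check, and the exact expiry window are illustrative assumptions, not the actual card-network rules.

```python
# Simplified reconstruction of the spending rules described above; the category
# names, locality check, and expiry window are illustrative assumptions.
from datetime import date, timedelta

BLOCKED_CATEGORIES = {"utilities", "mortgage", "online_shopping",
                      "bank_transfer", "cash_withdrawal"}

def authorize(category: str, merchant_is_designated_local: bool,
              disbursed: date, today: date) -> bool:
    """Points may only be spent at designated local merchants, in permitted
    categories, before they expire and revert to the national treasury."""
    if category in BLOCKED_CATEGORIES:
        return False  # rent, bills, online purchases, cash-outs: all refused
    if not merchant_is_designated_local:
        return False  # spending is tied to designated local businesses
    if today > disbursed + timedelta(days=105):  # roughly 3-4 months
        return False  # unused points are clawed back
    return True

print(authorize("restaurant", True, date(2021, 9, 1), date(2021, 9, 20)))  # True
print(authorize("utilities", True, date(2021, 9, 1), date(2021, 9, 20)))   # False
```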

Jeopardizing the welfare aim

These restrictions had significant repercussions on people in poverty, in two key ways. First, the relief fund failed to fulfill the right to social protection of vulnerable people at risk. As utility bills, telecommunication fees, and even health insurance fees could not be paid with the points, many were left unable to pay for the things they most needed to pay for, while much-needed funds remained effectively stranded on the card. What use is a card meant only for restaurants and shops when one is in arrears on utility bills and health insurance fees, and at risk of having one’s electricity supply and health insurance benefits cut off? Those who needed cash immediately sometimes handed their credit cards to other people to use, requesting payment back in cash at below face value. It was also reported that a number of people bought products at stores where relief fund points could be used and then sold the products at a lower price on second-hand online markets to obtain cash. Although the government warned that it would crack down on such “illegal transactions,” the demand for cash could not be controlled.

Second, the right to housing of vulnerable populations was not sufficiently protected through this scheme. Homeless persons, who needed the most help, were severely affected because the cashless relief funds could not function as a payment method for monthly rent. In one survey, homeless people and slice-room dwellers were the group that most strongly agreed that “the COVID-19 relief fund should be distributed in cash.” Further, given that low-income people spent a higher proportion of their income on rent than those from other social classes, the fact that the relief funds could not be used on rent also significantly affected low-income households. A number of temporary or informal workers who lost their jobs due to the pandemic were on the verge of being pushed into poorer conditions because they could not afford their rent. The relief program could not help these groups cover their most urgent expenditures—housing costs—at all.

Boosting consumption can be expected as an indirect effect of government relief funds, but it must not be adopted as a specific goal of such programs. Attempting to achieve this consumption-oriented goal through the relief payments resulted in the scheme’s design imposing limitations on the use of funds, thereby undermining the scheme’s ability to help those in the most extreme need. As the government set boosting consumption as one of the aims of the program and seemingly prioritized it over the welfare aim, the delivery of the payments was devised in an inappropriate way that did not take the most vulnerable into account.

Killing two birds with one stone?

The Korea Development Institute (KDI) found that only about 30% of the first emergency relief funds led to an increase in consumption, while the remaining 70% went to household debt repayment or savings. In the end, the cashless relief stipend did not successfully increase consumption, all while weakening the program’s social security function.

Such schemes aimed at “killing two birds with one stone” were doomed to fail from the beginning because the two goals come into tension with one another in the program’s design. The consumption aim is likely to harm the welfare aim by pushing for cashless, controlled, and restricted use. The sole purpose of emergency relief funds in a crisis should be to provide assistance for the most vulnerable. Such schemes should be delivered in a way that best fulfills this aim: they should be focused on providing a safety net, and should be designed from the perspective of rights-holders, not of consumers.

April 19, 2022. Bo Eun Kwon, LLM program, NYU School of Law, whose interests include international human rights law, economic and social rights, and digital governance. She has worked at the National Human Rights Commission of Korea.

Singapore’s “smart city” initiative: one step further in the surveillance, regulation and disciplining of those at the margins

TECHNOLOGY & HUMAN RIGHTS

Singapore’s “smart city” initiative: one step further in the surveillance, regulation and disciplining of those at the margins

Singapore’s smart city initiative creates an interconnected web of digital infrastructures which promises citizens safety, convenience, and efficiency. But the smart city is experienced differently by individuals at the margins, particularly migrant workers, who are experimented on at the forefront of technological innovation.

On February 23, 2022, we hosted the tenth event of the Transformer States Series on Digital Government and Human Rights, titled “Surveillance of the Poor in Singapore: Poverty in ‘Smart City’.” Christiaan van Veen and Victoria Adelmant spoke with Dr. Monamie Bhadra Haines about the deployment of surveillance technologies as part of Singapore’s “smart city” initiative. This blog outlines the key themes discussed during the conversation.

The smart city in the context of institutionalized racial hierarchy

Singapore has consistently been hailed as the world’s leading smart city. For a decade, the city-state has been covering its territory with ubiquitous sensors and integrated digital infrastructures with the aim, in the government’s words, of collecting information on “everyone, everything, everywhere, all the time.” But these smart city technologies are layered on top of pre-existing structures and inequalities, which mediate how these innovations are experienced.

One such structure is an explicit racial hierarchy. As an island nation with a long history of multi-ethnicity and migration, Singapore has witnessed significant migration from Southern China, the Malay Peninsula, India, and Bangladesh. Borrowing from the British model of race-based regulation, this multi-ethnicity is governed by the post-colonial state through the explicit adoption of four racial categories – Chinese, Malay, Indian and Others (or “CMIO” for short) – which are institutionalized within immigration policies, housing, education and employment. As a result, while migrant workers from South and Southeast Asia are the backbone of Singapore’s blue-collar labor market, they occupy the bottom tier of the racial hierarchy; are subject to stark precarity; and have become the “objects” of extensive surveillance by the state.

The promise of the smart city

Singapore’s smart city initiative is “sold” to the public through narratives of economic opportunity and job creation in the knowledge economy, improved environmental sustainability, and increased efficiency and convenience. By collecting and inter-connecting all kinds of “mundane” data – such as electricity patterns, data from increasingly intrusive IoT products, and geo-location and mobility data – into centralized databases, smart cities are said to provide more safety and convenience. Singapore’s hyper-modern, technologically advanced society promises efficient and seamless public services, and the constant technology-driven surveillance and the loss of a few civil liberties are viewed by many as a small price to pay for such efficiency.

Further, the collection of large quantities of data from individuals is promised to enable citizens to be better connected with the government, while governments’ decisions, in turn, will be based upon the purportedly objective data from sensors and devices, thereby freeing decision-making from human fallibility and rendering it more neutral.

The realities: disparate impacts of smart city surveillance on migrant workers

However, smart cities are not merely economic or technological endeavors, but techno-social assemblages that create and impact different publics differently. As Monamie noted, specific imaginations and imagery of Singapore as a hyper-modern, interconnected, and efficient smart city can obscure certain types of racialized physical labor, such as the domestic labor of female Southeast-Asian migrant workers.

Migrant workers are uniquely impacted by increasing digitalization and datafication in Singapore. For years, these workers have been housed in dormitories with occupancy often exceeding capacity, located in the literal “margins” or outskirts of the city: migrant workers have long been physically kept separate from the rest of Singapore’s population within these dormitory complexes. They are stereotyped as violent or frequently inebriated, and the dormitories have for years been surveilled through digital technologies including security cameras, biometric sensors, and data from social media and transport services.

The pandemic highlighted and intensified the disproportionate surveillance of migrant workers within Singapore. Layered on top of the existing technological surveillance of migrants’ dormitories, a surveillance assemblage for COVID-19 contact tracing was created. Measures in the name of public health were deployed to carefully surveil these workers’ bodies and movements. Migrant workers became “objects” of technological experimentation as they were required to use a multitude of new mobile-based apps that integrated immigration data and work permit data with health data (such as body temperature and oximeter readings) and Covid-19 contact tracing data. The permissions required by these apps were also quite broad – including access to Bluetooth services and location data. All the data was stored in a centralized database.

Even though surveillant contact-tracing technologies were later rolled out across Singapore and normalized around the world, the important point here is that these systems were deployed exclusively on migrant workers first. Some apps, Monamie pointed out, were indeed required only of migrant workers, while citizens did not have to use them. This use of interconnected networks of surveillance technologies thus highlights the selective experimentation that underpins smart city initiatives. While smart city initiatives are, by their nature, premised on large-scale surveillance, we often see that policies, apps, and technologies are tried on individuals and communities with the least power first, before spilling out to the rest of the population. In Singapore, the objects of such experimentation are migrant workers who occupy “exceptional spaces” – of being needed to ensure the existence of certain labor markets, but also of needing to be disciplined and regulated. These technological initiatives, in subjecting specific groups at the margins to more surveillance than the rest of the population and requiring them to use more tech-based tools than others, serve to exacerbate the “othering” and isolation of migrant workers.

Forging eddies of resistance

While Monamie noted that “activism” is “still considered a dirty word in Singapore,” there have been some localized efforts to challenge some of the technologies within the smart city, in part due to the intensification of surveillance spurred by the pandemic. These efforts, and a rapidly-growing recognition of the disproportionate targeting and disparate impacts of such technologies, indicate that the smart city is also a site of contestation with growing resistance to its tech-based tools.

March 18, 2022. Ramya Chandrasekhar, LLM program at NYU School of Law, whose research interests relate to data governance, critical infrastructure studies, and critical theory. She previously worked with technology policy organizations and at a leading law firm in India.

Experimental automation in the UK immigration system

TECHNOLOGY & HUMAN RIGHTS

Experimental automation in the UK immigration system

The UK government is experimenting with automated immigration systems. The promised benefits of automation are inevitably attractive, but these experiments routinely expose people—including some of the most vulnerable—to unacceptable risks of harm.

In April 2019, The Guardian reported that couples accused of sham marriages were increasingly being subjected to invasive investigations by the Home Office, the UK government body responsible for immigration policy. Couples reported having their wedding ceremonies interrupted to be quizzed about their sex life, being told they were not in a genuine relationship because they were wearing pajamas in bed, and being present while their intimate photos were shared between officials.

The official tactics reported are worrying enough, but it has since come to light through the efforts of a legal charity (the Public Law Project) and investigative journalists that an automated system is largely determining who gets investigated in the first place. An algorithm, hidden from public view, is sorting couples into “pass” and “fail” categories, based on eight unknown criteria.

Couples who “fail” this covert algorithmic test are subjected to intrusive investigations. They must attend an interview and hand over extensive evidence about their relationship, a process which has been described as “insulting” and “grueling.” These investigations can also prevent couples from getting married altogether. If the Home Office decides that a couple has failed to “comply” with an investigation—even if they are in a genuine relationship—the couple is denied a marriage certificate and forced to start the process all over again. One couple was reportedly ruled non-compliant for failing to provide six months of bank statements for an account that had only been open for four months. This makes it difficult for people to plan their weddings and their lives. And the investigation can lead to other immigration enforcement actions, such as visa cancellation, detention, and deportation. In one case, a sham marriage dawn raid led to a man being detained for four months, until the Home Office finally accepted that his relationship was genuine.

We know little about how this automated system operates in practice or its effectiveness in detecting sham marriages. The Home Office refuses to disclose or otherwise explain the eight criteria at the center of the system. There is a real risk that the system is racially discriminatory, however. The criteria were derived from historical data, which may well be skewed against certain nationalities. The Home Office’s own analysis shows that some nationalities, including Bulgarian, Greek, Romanian and Albanian people, receive “fail” ratings more frequently than others.

The sham marriages algorithm is, in many respects, a typical case of the deployment of automation in the UK immigration system. It is not difficult to understand why officials are seeking to automate immigration decision-making. Administering immigration policy is a tough job. Officials are often inexperienced and under pressure to process large volumes of decisions. Each decision will have profound effects for those subjected to it. This is not helped by the dense complexity of, and frequent changes in, immigration law and policy, which can bamboozle even the most hardened administrative lawyer. All of this, of course, takes place in an environment where migration remains one of the most vexed issues on the political agenda. Automation’s promised benefits of greater efficiency, lower costs, and increased consistency are, from the government’s perspective, inevitably attractive.

But in reality, a familiar pattern of risky experimentation and failure is already emerging. It begins with the Home Office deploying a novel automated system with the goal of cheaper, quicker, and more accurate decision-making. There is often little evidence to support the system’s effectiveness in delivering those goals and scant consideration of the risks of harm. Such systems are generally intended to benefit the government or the general, non-migrant population, rather than the people subject to them. When the system goes wrong and harms individuals, the Home Office fails to take adequate steps to address those harms. The justice system—with its principles and procedures developed in response to more traditional forms of public administration—is left to muddle through in trying to provide some form of redress. That redress, even where best efforts are made, is often unsatisfactory.

This is the story we seek to tell in our new book, Experiments in Automating Immigration Systems, through an exploration of three automated immigration systems in the UK: a voice recognition system used to detect fraud in English language testing; an algorithm for identifying “risky” visa applications; and automated decision-making in the process for EU citizens to apply to remain in the UK after Brexit. It is, at its core, a story of risky bureaucratic experimentation that routinely exposes people, including some of the most vulnerable, to unacceptable risks of harm. For example, some of the students caught up in the English language testing scandal were detained and deported, while others had to abandon their studies and fight for years through the courts to prove their innocence. While we focus on the UK experience, this story will no doubt be increasingly familiar in many countries around the world.

It is important to remember, however, that this story is just beginning. While it would be naïve to think that the tensions in public administration can ever be wholly overcome, the government must strive to reap the benefits of automation for all of society, in a way that is sensitive to and mitigates the attendant risks of injustice. That work is, of course, best led by the government itself.

But the collective work of journalists, charities, NGOs, lawyers, researchers, and others will continue to play a crucial role in ensuring, as far as possible, that automated administration is just and fair.

March 14, 2022. Joe Tomlinson and Jack Maxwell.
Dr. Joe Tomlinson is a Senior Lecturer in Public Law at the University of York.
Jack Maxwell is a barrister at the Victorian Bar.

U.S. government must adopt moratorium on mandatory use of biometric technologies in critical sectors, look to evidence abroad, urge human rights experts

TECHNOLOGY AND HUMAN RIGHTS

U.S. government must adopt moratorium on mandatory use of biometric technologies in critical sectors, look to evidence abroad, urge human rights experts

As the White House Office of Science and Technology Policy (OSTP) embarks on an initiative to design a ‘Bill of Rights for an AI-Powered World,’ it must begin by immediately imposing a moratorium on the mandatory use of AI-enabled biometrics in critical sectors, such as health, social welfare programs, and education, argue a group of human rights experts at the Digital Welfare State & Human Rights Project (the DWS Project) at the Center for Human Rights and Global Justice at NYU School of Law, and the Institute for Law, Innovation & Technology (iLIT) at Temple University School of Law.

In a 10-page submission responding to OSTP’s Request for Information, the DWS Project and iLIT argue that biometric identification technologies such as facial recognition and fingerprint-based recognition pose existential threats to human rights, democracy, and the rule of law. Drawing on comparative research and consultation with some of the leading international experts on biometrics and human rights, the submission details evidence of some of the concerns raised in countries including Ireland, India, Uganda, and Kenya. It catalogues the often-catastrophic effects of biometric failure, the unwieldy administrative requirements imposed on public services, and the pervasive lack of legal remedies and of basic transparency about the use of biometrics in government.

“We now have a great deal of evidence about the ways that biometric identification can exclude and discriminate, denying entire groups access to basic social rights,” said Katelyn Cioffi, a Research Scholar at the DWS Project. “Under many biometric identification systems, you can be denied health care, access to education, or even a driver’s license if you are not able or willing to authenticate aspects of your identity biometrically.” An AI Bill of Rights that allows for equal enjoyment of rights must learn from comparative examples, the submission argues, and ensure that AI-enabled biometrics do not merely perpetuate systematic discrimination. This means looking beyond frequently-raised concerns about surveillance and privacy, to how biometric technologies affect social rights such as health, social security, education, housing, and employment.

A key factor in the initiative’s success will be much-needed legal and regulatory reform across the United States federal system. “This initiative represents an opportunity for the U.S. government to examine the shortcomings of current laws and regulations, including equal protection, civil rights laws, and administrative law,” Laura Bingham, Executive Director of iLIT, stated. “The protections that Americans depend on fail to provide the necessary legal tools to defend their rights and safeguard democratic institutions in a society that increasingly relies on digital technologies to make critical decisions.”

The submission also urges the White House to place constraints on the actions of the U.S. government and U.S. companies abroad. “The United States plays a major role in the development and uptake of biometric technologies globally, through its foreign investment, foreign policy, and development aid,” said Victoria Adelmant, a Research Scholar at the DWS Project. “As the government moves to regulate biometric technologies, it must not ignore U.S. companies’ roles in developing, selling, and promoting such technologies abroad, as well as the government’s own actions in spheres such as international development, defense, and migration.”

For the government to mount an effective response to these harms, the experts argue that it must take heed of the parallel efforts of other powerful political actors, including China and the European Union, which are currently attempting to regulate biometric technologies. However, it must avoid a race to the bottom, and should not jump into a perceived ‘arms race’ with countries like China by pursuing an increasingly securitized biometric state and allowing the private sector to continue its unfettered ‘self-regulation’ and experimentation. Instead, the U.S. government should focus on acting as a global leader in enabling human rights-sustaining technological innovation.

The submission makes the following recommendations:

  1. Impose an immediate moratorium on the use of biometric technologies in critical sectors: biometric identification should never be mandatory in critical sectors such as education, welfare benefits programs, or healthcare.
  2. Propose and enact legislation to address the indirect and disparate impact of biometrics.
  3. Engage in further review and study of the human rights impacts of biometric technologies as well as of different legal and regulatory approaches.
  4. Build a comprehensive legal and regulatory approach that addresses the complex, systemic concerns raised by AI-enabled biometric identification technologies.
  5. Ensure that any new laws, regulations, and policies are subject to a democratic, transparent, and open process.
  6. Ensure that public education materials and any new laws, regulations, and policies are described and written in clear, non-technical, and easily accessible language.

This post was originally published as a press release on January 17, 2022.

The Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law aims to investigate systems of social protection and assistance in countries worldwide that are increasingly driven by digital data and technologies.

The Temple University Institute for Law, Innovation & Technology (iLIT) at Beasley School of Law pursues action research, experiential instruction, and advocacy with a mission to deliver equity, bridge academic and practical boundaries, and inform new approaches to technological innovation in the public interest.

Response to the White House Office of Science and Technology Policy’s Request for Information on Biometric Identification Technologies

TECHNOLOGY AND HUMAN RIGHTS

Response to the White House Office of Science and Technology Policy’s Request for Information on Biometric Identification Technologies

In January 2022, the Digital Welfare State & Human Rights Project team at the Center together with their partners at the Institute for Law, Innovation & Technology (iLIT) at Temple University, Beasley School of Law, submitted expert commentary to the United States White House’s Blueprint for an AI Bill of Rights initiative. 

The White House Office of Science and Technology Policy (OSTP) had embarked on an initiative to design a “Bill of Rights for an AI-Powered World,” and issued a Request for Information on Biometric Identification Technologies. The OSTP asked varied experts for input on the scope and extent of the use of biometric technologies, and to help the OSTP better understand ‘the stakeholders that are, or may be, impacted by their use or regulation.’ In response, our team filed a 10-page submission providing international and comparative information to inform OSTP’s understanding, in both research and regulation, of the social, economic, and political impacts of biometric technologies. The submission discusses the implications of AI-driven biometric technologies for human rights law, democracy, and the rule of law, and provides information about the ways in which various groups and communities can be negatively impacted by such technologies.

In this submission, we sought especially to draw attention to the importance of learning from other countries’ experiences with biometrics, and to show that the implications of biometric technologies go far beyond the frequently-raised concerns about surveillance and privacy. We therefore provided a range of comparative examples from countries around the world where biometric technologies have been adopted, including within essential services such as social security and housing sectors. We argued that the OSTP, in drafting its upcoming “AI Bill of Rights,” should learn from these comparative examples, to take account of how biometric technologies can affect social rights such as health, social security, education, housing, and employment. The submission also urges the OSTP to place constraints on the actions of the U.S. government and U.S. companies abroad.

This submission fed into the United States White House’s Blueprint for an AI Bill of Rights, released in October 2022. The Blueprint has since laid the groundwork for regulatory efforts to assess, manage, and prevent the risks posed by AI in the United States and abroad, and has been built upon in subsequent policy efforts.

Chosen by a Secret Algorithm: Colombia’s top-down pandemic payments

TECHNOLOGY AND HUMAN RIGHTS

Chosen by a Secret Algorithm: Colombia’s top-down pandemic payments

The Colombian government was applauded for delivering payments to 2.9 million people in just 2 weeks during the pandemic, thanks to a big-data-driven approach. But this new approach represents a fundamental change in social policy which shifts away from political participation and from a notion of rights.

On Wednesday, November 24, 2021, the Digital Welfare State and Human Rights Project hosted the ninth episode in the Transformer States conversation series on Digital Government and Human Rights, in an event entitled “Chosen by a secret algorithm: A closer look at Colombia’s Pandemic Payments.” Christiaan van Veen and Victoria Adelmant spoke with Joan López, a researcher at the Global Data Justice Initiative and at the Colombian NGO Fundación Karisma, about Colombia’s pandemic payments and their reliance on data-driven technologies and prediction. This blog highlights some core issues related to taking a top-down, data-driven approach to social protection.

From expert interviews to a top-down approach

The System of Identification of Potential Beneficiaries of Social Programs (SISBEN in Spanish) was created to assist in the targeting of social programs in Colombia. This system classifies the Colombian population along a spectrum of vulnerability through the collection of information about households, including health data, family composition, access to social programs, financial information, and earnings. This data is collected through nationwide interviews conducted by experts. Beneficiaries are then rated through a simple algorithm on a scale of 0 to 100, with 0 as the least prosperous and 100 as the most prosperous. SISBEN therefore aims to identify and rank “the poorest of the poor.” This centralized classification system is used by 19 different social programs to determine eligibility: each social program chooses its own cut-off score between 0 and 100 as a threshold for eligibility.
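
In other words, a single centralized score drives many separate eligibility decisions. The sketch below illustrates that architecture; the program names and cut-off values are hypothetical, chosen only to show how one score gates access to multiple programs.

```python
# Illustrative sketch of SISBEN-style cut-off targeting; the program names and
# thresholds are hypothetical. Lower scores indicate greater poverty.
PROGRAM_CUTOFFS = {
    "health_subsidy": 40.0,   # eligible if the score is at or below the cut-off
    "housing_support": 30.0,
    "cash_transfer": 25.0,
}

def eligible_programs(sisben_score: float) -> list[str]:
    """Each program applies its own cut-off to the same centralized score."""
    return [name for name, cutoff in PROGRAM_CUTOFFS.items()
            if sisben_score <= cutoff]

print(eligible_programs(28.5))  # ['health_subsidy', 'housing_support']
```

A difference of a few points on this single scale can thus determine whether a household receives several benefits or none.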

But in 2016, the National Development Office – the Colombian entity in charge of SISBEN – changed the calculation used to determine the profile of the poorest. It introduced a new and secret algorithm which would create a profile based on predicted income generation capacity. Experts collecting data for SISBEN through interviews had previously looked at the realities of people’s conditions: if a person had access to basic services such as water, sanitation, education, health and/or employment, the person was not deemed poor. But the new system sought instead to create detailed profiles about what a person could earn, rather than what a person has. This approach sought, through modelling, to predict households’ situation, rather than to document beneficiaries’ realities.

A new approach to social policy

During the pandemic, the government launched a new system of payments called the Ingreso Solidario (meaning “solidarity income”). This system would provide monthly payments to people who were not covered by any other existing social program that relied on SISBEN; the ultimate goal of Ingreso Solidario was to send money to 2.9 million people who needed assistance due to the crisis caused by COVID-19. The Ingreso Solidario was, in some ways, very effective. People did not have to apply for this program: if they were selected as eligible, they would automatically receive a payment. Many people received the money immediately into their bank accounts, and payments were made very rapidly, within just a few weeks. Moreover, the Ingreso Solidario was an unconditional transfer and did not condition the receipt of the money on the fulfillment of certain requirements.

But the Ingreso Solidario was based on a new approach to social policy, driven by technology and data sharing. The Government entered agreements with private companies, including Experian and TransUnion, to access their databases. Agreements were also made between different government agencies and departments. Through data-sharing arrangements across 34 public and private databases, the government cross-checked the information provided in the interviews with information in dozens of databases to find inconsistencies and exclude anyone deemed not to require social assistance. In relying on cross-checking databases to “find” people who are in need, this approach depends heavily on enormous data collection, and it increases the government’s reliance on the private sector.
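
The logic of such cross-checking can be sketched as follows: a household’s declared information is compared against each external database, and any inconsistency becomes grounds for exclusion. The field names, tolerance, and records below are invented for illustration; the actual matching rules used for the Ingreso Solidario are not public.

```python
# Illustrative sketch of exclusion by database cross-checking; the field names,
# tolerance, and records are invented, and the real matching rules are not public.
DECLARED = {"id": "123", "monthly_income": 300_000, "owns_vehicle": False}

EXTERNAL_DATABASES = [                         # stand-ins for the 34 sources
    {"id": "123", "monthly_income": 310_000},  # e.g., a credit bureau record
    {"id": "123", "owns_vehicle": True},       # e.g., a vehicle registry record
]

def excluded(declared: dict, databases: list, tolerance: float = 0.1) -> bool:
    """Exclude a household if any external record contradicts its declared data."""
    for record in databases:
        for field, value in record.items():
            if field == "id" or field not in declared:
                continue
            if isinstance(value, bool):
                if value != declared[field]:
                    return True   # categorical mismatch: deemed inconsistent
            elif abs(value - declared[field]) > tolerance * declared[field]:
                return True       # numeric discrepancy beyond tolerance
    return False

print(excluded(DECLARED, EXTERNAL_DATABASES))  # True: the vehicle registry disagrees
```

Under a rule like this, the burden of any stale or erroneous record falls on the household, which may be excluded without ever learning which database disagreed.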

The implications of this new approach

This new approach to social policy, as implemented through the Ingreso Solidario, has fundamental implications. First, the system is difficult to challenge. The algorithm used to profile vulnerability, to predict income-generating capacity, and to assign a score to people living in poverty is confidential. The Government consistently argued that disclosing information about the algorithm would lead to a macroeconomic crisis, because if people knew how the system worked, they would try to cheat it. Additionally, SISBEN has been normalized. Though there are many other ways that eligibility for social programs could be assessed, the public accepts it as natural and inevitable that the government has taken this arbitrary approach reliant on numerical scoring and predictions. Due to this normalization, combined with the lack of transparency, this new approach to determining eligibility for social programs has not been contested.

Second, in adopting an approach which relies on cross-checking and analyzing data, the Ingreso Solidario is designed to avoid any contestation in the design and implementation of the algorithm. This is a thoroughly technocratic endeavor. The idea is to use databases and avoid going to, and working with, the communities. The government was, in Joan’s words, “trying to control everything from a distance” to “avoid having political discussions about who should be eligible.” There were no discussions and negotiations between the citizens and the Government to jointly address the challenges of using this technology to target poor people. Decisions about who the extra 2.9 million beneficiaries should be were taken unilaterally from above. As Joan argued, this was intentional: “The mindset of avoiding political discussion is clearly part of the idea of Ingreso Solidario.”

Third, because people were unaware that they were going to receive money, those who received a payment felt like they had won the lottery. Thus, as Joan argued, people saw this money not “as an entitlement, but just as a gift that this person was lucky to get.” This therefore represents a shift away from a conception of assistance as something we are entitled to by right. But in re-centering the notion of rights, we are reminded of the importance of taking human rights seriously when analyzing and redesigning these kinds of systems. Joan noted that we need to move away from an approach of deciding what poverty is from above, and instead move towards working with communities. We must use fundamental rights as guidance in designing a system that will provide support to those in poverty in an open, transparent, and participatory manner which does not seek to bypass political discussion.

María Beatriz Jiménez, LL.M. program, NYU School of Law, with a research focus on digital rights. She previously worked for the Colombian government in the Ministry of Information and Communication Technologies and the Ministry of Trade.

TECHNOLOGY & HUMAN RIGHTS

India’s New National Digital Health Mission: A Trojan Horse for Privatization

Through the national Digital Health ID, India’s Modi government is implementing techno-solutionist, market-based reforms that further entrench the centrality of the private sector in healthcare. This has serious consequences for all Indians, but most of all for the country’s most vulnerable populations.

On August 15, 2020, India’s Prime Minister Narendra Modi launched the National Digital Health Mission (NDHM), under which every Indian citizen is to be provided with a unique digital health ID. This ID will link to patients’ health records (including prescriptions, diagnostic reports, and medical histories) and will enable easy access for both patients and health service providers. The aim of the NDHM is to allow patients to switch seamlessly between health service providers by facilitating providers’ access to patients’ health data and by enabling insurance providers to quickly verify and process claims. Accessible registries of health master data will also be created. But this digital health ID program is emblematic of a larger problem in India: the government’s steady withdrawal from healthcare, both as welfare and as a public service.

The digital health ID is a crucial part of Modi’s plans to create a new digital health infrastructure called the National Health Stack. This will form the health component of the existing India Stack, defined as “a set of digital public goods” intended to make it easy for innovators to introduce digital services in India across different sectors. The India Stack is built on the foundational user base provided by Aadhaar digital ID numbers. A “Unified Health Interface” will be created as a digital platform to manage healthcare-related transactions. It will be run by the National Health Authority (NHA), which also administers the flagship public health insurance scheme, the Ayushman Bharat Pradhan Mantri Jan Arogya Yojana (AB-PMJAY), which provides health coverage for around 500 million poor Indians.

The Modi government proclaims that the NDHM and the digital health ID will revolutionize the Indian healthcare system through technology-driven solutions. But this glosses over the government’s real motive: to incentivize the private sector to participate in, and rescue, India’s ailing healthcare system. Rather than invest more funds in public health infrastructure, the Indian government has decided to outsource healthcare services to private healthcare providers and insurance companies, using access to vast troves of health data as the proverbial carrot.

Indeed, the benefits of the NDHM for the private healthcare sector are numerous. It will provide valuable, interoperable data in the form of “health registries” that link data silos and act as a “single source of truth” for all healthcare stakeholders. This will enable quicker processing of claims and payments to health service providers. In an op-ed lauding the NDHM, the head of a major Indian hospital chain noted that it will “reduce administrative burden related to doctor onboarding, regulatory approvals and renewals, and hospital or payer empanelment.”

The government appears to have learned its lessons from the implementation of the AB-PMJAY, which allowed people below the poverty line to purchase healthcare services through state-funded health insurance. Although the scheme included both private and public hospitals, it relied heavily on private hospitals because public hospitals lacked sufficient facilities. Yet too few private hospitals enrolled: reimbursement rates were uncompetitive compared to market rates, and the scheme was plagued by long delays in insurance payments and by insurance fraud. Instead of building up public healthcare and reducing dependency on the private sector, however, the government is eager to fix this problem by offering private providers better incentives through the NDHM.

Meanwhile, it is unclear what the benefits to the public will be. Digitizing the healthcare system and making it easier for insurance companies to pay private hospitals for services does not solve more urgent and serious problems, such as the lack of healthcare facilities in rural areas. The COVID-19 pandemic saw public hospitals playing a dominant role in treatment and vaccination, while private hospitals took a backseat. Given this, increasing the reliance placed on the private healthcare system through the NDHM is counterintuitive.

This growing reliance on the private sector is also likely to further disadvantage people living in poverty. The lack of suitable government hospitals forces people into private hospitals, where they are often required to pay more than the amount covered by the government-funded AB-PMJAY. Further, India’s National Human Rights Commission has taken the position that denial of care by private service providers is outside its ambit, notwithstanding their enrollment in state-funded insurance schemes like the AB-PMJAY. And because the digital health ID will enable insurance companies to access sensitive health data, they may deny insurance or charge higher premiums to those most in need, further entrenching discrimination and inequality. Getting coverage with a genetic disorder, for instance, is already extremely difficult in India; a digital health ID could make it worse by giving insurance companies access to this information, rendering premiums prohibitively expensive for millions who need coverage. Digitization also renders highly personal health records susceptible to breaches: such privacy concerns led many persons living with HIV to drop out of treatment programs when antiretroviral therapy centers began collecting Aadhaar details from patients.

Not having a digital health ID could even lead to exclusion from vital healthcare. This is not hypothetical. After numerous concerning reports, including allegations that a patient died after two hospitals demanded Aadhaar details he did not have, the government had to issue a clarification that no one should be denied COVID-19 vaccines or oxygen for lack of Aadhaar.

Nonetheless, plans are speeding ahead as the “usual suspects” of India’s techno-solutionist projects turn their efforts to healthcare. RS Sharma, former Director General of the Unique Identification Authority of India (UIDAI), the government agency responsible for Aadhaar, is the current CEO of the NHA. The National Health Stack was reportedly developed in consultation with i-SPIRT, a group of so-called “volunteers” with private sector backgrounds who act as intermediaries between the Indian government and the tech sector, and who played a vital role in embedding Aadhaar in society through private companies. And the committee set up to examine the merits of the National Health Stack was headed by a former UIDAI chairman.

Steered by individuals with boundless faith in the power of technology and in the private sector’s entrepreneurial drive to rescue Indian government and governance, India is determinedly marching forward with technology-driven, market-based reforms of public services and welfare. All of this is underpinned by a heavy tendency towards privatization and is, in turn, inspired by the private sector: the NDHM, for instance, is guided by the tagline “Think Big, Start Small, Scale Fast,” a business philosophy for start-ups.

Perhaps most concerning of all, the neoliberal withdrawal of government from crucial public services to make space for the private sector has resulted in the rationing of those goods and services, with fewer people able to access them. The digital health ID is unlikely to change this in India’s health sector; instead, it is enabling privatization by stealth.

December 14, 2021. Sharngan Aravindakshan, LL.M. program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2021-22. He previously worked for the Centre for Communication Governance in India.