TECHNOLOGY & HUMAN RIGHTS

Risk Scoring Children in Chile

On March 30, 2022, Christiaan van Veen and Victoria Adelmant hosted the eleventh event in our “Transformer States” interview series on digital government and human rights. In conversation with human rights expert and activist Paz Peña, we examined the implications of Chile’s “Childhood Alert System,” an “early warning” mechanism which assigns risk scores to children based on their calculated probability of facing various harms. This blog picks up on the themes of the conversation. The video recording and additional readings can be found below.

The deaths of over a thousand children in privatized care homes in Chile between 2005 and 2016 have, in recent years, pushed the issue of child protection high onto the political agenda. The country’s limited legal and institutional protections for children have been consistently critiqued in the past decade, and calls for more state intervention, to reverse the legacies of Pinochet-era commitments to “hands-off” government, have been intensifying. On his first day in office in 2018, former president Sebastián Piñera promised to significantly strengthen and institutionalize state protections for children. He launched a National Agreement for Childhood and established local “childhood offices” and an Undersecretariat for Children; a law guaranteeing children’s rights was passed; and the Sistema Alerta Niñez (“Childhood Alert System”) was developed. This system uses predictive modelling software to calculate children’s likelihood of facing harm or abuse, dropping out of school, and other such risks.

Predictive modelling calculates the probabilities of certain outcomes by identifying patterns within datasets. It operates through a logic of correlation: where persons with certain characteristics experienced harm in the past, those with similar characteristics are likely to experience harm in the future. Developed jointly by researchers at Auckland University of Technology’s Centre for Social Data Analytics and the Universidad Adolfo Ibáñez’s GobLab, the Childhood Alert predictive modelling software analyzes existing government databases to identify combinations of individual and social factors which are correlated with harmful outcomes, and flags children accordingly. The aim is to “prioritize minors [and] achieve greater efficiency in the intervention.”
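
To make this logic concrete, the following is a deliberately simplified, hypothetical sketch of how such a model is typically built and used: a classifier is fitted to historical administrative records, and children whose characteristics resemble past cases of harm receive higher scores. The feature names, data, and threshold below are invented for illustration and are not drawn from the Childhood Alert System itself.

```python
# Hypothetical illustration only: a pattern-matching risk model of the kind
# described above, NOT the actual Childhood Alert System. Feature names,
# data, and the threshold are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: each row is a child described by administrative features;
# the label marks whether a harmful outcome was later recorded.
# Columns: [in_social_program, prior_cps_contact, n_benefits, neighborhood_unemployment]
X_train = np.array([
    [1, 0, 3, 0.12],
    [0, 1, 1, 0.25],
    [1, 1, 4, 0.30],
    [0, 0, 0, 0.05],
])
y_train = np.array([0, 1, 1, 0])  # 1 = harm recorded later

model = LogisticRegression().fit(X_train, y_train)

# Scoring: children whose features resemble past cases of harm get higher scores.
new_children = np.array([[1, 1, 3, 0.28], [0, 0, 0, 0.04]])
risk_scores = model.predict_proba(new_children)[:, 1]
flagged = risk_scores > 0.5  # an arbitrary prioritization threshold
print(risk_scores, flagged)
```

Because every input in a system of this kind comes from administrative data about families receiving public assistance, the “risk” the model learns to predict is inseparable from the circumstances of poverty recorded in those databases, a skew explored in the next section.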

A skewed picture of risk

But the Childhood Alert System is fundamentally skewed. The tool analyzes databases about the beneficiaries of public programs and services, such as Chile’s Social Information Registry. It thereby only examines a subset of the population of children—those whose families are accessing public programs. Families in higher socioeconomic brackets—who do not receive social assistance and thus do not appear in these databases—are already excluded from the picture, despite the fact that children from these groups can also face abuse. Indeed, the Childhood Alert system’s developers themselves acknowledged in their final report that the tool has “reduced capability for identifying children at high risk from a higher socioeconomic level” due to the nature of the databases analyzed. The tool, from its inception and by its very design, is limited in scope and completely ignores wealthier groups.

The analysis then proceeds on a problematic basis, whereby socioeconomic disadvantage is equated with risk. Selected variables include: the social programs of which the child’s family are beneficiaries; families’ educational backgrounds; socioeconomic measures from Chile’s Social Registry of Households; and a whole host of geographical variables, including the number of burglaries, the percentage of single-parent households, and the unemployment rate in the child’s neighborhood. Each of these variables is a direct measure of poverty. Through this design, children in poorer areas can be expected to receive higher risk scores. This is likely to perpetuate over-intervention in certain neighborhoods.

Economic and social inequalities, including significant regional disparities in living conditions, persist in Chile. As elsewhere, poverty and marginalization do not fall evenly. Women, migrants, those living in rural areas, and indigenous groups are more likely to live in poverty, with indigenous groups experiencing Chile’s highest poverty rates. As the Alert System is skewed towards low-income populations, it will likely disproportionately flag children from indigenous groups, raising issues of racial and ethnic bias. Furthermore, the datasets used will reflect existing inequalities and biases. Public datasets about families’ previous interactions with child protective services, for example, are populated through social workers’ inputs. Biases against indigenous families, young mothers, or migrants—reflected through disproportionate investigations or stereotyped judgments about parenting—will be fed into the database.

The developers of this predictive tool dismissed such concerns in their evaluation, writing that concerns about racial disparities “have been expressed in the context of countries like the United States, where there are greater challenges related to racism. In the local Chilean context, we frankly don’t see similar concerns about race.” As Paz Peña points out, this dismissal is “difficult to understand” in light of the evidence of racism and racialized poverty in Chile.

Predictive systems such as these are premised on linking individuals’ characteristics and circumstances with the incidence of harm. As Abeba Birhane puts it, such approaches by their nature “force determinability [and] create a world that resembles the past” through reinforcing stereotypes, because they attach risk factors to certain individual traits.

The global context

These issues of bias, disproportionality, and determinacy in predictive child welfare tools have already been raised in other countries. Public outcry, ethical concerns, and evidence that these tools simply do not work as intended, have led many such systems to be scrapped. In the United Kingdom, a local authority’s Early Help Profiling System which “translates data on families into risk profiles [of] the 20 families in most urgent need” was abandoned after it had “not realized the expected benefits.” The U.S. state of Illinois’ child welfare agency strongly criticized and scrapped its predictive tool which had flagged hundreds of children as 100% likely to be injured while failing to flag any of the children who did tragically die from mistreatment. And in New Zealand, the Social Development Minister prevented the deployment of a predictive tool on ethical grounds, purportedly noting: “These are children, not lab rats.”

But while predictive tools are being scrapped on grounds of ethics and ineffectiveness in certain contexts, these same systems are spreading across the Global South. Indeed, the Chilean case demonstrates this trend especially clearly. The team of researchers who developed Chile’s Childhood Alert System is the very same team whose modelling was halted by the New Zealand government due to ethical questions, and whose predictive tool for Allegheny County in the U.S. state of Pennsylvania was the subject of high-profile and powerful critique by many actors, including Virginia Eubanks in her 2018 book Automating Inequality.

As Paz Peña noted, it should come as no surprise that systems which are increasingly deemed too harmful in some Global North contexts are proliferating in the Global South. These spaces are often seen as an “easier target,” with lower chances of backlash than places like New Zealand or the United States. In Chile, weaker institutions resulting from the legacies of military dictatorship and the staunch commitment to a “subsidiary” (streamlined, outsourced, neoliberal) state may be deemed to provide more fertile ground for such systems. Indeed, the tool’s developers wrote in a report that achieving acceptance of the system in Chile would be “simpler as it is the citizens’ custom to have their data processed to stratify their socioeconomic status for the purpose of targeting social benefits.”

This highlights the indispensability of international comparison, cooperation, and solidarity. Those of us working in this space must pay close attention to developments around the world as these systems continue to be hawked at breakneck speed. Identifying parallels, sharing information, and collaborating across constituencies is vital to support the organizations and activists who are working to raise awareness of these systems.

April 20, 2022. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

TECHNOLOGY & HUMAN RIGHTS

Regulating Artificial Intelligence in Brazil

On May 25, 2023, the Center for Human Rights and Global Justice’s Technology & Human Rights team hosted an event entitled Regulating Artificial Intelligence: The Brazilian Approach as the fourteenth episode of the “Transformer States” interview series on digital government and human rights. This in-depth conversation with Professor Mariana Valente, a member of the Commission of Jurists created by the Brazilian Senate to work on a draft bill to regulate artificial intelligence, raised timely questions about the specificities of ongoing regulatory efforts in Brazil. These developments may have significant global implications, potentially inspiring more creative, rights-based, and socio-economically grounded regulation of emerging technologies across the Global South.

In recent years, numerous initiatives to regulate and govern Artificial Intelligence (AI) systems have arisen in Brazil. First, there was the Brazilian Strategy for Artificial Intelligence (EBIA), launched in 2021. Second, legislation known as Bill 21/20, which sought to specifically regulate AI, was approved by the House of Representatives in 2021. And in 2022, a Commission of Jurists was appointed by the Senate to draft a substitute bill on AI. This latter initiative holds significant promise. While the EBIA and Bill 21/20 were heavily criticized for giving little weight to public input despite the participatory and multi-stakeholder mechanisms available, the Commission of Jurists took specific steps to be more open to public input. Its proposed draft legislation, which is grounded in Brazil’s socio-economic realities and legal tradition, may inspire further legal regulation of AI, especially in the Global South, given Brazil’s prominent role in other discussions of internet and technology governance.

Bill 21/20 was the first bill directed specifically at AI. But it was a very minimal bill; it effectively established that regulating AI should be the exception. It was also based on a decentralized model, meaning that each economic sector would regulate its own applications of AI: for example, the federal agency that regulates the healthcare sector would regulate AI applications in that sector. There were no specific obligations or sanctions for companies developing or deploying AI, only some guidelines for the government on how it should promote the development of AI. Overall, the bill was very friendly to the private sector’s preference for the most minimal regulation possible. It was quickly approved in the House of Representatives, without public hearings or much public attention.

It is important to note that this bill does not exist in isolation. Other legislation applies to AI in the country, such as consumer law and data protection law, as well as the Marco Civil da Internet (Brazilian Civil Rights Framework for the Internet). These existing laws have been leveraged by civil society to protect people from AI harms. For example, Instituto Brasileiro de Defesa do Consumidor (IDEC), a consumer rights organization, successfully brought a public civil action under consumer protection legislation against Via Quatro, the private company responsible for Line 4-Yellow of the São Paulo subway. The company was fined R$500,000 for collecting and processing individuals’ biometric data for advertising purposes without informed consent.

But, given that Bill 21/20 sought to specifically address the regulation of AI, academics and NGOs raised concerns that it would reduce the legal protections afforded in Brazil: it “gravely undermines the exercise of fundamental rights such as data protection, freedom of expression and equality” and “fails to address the risks of AI, while at the same time facilitating a laissez-faire approach for the public and private sectors to develop, commercialize and operate systems that are far from trustworthy and human-centric (…) Brazil risks becoming a playground for irresponsible agents to attempt against rights and freedoms without fearing for liability for their acts.”

As a result, the Senate decided that instead of voting on Bill 21/20, they would create a Commission of Jurists to propose a new bill.

The Commission of Jurists and the new bill

The Commission of Jurists was established in April 2022 and delivered its final report in December 2022. Even though the establishment of the Commission was considered a positive development, it was not exempt from criticism from civil society, both for the lack of racial and regional diversity in its membership and for the absence of other areas of knowledge that should contribute to the debate. This criticism reflects Brazil’s socio-economic realities: it is one of the most unequal countries in the world, and its inequalities are intersectional, cutting across race, gender, income, and territorial origin. AI applications will therefore have different effects on different segments of the population. This is already clear from the use of facial recognition in public security: more than 90% of the individuals arrested on the basis of this technology were Black. Another example is the use of an algorithm to evaluate requests for emergency aid during the pandemic, in which many vulnerable people had their benefits denied based on incorrect data.

During its mandate, the Commission of Jurists held public hearings, invited specialists from different areas of knowledge, and developed a public consultation mechanism allowing for written proposals. Following this process, the proposed bill contained several elements that were very different from Bill 21/20. First, the new bill borrows from the EU’s AI Act by adopting a risk-based approach: obligations are graduated according to the risk that AI systems pose. However, following the Brazilian tradition of structuring regulation around individual and collective rights, the new bill merges the European risk-based approach with a rights-based approach. It confers individual and collective rights that apply in relation to all AI systems, regardless of the level of risk they pose.

Secondly, the new bill includes some additional obligations for the public sector, considering its differential impact on people’s rights. For example, there is a ban on the processing of information about race, and there are provisions on public participation in decisions regarding the adoption of these systems. Importantly, though the Commission discussed including a complete ban on facial recognition technologies in public spaces for public security purposes, this proposal was not adopted: instead, the bill establishes a moratorium, requiring that a specific law be approved to regulate this use.

What the future holds for AI regulation in Brazil

After the Commission submitted its report, in May 2023 the president of the Senate presented a new bill for AI regulation replicating the Commission’s proposal. On August 16, 2023, the Senate established a temporary internal commission to discuss the different proposals for AI regulation that have been presented in the Senate to date.

It is difficult to predict what will happen once the internal commission’s work ends, as political decisions will shape the next developments. What is important to bear in mind, however, is how far the discussion has progressed: from an initial bill that was very minimal in scope and supported the idea of minimal regulation, to one that is much more protective of individual and collective rights and attentive to Brazil’s particular socio-economic realities. Brazil has historically played an important progressive role in global discussions on the regulation of emerging technologies, for example with its Marco Civil da Internet. As Mariana Valente put it, “Brazil has had in the past a very strong tradition of creative legislation for regulating technologies.” The Commission of Jurists’ proposal repositions Brazil in that role.

September 28, 2023. Marina Garrote, LLM program, NYU School of Law, whose research interests lie at the intersection of digital rights and social justice. Marina holds bachelor’s and master’s degrees from the Universidade de São Paulo and previously worked at Data Privacy Brazil, a civil society association dedicated to public interest research on digital rights.

TECHNOLOGY & HUMAN RIGHTS

Putting Profit Before Welfare: A Closer Look at India’s Digital Identification System 

Aadhaar is the largest national biometric digital identification program in the world, with over 1.2 billion registered users. While the poor have been used as a “marketing strategy” for this program, the “real agenda” is the pursuit of private profit.

Over the past months, the Digital Welfare State and Human Rights Project’s “Transformer States” conversations have highlighted the tensions and deceits that underlie attempts by governments around the world to digitize welfare systems and wider attempts to digitize the state. On January 27, 2021, Christiaan van Veen and Victoria Adelmant explored the particular complexities and failures of Aadhaar, India’s digital identification system, in an interview with Dr. Usha Ramanathan, a recognized human rights expert.

What is Aadhaar?

Aadhaar is the largest national digital identification program in the world; over 1.2 billion Indian residents are registered and have been given unique Aadhaar identification numbers. In order to create an Aadhaar identity, individuals must provide biometric data (fingerprints, iris scans, and facial photographs) and demographic information (name, birthdate, and address). Once an individual is set up in the Aadhaar system (which can be complicated, depending on whether the individual’s biometric data can be gathered easily, where they live, and their mobility), they can use their Aadhaar number to access public and, increasingly, private services. In many instances, accessing food rations, opening a bank account, and registering a marriage all require an individual to authenticate through Aadhaar. Authentication is mainly done by scanning one’s finger or iris, though One-Time Passcodes or QR codes can also be used.
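
At its core, each authentication is a yes/no check against the central database: the service provider submits the Aadhaar number together with a fingerprint, iris scan, or passcode, and receives back only a confirmation of whether they match. The sketch below is a simplified, hypothetical illustration of that flow; the registry contents, field names, and matching logic are invented, and this is not the UIDAI API.

```python
# Hypothetical sketch of a yes/no identity check of the kind Aadhaar performs.
# The registry contents, field names, and matching logic are invented for
# illustration; this is not the real UIDAI authentication API.

# A stand-in for the central biometric/demographic database.
CENTRAL_REGISTRY = {
    "1234-5678-9012": {"fingerprint": "template-A", "otp": "482913"},
}

def authenticate(aadhaar_number: str, factor_type: str, factor_value: str) -> bool:
    """Return only a yes/no answer: does the supplied factor (fingerprint
    template, iris template, or one-time passcode) match the stored record?"""
    record = CENTRAL_REGISTRY.get(aadhaar_number)
    if record is None:
        return False
    return record.get(factor_type) == factor_value

# A ration shop, bank, or marriage registry would gate its service on this boolean:
if authenticate("1234-5678-9012", "fingerprint", "template-A"):
    print("Authentication succeeded: service granted")
else:
    print("Authentication failed: service denied")
```

The design choice this illustrates is that the entire interaction reduces to a single boolean: if the match fails, for whatever reason, the service is simply withheld.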

The welfare “façade”

Unique Identification Authority of India (UIDAI) is the government agency responsible for administering the Aadhaar system. Its vision, mission, and values include empowerment, good governance, transparency, efficiency, sustainability, integrity and inclusivity. UIDAI has stated that Aadhaar is intended to facilitate “inclusion of the underprivileged and weaker sections of the society and is therefore a tool of distributive justice and equality.” Like many of the digitization schemes examined in the Transformer States series, the Aadhaar project promised all Indians formal identification that would better enable them to access welfare entitlements. In particular, early government statements claimed that many poorer Indians did not have any form of identification, therefore justifying Aadhaar as a way for them to access welfare. However, recent research suggests that less than 0.03% of Indian residents did not have formal identification such as birth certificates.

Although most Indians now have an Aadhaar “identity,” the Aadhaar system fails to live up to its lofty promises. The main issues preventing Indians from effectively claiming their entitlements are:

  • Shifting the onus of establishing authorization and entitlement onto citizens. A system that is supposed to make accessing entitlements and complying with regulations “straightforward” or “efficient” often results in frustrating and disempowering rejections or denials of services. The government asserts that the system is “self-cleaning,” which means that individuals have to fix their identity record themselves. For example, they must manually correct errors in their name or date of birth, despite not always having resources to do so.
  • Concerns with biometrics as a foundation for the system. When the project started, there was limited data or research on the effectiveness of biometric technologies for accurately establishing identity in the context of developing countries. However, the last decade of research reveals that biometric technologies do not work well in India. It can be impossible to reliably provide a fingerprint in populations with a substantial proportion of manual laborers and agricultural workers, and in hot and humid environments. Given that biometric data is used for both enrolment and authentication, these difficulties frustrate access to essential services on an ongoing basis.

Given these issues, Usha expressed concern that the system, initially presented as a voluntary program, is now effectively compulsory for those who depend on the state for support.

Private motives against the public good

The Aadhaar system is therefore failing the very individuals it was purported to be designed to help. The poorest are used as a “marketing strategy,” but it is clear that private profit is, and always was, the main motivation. From the outset, the Aadhaar “business model” would benefit private companies by growing India’s “digital economy” and creating a rich and valuable dataset. In particular, it was envisioned that the Aadhaar database could be used by banks and fintech companies to develop products and services, which further propelled the drive to get all Indians onto the database. Given the breadth and reach of the database, it is an attractive asset to private enterprises for profit-making and is seen as providing the foundation for the creation of an “Indian Silicon Valley.” Tellingly, the acronym “KYC,” used by UIDAI to assert that Aadhaar would help the government “know your citizen,” is now understood as “know your customer.”

Protecting the right to identity

The right to identity must not be confused with identification. Usha notes that “identity is complex and cannot be reduced to a number or a card,” because doing so empowers the data controller or data system to effectively choose whether to recognize the person seeking identification, or to “paralyse” their life by rejecting, or even deleting, their identification number. History shows the disastrous effects of using population databases to control and persecute individuals and communities, such as during the Holocaust and the Yugoslav Wars. Further, risks arise from the fact that identification systems like Aadhaar “fix” a single identity for individuals. Parts of a person’s identity that they may wish to keep separate—for example, their status as a sex worker, health information, or socio-economic status—are combined in a single dataset and made available in a variety of contexts, even if that data may be outdated, irrelevant, or confidential.

Usha concluded that there is a compelling need to reconsider and redraw attempts at developing universal identification systems to ensure they are transparent, democratic, and rights-based. They must, from the outset, prioritize the needs and welfare of people over claims of “efficiency,” which, in reality, have been attempts to obtain profit and control.

February 15, 2021. Holly Ritson, LLM program, NYU School of Law; and Human Rights Scholar with the Digital Welfare State and Human Rights Project.

TECHNOLOGY & HUMAN RIGHTS

On the Frontlines of the Digital Welfare State: Musings from Australia

Welfare beneficiaries are in danger of losing their payments to “glitches” or because they lack internet access. So why is digitization still seen as the shiny panacea to poverty?

I sit here in my local pub in South Australia using the Wi-Fi, wondering whether this will still be possible next week. A month ago, we were in lockdown, but my routine for writing required me to leave the house because I did not have reliable internet at home.

Not having internet may seem alien to many. When you are in a low-income bracket, things people take for granted become huge obstacles to navigate. This is becoming especially apparent as social security systems are increasingly digitized. Not having access to technologies can mean losing access to crucial survival payments.

A working phone with internet data is required to access the Australian social security system. Applicants must generally apply for payments through the government website, which is notorious for crashing. When the pandemic hit, millions of the newly unemployed were outraged that they could not access the website. Those of us already receiving payments just smiled wryly; we are used to this. We are told to use the website, but then it crashes, so we call and are put on hold for an hour. Then we get cut off and have to call back. This is normal. You also need a phone to fulfill reporting obligations. If you don’t have a working phone, or your battery dies, or your phone credit runs out, your payment can be suspended on the assumption that you’re deliberately shirking your reporting obligations.

In the last month, I was booted off my social security disability employment service. Although I had a certified disability affecting my job-seeking ability, the digital system had unceremoniously dumped me onto the regular job-seeking system, which punishes people for missing appointments. Unfortunately, the system had “glitched,” a popular term used by those in power for when payment systems fail. After narrowly missing a scheduled phone appointment, my payment was suspended indefinitely. Phone calls of over an hour didn’t resolve it; I didn’t even get to speak to a person who could have resolved the issue. This is the danger of trusting digital technology above humans.

This is also the huge flaw in Income Management (IM), the “banking system” through which social security payments are controlled. I put “banking system” in quotation marks because it’s not run by a bank; there are none of the consumer protections of financial institutions, nor the choice to move if you’re unhappy with the service. The cashless welfare card is a tool for such IM: beneficiaries on the card can only withdraw 20% of their payment as cash, and the card restricts how the remaining 80% can be spent (for example, purchases of alcohol and online retailers like eBay are restricted). IM was introduced in certain rural areas of Australia deemed “disadvantaged” by the government.
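
In practice, the card enforces two simple rules on every payment and purchase: a fixed cash share and a merchant-category block. The sketch below illustrates those rules using the 20/80 split described above; the merchant categories, blocked list, and amounts are otherwise invented and are not Indue’s actual rules.

```python
# Hypothetical sketch of the cashless card's spending rules as described above:
# 20% of a payment is available as cash, 80% only at non-restricted merchants.
# Merchant categories and the blocked list are illustrative.
CASH_SHARE = 0.20
BLOCKED_CATEGORIES = {"alcohol", "gambling", "online_marketplace"}  # e.g. eBay

def split_payment(amount: float) -> tuple[float, float]:
    """Divide a welfare payment into a cash-accessible portion and a card-only portion."""
    cash = round(amount * CASH_SHARE, 2)
    return cash, round(amount - cash, 2)

def card_purchase_allowed(merchant_category: str) -> bool:
    """Card-only funds are rejected at merchants in blocked categories."""
    return merchant_category not in BLOCKED_CATEGORIES

cash, restricted = split_payment(500.00)           # e.g. a fortnightly payment
print(cash, restricted)                            # 100.0, 400.0
print(card_purchase_allowed("supermarket"))        # True
print(card_purchase_allowed("online_marketplace")) # False
```

The point of the sketch is how blunt the rule is: anything paid in cash, from a school fête stall to a second-hand fridge, falls outside what the card will allow once the 20% is spent.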

The cashless welfare card is operated by Indue, a company contracted by the Australian government to administer social security payments. This is not a company with a good reputation for dealing with vulnerable populations. It is a monolith that is almost impossible to fight. Indue’s digital system can’t recognize rent cycles, meaning after a certain point in the month, the ‘limit’ for rent can be reached and a rent debit rejected. People have had to call and beg Indue to let them pay their landlords; others have been made homeless when the card stopped them from paying rent. They are stripped of agency over their own lives. They can’t use their own payments for second-hand school uniforms, or community fêtes, or buying a second-hand fridge. When you can’t use cash, avenues of obtaining cheaper goods are blocked off.

Certain politicians tout the cashless welfare card as a way to stop the poor from spending on alcohol and drugs. In reality, the vast majority affected by this system have no such problems with addiction. But when you are on the card, you are automatically classified as someone who cannot be trusted with your own money; an addict, a gambler, a criminal.

Politicians claim it’s like any other card, but this is a lie. It makes you a pariah in the community and is a tacit license for others to judge you. When you are at the whim and mercy of government policy, when you are reliant on government payments controlled by a third party, you are on the outside looking in. You’re automatically othered; you’re made to feel ashamed, stupid, and incapable.

Beyond this stigma, there are practical issues too. The cashless welfare card system assumes you have access to a smartphone and internet to check your account balance, which can be impossible for those with low incomes. Pandemic restrictions close the pubs, universities, cafes, and libraries which people rely on for internet access. Those without access are left by the wayside. “Glitches” are also common in Indue accounts: money can go missing without explanation. This ruins account-holders’ plans and forces them to waste hours having non-stop arguments with brick-wall bureaucracy and faceless people telling them they don’t have access to their own money.

Politicians recently had the opportunity to reject this system of brutality. The “Cashless Welfare Card trials” were slated to end on December 31, 2020, and a bill was voted on to determine if these “trials” would continue. The people affected by this system had already told politicians how much it ruins their lives. Once again, they used their meager funds to call politicians’ offices and beg them to see the hell they’re experiencing. They used their internet data to email and rally others to do the same. I personally delivered letters to two politicians’ offices, complete with academic studies detailing the problems with IM. For a split second, it seemed like the politicians listened and some even promised to vote to end the trials. But a last-minute backroom deal meant that these promises were broken. The lived experiences of welfare recipients did not matter.

The global push to digitize welfare systems must be interrogated. When the most vulnerable in society are in danger of losing their payments to “glitches” or because they lack internet access, it raises the question: why is digitization still seen as the shiny panacea to poverty?

February 1, 2021. Nijole Naujokas, an Australian activist and writer who is passionate about social justice rights for the vulnerable. She is the current Secretary of the Australian Unemployed Workers’ Union and is completing her Honours degree in Creative Writing at The University of Adelaide.

TECHNOLOGY & HUMAN RIGHTS

Marketizing the digital state: the failure of the ‘Verify’ model in the United Kingdom

Verify, the UK government’s digital identity program, sought to construct a market for identity verification in which companies would compete. But the assumption that companies should be positioned between government and individuals who are trying to access services has gone unquestioned.

The story of the UK government’s Verify service has been told as one of outright failure and a colossal waste of money. Intended as the single digital portal through which individuals accessing online government services would prove their identity, Verify underperformed for years and is now effectively being replaced. But accounts of its demise often focus on technical failures and inter-departmental politics, rather than evaluating the underlying political vision that Verify represents. This is a vision of market creation, whereby the government constructs a market for identity verification within which private companies can compete. As Verify is replaced and the UK government’s ‘digital transformation’ continues, the failings of this model must be examined.

Whether an individual wants to claim a tax refund from Her Majesty’s Revenue and Customs, renew her driver’s license through the Driver and Vehicle Licensing Agency, or receive her welfare payment from the Department for Work and Pensions, the government’s intention was that she could prove her identity to any of these bodies through a single online platform: Verify. This was a flagship project of the Government Digital Service (GDS), a unit working across departments to lead the government’s digital transformation. Much of GDS’ work was driven by the notion of ‘government as a platform’: government should design and build “supporting infrastructure” upon which others can build.

Squarely in line with this idea, Verify provides a “platform for identity.” GDS technologists wrote the software for the Verify platform, and the government then accredited companies as ‘identity providers’ (IdPs) which ‘plug into’ the platform and compete. An individual who seeks to access a government service online will see Verify on her screen and will be prompted by Verify to choose an identity provider. She will be redirected to that IdP’s website and must enter information such as her passport number or bank details. The IdP then checks this information against public and private databases before confirming her identity to the government service being requested. The individual therefore leaves the government website to verify her identity with a separate, private entity.
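
What this describes is a federated, redirect-based architecture: the government service never checks documents itself, but hands the user to their chosen IdP and waits for an identity assertion in return. The following is a minimal, hypothetical sketch of that hub logic; the IdP names, URLs, and assertion format are invented and do not reflect GDS’s actual implementation.

```python
# Hypothetical sketch of a hub-and-IdP identity verification flow of the kind
# Verify used. IdP names, URLs, and the assertion format are invented; this is
# not GDS's actual implementation.

ACCREDITED_IDPS = {
    "idp_a": "https://idp-a.example.com/verify",
    "idp_b": "https://idp-b.example.com/verify",
}

def start_verification(government_service: str, chosen_idp: str) -> str:
    """Step 1: the hub redirects the user away from the government service
    to the chosen private identity provider."""
    idp_url = ACCREDITED_IDPS[chosen_idp]
    return f"{idp_url}?return_to={government_service}"

def receive_assertion(assertion: dict) -> bool:
    """Step 2: the IdP checks the user's passport or bank details against its
    own databases, then sends back only a signed yes/no assertion of identity."""
    return assertion.get("identity_verified", False) and assertion.get("signature_valid", False)

# The government service only ever sees the outcome, not the checks themselves.
redirect_url = start_verification("https://www.gov.uk/claim-tax-refund", "idp_a")
outcome = receive_assertion({"identity_verified": True, "signature_valid": True})
print(redirect_url, outcome)
```

The structural consequence, discussed below, is that a private company sits in the middle of every attempted interaction between the individual and the state.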

As GDS “didn’t think there was a market,” it aimed to support “the development of a digital identity market that spans both public and private sectors” so that users could “use their verified identity accounts for private sector transactions as well as government services.” After Verify went live in 2016, the government accredited seven IdPs, including credit reporting agency Experian and Barclays bank. Government would pay IdPs per user, with the price per user decreasing as user volumes increased. GDS intended Verify to become self-funding: government funding would end in Spring 2020, at which point the companies would take over responsibility. GDS was confident that the IdPs would “keep investing in Verify” and would “ensure the success of the market.”

But a market failed to emerge. The government spent over £200 million on Verify and lowered its estimate of its financial benefits by 75%. Though IdPs were supposed to take over responsibility for Verify, almost every company withdrew. After April 2020, new users could register with either the (privatized) Post Office or Digidentity, the only two remaining IdPs. But the Post Office is “a ‘white-label’ version of Digidentity that runs off the same back-end identity engine.” Rather than creating a market, a monopoly effectively emerged.

This highlights the flaws of the underlying approach. Government paid to develop and maintain the software, and then paid companies to use that software. Government also bore most of the risk: companies could enter the scheme, be paid tens of millions, then withdraw if the service proved less profitable than expected, without having invested in building or maintaining the infrastructure. This is reminiscent of the UK government’s decision to bear the costs of maintaining railway tracks while having private companies profit from running trains on these tracks. Government effectively subsidizes profit.

GDS had been founded as a response to failings in the outsourcing of government IT: instead of procuring overpriced technologies, GDS would write software itself. But this prioritization of in-house development was combined with an ideological notion that government technologists’ role is to “jump-start and encourage private sector investment” and to build digital infrastructure while relying on the market to deliver services using that infrastructure. This ideal of marketizing the digital state represents a new “orthodoxy” for digital government; the National Audit Office has highlighted the lack of “evidence underpinning GDS’s assumptions that a move to a private sector-led model [was] a viable option for Verify.”

These assumptions are particularly troubling here, as identity verification is an essential moment within state-to-individual interactions. Companies were positioned between government and individuals, and effectively became gatekeepers. An individual trying to access an online government service was disrupted, as she was redirected and required to go through a company. Equal access to services was splintered into a choice of corporate gateways.

This is significant as the rate of successful identity verifications through Verify hovered around 40-50%, meaning over half of attempts to access online government services failed. More worryingly, the verification rate depended on users’ demographic characteristics, with only 29% of Universal Credit (welfare benefits) claimants able to use Verify. If claimants were unable to prove their identity to the system, their benefits applications were often delayed. They had to wait longer to access payments to which they were entitled by right. Indeed, record numbers of claimants have been turning to food banks while they wait for their first payment. It is especially important to question the assumption that a company needed to be inserted between individuals and government services when the stakes – namely further deprivation, hunger, and devastating debt – are so high.

Verify’s replacement became inevitable, with only two IdPs remaining. Indeed, the government is now moving ahead with a new digital identity framework prototype. This arose from a consultation which focused on “enabling the use of digital identity in the private sector” and fostering and managing “the digital identity market.” A Cabinet Office spokesperson has stated that this framework is intended to work “for government and businesses.”

The government appears to be pushing on with the same model, despite recurrent warning signs throughout the Verify story. As the government’s digital transformation continues, it is vital that the assumptions underlying this marketization of the digital state are fundamentally questioned.

March 30, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

TECHNOLOGY & HUMAN RIGHTS

I don’t see you, but you see me: asymmetric visibility in Brazil’s Bolsa Família Program

Brazil’s Bolsa Família Program, the world’s largest conditional cash transfer program, is indicative of broader shifts in data-driven social security. While its beneficiaries are becoming “transparent” as their data is made available, the way the State uses beneficiaries’ data is increasingly opaque.

“She asked a lot of questions and started filling out the form. When I asked her about when I was going to get paid, she said, ‘That’s up to the Federal Government.’” This experience of applying for Brazil’s Bolsa Família Program (“Programa Bolsa Família” in Portuguese, or PBF), the world’s largest conditional cash transfer program, hints at the informational asymmetries between individuals and the State. Such asymmetries have long existed, but information and communications technologies (ICTs) can exacerbate these imbalances. ICTs enable States to handle an increasing amount of personal data, and this is especially true in the PBF. In June 2020, 14.2 million Brazilian families living in poverty – 43.7 million individuals – were beneficiaries of the Bolsa Família program.

At the core of the PBF’s structure is a register called CadÚnico, which is used for more than 20 social policies. It includes detailed data on heads of households and less granular data on other family members. The law designates women as the heads of household and thereby as the main PBF beneficiaries. Information is collected about income, the number of people living together, levels of education and literacy, housing conditions, access to work, disabilities, and ethnic groups. This data is used to select PBF beneficiaries and to monitor their compliance with the conditions on which the maintenance of the benefit depends, such as requirements that children attend school. The federal government also uses the CadÚnico to identify multidimensional vulnerabilities, grant other benefits, and enable research. Although different programs feed the CadÚnico, the PBF is its most important information provider due to its colossal size. In March 2021, the CadÚnico comprised 75.2 million individual entries from 28.9 million families: PBF beneficiaries make up half of these.

The person responsible for the family unit within the PBF must answer all of the entries of the “main form,” which consists of 77 questions with varying degrees of detail and sensitivity. All these data points expose the sensitive personal information and vulnerabilities of low-income individuals.

The scope of this large and comprehensive dataset is celebrated by social policy experts because it enables the State to target needs for other policies. Indeed, the CadÚnico has been used to identify the relevant beneficiaries for policies ranging from electricity tariff discounts to higher education subsidies. Holding huge amounts of information about low-income individuals can allow States to proactively target needs-based policies.

But when the State is not guided by the principle of data minimization (i.e. collecting only the necessary data and no more), this appetite for information increases and places the burden of risks on individuals. They are transparent to the State, while the State becomes increasingly opaque to them.
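
Data minimization, as invoked here, is a concrete design constraint: a field should only be collected or retained if a declared purpose requires it. A minimal sketch of such a filter might look as follows; the field and purpose names are hypothetical and are not drawn from the CadÚnico’s actual forms.

```python
# Hypothetical sketch of a data-minimization filter: only fields mapped to a
# declared purpose are retained. Field and purpose names are illustrative.
REQUIRED_FIELDS_BY_PURPOSE = {
    "eligibility_check": {"household_income", "household_size", "municipality"},
    "school_attendance_condition": {"children_school_enrolment"},
}

def minimize(record: dict, purposes: list[str]) -> dict:
    """Drop every field that no declared purpose actually requires."""
    allowed = set().union(*(REQUIRED_FIELDS_BY_PURPOSE[p] for p in purposes))
    return {k: v for k, v in record.items() if k in allowed}

full_submission = {
    "household_income": 412.0,
    "household_size": 4,
    "municipality": "Recife",
    "ethnicity": "indigenous",          # collected, but not needed for these purposes
    "housing_conditions": "no sewage",  # likewise
}
print(minimize(full_submission, ["eligibility_check"]))
# {'household_income': 412.0, 'household_size': 4, 'municipality': 'Recife'}
```

The contrast with the 77-question main form described above is the point: without a rule of this kind, every additional answer becomes another exposure borne by the beneficiary.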

Upon registering for the PBF, citizens are not informed about what will happen to the information they provide. For example, the training materials for officials registering beneficiaries only note that they must warn potential beneficiaries of their liability for providing false or inaccurate information; they do not state that officials must tell beneficiaries how their data will be used, nor about their data rights, nor any details about when or whether they might receive their cash transfer. The emphasis, therefore, lies on the responsibilities of the potential beneficiary instead of the State. The lack of transparency about how people’s data will be used reduces citizens’ ability to exercise their rights.

In addition to the increased visibility of recipients to the State, the PBF also releases the beneficiaries’ data to the public due to strict transparency requirements. Though CadÚnico data is generally confidential, PBF recipients’ personal data is publicly available through different paths:

  • The Federal Government’s Transparency Portal publishes a monthly list containing the beneficiary’s name, municipality, NIS (social security number) and the amounts paid.
  • The Caixa Econômica Federal’s portal—the public bank that administers social benefits—allows anyone to check the status of the benefit by entering a name, NIS, and CPF (taxpayer identity number).
  • The NIS of any citizen can be queried at the Citizen’s Consultation Portal CadÚnico by providing name, mother’s name, and birth date.

Because a person’s status as a PBF beneficiary is made so easily accessible, the (mostly female) beneficiaries suffer a lack of privacy from all sides and are stigmatized. Not only are they surveilled by the State as it closely monitors the PBF’s conditionalities, but they are also monitored by fellow citizens. Citizens have made complaints to the PBF about beneficiaries they believe should not receive cash transfers. At InternetLab, we used the Brazilian Access to Information Law to gain access to some of these complaints. 60% of the complaints contained personal identification information about the accused beneficiary, suggesting that citizens are monitoring and reporting their “undeserving” neighbors and using the above portals to check the databases.

The availability of this data has further worrying consequences: at InternetLab, we have witnessed several instances of fraud and electoral propaganda directed at PBF beneficiaries’ phones, and it is not clear where this contact data came from. Different actors are profiling and targeting Brazilian citizens according to their socio-economic vulnerabilities.

The public availability of beneficiaries’ data is backed by law and arises from a desire to fight corruption in Brazil. This requires government spending, including on social programs, to be transparent. But spending on social programs has become more controversial in recent years amidst an economic crisis and the rise of conservative political majorities, and misplaced ideas of “corrupted beneficiaries” have mingled with anti-corruption sentiments. The emphasis has been placed on making beneficiaries “transparent,” rather than government.

Anti-corruption laws do not adequately differentiate between transparency practices that confront corruption and favor democracy, and those which disproportionately reinforce vulnerabilities and inequalities by focusing on recipients of social programs. Public contracts, public employees’ salaries, and beneficiaries of social benefits are all exposed on the same grounds. But these are substantially different uses of public resources, and the exposure of these different kinds of data has very unequal impacts, with beneficiaries more likely to be harmed by this “transparency.”

The personal data of social program beneficiaries should be treated with more care, and we should question whether disclosing so much information about them is necessary. In the wake of Brazil’s General Data Protection Law, which came into force last year, it is vital that the work to increase the transparency of the State continues while the privacy of the vulnerable is protected, not the other way around.

May 3, 2021. Nathalie Fragoso and Mariana Valente.
Nathalie Fragoso, Head of Research, Privacy and Surveillance, InternetLab.
Mariana Valente, Associate Director of InternetLab.

TECHNOLOGY & HUMAN RIGHTS

Fearing the future without romanticizing the past: the role for international human rights law(yers) in the digital welfare state to be

Universal Credit is one of the foremost examples of a digital welfare system and the UK’s approach to digital government is widely copied. What can we learn from this case study for the future of international human rights law in the digital welfare state?

Last week, Victoria Adelmant and I organized a two-day workshop on digital welfare and the international rule and role of law, which was part of a series curated by Edinburgh Law School. While zooming in on Universal Credit (UC) in the United Kingdom, arguably one of the most developed digital welfare systems in the world, our objective was broader: namely to imagine how and why law, especially international human rights law, does and should play a role when the state goes digital. Below are some initial and brief reflections on the rich discussions we had with close to 50 civil servants, legal scholars, computer scientists, digital designers, philosophers, welfare rights practitioners, and human rights lawyers.

What is “digital welfare?” There is no agreed upon definition. At the end of a United Nations country visit to the UK in 2018, where I accompanied the UN Special Rapporteur on extreme poverty and human rights, we coined the term by writing that “a digital welfare state is emerging.” Since then, I have spent years researching and advocating around these developments in the UK and elsewhere. For me, digital welfare can be (imperfectly) defined as a welfare system in which interactions with beneficiaries and internal government operations rely on various digital technologies.

In UC, that means you apply for and maintain your benefits online, your identity is verified online, your monthly benefits calculation is automated in real-time, fraud detection happens with the help of algorithmic models, etc. Obviously, this does not mean there is no human interaction or decision-making in UC. And the digitalization of the welfare state did not start yesterday either; it is a process many decades in the making. For example, a 1967 book titled The Automated State mentions the Social Security Administration in the United States as having “among the most extensive second-generation computer systems.” Today, digitalization is no longer just about data centers or government websites, and systems like UC exemplify how digital technologies affect each part of the welfare state.

So, what are some implications of digital welfare for the role of law, especially for international human rights law?

First, as was pointed out repeatedly in the workshop, law has not disappeared from the digital welfare state altogether. Laws and regulations, government lawyers, welfare rights advisors, and courts are still relevant. As for international human rights law, it is no secret that its institutionalization by governments, especially when it comes to economic and social rights, has never been perfect. And neither should we romanticize the past by imagining a previous law and rules-based welfare state as a rule of law utopia. I was reminded of this recently when I watched a 1975 documentary by Frederick Wiseman about a welfare office in downtown Manhattan which was far from utopian. Applying law and rights to the welfare state has been a long and continuous battle.

Second, while there is much to fear about digitalization, we shouldn’t lose sight of its promises for the reimagination of a future welfare state. Several workshop participants emphasized the potential user-friendliness and rationality that digital systems can bring. For example, the UC system quickly responded to a rise in unemployment caused by the pandemic, while online application systems for unemployment benefits in the United States crashed. Welfare systems also have a long history of bureaucratic errors. Automation offers, at least in theory, a more rational approach to government. Such digital promises, however, are only as good as the political impetus that drives digital reform, which is often more focused on cost-savings, efficiency, and detecting supposedly ubiquitous benefit fraud than truly making welfare more user-friendly and less error-prone.

What role does law play in the future digital welfare state? Several speakers described a previous approach to the delivery of welfare benefits as top-down (“waterfall”): legislation would be passed, regulations would be written, and these would then be implemented by the welfare bureaucracy as a final step. Not only is delivery now taking place digitally, but such digital delivery follows a different logic. Digital delivery has become “agile,” “iterative,” and “user-centric,” creating a feedback loop between legislation, ministerial rules and lower-level policy-making, and implementation. Implementation changes fast and often (we are now at UC 167.0).

It is also an open question what role lawyers will play. Government lawyers are changing primary social security legislation to make it fit the needs of digital systems. The idea of ‘Rules as Code’ is gaining steam and aims to produce legislation while also making sure it is machine-readable to support digital delivery. But how influential are lawyers in the overall digital transformation? While digital designers are crucial actors in designing digital welfare, lawyers may increasingly be seen as “dinosaurs,” slightly out of place when wandering into technologist-dominated meetings with post-it notes, flowcharts, and bouncy balls. Another “dinosaur” may be the “street-level bureaucrat.” Such bureaucrats have played an important role in interpreting and individualizing general laws. Yet, they are also at risk of being side-lined by coders and digital designers who increasingly shape and form welfare delivery and thereby engage in their own form of legal interpretation.
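
“Rules as Code” essentially means drafting the legal rule and an executable version of it side by side, so the logic written into legislation can be run directly by the delivery system and tested against the legislative text. The sketch below is a deliberately simple, hypothetical example of what that looks like; the allowance and taper figures are invented and are not the actual Universal Credit parameters.

```python
# Hypothetical 'Rules as Code' sketch: an entitlement rule written as an
# executable function. The allowance and taper figures are invented and are
# NOT the actual Universal Credit parameters.
STANDARD_ALLOWANCE = 400.00   # illustrative monthly amount
EARNINGS_TAPER = 0.55         # illustrative: award reduced per unit of earnings

def monthly_entitlement(monthly_earnings: float) -> float:
    """Machine-readable version of a rule like: 'the award is the standard
    allowance, reduced by the taper rate for each pound of earned income'."""
    reduction = monthly_earnings * EARNINGS_TAPER
    return max(0.0, round(STANDARD_ALLOWANCE - reduction, 2))

# The same function could drive the real-time, automated monthly calculation
# described earlier, and be tested against the legislative text it encodes.
print(monthly_entitlement(0.0))     # 400.0
print(monthly_entitlement(300.0))   # 235.0
print(monthly_entitlement(900.0))   # 0.0
```

Whether the lawyers drafting the rule or the coders implementing it have the last word over that function is precisely the open question raised here.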

Most importantly, from the perspective of human rights: what happens to humans who have to interact with the digital welfare state? In discussions about digital systems, they are all too easily forgotten. Yet, there is substantial evidence of the human harm that may be inflicted by digital welfare, including deaths. While many digital transformations in the welfare state are premised on the methodology of “user-centered design,” its promise is not matched by its practice. Maybe the problem starts with conceptualizing human beings as “users,” but the shortcomings go deeper and include a limited mandate for change and interacting only with “users” who are already digitally visible.

While there is every reason to fear the future of digital welfare states, especially if developments turn toward lawlessness, such fear does not have to lead to outright rejection. Like law, digital systems are human constructs, and humans can influence their shape and form. The challenge for human rights lawyers and others is to imagine not only how law can be injected into digital welfare systems, but how such systems can be built on and can embed the values of (human rights) law. Whether it is through expanding the concept and practice of “user-centered design” or being involved in designing rights-respecting digital welfare platforms, (human rights) lawyers need to be at the coalface of the digital welfare state.

March 23, 2021. Christiaan van Veen, Director of the Digital Welfare State and Human Rights Project (2019-2022) at the Center for Human Rights and Global Justice at NYU School of Law.

TECHNOLOGY & HUMAN RIGHTS

Experimental automation in the UK immigration system

The UK government is experimenting with automated immigration systems. The promised benefits of automation are inevitably attractive, but these experiments routinely expose people—including some of the most vulnerable—to unacceptable risks of harm.

In April 2019, The Guardian reported that couples accused of sham marriages were increasingly being subjected to invasive investigations by the Home Office, the UK government body responsible for immigration policy. Couples reported having their wedding ceremonies interrupted to be quizzed about their sex life, being told they were not in a genuine relationship because they were wearing pajamas in bed, and being present while their intimate photos were shared between officials.

The official tactics reported are worrying enough, but it has since come to light through the efforts of a legal charity (the Public Law Project) and investigative journalists that an automated system is largely determining who gets investigated in the first place. An algorithm, hidden from public view, is sorting couples into “pass” and “fail” categories, based on eight unknown criteria.

Couples who “fail” this covert algorithmic test are subjected to intrusive investigations. They must attend an interview and hand over extensive evidence about their relationship, a process which has been described as “insulting” and “grueling.” These investigations can also prevent couples from getting married altogether. If the Home Office decides that a couple has failed to “comply” with an investigation—even if they are in a genuine relationship—the couple is denied a marriage certificate and forced to start the process all over again. One couple was reportedly ruled non-compliant for failing to provide six months of bank statements for an account that had only been open for four months. This makes it difficult for people to plan their weddings and their lives. And the investigation can lead to other immigration enforcement actions, such as visa cancellation, detention, and deportation. In one case, a sham marriage dawn raid led to a man being detained for four months, until the Home Office finally accepted that his relationship was genuine.

We know little about how this automated system operates in practice or its effectiveness in detecting sham marriages. The Home Office refuses to disclose or otherwise explain the eight criteria at the center of the system. There is a real risk that the system is racially discriminatory, however. The criteria were derived from historical data, which may well be skewed against certain nationalities. The Home Office’s own analysis shows that some nationalities, including Bulgarian, Greek, Romanian and Albanian people, receive “fail” ratings more frequently than others.

The sham marriages algorithm is, in many respects, a typical case of the deployment of automation in the UK immigration system. It is not difficult to understand why officials are seeking to automate immigration decision-making. Administering immigration policy is a tough job. Officials are often inexperienced and under pressure to process large volumes of decisions. Each decision will have profound effects for those subjected to it. This is not helped by the dense complexity of, and frequent changes in, immigration law and policy, which can bamboozle even the most hardened administrative lawyer. All of this, of course, takes place in an environment where migration remains one of the most vexed issues on the political agenda. Automation’s promised benefits of greater efficiency, lower costs, and increased consistency are, from the government’s perspective, inevitably attractive.

But in reality, a familiar pattern of risky experimentation and failure is already emerging. It begins with the Home Office deploying a novel automated system with the goal of cheaper, quicker, and more accurate decision-making. There is often little evidence to support the system’s effectiveness in delivering those goals and scant consideration of the risks of harm. Such systems are generally intended to benefit the government or the general, non-migrant population, rather than the people subject to them. When the system goes wrong and harms individuals, the Home Office fails to take adequate steps to address those harms. The justice system—with its principles and procedures developed in response to more traditional forms of public administration—is left to muddle through in trying to provide some form of redress. That redress, even where best efforts are made, is often unsatisfactory.

This is the story we seek to tell in our new book, Experiments in Automating Immigration Systems, through an exploration of three automated immigration systems in the UK: a voice recognition system used to detect fraud in English language testing; an algorithm for identifying “risky” visa applications; and automated decision-making in the process for EU citizens to apply to remain in the UK after Brexit. It is, at its core, a story of risky bureaucratic experimentation that routinely exposes people, including some of the most vulnerable, to unacceptable risks of harm. For example, some of the students caught up in the English language testing scandal were detained and deported, while others had to abandon their studies and fight for years through the courts to prove their innocence. While we focus on the UK experience, this story will no doubt be increasingly familiar in many countries around the world.

It is important to remember, however, that this story is just beginning. While it would be naïve to think that the tensions in public administration can ever be wholly overcome, the government must strive to reap the benefits of automation for all of society, in a way that is sensitive to and mitigates the attendant risks of injustice. That work is, of course, best led by the government itself.

But the collective work of journalists, charities, NGOs, lawyers, researchers, and others will continue to play a crucial role in ensuring, as far as possible, that automated administration is just and fair.

March 14, 2022. Joe Tomlinson and Jack Maxwell.
Dr. Joe Tomlinson is a Senior Lecturer in Public Law at the University of York.
Jack Maxwell is a barrister at the Victorian Bar.

A GPS Tracker on Every “Boda Boda”: A Tale of Mass Surveillance in Uganda

TECHNOLOGY & HUMAN RIGHTS

A GPS Tracker on Every “Boda Boda”: A Tale of Mass Surveillance in Uganda

The Ugandan government recently announced that GPS trackers would be placed on every vehicle in the country. This is just the latest example of the proliferation of technology-driven mass surveillance, spurred by a national security agenda and the desire to suppress political opposition.

Following the June 2021 assassination attempt on Uganda’s Transport Minister and former army commander, General Katumba Wamala, President Yoweri Museveni suggested mandatory Global Positioning System (GPS) tracking of all private and public vehicles. This includes motorcycle taxis (commonly known as boda bodas) and water vessels. Museveni also suggested collecting and storing the palm prints and DNA of every Ugandan.

Barely a month later, reports emerged that the government, through the Ministry of Security, had entered into a secretive 10-year contract with a Russian security firm to install GPS trackers in vehicles. The selection of the firm was never subjected to the procurement procedures required by Ugandan law, and a few days after the news broke, it emerged that the Russian firm was facing bankruptcy litigation. The line minister who endorsed the contract subsequently distanced himself from the deal, saying that he was merely enforcing a presidential directive. The government has confirmed that Ugandans will have to pay 20,000 UGX (approximately $6 USD) annually to the Russian firm for the installation of trackers on their vehicles. This controversial move means that Ugandans are paying for their own surveillance.
According to 2020 statistics from the Uganda Bureau of Statistics, a total of 38,182 motor vehicles and 102,273 motorcycles are registered in Uganda. Most of these motorcycles operate as boda bodas, a de facto mode of public transport commonly used by Ugandans of all social classes. In the capital, Kampala, boda bodas are essential because of their ability to navigate heavy traffic jams. In remote locations where public transport is inaccessible, boda bodas are the only means of transportation for most people other than the elites. While a boda boda motorcycle was allegedly used in the assassination attempt on General Katumba Wamala, those same boda bodas also serve as ambulances (one brought the General to a hospital after the attack) and fulfill many other essential purposes.

It should be emphasized that this latest attempt at boda boda mass surveillance is part of a broader effort by the government of Uganda to exert power and control through digital surveillance, and thereby to limit the full enjoyment of human rights both offline and online. One example is the widespread use of indiscriminate drone surveillance. Another is the Cyber Crimes Unit in the Ugandan police, which, since 2014, has had overly broad powers to monitor the social media activity of Ugandans. Unwanted Witness has raised concerns about the intrusive powers of this unit, which violate Article 27 of the 1995 Constitution of Uganda, the provision that guarantees the right to privacy.

And that is not all. In 2018, the Ugandan government contracted the Chinese firm Huawei to install CCTV cameras in all major cities and on all highways, spending over $126 million USD on these cameras and related facial recognition technology. In the absence of any judicial oversight, there are also concerns about backdoor access to the system for illegal facial recognition surveillance of potential targets, and about its use to stifle opposition to the regime.

Fears that this CCTV system would be used to violate human rights and stifle dissent came true in November 2020. Following the arrest of two opposition presidential candidates, political protests erupted across Uganda, and the CCTV system was used in the ensuing crackdown. Long before these protests, the Wall Street Journal had already reported on how Huawei technicians assisted the Ugandan government in spying on political opponents.

This is taking place in a wider context of attacks on human rights defenders and NGOs. Under the guise of seeking to pre-empt terror threats, the state has instituted cumbersome regulations on nonprofits and granted authorities the power to monitor and interfere in their work. Last year, a number of well-known human rights groups were falsely accused of funding terrorism and had their bank accounts frozen. The latest government clampdown on NGOs resulted in the suspension of the operations of 54 organizations on allegations of non-compliance with registration laws. Uganda’s pervasive surveillance apparatus will be instrumental in these efforts to censor and silence human rights organizations, activists, and other dissenting voices.
The intrusive application of digital surveillance harms Ugandans’ right to privacy. Privacy is a fundamental right enshrined in the 1995 Constitution and in numerous international human rights treaties and other legal instruments, and it is a central pillar of a well-functioning democracy. But in its quest to surveil the population, the Ugandan government has either downplayed or ignored these violations of human rights.

What is especially problematic here is the partial privatization of government surveillance to individual corporations. There is a long and unfortunate track record in Uganda of private corporations evading all human rights accountability for their involvement in surveillance. In 2019, for example, Unwanted Witness published a report faulting the boda boda hailing app SafeBoda for sharing customers’ data with third parties without their consent. With the planned GPS tracking, Ugandan boda boda users will have their privacy eroded further, with the help of the Russian security firm. Driven by a national security agenda and the desire to control and suppress any opposition to the long-running Museveni presidency, digital surveillance is proliferating as Ugandans’ rights to privacy, to freedom of expression, and to freedom of assembly are harmed.

October 13, 2021. Dorothy Mukasa is the Chief Executive Officer of Unwanted Witness, a leading digital rights organization in Uganda. 

“Killing two birds with one stone?” The Cashless COVID Welfare Payments Aimed at Boosting Consumption

TECHNOLOGY & HUMAN RIGHTS

“Killing two birds with one stone?” The Cashless COVID Welfare Payments Aimed at Boosting Consumption

In launching its COVID-19 relief payments scheme, the South Korean government had two goals: providing a safety net for its citizens and boosting consumption for the economy. It therefore provided cashless payments, issuing credit card points rather than cash. However, this had serious implications for the vulnerable.

In May 2020, South Korea’s government distributed its COVID-19 emergency relief payments to all households through cashless channels. Recipients predominantly received points on credit cards rather than cash transfers. From the outset, the government stated explicitly that this universal transfer scheme had two goals: it was intended not only to mitigate the devastating impacts of the pandemic on people’s livelihoods, but also to boost consumption in the South Korean economy. Providing cash would not necessarily boost consumption, as it could be placed in savings accounts. Credit card points were therefore offered instead, to require recipients to spend the relief. But in trying to “kill two birds with one stone” by promoting consumption through the relief program, the government jeopardized the program’s welfare aim.

Once the payouts began, the government boasted that the delivery of the relief funds was timely and efficient. The program had been launched on the basis of business agreements with credit card companies for “rapid and smooth” payment, and the card-based channel did indeed enable much faster distribution than in other countries. Although “offline” applications could be made in person at banks, the scheme was designed around applications submitted through credit card companies’ websites or apps. The relief funds were then deposited onto recipients’ credit or debit cards, in the form of points kept separate from normal credit card points, within two days of applying. In September 2021, during the second round of universal relief payments, known as the “COVID-19 Win-Win National Relief Fund,” 90% of expected recipients received their payments within 12 days.

Restricting spending to boost spending

However, paying recipients in credit card points meant restricting their access to cash. While low-income households received the relief fund in cash during the first round of COVID-19 relief, in the second round they had to apply for the payment and could only choose among cashless methods, including credit cards and debit cards. To make matters worse, the policy placed constraints on where the points could be used, in the name of encouraging consumption and growing the local economy. The points could only be spent in designated places, and could not be used to pay utility bills, repay a mortgage, or shop online. They could not be transferred to others’ bank accounts or withdrawn as cash. Recipients therefore had no choice but to spend their relief funds at designated local restaurants, markets, clothing stores, and the like. Points that had not been used within roughly three to four months of disbursement were returned to the national treasury. All of these conditions flowed from the policy’s specific aim of boosting consumption.
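
As a rough illustration of how restrictive these conditions were, the following sketch encodes them as a simple check. The category names, the 120-day expiry window, and the function itself are illustrative assumptions, not the actual payment system’s logic.

```python
from datetime import date, timedelta

# Hypothetical merchant categories; the real scheme designated eligible local businesses.
ALLOWED_CATEGORIES = {"local_restaurant", "local_market", "clothing_store"}
BLOCKED_USES = {"utility_bill", "mortgage_repayment", "online_shopping",
                "bank_transfer", "cash_withdrawal"}

def can_spend_points(category: str, disbursed_on: date, today: date) -> bool:
    """Return True only if the use is permitted and the points have not yet
    expired and reverted to the national treasury (roughly 3-4 months)."""
    if category in BLOCKED_USES or category not in ALLOWED_CATEGORIES:
        return False
    return today <= disbursed_on + timedelta(days=120)  # illustrative cut-off

# Example: groceries at a local market are allowed; a utility bill is not.
print(can_spend_points("local_market", date(2020, 5, 13), date(2020, 6, 1)))  # True
print(can_spend_points("utility_bill", date(2020, 5, 13), date(2020, 6, 1)))  # False
```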

Jeopardizing the welfare aim

These restrictions had significant repercussions for people in poverty, in two key ways. First, the relief fund failed to fulfill the right to social protection of those most at risk. As utility bills, telecommunication fees, and even health insurance fees could not be paid with the points, many were left unable to cover their essential payments while much-needed funds remained effectively stranded on the card. What use is a card that only works in restaurants and shops when one is in arrears on utility bills and health insurance fees, and at risk of having electricity and health insurance benefits cut off? Those who needed cash immediately sometimes handed their credit cards to other people to use, and then asked to be paid back in cash at less than the points’ value. It was also reported that a number of people bought products at stores where the relief points could be used and then resold them at a lower price on second-hand online marketplaces to obtain cash. Although the government warned that it would crack down on such “illegal transactions,” the demand for cash could not be controlled.

Second, the scheme did not sufficiently protect the right to housing of vulnerable populations. Homeless persons, who needed the most help, were severely affected because the cashless relief funds could not be used to pay monthly rent. In one survey, homeless people and residents of “slice rooms” (tiny subdivided units) were the group that most strongly agreed that “the COVID-19 relief fund should be distributed in cash.” Further, given that low-income people spend a higher proportion of their income on rent than other social classes, the inability to use the relief funds for rent also significantly affected low-income households. A number of temporary or informal workers who lost their jobs due to the pandemic were on the verge of being pushed into poorer conditions because they could not afford their rent. The relief program could not help these groups cover one of their most urgent expenditures, housing costs, at all.

Boosting consumption can be expected as an indirect effect of government relief funds, but it must not be adopted as a specific goal of such programs. Attempting to achieve this consumption-oriented goal through the relief payments resulted in the scheme’s design imposing limitations on the use of funds, thereby undermining the scheme’s ability to help those in the most extreme need. As the government set boosting consumption as one of the aims of the program and seemingly prioritized it over the welfare aim, the delivery of the payments was devised in an inappropriate way that did not take the most vulnerable into account.

Killing two birds with one stone?

The Korea Development Institute (KDI) found that only about 30% of the first round of emergency relief funds translated into increased consumption, while the remaining 70% went toward household debt repayment or savings. In the end, the cashless relief stipend appears not to have successfully increased consumption, even as it weakened the program’s social security function.
Schemes aimed at “killing two birds with one stone” were doomed to fail from the beginning, because the two goals come into tension with one another in the program’s design. The consumption aim is likely to undermine the welfare aim by pushing for cashless, controlled, and restricted use. The sole purpose of emergency relief funds in a crisis should be to provide assistance to the most vulnerable. Such schemes should be delivered in whatever way best fulfills this aim: they should focus on providing a safety net and should be designed from the perspective of rights-holders, not of consumers.

April 19, 2022. Bo Eun Kwon is an LLM student at NYU School of Law whose interests include international human rights law, economic and social rights, and digital governance. She has worked at the National Human Rights Commission of Korea.