TECHNOLOGY AND HUMAN RIGHTS

Poor Enough for the Algorithm? Exploring Jordan’s Poverty Targeting System

The Jordanian government is using an algorithm to rank social protection applicants from least poor to poorest as part of a poverty alleviation program. While helpful to those who receive aid, the system excludes many people in need because it fails to accurately reflect the complex realities of poverty: it uses an outdated poverty measure, weights imperfect indicators—such as utility consumption—and relies on a static view of socioeconomic status.

On November 28, 2023, the Digital Welfare State and Human Rights project hosted the sixteenth episode in the Transformer States conversation series on Digital Government and Human Rights. Victoria Adelmant and Katelyn Cioffi interviewed Hiba Zayadin, a senior researcher in the Middle East and North Africa division at Human Rights Watch (HRW), about a report published by HRW on the Jordanian government’s use of an algorithmic system to rank applicants for a welfare program based on their poverty level, using data like electricity usage and car ownership. This blog highlights key issues related to the system’s inability to reflect the complexities of poverty and its algorithmic exclusion of individuals in need.

The context behind Jordan’s poverty targeting program 

‘Poverty targeting’ is generally understood to mean directing social program benefits towards those most in need, with the aim of efficiently using limited government resources and improving living conditions for the poorest individuals. This approach entails the collection of wide-ranging information about socioeconomic circumstances, often through in-depth surveys and interviews, to enable means testing or proxy means testing. Some governments have adopted an approach in which beneficiaries are ‘ranked’ from richest to poorest and aid is targeted only to those falling below a certain threshold. The World Bank has long advocated for poverty targeting in social assistance. Since 2003, for example, it has supported Brazil’s Bolsa Família program, which is targeted at the poorest 40% of the population.

Increasingly, the World Bank has turned to new technologies to seek to improve the accuracy of poverty targeting programs. It has provided funding to many countries for data-driven, algorithm-enabled approaches to enhance targeting, and similar programs have been implemented in countries including Jordan, Mauritania, Palestine, Morocco, Iraq, Tunisia, Egypt, and Lebanon.

Launched in 2019 with World Bank support, Jordan’s Takaful program, an automated cash transfer program, provides monthly support to families (roughly US $56 to $192) to mitigate poverty. Managed by the National Aid Fund, the program targets the more than 24% of Jordan’s population that falls under the poverty line. The Takaful program has been especially welcome in Jordan, in light of rising living costs. However, policy choices underpinning this program have excluded many individuals who are in need: eligibility restrictions limit access solely to Jordanian nationals, such that the program does not cover registered Syrian refugees, Palestinians without Jordanian passports, migrant workers, and the non-Jordanian families of Jordanian women—since Jordanian women cannot pass on citizenship to their children. Initial phases of the program entailed broader eligibility, but criteria were tightened in subsequent iterations.

Mismatch between the Takaful program’s indicators and the reality of people’s lives

In addition, further exclusions have arisen because of the operation of the algorithmic system used in the program. When a person applies to Takaful, the system first determines eligibility by checking whether an applicant is a citizen and whether they are under the poverty line. It subsequently employs an algorithm, relying on 57 socioeconomic indicators, to rank people from least poor to poorest. The National Aid Fund uses existing databases as well as applicants’ answers to a questionnaire that they must fill out online. Indicators include household size, geographic location, utilities consumption, ownership of businesses, and car ownership. It is unclear how these indicators are weighted, but the National Aid Fund has admitted that some indicators will lead to the automatic exclusion of applicants from the Takaful program. Applicants who own a car that is less than five years old or a business valued at over 3,000 Jordanian dinars, for instance, are automatically excluded.
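To make the mechanics described above concrete, here is a minimal sketch, in Python, of how a proxy-means-test style system with hard exclusion rules and a weighted ranking could work in principle. The indicator names, weights, and score direction are illustrative assumptions for explanation only; the actual Takaful weighting has not been made public, and only the two exclusion rules mentioned above (a car under five years old, a business valued above 3,000 dinars) are drawn from the reporting.

```python
# Illustrative sketch only: indicators and weights are hypothetical.
# The exclusion thresholds mirror the two rules reported by HRW.

def is_automatically_excluded(applicant: dict) -> bool:
    """Hard rules applied before any scoring takes place."""
    car_age = applicant.get("car_age_years")
    if car_age is not None and car_age < 5:
        return True  # owns a car less than five years old
    if applicant.get("business_value_jod", 0) > 3000:
        return True  # owns a business valued above 3,000 Jordanian dinars
    return False

# Hypothetical weights over a handful of indicators (the real system uses 57).
WEIGHTS = {
    "household_size": -0.8,         # larger households scored as poorer
    "electricity_kwh_month": 0.02,  # higher consumption scored as less poor
    "water_m3_month": 0.05,
    "owns_car": 2.0,
}

def welfare_score(applicant: dict) -> float:
    """Higher score means the model treats the household as less poor."""
    return sum(weight * applicant.get(name, 0) for name, weight in WEIGHTS.items())

def rank_applicants(applicants: list[dict]) -> list[dict]:
    """Drop automatically excluded applicants, then rank from poorest upwards."""
    eligible = [a for a in applicants if not is_automatically_excluded(a)]
    return sorted(eligible, key=welfare_score)
```

Even in this toy version, the problems HRW identifies are visible: a broken-down car that is still registered in someone's name, or high electricity use caused by poor insulation, pushes a household towards the "less poor" end of the ranking regardless of its actual circumstances.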

In its recent report, HRW highlights a number of shortcomings of the algorithmic system deployed in the Takaful program, critiquing its inability to reflect the complex and dynamic nature of poverty. The system, HRW argues, uses an outdated poverty measure, and embeds many problematic assumptions. For example, the algorithm gives some weight to whether an applicant owns a car. However, there are cars in people’s names that they do not actually own; some people own cars that broke down long ago, but they cannot afford to repair them. Additionally, the algorithm assumes that higher electricity and water consumption indicates that a family is less vulnerable. However, poorer households in Jordan in many cases actually have higher consumption—a 2020 survey showed that almost 75% of low- to middle-income households lived in apartments with poor thermal insulation.

Furthermore, this algorithmic system is designed on the basis of a single assessment of socioeconomic circumstances at a fixed point in time. But poverty is not static; people’s lives change and their level of need fluctuates. Another challenge is the unpredictability of aid: in this conversation with CHRGJ’s Digital Welfare State and Human Rights team, Hiba shared the story of a new mother who had been suddenly and unexpectedly cut off from the Takaful program, precisely when she was most in need.

At a broader level, introducing an algorithmic system such as this can also exacerbate information asymmetries. HRW’s report highlights issues concerning opacity in algorithmic decision-making—both for government officials themselves and those subject to the algorithm’s decisions—such that it is more difficult to understand how decisions are being made within this system.

Recommendations to improve the Takaful program

Given these wide-ranging implications, HRW’s primary recommendation is to move away from poverty targeting algorithms and toward universal social protection, which could cost under 1% of the country’s GDP. This could be funded through existing resources, tackling tax avoidance, implementing progressive taxes, and leveraging the influence of the World Bank to guide governments towards sustainable solutions. 

When asked during this conversation whether the algorithm used in the Takaful program could be improved, Hiba noted that a technically perfect algorithm executing a flawed policy will still lead to negative outcomes. She argued that it is the policy itself – the attempt to rank people from least poor to poorest – that is prone to exclusion errors, and warned that technology may be shiny, promising to make targeting accurate, effective, and efficient, but that it can also be a distraction from the policy issues at hand.

Thus, instead of flattening economic realities and excluding people who are, in reality, in immense need, Hiba recommended that support be provided inclusively and universally—to everyone during vulnerable stages of life, regardless of their income or wealth. Rather than focusing on technology that enables ever-more precise targeting, Jordan should focus on embracing solutions that allow for more universal social protection.

Rebecca Kahn, JD program, NYU School of Law, and Human Rights Scholar at the Digital Welfare State & Human Rights project. Her research interests relate to responsible AI governance, digital rights, and consumer protection. She previously worked in the U.S. House and Senate as a legislative staffer.

TECHNOLOGY & HUMAN RIGHTS

Regulating Artificial Intelligence in Brazil

On May 25, 2023, the Center for Human Rights and Global Justice’s Technology & Human Rights team hosted an event entitled Regulating Artificial Intelligence: The Brazilian Approach, in the fourteenth episode of the “Transformer States” interview series on digital government and human rights. This in-depth conversation with Professor Mariana Valente, a member of the Commission of Jurists created by the Brazilian Senate to work on a draft bill to regulate artificial intelligence, raised timely questions about the specificities of ongoing regulatory efforts in Brazil. These developments in Brazil may have significant global implications, potentially inspiring other more creative, rights-based, and socio-economically grounded regulation of emerging technologies in the Global South.

In recent years, numerous initiatives to regulate and govern Artificial Intelligence (AI) systems have arisen in Brazil. First, there was the Brazilian Strategy for Artificial Intelligence (EBIA), launched in 2021. Second, legislation known as Bill 21/20, which sought to specifically regulate AI, was approved by the House of Representatives in 2021. And in 2022, a Commission of Jurists was appointed by the Senate to draft a substitute bill on AI. This latter initiative holds significant promise. While the EBIA and Bill 21/20 were heavily criticized for giving little weight to public input despite the participatory and multi-stakeholder mechanisms available, the Commission of Jurists took specific precautions to be more open to public input. Their proposed alternative draft legislation, which is grounded in Brazil’s socio-economic realities and legal tradition, may inspire further legal regulation of AI, especially in the Global South, considering Brazil’s position in other discussions related to internet and technology governance.

Bill 21/20 was the first bill directed specifically at AI. But this was a very minimal bill; it effectively established that regulating AI should be the exception. It was also based on a decentralized model, meaning that each economic sector would regulate its own applications of AI: for example, the federal agency dedicated to regulating the healthcare sector would regulate AI applications in that sector. There were no specific obligations or sanctions for the companies developing or deploying AI, only some guidelines for the government on how it should promote the development of AI. Overall, the bill was very friendly to the private sector’s preference for the most minimal regulation possible. The bill was quickly approved in the House of Representatives, without public hearings or much public attention.

It is important to note that this bill does not exist in isolation. There is other legislation that applies to AI in the country, such as consumer law and data protection law, as well as the Marco Civil da Internet (Brazilian Civil Rights Framework for the Internet). These existing laws have been leveraged by civil society to protect people from AI harms. For example, the Instituto Brasileiro de Defesa do Consumidor (IDEC), a consumer rights organization, successfully brought a public civil action using consumer protection legislation against Via Quatro, the private company responsible for the 4-Yellow subway line in São Paulo. The company was fined R$500,000 for collecting and processing individuals’ biometric data for advertising purposes without informed consent.

But, given that Bill 21/20 sought to specifically address the regulation of AI, academics and NGOs raised concerns that it would reduce the legal protections afforded in Brazil: it “gravely undermines the exercise of fundamental rights such as data protection, freedom of expression and equality” and “fails to address the risks of AI, while at the same time facilitating a laissez-faire approach for the public and private sectors to develop, commercialize and operate systems that are far from trustworthy and human-centric (…) Brazil risks becoming a playground for irresponsible agents to attempt against rights and freedoms without fearing for liability for their acts.”

As a result, the Senate decided that instead of voting on Bill 21/20, they would create a Commission of Jurists to propose a new bill.

The Commission of Jurists and the new bill

The Commission of Jurists was established in April 2022 and delivered its final report in December 2022. Although the establishment of the Commission was considered a positive development, it was not exempt from criticism from civil society, both for the lack of racial and regional diversity among its membership and for the absence of other areas of knowledge that could contribute to the debate. This criticism reflects the socio-economic realities of Brazil, which is one of the most unequal countries in the world, where inequalities are intersectional across race, gender, income, and territorial origin. AI applications will therefore have different effects on different segments of the population. This is already clear from the use of facial recognition in public security: more than 90% of the individuals arrested using this technology were Black. Another example is the use of an algorithm to evaluate requests for emergency aid during the pandemic, in which many vulnerable people had their benefits denied based on incorrect data.

During its mandate, the Commission of Jurists held public hearings, invited specialists from different areas of knowledge, and developed a public consultation mechanism allowing for written proposals. Following this process, the new proposed bill had several elements that were very different from Bill 21/20. First, the new bill borrows from the EU’s AI Act by adopting a risk-based approach: obligations are distinguished according to the risks they pose. However, the new bill, following the Brazilian tradition of structuring regulation from the perspective of individual and collective rights, merges the European risk-based approach with a rights-based approach. The bill confers individual and collective rights that apply in relation to all AI systems, independent of the level of risk they pose.

Secondly, the new bill includes some additional obligations for the public sector, considering its differential impact on people’s rights. For example, there is a ban on the processing of racial information, and provisions on public participation in decisions regarding the adoption of these systems. Importantly, though the Commission discussed including a complete ban on facial recognition technologies in public spaces for public security purposes, this proposal was not adopted: instead, the bill included a moratorium, establishing that a law regulating this use must first be approved.

What the future holds for AI regulation in Brazil

After the Commission submitted its report, the president of the Senate presented a new bill for AI regulation in May 2023, replicating the Commission’s proposal. On August 16, 2023, the Senate established a temporary internal commission to discuss the different proposals for AI regulation that have been presented in the Senate to date.

It is difficult to predict what will happen following the end of the internal commission’s work, as political decisions will shape the next developments. However, what is important to have in mind is the progress that the discussion has reached so far, from an initial bill that was very minimal in scope, and supported the idea of minimal regulation, to one that is much more protective of individual and collective rights and considerate of Brazil’s particular socio-economic realities. Brazil has played an important progressive role historically in global discussions on the regulation of emerging technologies, for example with the discussions of its Marco Civil da Internet. As Mariana Valente put it, “Brazil has had in the past a very strong tradition of creative legislation for regulating technologies.” The Commission of Jurists’ proposal repositions Brazil in such a role.

September 28, 2023. Marina Garrote, LLM program, NYU School of Law, whose research interests lie at the intersection of digital rights and social justice. Marina holds bachelor’s and master’s degrees from the Universidade de São Paulo and previously worked at Data Privacy Brazil, a civil society association dedicated to public interest research on digital rights.

TECHNOLOGY & HUMAN RIGHTS

Risk Scoring Children in Chile

On March 30, 2022, Christiaan van Veen and Victoria Adelmant hosted the eleventh event in our “Transformer States” interview series on digital government and human rights. In conversation with human rights expert and activist Paz Peña, we examined the implications of Chile’s “Childhood Alert System,” an “early warning” mechanism which assigns risk scores to children based on their calculated probability of facing various harms. This blog picks up on the themes of the conversation. The video recording and additional readings can be found below.

The deaths of over a thousand children in privatized care homes in Chile between 2005 and 2016 have, in recent years, pushed the issue of child protection high onto the political agenda. The country’s limited legal and institutional protections for children have been consistently critiqued in the past decade, and calls for more state intervention, to reverse the legacies of Pinochet-era commitments to “hands-off” government, have been intensifying. On his first day in office in 2018, former president Sebastián Piñera promised to significantly strengthen and institutionalize state protections for children. He launched a National Agreement for Childhood and established local “childhood offices” and an Undersecretariat for Children; a law guaranteeing children’s rights was passed; and the Sistema Alerta Niñez (“Childhood Alert System”) was developed. This system uses predictive modelling software to calculate children’s likelihood of facing harm or abuse, dropping out of school, and other such risks.

Predictive modelling calculates the probabilities of certain outcomes by identifying patterns within datasets. It operates through a logic of correlation: where persons with certain characteristics experienced harm in the past, those with similar characteristics are likely to experience harm in the future. Developed jointly by researchers at Auckland University of Technology’s Centre for Social Data Analytics and the Universidad Adolfo Ibáñez’s GobLab, the Childhood Alert predictive modelling software analyzes existing government databases to identify combinations of individual and social factors which are correlated with harmful outcomes, and flags children accordingly. The aim is to “prioritize minors [and] achieve greater efficiency in the intervention.”
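As a rough illustration of this correlation-based logic, the sketch below trains a simple classifier on historical administrative records and flags children whose predicted probability of a harmful outcome exceeds a threshold. The file names, feature names, and threshold are hypothetical, chosen only to show the general pattern; they are not the actual variables, data, or cut-offs used in the Childhood Alert System.

```python
# Hedged sketch of correlation-based risk scoring; all names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical records: one row per child, with administrative variables and a
# column recording whether a harmful outcome was later documented (1) or not (0).
history = pd.read_csv("historical_admin_records.csv")
features = [
    "benefit_programs_count",
    "neighborhood_unemployment_rate",
    "caregiver_years_of_schooling",
    "prior_protective_service_contacts",
]

model = LogisticRegression(max_iter=1000)
model.fit(history[features], history["harm_outcome"])

# Score the current population: children whose characteristics resemble those
# of children harmed in the past receive higher predicted probabilities.
current = pd.read_csv("current_admin_records.csv")
current["risk_score"] = model.predict_proba(current[features])[:, 1]
flagged = current[current["risk_score"] > 0.7]  # arbitrary flagging threshold
```

The criticisms discussed below follow directly from this setup: the model can only see children who appear in the databases it is trained on, and any feature that tracks poverty will push children from poorer households towards the flagged group.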

A skewed picture of risk

But the Childhood Alert System is fundamentally skewed. The tool analyzes databases about the beneficiaries of public programs and services, such as Chile’s Social Information Registry. It thereby only examines a subset of the population of children—those whose families are accessing public programs. Families in higher socioeconomic brackets—who do not receive social assistance and thus do not appear in these databases—are already excluded from the picture, despite the fact that children from these groups can also face abuse. Indeed, the Childhood Alert system’s developers themselves acknowledged in their final report that the tool has “reduced capability for identifying children at high risk from a higher socioeconomic level” due to the nature of the databases analyzed. The tool, from its inception and by its very design, is limited in scope and completely ignores wealthier groups.

The analysis then proceeds on a problematic basis, whereby socioeconomic disadvantage is equated with risk. Selected variables include: social programs of which the child’s family are beneficiaries; families’ educational backgrounds; socioeconomic measures from Chile’s Social Registry of Households; and a whole host of geographical variables, including the number of burglaries, percentage of single parent households, and unemployment rate in the child’s neighborhood. Each of these variables is a direct measure of poverty. Through this design, children in poorer areas can be expected to receive higher risk scores. This is likely to perpetuate over-intervention in certain neighborhoods.

Economic and social inequalities, including significant regional disparities in living conditions, persist in Chile. As elsewhere, poverty and marginalization do not fall evenly. Women, migrants, those living in rural areas, and indigenous groups are more likely to live in poverty—those from indigenous groups have Chile’s highest poverty rates. As the Alert System is skewed towards low-income populations, it will likely disproportionately flag children from indigenous groups, thus raising issues of racial and ethnic bias. Furthermore, the datasets used will also reflect inequalities and biases. Public datasets about families’ previous interactions with child protective services, for example, are populated through social workers’ inputs. Biases against indigenous families, young mothers, or migrants—reflected through disproportionate investigations or stereotyped judgments about parenting—will be fed into the database.

The developers of this predictive tool wrote in their evaluation that concerns about racial disparities “have been expressed in the context of countries like the United States, where there are greater challenges related to racism. In the local Chilean context, we frankly don’t see similar concerns about race.” As Paz Peña points out, this dismissal is “difficult to understand” in light of the evidence of racism and racialized poverty in Chile.

Predictive systems such as these are premised on linking individuals’ characteristics and circumstances with the incidence of harm. As Abeba Birhane puts it, such approaches by their nature “force determinability [and] create a world that resembles the past” through reinforcing stereotypes, because they attach risk factors to certain individual traits.

The global context

These issues of bias, disproportionality, and determinacy in predictive child welfare tools have already been raised in other countries. Public outcry, ethical concerns, and evidence that these tools simply do not work as intended, have led many such systems to be scrapped. In the United Kingdom, a local authority’s Early Help Profiling System which “translates data on families into risk profiles [of] the 20 families in most urgent need” was abandoned after it had “not realized the expected benefits.” The U.S. state of Illinois’ child welfare agency strongly criticized and scrapped its predictive tool which had flagged hundreds of children as 100% likely to be injured while failing to flag any of the children who did tragically die from mistreatment. And in New Zealand, the Social Development Minister prevented the deployment of a predictive tool on ethical grounds, purportedly noting: “These are children, not lab rats.”

But while predictive tools are being scrapped on grounds of ethics and ineffectiveness in certain contexts, these same systems are spreading across the Global South. Indeed, the Chilean case demonstrates this trend especially clearly. The team of researchers who developed Chile’s Childhood Alert System is the very same team whose modelling was halted by the New Zealand government due to ethical questions, and whose predictive tool for the U.S. state of Pennsylvania was the subject of high-profile and powerful critique by many actors including Virginia Eubanks in her 2018 book Automating Inequality.

As Paz Peña noted, it should come as no surprise that systems which are increasingly deemed too harmful in some Global North contexts are proliferating in the Global South. These spaces are often seen as an “easier target,” with lower chances of backlash than places like New Zealand or the United States. In Chile, weaker institutions resulting from the legacies of military dictatorship and the staunch commitment to a “subsidiary” (streamlined, outsourced, neoliberal) state may be deemed to provide more fertile ground for such systems. Indeed, the tool’s developers wrote in a report that achieving acceptance of the system in Chile would be “simpler as it is the citizens’ custom to have their data processed to stratify their socioeconomic status for the purpose of targeting social benefits.”

This highlights the indispensability of international comparison, cooperation, and solidarity. Those of us working in this space must pay close attention to developments around the world as these systems continue to be hawked at breakneck speed. Identifying parallels, sharing information, and collaborating across constituencies is vital to support the organizations and activists who are working to raise awareness of these systems.

April 20, 2022. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

TECHNOLOGY & HUMAN RIGHTS

Singapore’s “smart city” initiative: one step further in the surveillance, regulation and disciplining of those at the margins

Singapore’s smart city initiative creates an interconnected web of digital infrastructures which promises citizens safety, convenience, and efficiency. But the smart city is experienced differently by individuals at the margins, particularly migrant workers, who are experimented on at the forefront of technological innovation.

On February 23, 2022, we hosted the tenth event of the Transformer States Series on Digital Government and Human Rights, titled “Surveillance of the Poor in Singapore: Poverty in ‘Smart City’.” Christiaan van Veen and Victoria Adelmant spoke with Dr. Monamie Bhadra Haines about the deployment of surveillance technologies as part of Singapore’s “smart city” initiative. This blog outlines the key themes discussed during the conversation.

The smart city in the context of institutionalized racial hierarchy

Singapore has consistently been hailed as the world’s leading smart city. For a decade, the city-state has been covering its territory with ubiquitous sensors and integrated digital infrastructures with the aim, in the government’s words, of collecting information on “everyone, everything, everywhere, all the time.” But these smart city technologies are layered on top of pre-existing structures and inequalities, which mediate how these innovations are experienced.

One such structure is an explicit racial hierarchy. As an island nation with a long history of multi-ethnicity and migration, Singapore has witnessed significant migration from Southern China, the Malay Peninsula, India, and Bangladesh. Borrowing from the British model of race-based regulation, this multi-ethnicity is governed by the post-colonial state through the explicit adoption of four racial categories – Chinese, Malay, Indian and Others (or “CMIO” for short) – which are institutionalized within immigration policies, housing, education and employment. As a result, while migrant workers from South and Southeast Asia are the backbone of Singapore’s blue-collar labor market, they occupy the bottom tier of the racial hierarchy; are subject to stark precarity; and have become the “objects” of extensive surveillance by the state.

The promise of the smart city

Singapore’s smart city initiative is “sold” to the public through narratives of economic opportunities and job creation in the knowledge economy, improving environmental sustainability, and increasing efficiency and convenience. Through collecting and inter-connecting all kinds of “mundane” data – such as electricity patterns, data from increasingly-intrusive IoT products, and geo-location and mobility data – into centralized databases, smart cities are said to provide more safety and convenience. Singapore’s hyper-modern technologically-advanced society promises efficient and seamless public services, and the constant technology-driven surveillance and the loss of a few civil liberties are viewed by many as a small price to pay for such efficiency.

Further, the collection of large quantities of data from individuals is promised to enable citizens to be better connected with the government; while governments’ decisions, in turn, will be based upon the purportedly objective data from sensors and devices, thereby freeing decision-making from human fallibility and rendering it more neutral.

The realities: disparate impacts of smart city surveillance on migrant workers

However, smart cities are not merely economic or technological endeavors, but techno-social assemblages that create and impact different publics differently. As Monamie noted, specific imaginations and imagery of Singapore as a hyper-modern, interconnected, and efficient smart city can obscure certain types of racialized physical labor, such as the domestic labor of female Southeast-Asian migrant workers.

Migrant workers are uniquely impacted by increasing digitalization and datafication in Singapore. For years, these workers have been housed in dormitories with occupancy often exceeding capacity, located in the literal “margins” or outskirts of the city: migrant workers have long been physically kept separate from the rest of Singapore’s population within these dormitory complexes. They are stereotyped as violent or frequently inebriated, and the dormitories have for years been surveilled through digital technologies including security cameras, biometric sensors, and data from social media and transport services.

The pandemic highlighted and intensified the disproportionate surveillance of migrant workers within Singapore. Layered on top of the existing technological surveillance of migrants’ dormitories, a surveillance assemblage for COVID-19 contact tracing was created. Measures in the name of public health were deployed to carefully surveil these workers’ bodies and movements. Migrant workers became “objects” of technological experimentation as they were required to use a multitude of new mobile-based apps that integrated immigration data and work permit data with health data (such as body temperature and oximeter readings) and Covid-19 contact tracing data. The permissions required by these apps were also quite broad – including access to Bluetooth services and location data. All the data was stored in a centralized database.

Even though surveillant contact-tracing technologies were later rolled out across Singapore and normalized around the world, the important point here is that these systems were deployed exclusively on migrant workers first. Some apps, Monamie pointed out, were indeed only required by migrant workers, while citizens did not have to use them. This use of interconnected networks of surveillance technologies thus highlights the selective experimentation that underpins smart city initiatives. While smart city initiatives are, by their nature, premised on large-scale surveillance, we often see that policies, apps, and technologies are tried on individuals and communities with the least power first, before spilling out to the rest of the population. In Singapore, the objects of such experimentation are migrant workers who occupy “exceptional spaces” – of being needed to ensure the existence of certain labor markets, but also of needing to be disciplined and regulated. These technological initiatives, in subjecting specific groups at the margins to more surveillance than the rest of the population and requiring them to use more tech-based tools than others, serve to exacerbate the “othering” and isolation of migrant workers.

Forging eddies of resistance

While Monamie noted that “activism” is “still considered a dirty word in Singapore,” there have been some localized efforts to challenge some of the technologies within the smart city, in part due to the intensification of surveillance spurred by the pandemic. These efforts, and a rapidly-growing recognition of the disproportionate targeting and disparate impacts of such technologies, indicate that the smart city is also a site of contestation with growing resistance to its tech-based tools.

March 18, 2022. Ramya Chandrasekhar, LLM program, NYU School of Law, whose research interests relate to data governance, critical infrastructure studies, and critical theory. She previously worked with technology policy organizations and at a reputed law firm in India.

TECHNOLOGY AND HUMAN RIGHTS

Chosen by a Secret Algorithm: Colombia’s top-down pandemic payments

The Colombian government was applauded for delivering payments to 2.9 million people in just 2 weeks during the pandemic, thanks to a big-data-driven approach. But this new approach represents a fundamental change in social policy which shifts away from political participation and from a notion of rights.

On Wednesday, November 24, 2021, the Digital Welfare State and Human Rights Project hosted the ninth episode in the Transformer States conversation series on Digital Government and Human Rights, in an event entitled: “Chosen by a secret algorithm: A closer look at Colombia’s Pandemic Payments.” Christiaan van Veen and Victoria Adelmant spoke with Joan López, Researcher at the Global Data Justice Initiative and at the Colombian NGO Fundación Karisma, about Colombia’s pandemic payments and their reliance on data-driven technologies and prediction. This blog highlights some core issues related to taking a top-down, data-driven approach to social protection.

From expert interviews to a top-down approach

The System of Identification of Potential Beneficiaries of Social Programs (SISBEN, by its Spanish acronym) was created to assist in the targeting of social programs in Colombia. This system classifies the Colombian population along a spectrum of vulnerability through the collection of information about households, including health data, family composition, access to social programs, financial information, and earnings. This data is collected through nationwide interviews conducted by experts. Beneficiaries are then rated, through a simple algorithm, on a scale of 0 to 100, with 0 as the least prosperous and 100 as the most prosperous. SISBEN therefore aims to identify and rank “the poorest of the poor.” This centralized classification system is used by 19 different social programs to determine eligibility: each social program chooses its own cut-off score between 0 and 100 as a threshold for eligibility.
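The shared-score, per-program cut-off logic described here can be sketched in a few lines of Python. The program names and thresholds below are hypothetical examples, not the actual cut-offs used by Colombian agencies.

```python
# Hypothetical sketch of cut-off-based eligibility on a shared 0-100 score,
# where 0 is the least prosperous. Programs and thresholds are invented.
PROGRAM_CUTOFFS = {
    "cash_transfer": 30.0,    # eligible if the household scores below 30
    "health_subsidy": 45.0,
    "housing_support": 25.0,
}

def eligible_programs(sisben_score: float) -> list[str]:
    """Return every program whose cut-off the household falls below."""
    return [name for name, cutoff in PROGRAM_CUTOFFS.items() if sisben_score < cutoff]

print(eligible_programs(28.5))  # ['cash_transfer', 'health_subsidy']
print(eligible_programs(30.1))  # ['health_subsidy']: narrowly misses the cash transfer
```

Because eligibility turns entirely on which side of a numeric line a household falls, any change in how the underlying score is calculated, such as the 2016 shift described below, can silently move people in and out of several programs at once.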

But in 2016, the National Development Office – the Colombian entity in charge of SISBEN – changed the calculation used to determine the profile of the poorest. It introduced a new and secret algorithm which would create a profile based on predicted income generation capacity. Experts collecting data for SISBEN through interviews had previously looked at the realities of people’s conditions: if a person had access to basic services such as water, sanitation, education, health and/or employment, the person was not deemed poor. But the new system sought instead to create detailed profiles about what a person could earn, rather than what a person has. This approach sought, through modelling, to predict households’ situation, rather than to document beneficiaries’ realities.

A new approach to social policy

During the pandemic, the government launched a new system of payments called the Ingreso Solidario (meaning “solidarity income”). This system would provide monthly payments to people who were not covered by any other existing social program that relied on SISBEN; the ultimate goal of Ingreso Solidario was to send money to 2.9 million people who needed assistance due to the crisis caused by COVID-19. The Ingreso Solidario was, in some ways, very effective. People did not have to apply for this program: if they were selected as eligible, they would automatically receive a payment. Many people received the money immediately into their bank accounts, and payments were made very rapidly, within just a few weeks. Moreover, the Ingreso Solidario was an unconditional transfer: receipt of the money was not conditioned on the fulfillment of any requirements.

But the Ingreso Solidario was based on a new approach to social policy, driven by technology and data sharing. The Government entered agreements with private companies, including Experian and TransUnion, to access their databases. Agreements were also made between different government agencies and departments. Through data-sharing arrangements across 34 public and private databases, the government cross-checked the information provided in the SISBEN interviews against information in dozens of databases to find inconsistencies and exclude anyone deemed not to require social assistance. In relying on cross-checking databases to “find” people who are in need, this approach depends heavily on enormous data collection, and it increases the government’s reliance on the private sector.
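A minimal sketch of this cross-checking logic, assuming hypothetical field names and an invented inconsistency rule, might look like the following; the point is that exclusion happens automatically, on the basis of database comparisons rather than any application process.

```python
# Hedged sketch of exclusion via database cross-checks. Field names, sources,
# and the inconsistency rule are hypothetical.
def find_inconsistencies(declared: dict, external_records: list[dict]) -> list[str]:
    """Compare a household's declared income against each external database."""
    issues = []
    for record in external_records:
        if record.get("monthly_income", 0) > declared.get("monthly_income", 0) * 1.5:
            issues.append(f"income mismatch with {record['source']}")
    return issues

def should_exclude(declared: dict, external_records: list[dict]) -> bool:
    # Any flagged inconsistency removes the household from the payment list.
    return bool(find_inconsistencies(declared, external_records))
```

In a design like this, the outcome depends entirely on the quality and freshness of dozens of third-party databases, which is part of why the approach deepens the government's reliance on private data holders.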

The implications of this new approach

This new approach to social policy, as implemented through the Ingreso Solidario, has fundamental implications. First, this system is difficult to challenge. The algorithm used to profile vulnerability, to predict income generating capacity, and to assign a score to people living in poverty, is confidential. The Government consistently argued that disclosing information about the algorithm would lead to a macroeconomic crisis because if people knew how the system worked, they would try to cheat the system. Additionally, SISBEN has been normalized. Though there are many other ways that eligibility for social programs could be assessed, the public accepts it as natural and inevitable that the government has taken this arbitrary approach reliant on numerical scoring and predictions. Due to this normalization, combined with the lack of transparency, this new approach to determining eligibility for social programs has therefore not been contested.

Second, in adopting an approach which relies on cross-checking and analyzing data, the Ingreso Solidario is designed to avoid any contestation in the design and implementation of the algorithm. This is a thoroughly technocratic endeavor. The idea is to use databases and avoid going to, and working with, the communities. The government was, in Joan’s words, “trying to control everything from a distance” to “avoid having political discussions about who should be eligible.” There were no discussions and negotiations between the citizens and the Government to jointly address the challenges of using this technology to target poor people. Decisions about who the extra 2.9 million beneficiaries should be were taken unilaterally from above. As Joan argued, this was intentional: “The mindset of avoiding political discussion is clearly part of the idea of Ingreso Solidario.”

Third, because people were unaware that they were going to receive money, those who received a payment felt like they had won the lottery. Thus, as Joan argued, people saw this money not “as an entitlement, but just as a gift that this person was lucky to get.” This therefore represents a shift away from a conception of assistance as something we are entitled to by right. But in re-centering the notion of rights, we are reminded of the importance of taking human rights seriously when analyzing and redesigning these kinds of systems. Joan noted that we need to move away from an approach of deciding what poverty is from above, and instead move towards working with communities. We must use fundamental rights as guidance in designing a system that will provide support to those in poverty in an open, transparent, and participatory manner which does not seek to bypass political discussion.

María Beatriz Jiménez, LLM program, NYU School of Law, with a research focus on digital rights. She previously worked for the Colombian government in the Ministry of Information and Communication Technologies and the Ministry of Trade.

TECHNOLOGY & HUMAN RIGHTS

Pilots, Pushbacks, and the Panopticon: Digital Technologies at the EU’s Borders

The European Union is increasingly introducing digital technologies into its border control operations. But conversations about these emerging “digital borders” are often silent about the significant harms experienced by those subjected to these technologies, their experimental nature, and their discriminatory impacts.

On October 27, 2021, we hosted the eighth episode in our Transformer States Series on Digital Government and Human Rights, in an event entitled “Artificial Borders? The Digital and Extraterritorial Protection of ‘Fortress Europe.’” Christiaan van Veen and Ngozi Nwanta interviewed Petra Molnar about the European Union’s introduction of digital technologies into its border control and migration management operations. The video and transcript of the event, along with additional reading materials, can be found below. This blog post outlines key themes from the conversation.

Digital technologies are increasingly central to the EU’s efforts to curb migration and “secure” its borders. Against a background of growing violent pushbacks, surveillance technologies such as unpiloted drones and aerostat machines with thermo-vision sensors are being deployed at the borders. The EU-funded “ROBORDER” project aims to develop “a fully-functional autonomous border surveillance system with unmanned mobile robots.” Refugee camps on the EU’s borders, meanwhile, are being turned into a “surveillance panopticon,” as the adults and children living within them are constantly monitored by cameras, drones, and motion-detection sensors. Technologies also mediate immigration and refugee determination processes, from automated decision-making, to social media screening, and a pilot AI-driven “lie detector.”

In this Transformer States conversation, Petra argued that technologies are enabling a “sharpening” of existing border control policies. As discussed in her excellent report entitled “Technological Testing Grounds,” completed with European Digital Rights and the Refugee Law Lab, new technologies are not only being used at the EU’s borders, but also to surveil and control communities on the move before they reach European territory. The EU has long practiced “border externalization,” where it shifts its border control operations ever-further away from its physical territory, partly through contracting non-Member States to try to prevent migration. New technologies are increasingly instrumental in these aims. The EU is funding African states’ construction of biometric ID systems for migration control purposes; it is providing cameras and surveillance software to third countries to prevent travel towards Europe; and it supports efforts to predict migration flows through big data-driven modeling. Further, borders are increasingly “located” on our smartphones and in enormous databases as data-based risk profiles and pre-screening become a central part of the EU’s border control agenda.

Ignoring human experience and impacts

But all too often, discussions about these technologies are sanitized and depoliticized. People on the move are viewed as a security problem, and policymakers, consultancies, and the private sector focus on the “opportunities” presented by technologies in securitizing borders and “preventing migration.” The human stories of those who are subjected to these new technological tools and the discriminatory and deadly realities of “digital borders” are ignored within these technocratic discussions. Some EU policy documents describe the “European Border Surveillance System” without mentioning people at all.

In this interview, Petra emphasized these silences. She noted that “human experience has been left to the wayside.” First-person accounts of the harmful impacts of these technologies are not deemed to be “expert knowledge” by policymakers in Brussels, but it is vital to expose the human realities and counter the sanitized policy discussions. Those who are subjected to constant surveillance and tracking are dehumanized: Petra reports that some are left feeling “like a piece of meat without a life, just fingerprints and eye scans.” People are being forced to take ever-deadlier routes to avoid high-tech surveillance infrastructures, and technology-enabled interdictions and pushbacks are leading to deaths. Further, difference in treatment is baked into these technological systems, as they enable and exacerbate discriminatory inferences along racialized lines. As UN Special Rapporteur on Racism E. Tendayi Achiume writes, “digital border technologies are reinforcing parallel border regimes that segregate the mobility and migration of different groups” and are being deployed in racially discriminatory ways. Indeed, some algorithmic “risk assessments” of migrants have been argued to represent racial profiling.

Policy discussions about “digital borders” also do not acknowledge that, while the EU spends vast sums on technologies, the refugee camps at its borders have neither running water nor sufficient food. Enormous investment in digital migration management infrastructures is being “prioritized over human rights.” As one man commented, “now we have flying computers instead of more asylum.”

Technological experimentation and pilot programs in “gray zones”

Crucially, these developments are occurring within largely-unregulated spaces. A central theme of this Transformer States conversation—mirroring the title of Petra’s report, “Technological Testing Grounds”—was the notion of experimentation within the “gray zones” of border control and migration management. Not only are non-citizens and stateless persons accorded fewer rights and protections than EU citizens, but immigration and asylum decision-making is also an area of law which is highly discretionary and contains fewer legal safeguards.

This low-rights, high-discretion environment makes it ripe for testing new technologies. This is especially the case in “external” spaces far from European territory, which are subject to even less regulation. Projects which would not be allowed in other spaces are being tested on populations who are literally at the margins, as refugee camps become testing zones. The abovementioned “lie detector,” whereby an “avatar” border guard flagged “biomarkers of deceit,” was “merely” a pilot program. It has since been fiercely criticized, including by the European Parliament, and challenged in court.

Experimentation is deliberately occurring in these zones as refugees and migrants have limited opportunities to challenge this experimentation. The UN Special Rapporteur on Racism has noted that digital technologies in this area are therefore “uniquely experimental.” This has parallels with our work, where we consistently see governments and international organizations piloting new technologies on marginalized and low-income communities. In a previous Transformer States conversation, we discussed Australia’s Cashless Debit Card system, in which technologies were deployed upon Aboriginal people through a pilot program. In the UK, radical digitalization-driven reform of the welfare system was also piloted on low-income groups, with “catastrophic” effects.

Where these developments are occurring within largely-unregulated areas, human rights norms and institutions may prove useful. As Petra noted, the human rights framework requires courts and policymakers to focus upon the human impacts of these digital border technologies, and highlights the discriminatory lines along which their effects are felt. The UN Special Rapporteur on Racism has outlined how human rights norms require mandatory impact assessments, moratoria on surveillance technologies, and strong regulation to prevent discrimination and harm.

November 23, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.

TECHNOLOGY & HUMAN RIGHTS

Social rights disrupted: how should human rights organizations adapt to digital government?

As the digitalization of government is accelerating worldwide, human rights organizations who have not historically engaged with questions surrounding digital technologies are beginning to grapple with these issues. This challenges these organizations to adapt both their substantive focus and working methods while remaining true to their values and ideals.

On September 29, 2021, Katelyn Cioffi and I hosted the seventh event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Salima Namusobya, Executive Director of the Initiative for Social and Economic Rights (ISER) in Uganda, about how socioeconomic rights organizations are having to adapt to respond to issues arising from the digitalization of government. In this blog post, I outline parts of the conversation. The event recording, transcript, and additional readings can be found below.

Questions surrounding digital technologies are often seen as issues for “digital rights” organizations, which generally focus on a privileged set of human rights issues such as privacy, data protection, free speech online, or cybersecurity. But, as governments everywhere enthusiastically adopt digital technologies to “transform” their operations and services, these developments are starting to be confronted by actors who have not historically engaged with the consequences of digitalization.

Digital government as a new “core issue”

The Initiative for Social and Economic Rights (ISER) in Uganda is one such human rights organization. Its mission is to improve respect, recognition, and accountability for social and economic rights in Uganda, focusing on the right to health, education, and social protection. It had never worked on government digitalization until recently.

But, through its work on social protection schemes, ISER was confronted with the implications of Uganda’s national digital ID program. While monitoring the implementation of the Senior Citizens grant, in which persons over 80 years old receive cash grants, ISER staff frequently encountered people who were clearly over 80 but were not receiving grants. This program had been linked to Uganda’s national identification scheme, which holds individuals’ biographic and biometric information in a centralized electronic database called the National Identity Register and issues unique IDs to enrolled individuals. Many older persons had struggled to obtain IDs because their fingerprints could not be captured. Many other older persons had obtained national IDs, but the wrong birthdates were entered into the ID Register; in one instance, a man’s birthdate was wrong by nine years. In each case, the Senior Citizens grant was not paid to eligible beneficiaries because of faulty or missing data within the National Identity Register. Witnessing these significant exclusions led ISER to become actively involved in research and advocacy surrounding the digital ID. They partnered with CHRGJ’s Digital Welfare State team and the Ugandan digital rights NGO Unwanted Witness, and the collective work culminated in a joint report. This has now become a “core issue” for ISER.

Key challenges

While moving into this area of work, ISER has faced some challenges. First, digitalization is spreading quickly across various government services. From the introduction of online education despite significant numbers of people having no access to electricity or the internet, to the delivery of COVID-19 relief via mobile money when only 71% of Ugandans own a mobile phone, exclusions are arising across multiple government initiatives. As technology-driven approaches are being rapidly adopted and new avenues of potential harm are continually materializing, organizations can find it difficult to keep up.

The widespread nature of these developments means that organizations are finding themselves making the same argument again and again to different parts of government. It is often proclaimed that digitized identity registers will enable integration and interoperability across government, and that introducing technologies into governance “overcomes bureaucratic legacies, verticality and silos.” But ministries in Uganda remain fragmented and are each separately linking their services to the national ID. ISER must go to different ministries whenever new initiatives are announced to explain, yet again, the significant level of exclusion that using the National Identity Register entails. While fragmentation was a pre-existing problem, the rapid proliferation of initiatives across government is leaving organizations “firefighting.”

Second, organizations face an uphill battle in convincing the government to slow down in their deployment of technology. Government officials often see enormous potential in technologies for cracking down on security threats and political dissent. Digital surveillance is proliferating in Uganda, and the national ID contributes to this agenda by enabling the government to identify individuals. Where such technologies are presented as combating terrorism, advocating against them is a challenge.

Third, powerful actors are advocating the benefits of government digitalization. International agencies such as the World Bank are providing encouragement and technical assistance and are praising governments’ digitalization efforts. Salima noted that governments take this seriously, and if publications from these organizations are “not balanced enough to bring out the exclusionary impact of the digitalization, it becomes a problem.” Civil society faces an enormous challenge in countering overly-positive reports from influential organizations.

Lessons for human rights organizations

In light of these challenges, several key lessons arise for human rights organizations who are not used to working on technology-related problems but who are witnessing harmful impacts from digital government.

One important lesson is that organizations will need to adopt new and different methods in dealing with challenges arising from the rapid spread of digitalization; they should use “every tool available to them.” ISER is an advocacy organization which only uses litigation as a last resort. But when the Ugandan Ministry of Health announced that national ID would be required to access COVID-19 vaccinations, “time was of the essence,” in Salima’s words. Together with Unwanted Witness, ISER immediately launched litigation seeking an injunction, arguing that this requirement would exclude millions, and the policy was reversed.

ISER’s working methods have changed in other ways. ISER is not a service provision charity. But, in seeing countless people unable to access services because they were unable to enroll in the ID Register, ISER felt obliged to provide direct assistance. Staff compiled lists of people without ID, provided legal services, and helped individuals to navigate enrolment. Advocacy organizations may find themselves taking on such roles to assist those who are left behind in the transition to digital government.

Another key lesson is that organizations have much to gain from sharing their experiences with practitioners who are working in different national contexts. ISER has been comparing its experiences and sharing successful advocacy approaches with Kenyan and Indian counterparts and has found “important parallels.”

Last, organizations must engage in active monitoring and documentation to create an evidence base which can credibly show how digital initiatives are, in practice, affecting some of the most vulnerable. As Salima noted, “without evidence, you can make as much noise as you like,” but it will not lead to change. From taking videos and pictures, to interviewing and writing comprehensive reports, organizations should be working to ensure that affected communities’ experiences can be amplified and reflected to demonstrate the true impacts of government digitalization.

October 19, 2021. Victoria Adelmant, Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

False Promises and Multiple Exclusion: Summary of Our RightsCon Event on Uganda’s National Digital ID System

TECHNOLOGY & HUMAN RIGHTS

False Promises and Multiple Exclusion: Summary of Our RightsCon Event on Uganda’s National Digital ID System 

Despite its promotion as a tool for social inclusion and development, Uganda’s National Digital ID System is motivated primarily by national security concerns. As a result, the ID system has generated both direct and indirect exclusion, particularly affecting women and older persons.

On June 10, 2021, the Center for Human Rights and Global Justice at NYU School of Law co-hosted the panel “Digital ID: what is it good for? Lessons from our research on Uganda’s identity system and access to social services” as part of RightsCon, the leading summit on human rights in the digital age. The panelists included Salima Namusobya, Executive Director of the Initiative for Social and Economic Rights (ISER), Dorothy Mukasa, Team Leader of Unwanted Witness, Grace Mutung’u, Research Fellow at the Centre for IP and IT Law at Strathmore University, and Christiaan van Veen, Director of the Digital Welfare State & Human Rights Project at the Center. This blog summarizes highlights of the panel discussion. A recording and transcript of the conversation, as well as additional readings, can be found below.

Uganda’s national digital ID system, known as Ndaga Muntu, was introduced in 2014 through a mass registration campaign. The government aimed to collect the biographic and biometric information, including photographs and fingerprints, of every adult in the country, to record this data in a centralized database known as the National Identity Register, and to issue a national ID card and unique ID number to each adult. Since its introduction, having a national ID has become a prerequisite to access a whole host of services, from registering for a SIM card and opening a bank account, to accessing health services and social protection schemes.

This linkage of Ndaga Muntu to public services has raised significant human rights concerns and is serving to lock millions of people in Uganda out of critical services. Seven years after its inception, it is clear that the national digital ID is a tool for exclusion rather than for inclusion. Drawing on the joint report by the Center, ISER, and Unwanted Witness, this event made clear that Ndaga Muntu was grounded in false promises and is resulting in multiple forms of exclusion.

The False Promise of Inclusion

The Ugandan government argued that this digital ID system would enhance social inclusion by allowing Ugandans to prove their identity more easily. Having this proof of identity would facilitate access to public services such as healthcare, enable people to sign up for private services such as bank accounts, and allow people to move freely throughout Uganda. The same rhetoric of inclusion was used to sell Aadhaar, India’s digital ID system, to the Indian public.

But for many Ugandans this was a false promise. From the very outset, Ndaga Muntu was developed chiefly as a tool for national security. The powerful Ugandan military had long pushed for the collection of sensitive identity information and biometric data: in the context of a volatile region, a centralized information database is appealing because of its ability to verify identity and indicate who is “really Ugandan” and who is not. Therefore, the national ID project was housed in the Ministry of Internal Affairs, overseen by prominent members of the Ugandan People’s Defense Force, and designed to serve only those who succeeded in completing a rigorous citizenship verification process.

The panelist from Kenya, Grace Mutung’u, shared how Kenya’s hundred-year-old national identification system was similarly rooted in a colonial regime that focused on national security and exclusion. Those design principles created a system that sought only to “empower the already empowered” and not to extend benefits beyond already-privileged constituencies. The result in both Kenya and Uganda was the same: digital ID systems that are designed to ensure that certain individuals and groups remain excluded from political, economic, and social life.

Proliferating Forms of Exclusion

Beyond the fact that Ndaga Muntu was designed to directly exclude anyone not entitled to access public services, those who are entitled are also being excluded in the millions. For ordinary Ugandans, registering for Ndaga Muntu is a nightmarish process rife with problems at every step: corruption, incorrect data entry, and technical errors have all impeded access to the ID. Vulnerable populations who rely on social protection programs that require proof of ID bear the brunt of such errors. For example, one older woman was told that the national ID registration system could not capture her picture because of her grey hair. Other elderly Ugandans have had trouble with fingerprint scanners that could not capture fingerprints worn away from years of manual labor.

The many individuals who have not succeeded in registering for Ndaga Muntu are therefore being left out of the critical services which are increasingly linked to the ID. At least 50,000 of the 200,000 eligible persons over the age of 80 in Uganda were unable to access potentially lifesaving benefits such as the Senior Citizens’ Grant cash transfer program. Women have likewise been disproportionately affected by the national ID requirement; for instance, pregnant women have been refused services by healthcare workers for failing to provide ID. To make matters worse, ID requirements are increasingly ubiquitous in Uganda: proof of ID is often required to book transportation, to vote, and to access educational services, healthcare, social protection grants, and food donations. Having a national ID has become necessary for basic survival, especially for those who live in extreme poverty.

Digital ID systems should not prevent people from living their lives and using basic services that should be universally accessible, particularly when such systems are justified on the basis that they will improve access to services. Not only was the promise of inclusion for Ndaga Muntu false, but the rollout of the system has also been incompetent and faulty, leading to even greater exclusion. The profound impact of this double exclusion in Uganda demonstrates that such digital ID systems and their impacts on social and economic rights warrant greater and urgent attention from the human rights community at large.

June 12, 2021. Madeleine Matsui, JD program, Harvard Law School; intern with the Digital Welfare State and Human Rights.

Social Credit in China: Looking Beyond the “Black Mirror” Nightmare

TECHNOLOGY & HUMAN RIGHTS

Social Credit in China: Looking Beyond the “Black Mirror” Nightmare

The Chinese government’s Social Credit program has received much attention from Western media and academics, but misrepresentations have led to confusion over what it truly entails. Such mischaracterizations unhelpfully distract from the dangers and impacts of the realities of Social Credit.

On March 31, 2021, Christiaan Van Veen and I hosted the sixth event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Dr. Chenchen Zhang, Assistant Professor at Queen’s University Belfast, to explore the much-discussed but little-understood Social Credit program in China.

Though the Chinese government’s Social Credit program has received significant attention from Western media and rights organizations, much of this discussion has misrepresented the program. Social Credit is imagined as a comprehensive, nation-wide system in which every action is monitored and a single score is assigned to each individual, much like a Black Mirror episode. This is in fact quite far from reality. But this image has become entrenched in the West, as discussions and some academic debate have focused on abstracted portrayals of what Social Credit could be. In addition, the widely-discussed voluntary, private systems run by corporations, such as Alipay’s Sesame Credit or Tencent’s WeChat score, are often mistakenly conflated with the government’s Social Credit program.

Jeremy Daum has argued that these widespread misrepresentations of Social Credit serve to distract from examining “the true causes for concern” within the systems actually in place. They also distract from similar technological developments occurring in the West, which seem acceptable by comparison. An accurate understanding is required to acknowledge the human rights concerns that this program raises.

The crucial starting point here is that the government’s Social Credit system is a heterogeneous assemblage of fragmented and decentralized systems. Central government, specific government agencies, public transport networks, municipal governments, and others are experimenting with diverse initiatives with different aims. Indeed, xinyong, the term which is translated as “credit” in Social Credit, encompasses notions of financial creditworthiness, regulatory compliance, and moral trustworthiness, therefore covering programs with different visions and narratives. A common thread across these systems is a reliance on information-sharing and lists to encourage or discourage certain behaviors, including blacklists to “shame” wrongdoers and “redlists” publicizing those with a good record.

One national-level program called the Joint Rewards and Sanctions mechanism shares information across government agencies about companies which have violated regulations. Once a company is included on one agency’s blacklist for having, for example, failed to pay migrant workers’ wages, other agencies may also sanction that company and refuse to grant it a license or contract. But blacklisting mechanisms also affect individuals: the People’s Court of China maintains a list of shixin (dishonest) people who default on judgments. Individuals on this list are prevented from accessing “non-essential consumption” (including travel by plane or high-speed train) and their names are published, adding an element of public shaming. Other local or sector-specific “credit” programs aim at disciplining individual behavior: anyone caught smoking on the high-speed train is placed on the railway system’s list of shixin persons and subjected to a 6-month ban from taking the train. Localized “citizen scoring” schemes are also being piloted in a dozen cities. Currently, these resemble “club membership” schemes with minor benefits and have low sign-up rates; some have been very controversial. In 2019, in response to controversies, the National Development and Reform Commission issued guidelines stating that citizen scores must only be used for incentivizing behavior and not as sanctions or to limit access to basic public services. Presently, each of the systems described here are separate from one another.

But even where generalizations and mischaracterizations of Social Credit are dispelled, many aspects nonetheless raise significant concerns. Such systems will, of course, worsen issues surrounding privacy, chilling effects, discrimination, and disproportionate punishment. These have been explored at length elsewhere, but this conversation with Chenchen raised additional important issues.

First, a stated objective behind the use of blacklists and shaming is the need to encourage compliance with existing laws and regulations, since non-compliance undermines market order. This is not a unique approach: the US Department of Labor names and shames corporations that violate labor laws, and the World Bank has a similar mechanism. But the laws which are enforced through Social Credit exist in and constitute an extremely repressive context, and these mechanisms are applied to individuals. An individual can be arrested for protesting labor conditions or for speaking about certain issues on social media, and systems like the People’s Court blacklist amplify the consequences of these repressive laws. Mechanisms which “merely” seek to increase legal compliance are deeply problematic in this context.

Second, as with so many of the digital government initiatives discussed in the Transformer States series, Social Credit schemes exhibit technological solutionism which invisibilizes the causes of the problems they seek to address. Non-payment of migrant workers’ wages, for example, is a legitimate issue which must be tackled. But in turning to digital solutions such as an app which “scores” firms based on their record of wage payments, a depoliticized technological fix is promised to solve systemic problems. In the process, it obscures the structural reasons behind migrant workers’ difficulties in accessing their wages, including a differentiated citizenship regime that denies them equal access to social provisions.

Separately, there are disparities in how individuals in different parts of the country are affected by Social Credit. Around the world, governments’ new digital systems are consistently trialed on the poorest or most vulnerable groups: for example, smartcard technology for quarantining benefit income in Australia was first introduced within indigenous communities. Similarly, experimentation with Social Credit systems is unequally targeted, especially on a geographical basis. There is a hierarchy of cities in China with provincial-level cities like Beijing at the top, followed by prefectural-level cities, county-level cities, then towns and villages. A pattern is emerging whereby smaller or “lower-ranked” cities have adopted more comprehensive and aggressive citizen scoring schemes. While Shanghai has local legislation that defines the boundaries of its Social Credit scheme, less-known cities seeking to improve their “branding” are subjecting residents to more arbitrary and concerning practices.

Of course, the biggest concern surrounding Social Credit relates to how it may develop in the future. While this is currently a fragmented landscape of disparate schemes, the worry is that these may be consolidated. Chenchen stated that a centralized, nationwide “citizen scoring” system remains unlikely and would not enjoy support from the public or from the Central Bank, which oversees the Social Credit program. But it is not out of the question that privately-run schemes such as Sesame Credit might eventually be linked to the government’s Social Credit system. Though the system is not (yet) as comprehensive and coordinated as has been portrayed, its logics and methodologies of sharing ever-more information across siloes to shape behaviors may well push in this direction, in China and elsewhere.

April 20, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Locked In! How the South African Welfare State Came to Rely on a Digital Monopolist

TECHNOLOGY & HUMAN RIGHTS

Locked In! How the South African Welfare State Came to Rely on a Digital Monopolist

The South African Social Security Agency provides “social grants” to 18 million citizens. In using a single private company with its own biometric payment system to deliver grants, the state became dependent on a monopolist and exposed recipients to debt and financial exploitation.

On February 24, 2021, the Digital Welfare State and Human Rights Project hosted the fifth event in their “Transformer States” conversation series, which focuses on the human rights implications of the emerging digital state. In this conversation, Christiaan Van Veen and Victoria Adelmant explored the impacts of outsourcing at the heart of South Africa’s social security system with Lynette Maart, the National Director of the South African human rights organization The Black Sash. This blog summarizes the conversation and provides the event recording and additional readings below.

Delivering the right to social security

Section 27(1)(c) of the 1996 South African Constitution guarantees everyone the “right to have access” to social security. In the early years of the post-Apartheid era, the country’s nine provincial governments administered social security grants to fulfill this constitutional social right. In 2005, the South African Social Security Agency (SASSA) was established to consolidate these programs. The social grant system has expanded significantly since then, with about 18 million of South Africa’s roughly 60 million citizens receiving grants. The system’s growth and coverage have been a source of national pride. In 2017, the Constitutional Court remarked that the “establishment of an inclusive and effective program of social assistance” is “one of the signature achievements” of South Africa’s constitutional democracy.

Addressing logistical challenges through outsourcing

Despite SASSA’s progress in expanding the right to social security, its grant programs remain constrained by the country’s physical, digital, and financial infrastructure. Millions of impoverished South Africans live in rural areas lacking proper access to roads, telecommunications, internet connectivity, or banking, which makes the delivery of cash transfers difficult and expensive. Instead of investing in its own cash transfer delivery capabilities, SASSA awarded an exclusive contract in 2012 to Cash Paymaster Services (CPS), a subsidiary of the South African technology company Net1, to administer all of SASSA’s cash transfers nationwide. This made CPS a welfare delivery monopolist overnight.

SASSA selected CPS in large part because its payment system, which included a smart card with an embedded fingerprint-based chip, could reach the poorest and most remote parts of the country. Because CPS itself lacked a banking license, it partnered with Grindrod Bank and opened 10 million new bank accounts for SASSA recipients. Cash transfers could be made via the CPS payment system to smart cards without the need for internet or electricity. CPS rolled out a nationwide network of 10,000 “paypoints” where social grant payments could be withdrawn; recipients were never further than 5 km from a paypoint.

Thanks to its position as sole deliverer of SASSA grants and its autonomous payment system, CPS also had unique access to the financial data of millions of the poorest South Africans. Other Net1 subsidiaries including Moneyline (a lending group), Smartlife (a life insurance provider) and Manje Mobile (a mobile money service) were able to exploit this “customer base” to cross-sell services. Net1 subsidiaries were soon marketing loans, insurance, and airtime to SASSA recipients. These “customers” were particularly attractive because fees could be automatically deducted from the SASSA grants the very moment they were paid onto CPS’s infrastructure. Recipients became a lucrative, practically risk-free market for lenders and other service providers due to these immediate automatic deductions from government transfers. The Black Sash found that women were going to paypoints at 4.30am in their pajamas to try to withdraw their grants before deductions left them with hardly anything.

Through its “Hands off Our Grants” advocacy campaign, the Black Sash showed that these deductions were often unauthorized and unlawful. Lynette told the story of Ma Grace, an elderly pensioner who was sold airtime even though she did not own a mobile phone, and whose avenues to recourse were all but blocked off. She explained that telephone helplines were not free but required airtime (which poor people often did not have), and that they “deflected calls” and exploited language barriers to ensure customers “never really got an answer in the language of their choice.”

“Lockin” and the hollowing out of state capacity

Net1’s exploitation of SASSA beneficiaries is only part of the story. This is also about multidimensional governmental failure stemming from SASSA’s outright dependence on CPS. As academic Keith Breckenridge has written, the Net1/SASSA relationship involves “vendor lockin,” a situation in which “the state must confront large, perhaps unsustainable, switching costs to break free of its dependence on the company for grant delivery and data processing.” There are at least three key dimensions of this lockin dynamic which were explored in the conversation:

  • SASSA outsourced both cash transfer delivery and program oversight to CPS. CPS’s “foot soldiers” wore several hats: the same person might deliver grant payments at paypoints, field complaints as a local SASSA representative, and sell loans or airtime. Commercial activity and benefits delivery were conflated.
  • The program’s structure resulted in acute regulatory failures. Because CPS (not Grindrod Bank) ultimately delivered SASSA funds to recipients via its payment infrastructure outside the National Payment System, the payments were exempt from normal oversight by banking regulators. Accordingly, the regulators were blind to unauthorized deductions by Net1 subsidiaries from recipients’ payments.
  • SASSA was entirely reliant on CPS and unable to reach its own beneficiaries. Though the Constitutional Court declared SASSA’s 2012 contract with CPS unconstitutional due to irregularities in the procurement process, it ruled that the contract should continue because SASSA could not yet deliver the grants without CPS. In 2017, Net1 co-founder and former CEO Serge Belamant boasted that SASSA would “need to use pigeons” to deliver social grants without CPS. While this was an exaggeration, when SASSA finally transitioned to a partnership with the South African Post Office in 2018, it had to reduce the number of paypoints from 10,000 to 1,740. As Lynette observed, SASSA now has a weaker footprint in rural areas. Therefore, rural recipients “bear the costs of transport and banking fees in order to withdraw their own money.”

This story of SASSA, CPS, and social security grants in South Africa shows not only how outsourced digital delivery of welfare can lead to corporate exploitation and stymied access to social rights, but also how reliance on private technologies can induce “lockin” that undermines the state’s ability to perform basic and vital functions. As the Constitutional Court stated in 2017, the exclusive contract between SASSA and CPS led to a situation in which “the executive arm of government admits that it is not able to fulfill its constitutional and statutory obligations to provide for the social assistance of its people.”

March 11, 2021. Adam Ray, JD program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2020. He holds a Master’s degree from Yale University and previously worked as the CFO of Songkick.