CLIMATE AND ENVIRONMENT

Center Chair gives keynote talk in Brazil Supreme Court’s Seminar on structural litigation  

On October 7, 2024, as part of the Center for Human Rights and Global Justice’s ongoing academic exchange with Brazil’s Supreme Federal Court (STF), Professor César Rodríguez-Garavito gave a keynote talk at the seminar “Structural Litigation: Advances and Challenges” in Brasilia.

The event was organized by STF Chief Justice Luís Roberto Barroso together with other high-ranking Brazilian judges, including STF Deputy Chief Justice Edson Fachin and the Federal High Court’s Chief Justice Antonio Herman Benjamin.

In his opening remarks, Chief Justice Barroso highlighted the significance of structural litigation: constitutional cases addressing systemic policy issues that affect the rights of large groups. Among the ongoing structural cases before the Brazilian Supreme Court are those dealing with violations of Indigenous rights in the Amazon, prison overcrowding, and police violence in informal settlements. He underscored that this emerging area is central to the Brazilian judiciary, urging judges to proactively identify such issues and to ensure that the relevant governmental institutions develop and implement effective solutions. Other judges on the panel echoed the importance of the judiciary’s authority to act in these matters and emphasized the need for effective monitoring of structural court decisions.

The seminar also featured discussions on the judiciary’s role in resolving complex structural conflicts. Professor Rodríguez-Garavito shared insights on how structural cases are handled in comparative law, focusing on their impacts and their potential applications to climate litigation. He highlighted the STF’s contributions to the protection of constitutional rights through structural rulings and suggested ways to ensure the legitimacy and effective implementation of the Court’s decisions.

The Center for Human Rights and Global Justice’s participation in this seminar is one of many initiatives planned with high courts from around the world for the upcoming year, underscoring the Center’s commitment to supporting judicial engagement in innovative legal areas that protect rights and advance justice for all.

TECHNOLOGY & HUMAN RIGHTS

Poor Enough for the Algorithm? Exploring Jordan’s Poverty Targeting System

The Jordanian government is using an algorithm to rank social protection applicants from least poor to poorest as part of a poverty alleviation program. While helpful to those who receive aid, the system excludes many people in need because it fails to reflect the complex realities of poverty: it uses an outdated poverty measure, weights imperfect indicators such as utility consumption, and relies on a static view of socioeconomic status.

On November 28, 2023, the Digital Welfare State and Human Rights project hosted the sixteenth episode in the Transformer States conversation series on Digital Government and Human Rights. Victoria Adelmant and Katelyn Cioffi interviewed Hiba Zayadin, a senior researcher in the Middle East and North Africa division at Human Rights Watch (HRW), about a report published by HRW on the Jordanian government’s use of an algorithmic system to rank applicants for a welfare program based on their poverty level, using data like electricity usage and car ownership. This blog highlights key issues related to the system’s inability to reflect the complexities of poverty and its algorithmic exclusion of individuals in need.

The context behind Jordan’s poverty targeting program 

‘Poverty targeting’ is generally understood to mean directing social program benefits towards those most in need, with the aim of using limited government resources efficiently and improving living conditions for the poorest individuals. This approach entails collecting wide-ranging information about socioeconomic circumstances, often through in-depth surveys and interviews, to enable means testing or proxy means testing. Some governments have adopted an approach in which beneficiaries are ‘ranked’ from richest to poorest, with aid targeted only at those falling below a certain threshold. The World Bank has long advocated for poverty targeting in social assistance: since 2003, for example, it has supported Brazil’s Bolsa Família program, which is targeted at the poorest 40% of the population.

Increasingly, the World Bank has turned to new technologies to improve the accuracy of poverty targeting, providing funding to many countries for data-driven, algorithm-enabled targeting systems. Such programs have been implemented in countries including Jordan, Mauritania, Palestine, Morocco, Iraq, Tunisia, Egypt, and Lebanon.

Launched in 2019 with World Bank support, Jordan’s Takaful program is an automated cash transfer program that provides monthly payments of roughly US$56 to US$192 to families in order to mitigate poverty. Managed by the National Aid Fund, the program targets the more than 24% of Jordan’s population that falls under the poverty line. Takaful has been especially welcome in Jordan in light of rising living costs. However, the policy choices underpinning the program have excluded many individuals in need: eligibility is restricted to Jordanian nationals, so the program does not cover registered Syrian refugees, Palestinians without Jordanian passports, migrant workers, or the non-Jordanian families of Jordanian women, since Jordanian women cannot pass citizenship on to their children. Initial phases of the program had broader eligibility, but the criteria were tightened in subsequent iterations.

Mismatch between the Takaful program’s indicators and the reality of people’s lives

In addition, further exclusions have arisen from the operation of the algorithmic system used in the program. When a person applies to Takaful, the system first determines eligibility by checking whether the applicant is a citizen and falls under the poverty line. It then employs an algorithm relying on 57 socioeconomic indicators to rank applicants from least poor to poorest, drawing on existing government databases as well as applicants’ answers to a questionnaire that must be completed online. Indicators include household size, geographic location, utilities consumption, business ownership, and car ownership. It is unclear how these indicators are weighted, but the National Aid Fund has acknowledged that some indicators lead to automatic exclusion from the program: applicants who own a car that is less than five years old or a business valued at over 3,000 Jordanian dinars, for instance, are automatically excluded.
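
To make the two-stage logic concrete, here is a minimal sketch in Python. It is a hypothetical reconstruction, not the actual system: the real indicator list and weights are undisclosed, so the indicator names and weight values below are invented; only the car-age and business-value exclusion rules come from the reporting described above.

```python
# Hypothetical sketch of a two-stage targeting system like the one described
# above. Indicator names and weights are invented; only the car-age and
# business-value exclusion rules are drawn from the reported description.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Applicant:
    is_citizen: bool
    below_poverty_line: bool
    car_age_years: Optional[float]   # None if no car is registered
    business_value_jod: float        # 0 if no registered business
    household_size: int
    monthly_electricity_kwh: float

def is_eligible(a: Applicant) -> bool:
    """Stage 1: hard rules that exclude applicants outright."""
    if not (a.is_citizen and a.below_poverty_line):
        return False
    if a.car_age_years is not None and a.car_age_years < 5:
        return False   # owning a car less than five years old disqualifies
    if a.business_value_jod > 3000:
        return False   # a business valued over 3,000 dinars disqualifies
    return True

# Invented weights standing in for the undisclosed 57-indicator model.
WEIGHTS = {"household_size": -1.5, "monthly_electricity_kwh": 0.02}

def poverty_score(a: Applicant) -> float:
    """Stage 2: lower score = assessed as poorer (weighting illustrative)."""
    return (WEIGHTS["household_size"] * a.household_size
            + WEIGHTS["monthly_electricity_kwh"] * a.monthly_electricity_kwh)

def rank_poorest_first(applicants: list[Applicant]) -> list[Applicant]:
    """Rank eligible applicants from poorest to least poor."""
    return sorted((a for a in applicants if is_eligible(a)), key=poverty_score)
```

Even in this toy form, the design choices HRW critiques are visible: hard rules exclude applicants before any holistic assessment, and the positive electricity weight encodes the contested assumption that higher consumption means less need.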

In its recent report, HRW highlights a number of shortcomings of the algorithmic system deployed in the Takaful program, critiquing its inability to reflect the complex and dynamic nature of poverty. The system, HRW argues, uses an outdated poverty measure, and embeds many problematic assumptions. For example, the algorithm gives some weight to whether an applicant owns a car. However, there are cars in people’s names that they do not actually own; some people own cars that broke down long ago, but they cannot afford to repair them. Additionally, the algorithm assumes that higher electricity and water consumption indicates that a family is less vulnerable. However, poorer households in Jordan in many cases actually have higher consumption—a 2020 survey showed that almost 75% of low- to middle-income households lived in apartments with poor thermal insulation.

Furthermore, this algorithmic system is designed on the basis of a single assessment of socioeconomic circumstances at a fixed point in time. But poverty is not static; people’s lives change and their level of need fluctuates. Another challenge is the unpredictability of aid: in this conversation with CHRGJ’s Digital Welfare State and Human Rights team, Hiba shared the story of a new mother who had been suddenly and unexpectedly cut off from the Takaful program, precisely when she was most in need.

At a broader level, introducing an algorithmic system such as this can also exacerbate information asymmetries. HRW’s report highlights issues concerning opacity in algorithmic decision-making—both for government officials themselves and those subject to the algorithm’s decisions—such that it is more difficult to understand how decisions are being made within this system.

Recommendations to improve the Takaful program

Given these wide-ranging implications, HRW’s primary recommendation is to move away from poverty targeting algorithms and toward universal social protection, which it estimates could cost under 1% of the country’s GDP. This could be funded through existing resources, tackling tax avoidance, implementing progressive taxes, and leveraging the World Bank’s influence to guide governments towards sustainable solutions.

When asked during the conversation whether the algorithm used in the Takaful program could be improved, Hiba noted that a technically perfect algorithm executing a flawed policy will still lead to negative outcomes. She argued that it is the policy itself, the attempt to rank people from least poor to poorest, that is prone to exclusion errors, and warned that technology may be shiny, promising to make targeting accurate, effective, and efficient, but can also be a distraction from the policy issues at hand.

Thus, instead of flattening economic realities and excluding people who are, in reality, in immense need, Hiba recommended that support be provided inclusively and universally: to everyone during vulnerable stages of life, regardless of income and wealth. Rather than using technology to enable ever-more precise targeting, Jordan should embrace solutions that allow for more universal social protection.

Rebecca Kahn, JD program, NYU School of Law; Human Rights Scholar at the Digital Welfare State & Human Rights project. Her research interests relate to responsible AI governance, digital rights, and consumer protection. She previously worked in the U.S. House and Senate as a legislative staffer.

TECHNOLOGY & HUMAN RIGHTS

Singapore’s “smart city” initiative: one step further in the surveillance, regulation and disciplining of those at the margins

Singapore’s smart city initiative creates an interconnected web of digital infrastructures that promises citizens safety, convenience, and efficiency. But the smart city is experienced differently by individuals at the margins, particularly migrant workers, who are experimented upon at the forefront of technological innovation.

On February 23, 2022, we hosted the tenth event of the Transformer States Series on Digital Government and Human Rights, titled “Surveillance of the Poor in Singapore: Poverty in ‘Smart City’.” Christiaan van Veen and Victoria Adelmant spoke with Dr. Monamie Bhadra Haines about the deployment of surveillance technologies as part of Singapore’s “smart city” initiative. This blog outlines the key themes discussed during the conversation.

The smart city in the context of institutionalized racial hierarchy

Singapore has consistently been hailed as the world’s leading smart city. For a decade, the city-state has been covering its territory with ubiquitous sensors and integrated digital infrastructures with the aim, in the government’s words, of collecting information on “everyone, everything, everywhere, all the time.” But these smart city technologies are layered on top of pre-existing structures and inequalities, which mediate how these innovations are experienced.

One such structure is an explicit racial hierarchy. As an island nation with a long history of multi-ethnicity, Singapore has witnessed significant migration from Southern China, the Malay Peninsula, India, and Bangladesh. Borrowing from the British model of race-based regulation, the post-colonial state governs this multi-ethnicity through the explicit adoption of four racial categories (Chinese, Malay, Indian, and Others, or “CMIO” for short), which are institutionalized within immigration policies, housing, education, and employment. As a result, while migrant workers from South and Southeast Asia are the backbone of Singapore’s blue-collar labor market, they occupy the bottom tier of the racial hierarchy, are subject to stark precarity, and have become the “objects” of extensive surveillance by the state.

The promise of the smart city

Singapore’s smart city initiative is “sold” to the public through narratives of economic opportunity and job creation in the knowledge economy, improved environmental sustainability, and increased efficiency and convenience. By collecting and interconnecting all kinds of “mundane” data, such as electricity usage patterns, data from increasingly intrusive IoT products, and geo-location and mobility data, into centralized databases, smart cities are said to provide more safety and convenience. Singapore’s hyper-modern, technologically advanced society promises efficient and seamless public services, and many view the constant technology-driven surveillance and the loss of a few civil liberties as a small price to pay for such efficiency.

Further, the collection of large quantities of data from individuals is promised to enable citizens to be better connected with the government; while governments’ decisions, in turn, will be based upon the purportedly objective data from sensors and devices, thereby freeing decision-making from human fallibility and rendering it more neutral.

The realities: disparate impacts of smart city surveillance on migrant workers

However, smart cities are not merely economic or technological endeavors but techno-social assemblages that create, and impact, different publics differently. As Monamie noted, specific imaginations and imagery of Singapore as a hyper-modern, interconnected, and efficient smart city can obscure certain types of racialized physical labor, such as the domestic labor of female Southeast Asian migrant workers.

Migrant workers are uniquely impacted by increasing digitalization and datafication in Singapore. For years, these workers have been housed in dormitories with occupancy often exceeding capacity, located in the literal “margins” or outskirts of the city: migrant workers have long been physically kept separate from the rest of Singapore’s population within these dormitory complexes. They are stereotyped as violent or frequently inebriated, and the dormitories have for years been surveilled through digital technologies including security cameras, biometric sensors, and data from social media and transport services.

The pandemic highlighted and intensified the disproportionate surveillance of migrant workers within Singapore. Layered on top of the existing technological surveillance of migrants’ dormitories, a surveillance assemblage for COVID-19 contact tracing was created. Measures in the name of public health were deployed to carefully surveil these workers’ bodies and movements. Migrant workers became “objects” of technological experimentation as they were required to use a multitude of new mobile-based apps that integrated immigration data and work permit data with health data (such as body temperature and oximeter readings) and Covid-19 contact tracing data. The permissions required by these apps were also quite broad – including access to Bluetooth services and location data. All the data was stored in a centralized database.

Even though surveillant contact-tracing technologies were later rolled out across Singapore and normalized around the world, the important point here is that these systems were deployed exclusively on migrant workers first. Some apps, Monamie pointed out, were indeed only required by migrant workers, while citizens did not have to use them. This use of interconnected networks of surveillance technologies thus highlights the selective experimentation that underpins smart city initiatives. While smart city initiatives are, by their nature, premised on large-scale surveillance, we often see that policies, apps, and technologies are tried on individuals and communities with the least power first, before spilling out to the rest of the population. In Singapore, the objects of such experimentation are migrant workers who occupy “exceptional spaces” – of being needed to ensure the existence of certain labor markets, but also of needing to be disciplined and regulated. These technological initiatives, in subjecting specific groups at the margins to more surveillance than the rest of the population and requiring them to use more tech-based tools than others, serve to exacerbate the “othering” and isolation of migrant workers.

Forging eddies of resistance

While Monamie noted that “activism” is “still considered a dirty word in Singapore,” there have been some localized efforts to challenge some of the technologies within the smart city, in part due to the intensification of surveillance spurred by the pandemic. These efforts, and a rapidly-growing recognition of the disproportionate targeting and disparate impacts of such technologies, indicate that the smart city is also a site of contestation with growing resistance to its tech-based tools.

March 18, 2022. Ramya Chandrasekhar, LLM program, NYU School of Law. Her research interests relate to data governance, critical infrastructure studies, and critical theory. She previously worked with technology policy organizations and at a reputed law firm in India.

TECHNOLOGY & HUMAN RIGHTS

Chosen by a Secret Algorithm: Colombia’s top-down pandemic payments

The Colombian government was applauded for delivering payments to 2.9 million people in just two weeks during the pandemic, thanks to a big-data-driven approach. But this new approach represents a fundamental change in social policy, one that shifts away from political participation and from a notion of rights.

On Wednesday, November 24, 2021, the Digital Welfare State and Human Rights Project hosted the ninth episode in the Transformer States conversation series on Digital Government and Human Rights, an event entitled “Chosen by a secret algorithm: A closer look at Colombia’s Pandemic Payments.” Christiaan van Veen and Victoria Adelmant spoke with Joan López, researcher at the Global Data Justice Initiative and at the Colombian NGO Fundación Karisma, about Colombia’s pandemic payments and their reliance on data-driven technologies and prediction. This blog highlights some core issues related to taking a top-down, data-driven approach to social protection.

From expert interviews to a top-down approach

The System of Identification of Potential Beneficiaries of Social Programs (SISBEN, by its Spanish acronym) was created to assist in the targeting of social programs in Colombia. It classifies the Colombian population along a spectrum of vulnerability through the collection of information about households, including health data, family composition, access to social programs, financial information, and earnings. This data is collected through nationwide interviews conducted by experts. A simple algorithm then scores each household on a scale from 0 to 100, with 0 as the least prosperous and 100 as the most prosperous. SISBEN therefore aims to identify and rank “the poorest of the poor.” This centralized classification system is used by 19 different social programs to determine eligibility: each program chooses its own cut-off score as its threshold for eligibility.
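
In outline, the threshold mechanism can be sketched as follows. This is an illustration only: the program names and cut-off values are invented, and the actual cut-offs are chosen by each Colombian program individually.

```python
# Illustrative sketch of threshold-based targeting on a shared 0-100 score.
# Program names and cut-off values are invented for this example.
PROGRAM_CUTOFFS = {
    "health_subsidy": 40.0,    # hypothetical: eligible if score <= 40
    "housing_support": 30.0,
    "cash_transfer": 25.0,
}

def eligible_programs(sisben_score: float) -> list[str]:
    """0 = least prosperous, 100 = most prosperous; each program applies
    its own cut-off to the same centralized score."""
    return [name for name, cutoff in PROGRAM_CUTOFFS.items()
            if sisben_score <= cutoff]

print(eligible_programs(28.0))   # ['health_subsidy', 'housing_support']
```

The design choice to notice is visible even in this toy version: a single centralized score, computed once, silently determines access to many unrelated programs.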

But in 2016, the National Development Office, the Colombian entity in charge of SISBEN, changed the calculation used to identify the poorest. It introduced a new, secret algorithm that profiles people based on their predicted income generation capacity. Experts collecting data for SISBEN through interviews had previously looked at the realities of people’s conditions: if a person had access to basic services such as water, sanitation, education, health, and employment, the person was not deemed poor. The new system instead builds detailed profiles of what a person could earn, rather than documenting what a person has. It seeks, through modelling, to predict households’ situations rather than to record beneficiaries’ realities.

A new approach to social policy

During the pandemic, the government launched a new system of payments called Ingreso Solidario (“solidarity income”). It provided monthly payments to people not covered by any existing social program relying on SISBEN; the ultimate goal was to send money to 2.9 million people who needed assistance due to the crisis caused by COVID-19. Ingreso Solidario was, in some ways, very effective. People did not have to apply: those selected as eligible automatically received a payment. Many received the money directly into their bank accounts, and payments were made very rapidly, within just a few weeks. Moreover, Ingreso Solidario was an unconditional transfer, which did not condition receipt of the money on the fulfillment of any requirements.

But Ingreso Solidario was based on a new approach to social policy, driven by technology and data sharing. The government entered into agreements with private companies, including Experian and TransUnion, to access their databases, and further agreements were made between different government agencies and departments. Through data-sharing arrangements across 34 public and private databases, the government cross-checked the information provided in SISBEN interviews against dozens of databases to find inconsistencies and exclude anyone deemed not to require social assistance. In relying on cross-checking databases to “find” people in need, this approach depends heavily on enormous data collection, and it deepens the government’s reliance on the private sector.
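
The cross-checking logic described here can be sketched roughly as follows. The field names, record formats, and exact-match rule are all assumptions for illustration; the actual matching criteria were never disclosed.

```python
# Rough sketch of cross-checking declared data against external databases.
# Field names and the exact-match rule are illustrative assumptions.
from typing import Any

def find_inconsistencies(declared: dict[str, Any],
                         external_records: list[dict[str, Any]]) -> list[str]:
    """Return the fields where any external database contradicts the
    applicant's declared information."""
    flagged = []
    for record in external_records:
        for field, value in record.items():
            if field in declared and declared[field] != value:
                flagged.append(field)
    return flagged

declared = {"monthly_income": 0, "owns_vehicle": False}
external = [
    {"monthly_income": 850_000},   # e.g. a credit bureau or payroll record
    {"owns_vehicle": True},        # e.g. a vehicle registry
]
# Any flagged inconsistency can mean exclusion from the beneficiary list.
print(find_inconsistencies(declared, external))
# ['monthly_income', 'owns_vehicle']
```

Even this toy version shows why the approach is brittle: a stale record in any one of dozens of databases is enough to flag, and potentially exclude, a household.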

The implications of this new approach

This new approach to social policy, as implemented through Ingreso Solidario, has fundamental implications. First, the system is difficult to challenge. The algorithm used to profile vulnerability, predict income generating capacity, and assign scores to people living in poverty is confidential. The government has consistently argued that disclosing information about the algorithm would lead to a macroeconomic crisis, because if people knew how the system worked, they would try to game it. Additionally, SISBEN has been normalized: though eligibility for social programs could be assessed in many other ways, the public accepts as natural and inevitable the government’s arbitrary reliance on numerical scoring and prediction. This normalization, combined with the lack of transparency, means this new approach to determining eligibility for social programs has gone largely uncontested.

Second, in adopting an approach which relies on cross-checking and analyzing data, the Ingreso Solidario is designed to avoid any contestation in the design and implementation of the algorithm. This is a thoroughly technocratic endeavor. The idea is to use databases and avoid going to, and working with, the communities. The government was, in Joan’s words, “trying to control everything from a distance” to “avoid having political discussions about who should be eligible.” There were no discussions and negotiations between the citizens and the Government to jointly address the challenges of using this technology to target poor people. Decisions about who the extra 2.9 million beneficiaries should be were taken unilaterally from above. As Joan argued, this was intentional: “The mindset of avoiding political discussion is clearly part of the idea of Ingreso Solidario.”

Third, because people were unaware that they were going to receive money, those who received a payment felt like they had won the lottery. Thus, as Joan argued, people saw this money not “as an entitlement, but just as a gift that this person was lucky to get.” This therefore represents a shift away from a conception of assistance as something we are entitled to by right. But in re-centering the notion of rights, we are reminded of the importance of taking human rights seriously when analyzing and redesigning these kinds of systems. Joan noted that we need to move away from an approach of deciding what poverty is from above, and instead move towards working with communities. We must use fundamental rights as guidance in designing a system that will provide support to those in poverty in an open, transparent, and participatory manner which does not seek to bypass political discussion.

María Beatriz Jiménez, LLM program, NYU School of Law. Her research focuses on digital rights. She previously worked for the Colombian government in the Ministry of Information and Communication Technologies and the Ministry of Trade.

TECHNOLOGY & HUMAN RIGHTS

Pilots, Pushbacks, and the Panopticon: Digital Technologies at the EU’s Borders

The European Union is increasingly introducing digital technologies into its border control operations. But conversations about these emerging “digital borders” are often silent about the significant harms experienced by those subjected to these technologies, their experimental nature, and their discriminatory impacts.

On October 27, 2021, we hosted the eighth episode in our Transformer States series on Digital Government and Human Rights, an event entitled “Artificial Borders? The Digital and Extraterritorial Protection of ‘Fortress Europe.’” Christiaan van Veen and Ngozi Nwanta interviewed Petra Molnar about the European Union’s introduction of digital technologies into its border control and migration management operations. This blog post outlines key themes from the conversation.

Digital technologies are increasingly central to the EU’s efforts to curb migration and “secure” its borders. Against a background of growing violent pushbacks, surveillance technologies such as unpiloted drones and aerostats with thermo-vision sensors are being deployed at the borders. The EU-funded “ROBORDER” project aims to develop “a fully-functional autonomous border surveillance system with unmanned mobile robots.” Refugee camps on the EU’s borders, meanwhile, are being turned into a “surveillance panopticon,” as the adults and children living within them are constantly monitored by cameras, drones, and motion-detection sensors. Technologies also mediate immigration and refugee determination processes, from automated decision-making to social media screening and a pilot AI-driven “lie detector.”

In this Transformer States conversation, Petra argued that technologies are enabling a “sharpening” of existing border control policies. As discussed in her excellent report entitled “Technological Testing Grounds,” completed with European Digital Rights and the Refugee Law Lab, new technologies are not only being used at the EU’s borders, but also to surveil and control communities on the move before they reach European territory. The EU has long practiced “border externalization,” where it shifts its border control operations ever-further away from its physical territory, partly through contracting non-Member States to try to prevent migration. New technologies are increasingly instrumental in these aims. The EU is funding African states’ construction of biometric ID systems for migration control purposes; it is providing cameras and surveillance software to third countries to prevent travel towards Europe; and it supports efforts to predict migration flows through big data-driven modeling. Further, borders are increasingly “located” on our smartphones and in enormous databases as data-based risk profiles and pre-screening become a central part of the EU’s border control agenda.

Ignoring human experience and impacts

But all too often, discussions about these technologies are sanitized and depoliticized. People on the move are viewed as a security problem, and policymakers, consultancies, and the private sector focus on the “opportunities” presented by technologies in securitizing borders and “preventing migration.” The human stories of those who are subjected to these new technological tools and the discriminatory and deadly realities of “digital borders” are ignored within these technocratic discussions. Some EU policy documents describe the “European Border Surveillance System” without mentioning people at all.

In this interview, Petra emphasized these silences. She noted that “human experience has been left to the wayside.” First-person accounts of the harmful impacts of these technologies are not deemed to be “expert knowledge” by policymakers in Brussels, but it is vital to expose the human realities and counter the sanitized policy discussions. Those who are subjected to constant surveillance and tracking are dehumanized: Petra reports that some are left feeling “like a piece of meat without a life, just fingerprints and eye scans.” People are being forced to take ever-deadlier routes to avoid high-tech surveillance infrastructures, and technology-enabled interdictions and pushbacks are leading to deaths. Further, difference in treatment is baked into these technological systems, as they enable and exacerbate discriminatory inferences along racialized lines. As UN Special Rapporteur on Racism E. Tendayi Achiume writes, “digital border technologies are reinforcing parallel border regimes that segregate the mobility and migration of different groups” and are being deployed in racially discriminatory ways. Indeed, some algorithmic “risk assessments” of migrants have been argued to represent racial profiling.

Policy discussions about “digital borders” also do not acknowledge that, while the EU spends vast sums on technologies, the refugee camps at its borders have neither running water nor sufficient food. Enormous investment in digital migration management infrastructures is being “prioritized over human rights.” As one man commented, “now we have flying computers instead of more asylum.”

Technological experimentation and pilot programs in “gray zones”

Crucially, these developments are occurring within largely-unregulated spaces. A central theme of this Transformer States conversation—mirroring the title of Petra’s report, “Technological Testing Grounds”—was the notion of experimentation within the “gray zones” of border control and migration management. Not only are non-citizens and stateless persons accorded fewer rights and protections than EU citizens, but immigration and asylum decision-making is also an area of law which is highly discretionary and contains fewer legal safeguards.

This low-rights, high-discretion environment makes it ripe for testing new technologies. This is especially the case in “external” spaces far from European territory, which are subject to even less regulation. Projects that would not be allowed elsewhere are being tested on populations who are literally at the margins, as refugee camps become testing zones. The abovementioned “lie detector,” whereby an “avatar” border guard flagged “biomarkers of deceit,” was “merely” a pilot program. It has since been fiercely criticized, including by the European Parliament, and challenged in court.

Experimentation is deliberately occurring in these zones because refugees and migrants have limited opportunities to challenge it. The UN Special Rapporteur on Racism has noted that digital technologies in this area are therefore “uniquely experimental.” This has parallels with our wider work, in which we consistently see governments and international organizations piloting new technologies on marginalized and low-income communities. In a previous Transformer States conversation, we discussed Australia’s Cashless Debit Card system, in which technologies were deployed upon Aboriginal people through a pilot program. In the UK, radical reform of the welfare system through digitalization was also piloted, with low-income groups as test subjects, to “catastrophic” effect.

Where these developments are occurring within largely-unregulated areas, human rights norms and institutions may prove useful. As Petra noted, the human rights framework requires courts and policymakers to focus upon the human impacts of these digital border technologies, and highlights the discriminatory lines along which their effects are felt. The UN Special Rapporteur on Racism has outlined how human rights norms require mandatory impact assessments, moratoria on surveillance technologies, and strong regulation to prevent discrimination and harm.

November 23, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.

TECHNOLOGY & HUMAN RIGHTS

False Promises and Multiple Exclusion: Summary of Our RightsCon Event on Uganda’s National Digital ID System 

Despite its promotion as a tool for social inclusion and development, Uganda’s National Digital ID System is motivated primarily by national security concerns. As a result, the ID system has generated both direct and indirect exclusion, particularly affecting women and older persons.

On June 10, 2021, the Center for Human Rights and Global Justice at NYU School of Law co-hosted the panel “Digital ID: what is it good for? Lessons from our research on Uganda’s identity system and access to social services” as part of RightsCon, the leading summit on human rights in the digital age. The panelists included Salima Namusobya, Executive Director of the Initiative for Social and Economic Rights (ISER); Dorothy Mukasa, Team Leader of Unwanted Witness; Grace Mutung’u, Research Fellow at the Centre for IP and IT Law at Strathmore University; and Christiaan van Veen, Director of the Digital Welfare State & Human Rights Project at the Center. This blog summarizes highlights of the panel discussion.

Uganda’s national digital ID system, known as Ndaga Muntu, was introduced in 2014 through a mass registration campaign. The government aimed to collect the biographic and biometric information, including photographs and fingerprints, of every adult in the country; to record this data in a centralized database known as the National Identity Register; and to issue a national ID card and unique ID number to each adult. Since its introduction, having a national ID has become a prerequisite for accessing a whole host of services, from registering for a SIM card and opening a bank account to accessing health services and social protection schemes.

This linkage of Ndaga Muntu to public services has raised significant human rights concerns and is locking millions of people in Uganda out of critical services. Seven years after its inception, it is clear that the national digital ID is a tool for exclusion rather than inclusion. Drawing on the joint report by the Center, ISER, and Unwanted Witness, this event made clear that Ndaga Muntu was grounded in false promises and is resulting in multiple forms of exclusion.

The False Promise of Inclusion

The Ugandan government argued that this digital ID system would enhance social inclusion by allowing Ugandans to prove their identity more easily. Having this proof of identity would facilitate access to public services such as healthcare, enable people to sign up for private services such as bank accounts, and allow people to move freely throughout Uganda. The same rhetoric of inclusion was used to sell Aadhaar, India’s digital ID system, to the Indian public.

But for many Ugandans this was a false promise. From the very outset, Ndaga Muntu was developed chiefly as a tool for national security. The powerful Ugandan military had long pushed for the collection of sensitive identity information and biometric data: in the context of a volatile region, a centralized information database is appealing because of its ability to verify identity and indicate who is “really Ugandan” and who is not. Therefore, the national ID project was housed in the Ministry of Internal Affairs, overseen by prominent members of the Ugandan People’s Defense Force, and designed to serve only those who succeeded in completing a rigorous citizenship verification process.

The panelist from Kenya, Grace Mutung’u, shared how Kenya’s hundred-year-old national identification system was similarly rooted in a colonial regime that focused on national security and exclusion. Those design principles created a system that sought only to “empower the already empowered” and not to extend benefits beyond already-privileged constituencies. The result in both Kenya and Uganda was the same: digital ID systems that are designed to ensure that certain individuals and groups remain excluded from political, economic, and social life.

Proliferating Forms of Exclusion

Beyond the fact that Ndaga Muntu was designed to directly exclude anyone not entitled to access public services, those who are entitled are also being excluded in the millions. For ordinary Ugandans, accessing Ndaga Muntu is a nightmarish process rife with problems every step of the way. These problems, such as corruption, incorrect data entry, and technical errors, have impeded Ugandans’ access to the ID. Vulnerable populations who rely on social protection programs that require proof of ID bear the brunt of such errors. For example, one older woman was told that the national ID registration system could not capture her picture because of her grey hair. Other elderly Ugandans have had trouble with fingerprint scanners that could not capture fingerprints worn away from years of manual labor.

The many individuals who have not succeeded in registering for Ndaga Muntu are therefore being left out of the critical services which are increasingly linked to the ID. At least 50,000 of the 200,000 eligible persons over the age of 80 in Uganda were unable to access potentially lifesaving benefits such as the Senior Citizens’ Grant cash transfer program. Women have been similarly disproportionately impacted by the national ID requirement; for instance, pregnant women have been refused services by healthcare workers for failing to provide ID. To make matters worse, ID requirements are increasingly ubiquitous in Uganda: proof of ID is often required to book transportation, to vote, to access educational services, healthcare, social protection grants, and food donations. Having a national ID has become necessary for basic survival, especially for those who live in extreme poverty.

Digital ID systems should not prevent people from living their lives and using basic services that ought to be universally accessible, particularly when such systems are justified on the basis that they will improve access to services. Not only was Ndaga Muntu’s promise of inclusion false, but the rollout of the system has also been incompetent and faulty, leading to even greater exclusion. The profound impact of this double discrimination in Uganda demonstrates that such digital ID systems, and their impacts on social and economic rights, warrant greater and urgent attention from the human rights community at large.

June 12, 2021. Madeleine Matsui, JD program, Harvard Law School; intern with the Digital Welfare State and Human Rights Project.

TECHNOLOGY & HUMAN RIGHTS

Social Credit in China: Looking Beyond the “Black Mirror” Nightmare

The Chinese government’s Social Credit program has received much attention from Western media and academics, but misrepresentations have led to confusion over what it truly entails. Such mischaracterizations unhelpfully distract from the dangers and impacts of the realities of Social Credit.

On March 31, 2021, Christiaan van Veen and I hosted the sixth event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Dr. Chenchen Zhang, Assistant Professor at Queen’s University Belfast, to explore the much-discussed but little-understood Social Credit program in China.

Though the Chinese government’s Social Credit program has received significant attention from Western media and rights organizations, much of this discussion has misrepresented the program. Social Credit is imagined as a comprehensive, nationwide system in which every action is monitored and a single score is assigned to each individual, much like a Black Mirror episode. This is in fact quite far from reality, but the image has become entrenched in the West, as media discussions and some academic debate have focused on abstracted portrayals of what Social Credit could be. In addition, the widely discussed voluntary, private systems run by corporations, such as Alipay’s Sesame Credit or Tencent’s WeChat score, are often mistakenly conflated with the government’s Social Credit program.

Jeremy Daum has argued that these widespread misrepresentations of Social Credit serve to distract from examining “the true causes for concern” within the systems actually in place. They also distract from similar technological developments occurring in the West, which seem acceptable by comparison. An accurate understanding is required to acknowledge the human rights concerns that this program raises.

The crucial starting point here is that the government’s Social Credit system is a heterogeneous assemblage of fragmented and decentralized systems. Central government, specific government agencies, public transport networks, municipal governments, and others are experimenting with diverse initiatives with different aims. Indeed, xinyong, the term which is translated as “credit” in Social Credit, encompasses notions of financial creditworthiness, regulatory compliance, and moral trustworthiness, therefore covering programs with different visions and narratives. A common thread across these systems is a reliance on information-sharing and lists to encourage or discourage certain behaviors, including blacklists to “shame” wrongdoers and “redlists” publicizing those with a good record.

One national-level program, the Joint Rewards and Sanctions mechanism, shares information across government agencies about companies that have violated regulations. Once a company is included on one agency’s blacklist for having, for example, failed to pay migrant workers’ wages, other agencies may also sanction that company and refuse to grant it a license or contract. Blacklisting mechanisms also affect individuals: the Supreme People’s Court maintains a list of shixin (dishonest) people who default on judgments. Individuals on this list are prevented from accessing “non-essential consumption,” including travel by plane or high-speed train, and their names are published, adding an element of public shaming.

Other local or sector-specific “credit” programs aim at disciplining individual behavior: anyone caught smoking on the high-speed train is placed on the railway system’s list of shixin persons and subjected to a six-month ban from taking the train. Localized “citizen scoring” schemes are also being piloted in a dozen cities. Currently, these resemble “club membership” schemes with minor benefits and have low sign-up rates; some have been very controversial. In 2019, in response to controversies, the National Development and Reform Commission issued guidelines stating that citizen scores must only be used to incentivize behavior, not to sanction people or limit access to basic public services. At present, each of the systems described here is separate from the others.
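
The joint-sanctions logic can be illustrated with a short sketch. The agency names, the offending company, and the license rule are invented; the point is only the mechanism, in which listing by one agency has consequences at all of them.

```python
# Toy sketch of a "joint rewards and sanctions" blacklist-sharing mechanism.
# Agency and company names are invented for illustration.
BLACKLISTS = {
    "labor_bureau": {"Acme Construction"},   # e.g. unpaid migrant wages
    "tax_authority": set(),
    "procurement_office": set(),
}

def is_blacklisted(entity: str) -> bool:
    """An entity on any one agency's list is flagged for all agencies."""
    return any(entity in blacklist for blacklist in BLACKLISTS.values())

def may_grant_license(entity: str) -> bool:
    # Under joint sanctions, each agency checks the shared picture,
    # not just its own records.
    return not is_blacklisted(entity)

print(may_grant_license("Acme Construction"))   # False
```

The mechanism’s reach is the concern: a single listing propagates automatically, so the severity of the original violation and the severity of the combined consequences can come apart.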

But even where generalizations and mischaracterizations of Social Credit are dispelled, many aspects nonetheless raise significant concerns. Such systems will, of course, worsen issues surrounding privacy, chilling effects, discrimination, and disproportionate punishment. These have been explored at length elsewhere, but this conversation with Chenchen raised additional important issues.

First, a stated objective behind the use of blacklists and shaming is the need to encourage compliance with existing laws and regulations, since non-compliance undermines market order. This is not a unique approach: the US Department of Labor names and shames corporations that violate labor laws, and the World Bank has a similar mechanism. But the laws which are enforced through Social Credit exist in and constitute an extremely repressive context, and these mechanisms are applied to individuals. An individual can be arrested for protesting labor conditions or for speaking about certain issues on social media, and systems like the People’s Court blacklist amplify the consequences of these repressive laws. Mechanisms which “merely” seek to increase legal compliance are deeply problematic in this context.

Second, as with so many of the digital government initiatives discussed in the Transformer States series, Social Credit schemes exhibit a technological solutionism that renders invisible the causes of the problems they seek to address. Non-payment of migrant workers’ wages, for example, is a legitimate issue that must be tackled. But in turning to digital solutions such as an app that “scores” firms based on their record of wage payments, a depoliticized technological fix is promised for systemic problems. In the process, the structural reasons behind migrant workers’ difficulties in accessing their wages, including a differentiated citizenship regime that denies them equal access to social provisions, are obscured.

Separately, there are disparities in how individuals in different parts of the country are affected by Social Credit. Around the world, governments’ new digital systems are consistently trialed on the poorest or most vulnerable groups: for example, smartcard technology for quarantining benefit income in Australia was first introduced within indigenous communities. Similarly, experimentation with Social Credit systems is unequally targeted, especially on a geographical basis. There is a hierarchy of cities in China with provincial-level cities like Beijing at the top, followed by prefectural-level cities, county-level cities, then towns and villages. A pattern is emerging whereby smaller or “lower-ranked” cities have adopted more comprehensive and aggressive citizen scoring schemes. While Shanghai has local legislation that defines the boundaries of its Social Credit scheme, less-known cities seeking to improve their “branding” are subjecting residents to more arbitrary and concerning practices.

Of course, the biggest concern surrounding Social Credit relates to how it may develop in the future. While this is currently a fragmented landscape of disparate schemes, the worry is that these may be consolidated. Chenchen stated that a centralized, nationwide “citizen scoring” system remains unlikely and would not enjoy support from the public or the Central Bank which oversees the Social Credit program. But it is not out of the question that privately-run schemes such as Sesame Credit might eventually be linked to the government’s Social Credit system. Though the system is not (yet) as comprehensive and coordinated as has been portrayed, its logics and methodologies of sharing ever-more information across siloes to shape behaviors may well push in this direction, in China and elsewhere.

April 20, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

TECHNOLOGY & HUMAN RIGHTS

Locked In! How the South African Welfare State Came to Rely on a Digital Monopolist

The South African Social Security Agency provides “social grants” to 18 million citizens. In using a single private company with its own biometric payment system to deliver grants, the state became dependent on a monopolist and exposed recipients to debt and financial exploitation.

On February 24, 2021, the Digital Welfare State and Human Rights Project hosted the fifth event in its Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. In this conversation, Christiaan van Veen and Victoria Adelmant explored the impacts of the outsourcing at the heart of South Africa’s social security system with Lynette Maart, National Director of the South African human rights organization the Black Sash. This blog summarizes the conversation.

Delivering the right to social security

Section 27(1)(c) of the 1996 South African Constitution guarantees everyone the “right to have access” to social security. In the early years of the post-Apartheid era, the country’s nine provincial governments administered social security grants to fulfill this constitutional social right. In 2005, the South African Social Security Agency (SASSA) was established to consolidate these programs. The social grant system has expanded significantly since then, with about 18 million of South Africa’s roughly 60 million citizens receiving grants. The system’s growth and coverage has been a source of national pride. In 2017, the Constitutional Court remarked that the “establishment of an inclusive and effective program of social assistance” is “one of the signature achievements” of South Africa’s constitutional democracy.

Addressing logistical challenges through outsourcing

Despite SASSA’s progress in expanding the right to social security, its grant programs remain constrained by the country’s physical, digital, and financial infrastructure. Millions of impoverished South Africans live in rural areas lacking proper access to roads, telecommunications, internet connectivity, or banking, which makes the delivery of cash transfers difficult and expensive. Instead of investing in its own cash transfer delivery capabilities, SASSA awarded an exclusive contract in 2012 to Cash Paymaster Services (CPS), a subsidiary of the South African technology company Net1, to administer all of SASSA’s cash transfers nationwide. This made CPS a welfare delivery monopolist overnight.

SASSA selected CPS in large part because its payment system, which included a smart card with an embedded fingerprint-based chip, could reach the poorest and most remote parts of the country. To obtain a banking license, CPS partnered with Grindrod Bank and opened 10 million new bank accounts for SASSA recipients. Cash transfers could be made via the CPS payment system to smart cards without the need for internet or electricity. CPS rolled out a network of 10,000 places where social grant payments could be withdrawn, known as “paypoints,” nationwide. Recipients were never further than 5km from a paypoint.

Thanks to its position as sole deliverer of SASSA grants and its autonomous payment system, CPS also had unique access to the financial data of millions of the poorest South Africans. Other Net1 subsidiaries, including Moneyline (a lending group), Smartlife (a life insurance provider), and Manje Mobile (a mobile money service), were able to exploit this “customer base” to cross-sell services, and were soon marketing loans, insurance, and airtime to SASSA recipients. These “customers” were particularly attractive because fees could be deducted automatically from SASSA grants the very moment they were paid out on CPS’s infrastructure. Recipients became a lucrative, practically risk-free market for lenders and other service providers due to these immediate automatic deductions from government transfers. The Black Sash found that women were going to paypoints at 4:30am in their pajamas to try to withdraw their grants before deductions consumed nearly the entire payment.
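
The deduct-at-source dynamic can be shown with simple arithmetic. The grant amount and fee items below are hypothetical; they only illustrate the ordering, in which deductions are taken the moment the grant lands, before the recipient can withdraw anything.

```python
# Hypothetical arithmetic of automatic deductions at the moment of payment.
# Amounts and fee names are invented for illustration.
grant_amount = 1800.00   # hypothetical monthly grant, in rand

deductions = {
    "microloan_repayment": 450.00,   # e.g. a Moneyline-style loan installment
    "funeral_insurance": 120.00,
    "airtime_bundle": 60.00,
}

# Deductions are applied before the recipient can withdraw any cash.
remaining = grant_amount - sum(deductions.values())
print(f"Available to withdraw: R{remaining:.2f}")   # Available to withdraw: R1170.00
```

This ordering is what made recipients such a low-risk market: the service providers were, in effect, paid first.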

Through its “Hands off Our Grants” advocacy campaign, the Black Sash showed that these deductions were often unauthorized and unlawful. Lynette told the story of Ma Grace, an elderly pensioner who was sold airtime even though she did not own a mobile phone, and whose avenues to recourse were all but blocked off. She explained that telephone helplines were not free but required airtime (which poor people often did not have), and that they “deflected calls” and exploited language barriers to ensure customers “never really got an answer in the language of their choice.”

“Lockin” and the hollowing out of state capacity

Net1’s exploitation of SASSA beneficiaries is only part of the story. This is also about multidimensional governmental failure stemming from SASSA’s outright dependence on CPS. As academic Keith Breckenridge has written, the Net1/SASSA relationship involves “vendor lockin,” a situation in which “the state must confront large, perhaps unsustainable, switching costs to break free of its dependence on the company for grant delivery and data processing.” There are at least three key dimensions of this lockin dynamic which were explored in the conversation:

  • SASSA outsourced both cash transfer delivery and program oversight to CPS. CPS’s “foot soldiers” wore several hats: the same person might deliver grant payments at paypoints, field complaints as local SASSA representatives, and sell loans or airtime. Commercial activity and benefits delivery were conflated.
  • The program’s structure resulted in acute regulatory failures. Because CPS (not Grindrod Bank) ultimately delivered SASSA funds to recipients via its payment infrastructure outside the National Payment System, the payments were exempt from normal oversight by banking regulators. Accordingly, the regulators were blind to unauthorized deductions by Net1 subsidiaries from recipients’ payments.
  • SASSA was entirely reliant on CPS and unable to reach its own beneficiaries itself. Though the Constitutional Court declared SASSA’s 2012 contract with CPS unconstitutional due to irregularities in the procurement process, it ruled that the contract should continue as SASSA could not yet deliver the grants without CPS. In 2017, Net1 co-founder and former CEO Serge Belamant boasted that SASSA would “need to use pigeons” to deliver social grants without CPS. While this was an exaggeration, when SASSA finally transitioned to a partnership with the South African Post Office in 2018, it had to reduce the number of paypoints from 10,000 to 1,740. As Lynette observed, SASSA now has a weaker footprint in rural areas. Therefore, rural recipients “bear the costs of transport and banking fees in order to withdraw their own money.”

This story of SASSA, CPS, and social security grants in South Africa shows not only how outsourced digital delivery of welfare can lead to corporate exploitation and stymied access to social rights, but also how reliance on private technologies can induce “lockin” that undermines the state’s ability to perform basic and vital functions. As the Constitutional Court stated in 2017, the exclusive contract between SASSA and CPS led to a situation in which “the executive arm of government admits that it is not able to fulfill its constitutional and statutory obligations to provide for the social assistance of its people.”

March 11, 2021. Adam Ray, JD program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2020. He holds a master’s degree from Yale University and previously worked as the CFO of Songkick.

TECHNOLOGY & HUMAN RIGHTS

Digital Paternalism: A Recap of our Conversation about Australia’s Cashless Debit Card with Eve Vincent

On November 23, 2020, the Center for Human Rights and Global Justice’s Digital Welfare State and Human Rights Project hosted the third virtual conversation in its series “Transformer States: A Conversation Series on Digital Government and Human Rights.” Christiaan van Veen and Victoria Adelmant interviewed Eve Vincent, senior lecturer in the Department of Anthropology at Macquarie University and author of a crucial report on the lived experiences of one of the first Cashless Debit Card trials in Ceduna, South Australia.

The Cashless Debit Card is a debit card currently used in parts of Australia to deliver benefit income to welfare recipients. Vitally, it is a tool of compulsory income management: the card “quarantines” 80% of a recipient’s payment, preventing that portion from being withdrawn as cash and blocking attempted purchases of alcohol or gambling products. It is similar to, and intensifies, a previous scheme of debit card-based income management known as the “Basics Card.” This earlier card was introduced after a 2007 report into child sexual abuse in Indigenous communities in Australia’s Northern Territory, which identified alcoholism, substance abuse, and gambling as major causes of such abuse. Among the measures taken was a requirement that Indigenous communities’ benefit income be received on a Basics Card, which quarantined 50% of benefit payments. The Basics Card was later extended to non-Indigenous welfare recipients, but it remained disproportionately targeted at Indigenous communities.

Following a 2014 report by mining magnate Andrew Forrest on inequality between Indigenous and non-Indigenous groups in Australia, the government launched the Cashless Debit Card to gradually replace the Basics Card. The new card would quarantine 80% of benefit income and block spending wherever alcohol is sold or gambling takes place. Initial trials were again targeted at remote Indigenous areas. The communities in the first trials were presented as parasitic on the welfare state and in crisis with regard to alcohol abuse, assault, and gambling. It was argued that drastic intervention was warranted: the government should step in to take care of these communities because they were unable to look after themselves. Income management would assist in this paternalistic intervention, fostering responsibility and curbing alcoholism and gambling by blocking such purchases. Many of Eve’s research participants found these justifications offensive and infantilizing. The Cashless Debit Card is now being trialed in more populous areas with more non-Indigenous people, and the narrative has shifted: justifications for applying the card to non-Indigenous people focus more on the need to teach financial literacy and budgeting skills.

Beyond the humiliating underlying stereotypes, the Cashless Debit Card itself leaves cardholders feeling stigmatized. While the non-acceptance of Basics Cards at certain shops had led to prominent “Basics Card not accepted here” signs, the Cashless Debit Card was intended to be more subtle. It is integrated with EFTPOS technology, meaning it can theoretically be used in any shop with one of these ubiquitous card-reading devices. EFTPOS terminals in casinos or pubs are blocked, but these establishments can arrange with the government to retain some discretion: a pub can arrange to allow cardholders to pay for food but not alcohol, for example, thereby not excluding them entirely. Despite this purported subtlety, individuals reported feeling anxious about using the card because the technology proved unreliable and inconsistent, accepted one day but not the next. When the card was declined, sometimes seemingly at random, this was deeply humiliating: cardholders would have to gather their shopping and return it to the shelves under the judging gaze of others, potentially people they know.
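To make the mechanism concrete, the following is a minimal, purely illustrative sketch of the kind of rule-based logic described above: a payment split into quarantined and cash portions, and a transaction check that blocks merchant categories unless a merchant has arranged discretion. This is not Indue’s actual implementation; all function and variable names are hypothetical, and the 80/20 split and category rules simply follow the scheme as described in this post.

```python
# Toy model of compulsory income management as described above.
# Illustrative only: NOT Indue's actual system; all names are hypothetical.

QUARANTINED_SHARE = 0.80  # portion of each payment locked to the card
BLOCKED_CATEGORIES = {"alcohol", "gambling"}

def split_payment(amount: float) -> tuple[float, float]:
    """Divide a benefit payment into quarantined and cash-accessible parts."""
    quarantined = amount * QUARANTINED_SHARE
    return quarantined, amount - quarantined

def authorize(merchant_category: str, item_category: str,
              merchant_has_arrangement: bool) -> bool:
    """Approve or decline a card transaction.

    A merchant in a blocked category (e.g., a pub) is declined outright,
    unless it has arranged discretion with the government; in that case,
    only restricted items (alcohol, gambling products) are declined.
    """
    if merchant_category in BLOCKED_CATEGORIES:
        if not merchant_has_arrangement:
            return False
        return item_category not in BLOCKED_CATEGORIES
    return True

# A pub with an arrangement can sell a cardholder food, but not alcohol;
# a pub without one cannot sell them anything.
assert authorize("alcohol", "food", merchant_has_arrangement=True)
assert not authorize("alcohol", "alcohol", merchant_has_arrangement=True)
assert not authorize("alcohol", "food", merchant_has_arrangement=False)
```

Even in this toy form, the sketch shows why reliability matters: any failure in the category lookup or in a merchant’s arrangement record surfaces to the cardholder as an unexplained decline at the till.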

Separately, some cardholders had to use public computers to log into their accounts to check their card’s balance, highlighting the reliance of such schemes on strong digital infrastructure and on individuals’ access to connected devices. But some cardholders were quite positive about the card: there is, of course, a diversity of opinions and experiences. Some found that the card’s fortnightly cycle helped them with budgeting, and considered the app on which they could check their balance a user-friendly and effective budgeting tool.

The Cashless Debit Card scheme is run by a company named Indue, continuing a decades-long trend of outsourcing welfare delivery. Many participants in Eve’s research spoke positively of their experience with Indue, finding staff on helplines to be helpful and efficient. But many objected to the fact that the card is privatized and that profits are being made on the basis of their poverty. The Cashless Debit Card costs AUD 10,000 per participant per year to administer, and many cardholders were outraged that such an expense is outlaid to try to control how they spend their very meager income. Recently, the four biggest banks in Australia and the government-owned Australia Post have been in talks about taking over the management of the scheme. This raises an interesting parallel with South Africa, where social grants were paid through a private provider until, following a scandal regarding the tender process and the financial exploitation of poor grant recipients, public providers stepped in again.

As an anthropologist, Eve takes as a starting point the importance of listening to the people affected and foregrounding their lived experience, an approach that resonates with common human rights research methods. Interestingly, many cardholders used the language of human rights to express indignation about the scheme and what it represents. In a manner reminiscent of Sally Engle Merry’s work on the “vernacularization” of human rights, cardholders invoked human rights in ways quite specific to the Aboriginal Australian context and history. Eve’s research participants often compared the Cashless Debit Card trials to the past, when the wages of Indigenous peoples had been stolen and their access to money was tightly controlled. They referred to that era as the “time before rights,” before equal citizenship rights had been won in legislation. Today, they argued, now that Indigenous communities have rights, this kind of intervention in and control of communities by the government is unacceptable. As one of Eve’s research participants put it, through the Cashless Debit Card the government has “taken away our rights.”

December 4, 2020. Victoria Adelmant, Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

“We are not Data Points”: Highlights from our Conversation on the Kenyan Digital ID System

TECHNOLOGY AND HUMAN RIGHTS

Seeing the Unseen: Inclusion and Exclusion in Kenya’s Digital ID System

On October 28, 2020, the Digital Welfare State and Human Rights Project held a virtual conversation with Nanjala Nyabola, the second in the Transformer States conversation series, on inclusion and exclusion in Kenya’s digital ID system. Nanjala is a writer, political analyst, and activist based in Nairobi, and the author of Digital Democracy, Analogue Politics: How the Internet Era is Transforming Politics in Kenya. Through an energetic and enlightening conversation with Christiaan van Veen and Victoria Adelmant, Nanjala explained the historical context of the Huduma Namba system, Kenya’s latest digital ID scheme, and pointed out a number of pressing concerns with the project.

Kenya’s new digital identity system, known as Huduma Namba, was announced in 2018 and involved the establishment of the Kenyan National Integrated Identity Management System (NIIMS). According to its enabling legislation, NIIMS is intended to be a comprehensive national registration and identity system that promotes efficient delivery of public services by consolidating and harmonizing the law on the registration of persons. This “master database” would, according to the government, become the “single source of truth” on Kenyans. A “Huduma Namba” (a unique identifying number) and a “Huduma Card” (a biometric identity card) would be assigned to Kenyan citizens and residents.

Huduma Namba is the latest in a long series of biometric identity systems in Kenya that began with colonization. Kenya has had a form of mandatory identification under the Kipande system since the Native Registration Ordinance of 1915, under the British colonial government. The Kipande system required Black men over the age of 16 to be fingerprinted and to carry identification that effectively restricted their freedom of movement and association; non-compliance carried the threat of criminal punishment and forced labor. Rather than repealing this “cornerstone of the colonial project” upon independence, the government embraced and further formalized the Kipande system, making it mandatory for all men over 18. New ID systems were introduced, but they always maintained several core elements: biometrics, the collection of ethnic data, and punishment. ID remained necessary for accessing certain buildings, opening bank accounts, buying or selling property, and moving freely both within and out of Kenya. The fact that women were not included in the national ID system until 1978 further reveals the exclusionary nature of such systems, in this instance along gendered lines.

While these ID systems have in theory been mandatory, such that anyone should be able to demand and receive an ID, in practice Kenyans from border communities must be “vetted” before receiving theirs: they must return to their paternal family village to be assessed by the local chief as to their community membership. Given the contested nature of Kenya’s borders, many Kenyans who are ethnically Somali or Maasai can face significant difficulty in proving they are “Kenyan” and obtaining the necessary ID. The vetting process can also significantly delay applications. Nanjala explained that some ethnically Somali Kenyans who struggled to gain access to legal identification, and were therefore excluded from basic entitlements, had resorted to registering as refugees in order to access services.

Given this history of legal identity systems in Kenya, Huduma Namba may offer a promising break from the past and may serve to better include marginalized groups. Huduma Namba is supposed to give a “360 degree legal identity” to Kenyan citizens and residents; it includes women and children; and it is more than just a legal identity: it is also a form of entitlement. For example, Huduma Namba has been said to provide the enabling conditions for universal healthcare, to “facilitate adequate resource allocation,” and to “enable citizens to get government services.” However, Nanjala emphasized that Huduma Namba does not address any of the pre-existing exclusions experienced by certain Kenyans, especially those from border communities. The Huduma Namba is “layered over a history of exclusion,” she noted, and preserves many of the discriminatory practices of previous systems. Because residents must present existing identity documents in order to obtain a Huduma Card, vetting practices will continue to hinder border communities’ access to the new system, and thereby to the services to which Huduma Namba will be tied.

Over the course of the conversation, Nanjala drew on her rich knowledge and experience to highlight what she sees as a number of “red flags” raised by the Huduma Namba project. These speak to the need to properly examine the true motivations behind such digital ID schemes and the actors who promote them. In brief, they are:

  • The false promise of the efficiency argument: that “efficient” technological solutions and data will fix social problems. This argument ignores the social, political, and historical context and complexities of governing a state, and merely perpetuates the “McKinseyfication” of government (the increasing pervasiveness of management consultancy in development). Further, there is little evidence that such “efficient” solutions actually work, as was seen with the Integrated Financial Management Information System (IFMIS) rolled out in Kenya in 2013. Such arguments also divert attention from examining why problems such as poor infrastructure, healthcare, or education systems have arisen or have not been addressed. Nanjala noted that the ongoing COVID-19 pandemic has made these risks clear: while the Kenyan government has spent over $6 million on the Huduma Namba system, the country has only 518 ICU beds.
  • The fact that the government is relying on threats and intimidation to “encourage” citizens to register for Huduma Namba. Nanjala posited that if a government is offering citizens a real service or benefit, it should be able to articulate a strong case for adoption such that citizens will see the benefit and willingly sign up.
  • The lack of clear information and analysis available to citizens and researchers, including any cost-benefit analysis or clear articulation of the why and how of the Huduma Namba system.
  • The complex political motivations behind the government’s actions, which hinge primarily on the current administration’s campaign promises and its eye on the next election, rather than on longer-term benefits to the population.
  • The risks associated with unchecked data collection, which include improper use and monetization of citizens’ data by government.

While much of the conversation addressed clear concerns with the Huduma Namba project, Nanjala also discussed how human rights law, movements, and actors can help bring about more positive developments in this area. Firstly, in a case brought by the Kenya Human Rights Commission, the Kenya National Commission on Human Rights, and the Nubian Rights Forum, the Kenyan High Court held this year that the Huduma Namba scheme could not proceed without appropriate data protection and privacy safeguards. The decision was an inspiring example of the effectiveness of grassroots activism and rights-based litigation.

Further, this case provided an example of how human rights frameworks can enable transnational conversations about rights issues. Nanjala reminded us to question why the UK can vote to avoid digital ID systems while British companies simultaneously deploy digital ID technologies in the developing world; in other words, why digital ID might be seen as good enough for the colonized, but not for the colonizers. And as digital ID systems are widely promoted by the World Bank throughout the Global South, Nanjala pointed to the successful South-South collaboration and knowledge exchange among Indian and Kenyan activists, lawyers, and scholars in relation to India’s widely criticized digital ID system, Aadhaar. By learning from the Indian experience, Kenyan organizations were able to push back more effectively against particular concerns with Huduma Namba. Looking at the severe harms that have arisen from India’s centralized biometric system can also help demonstrate the risks of such schemes.

Digital ID systems risk reducing humanity to mere data points and, to the extent that they do so, should be resisted. We are not just data points, and treating data as the “new” gold or oil positions our identities as resources to be exploited by companies and governments as they see fit. Nanjala explained that the point of government is not to oversimplify or exploit the human experience, but rather to leverage the resources that government collects to maximize the human experience of its residents. In a context of ever-increasing intrusions into privacy cloaked in claims of making life “easier,” Nanjala’s comments and critique provided a timely reminder to focus on the humans at the center of ongoing debates about our digital lives, identities, and rights.

Holly Ritson, LLM program, NYU School of Law; and Human Rights Scholar with the Digital Welfare State and Human Rights Project.