Pilots, Pushbacks, and the Panopticon: Digital Technologies at the EU’s Borders

TECHNOLOGY & HUMAN RIGHTS

The European Union is increasingly introducing digital technologies into its border control operations. But conversations about these emerging “digital borders” are often silent about the significant harms experienced by those subjected to these technologies, their experimental nature, and their discriminatory impacts.

On October 27, 2021, we hosted the eighth episode in our Transformer States Series on Digital Government and Human Rights, in an event entitled “Artificial Borders? The Digital and Extraterritorial Protection of ‘Fortress Europe.’” Christiaan van Veen and Ngozi Nwanta interviewed Petra Molnar about the European Union’s introduction of digital technologies into its border control and migration management operations. The video and transcript of the event, along with additional reading materials, can be found below. This blog post outlines key themes from the conversation.

Digital technologies are increasingly central to the EU’s efforts to curb migration and “secure” its borders. Against a background of growing violent pushbacks, surveillance technologies such as unpiloted drones and aerostat machines with thermo-vision sensors are being deployed at the borders. The EU-funded “ROBORDER” project aims to develop “a fully-functional autonomous border surveillance system with unmanned mobile robots.” Refugee camps on the EU’s borders, meanwhile, are being turned into a “surveillance panopticon,” as the adults and children living within them are constantly monitored by cameras, drones, and motion-detection sensors. Technologies also mediate immigration and refugee determination processes, from automated decision-making and social media screening to a pilot AI-driven “lie detector.”

In this Transformer States conversation, Petra argued that technologies are enabling a “sharpening” of existing border control policies. As discussed in her excellent report entitled “Technological Testing Grounds,” completed with European Digital Rights and the Refugee Law Lab, new technologies are not only being used at the EU’s borders, but also to surveil and control communities on the move before they reach European territory. The EU has long practiced “border externalization,” where it shifts its border control operations ever further away from its physical territory, partly through contracting non-Member States to try to prevent migration. New technologies are increasingly instrumental in these aims. The EU is funding African states’ construction of biometric ID systems for migration control purposes; it is providing cameras and surveillance software to third countries to prevent travel towards Europe; and it supports efforts to predict migration flows through big data-driven modeling. Further, borders are increasingly “located” on our smartphones and in enormous databases as data-based risk profiles and pre-screening become a central part of the EU’s border control agenda.

Ignoring human experience and impacts

But all too often, discussions about these technologies are sanitized and depoliticized. People on the move are viewed as a security problem, and policymakers, consultancies, and the private sector focus on the “opportunities” presented by technologies in securitizing borders and “preventing migration.” The human stories of those who are subjected to these new technological tools and the discriminatory and deadly realities of “digital borders” are ignored within these technocratic discussions. Some EU policy documents describe the “European Border Surveillance System” without mentioning people at all.

In this interview, Petra emphasized these silences. She noted that “human experience has been left to the wayside.” First-person accounts of the harmful impacts of these technologies are not deemed to be “expert knowledge” by policymakers in Brussels, but it is vital to expose the human realities and counter the sanitized policy discussions. Those who are subjected to constant surveillance and tracking are dehumanized: Petra reports that some are left feeling “like a piece of meat without a life, just fingerprints and eye scans.” People are being forced to take ever-deadlier routes to avoid high-tech surveillance infrastructures, and technology-enabled interdictions and pushbacks are leading to deaths. Further, difference in treatment is baked into these technological systems, as they enable and exacerbate discriminatory inferences along racialized lines. As UN Special Rapporteur on Racism E. Tendayi Achiume writes, “digital border technologies are reinforcing parallel border regimes that segregate the mobility and migration of different groups” and are being deployed in racially discriminatory ways. Indeed, some algorithmic “risk assessments” of migrants have been criticized as a form of racial profiling.

Policy discussions about “digital borders” also do not acknowledge that, while the EU spends vast sums on technologies, the refugee camps at its borders have neither running water nor sufficient food. Enormous investment in digital migration management infrastructures is being “prioritized over human rights.” As one man commented, “now we have flying computers instead of more asylum.”

Technological experimentation and pilot programs in “gray zones”

Crucially, these developments are occurring within largely-unregulated spaces. A central theme of this Transformer States conversation—mirroring the title of Petra’s report, “Technological Testing Grounds”—was the notion of experimentation within the “gray zones” of border control and migration management. Not only are non-citizens and stateless persons accorded fewer rights and protections than EU citizens, but immigration and asylum decision-making is also an area of law which is highly discretionary and contains fewer legal safeguards.

This low-rights, high-discretion environment makes it ripe for testing new technologies. This is especially the case in “external” spaces far from European territory, which are subject to even less regulation. Projects which would not be allowed in other spaces are being tested on populations who are literally at the margins, as refugee camps become testing zones. The abovementioned “lie detector,” whereby an “avatar” border guard flagged “biomarkers of deceit,” was “merely” a pilot program. It has since been fiercely criticized, including by the European Parliament, and challenged in court.

Experimentation is deliberately occurring in these zones because refugees and migrants have limited opportunities to challenge it. The UN Special Rapporteur on Racism has noted that digital technologies in this area are therefore “uniquely experimental.” This has parallels with our work, where we consistently see governments and international organizations piloting new technologies on marginalized and low-income communities. In a previous Transformer States conversation, we discussed Australia’s Cashless Debit Card system, in which technologies were deployed on Aboriginal people through a pilot program. In the UK, radical reform of the welfare system through digitalization was also piloted, with low-income groups serving as test subjects, to “catastrophic” effect.

Where these developments are occurring within largely-unregulated areas, human rights norms and institutions may prove useful. As Petra noted, the human rights framework requires courts and policymakers to focus upon the human impacts of these digital border technologies, and highlights the discriminatory lines along which their effects are felt. The UN Special Rapporteur on Racism has outlined how human rights norms require mandatory impact assessments, moratoria on surveillance technologies, and strong regulation to prevent discrimination and harm.

November 23, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.

Social rights disrupted: how should human rights organizations adapt to digital government?

TECHNOLOGY & HUMAN RIGHTS

As the digitalization of government is accelerating worldwide, human rights organizations who have not historically engaged with questions surrounding digital technologies are beginning to grapple with these issues. This challenges these organizations to adapt both their substantive focus and working methods while remaining true to their values and ideals.

On September 29, 2021, Katelyn Cioffi and I hosted the seventh event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Salima Namusobya, Executive Director of the Initiative for Social and Economic Rights (ISER) in Uganda, about how socioeconomic rights organizations are having to adapt to respond to issues arising from the digitalization of government. In this blog post, I outline parts of the conversation. The event recording, transcript, and additional readings can be found below.

Questions surrounding digital technologies are often seen as issues for “digital rights” organizations, which generally focus on a privileged set of human rights issues such as privacy, data protection, free speech online, or cybersecurity. But, as governments everywhere enthusiastically adopt digital technologies to “transform” their operations and services, these developments are starting to be confronted by actors who have not historically engaged with the consequences of digitalization.

Digital government as a new “core issue”

The Initiative for Social and Economic Rights (ISER) in Uganda is one such human rights organization. Its mission is to improve respect, recognition, and accountability for social and economic rights in Uganda, focusing on the right to health, education, and social protection. It had never worked on government digitalization until recently.

But, through its work on social protection schemes, ISER was confronted with the implications of Uganda’s national digital ID program. While monitoring the implementation of the Senior Citizens grant, in which persons over 80 years old receive cash grants, ISER staff frequently encountered people who were clearly over 80 but were not receiving grants. This program had been linked to Uganda’s national identification scheme, which holds individuals’ biographic and biometric information in a centralized electronic database called the National Identity Register and issues unique IDs to enrolled individuals. Many older persons had struggled to obtain IDs because their fingerprints could not be captured. Many other older persons had obtained national IDs, but the wrong birthdates were entered into the ID Register. In one instance, a man’s birthdate was wrong by nine years. In each case, the Senior Citizens grant was not paid to eligible beneficiaries because of faulty or missing data within the National Identity Register. Witnessing these significant exclusions led ISER to become actively involved in research and advocacy surrounding the digital ID. They partnered with CHRGJ’s Digital Welfare State team and Ugandan digital rights NGO Unwanted Witness, and the collective work culminated in a joint report. This has now become a “core issue” for ISER.

Key challenges

While moving into this area of work, ISER has faced some challenges. First, digitalization is spreading quickly across various government services. From the introduction of online education despite significant numbers of people having no access to electricity or the internet, to the delivery of COVID-19 relief via mobile money when only 71% of Ugandans own a mobile phone, exclusions are arising across multiple government initiatives. As technology-driven approaches are being rapidly adopted and new avenues of potential harm are continually materializing, organizations can find it difficult to keep up.

The widespread nature of these developments means that organizations find themselves making the same argument again and again to different parts of government. It is often proclaimed that digitized identity registers will enable integration and interoperability across government, and that introducing technologies into governance “overcomes bureaucratic legacies, verticality and silos.” But ministries in Uganda remain fragmented and are each separately linking their services to the national ID. ISER must go to different ministries whenever new initiatives are announced to explain, yet again, the significant level of exclusion that using the National Identity Register entails. While fragmentation was a pre-existing problem, the rapid proliferation of initiatives across government is leaving organizations “firefighting.”

Second, organizations face an uphill battle in convincing the government to slow down in their deployment of technology. Government officials often see enormous potential in technologies for cracking down on security threats and political dissent. Digital surveillance is proliferating in Uganda, and the national ID contributes to this agenda by enabling the government to identify individuals. Where such technologies are presented as combating terrorism, advocating against them is a challenge.

Third, powerful actors are advocating the benefits of government digitalization. International agencies such as the World Bank are providing encouragement and technical assistance and are praising governments’ digitalization efforts. Salima noted that governments take this seriously, and if publications from these organizations are “not balanced enough to bring out the exclusionary impact of the digitalization, it becomes a problem.” Civil society faces an enormous challenge in countering overly-positive reports from influential organizations.

Lessons for human rights organizations

In light of these challenges, several key lessons arise for human rights organizations who are not used to working on technology-related problems but who are witnessing harmful impacts from digital government.

One important lesson is that organizations will need to adopt new and different methods in dealing with challenges arising from the rapid spread of digitalization; they should use “every tool available to them.” ISER is an advocacy organization which only uses litigation as a last resort. But when the Ugandan Ministry of Health announced that a national ID would be required to access COVID-19 vaccinations, “time was of the essence,” in Salima’s words. Together with Unwanted Witness, ISER immediately launched litigation seeking an injunction, arguing that the requirement would exclude millions, and the policy was reversed.

ISER’s working methods have changed in other ways. ISER is not a service provision charity. But, in seeing countless people unable to access services because they were unable to enroll in the ID Register, ISER felt obliged to provide direct assistance. Staff compiled lists of people without ID, provided legal services, and helped individuals to navigate enrolment. Advocacy organizations may find themselves taking on such roles to assist those who are left behind in the transition to digital government.

Another key lesson is that organizations have much to gain from sharing their experiences with practitioners who are working in different national contexts. ISER has been comparing its experiences and sharing successful advocacy approaches with Kenyan and Indian counterparts and has found “important parallels.”

Last, organizations must engage in active monitoring and documentation to create an evidence base which can credibly show how digital initiatives are, in practice, affecting some of the most vulnerable. As Salima noted, “without evidence, you can make as much noise as you like,” but it will not lead to change. From taking videos and pictures, to interviewing and writing comprehensive reports, organizations should be working to ensure that affected communities’ experiences can be amplified and reflected to demonstrate the true impacts of government digitalization.

October 19, 2021. Victoria Adelmant, Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

A GPS Tracker on Every “Boda Boda”: A Tale of Mass Surveillance in Uganda

TECHNOLOGY & HUMAN RIGHTS

The Ugandan government recently announced that GPS trackers would be placed on every vehicle in the country. This is just the latest example of the proliferation of technology-driven mass surveillance, spurred by a national security agenda and the desire to suppress political opposition.

Following the June 2021 assassination attempt on Uganda’s Transport Minister and former army commander, General Katumba Wamala, President Yoweri Museveni suggested mandatory Global Positioning System (GPS) tracking of all private and public vehicles. This includes motorcycle taxis (commonly known as boda bodas) and water vessels. Museveni also suggested collecting and storing the palm prints and DNA of every Ugandan.

Barely a month later, reports emerged that the government, through the Ministry of Security, had entered into a secretive 10-year contract with a Russian security firm to install GPS trackers in vehicles. The selection of the firm was never subjected to the procurement procedures required by Ugandan law, and a few days after this news broke, it emerged that the Russian firm was facing bankruptcy litigation. The line minister who endorsed the contract subsequently distanced himself from the deal, saying that he was merely enforcing a presidential directive. The government has confirmed that Ugandans will have to pay 20,000 UGX (approximately $6 USD) annually to the Russian firm for the installation of trackers on their vehicles. This controversial move means Ugandans are paying for their own surveillance.

According to 2020 statistics from the Uganda Bureau of Statistics, a total of 38,182 motor vehicles and 102,273 motorcycles are registered in Uganda. Most of these motorcycles function as boda bodas and are a de facto mode of public transport in Uganda commonly used by people of all social classes. In the capital, Kampala, boda bodas are essential because of their ability to navigate heavy traffic jams. In remote locations where public transport is inaccessible, boda bodas are the only means of transportation for most people, except for elites. While a boda boda motorcycle was allegedly used in the assassination attempt on General Katumba Wamala, those same boda bodas also function as ambulances (one brought the General to a hospital after the attack) and serve many other essential purposes.

It should be emphasized that this latest attempt at boda boda mass surveillance is part of a broader effort by the government of Uganda to exert power and control via digital surveillance and thereby limit the full enjoyment of human rights offline and online. One example is the widespread use of indiscriminate drone surveillance. Another is the Cyber Crimes Unit in the Ugandan police which, since 2014, has had overly broad powers to monitor the social media activity of Ugandans. Unwanted Witness has raised concerns about the intrusive powers of this unit, which violate Article 27 of the 1995 Uganda Constitution that guarantees the right to privacy.

And that is not all. In 2018, the Ugandan government contracted the Chinese firm Huawei to install CCTV cameras in all major cities and on all highways, spending over $126 million USD on these cameras and related facial recognition technology. In the absence of any judicial oversight, there are also concerns about backdoor access to this system for illegal facial recognition surveillance on potential targets and the use of this system to stifle all opposition to the regime.

The fears about the use of this CCTV system to violate human rights and stifle dissent came true in November 2020. Following the arrest of two opposition presidential candidates, political protests erupted in Uganda, and this CCTV system was used to crack down on dissent after these protests. Long before these protests, the Wall Street Journal had already reported on how Huawei technicians assisted the Ugandan government to spy on political opponents.

This is taking place in a wider context of attacks on human rights defenders and NGOs. Under the guise of seeking to pre-empt terror threats, the state has instituted cumbersome regulations on nonprofits and granted authorities the power to monitor and interfere in their work. Last year, a number of well-known human rights groups were falsely accused of funding terrorism and had their bank accounts frozen. The latest government clampdown on NGOs resulted in the suspension of the operations of 54 organizations on allegations of non-compliance with registration laws. Uganda’s pervasive surveillance apparatus will be instrumental in these efforts at censoring and silencing human rights organizations, activists, and other forms of dissent.

The intrusive application of digital surveillance harms Ugandans’ right to privacy. Privacy is a fundamental right enshrined in the 1995 Constitution and numerous international human rights treaties and other legal instruments. The right to privacy is also a central pillar of a well-functioning democracy. But in its quest to surveil its population, the Ugandan government has either downplayed or ignored these violations of human rights.

What is especially problematic here is the partial privatization of government surveillance to individual corporations. There is a long and unfortunate track record in Uganda of private corporations evading all human rights accountability for their involvement in surveillance. In 2019, for example, Unwanted Witness published a report faulting the ride-hailing app SafeBoda for sharing customers’ data with third parties without their consent. With the planned GPS tracking, Ugandan boda boda users will have their privacy eroded further, with the help of the Russian security firm. Driven by a national security agenda and the desire to control and suppress any opposition to the long-running Museveni presidency, digital surveillance is proliferating as Ugandans’ rights to privacy, to freedom of expression, and to freedom of assembly are harmed.

October 13, 2021. Dorothy Mukasa is the Chief Executive Officer of Unwanted Witness, a leading digital rights organization in Uganda. 

“Leapfrogging” to Digital Financial Inclusion through “Moonshot” Initiatives

TECHNOLOGY & HUMAN RIGHTS

The notion that new technological solutions can overcome entrenched exclusion from banking services and fair credit is quickly gaining widespread acceptance. But tech-based “fixes” often funnel low-income groups into separate, inferior systems and create new tech-driven divisions.

In July 2021, the New York City Mayor’s Office of the Chief Technology Officer launched the NYC[x] Moonshot: Financial Inclusion Challenge. This initiative seeks to deploy digital solutions to address inequalities in access to financial institutions. As the Chief Technology Officer stated, “Too many people have been left out of the financial system for too long. This disparity means that financial transactions … end up costing more for those who can least afford it.”

One in ten Americans are “unbanked,” meaning that they do not have a bank account. People of color are disproportionately excluded from traditional financial institutions. Banks consistently operate fewer branches in Black, Native American, and Latinx communities, creating “banking deserts,” while the practice of redlining continues. Poorly-regulated predatory financial institutions such as payday lenders, which impose higher costs than banks and trap customers in cycles of debt, are highly concentrated in these communities and take advantage of financial exclusion. In New York’s borough of the Bronx, over 49% of households are unbanked and high-cost lenders significantly outnumber banks.

Unequal access to banking means unequal access to fair credit. This compounds inequalities, as a poor credit record increasingly determines crucial outcomes, including higher interest rates on loans, higher insurance premiums, and difficulty obtaining employment or housing.

NYC is pursuing technology-based solutions to address these issues. The Moonshot initiative, which seeks proposals “utilizing breakthrough financial inclusion technology” to bring the unbanked into the financial system, follows previous tech-driven schemes. A recent initiative involved IDNYC, the city’s official identification card launched in 2015. This ID scheme had sought to facilitate access to banking by providing government-issued IDs to groups previously unable to open bank accounts for want of official identification; the ID is explicitly available to undocumented immigrants. However, shortly after its launch, the city’s largest banks dealt a blow to the IDNYC scheme by refusing to accept it as sufficient identification to open accounts. In response, the Mayor’s Office turned to technology. In 2018, it solicited proposals from financial firms to introduce electronic chips—the same smartcard technology used in debit cards—into the ID cards. This would allow IDNYC cardholders to load money onto their ID cards and make payments using these cards. Such reloadable cards are known as prepaid cards.

This proposed integration of identification and payment functions was not unique. In the U.S., the city of Oakland’s municipal identification scheme enabled cardholders to have their welfare benefits deposited onto the ID card and make payments with it. Also in California, the city of Richmond’s ID similarly functions as a prepaid card. In 2020, MasterCard’s “City Key” card, which combines official identification and payments, was distributed to low-income residents in Honolulu. Outside of the U.S., MasterCard was involved in adding electronic chips to national ID cards in Nigeria, and the Malaysian national ID also functions as a reloadable debit card.

But the proposal to incorporate smartcards into IDNYC was abandoned. Dozens of immigrants’ rights organizations warned that the integration of payment functions increased immigrant cardholders’ risk of surveillance and profiling. Adding the chip would lead to “massive data collection” by the financial technology firm brought into IDNYC and, because such firms are legally required to retain information about cardholders, undocumented immigrants’ data could be subpoenaed by the Trump administration. The Mayor’s Office accepted that these risks were fundamentally in conflict with the inclusionary goals of IDNYC and withdrew the plan.

While the proposal was abandoned, the narratives and driving forces behind it have intensified. Turning to a prepaid card system to “eliminate banking deserts” in NYC followed a well-established script that promises to “leapfrog” over deeply-rooted social problems using new technologies. The Gates Foundation, McKinsey, MasterCard, and others have long furthered this narrative that groups left behind by traditional financial institutions can be reached through innovative technological solutions which “leapfrog” banks. Bill Gates was famously quoted as saying, “banking is necessary but banks are not”—and today, actors that are not banks, such as payment technology companies and telecommunications firms, are increasingly offering “financially-inclusive” services such as mobile money and smartcard solutions in explicit efforts “to ‘disrupt’… traditional banking services.” Prepaid cards especially seek to bypass banks: by their very design they operate without any link to bank accounts.

As such, these technological solutions funnel unbanked groups into a separate, “parallel banking system.” Prepaid cards do not provide access to bank accounts, so cardholders remain unbanked. This is an inferior banking product; cardholders do not gain the same access to the services and fairer credit that bank accounts enable. Financial exclusion persists, but the unbanked now have smartcards.

Further, the companies “disrupting” banking are usually not subject to the same legal obligations as banks, nor do they provide the same financial protections. Within these separate, technology-enabled payment systems for the unbanked, the extractivism and predatory practices that financial inclusion efforts are supposed to address re-emerge. NYC’s Chief Technology Officer had lamented that financial exclusion means that transactions cost “more for those who can least afford it”—but when Oakland launched its smartcard ID, the company running the prepaid function levied countless fees on cardholders, including $0.75 per transaction, $1 per reloading of funds, and a $2.99 monthly fee. The fees were higher than those of banks. Further, the insistence that electronic payments will solve financial exclusion is motivated by a desire to monetize new customers’ transaction data. Companies are racing to “capture the data of the newly ‘included’” and uncover the “financial lives of the poor” as a new market segment.

As the Immigrant Defense Project and others argued, turning IDNYC into a prepaid card would therefore “be perpetuating, not resolving, inequality in our banking system.” Within our work outside the U.S., we see the same technological solutions being embraced, all while they siphon low-income groups toward less-regulated, separate systems. For example, in South Africa and Australia, recipients of state benefits are forced onto prepaid cards not linked to traditional bank accounts. Still, “digital financial inclusion” through these technologies is being hailed as the solution to financial exclusion.

The 2021 Moonshot initiative appears to be based on the same ideals. The very notion of a “moonshot” is solutionist—it connotes a monumental (technologically-driven) effort to achieve a lofty goal. Official “launch” documents state that technology can “help solve the most pressing issues of people’s lives.” Rather than seeking to work with banks, the scheme turns to developers: the unbanked need “new options.” This focus on technology can obscure the root causes of financial exclusion—namely racism, discrimination, and predatory financial practices. “New options” will too often mean separate, inferior systems; and eschewing attempts to resolve inequalities within the “old options” leaves harmful practices—such as the linking of everything from housing to insurance with credit reports, continuing redlining, and the closing of bank branches without regard for those left behind—unaddressed. 

September 21, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

False Promises and Multiple Exclusion: Summary of Our RightsCon Event on Uganda’s National Digital ID System

TECHNOLOGY & HUMAN RIGHTS

Despite its promotion as a tool for social inclusion and development, Uganda’s National Digital ID System is motivated primarily by national security concerns. As a result, the ID system has generated both direct and indirect exclusion, particularly affecting women and older persons.

On June 10, 2021, the Center for Human Rights and Global Justice at NYU School of Law co-hosted the panel “Digital ID: what is it good for? Lessons from our research on Uganda’s identity system and access to social services” as part of RightsCon, the leading summit on human rights in the digital age. The panelists included Salima Namusobya, Executive Director of the Initiative for Social and Economic Rights (ISER), Dorothy Mukasa, Team Leader of Unwanted Witness, Grace Mutung’u, Research Fellow at the Centre for IP and IT Law at Strathmore University, and Christiaan van Veen, Director of the Digital Welfare State & Human Rights Project at the Center. This blog summarizes highlights of the panel discussion. A recording and transcript of the conversation, as well as additional readings, can be found below.

Uganda’s national digital ID system, known as Ndaga Muntu, was introduced in 2014 through a mass registration campaign. The government aimed to collect the biographic and biometric information, including photographs and fingerprints, of every adult in the country, to record this data in a centralized database known as the National Identity Register, and to issue a national ID card and unique ID number to each adult. Since its introduction, having a national ID has become a prerequisite to access a whole host of services, from registering for a SIM card and opening a bank account, to accessing health services and social protection schemes.

This linkage of Ndaga Muntu to public services has raised significant human rights concerns and is serving to lock millions of people in Uganda out of critical services. Seven years from its inception, it is clear that the national digital ID is a tool for exclusion rather than for inclusion. Drawing on the joint report by the Center, ISER, and Unwanted Witness, this event made clear that Ndaga Muntu was grounded in false promises and is resulting in multiple forms of exclusion.

The False Promise of Inclusion

The Ugandan government argued that this digital ID system would enhance social inclusion by allowing Ugandans to prove their identity more easily. Having this proof of identity would facilitate access to public services such as healthcare, enable people to sign up for private services such as bank accounts, and allow people to move freely throughout Uganda. The same rhetoric of inclusion was used to sell Aadhaar, India’s digital ID system, to the Indian public.

But for many Ugandans this was a false promise. From the very outset, Ndaga Muntu was developed chiefly as a tool for national security. The powerful Ugandan military had long pushed for the collection of sensitive identity information and biometric data: in the context of a volatile region, a centralized information database is appealing because of its ability to verify identity and indicate who is “really Ugandan” and who is not. Therefore, the national ID project was housed in the Ministry of Internal Affairs, overseen by prominent members of the Ugandan People’s Defense Force, and designed to serve only those who succeeded in completing a rigorous citizenship verification process.

The panelist from Kenya, Grace Mutung’u, shared how Kenya’s hundred-year-old national identification system was similarly rooted in a colonial regime that focused on national security and exclusion. Those design principles created a system that sought only to “empower the already empowered” and not to extend benefits beyond already-privileged constituencies. The result in both Kenya and Uganda was the same: digital ID systems that are designed to ensure that certain individuals and groups remain excluded from political, economic, and social life.

Proliferating Forms of Exclusion

Beyond the fact that Ndaga Muntu was designed to directly exclude anyone not entitled to access public services, those who are entitled are also being excluded in their millions. For ordinary Ugandans, accessing Ndaga Muntu is a nightmarish process, rife with problems at every step of the way. These problems, such as corruption, incorrect data entry, and technical errors, have impeded Ugandans’ access to the ID. Vulnerable populations who rely on social protection programs requiring proof of ID bear the brunt of such errors. For example, one older woman was told that the national ID registration system could not capture her picture because of her grey hair. Other elderly Ugandans have had trouble with fingerprint scanners that could not capture fingerprints worn away by years of manual labor.

The many individuals who have not succeeded in registering for Ndaga Muntu are therefore being left out of the critical services which are increasingly linked to the ID. At least 50,000 of the 200,000 eligible persons over the age of 80 in Uganda were unable to access potentially lifesaving benefits such as the Senior Citizens’ Grant cash transfer program. Women have been similarly disproportionately impacted by the national ID requirement; for instance, pregnant women have been refused services by healthcare workers for failing to provide ID. To make matters worse, ID requirements are increasingly ubiquitous in Uganda: proof of ID is often required to book transportation, to vote, to access educational services, healthcare, social protection grants, and food donations. Having a national ID has become necessary for basic survival, especially for those who live in extreme poverty.

Digital ID systems should not prohibit people from living their lives and utilizing basic services that should be universally accessible, particularly when they are justified on the basis that they will improve access to services. Not only was the promise of inclusion for Ndaga Muntu false, but the rollout of the system has also been incompetent and faulty, leading to even greater exclusion. The profound impact of this double discrimination in Uganda demonstrates that such digital ID systems and their impacts on social and economic rights warrant greater and urgent attention from the human rights community at large.

June 12, 2021. Madeleine Matsui, JD program, Harvard Law School; intern with the Digital Welfare State & Human Rights Project.

‘Chased Away and Left to Die’: New human rights report finds that Uganda’s national digital ID system leads to mass exclusion

TECHNOLOGY & HUMAN RIGHTS

‘Chased Away and Left to Die’: New human rights report finds that Uganda’s national digital ID system leads to mass exclusion

Uganda’s national digital ID system, a government showpiece that is of major importance for how individuals in Uganda access their social rights, leads to mass exclusion. This is the key finding in a new report titled Chased Away and Left to Die, published today by a collective of human rights organizations. The report is the outcome of 7 months of in-depth interviews with a multitude of victims, health workers, welfare workers, government officials and other experts on the national ID, referred to by Ugandans as Ndaga Muntu.

Report cover featuring an interviewee holding documents and being photographed on a phone.

The report argues that the Ugandan government has sacrificed the potential of digital ID for social inclusion and the realization of human rights at the altar of national security. “Ndaga Muntu is primarily a national security weapon built with the help of Uganda’s powerful military and not the ‘unrivaled success’ that the World Bank and others have claimed it is,” said Christiaan van Veen, one of the authors of the report and based at the Center for Human Rights and Global Justice at New York University School of Law.

Obtaining a national digital ID is described as “a nightmare” in the report. Based on official sources, the report estimates that as many as one third (33%) of Uganda’s adult population has not yet received a National Identity Card (NIC), a number that may even be rising. Many others in the country have errors on their card or are unable to replace lost or stolen IDs.

Since Ndaga Muntu is mandatory for accessing health care and social benefits, voting, opening a bank account, obtaining a mobile phone, and traveling, the national ID has become a critical gateway to these human rights. As one individual in Nebbi in Northern Uganda put it succinctly in the report: “Ndaga Muntu is like a key to my door; without it, I can’t enter.” This can literally mean the difference between life and death. A woman in Amudat, in Northern Uganda, described the consequences of not having the national ID for access to health care: “Without an ID […] no treatment. Many people fall sick and stay home and die.”

The report urges the Ugandan government to immediately stop requiring the national digital ID to access social rights. “Government has to go back to the drawing table and rethink the use of Ndaga Muntu,” said Angella Nabwowe of the Initiative for Social and Economic Rights, “especially when it comes to tagging it to service delivery, because many people are being left out.”

Researchers focused their fieldwork in various parts of Uganda on documenting evidence of exclusion of women and older persons from health services and the Senior Citizens’ Grant (SCG) tied to Ndaga Muntu. Since 2019, patients have been required to show the national ID to access public health centers. The report details how women, including pregnant women, are ‘chased away’ by health care workers for failure to show their ID. Previously, there was no single, rigid ID requirement to access health care in Uganda.

In March, the Ugandan government also announced its intention to require the national digital ID for access to Covid-19 vaccines. But a lawsuit based on this research by two organizations that co-authored the report, the Initiative for Social and Economic Rights and Unwanted Witness, led to a quick reversal of that policy by the government.

The impact of Ndaga Muntu on the elderly in Uganda is equally heart-wrenching. The report recounts the story of Okye, an 88-year-old man from Namayingo in Eastern Uganda whose date of birth was registered incorrectly, ‘making’ him 79 years old instead. The result for Okye is that he is not eligible for the life-saving government cash transfer for persons over 80 (SCG). Okye is not an exception. Senior sources confirmed to the authors of the report that at least 50,000 Ugandans over 80 have similar mistakes on their national ID that make them ineligible for government assistance, or do not have a national ID at all. That number is almost certainly an undercount and points to mass exclusion among Uganda’s 200,000 older persons over 80.

The consequences of not having a national ID can be tragic for older persons. Nakaddu, an 87-year-old woman in Kayunga district in Central Uganda, told researchers that she did not get the cash grant for the elderly: “I don’t get the money, but I don’t know what to do. […] I can no longer dig. My arm is not okay. I cook for myself. Those ones [pointing to the neighbours] give me some food.”

The report blames the struggles and failures of the National Identification and Registration Authority (NIRA) for many of the exclusionary problems with Ndaga Muntu. NIRA has faced criticism for its failure to enroll a larger part of the population, problems with issuing ID cards, high rates of errors, high costs imposed on individuals and allegations of bribery and corruption.

Perhaps NIRA’s biggest failure, however, has been the neglect of its responsibility for registering births. By prioritizing the registration of adults for the national ID over birth registration, the birth registration rate may have plummeted to as low as 13% of children under 1 year old. Meanwhile, the percentage of adults excluded from the national ID may be rising even as NIRA appears unable to keep up with the growing number of young people who turn 18 and become eligible for the national ID card.

“It is quite absurd to invest in registering the adult population for a national ID and forget about the next generation. It is as if NIRA’s left hand does not know, and does not care, what its right hand is doing,” said Dorothy Mukasa, Team leader at Unwanted Witness.

Digital ID systems have been widely hailed by international development organizations and private actors as tools to foster social inclusion and development, promising poor African nations the ability to ‘leapfrog’ towards becoming modern, digital economies. The report by the collective of human rights organizations shows a much darker picture of exclusion, missed opportunities, and significant financial costs.

The report estimates that the Ugandan government has already spent more than USD 200 million on its digital ID system over the past decade, a sum comparable to the total budget of its Ministry of Gender, Labour and Social Development over the same period. International organizations and bilateral donors have also poured many millions into Uganda’s health and social protection programs, which now risk excluding millions of people because of Ndaga Muntu’s dysfunction. In an ironic twist, some of those same development partners, like the World Bank, are among the foremost champions of digital ID systems in Africa and have also funded NIRA.

Equally tragic is the fact that many of the benefits of digitalization are missed in this digital ID system. While NIRA maintains air-conditioned servers to house its National Identity Register in Kampala, Uganda’s capital, health care workers still register patients’ national identity information in paper booklets provided by NIRA. The promised benefits of biometric verification are also missed, because many remote areas lack fingerprint scanners, or the internet and electricity needed to make them usable. And even where modern biometric equipment did work, many older Ugandans, whose fingerprints had been worn away by years of manual labor, were, as victims told us, “refused by those machines.”

The report recounts one macabre result of these missed digital opportunities: an old and sick man was forced by officials to travel in person to a cash transfer distribution point to verify his fingerprints and receive his social benefit. The man set out on a boda boda motorcycle taxi and died on his way there. Because the last payment due to a deceased beneficiary is customarily given to family members, officials proceeded to take the dead man’s fingerprints.

A short documentary on the impact of Ndaga Muntu on women and older persons can be found here.

This post was initially published as a press release on June 8, 2021.

‘Chased Away and Left to Die’

TECHNOLOGY & HUMAN RIGHTS

Chased Away and Left to Die

How a National Security Approach to Uganda’s National ID Has Led to Wholesale Exclusion of Women and Older Persons

The Ugandan government launched a new national digital ID system in 2014, promising to issue all Ugandans with a national ID number and national ID card, while also building a large central database of identity information, including personal biographic information and digitized biometric information such as fingerprints and facial photographs. This 2020 report documents the continuing wholesale exclusion of large swaths of the Ugandan population from this national digital ID system, known as Ndaga Muntu. Based on 7 months of research together with our Ugandan partners the Initiative for Social and Economic Rights (ISER) and Unwanted Witness, the report takes an in-depth look at the implications of this exclusion for pregnant women and older persons attempting to access their rights to health and social protection.

The report begins with a thoroughly researched overview of the origins and design of the national digital ID system, which was originally described by a prominent government Minister as a “national security weapon.” Although it was strongly linked to national security priorities of the government, the national ID system was also intended to serve a wide variety of uses, including identification and authentication for access to social services and healthcare. However, the implementation of this ambitious system has been filled with challenges—with the result that up to one-third of the adult population remains excluded. Despite robust political support and several waves of mass registration, progress in increasing coverage in the system continues to be frustrated by implementation challenges including budget shortfalls, as well as physical, financial, technological, and administrative barriers to access. All of these challenges have been exacerbated by an environment marked by inequality and discrimination. 

This has led to severe human rights consequences, especially for vulnerable groups such as older persons and women, who have been denied access to lifesaving social services. The report describes how Ndaga Muntu has now become a mandatory requirement to access both government and private services. This includes access to health care and social pensions, as well as the ability to vote, get a bank account, and obtain a mobile phone. In short, exclusion from the national digital ID has become a life and death matter for many people in Uganda. The report draws on focus group conversations and individual interviews with affected persons, as well as discussions with numerous government administrators and scholars, to share deeply contextualized personal accounts of how this mandatory requirement has had an impact on individual lives. 

Based on these extremely concerning accounts of exclusion, discrimination, and violations of economic and social rights, the report concludes with a series of actionable recommendations to mitigate the most pressing human rights concerns. This includes the need to ensure that the mandatory national ID requirement does not continue to lead to exclusion from fundamental rights and services, for instance by allowing for the use of alternative forms of ID. It also emphasizes the need to re-examine whether a national ID system designed to be a national security tool is fit for the purposes of inclusion and human rights. 

I don’t see you, but you see me: asymmetric visibility in Brazil’s Bolsa Família Program

TECHNOLOGY & HUMAN RIGHTS

I don’t see you, but you see me: asymmetric visibility in Brazil’s Bolsa Família Program

Brazil’s Bolsa Família Program, the world’s largest conditional cash transfer program, is indicative of broader shifts in data-driven social security. While its beneficiaries are becoming “transparent” as their data is made available, the way the State uses beneficiaries’ data is increasingly opaque.

“She asked a lot of questions and started filling out the form. When I asked her about when I was going to get paid, she said, ‘That’s up to the Federal Government.’” This experience of applying for Brazil’s Bolsa Família Program (“Programa Bolsa Família” in Portuguese, or PBF), the world’s largest conditional cash transfer program, hints at the informational asymmetries between individuals and the State. Such asymmetries have long existed, but information and communications technologies (ICTs) can exacerbate these imbalances. ICTs enable States to handle an increasing amount of personal data, and this is especially true in the PBF. In June 2020, 14.2 million Brazilian families living in poverty – 43.7 million individuals – were beneficiaries of the Bolsa Família program.

At the core of the PBF’s structure is a register called CadÚnico, which is used for more than 20 social policies. It includes detailed data on heads of households and less granular data on other family members. The law designates women as the heads of household and thereby the main PBF beneficiaries. Information is collected about income, number of people living together, level of education and literacy, housing conditions, access to work, disabilities, and ethnic groups. This data is used to select PBF beneficiaries and to monitor their compliance with the conditions on which the maintenance of the benefit depends, such as requirements that children attend school. The federal government also uses the CadÚnico to identify multidimensional vulnerabilities, grant other benefits, and enable research. Although different programs feed the CadÚnico, the PBF is its most important information provider due to its colossal size. In March 2021, the CadÚnico comprised 75.2 million individual entries from 28.9 million families; PBF beneficiaries make up half of them.

The person responsible for the family unit within the PBF must answer every entry on the “main form,” which consists of 77 questions of varying degrees of detail and sensitivity. Together, these data points expose the sensitive personal information and vulnerabilities of low-income individuals.

The scope of this large and comprehensive dataset is celebrated by social policy experts because it enables the State to target needs for other policies. Indeed, the CadÚnico has been used to identify the relevant beneficiaries for policies ranging from electricity tariff discounts to higher education subsidies. Holding huge amounts of information about low-income individuals can allow States to proactively target needs-based policies.

But when the State is not guided by the principle of data minimization (i.e. collecting only the necessary data and no more), this appetite for information increases and places the burden of risks on individuals. They are transparent to the State, while the State becomes increasingly opaque to them.

Upon registering for the PBF, citizens are not informed about what will happen to the information they provide. For example, the training materials for officials registering beneficiaries only note that officials must warn potential beneficiaries of their liability for providing false or inaccurate information; they do not state that officials must tell beneficiaries how their data will be used, inform them of their data rights, or explain when or whether they might receive their cash transfer. The emphasis therefore lies on the responsibilities of the potential beneficiary rather than those of the State. This lack of transparency about how people’s data will be used reduces citizens’ ability to exercise their rights.

In addition to the increased visibility of recipients to the State, the PBF also releases the beneficiaries’ data to the public due to strict transparency requirements. Though CadÚnico data is generally confidential, PBF recipients’ personal data is publicly available through different paths:

  • The Federal Government’s Transparency Portal publishes a monthly list containing the beneficiary’s name, municipality, NIS (social security number) and the amounts paid.
  • The portal of the Caixa Econômica Federal, the public bank that administers social benefits, allows anyone to check the status of the benefit by entering a name, NIS, and CPF (taxpayer identity number).
  • The NIS of any citizen can be queried at the Citizen’s Consultation Portal CadÚnico by providing name, mother’s name, and birth date.

In making a person’s status as a PBF beneficiary easily accessible, these portals leave the (mostly female) beneficiaries without privacy from any side, and stigmatized. Not only are they surveilled by the State as it closely monitors the PBF’s conditionalities, but they are also monitored by fellow citizens. Citizens have made complaints to the PBF about beneficiaries they believe should not receive cash transfers. At InternetLab, we used the Brazilian Access to Information Law to gain access to some of these complaints. Sixty percent of the complaints included personal identification information about the accused beneficiary, suggesting that citizens are monitoring and reporting their “undeserving” neighbors, using the portals above to check the databases.

The availability of this data has further worrying consequences: at InternetLab, we have witnessed several instances of fraud and electoral propaganda directed at PBF beneficiaries’ phones, and it is not clear where this contact data came from. Different actors are profiling and targeting Brazilian citizens according to their socio-economic vulnerabilities.

The public availability of beneficiaries’ data is backed by law and arises from a desire to fight corruption in Brazil. This requires government spending, including on social programs, to be transparent. But spending on social programs has become more controversial in recent years amidst an economic crisis and the rise of conservative political majorities, and misplaced ideas of “corrupted beneficiaries” have mingled with anti-corruption sentiments. The emphasis has been placed on making beneficiaries “transparent,” rather than government.

Anti-corruption laws do not adequately differentiate between transparency practices that confront corruption and favor democracy, and those which disproportionately reinforce vulnerabilities and inequalities by focusing on recipients of social programs. Public contracts, public employees’ salaries, and beneficiaries of social benefits are all exposed on the same grounds. But these are substantially different uses of public resources, and the exposure of these different kinds of data has very unequal impacts, with beneficiaries the most likely to be harmed by this “transparency.”

The personal data of social program beneficiaries should be treated with more care, and we should question whether disclosing so much information about them is necessary. In the wake of Brazil’s General Data Protection Law which came into force last year, it is vital that the work to increase the transparency of the State continues while the privacy of the vulnerable is protected, not the other way around.

May 3, 2021. Nathalie Fragoso and Mariana Valente.
Nathalie Fragoso, Head of Research, Privacy and Surveillance, Internet Lab.
Mariana Valente, Associate Director of Internet Lab.

Social Credit in China: Looking Beyond the “Black Mirror” Nightmare

TECHNOLOGY & HUMAN RIGHTS

Social Credit in China: Looking Beyond the “Black Mirror” Nightmare

The Chinese government’s Social Credit program has received much attention from Western media and academics, but misrepresentations have led to confusion over what it truly entails. Such mischaracterizations unhelpfully distract from the dangers and impacts of the realities of Social Credit. On March 31, 2021, Christiaan van Veen and I hosted the sixth event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Dr. Chenchen Zhang, Assistant Professor at Queen’s University Belfast, to explore the much-discussed but little-understood Social Credit program in China.

Though the Chinese government’s Social Credit program has received significant attention from Western media and rights organizations, much of this discussion has misrepresented the program. Social Credit is imagined as a comprehensive, nation-wide system in which every action is monitored and a single score is assigned to each individual, much like a Black Mirror episode. This is in fact quite far from reality. But this image has become entrenched in the West, as discussions and some academic debate have focused on abstracted portrayals of what Social Credit could be. In addition, the widely-discussed voluntary, private systems run by corporations, such as Alipay’s Sesame Credit or Tencent’s WeChat score, are often mistakenly conflated with the government’s Social Credit program.

Jeremy Daum has argued that these widespread misrepresentations of Social Credit serve to distract from examining “the true causes for concern” within the systems actually in place. They also distract from similar technological developments occurring in the West, which seem acceptable by comparison. An accurate understanding is required to acknowledge the human rights concerns that this program raises.

The crucial starting point here is that the government’s Social Credit system is a heterogeneous assemblage of fragmented and decentralized systems. Central government, specific government agencies, public transport networks, municipal governments, and others are experimenting with diverse initiatives with different aims. Indeed, xinyong, the term which is translated as “credit” in Social Credit, encompasses notions of financial creditworthiness, regulatory compliance, and moral trustworthiness, therefore covering programs with different visions and narratives. A common thread across these systems is a reliance on information-sharing and lists to encourage or discourage certain behaviors, including blacklists to “shame” wrongdoers and “redlists” publicizing those with a good record.

One national-level program called the Joint Rewards and Sanctions mechanism shares information across government agencies about companies which have violated regulations. Once a company is included on one agency’s blacklist for having, for example, failed to pay migrant workers’ wages, other agencies may also sanction that company and refuse to grant it a license or contract. But blacklisting mechanisms also affect individuals: the People’s Court of China maintains a list of shixin (dishonest) people who default on judgments. Individuals on this list are prevented from accessing “non-essential consumption” (including travel by plane or high-speed train) and their names are published, adding an element of public shaming. Other local or sector-specific “credit” programs aim at disciplining individual behavior: anyone caught smoking on the high-speed train is placed on the railway system’s list of shixin persons and subjected to a 6-month ban from taking the train. Localized “citizen scoring” schemes are also being piloted in a dozen cities. Currently, these resemble “club membership” schemes with minor benefits and have low sign-up rates; some have been very controversial. In 2019, in response to controversies, the National Development and Reform Commission issued guidelines stating that citizen scores must only be used for incentivizing behavior and not as sanctions or to limit access to basic public services. Presently, each of the systems described here is separate from the others.

But even where generalizations and mischaracterizations of Social Credit are dispelled, many aspects nonetheless raise significant concerns. Such systems will, of course, worsen issues surrounding privacy, chilling effects, discrimination, and disproportionate punishment. These have been explored at length elsewhere, but this conversation with Chenchen raised additional important issues.

First, a stated objective behind the use of blacklists and shaming is the need to encourage compliance with existing laws and regulations, since non-compliance undermines market order. This is not a unique approach: the US Department of Labor names and shames corporations that violate labor laws, and the World Bank has a similar mechanism. But the laws which are enforced through Social Credit exist in and constitute an extremely repressive context, and these mechanisms are applied to individuals. An individual can be arrested for protesting labor conditions or for speaking about certain issues on social media, and systems like the People’s Court blacklist amplify the consequences of these repressive laws. Mechanisms which “merely” seek to increase legal compliance are deeply problematic in this context.

Second, as with so many of the digital government initiatives discussed in the Transformer States series, Social Credit schemes exhibit a technological solutionism that renders invisible the root causes of the problems they seek to address. Non-payment of migrant workers’ wages, for example, is a legitimate issue which must be tackled. But in turning to digital solutions such as an app which “scores” firms based on their record of wage payments, a depoliticized technological fix is promised to solve systemic problems. In the process, it obscures the structural reasons behind migrant workers’ difficulties in accessing their wages, including a differentiated citizenship regime that denies them equal access to social provisions.

Separately, there are disparities in how individuals in different parts of the country are affected by Social Credit. Around the world, governments’ new digital systems are consistently trialed on the poorest or most vulnerable groups: for example, smartcard technology for quarantining benefit income in Australia was first introduced within indigenous communities. Similarly, experimentation with Social Credit systems is unequally targeted, especially on a geographical basis. There is a hierarchy of cities in China with provincial-level cities like Beijing at the top, followed by prefectural-level cities, county-level cities, then towns and villages. A pattern is emerging whereby smaller or “lower-ranked” cities have adopted more comprehensive and aggressive citizen scoring schemes. While Shanghai has local legislation that defines the boundaries of its Social Credit scheme, less-known cities seeking to improve their “branding” are subjecting residents to more arbitrary and concerning practices.

Of course, the biggest concern surrounding Social Credit relates to how it may develop in the future. While this is currently a fragmented landscape of disparate schemes, the worry is that these may be consolidated. Chenchen stated that a centralized, nationwide “citizen scoring” system remains unlikely and would not enjoy support from the public or the Central Bank which oversees the Social Credit program. But it is not out of the question that privately-run schemes such as Sesame Credit might eventually be linked to the government’s Social Credit system. Though the system is not (yet) as comprehensive and coordinated as has been portrayed, its logics and methodologies of sharing ever-more information across siloes to shape behaviors may well push in this direction, in China and elsewhere.

April 20, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Everyone Counts! Ensuring that the human rights of all are respected in digital ID systems

TECHNOLOGY & HUMAN RIGHTS

Everyone Counts! Ensuring that the human rights of all are respected in digital ID systems

The Everyone Counts! initiative was launched in the fall of 2020 with a firm commitment to a simple principle: the digital transformation of the state can only qualify as a success if everyone’s human rights are respected. Nowhere is this more urgent than in the context of so-called digital ID systems.

Research, litigation, and broader advocacy on digital ID in countries like India and Kenya have already revealed the dangers of exclusion from digital ID for ethnic minority groups[1] and for people living in poverty.[2] However, a significant gap still exists between the magnitude of the human rights risks involved and the urgency of research and action on digital ID in many countries. Despite their active promotion and use by governments, international organizations, and the private sector, in many cases we simply do not know how these digital ID systems lead to social exclusion and human rights violations, especially for the poorest and most marginalized.

Therefore, the Everyone Counts! initiative aims to engage in both research and action to address social exclusion and related human rights violations that are facilitated by government-sponsored digital ID systems.

Does the emperor have new clothes? The yawning evidence gap on digital ID

The common narrative behind the rush towards digital ID systems, especially in the Global South, is by now familiar: “As many as 1 billion people across the world do not have basic proof of identity, which is essential for protecting their rights and enabling access to services and opportunities.”[3] Digital ID is presented as a key solution to this problem, while simultaneously promising lower income countries the opportunity to “leapfrog” years of development via digital systems that assist in “improving governance and service delivery, increasing financial inclusion, reducing gender inequalities by empowering women and girls, and increasing access to health services and social safety nets for the poor.”[4]

This perspective, for which the World Bank and its Identification for Development (ID4D) Initiative have become the official “anchor” internationally, presents digital ID systems as a force for good. The Bank acknowledges that exclusionary issues may arise, but is confident that such issues may be overcome through good intentions and safeguards. Digging underneath the surface of these confident assertions, however, one finds that there appears to be remarkably little research into the overall impact of digital ID systems on social exclusion and a range of related human rights. For instance, after entering the digital ID space in 2014, publishing prolifically, and guiding billions of development dollars into furthering this agenda, the World Bank’s ID4D team concedes in its 2020 Annual Report that “given that this topic is relatively new to the development agenda, empirical research that rigorously evaluates the impact of ID systems on development outcomes and the effectiveness of strategies to mitigate risks has been limited.”[5] In other words, despite warning signs from several countries around the world, including chilling stories of people who have died because they were shut out of biometric ID systems,[6] the digital ID agenda moves full steam ahead without full understanding of its exclusionary potential.

Making sure that everyone truly counts

While the Everyone Counts! initiative only has a fraction of the resources of ID4D, we hope to inject some much needed reality into this discourse through our work. We will do this by undertaking–together with research partners in different countries–empirical human rights research that investigates how the introduction of a digital ID system leads to or exacerbates social exclusion. For example, we are currently undertaking a joint research project with Ugandan research partners focused on Uganda’s digital ID system, Ndaga Muntu, and its impact on poor women’s right to health, and older persons’ right to social assistance.

Our presence at a leading university and law school underlines our commitment to high quality and cutting-edge research, but we are not in the business of knowledge accumulation purely for its own sake. We will aim to transform our research into action. This could come in the form of strategic litigation and advocacy, such as the work by our partners described below, or in the form of network building and information sharing. For instance, together with co-sponsors like the UN Economic Commission for Africa (UNECA) and the Open Society Justice Initiative (OSJI), we are hosting a workshop series for African civil society organizations on digital ID and exclusion. The series creates a space where activists hoping to resist the exclusion associated with digital ID can come together, gain access to tools, information and networks, and form a community of practice that facilitates further activism.

Ensuring non-discriminatory access to vaccines: An early case study 

A recent example from Uganda demonstrates just how effective targeted action against digital ID systems can be. The government began rolling out its national digital ID system, Ndaga Muntu, as early as 2015, and it has gradually become a mandatory requirement for accessing a range of social services in Uganda.

To address the threat of COVID-19, the Ugandan government recently began a free, national vaccine program. Among the groups eligible to receive the vaccine were all adults over the age of 50. On March 2, however, the Ugandan Minister of Health announced that only those Ugandan citizens who could produce a Ndaga Muntu card, or at least a national ID number (NIN), would be able to receive the vaccine. Conservative estimates suggest that over 7 million eligible Ugandans have not yet received their national ID card.

Our research partners, the Initiative for Social and Economic Rights (ISER) and Unwanted Witness (UW), sued the Ugandan government on March 5 to challenge the mandatory requirement of the Ndaga Muntu.[7] They argued that not only would the requirement of the national ID exclude millions of eligible older persons from receiving the vaccine, but also that it would set a dangerous precedent that would allow for further discrimination in other areas of social services.[8]

On March 9, the Ministry of Health announced that it would change the national ID requirement so that alternative forms of identification documents, which are much more accessible to poor Ugandans, could be used to access the COVID-19 vaccine.[9] This was a critical victory for the millions of Ugandans who seek access to the life-saving vaccine–but it is also a warning sign of the subtle and pernicious ways that the digital ID system may be used to exclude.

Humans first, not systems first

The Ugandan case study shows the urgent need for the human rights movement to engage in discussions about digital transformation so that fundamental rights are not lost in the rush to build a “modern, digital state.” In our work on this initiative, we will remain similarly committed to prioritizing how individual human beings are affected by digital ID systems. Listening to their stories, understanding the harms they experience, and channeling their anger and frustration to other, more privileged and powerful audiences, is our core purpose.

Digital transformation is a field prone to a utilitarian logic: “if 99% of the population is able to register for a digital ID system, we should celebrate it as a success.” Our qualitative work not only challenges the supposed benefits for these 99%, but also emphasizes that the remaining 1% represents a multitude of individual human beings who may be victimized. Our research so far has only confirmed our intuition that digital ID systems can deliver significant harms, particularly for those who are poorest, most vulnerable, and least powerful in society. These excluded voices deserve to be heard and to become a decisive factor in shaping our digital future.

April 6, 2021. Christiaan van Veen and Katelyn Cioffi.

Christiaan van Veen, Director of the Digital Welfare State and Human Rights Project (2019-2022) at the Center for Human Rights and Global Justice at NYU School of Law. 

Katelyn Cioffi, Senior Research Scholar, Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.