HUMAN RIGHTS MOVEMENT

New Casebook—International Human Rights by P. Alston available in an Open Access Publication

Philip Alston’s International Human Rights textbook is now available free of charge in a comprehensively revised edition and on an Open Access basis starting July 8, 2024.

This book examines the world of contemporary human rights, including legal norms, political contexts, and moral ideals. It acknowledges the regime's strengths and weaknesses, and focuses on today's principal challenges. These include the struggles against resurgent racism and anti-gender ideology, the implications of new technologies for fact-finding and many other parts of the regime, the continuing marginality of economic, social and cultural rights, radical inequality, climate change, and the ever more central role of the private sector.

The boundaries of the subject have steadily expanded as the post-World War II regime has become an indelible part of the legal, political and moral landscape. Given the breadth and complexity of the regime, the book takes an interdisciplinary and critical approach.

"imaginative and stimulating materials with thought-provoking commentary… a wonderful teaching tool, as well as a valuable starting point for research."

Hilary Charlesworth, Judge of the International Court of Justice.

Features include:

  • A focus on current issues such as new technologies, climate change, counter-terrorism, reparations, sanctions, and universal jurisdiction;
  • Expanded focus on race, gender, sexual orientation, disability and other forms of discrimination and the backlash against efforts to combat them;
  • Introductory chapters that provide the necessary overview of international law;
  • An interdisciplinary approach that puts human rights issues into their broader political, economic, and cultural contexts;
  • Diverse and critical perspectives dealt with throughout;
  • Sections dealing with political economy of human rights and the challenge of growing inequality;
  • Extensive treatment of issues of international humanitarian law; and
  • A focus on current situations in Ukraine, Gaza, Myanmar, Venezuela, and elsewhere.

Major themes that run through the book include the colonial and imperial objectives often pursued in the name of human rights, evolving notions of autonomy and sovereignty, the changing configuration of the public-private divide in human rights ordering, the escalating tensions between international human rights and national security, and the striking evolution of ideas about the nature and purposes of the regime itself.

This book is a successor to previous volumes entitled International Human Rights in Context (1996, 2000 and 2008, all co-authored with Henry Steiner and in 2008 also with Ryan Goodman) and International Human Rights: Text and Materials (2013, co-authored with Ryan Goodman). "All four volumes were published by Oxford University Press, and I am grateful to them for reverting all rights to the author in order to enable this Open Access publication," says Alston.

The 2024 comprehensively revised edition will be available free of charge and can be downloaded either as a single PDF file for the entire book or as separate files for each of the eighteen chapters.

TECHNOLOGY AND HUMAN RIGHTS

Co-creating a Shared Human Rights Agenda for AI Regulation and the Digital Welfare State

On September 26, 2023, the Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at NYU Law and Amnesty Tech's Algorithmic Accountability Lab (AAL) brought together 50 participants from civil society organizations across the globe for a collaborative online strategy session entitled 'Co-Creating a Shared Human Rights Agenda for AI and the Digital Welfare State,' to discuss the use and regulation of artificial intelligence in the public sector. Participants spanned diverse geographies and contexts—from Nigeria to Chile, and from Pakistan to Brazil—and included organizations working across a broad spectrum of human rights issues such as privacy, social security, education, and health. Through a series of lightning talks and breakout room discussions, the session surfaced shared concerns regarding the use of AI in public sector contexts, key gaps in existing discussions surrounding AI regulation, and potential joint advocacy opportunities.

Global discussions on the regulation of artificial intelligence (AI) have, in many contexts, thus far been preoccupied with whether to place meaningful constraints on the development, sale, and use of AI by private technology companies. Less attention has been paid to the need to place similar constraints on governments' use of AI. Yet governments around the world have been enthusiastically adopting AI across public sector programs and critical public services. AI-based systems are consistently tested in spheres where some of the most marginalized and low-income groups are unable to opt out – for instance, machine learning and other technologies are used to detect welfare benefit fraud, to assess vulnerability and determine eligibility for social benefits like housing, and to monitor people on the move. All too often, however, this technological experimentation results in discrimination, restriction of access to key services, privacy violations, and many other human rights harms. As governments eagerly build "digital welfare states," incorporating AI into critical public services, the scale and severity of the potential implications demand that meaningful constraints be placed on these developments.

In the past few years, a wide array of regulatory and policy initiatives aimed at regulating the development and use of AI have been introduced – in Brazil, China, Canada, and the EU, and at the African Commission on Human and Peoples' Rights, among many other countries and policy fora. However, what is emerging from these initiatives is an uneven patchwork of approaches to AI regulation, with concerning gaps and omissions when it comes to public sector applications of AI. Some of the world's largest economies – where many powerful technology companies are based – are embarking on new regulatory initiatives with impacts far beyond their territorial confines, while many of the groups likely to be most affected have not been given sufficient opportunities to participate in these processes.

Despite these shortcomings, ongoing efforts to craft regulatory regimes do offer a crucial and urgent entry point for civil society organizations to seek to highlight critical gaps, to foster greater participation, and to contribute to shaping future deployments of AI in these important sectors.

In hosting this collaborative event on AI regulation and the digital welfare state, the AAL and the Center sought to build an inclusive space for civil society groups from across regions and sectors to forge new connections, share lessons, and collectively strategize. We sought to expand mobilization and build solidarity by convening individuals from dozens of countries, who work across a wide range of fields – including “digital rights” organizations, but also bringing in human rights and social justice groups who have not previously worked on issues relating to new technologies. Our aim was to brainstorm how actors across the human rights ecosystem can, in practice, help to elevate more voices into ongoing discussions about AI regulation.

Key issues for AI regulation in the digital welfare state

In breakout sessions, participants emphasized the urgent need to address serious harms that are already resulting from governments’ AI uses, particularly in contexts such as border control, policing, the judicial system, healthcare, and social protection. The public narrative – and accelerated impetus for regulation – has been dominated by discussion of existential threats AI may pose in the future, rather than the severe and widespread threats that are already seen in almost every area of public services. In Serbia, the roll-out of Social Cards in the welfare system has excluded thousands of the most marginalized from accessing their social protection entitlements; in Brazil, the deployment of facial recognition in public schools has subjected young children to discriminatory biases and serious privacy risks. Deployments of AI across public services are consistently entrenching inequalities and exacerbating intersecting discrimination – and participants noted that governments’ increasing interest in generative AI, which has the potential to encode harmful racial bias and stereotypes, will likely only intensify these risks.

Participants also noted that it is likely that AI will continue to impact groups that may defy traditional categorizations – including, for instance, those who speak minority languages. Indeed, a key theme across discussions was the insufficient attention paid in regulatory debates to AI’s impacts on culture and language. Given that systems are generally trained only in dominant languages, breakout discussions surfaced concerns about the potential erasure of traditional languages and loss of cultural nuance.

As advocates work not only to remedy some of these existing harms, but also to anticipate the impacts of the next iterations of AI, many expressed concern about the dominant role that the private sector plays in governments’ roll-outs of AI systems, as well as in discussions surrounding regulation. Where tech companies – who are often protected by powerful lobby groups, commercial confidentiality, and intellectual property regimes – are selling combinations of software, hardware, and technical guidance to governments, this can pose significant transparency challenges. It can be difficult for civil society organizations and affected individuals to understand who is providing these systems, as well as to understand how decisions are made. In the welfare context, for example, beneficiaries are often unaware of whether and how AI systems are making highly consequential decisions about their entitlements. Participants noted that human rights actors need the capacity and resources to move beyond traditional human rights work, to engage with processes such as procurement, standard-setting, and auditing, and to address issues related to intellectual property regimes and proliferating public-private partnerships underlying governments’ uses of AI.

These issues are compounded by the fact that, in many instances, AI-based systems are designed and built in countries such as the US and then marketed and sold to governments around the world for use across critical public services. Often, these systems are not designed with sensitivity to local contexts, cultures, and languages, nor with cognizance of how the technology will interface with the political, social, and economic landscape where it is deployed. In addition, civil society organizations face additional barriers when seeking transparency and access to information from foreign companies. As AI regulation efforts advance, a failure to consider potential extraterritorial harms will leave a significant accountability gap and risk deepening global inequalities. Many participants therefore noted the importance both of ensuring that regulation in countries where tech companies are based includes diverse voices and addresses extraterritorial impacts, and of ensuring that Global North models of regulation, which may not be fit for purpose, are not automatically "exported."

A way forward

The event ended with a strategizing session that revealed the diverse strengths of the human rights movement and multiple areas for future work. Several specific and urgent calls to action emerged from these discussions.

First, given the disproportionate impacts of governments’ AI deployments on marginalized communities, a key theme was the need for broader participation in discussions on emerging AI regulation. This includes specially protected groups such as indigenous peoples, minoritized ethnic and racial groups, immigrant communities, people with disabilities, women’s rights activists, children, and LGBTQ+ groups, to name just a few. Without learning from and elevating the perspectives and experiences of these groups, regulatory initiatives will fail to address the full scope of the realities of AI. We must therefore develop participatory methodologies that bring the voices of communities into key policy spaces. More routes to meaningful consultation would lead to greater power and autonomy for previously marginalized voices to shape a more human rights-centric agenda for AI regulation. 

Second, the unique impacts that public sector use of AI can have on human rights, especially for marginalized groups, demand a comprehensive approach to AI regulation that takes careful account of specific sectors. Regulatory regimes that fail to include meaningful sector-specific safeguards for areas such as health, education, and social security will fail to address the full range of AI-related harms. Participants noted that existing tools and mechanisms can provide a starting point – such as consultation and testing requirements, specific prohibitions on certain kinds of systems, requirements surrounding proportionality, mandatory human rights impact assessments, transparency requirements, periodic evaluations, and supervision mechanisms.

Finally, there was a shared desire to build stronger solidarity across a wider range of actors, and a call to action for more effective collaborations. Participants from around the world were keen to share resources, partner on specific advocacy goals, and exchange lessons learned. Since participants focus on many diverse issues, and adopt different approaches to achieve better human rights outcomes, collaboration will allow us to draw on a much deeper pool of collective knowledge, methodologies, and networks. It will be especially critical to bridge silos between those who identify more as “digital rights” organizations and groups working on issues such as healthcare, or migrants’ rights, or on the rights of people with disabilities. Elevating the work of grassroots groups, and improving diversity and representation among those empowered to enter spaces where key decisions around AI regulation are made, should also be central in movement-building. 

There is also an urgent need for more exchange not only across the human rights ecosystem, but also with actors from other disciplines who bring different forms of technical expertise, such as engineers and public interest technologists. Given the barriers to entry to regulatory spaces – including the resources, long-term commitment, and technical vocabulary they demand – effective coalition-building and information sharing could help to lessen these burdens.

While this event brought together a fantastic and energetic group of advocates from dozens of countries, these takeaways reflect the views of only a small subset of the relevant stakeholders in these debates. We ended the session hopeful, but with the recognition that there is a great deal more work needed to allow for the full participation of affected communities from around the world. Moving forward, we aim to continue to create spaces for varied groups to self-organize, continue the dialogue, and share information. We will help foster collaborations and concretely support organizations in building new partnerships across sectors and geographies, and hope to continue to co-create a shared human rights agenda for AI regulation for the digital welfare state.

As we continue this work and seek to support efforts and build collaborations, we would love to hear from you – please get in touch if you are interested in joining these efforts.

November 14, 2023. Digital Welfare State and Human Rights Project at NYU Law Center for Human Rights and Global Justice, and Amnesty Tech’s Algorithmic Accountability Lab. 

TECHNOLOGY AND HUMAN RIGHTS

Shaping Digital Standards

An Explainer and Recommendations on Technical Standard-Setting for Digital Identity Systems.

In April 2023, we submitted comments to the United States National Institute of Standards and Technology (NIST) to contribute to its Guidelines on Digital Identity. Because the Guidelines are highly technical and written for a specialist audience, we published this short "explainer" document in the hope of providing a resource that empowers other civil society organizations and public interest lawyers to engage with technical standard-setting bodies and to raise human rights concerns related to digitalization in the future. This document therefore sets out the importance of standards bodies, provides an accessible "explainer" on the Digital Identity Guidelines, and summarizes our comments and recommendations.

The National Institute of Standards and Technology (NIST), which is part of the U.S. Department of Commerce, is a prominent and powerful standards body. Its standards are influential, shaping the design of digital systems in the United States and elsewhere. Over the past few years, NIST has been in the process of creating and updating a set of official Guidelines on Digital Identity, which “present the process and technical requirements for meeting digital identity management assurance levels … including requirements for security and privacy as well as considerations for fostering equity and the usability of digital identity solutions and technology.”
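
For readers new to this area, the Guidelines organize their requirements around three families of graded assurance levels: Identity Assurance (IAL), Authenticator Assurance (AAL), and Federation Assurance (FAL), each running from 1 to 3. The sketch below is our own shorthand paraphrase of that structure, offered purely as a reading aid; the one-line summaries are our glosses, not the Guidelines' text.

```python
# Shorthand paraphrase of the assurance-level vocabulary in NIST SP 800-63.
# The one-line summaries below are our own glosses, not quotations.

IDENTITY_ASSURANCE = {  # IAL: how rigorously an identity is proofed
    1: "No link to a specific real-life identity required; attributes self-asserted",
    2: "Identity proofing against evidence, remote or in person",
    3: "In-person (or supervised remote) proofing with verified evidence",
}

AUTHENTICATOR_ASSURANCE = {  # AAL: how strong the login mechanism must be
    1: "Single-factor authentication acceptable",
    2: "Two distinct authentication factors required",
    3: "Hardware-based, phishing-resistant authenticator required",
}

FEDERATION_ASSURANCE = {  # FAL: how strongly federated login assertions are protected
    1: "Assertion signed by the identity provider",
    2: "Assertion signed and encrypted for the relying party",
    3: "Assertion bound to a key the user demonstrably holds",
}

# Agencies are expected to select a level in each family through a risk
# assessment of the transaction; higher levels impose stricter requirements.
```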

The primary audiences for the Guidelines are IT professionals and senior administrators in U.S. federal agencies that utilize, maintain, or develop digital identity technologies to advance their missions. The Guidelines fall under a wider NIST initiative to design a Roadmap on Identity Access and Management that explores topics like accelerating the adoption of mobile driver's licenses, expanding biometric measurement programs, promoting interoperability, and modernizing identity management for U.S. federal government employees and contractors.

This technical guidance is particularly influential because it shapes decision-making surrounding the design and architecture of digital identity systems. Biometrics, identity, and security companies frequently cite their compliance with NIST standards to promote their technology and to convince governments to purchase their hardware and software products to build digital identity systems. Other technical standards bodies look to NIST and cite NIST standards. These technical guidelines thus have a great deal of influence well beyond the United States, affecting what is deemed acceptable or not within digital identity systems, such as how and when biometrics can be used.

Such technical standards are therefore of vital relevance to all those who are working on digital identity. In particular, these standards warrant the attention of civil society organizations and groups who are concerned with the ways in which digital identity systems have been associated with discrimination, denial of services, violations of privacy and data protection, surveillance, and other human rights violations. Through this explainer, we hope to provide a resource that can be helpful to such organizations, enabling and encouraging them to contribute to technical standard-setting processes in the future and to bring human rights considerations and recommendations into the standards that shape the design of digital systems. 

HUMAN RIGHTS MOVEMENT

The Time is Now: Mexico Must Grant Haitians Refugee Protections under the Cartagena Declaration

This report published by Centro de Derechos Humanos Fray Matías de Córdova A.C. and the Global Justice Clinic shows why Mexico–and, by extension, all countries that have signed the Cartagena Declaration on Refugees–must grant Haitians refugee status. 

Haitians living outside of Haiti often lack access to basic human rights, face anti-Black discrimination, and in many countries, live under the threat of being sent back to Haiti. Pathways to legal status in other countries are essential for Haitians seeking safety, but governments rarely grant legal status to Haitians and, when they do, protections are often temporary.

Mexico is one of the many countries that Haitian people have migrated to in the past decade. Tens of thousands of Haitians enter Mexico every year. Mexico has incorporated the Cartagena Declaration–which provides a broader definition of "refugee" than the 1951 Refugee Convention and its 1967 Protocol–into its domestic law, legally binding it to grant refugee status to people who, based on an objective analysis of the circumstances in their country of origin, meet the elements of the declaration. This report establishes how three of the Declaration's elements–generalized violence, massive violations of human rights, and other circumstances that seriously disturb public order–are pervasive in Haiti.

The Global Justice Clinic and Centro de Derechos Humanos Fray Matías de Córdova A.C. launched the report in Mexico City in late April 2024, and met with representatives of Mexican government agencies, including the Comisión Mexicana de Ayuda a Refugiados (Mexican Commission for Refugee Assistance) and the Secretaría de Relaciones Exteriores (Secretariat of Foreign Affairs), to urge them to apply the Cartagena Declaration to Haitian nationals.

HUMAN RIGHTS MOVEMENT

Mexico Must Extend Cartagena’s Protection Principles to Haitian Asylum Seekers

Intersecting crises in Haiti have left tens of thousands of Haitians no choice but to flee their country, and Haitians who fled in prior years are unable to return home. A report by Centro de Derechos Humanos Fray Matías de Córdova A.C. and the Global Justice Clinic shows why Mexico–and, by extension, all countries that have signed the Cartagena Declaration on Refugees–must grant Haitians refugee status. 


The report comes at a critical moment. Haiti currently faces extraordinary violence and a near-complete collapse of state institutions. Armed groups killed more than 1,500 people in the first three months of 2024, displaced more than 360,000 people within Haiti's borders, and seized control of the capital, ports, and hospitals. Sexual violence is endemic. Escalating violence and targeted attacks on government infrastructure in March 2024 plunged Haiti into a two-month-long state of emergency.

Mexico is one of the many countries that Haitian people have migrated to in the past decade. Tens of thousands of Haitians enter Mexico every year. Mexico has incorporated the Cartagena Declaration–which provides a broader definition of "refugee" than the 1951 Refugee Convention and its 1967 Protocol–into its domestic law, legally binding it to grant refugee status to people who, based on an objective analysis of the circumstances in their country of origin, meet the elements of the declaration. This report establishes how three of the Declaration's elements–generalized violence, massive violations of human rights, and other circumstances that seriously disturb public order–are pervasive in Haiti.

Between 2021 and 2023, Mexico approved approximately 5,200 of the more than 110,000 refugee applications filed by Haitians, an approval rate of roughly 4.6%. In those years, Haitians also filed more refugee applications in Mexico than any other nationality.

This disproportionately low approval rate of Haitian applicants, who by any measure face persecution and extremely challenging conditions at home, flies in the face of Mexico’s legal obligations to establish nondiscriminatory migratory procedures.

Enrique Vidal, Interim Director of CDH Fray Matías.

Haitians living outside of Haiti often lack access to basic human rights, face anti-Black discrimination, and in many countries, live under the threat of being sent back to Haiti. Pathways to legal status in other countries are essential for Haitians seeking safety, but governments rarely grant legal status to Haitians and, when they do, protections are often temporary.

Recognizing Haitian nationals as refugees under the Cartagena Declaration is one necessary step to correct the systemic denial of Haitians’ rights. In doing so, Mexico could pave the way for greater protection of human rights in the hemisphere. 

Mexico has the opportunity to be a leader in protecting the rights of Haitian people in the region. Governments throughout the region must assess country conditions objectively, and cease to discriminate against the Haitian people.

Gabrielle Apollon, Director of the Haitian Immigrant Rights Project at the Global Justice Clinic, speaking in light of the upcoming 40th anniversary of the signing of the Cartagena Declaration.

GJC and CDH Fray Matías launched the report, in Spanish, in Mexico City in late April 2024. They met with representatives of Mexican government agencies, including the Comisión Mexicana de Ayuda a Refugiados (Mexican Commission for Refugee Assistance) and the Secretaría de Relaciones Exteriores (Secretariat of Foreign Affairs) to urge them to apply the Cartagena Declaration to Haitian nationals. GJC and Fray Matías staff also observed firsthand the inhumane living conditions that many Haitian migrants and asylum-seekers endure in migrant encampments in Mexico. These conditions underscore the urgency of providing greater refugee protections for Haitians.

Today, GJC and CDH Fray Matías make this report available in English. Although the Mexican government remains the primary advocacy target, this report presents the case for all signatories to the Cartagena Declaration to extend refugee protection to Haitian nationals, and for countries throughout the Hemisphere to provide maximum protections to Haitian migrants and asylum-seekers.

May 24, 2024. For more information, please contact Gabrielle Apollon (English and Kreyòl) or Ellie Happel (English, Kreyòl, Spanish).

INEQUALITIES

Public Transport, Private Profit: The Human Cost of Privatizing Buses in the United Kingdom

The Human Rights and Privatization Project launched a report on the deregulation of local buses in the United Kingdom in July 2021. 

The report finds that the government's 1985 decision to privatize and deregulate the bus sector in England (outside London), Scotland, and Wales has failed passengers and undermined their rights. Taxpayers are subsidizing corporate profits, while private operators are providing a service that is expensive, unreliable, and often dysfunctional. Fares have skyrocketed while ridership has plummeted, undermining efforts to reduce greenhouse gas emissions. This approach has also significantly impacted individuals' lives and rights. We found that people have lost jobs and benefits, faced barriers to healthcare, been forced to give up on education, sacrificed food and utilities, and been cut off from friends and family. The government's new strategy for England leaves this deregulated system in place, and does not address its structural shortcomings.

The report finds that running a bus service premised on profit and market competition, rather than on the well-being of the public, leads to violations of people's rights and is incompatible with human rights law. It calls for public control of bus transport as the default approach, which would be more cost-effective and allow for reinvestment of profits, integrated networks, more efficient coverage, simpler fares, consistency with climate goals, and public accountability. Given how heavily access to essential services and rights depends on public transport, it also calls for a statutory minimum level of service frequency.

TECHNOLOGY AND HUMAN RIGHTS

Paving a Digital Road to Hell? 

A Primer on the Role of the World Bank and Global Networks in Promoting Digital ID

Around the world, governments are enthusiastically adopting digital identification systems. In this 2022 report, we show how global actors, led by the World Bank, are energetically promoting such systems. They proclaim that digital ID will provide an indispensable foundation for an equitable, inclusive future. But a specific model of digital ID is being promoted—and a growing body of evidence shows that this model of digital ID is linked to large-scale human rights violations. In this report, we argue that, despite undoubted good intentions, this model of digital ID is failing to live up to its promises and may in fact be causing severe harm. As international development actors continue to promote and support digital ID rollouts, there is an urgent need to consider the full implications of these systems and to ensure that digital ID realizes rather than violates human rights.

In this report, we provide a carefully researched primer, as well as a call to action with practical recommendations. We first compile evidence from around the world, providing a rigorous overview of the impacts that digital ID systems have had on human rights across different contexts. We show that the implementation of the dominant model of digital ID is increasingly causing severe and large-scale human rights violations, especially since such systems may exacerbate pre-existing forms of exclusion from public and private services. The use of new technologies may also lead to new forms of harm, including biometric exclusion, discrimination along new cleavages, and the many harms associated with surveillance capitalism. Meanwhile, the promised benefits of such systems have not been convincingly proven. This primer draws on the work of experts and activists working across multiple fields to identify critical concerns and evidentiary gaps within this new development consensus on digital ID.

The report points specifically to the World Bank and its Identification for Development (ID4D) Initiative as playing a central role in the rapid proliferation of a particular model of digital ID, one that is heavily inspired by the Aadhaar system in India. Under this approach to digital ID, the aim is to provide individuals with a ‘transactional’ identity, rather than to engage with questions surrounding legal status and rights. We argue that a driving force behind the widespread and rapid adoption of such systems is a powerful new development consensus, which holds that digital ID can contribute to inclusive and sustainable development—and is even a prerequisite for the realization of human rights. This consensus is packaged and promoted by key global actors like the World Bank, as well as by governments, foundations, vendors and consulting firms. It is contributing to the proliferation of digital ID around the world, all while insufficient attention is paid to risks and necessary safeguards.

The report concludes by arguing for a shift in policy discussions around digital ID, including the need to open new critical conversations around the “Identification for Development Agenda,” and encourage greater discourse around the role of human rights in a digital age. We issue a call to action for civil society actors and human rights stakeholders, with practical suggestions for those in the human rights ecosystem to consider. The report sets out key questions that civil society can ask of governments and international development institutions, and specific asks that can be made—including demanding that processes be slowed down so that sufficient care is taken, and increasing transparency surrounding discussions about digital ID systems, among others—to ensure that human rights are safeguarded in the implementation of digital ID systems.

TECHNOLOGY AND HUMAN RIGHTS

Chosen by a Secret Algorithm: Colombia’s top-down pandemic payments

The Colombian government was applauded for delivering payments to 2.9 million people in just two weeks during the pandemic, thanks to a big-data-driven approach. But this new approach represents a fundamental change in social policy, one that shifts away from political participation and from a notion of rights.

On Wednesday, November 24, 2021, the Digital Welfare State and Human Rights Project hosted the ninth episode in the Transformer States conversation series on Digital Government and Human Rights, in an event entitled "Chosen by a secret algorithm: A closer look at Colombia's Pandemic Payments." Christiaan van Veen and Victoria Adelmant spoke with Joan López, Researcher at the Global Data Justice Initiative and at the Colombian NGO Fundación Karisma, about Colombia's pandemic payments and their reliance on data-driven technologies and prediction. This blog highlights some core issues related to taking a top-down, data-driven approach to social protection.

From expert interviews to a top-down approach

The System of Identification of Potential Beneficiaries of Social Programs (SISBEN, by its Spanish acronym) was created to assist in the targeting of social programs in Colombia. This system classifies the Colombian population along a spectrum of vulnerability through the collection of information about households, including health data, family composition, access to social programs, financial information, and earnings. This data is collected through nationwide interviews conducted by experts. Through a simple algorithm, beneficiaries are then rated on a scale of 0 to 100, with 0 as the least prosperous and 100 as the most prosperous. SISBEN therefore aims to identify and rank "the poorest of the poor." This centralized classification system is used by 19 different social programs to determine eligibility: each social program chooses its own cut-off score along that scale as a threshold for eligibility.
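
To make those mechanics concrete, here is a minimal sketch in Python of this kind of score-and-cut-off targeting. The scoring rule and the cut-off values are hypothetical stand-ins; the actual SISBEN formula and program thresholds are more complex and, as discussed below, partly confidential.

```python
# Minimal sketch of SISBEN-style targeting: one shared score, many
# program-specific cut-offs. The scoring rule is a hypothetical stand-in.

def sisben_score(household: dict) -> float:
    """Return a 0-100 prosperity score (0 = least prosperous)."""
    # Hypothetical rule: score rises with each basic service the household has.
    basics = ["water", "sanitation", "education", "health", "employment"]
    have = sum(1 for b in basics if household.get(b, False))
    return 100.0 * have / len(basics)

# Each of the 19 programs picks its own eligibility cut-off (values invented).
PROGRAM_CUTOFFS = {"housing_subsidy": 30.0, "cash_transfer": 45.0}

def eligible(household: dict, program: str) -> bool:
    """A household qualifies only if it scores below the program's cut-off."""
    return sisben_score(household) < PROGRAM_CUTOFFS[program]

# A household with only water and education scores 40.0: eligible for the
# cash transfer (40 < 45) but not the housing subsidy (40 is not below 30).
print(eligible({"water": True, "education": True}, "cash_transfer"))  # True
```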

But in 2016, the National Development Office – the Colombian entity in charge of SISBEN – changed the calculation used to determine the profile of the poorest. It introduced a new and secret algorithm which would create a profile based on predicted income generation capacity. Experts collecting data for SISBEN through interviews had previously looked at the realities of people’s conditions: if a person had access to basic services such as water, sanitation, education, health and/or employment, the person was not deemed poor. But the new system sought instead to create detailed profiles about what a person could earn, rather than what a person has. This approach sought, through modelling, to predict households’ situation, rather than to document beneficiaries’ realities.
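
The contrast between the two logics can be sketched schematically: the first function below checks what a household actually has, while the second mimics the post-2016 approach of predicting what it could earn. The features and coefficients are entirely invented, since the new algorithm is secret.

```python
# Illustrative contrast between observed-conditions targeting (pre-2016)
# and predicted-income targeting (post-2016). All weights are invented.

def deemed_poor_by_observation(household: dict) -> bool:
    """Pre-2016 logic: a household lacking any basic service is deemed poor."""
    basics = ["water", "sanitation", "education", "health", "employment"]
    return not all(household.get(b, False) for b in basics)

def predicted_income_capacity(household: dict) -> float:
    """Post-2016 logic: model what a household *could* earn per month
    (hypothetical linear model with invented coefficients)."""
    return (
        200.0 * household.get("years_of_schooling", 0)
        + 150.0 * household.get("working_age_members", 0)
        + 80.0 * (1 if household.get("urban") else 0)
    )

# The same household can be poor on the first measure yet "not poor" on the
# second, if the model predicts it could generate sufficient income.
```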

A new approach to social policy

During the pandemic, the government launched a new system of payments called the Ingreso Solidario (meaning "solidarity income"). This system would provide monthly payments to people who were not covered by any existing social program that relied on SISBEN; the ultimate goal of Ingreso Solidario was to send money to 2.9 million people who needed assistance due to the crisis caused by COVID-19. The Ingreso Solidario was, in some ways, very effective. People did not have to apply for this program: if they were selected as eligible, they would automatically receive a payment. Many people received the money immediately into their bank accounts, and payments were made very rapidly, within just a few weeks. Moreover, the Ingreso Solidario was an unconditional transfer and did not condition receipt of the money on the fulfillment of certain requirements.

But the Ingreso Solidario was based on a new approach to social policy, driven by technology and data sharing. The Government entered agreements with private companies, including Experian and TransUnion, to access their databases. Agreements were also made between different government agencies and departments. Through data-sharing arrangements across 34 public and private databases, the government cross-checked the information provided in the interviews with information in dozens of databases to find inconsistencies and exclude anyone deemed not to require social assistance. In relying on cross-checking databases to "find" people who are in need, this approach depends heavily on enormous data collection, and it increases the government's reliance on the private sector.
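
The exclusion-by-cross-checking logic can be sketched as follows. The database names, fields, and tolerance rule are hypothetical placeholders; the actual matching rules applied across the 34 databases have not been disclosed.

```python
# Sketch of eligibility filtering by cross-checking declared data against
# external databases. Names, fields, and the tolerance rule are hypothetical.

DECLARED_INCOME = {"p1": 300_000.0, "p2": 250_000.0}  # from SISBEN interviews

def survives_cross_check(person_id: str, sources: list) -> bool:
    """Return True if no external database contradicts the declared income."""
    declared = DECLARED_INCOME.get(person_id, 0.0)
    for source in sources:
        reported = source.get(person_id)
        # Hypothetical rule: any source reporting >20% more income than
        # declared flags an "inconsistency" and excludes the person.
        if reported is not None and reported > declared * 1.2:
            return False
    return True

credit_bureau = {"p1": 900_000.0}      # e.g. a private credit database
payroll_registry = {"p2": 260_000.0}   # e.g. a public payroll database

selected = [p for p in DECLARED_INCOME
            if survives_cross_check(p, [credit_bureau, payroll_registry])]
print(selected)  # ['p2'] -- p1 is silently dropped, with no hearing
```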

The implications of this new approach

This new approach to social policy, as implemented through the Ingreso Solidario, has fundamental implications. First, this system is difficult to challenge. The algorithm used to profile vulnerability, to predict income-generating capacity, and to assign a score to people living in poverty is confidential. The Government consistently argued that disclosing information about the algorithm would lead to a macroeconomic crisis: if people knew how the system worked, they would try to cheat it. Additionally, SISBEN has been normalized. Though there are many other ways that eligibility for social programs could be assessed, the public accepts it as natural and inevitable that the government has taken this arbitrary approach reliant on numerical scoring and predictions. Due to this normalization, combined with the lack of transparency, this new approach to determining eligibility for social programs has not been contested.

Second, in adopting an approach which relies on cross-checking and analyzing data, the Ingreso Solidario is designed to avoid any contestation in the design and implementation of the algorithm. This is a thoroughly technocratic endeavor. The idea is to use databases and avoid going to, and working with, the communities. The government was, in Joan’s words, “trying to control everything from a distance” to “avoid having political discussions about who should be eligible.” There were no discussions and negotiations between the citizens and the Government to jointly address the challenges of using this technology to target poor people. Decisions about who the extra 2.9 million beneficiaries should be were taken unilaterally from above. As Joan argued, this was intentional: “The mindset of avoiding political discussion is clearly part of the idea of Ingreso Solidario.”

Third, because people were unaware that they were going to receive money, those who received a payment felt like they had won the lottery. Thus, as Joan argued, people saw this money not “as an entitlement, but just as a gift that this person was lucky to get.” This therefore represents a shift away from a conception of assistance as something we are entitled to by right. But in re-centering the notion of rights, we are reminded of the importance of taking human rights seriously when analyzing and redesigning these kinds of systems. Joan noted that we need to move away from an approach of deciding what poverty is from above, and instead move towards working with communities. We must use fundamental rights as guidance in designing a system that will provide support to those in poverty in an open, transparent, and participatory manner which does not seek to bypass political discussion.

María Beatriz Jiménez, LLM program, NYU School of Law, with a research focus on digital rights. She previously worked for the Colombian government in the Ministry of Information and Communication Technologies and the Ministry of Trade.

TECHNOLOGY AND HUMAN RIGHTS

“We are not Data Points”: Highlights from our Conversation on the Kenyan Digital ID System

On October 28, 2020, the Digital Welfare State and Human Rights Project held a virtual conversation with Nanjala Nyabola for the second in the Transformer States Conversation Series on the topic of inclusion and exclusion in Kenya’s digital ID system. Nanjala is a writer, political analyst, and activist based in Nairobi and author of Digital Democracy, Analogue Politics: How the Internet Era is Transforming Politics in Kenya. Through an energetic and enlightening conversation with Christiaan van Veen and Victoria Adelmant, Nanjala explained the historical context of the Huduma Namba system, Kenya’s latest digital ID scheme, and pointed out a number of pressing concerns with the project.

Kenya’s new digital identity system, known as Huduma Namba, was announced in 2018 and involved the establishment of the Kenyan National Integrated Identity Management System (NIIMS). According to its enabling legislation, NIIMS is intended to be a comprehensive national registration and identity system to promote efficient delivery of public services, by consolidating and harmonizing the law on the registration of persons. This ‘master database’ would, according to the government, become the ‘single source of truth’ on Kenyans. A “Huduma Namba” (a unique identifying number) and “Huduma Card” (a biometric identity card) would be assigned to Kenyan citizens and residents.

Huduma Namba is the latest in a long series of biometric identity systems in Kenya that began with colonization. Kenya has had a form of mandatory identification under the Kipande system since the British colonial government's Native Registration Ordinance of 1915. The Kipande system required black men over the age of 16 to be fingerprinted and to carry identification that effectively restricted their freedom of movement and association. Non-compliance carried the threat of criminal punishment and forced labor. Rather than repealing this "cornerstone of the colonial project" upon independence, the government embraced and further formalized the Kipande system, making it mandatory for all men over 18. New ID systems were introduced, but always maintained several core elements: biometrics, the collection of ethnic data, and punishment. ID remained necessary for accessing certain buildings, opening bank accounts, buying or selling property, and moving freely both within and out of Kenya. The fact that women were not included in the national ID system until 1978 further reveals the exclusionary nature of such systems, in this instance along gendered lines.

While, in theory, these ID systems have been mandatory, such that anyone should be able to demand and receive an ID, in practice Kenyans from border communities must be "vetted" before receiving their ID. They must return to their paternal family village to be "vetted" by the local chief as to their community membership. Given the contested nature of Kenya's borders, many Kenyans who may be ethnically Somali or Maasai can face significant difficulty in proving they are "Kenyan" and obtaining the necessary ID. The vetting process can also significantly delay applications. Nanjala explained that some ethnically Somali Kenyans who struggled to gain access to legal identification, and were therefore excluded from basic entitlements, had resorted to registering as refugees in order to access services.

Given the history of legal identity systems in Kenya, Huduma Namba may offer a promising break from the past and may serve to better include marginalized groups. Huduma Namba is supposed to give a "360 degree legal identity" to Kenyan citizens and residents; it includes women and children; and it is more than just a legal identity, as it is also a form of entitlement. For example, Huduma Namba has been said to provide the enabling conditions for universal healthcare, to "facilitate adequate resource allocation" and to "enable citizens to get government services". However, Nanjala also emphasized that Huduma Namba does not address any of the pre-existing exclusions experienced by certain Kenyans, especially those from border communities. Nanjala noted that the Huduma Namba is "layered over a history of exclusion," and preserves many of the discriminatory practices experienced under previous systems. As residents must present existing identity documents in order to obtain a Huduma Card, vetting practices will still hinder border communities' access to the new system, and thereby hinder access to the services to which Huduma Namba will be tied.

Over the course of the conversation Nanjala drew on her rich knowledge and experience to highlight what she sees as a number of 'red flags' raised by the Huduma Namba project. These speak to the need to properly examine the true motivations behind such digital ID schemes and the actors who promote them. In brief, these are:

  • The false promise of the efficiency argument: the claim that "efficient" technological solutions and data will fix social problems. This argument ignores the social, political, and historical context and complexities of governing a state, and merely perpetuates the 'McKinseyfication' of government (the increasing pervasiveness of management consultancy in development). Further, there is little evidence that such efficient solutions will actually work, as was seen in relation to the Integrated Financial Management Information System (IFMIS) rolled out in Kenya in 2013. Such arguments detract attention from examining why problems such as poor infrastructure, healthcare or education systems have arisen or have not been addressed. Nanjala noted that the ongoing COVID-19 pandemic has made the risks of this clear: while the Kenyan government has spent over $6 million on the Huduma Namba system, the country has only 518 ICU beds.
  • The fact that the government is relying on threats and intimidation to “encourage” citizens to register for Huduma Namba. Nanjala posited that if a government is offering citizens a real service or benefit, it should be able to articulate a strong case for adoption such that citizens will see the benefit and willingly sign up.
  • The lack of clear information and analysis, including any cost-benefit analysis or clear articulation of the why and how of the Huduma Namba system, available to citizens or researchers.
  • The complex political motivations behind the government's actions, which hinge primarily on the current administration's campaign promises and eye to the next election, rather than centering longer-term benefits to the population.
  • The risks associated with unchecked data collection, which include improper use and monetization of citizens’ data by government.

While much of the conversation addressed clear concerns with the Huduma Namba project, Nanjala also discussed how human rights law, movements, and actors can help bring about more positive developments in this area. First, the Kenyan High Court's decision this year, in a case brought by the Kenya Human Rights Commission, the Kenya National Commission on Human Rights, and the Nubian Rights Forum, held that the Huduma Namba scheme could not proceed without appropriate data protection and privacy safeguards; the case was an inspiring example of the effectiveness of grassroots activism and rights-based litigation.

Further, this case provided an example of how human rights frameworks can enable transnational conversations about rights issues. Nanjala reminded us to question why it is that the UK can vote to avoid digital ID systems while British companies simultaneously deploy digital ID technologies in the developing world; that is, why digital ID might be seen as good enough for the colonized, but not the colonizers. And as digital ID systems are being widely promoted by the World Bank throughout the Global South, Nanjala pointed to the successful south-south collaboration and knowledge exchange between Indian and Kenyan activists, lawyers, and scholars in relation to India's widely criticized digital ID system, Aadhaar. By learning about the Indian experience, Kenyan organizations were able to push back more effectively against some of the particular concerns with Huduma Namba. Looking at the severe harms that have arisen from the centralized biometric system in India can also help demonstrate the risks of such schemes.

Digital ID systems risk reducing humanity to mere data points and, to the extent that they do so, should be resisted. We are not just data points, and considering data as the "new" gold or oil positions our identities as resources to be exploited by companies and governments as they see fit. Nanjala explained that the point of government is not to oversimplify or exploit the human experience, but rather to leverage the resources that government collects to maximize the human experience of its residents. In the context of ever-increasing intrusions into privacy cloaked in claims of making life "easier," Nanjala's comments and critique provided a timely reminder to focus on the humans at the center of ongoing debates about our digital lives, identities, and rights.

Holly Ritson, LLM program, NYU School of Law; and Human Rights Scholar with the Digital Welfare State and Human Rights Project.

TECHNOLOGY AND HUMAN RIGHTS

Poor Enough for the Algorithm? Exploring Jordan’s Poverty Targeting System

The Jordanian government is using an algorithm to rank social protection applicants from least poor to poorest, as part of a poverty alleviation program. While helpful to those individuals who receive aid, the system excludes many people in need because it fails to accurately reflect the complex realities of poverty. It uses an outdated poverty measure, weights imperfect indicators such as utility consumption, and relies on a static view of socioeconomic status.

On November 28, 2023, the Digital Welfare State and Human Rights project hosted the sixteenth episode in the Transformer States conversation series on Digital Government and Human Rights. Victoria Adelmant and Katelyn Cioffi interviewed Hiba Zayadin, a senior researcher in the Middle East and North Africa division at Human Rights Watch (HRW), about a report published by HRW on the Jordanian government’s use of an algorithmic system to rank applicants for a welfare program based on their poverty level, using data like electricity usage and car ownership. This blog highlights key issues related to the system’s inability to reflect the complexities of poverty and its algorithmic exclusion of individuals in need.

The context behind Jordan’s poverty targeting program 

"Poverty targeting" is generally understood to mean directing social program benefits towards those most in need, with the aim of efficiently using limited government resources and improving living conditions for the poorest individuals. This approach entails the collection of wide-ranging information about socioeconomic circumstances, often through in-depth surveys and interviews, to enable means testing or proxy means testing. Some governments have adopted an approach in which beneficiaries are "ranked" from richest to poorest, and aid is targeted only to those falling below a certain threshold. The World Bank has long advocated for poverty targeting in social assistance. For example, since 2003 it has supported Brazil's Bolsa Família program, which targets the poorest 40% of the population.

Increasingly, the World Bank has turned to new technologies in an effort to improve the accuracy of poverty targeting programs. It has provided funding to many countries for data-driven, algorithm-enabled approaches to enhance targeting. Such programs have been implemented in countries including Jordan, Mauritania, Palestine, Morocco, Iraq, Tunisia, Egypt, and Lebanon.

Launched in 2019 with World Bank support, Jordan’s Takaful program, an automated cash transfer program, provides monthly support to families (roughly US $56 to $192) to mitigate poverty. Managed by the National Aid Fund, the program targets the more than 24% of Jordan’s population that falls under the poverty line. The Takaful program has been especially welcome in Jordan, in light of rising living costs. However, policy choices underpinning this program have excluded many individuals who are in need: eligibility restrictions limit access solely to Jordanian nationals, such that the program does not cover registered Syrian refugees, Palestinians without Jordanian passports, migrant workers, and the non-Jordanian families of Jordanian women—since Jordanian women cannot pass on citizenship to their children. Initial phases of the program entailed broader eligibility, but criteria were tightened in subsequent iterations.

Mismatch between the Takaful program’s indicators and the reality of people’s lives

In addition, further exclusions have arisen from the operation of the algorithmic system used in the program. When a person applies to Takaful, the system first determines eligibility by checking whether the applicant is a citizen and whether they are under the poverty line. It subsequently employs an algorithm relying on 57 socioeconomic indicators to rank applicants from least poor to poorest. The National Aid Fund draws on existing databases as well as applicants' answers to a questionnaire that they must fill out online. Indicators include household size, geographic location, utilities consumption, ownership of businesses, and car ownership. It is unclear how these indicators are weighted, but the National Aid Fund has admitted that some indicators will lead to the automatic exclusion of applicants from the Takaful program. Applicants who own a car that is less than five years old, or a business valued at over 3,000 Jordanian dinars, for instance, are automatically excluded.
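
A minimal sketch of this two-stage logic, hard exclusion rules followed by a proxy-based ranking, might look like the following. The two exclusion rules are the ones disclosed by the National Aid Fund; the indicator weights and the scoring formula are hypothetical, since the real weighting of the 57 indicators is not public.

```python
# Sketch of Takaful-style targeting: hard exclusion rules, then a
# proxy-means ranking from poorest to least poor. Weights are invented.

def automatically_excluded(applicant: dict) -> bool:
    """Apply the two disclosed auto-exclusion rules."""
    car_age = applicant.get("car_age_years")
    if car_age is not None and car_age < 5:
        return True  # owns a car less than five years old
    if applicant.get("business_value_jod", 0) > 3000:
        return True  # owns a business valued above 3,000 JOD
    return False

# Hypothetical stand-ins for a handful of the 57 indicators.
HYPOTHETICAL_WEIGHTS = {
    "electricity_kwh_month": 0.05,  # higher consumption read as less poor
    "water_m3_month": 0.5,
    "household_size": -2.0,         # larger households read as poorer
}

def proxy_score(applicant: dict) -> float:
    """Higher score = judged less poor (a stand-in for the real model)."""
    return sum(w * applicant.get(k, 0) for k, w in HYPOTHETICAL_WEIGHTS.items())

def rank_for_aid(applicants: list) -> list:
    """Drop auto-excluded applicants, then rank the rest poorest-first;
    aid is disbursed down the list until the budget runs out."""
    pool = [a for a in applicants if not automatically_excluded(a)]
    return sorted(pool, key=proxy_score)
```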

In its recent report, HRW highlights a number of shortcomings of the algorithmic system deployed in the Takaful program, critiquing its inability to reflect the complex and dynamic nature of poverty. The system, HRW argues, uses an outdated poverty measure, and embeds many problematic assumptions. For example, the algorithm gives some weight to whether an applicant owns a car. However, there are cars in people’s names that they do not actually own; some people own cars that broke down long ago, but they cannot afford to repair them. Additionally, the algorithm assumes that higher electricity and water consumption indicates that a family is less vulnerable. However, poorer households in Jordan in many cases actually have higher consumption—a 2020 survey showed that almost 75% of low- to middle-income households lived in apartments with poor thermal insulation.

Furthermore, this algorithmic system is designed on the basis of a single assessment of socioeconomic circumstances at a fixed point in time. But poverty is not static; people’s lives change and their level of need fluctuates. Another challenge is the unpredictability of aid: in this conversation with CHRGJ’s Digital Welfare State and Human Rights team, Hiba shared the story of a new mother who had been suddenly and unexpectedly cut off from the Takaful program, precisely when she was most in need.

At a broader level, introducing an algorithmic system such as this can also exacerbate information asymmetries. HRW’s report highlights issues concerning opacity in algorithmic decision-making—both for government officials themselves and those subject to the algorithm’s decisions—such that it is more difficult to understand how decisions are being made within this system.

Recommendations to improve the Takaful program

Given these wide-ranging implications, HRW’s primary recommendation is to move away from poverty targeting algorithms and toward universal social protection, which could cost under 1% of the country’s GDP. This could be funded through existing resources, tackling tax avoidance, implementing progressive taxes, and leveraging the influence of the World Bank to guide governments towards sustainable solutions. 

When asked during this conversation whether the algorithm used in the Takaful program could be improved, Hiba noted that a technically perfect algorithm executing a flawed policy will still lead to negative outcomes. She argued that it is the policy itself – the attempt to rank people from least poor to poorest – that is prone to exclusion errors, and warned that technology may be shiny, promising to make targeting accurate, effective, and efficient, but that it can also be a distraction from the policy issues at hand.

Thus, instead of an approach that flattens economic realities and excludes people who are, in reality, in immense need, Hiba recommended that support be provided inclusively and universally: to everyone during vulnerable stages of life, regardless of their income and wealth. Rather than focusing on technology that enables ever more precise targeting, Jordan should focus on embracing solutions that allow for more universal social protection.

Rebecca Kahn, JD program, NYU School of Law; and Human Rights Scholar at the Digital Welfare State & Human Rights project. Her research interests relate to responsible AI governance, digital rights, and consumer protection. She previously worked in the U.S. House and Senate as a legislative staffer.