TECHNOLOGY AND HUMAN RIGHTS

Response to the White House Office of Science and Technology Policy’s Request for Information on Biometric Identification Technologies

Response to Request for Information (RFI) FR Doc. 2021–21975 

In January 2022, the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice, together with the Institute for Law, Innovation & Technology (iLIT) at Temple University Beasley School of Law and a group of international legal experts and civil society representatives with extensive experience studying the impacts of biometric technologies, submitted a response to the White House Office of Science and Technology Policy's (OSTP) Request for Information on Biometric Identification Technologies.

This response provides international and comparative information to inform OSTP's understanding of the social, economic, and political impacts of biometric technologies in research and regulation. The complete recommendations can be found in Section V.

TECHNOLOGY AND HUMAN RIGHTS

U.S. Government must adopt moratorium on mandatory use of biometric technologies in critical sectors, look to evidence abroad, urge human rights experts

As the White House Office of Science and Technology Policy (OSTP) embarks on an initiative to design a ‘Bill of Rights for an AI-Powered World,’ it must begin by immediately imposing a moratorium on the mandatory use of AI-enabled biometrics in critical sectors, such as health, social welfare programs, and education, argue a group of human rights experts at the Digital Welfare State & Human Rights Project (the DWS Project) at the Center for Human Rights and Global Justice at NYU School of Law, and the Institute for Law, Innovation & Technology (iLIT) at Temple University School of Law.

In a 10-page submission responding to OSTP's Request for Information, the DWS Project and iLIT argue that biometric identification technologies such as facial recognition and fingerprint-based recognition pose existential threats to human rights, democracy, and the rule of law. Drawing on comparative research and consultation with some of the leading international experts on biometrics and human rights, the submission details evidence of some of the concerns raised in countries including Ireland, India, Uganda, and Kenya. It catalogues the often-catastrophic effects of biometric failure, of unwieldy administrative requirements imposed on public services, and of the pervasive lack of legal remedies and basic transparency about the use of biometrics in government.

“We now have a great deal of evidence about the ways that biometric identification can exclude and discriminate, denying entire groups access to basic social rights,” said Katelyn Cioffi, a Research Scholar at the DWS Project. “Under many biometric identification systems, you can be denied health care, access to education, or even a driver’s license if you are not able or willing to authenticate aspects of your identity biometrically.” An AI Bill of Rights that allows for equal enjoyment of rights must learn from comparative examples, the submission argues, and ensure that AI-enabled biometrics do not merely perpetuate systematic discrimination. This means looking beyond frequently raised concerns about surveillance and privacy, to how biometric technologies affect social rights such as health, social security, education, housing, and employment.

A key factor in the success of this initiative will be much-needed legal and regulatory reform across the United States federal system. “This initiative represents an opportunity for the U.S. government to examine the shortcomings of current laws and regulations, including equal protection, civil rights laws, and administrative law,” stated Laura Bingham, Executive Director of iLIT. “The protections that Americans depend on fail to provide the necessary legal tools to defend their rights and safeguard democratic institutions in a society that increasingly relies on digital technologies to make critical decisions.”

The submission also urges the White House to place constraints on the actions of the U.S. government and U.S. companies abroad. “The United States plays a major role in the development and uptake of biometric technologies globally, through its foreign investment, foreign policy, and development aid,” said Victoria Adelmant, a Research Scholar at the DWS Project. “As the government moves to regulate biometric technologies, it must not ignore U.S. companies’ roles in developing, selling, and promoting such technologies abroad, as well as the government’s own actions in spheres such as international development, defense, and migration.”

For the government to mount an effective response to these harms, the experts argue that it must also take heed of the parallel efforts of other powerful political actors, including China and the European Union, which are currently attempting to regulate biometric technologies. At the same time, it must avoid a race to the bottom, and must not jump into a perceived ‘arms race’ with countries like China by pursuing an increasingly securitized biometric state and allowing the private sector to continue its unfettered ‘self-regulation’ and experimentation. Instead, the U.S. government should focus on acting as a global leader in enabling human rights-sustaining technological innovation.

The submission makes the following recommendations:

  1. Impose an immediate moratorium on the use of biometric technologies in critical sectors: biometric identification should never be mandatory in critical sectors such as education, welfare benefits programs, or healthcare.
  2. Propose and enact legislation to address the indirect and disparate impact of biometrics.
  3. Engage in further review and study of the human rights impacts of biometric technologies as well as of different legal and regulatory approaches.
  4. Build a comprehensive legal and regulatory approach that addresses the complex, systemic concerns raised by AI-enabled biometric identification technologies.
  5. Ensure that any new laws, regulations, and policies are subject to a democratic, transparent, and open process.
  6. Ensure that public education materials and any new laws, regulations, and policies are described and written in clear, non-technical, and easily accessible language.

This post was originally published as a press release on January 17, 2022.

The Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law aims to investigate systems of social protection and assistance in countries worldwide that are increasingly driven by digital data and technologies.

The Temple University Institute for Law, Innovation & Technology (iLIT) at Beasley School of Law pursues action research, experiential instruction, and advocacy with a mission to deliver equity, bridge academic and practical boundaries, and inform new approaches to technological innovation in the public interest.

TECHNOLOGY AND HUMAN RIGHTS

Profiling the Poor in the Dutch Welfare State

A report on the court hearing in litigation in the Netherlands challenging the digital welfare fraud detection system (‘SyRI’)

On Tuesday, October 29, 2019, I attended a hearing before the District Court of The Hague (the Netherlands) in litigation by a coalition of Dutch civil society organizations challenging the Dutch government’s System Risk Indication (“SyRI”). The Digital Welfare State and Human Rights Project at NYU Law, which I direct, recently collaborated with the United Nations Special Rapporteur on extreme poverty and human rights in preparing an amicus brief to the District Court. The Special Rapporteur became involved in this case because SyRI has exclusively been used to detect welfare fraud and other irregularities in poor neighborhoods in four Dutch cities and affects the right to social security and to privacy of the poorest members of Dutch society. This litigation may also set a highly relevant legal precedent with impact beyond Dutch borders in an area that has received relatively little judicial scrutiny to date.

Lies, damn lies, and algorithms

What is SyRI? The formal answer can be found in legislation and implementing regulations from 2014. In order to coordinate government action against the illicit use of government funds and benefits in the areas of social security, tax benefits, and labor law, Dutch law has allowed since 2014 for the sharing of data between municipalities, welfare authorities, tax authorities, and other relevant government authorities. A total of 17 categories of data held by government authorities may be shared in this context, from employment and tax data, to benefit data, health insurance data, and enforcement data, among other categories of digitally stored information. Government authorities wishing to cooperate in a concrete SyRI project request the Minister for Social Affairs and Employment to deploy the SyRI tool, which pools and analyzes the relevant data from the various authorities using an algorithmic risk model.

The Minister has outsourced the tasks of pooling and analyzing the data to a private foundation, somewhat unfortunately named ‘The Intelligence Agency’ (‘Inlichtingenbureau’). The Intelligence Agency pseudonymizes the data pool, analyzes the data using an algorithmic risk model, and creates a file for those individuals (or corporations) who are deemed to be at a higher risk of being involved in benefit fraud and other irregularities. The Minister then analyzes these files and notifies the cooperating government authorities of those individuals (or corporations) who are considered at higher risk of committing benefit fraud or other irregularities (a ‘risk notification’). Risk notifications are included in a register for two years. Those who are included in the register are not actively notified of this registration, but they can receive access to their information in the register upon specific request.
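SyRI’s actual risk model is not public, but the State’s description at the hearing, discussed below, suggests simple cross-database consistency checks against pre-defined indicators rather than machine learning. Purely as a hypothetical illustration of that kind of pipeline, with invented field names and thresholds, a minimal sketch might look like this:

```python
import hashlib

# Hypothetical sketch only: SyRI's actual risk model is not public.
# This illustrates the kind of system the State described at the hearing:
# records pooled from different authorities, pseudonymized, then matched
# against pre-defined risk indicators (no 'learning' algorithm involved).

def pseudonymize(citizen_id: str) -> str:
    """Replace a direct identifier with a stable pseudonym."""
    return hashlib.sha256(citizen_id.encode()).hexdigest()[:16]

# Pre-defined indicators as simple cross-database inconsistency checks.
# Field names and the water-usage threshold are invented for illustration.
RISK_INDICATORS = [
    ("claims_to_live_alone", lambda r: r["benefits_household_size"] == 1
        and r["water_usage_m3_per_year"] > 150),
    ("undeclared_income", lambda r: r["income_declared_to_welfare"]
        < r["income_known_to_tax_authority"]),
]

def flag(record: dict) -> list[str]:
    """Return the names of all indicators that a pooled record triggers."""
    return [name for name, test in RISK_INDICATORS if test(record)]

pooled_record = {
    "pseudonym": pseudonymize("123456789"),
    "benefits_household_size": 1,
    "water_usage_m3_per_year": 210,
    "income_declared_to_welfare": 12000,
    "income_known_to_tax_authority": 12000,
}

# A hit becomes a 'risk notification', kept in a register for two years
# and passed to the cooperating authorities as a lead for investigation.
print("triggered indicators:", flag(pooled_record))  # ['claims_to_live_alone']
```

Even in this toy version, the core disputes in the litigation are visible: the indicators and thresholds are invisible to those being screened, and a “hit” reflects an inconsistency between databases, not proof of fraud.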

The preceding understanding of how the system works can be derived from the legislative texts and history, but a surprising amount of uncertainty remains about how exactly SyRI works in practice. This became abundantly clear in the hearing in the SyRI case before the District Court of The Hague on October 29. The court is assessing the plaintiffs’ claim that SyRI, as legislated in 2014, violates norms of applicable international law, including the rights to privacy, data protection, and a fair trial recognized in the European Convention on Human Rights, the Charter of Fundamental Rights of the European Union, the International Covenant on Civil and Political Rights, and the EU General Data Protection Regulation. In a courtroom packed with representatives of the eight plaintiffs, reporters, and concerned citizens from areas where SyRI has been used, the first question from the three-judge panel sought to clarify the radically different views held by the plaintiffs and the Dutch State as to what SyRI is exactly.

According to the State, SyRI merely compares data from different government databases, operated by different authorities, in order to find simple inconsistencies. Although this analysis is undertaken with the assistance of an algorithm, the State underlined that this algorithm operates on the basis of pre-defined indicators of risk and that the algorithm is not of the ‘learning’ type. The State further emphasized that SyRI is not a Big Data or data-mining system, but that it employs a targeted analysis on the basis of a limited dataset with a clearly defined objective. It also argued that a risk notification by SyRI is merely a – potential – starting point for further investigations by individual government authorities and does not have any direct and automatic legal consequences such as the imposition of a fine or the suspension or withdrawal of government benefits or assistance.

But the plaintiffs strongly contested the State’s characterization of SyRI. They claimed that SyRI is not narrowly targeted but instead aims at entire (poor) neighborhoods, that diverse and unconnected categories of personal data are brought together in SyRI projects, and that the resulting data exchange and analysis occur on a large scale. In their view, SyRI projects could therefore be qualified as projects involving problematic uses of Big Data, data-mining, and profiling. They also made clear that it is exceedingly difficult for them or the District Court to assess what SyRI actually is or is not doing, because key elements of the system remain secret and the relevant legislation does not restrict the methods used, including the requests by cooperating authorities to undertake a SyRI project, the risk model used, and the ways in which personal data can be processed. All of these elements remain hidden from outside scrutiny.

Game the system, leave your water tap running

The District Court asked a series of probing and critical follow-up questions in an attempt to clarify the exact functioning of SyRI and to understand the justification for the secrecy surrounding it. One can sympathize with the court’s attempt to grasp the basic facts about SyRI in order to enable it to undertake its task of judicial oversight. Pushed by the District Court to clarify why the State could not be more open about the functioning of SyRI, the attorney for the State warned about welfare beneficiaries ‘gaming the system’. The attorney referred to a pilot project pre-dating SyRI, in which welfare authority data about individuals claiming low-income benefits was matched with usage data held by publicly owned drinking water companies, in order to identify beneficiaries who had committed fraud by falsely claiming to live alone (and thereby claim a higher benefit level) while actually living with someone else. Making it known that water usage is a ‘risk indicator’, the attorney claimed, could lead beneficiaries to leave their taps running to avoid detection. Some individuals attending the hearing could be heard snickering when this prediction was made.

Another fascinating exchange between the judges and the attorney for the State dealt with the standards applied by the Minister when assessing a request for a SyRI project by municipal and other government authorities. According to the State’s attorney, what would commonly happen is that a municipality has a ‘problem neighborhood’ and wants to tackle its problems, which are presumed to include welfare fraud and other irregularities, through SyRI. The request to the Minister is typically based ‘on the law, experience and logical thinking’ according to the State. Unsatisfied with this reply, the District Court probed the State for a more concrete justification of the use of SyRI and the precise standards applied to justify its use: ‘In Bloemendaal (one of the richest municipalities of the Netherlands) a lot of people enjoy going to classical concerts; in a problem neighborhood, there are a lot of people who receive government welfare benefits; why is that a justification for the use of SyRI?’, the Court asked. The attorney for the State had to admit that specific neighborhoods were targeted because those areas housed more people who were on welfare benefits and that, while participating authorities usually have no specific evidence that there are high(er) levels of benefit fraud in those neighborhoods, this higher proportion of people on benefits is enough reason to use SyRI.

Finally, and of great relevance to the intensity of the Court’s judicial scrutiny, the question of the gravity of the invasion of human rights – more specifically, the right to privacy – was a central topic of the hearing. The State argued that the data being shared and analyzed was existing data and not new data. It furthermore argued that for those individuals whose data was shared and analyzed, but who were not considered a ‘higher risk’, there was no harm at all: their data had been pseudonymized and was removed after the analysis. The opposing view by plaintiffs was that the government-held data that was shared and analyzed in SyRI was not originally collected for the specific purpose of enforcement. Plaintiffs also argued that – due to the wide categories of data that were potentially shared and analyzed in SyRI – a very intimate profile could be made of individuals in targeted neighborhoods: ‘This is all about profiling and creating files on people’.

Judgment expected in early 2020

The District Court announced that it expects to publish its judgment in this case on 29 January 2020. There are many questions to be answered by the Court. In non-legal language, they include at least the following: How does SyRI work exactly? Does it matter whether SyRI uses a relatively straightforward ‘decision-tree’ type of algorithm or, instead, machine learning algorithms? What is the harm in pooling previously siloed government data? What is the harm in classifying an individual as ‘high risk’? Does SyRI discriminate on the basis of socio-economic status, migrant status, race or color? Does the current legislation underpinning SyRI give sufficient clarity and adequate legal standards to meaningfully curb the use of State power to the detriment of individual rights? Can current levels of secrecy be maintained in a democracy based on the rule of law?

In light of the above, there will be many eyes focused on the Netherlands in January when a potentially groundbreaking legal precedent will be set in the debate on digital welfare states and human rights.

November 1, 2019. Christiaan van Veen, Digital Welfare State & Human Rights Project (2019-2022), Center for Human Rights and Global Justice at NYU School of Law.

TECHNOLOGY AND HUMAN RIGHTS

CSOs Call for a Full Integration of Human Rights in the Deployment of Digital Identification Systems

The Principles on Identification for Sustainable Development (the Principles), the creation of which was facilitated by the World Bank’s Identification for Development (ID4D) initiative in 2017, provide one of the few attempts at global standard-setting for the development of digital identification systems across the world. They are endorsed by many global and regional organizations (the “Endorsing Organizations”) that are active in funding, designing, developing, and deploying digital identification programs across the world, especially in developing and less developed countries.

Digital identification programs are emerging across the world in various forms, and will have long-term impacts on the lives and the rights of the individuals enrolled in them. Engagement with civil society can help ensure that the lived experience of people affected by these identification programs informs the Principles and the practices of International Organizations.

Access Now, Namati, and the Open Society Justice Initiative co-organized a Civil Society Organization (CSO) consultation in August 2020 that brought together over 60 civil society organizations from across the world for dialogue with the World Bank’s ID4D Initiative and Endorsing Organizations. The consultation occurred alongside the first review and revision of the Principles, which has been led by the Endorsing Organizations during 2020. 

The consultation provided a platform for civil society feedback towards revisions to the Principles as well as dialogue around the roles of International Organizations (IOs) and Civil Society Organizations in developing rights-respecting digital identification programs. 

This new civil society-drafted report presents a summary of the top-level comments and discussions that took place in the meeting, including recommendations such as: 

  1. There is an urgent need for human rights criteria to be recognized as a tool for evaluation and oversight of existing and proposed digital identification systems, including throughout the Principles document.
  2. Endorsing Organizations should commit to the application of these Principles in practice, including an affirmation that their support will extend only to identification programs that align with the Principles.
  3. CSOs need to be formally recognized as partners with governments and corporations in designing and implementing digital identification systems, including greater country-level engagement with CSOs from the earliest stages of potential digital identification projects through to monitoring ongoing implementation.
  4. Digital identification systems across the globe are already being deployed in a manner that enables repression through enhanced censorship, exclusion, and surveillance, but centering transparent and democratic processes as drivers of the development and deployment of these systems can mitigate these and other risks.

Following the consultation and in line with this new report, we welcome the opportunity to further integrate the principles of the Universal Declaration of Human Rights and other sources of human rights in international law into the Principles on Identification and into the design, deployment, and monitoring of digital identification systems in practice. We encourage the establishment of permanent and formal structures for the engagement of civil society organizations in global and national-level processes related to digital identification, in order to ensure that identification technologies are used in service of human agency and dignity and to prevent further harms to the exercise of fundamental rights in their deployment.

We call on United Nations and regional human rights mechanisms, including the High Commissioner for Human Rights, treaty bodies, and Special Procedures, to take up the severe human rights risks involved in the context of digital identification systems as an urgent agenda item under their respective mandates.

We welcome further dialogue and engagement with the World Bank’s ID4D Initiative and other Endorsing Organizations and promoters of digital identification systems in order to ensure oversight and guidance towards human rights-aligned implementation of those systems.

This post was originally published as a press release on December 17, 2020.

  1. Access Now
  2. AfroLeadership
  3. Asociación por los Derechos Civiles (ADC)
  4. Collaboration on International ICT Policy for East and Southern Africa (CIPESA)
  5. Derechos Digitales
  6. Development and Justice Initiative 
  7. Digital Welfare State and Human Rights Project, Center for Human Rights and Global Justice
  8. Haki na Sheria Initiative 
  9. Human Rights Advocacy and Research Foundation (HRF)
  10. Myanmar Centre for Responsible Business (MCRB) 
  11. Namati

Statements of the Digital Welfare State & Human Rights Project do not purport to represent the views of NYU or the Center, if any.

TECHNOLOGY AND HUMAN RIGHTS

Contesting the Foundations of Digital Public Infrastructure

What Digital ID Litigation Can Tell Us About the Future of Digital Government and Society

Many governments and international organizations have embraced the transformative potential of ‘digital public infrastructure’—a concept that refers to large-scale digital platforms run by or supported by governments, such as digital ID, digital payments, or data exchange platforms. However, many of these platforms remain heavily contested, and recent legal challenges in several countries have vividly demonstrated some of the risks and limitations of existing approaches.

In this short explainer, we discuss four case studies from Uganda, Mexico, Kenya, and Serbia, in which civil society organizations have brought legal challenges to contest initiatives to build digital public infrastructure. What connects the experiences in these countries is that efforts to introduce new national-scale digital platforms have had harmful impacts on the human rights of marginalized groups—impacts that, the litigants argue, were disregarded as governments rolled out these digital infrastructures, and which are wholly disproportionate to the purported benefits that these digital systems are supposed to bring.

These four examples therefore hold important lessons for policymakers, highlighting the urgent need for effective safeguards, mitigations, and remedies as the development and implementation of digital public infrastructure continues to accelerate.

The explainer document builds upon discussions we had during an event we hosted, entitled “Contesting the Foundations of Digital Public Infrastructure: What Digital ID Litigation Can Tell Us About the Future of Digital Government and Society,” where we brought together the civil society actors who have been litigating these four different cases.

August 28, 2023. Katelyn Cioffi, Victoria Adelmant, Danilo Ćurčić, Brian Kiira, Grecia Macías, and Yasah Musa

TECHNOLOGY AND HUMAN RIGHTS

Co-creating a Shared Human Rights Agenda for AI Regulation and the Digital Welfare State

On September 26, 2023, the Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at NYU Law and Amnesty Tech’s Algorithmic Accountability Lab (AAL) brought together 50 participants from civil society organizations across the globe to discuss the use and regulation of artificial intelligence in the public sector, within a collaborative online strategy session entitled ‘Co-Creating a Shared Human Rights Agenda for AI and the Digital Welfare State.’ Participants spanned diverse geographies and contexts—from Nigeria to Chile, and from Pakistan to Brazil—and included organizations working across a broad spectrum of human rights issues such as privacy, social security, education, and health. Through a series of lightning talks and breakout room discussions, the session surfaced shared concerns regarding the use of AI in public sector contexts, key gaps in existing discussions surrounding AI regulation, and potential joint advocacy opportunities.

Global discussions on the regulation of artificial intelligence (AI) have, in many contexts, thus far been preoccupied with whether to place meaningful constraints on the development, sale, and use of AI by private technology companies. Less attention has been paid to the need to place similar constraints on governments’ use of AI. But governments’ enthusiastic adoption of AI across public sector programs and critical public services has been accelerating apace around the world. AI-based systems are consistently tested in spheres where some of the most marginalized and low-income groups are unable to opt out – for instance, machine learning and other technologies are used to detect welfare benefit fraud, to assess vulnerability and determine eligibility for social benefits like housing, and to monitor people on the move. All too often, however, this technological experimentation results in discrimination, restriction of access to key services, privacy violations, and many other human rights harms. As governments eagerly build “digital welfare states,” incorporating AI into critical public services, the scale and severity of potential implications demands that meaningful constraints be placed on these developments. 

In the past few years, a wide array of regulatory and policy initiatives aimed at regulating the development and use of AI have been introduced – in Brazil, China, Canada, the EU, and the African Commission on Human and Peoples’ Rights, among many other countries and policy fora. However, what is emerging from these initiatives is an uneven patchwork of approaches to AI regulation, with concerning gaps and omissions when it comes to public sector applications of AI. Some of the world’s largest economies – where many powerful technology companies are based – are embarking on new regulatory initiatives with impacts far beyond their territorial confines, while many of the groups likely to be most affected have not been given sufficient opportunities to participate in these processes.

Despite these shortcomings, ongoing efforts to craft regulatory regimes do offer a crucial and urgent entry point for civil society organizations to seek to highlight critical gaps, to foster greater participation, and to contribute to shaping future deployments of AI in these important sectors.

In hosting this collaborative event on AI regulation and the digital welfare state, the AAL and the Center sought to build an inclusive space for civil society groups from across regions and sectors to forge new connections, share lessons, and collectively strategize. We sought to expand mobilization and build solidarity by convening individuals from dozens of countries, who work across a wide range of fields – including “digital rights” organizations, but also bringing in human rights and social justice groups who have not previously worked on issues relating to new technologies. Our aim was to brainstorm how actors across the human rights ecosystem can, in practice, help to elevate more voices into ongoing discussions about AI regulation.

Key issues for AI regulation in the digital welfare state

In breakout sessions, participants emphasized the urgent need to address serious harms that are already resulting from governments’ AI uses, particularly in contexts such as border control, policing, the judicial system, healthcare, and social protection. The public narrative – and accelerated impetus for regulation – has been dominated by discussion of existential threats AI may pose in the future, rather than the severe and widespread threats that are already seen in almost every area of public services. In Serbia, the roll-out of Social Cards in the welfare system has excluded thousands of the most marginalized from accessing their social protection entitlements; in Brazil, the deployment of facial recognition in public schools has subjected young children to discriminatory biases and serious privacy risks. Deployments of AI across public services are consistently entrenching inequalities and exacerbating intersecting discrimination – and participants noted that governments’ increasing interest in generative AI, which has the potential to encode harmful racial bias and stereotypes, will likely only intensify these risks.

Participants also noted that it is likely that AI will continue to impact groups that may defy traditional categorizations – including, for instance, those who speak minority languages. Indeed, a key theme across discussions was the insufficient attention paid in regulatory debates to AI’s impacts on culture and language. Given that systems are generally trained only in dominant languages, breakout discussions surfaced concerns about the potential erasure of traditional languages and loss of cultural nuance.

As advocates work not only to remedy some of these existing harms, but also to anticipate the impacts of the next iterations of AI, many expressed concern about the dominant role that the private sector plays in governments’ roll-outs of AI systems, as well as in discussions surrounding regulation. Where tech companies – who are often protected by powerful lobby groups, commercial confidentiality, and intellectual property regimes – are selling combinations of software, hardware, and technical guidance to governments, this can pose significant transparency challenges. It can be difficult for civil society organizations and affected individuals to understand who is providing these systems, as well as to understand how decisions are made. In the welfare context, for example, beneficiaries are often unaware of whether and how AI systems are making highly consequential decisions about their entitlements. Participants noted that human rights actors need the capacity and resources to move beyond traditional human rights work, to engage with processes such as procurement, standard-setting, and auditing, and to address issues related to intellectual property regimes and proliferating public-private partnerships underlying governments’ uses of AI.

These issues are compounded by the fact that, in many instances, AI-based systems are designed and built in countries such as the US and then marketed and sold to governments around the world for use across critical public services. Often, these systems are not designed with sensitivity to local contexts, cultures, and languages, nor with cognizance of how the technology will interface with the political, social, and economic landscape where it is deployed. In addition, civil society organizations face additional barriers when seeking transparency and access to information from foreign companies. As AI regulation efforts advance, a failure to consider potential extraterritorial harms will leave a significant accountability gap and risk deepening global inequalities. Many participants therefore noted the importance both of ensuring that regulation in countries where tech companies are based includes diverse voices and addresses extraterritorial impacts, and of ensuring that Global North models of regulation, which may not be fit for purpose, are not automatically “exported.”

A way forward

The event ended with a strategizing session that revealed the diverse strengths of the human rights movement and multiple areas for future work. Several specific and urgent calls to action emerged from these discussions.

First, given the disproportionate impacts of governments’ AI deployments on marginalized communities, a key theme was the need for broader participation in discussions on emerging AI regulation. This includes specially protected groups such as indigenous peoples, minoritized ethnic and racial groups, immigrant communities, people with disabilities, women’s rights activists, children, and LGBTQ+ groups, to name just a few. Without learning from and elevating the perspectives and experiences of these groups, regulatory initiatives will fail to address the full scope of the realities of AI. We must therefore develop participatory methodologies that bring the voices of communities into key policy spaces. More routes to meaningful consultation would lead to greater power and autonomy for previously marginalized voices to shape a more human rights-centric agenda for AI regulation. 

Second, the unique impacts that public sector use of AI can have on human rights, especially for marginalized groups, demand a comprehensive approach to AI regulation that takes careful account of specific sectors. Regulatory regimes that fail to include meaningful sector-specific safeguards for areas such as health, education, and social security will fail to address the full range of AI-related harms. Participants noted that existing tools and mechanisms can provide a starting point – such as consultation and testing requirements, specific prohibitions on certain kinds of systems, requirements surrounding proportionality, mandatory human rights impact assessments, transparency requirements, periodic evaluations, and supervision mechanisms.

Finally, there was a shared desire to build stronger solidarity across a wider range of actors, and a call to action for more effective collaborations. Participants from around the world were keen to share resources, partner on specific advocacy goals, and exchange lessons learned. Since participants focus on many diverse issues, and adopt different approaches to achieve better human rights outcomes, collaboration will allow us to draw on a much deeper pool of collective knowledge, methodologies, and networks. It will be especially critical to bridge silos between those who identify more as “digital rights” organizations and groups working on issues such as healthcare, or migrants’ rights, or on the rights of people with disabilities. Elevating the work of grassroots groups, and improving diversity and representation among those empowered to enter spaces where key decisions around AI regulation are made, should also be central in movement-building. 

There is also an urgent need for more exchange not only across the human rights ecosystem, but also with actors from other disciplines who bring different forms of technical expertise, such as engineers and public interest technologists. Given the barriers to entry to regulatory spaces – including the resources, long-term commitment, and technical vocabularies they demand – effective coalition-building and information sharing could help to lessen these burdens.

While this event brought together a fantastic and energetic group of advocates from dozens of countries, these takeaways reflect the views of only a small subset of the relevant stakeholders in these debates. We ended the session hopeful, but with the recognition that there is a great deal more work needed to allow for the full participation of affected communities from around the world. Moving forward, we aim to continue to create spaces for varied groups to self-organize, continue the dialogue, and share information. We will help foster collaborations and concretely support organizations in building new partnerships across sectors and geographies, and hope to continue to co-create a shared human rights agenda for AI regulation for the digital welfare state.

As we continue this work and seek to support efforts and build collaborations, we would love to hear from you – please get in touch if you are interested in joining these efforts.

November 14, 2023. Digital Welfare State and Human Rights Project at NYU Law Center for Human Rights and Global Justice, and Amnesty Tech’s Algorithmic Accountability Lab. 

AREA OF WORK

Shaping Digital Standards

An Explainer and Recommendations on Technical Standard-Setting for Digital Identity Systems.

In April 2023, we submitted comments to the United States National Institute of Standards and Technology (NIST) to contribute to its Guidelines on Digital Identity. Given that the Guidelines are very technical and written for a specialist audience, we published this short “explainer” document in the hope of providing a resource that empowers other civil society organizations and public interest lawyers to engage with technical standard-setting bodies and to raise human rights concerns related to digitalization in the future. This document therefore sets out the importance of standards bodies, provides an accessible “explainer” on the Digital Identity Guidelines, and summarizes our comments and recommendations.

The National Institute of Standards and Technology (NIST), which is part of the U.S. Department of Commerce, is a prominent and powerful standards body. Its standards are influential, shaping the design of digital systems in the United States and elsewhere. Over the past few years, NIST has been in the process of creating and updating a set of official Guidelines on Digital Identity, which “present the process and technical requirements for meeting digital identity management assurance levels … including requirements for security and privacy as well as considerations for fostering equity and the usability of digital identity solutions and technology.”
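For readers new to the Guidelines’ core vocabulary: the NIST Digital Identity Guidelines (SP 800-63) separate identity proofing, authentication, and federation into independent assurance levels, each graded from 1 to 3. The snippet below is our own illustrative rendering of that idea, not NIST’s code; the example service and its level choices are hypothetical.

```python
from enum import IntEnum

# Illustrative sketch of the assurance-level vocabulary in NIST SP 800-63.
# The Guidelines define independent assurance dimensions, each graded 1-3;
# the comments here are paraphrases, not official NIST labels.

class IAL(IntEnum):
    """Identity Assurance Level: rigor of identity proofing at enrollment."""
    LEVEL_1 = 1   # self-asserted identity, no proofing required
    LEVEL_2 = 2   # evidence-based proofing, remote or in person
    LEVEL_3 = 3   # strongest proofing, with in-person or supervised checks

class AAL(IntEnum):
    """Authenticator Assurance Level: strength of the login mechanism."""
    LEVEL_1 = 1   # single-factor authentication
    LEVEL_2 = 2   # multi-factor authentication
    LEVEL_3 = 3   # hardware-based multi-factor authentication

# A hypothetical benefits portal choosing its required levels. Raising the
# levels strengthens security, but can also exclude people who cannot
# complete proofing (e.g., no identity documents or no smartphone).
REQUIRED = {"ial": IAL.LEVEL_2, "aal": AAL.LEVEL_2}

def user_meets_requirements(user_ial: IAL, user_aal: AAL) -> bool:
    """Check a user's proofing and authenticator against the service's levels."""
    return user_ial >= REQUIRED["ial"] and user_aal >= REQUIRED["aal"]

print(user_meets_requirements(IAL.LEVEL_1, AAL.LEVEL_2))  # False: proofing too weak
```

Much of the human rights stake lies in choices like these: the levels a service selects determine who can, and who cannot, get through the front door.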

The primary audiences for the Guidelines are IT professionals and senior administrators in U.S. federal agencies that utilize, maintain, or develop digital identity technologies to advance their missions. The Guidelines fall under a wider NIST initiative to design a Roadmap on Identity Access and Management that explores topics like accelerating the adoption of mobile driver’s licenses, expanding biometric measurement programs, promoting interoperability, and modernizing identity management for U.S. federal government employees and contractors.

This technical guidance is particularly influential, as it shapes decision-making surrounding the design and architecture of digital identity systems. Biometrics, identity, and security companies frequently cite their compliance with NIST standards to promote their technology and to convince governments to purchase their hardware and software products to build digital identity systems. Other technical standards bodies look to NIST and cite NIST standards. These technical guidelines thus have a great deal of influence well beyond the United States, affecting what is deemed acceptable or not within digital identity systems, such as how and when biometrics can be used.

Such technical standards are therefore of vital relevance to all those who are working on digital identity. In particular, these standards warrant the attention of civil society organizations and groups who are concerned with the ways in which digital identity systems have been associated with discrimination, denial of services, violations of privacy and data protection, surveillance, and other human rights violations. Through this explainer, we hope to provide a resource that can be helpful to such organizations, enabling and encouraging them to contribute to technical standard-setting processes in the future and to bring human rights considerations and recommendations into the standards that shape the design of digital systems. 

TECHNOLOGY & HUMAN RIGHTS

Pilots, Pushbacks, and the Panopticon: Digital Technologies at the EU’s Borders

The European Union is increasingly introducing digital technologies into its border control operations. But conversations about these emerging “digital borders” are often silent about the significant harms experienced by those subjected to these technologies, their experimental nature, and their discriminatory impacts.

On October 27, 2021, we hosted the eighth episode in our Transformer States Series on Digital Government and Human Rights, an event entitled “Artificial Borders? The Digital and Extraterritorial Protection of ‘Fortress Europe.’” Christiaan van Veen and Ngozi Nwanta interviewed Petra Molnar about the European Union’s introduction of digital technologies into its border control and migration management operations. This blog post outlines key themes from the conversation.

Digital technologies are increasingly central to the EU’s efforts to curb migration and “secure” its borders. Against a background of growing violent pushbacks, surveillance technologies such as unpiloted drones and aerostat machines with thermo-vision sensors are being deployed at the borders. The EU-funded “ROBORDER” project aims to develop “a fully-functional autonomous border surveillance system with unmanned mobile robots.” Refugee camps on the EU’s borders, meanwhile, are being turned into a “surveillance panopticon,” as the adults and children living within them are constantly monitored by cameras, drones, and motion-detection sensors. Technologies also mediate immigration and refugee determination processes, from automated decision-making, to social media screening, and a pilot AI-driven “lie detector.”

In this Transformer States conversation, Petra argued that technologies are enabling a “sharpening” of existing border control policies. As discussed in her excellent report entitled “Technological Testing Grounds,” completed with European Digital Rights and the Refugee Law Lab, new technologies are being used not only at the EU’s borders, but also to surveil and control communities on the move before they reach European territory. The EU has long practiced “border externalization,” shifting its border control operations ever further away from its physical territory, partly by contracting non-Member States to try to prevent migration. New technologies are increasingly instrumental in these aims. The EU is funding African states’ construction of biometric ID systems for migration control purposes; it is providing cameras and surveillance software to third countries to prevent travel towards Europe; and it supports efforts to predict migration flows through big data-driven modeling. Further, borders are increasingly “located” on our smartphones and in enormous databases, as data-based risk profiles and pre-screening become a central part of the EU’s border control agenda.

Ignoring human experience and impacts

But all too often, discussions about these technologies are sanitized and depoliticized. People on the move are viewed as a security problem, and policymakers, consultancies, and the private sector focus on the “opportunities” presented by technologies in securitizing borders and “preventing migration.” The human stories of those who are subjected to these new technological tools and the discriminatory and deadly realities of “digital borders” are ignored within these technocratic discussions. Some EU policy documents describe the “European Border Surveillance System” without mentioning people at all.

In this interview, Petra emphasized these silences. She noted that “human experience has been left to the wayside.” First-person accounts of the harmful impacts of these technologies are not deemed to be “expert knowledge” by policymakers in Brussels, but it is vital to expose the human realities and counter the sanitized policy discussions. Those who are subjected to constant surveillance and tracking are dehumanized: Petra reports that some are left feeling “like a piece of meat without a life, just fingerprints and eye scans.” People are being forced to take ever-deadlier routes to avoid high-tech surveillance infrastructures, and technology-enabled interdictions and pushbacks are leading to deaths. Further, difference in treatment is baked into these technological systems, as they enable and exacerbate discriminatory inferences along racialized lines. As UN Special Rapporteur on Racism E. Tendayi Achiume writes, “digital border technologies are reinforcing parallel border regimes that segregate the mobility and migration of different groups” and are being deployed in racially discriminatory ways. Indeed, some algorithmic “risk assessments” of migrants have been argued to represent racial profiling.

Policy discussions about “digital borders” also do not acknowledge that, while the EU spends vast sums on technologies, the refugee camps at its borders have neither running water nor sufficient food. Enormous investment in digital migration management infrastructures is being “prioritized over human rights.” As one man commented, “now we have flying computers instead of more asylum.”

Technological experimentation and pilot programs in “gray zones”

Crucially, these developments are occurring within largely-unregulated spaces. A central theme of this Transformer States conversation—mirroring the title of Petra’s report, “Technological Testing Grounds”—was the notion of experimentation within the “gray zones” of border control and migration management. Not only are non-citizens and stateless persons accorded fewer rights and protections than EU citizens, but immigration and asylum decision-making is also an area of law which is highly discretionary and contains fewer legal safeguards.

This low-rights, high-discretion environment makes it ripe for the testing of new technologies. This is especially the case in “external” spaces far from European territory, which are subject to even less regulation. Projects that would not be allowed in other spaces are being tested on populations who are literally at the margins, as refugee camps become testing zones. The abovementioned “lie detector,” whereby an “avatar” border guard flagged “biomarkers of deceit,” was “merely” a pilot program. It has since been fiercely criticized, including by the European Parliament, and challenged in court.

Experimentation is deliberately occurring in these zones because refugees and migrants have limited opportunities to challenge it. The UN Special Rapporteur on Racism has noted that digital technologies in this area are therefore “uniquely experimental.” This has parallels with our wider work, in which we consistently see governments and international organizations piloting new technologies on marginalized and low-income communities. In a previous Transformer States conversation, we discussed Australia’s Cashless Debit Card system, in which technologies were deployed upon aboriginal people through a pilot program. In the UK, radical reform of the welfare system through digitalization was also piloted, with low-income groups as test subjects, to “catastrophic” effect.

Where these developments are occurring within largely-unregulated areas, human rights norms and institutions may prove useful. As Petra noted, the human rights framework requires courts and policymakers to focus upon the human impacts of these digital border technologies, and highlights the discriminatory lines along which their effects are felt. The UN Special Rapporteur on Racism has outlined how human rights norms require mandatory impact assessments, moratoria on surveillance technologies, and strong regulation to prevent discrimination and harm.

November 23, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.

CLIMATE AND ENVIRONMENT

Carbon Markets, Forests and Rights

An Introductory Series for Indigenous Peoples

Indigenous peoples are experiencing a rush of interest in their lands and territories from actors involved in carbon markets. Many indigenous communities have expressed that to make informed decisions about how to engage with carbon markets, they need accessible information about what these markets are, and how participating in them may affect their rights.

In response to this demand for information, the Global Justice Clinic and the Forest Peoples Programme have developed a series of introductory materials about carbon markets. The materials were initially developed for GJC partner the South Rupununi District Council in Guyana and have been adapted for a global audience.

The explainer materials can be read in any order:

  • Explainer 1 introduces key concepts that are essential background to understanding carbon markets. It introduces what climate change is, what the carbon cycle and carbon dioxide are, and the link between carbon dioxide, forests, and climate change.
  • Explainer 2 outlines what carbon markets and carbon credits are, and provides a brief introduction to why these markets are developing and how they function.
  • Explainer 3 focuses on indigenous peoples’ rights and carbon markets. It highlights some of the particular risks that carbon markets pose to indigenous peoples and communities. It also highlights key questions communities should ask themselves as they consider how to engage with or respond to carbon markets.
  • Explainer 4 provides an overview of the key environmental critiques of and concerns around carbon markets.
  • Explainer 5 provides a short introduction to ART-TREES, an institution and standard involved in ‘certifying’ carbon credits that is gaining a lot of attention internationally.

TECHNOLOGY & HUMAN RIGHTS

Digital Identification and Inclusionary Delusion in West Africa 

Over 1 billion persons in the world have been categorized as invisible, of whom about 437 million are reported to be from sub-Saharan Africa. In West Africa alone, the World Bank has identified a huge “identification gap,” and different identification projects are underway to identify millions of invisible West Africans.[1] These individuals are regarded as invisible not because they are unrecognizable or non-existent, but because they do not fit a certain measure of visibility that matches the existing or new database(s) of an identifying institution,[2] such as the State or international bodies.

One existing digital identification project in West Africa is the West Africa Unique Identification for Regional Integration and Inclusion (WURI) program initiated by the World Bank under its Identification for Development initiative. The WURI program is to serve as an umbrella under which West African States can collaborate with the Economic Community of West African States (ECOWAS) to design and build a digital identification system, financed by the World Bank, that would create foundational IDs (fIDs)[3] for all persons in the ECOWAS region.[4] Many West African States that have had past failed attempts at digitizing their identification systems have embraced assistance via WURI. The goal of WURI is to enable access to services for millions of people and ensure “mutual recognition of identities” across countries. The promise of digital identification is that it will facilitate development by promoting regional integration, security, social protection of aid beneficiaries, financial inclusion, reduction of poverty and corruption, and healthcare insurance and delivery, and that it will act as a stepping stone to an integrated digital economy in West Africa. This way, millions of invisible individuals would become visible to the state and become financially, politically, and socially included.

Nevertheless, the outlook of WURI and the reliance on digital IDs by development agencies reflect techno-solutionism: a reliance on technologies as the approach to dealing with institutional challenges and developmental goals in West Africa. This reliance on digital technologies does not address some of the major root causes of developmental delays in these countries and may instead make matters worse by excluding the vast majority of people who are either unable to be identified or are excluded by virtue of technological failures. This exclusion emerges in a number of ways, including through the service-based structure and/or mandatory nature of many digital identification projects, which adopt a stance of exclusion first, inclusion later. This means that where access to services and infrastructures, such as opening a bank account, registering SIM cards, getting healthcare, or receiving government aid and benefits, is made subject to registration and possession of a national ID card or unique identification number (UIN), individuals are excluded by default unless they register for and possess the national ID card or UIN, as the sketch below illustrates.
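To make this “exclusion first” structure concrete, here is a minimal hypothetical sketch of such a service gate. The service names, the register, and the checks are invented for illustration and do not depict any particular country’s system:

```python
# Hypothetical sketch of an "exclusion first" service gate: every request
# is denied by default unless the person presents a UIN that verifies
# against the national ID register. All names here are invented.

NATIONAL_ID_REGISTER = {"UIN-0001"}  # stand-in for the foundational ID database

GATED_SERVICES = ["open_bank_account", "register_sim_card", "receive_healthcare"]

def can_access(service: str, uin: str | None, register_online: bool = True) -> bool:
    if service not in GATED_SERVICES:
        return True                  # ungated services remain open to all
    if uin is None:
        return False                 # never registered, or card never delivered
    if not register_online:
        return False                 # authentication infrastructure is down
    return uin in NATIONAL_ID_REGISTER

# Without a UIN, a person is shut out of every gated service by default.
print([can_access(s, None) for s in GATED_SERVICES])        # [False, False, False]
# Even a registered person is excluded whenever verification fails.
print(can_access("register_sim_card", "UIN-0001", register_online=False))  # False
```

Note how all three failure modes discussed below (inability to register, non-delivery of the card or UIN, and breakdown of the authentication technology) collapse into the same outcome for the individual: denial of service.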

There are three contexts in which exclusion may arise. Firstly, an individual may be unable to register for an fID. For instance, in Kenya, many individuals without identity verification documents like birth certificates were excluded from the registration process for its fID, the Huduma Namba. A second context arises where an individual is unable to obtain an fID card or unique identification number (UIN) after registration. This is the case in Nigeria, where the National Identity Management Commission has been unable to deliver ID cards to the majority of those who have registered under the identity program. The risk of exclusion may increase in Nigeria when the government conditions access to services on the possession of an fID card or UIN.

A third scenario involves the inability of an individual to access infrastructures after obtaining an fID card or UIN, due to the breakdown or malfunctioning of the identifying institution’s authentication technology. In Tanzania, for example, although some individuals have the fID card or UIN, they are unable to proceed with their SIM registration process due to the breakdown of the data storage systems. There are also numerous reports of people in India being denied access to services because of technology failures. This leaves a large group of individuals vulnerable, particularly where the fID is required to access key services such as SIM card registration. An unpublished 2018 poll carried out in Cote d’Ivoire reveals that over 65% of those who registered for a National ID used it to apply for SIM card services and about 23% for financial services.[5]

The mandatory or service-based model of most identification systems in West Africa takes away individuals’ powers or rights of access to and control over resources and identity, and confers them on the State and private institutions, thereby raising human rights concerns for those who are unable to fit the criteria for registration and identification. Thus, a person who would ordinarily move around freely, shop at a grocery store, open a bank account, or receive healthcare from a hospital can only do so, upon commencement of mandatory use of the fID, through possession of the fID card or UIN. In Nigeria, for instance, the new national computerized identity card is equipped with a microprocessor designed to host and store multiple e-services and applications, like a biometric e-ID, electronic ID, payment application, and travel document, and to serve as the national identity card of individuals. A Thales publication also states that in a second phase for the Nigerian fID, driver’s license, eVoting, eHealth, or eTransport applications are to be added to the cards. This is a long list of e-services for a country where only about 46% of the population is reported to have access to the internet. Where a person loses this ID card or is unable to provide the UIN that digitally represents them, that person would potentially be excluded from access to all the services and infrastructures to which the fID card or UIN serves as a gateway. This exclusion risk is intensified by the fact that identifying institutions in remote or local areas may lack authentication technologies or an electronic connection to the ID database to verify the existence of individuals at all times when they seek to be identified, make a payment, receive healthcare, or travel.

It is important to note that exclusion does not stem only from mandatory fID systems or voluntary but service-integrated ID systems. There are also risks with voluntary ID systems where adequate measures are not taken to protect the data and interests of all those who are registered. Adequate data storage facilities, data protection designs, and data privacy regulation to protect the data of individuals are required; otherwise, individuals face increased risks of identity theft, fraud, and cybercrime, which would exclude and shut them off from fundamental services and infrastructures.

The history of political instability, violence and extremism, ethnic and religious conflicts, and disregard for the rule of law in many West African countries also heightens the risk of exclusion. Different instances of this abound, such as religious extremism, insurgencies, and armed conflicts in Northern Nigeria, civilian attacks and unrest in some communities in Burkina Faso, crises and terrorist attacks in Mali, election violence, and military intervention in State governance. An OECD report records over 3,317 violent events in West Africa between 2011 and 2019, with fatalities rising above 11,911 in that period. A UN report also puts the number of deaths in Burkina Faso at over 1,800 in 2019, with over 25,000 persons displaced in the same year. This instability can act as a barrier to registration for an fID and lead to exclusion where certain groups of persons are targeted and profiled by state and/or non-state (illegal) actors.

In addition to cases where registration is mandatory or where individuals are highly dependent on the infrastructures and services they wish to access, there may also be situations where people opt to rely less on the fID, or decide not to register at all, due to worries about surveillance, identity theft, or targeted disciplinary control, thereby excluding themselves from resources they would ordinarily have accessed. In Nigeria, only about 20% of the population is reported to have registered for the National Identification Number (NIN) (this was about 6% in 2017). Similarly, though implementation of WURI program objectives in Guinea and Cote d’Ivoire commenced in 2018, registration and identification output in both countries remains marginal to date.

World Bank findings and lessons from Phase I reveal that digital identification can exacerbate exclusion and marginalization, while diminishing privacy and control over data, despite the benefits it may carry. Some of the challenges identified by the World Bank resonate with the major concerns listed here, including risks of surveillance, discrimination, inequality, distrust between the State and individuals, and legal, political, and historical differences among countries. The solutions proposed under the WURI program objectives to address these problems – consultations, dialogues, ethnographic studies, and provision of additional financing and capacity – are laudable but insufficient to deal with the root causes. On the contrary, the solutions offered might reveal the inadequacies of a digitized State in West Africa, where a large proportion of West Africans are digitally illiterate, lack the means to access digital platforms, or operate largely in the informal sector.

Practically, the task of tactically addressing the root causes of most of the problems mentioned above, particularly the major ones involving political instability, institutional inadequacies, corruption, conflicts, and capacity building, is an arduous one that may require a more domestic, grassroots, bottom-up approach. However, the solution to these challenges is either unknown, difficult, or less desirable than the “quick fix” offered by techno-solutionism and reliance on digital identification.

  1. It is uncertain why the conventional wisdom is that West African countries, many of which have functional IDs, specifically need a national digital ID card system, while some of their developed counterparts in Europe and North America lack a national ID card and rely instead on different functional IDs.
  2. Identifying institution is used here to refer to any institution that seeks to authenticate the identity of a person based on the ID card or number that person possesses.
  3. A foundational identity system is an identity system which enables the creation of identities or unique identification numbers used for general purposes, such as national identity cards. A functional identity system is one that is created for, or evolves out of, a specific use case but may be suitable for use across other sectors; examples include driver’s licenses, voter’s cards, bank numbers, insurance numbers, insurance records, credit histories, health records, and tax records.
  4. Member States of ECOWAS include the Republic of Benin, Burkina Faso, Cape Verde, Cote d’Ivoire, the Gambia, Ghana, Guinea, Guinea Bissau, Liberia, Mali, Niger, Nigeria, Senegal, Sierra Leone, and Togo.
  5. See Savita Bailur, Helene Smertnik & Nnenna Nwakanma, End User Experience with identification in Côte d’Ivoire. Unpublished Report by Caribou Digital.

October 19, 2020. Ngozi Nwanta, JSD candidate, NYU School of Law, with research interests in systemic analysis of national identification systems, governance of credit data, financial inclusion, and development.