HUMAN RIGHTS MOVEMENT

Law Clinics Condemn U.S. Government Support for Haiti’s Regime as Country Faces Human Rights and Humanitarian Catastrophe

To mark the second anniversary of the assassination of Haitian President Jovenel Moïse, the Global Justice Clinic and the International Human Rights Clinic at Harvard Law School submitted a letter to Secretary of State Antony Blinken and Assistant Secretary Brian Nichols calling on the U.S. government to end its support for the de facto Ariel Henry administration. Progress on human rights and security, and a return to constitutional order, will only be possible if the Haitian people have the opportunity to change their government.

In the wake of Moïse’s murder, and at the urging of the United States, Dr. Henry assumed leadership as de facto prime minister. For the past two years, Dr. Henry has presided over a humanitarian and human rights catastrophe. He has consolidated power in what remains of Haiti’s institutions and has proposed to amend the Constitution in an unlawful manner. Further, there is evidence tying Dr. Henry to the assassination of President Moïse. Despite the monumental failure of his government, the United States continues to support this illegitimate and unpopular regime.

The letter declares that any transitional government must be evaluated against Haiti’s Constitution and established human rights principles. Proposals such as Dr. Henry’s that violate the spirit of the Constitution and further state capture cannot be a path to democracy.

This post was originally published as a press release on July 10, 2023 by the Global Justice Clinic at NYU School of Law and the International Human Rights Clinic at Harvard Law School.

CLIMATE & ENVIRONMENT

Guyanese Indigenous Council Rejects Canadian Mining Company’s Flimsy Environmental and Social Impact Assessment, Calls for Rejection of Mining Permit

The Global Justice Clinic has been working with the South Rupununi District Council (SRDC) since 2016. Through the clinic, students have provided data analysis and legal support for monitoring activity undertaken by the SRDC. 

Last week, the South Rupununi District Council (SRDC), a legal representative institution of the Wapichan people, released a statement forcefully denouncing the procedurally and substantively defective environmental and social impact assessment (ESIA) submitted by a Canadian mining company (Guyana Goldstrike) seeking to begin large-scale mining operations on Marutu Taawa through its Guyanese subsidiary (Romanex). Marutu Taawa, also known as Marudi Mountain, stands deep in the traditional territory of the Wapichan and holds historical, cultural, spiritual, and biological significance for the entire region. Because Marutu Taawa sits at a critical watershed, the environmental impact of large-scale mining operations would threaten the ability of the Wapichan people to continue living in the ancestral lands they have called home for centuries.

Notwithstanding this threat, the SRDC finds that Guyana Goldstrike’s ESIA relies on incomplete, inaccurate, or decades-old information to ignore the substantial environmental, public health, and cultural consequences that would follow if such mining operations were allowed to proceed. The SRDC also strongly condemns the mining company’s failure to consult the Council, as a legal representative institution of the Wapichan people, a failure that stands in direct violation of both Guyanese and international law.

Given the inadequacy of the ESIA and Guyana Goldstrike’s flouting of domestic and international law, the SRDC has strongly encouraged the Guyanese Environmental Protection Agency (EPA) to deny the Canadian company’s subsidiary the environmental permit needed to initiate large-scale operations in the territory. The SRDC also calls on the EPA to oversee a process that ensures that Guyana Goldstrike and Romanex adhere to Guyanese and international law and best practices in the international mining sector.

This post was originally published as a press release on September 28, 2018.

HUMAN RIGHTS MOVEMENT

Recommendations to Funders to Improve Mental Health and Wellbeing in the Human Rights Field 

Improving and maintaining well-being is essential to individual health, to organizational functioning, and to the sustainability and effectiveness of the human rights field as a whole. There are many concrete, immediately actionable reforms that are achievable in the near term and address a variety of causes of distress, or that can support efforts to transform the field over the long term. Such steps should be taken while the human rights field works toward deep transformation.

Human rights advocacy can be a source of significant joy, purpose, political agency, belonging, and community. Yet advocates can also experience harm and trauma in their efforts to advance justice and equality, including from heavy workloads, time pressures, discrimination and bullying in the workplace, vicarious exposure to trauma and human rights abuse, and direct experience of threats and attacks. As a result, advocates can experience suffering, sometimes very severe, including demotivation, alienation, anxiety, fear, depression, and post-traumatic stress disorder. How advocates experience their work and any resulting harms can vary widely, and may be highly contextual and culturally specific.

Positively transforming mental health and well-being in the human rights field will require significant reforms: both structural changes and close attention to the contextually specific needs of individual advocates and organizations. The causes and dynamics at play are complex, and there are no quick fixes that can deliver the cultural shifts required. As efforts are made to improve well-being, it is important that the field avoid tick-the-box or commodified approaches. Improving the well-being of human rights advocates requires a holistic response and a movement-wide prioritization of well-being, with careful attention to context, culture, and the diverse needs of advocates and organizations.

Recognition of the deeply rooted problems requiring radical change, or of the complexity of these issues and the difficulty of defining a clear set of recommendations applicable across the board, should not become an excuse to take no action now to improve well-being. Many such reforms are achievable immediately and can be pursued while the field works toward deeper transformation. Some of these steps include the following recommended actions, which are drawn from our research with advocates around the world.

INEQUALITIES

Rights groups warn private healthcare is failing many, draining public resources

Government-backed expansion of the private healthcare sector in Kenya is leading to exclusion and setting back the goal of universal health coverage, said two rights groups in a report released today. National policies intended to increase private sector participation in healthcare, alongside chronic underinvestment in the public system, have contributed to an explosion of for-profit private actors who often provide poor value for money, neglect public health priorities, and push Kenyans into poverty and crushing debt.

The 49-page report, “Wrong Prescription: The Impact of Privatizing Healthcare in Kenya,” is authored by Hakijamii and the Center for Human Rights and Global Justice at New York University. It finds that privatization has proven costly for individuals and the government, has shut people out of access to healthcare, and is undermining the right to health. The government’s signature policy for achieving universal health coverage—the planned expansion of private-sector friendly social insurance through the National Hospital Insurance Fund (NHIF)—risks exacerbating these problems.

“Privatization is the wrong prescription for achieving universal health coverage,” said Philip Alston, former United Nations Special Rapporteur and co-author of the report. “Proponents of private healthcare make all sorts of promises about how it will lower costs and improve access, but our research finds private actors have really failed to deliver.”

“Promoters of private care have gravely misdiagnosed the situation,” said Nicholas Orago, Executive Director of Hakijamii and co-author of the report. “While many associate private care with high-quality facilities, the ‘haves’ and ‘have nots’ experience entirely different private sectors. Private healthcare has been disastrous for poor and vulnerable communities, who are left with low-quality, low-cost providers peddling services that are too often unsafe or even illegal.”

Privatizing care has proven costly for both individuals and the government. The private health sector relies heavily on government funding, including tens of billions of shillings each year to contract with private facilities, subsidize access to private care, and pay for secretive public-private partnerships. Individuals face excessively high fees at private facilities, where treatment can cost more than twelve times as much as in the public sector.

“Healthcare is a big business, with global corporations and private equity firms lining up to profit off the sector in Kenya,” said Rebecca Riddell, Co-director of the Human Rights and Privatization Project at the Center and co-author of the report. “These companies expect returns on their investments, leading to overwhelmingly higher prices in the private sector while scarce public resources prop up private profits.”

The report draws from more than 180 interviews with healthcare users and providers, government officials, and experts. Researchers spoke with community members from informal settlements in Mombasa and Nairobi as well as rural areas in Isiolo. Many described being excluded from private care or facing hardships to afford treatment, such as selling important assets like land or forgoing educational and livelihood opportunities. Others described tragic consequences of low-quality care at private providers, including unnecessary deaths and disabilities. The impact has been particularly severe for people who are poor or low income, women, people with disabilities, and those in rural areas.

Researchers also found that the private sector in Kenya is concentrated in more profitable forms of care, and has neglected less commercially viable areas, patients, and services. Private sector healthcare workers described having to meet patient “targets” as well as working in conditions significantly inferior to those in the public sector.

“The disconnect between profits and public health goals should cause policymakers to rethink their reliance on the private sector,” said Bassam Khawaja, Co-director of the Human Rights and Privatization Project and report co-author. “Many essential health services are incredibly valuable or even lifesaving but may not be profitable as one-off transactions.”

The anticipated nationwide rollout of mandatory NHIF coverage will divert more public money to private actors without preventing exclusion and high costs. Though the NHIF is a public insurer, it contracts extensively with private facilities, offers private providers higher reimbursement rates, and sends most of its claims money to private actors. “Expanding coverage through the NHIF instead of investing in a strong public health system is a major step backwards,” Orago said.

Much of the pressure to privatize has come from external actors in the global North. Key development actors, including international financial institutions, private foundations, and wealthy countries looking for new markets, have urged Kenya to increase the private sector’s role in health.

“An ideological commitment to the private sector has trumped the rights of the Kenyan people, as development actors promote private care and financing without accountability,” Alston said. “The extreme secrecy around many arrangements with the private health sector opens the door to corruption and self-dealing.”

The report concludes that the government should rethink its support for the private sector and prioritize the public healthcare system, which still delivers the majority of inpatient and outpatient care in Kenya despite being starved of resources. “While the government should address serious shortcomings in the public system, popular recent investments illustrate an enduring appetite for public care,” said Alston.

“With sufficient political will and resources, the public healthcare system is best positioned to provide all Kenyans with the accessible, affordable, and quality healthcare that they have a right to,” said Orago.

This post was originally published as a press release on November 16, 2021.

TECHNOLOGY & HUMAN RIGHTS

Co-creating a Shared Human Rights Agenda for AI Regulation and the Digital Welfare State

On September 26, 2023, the Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at NYU Law and Amnesty Tech’s Algorithmic Accountability Lab (AAL) brought together 50 participants from civil society organizations across the globe for a collaborative online strategy session, entitled ‘Co-Creating a Shared Human Rights Agenda for AI and the Digital Welfare State,’ to discuss the use and regulation of artificial intelligence in the public sector. Participants spanned diverse geographies and contexts—from Nigeria to Chile, and from Pakistan to Brazil—and included organizations working across a broad spectrum of human rights issues such as privacy, social security, education, and health. Through a series of lightning talks and breakout room discussions, the session surfaced shared concerns regarding the use of AI in public sector contexts, key gaps in existing discussions surrounding AI regulation, and potential joint advocacy opportunities.

Global discussions on the regulation of artificial intelligence (AI) have, in many contexts, thus far been preoccupied with whether to place meaningful constraints on the development, sale, and use of AI by private technology companies. Less attention has been paid to the need to place similar constraints on governments’ use of AI. But governments’ enthusiastic adoption of AI across public sector programs and critical public services has been accelerating apace around the world. AI-based systems are consistently tested in spheres where some of the most marginalized and low-income groups are unable to opt out – for instance, machine learning and other technologies are used to detect welfare benefit fraud, to assess vulnerability and determine eligibility for social benefits like housing, and to monitor people on the move. All too often, however, this technological experimentation results in discrimination, restriction of access to key services, privacy violations, and many other human rights harms. As governments eagerly build “digital welfare states,” incorporating AI into critical public services, the scale and severity of potential implications demands that meaningful constraints be placed on these developments. 

In the past few years, a wide array of regulatory and policy initiatives aimed at regulating the development and use of AI have been introduced – in Brazil, China, Canada, the EU, and the African Commission on Human and Peoples’ Rights, among many other countries and policy fora. However, what is emerging from these initiatives is an uneven patchwork of approaches to AI regulation, with concerning gaps and omissions when it comes to public sector applications of AI. Some of the world’s largest economies – where many powerful technology companies are based – are embarking on new regulatory initiatives with impacts far beyond their territorial confines, while many of the groups likely to be most affected have not been given sufficient opportunities to participate in these processes.

Despite these shortcomings, ongoing efforts to craft regulatory regimes do offer a crucial and urgent entry point for civil society organizations to seek to highlight critical gaps, to foster greater participation, and to contribute to shaping future deployments of AI in these important sectors.

In hosting this collaborative event on AI regulation and the digital welfare state, the AAL and the Center sought to build an inclusive space for civil society groups from across regions and sectors to forge new connections, share lessons, and collectively strategize. We sought to expand mobilization and build solidarity by convening individuals from dozens of countries, who work across a wide range of fields – including “digital rights” organizations, but also bringing in human rights and social justice groups who have not previously worked on issues relating to new technologies. Our aim was to brainstorm how actors across the human rights ecosystem can, in practice, help to elevate more voices into ongoing discussions about AI regulation.

Key issues for AI regulation in the digital welfare state

In breakout sessions, participants emphasized the urgent need to address serious harms that are already resulting from governments’ AI uses, particularly in contexts such as border control, policing, the judicial system, healthcare, and social protection. The public narrative – and accelerated impetus for regulation – has been dominated by discussion of existential threats AI may pose in the future, rather than the severe and widespread threats that are already seen in almost every area of public services. In Serbia, the roll-out of Social Cards in the welfare system has excluded thousands of the most marginalized from accessing their social protection entitlements; in Brazil, the deployment of facial recognition in public schools has subjected young children to discriminatory biases and serious privacy risks. Deployments of AI across public services are consistently entrenching inequalities and exacerbating intersecting discrimination – and participants noted that governments’ increasing interest in generative AI, which has the potential to encode harmful racial bias and stereotypes, will likely only intensify these risks.

Participants also noted that it is likely that AI will continue to impact groups that may defy traditional categorizations – including, for instance, those who speak minority languages. Indeed, a key theme across discussions was the insufficient attention paid in regulatory debates to AI’s impacts on culture and language. Given that systems are generally trained only in dominant languages, breakout discussions surfaced concerns about the potential erasure of traditional languages and loss of cultural nuance.

As advocates work not only to remedy some of these existing harms, but also to anticipate the impacts of the next iterations of AI, many expressed concern about the dominant role that the private sector plays in governments’ roll-outs of AI systems, as well as in discussions surrounding regulation. Where tech companies – who are often protected by powerful lobby groups, commercial confidentiality, and intellectual property regimes – are selling combinations of software, hardware, and technical guidance to governments, this can pose significant transparency challenges. It can be difficult for civil society organizations and affected individuals to understand who is providing these systems, as well as to understand how decisions are made. In the welfare context, for example, beneficiaries are often unaware of whether and how AI systems are making highly consequential decisions about their entitlements. Participants noted that human rights actors need the capacity and resources to move beyond traditional human rights work, to engage with processes such as procurement, standard-setting, and auditing, and to address issues related to intellectual property regimes and proliferating public-private partnerships underlying governments’ uses of AI.

These issues are compounded by the fact that, in many instances, AI-based systems are designed and built in countries such as the US and then marketed and sold to governments around the world for use across critical public services. Often, these systems are not designed with sensitivity to local contexts, cultures, and languages, nor with cognizance of how the technology will interface with the political, social, and economic landscape where it is deployed. Civil society organizations also face further barriers when seeking transparency and access to information from foreign companies. As AI regulation efforts advance, a failure to consider potential extraterritorial harms will leave a significant accountability gap and risk deepening global inequalities. Many participants therefore noted the importance both of ensuring that regulation in countries where tech companies are based includes diverse voices and addresses extraterritorial impacts, and of ensuring that Global North models of regulation, which may not be fit for purpose, are not automatically “exported.”

A way forward

The event ended with a strategizing session that revealed the diverse strengths of the human rights movement and multiple areas for future work. Several specific and urgent calls to action emerged from these discussions.

First, given the disproportionate impacts of governments’ AI deployments on marginalized communities, a key theme was the need for broader participation in discussions on emerging AI regulation. This includes specially protected groups such as indigenous peoples, minoritized ethnic and racial groups, immigrant communities, people with disabilities, women’s rights activists, children, and LGBTQ+ groups, to name just a few. Without learning from and elevating the perspectives and experiences of these groups, regulatory initiatives will fail to address the full scope of the realities of AI. We must therefore develop participatory methodologies that bring the voices of communities into key policy spaces. More routes to meaningful consultation would lead to greater power and autonomy for previously marginalized voices to shape a more human rights-centric agenda for AI regulation. 

Second, the unique impacts that public sector use of AI can have on human rights, especially for marginalized groups, demand a comprehensive approach to AI regulation that takes careful account of specific sectors. Regulatory regimes that fail to include meaningful sector-specific safeguards for areas such as health, education, and social security will fail to address the full range of AI-related harms. Participants noted that existing tools and mechanisms can provide a starting point – such as consultation and testing requirements, specific prohibitions on certain kinds of systems, proportionality requirements, mandatory human rights impact assessments, transparency requirements, periodic evaluations, and supervision mechanisms.

Finally, there was a shared desire to build stronger solidarity across a wider range of actors, and a call to action for more effective collaborations. Participants from around the world were keen to share resources, partner on specific advocacy goals, and exchange lessons learned. Since participants focus on many diverse issues, and adopt different approaches to achieve better human rights outcomes, collaboration will allow us to draw on a much deeper pool of collective knowledge, methodologies, and networks. It will be especially critical to bridge silos between those who identify more as “digital rights” organizations and groups working on issues such as healthcare, or migrants’ rights, or on the rights of people with disabilities. Elevating the work of grassroots groups, and improving diversity and representation among those empowered to enter spaces where key decisions around AI regulation are made, should also be central in movement-building. 

There is also an urgent need for more exchange not only across the human rights ecosystem, but also with actors from other disciplines who bring different forms of technical expertise, such as engineers and public interest technologists. Given the barriers to entry to regulatory spaces – including the resources, long-term commitment, and technical vocabularies they demand – effective coalition-building and information sharing could help to lessen these burdens.

While this event brought together a fantastic and energetic group of advocates from dozens of countries, these takeaways reflect the views of only a small subset of the relevant stakeholders in these debates. We ended the session hopeful, but with the recognition that there is a great deal more work needed to allow for the full participation of affected communities from around the world. Moving forward, we aim to continue to create spaces for varied groups to self-organize, continue the dialogue, and share information. We will help foster collaborations and concretely support organizations in building new partnerships across sectors and geographies, and hope to continue to co-create a shared human rights agenda for AI regulation for the digital welfare state.

As we continue this work and seek to support efforts and build collaborations, we would love to hear from you – please get in touch if you are interested in joining these efforts.

November 14, 2023. Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law, and Amnesty Tech’s Algorithmic Accountability Lab.

TECHNOLOGY & HUMAN RIGHTS

Shaping Digital Standards

An Explainer and Recommendations on Technical Standard-Setting for Digital Identity Systems.

In April 2023, we submitted comments to the United States National Institute of Standards and Technology (NIST) to contribute to its Guidelines on Digital Identity. Given that the Guidelines are highly technical and written for a specialist audience, we published this short “explainer” document in the hope of providing a resource that empowers other civil society organizations and public interest lawyers to engage with technical standard-setting bodies and to raise human rights concerns related to digitalization in the future. This document therefore sets out the importance of standards bodies, provides an accessible “explainer” on the Digital Identity Guidelines, and summarizes our comments and recommendations.

The National Institute of Standards and Technology (NIST), which is part of the U.S. Department of Commerce, is a prominent and powerful standards body. Its standards are influential, shaping the design of digital systems in the United States and elsewhere. Over the past few years, NIST has been in the process of creating and updating a set of official Guidelines on Digital Identity, which “present the process and technical requirements for meeting digital identity management assurance levels … including requirements for security and privacy as well as considerations for fostering equity and the usability of digital identity solutions and technology.”

The primary audiences for the Guidelines are IT professionals and senior administrators in U.S. federal agencies that utilize, maintain, or develop digital identity technologies to advance their missions. The Guidelines fall under a wider NIST initiative to design a Roadmap on Identity Access and Management that explores topics like accelerating the adoption of mobile driver’s licenses, expanding biometric measurement programs, promoting interoperability, and modernizing identity management for U.S. federal government employees and contractors.

This technical guidance is particularly influential, as it shapes decision-making surrounding the design and architecture of digital identity systems. Biometric, identity, and security companies frequently cite their compliance with NIST standards to promote their technology and to convince governments to purchase their hardware and software products to build digital identity systems. Other technical standards bodies look to NIST and cite NIST standards. These technical guidelines thus have a great deal of influence well beyond the United States, affecting what is deemed acceptable within digital identity systems, such as how and when biometrics can be used.

Such technical standards are therefore of vital relevance to all those who are working on digital identity. In particular, these standards warrant the attention of civil society organizations and groups who are concerned with the ways in which digital identity systems have been associated with discrimination, denial of services, violations of privacy and data protection, surveillance, and other human rights violations. Through this explainer, we hope to provide a resource that can be helpful to such organizations, enabling and encouraging them to contribute to technical standard-setting processes in the future and to bring human rights considerations and recommendations into the standards that shape the design of digital systems. 

TECHNOLOGY & HUMAN RIGHTS

Pilots, Pushbacks, and the Panopticon: Digital Technologies at the EU’s Borders

The European Union is increasingly introducing digital technologies into its border control operations. But conversations about these emerging “digital borders” are often silent about the significant harms experienced by those subjected to these technologies, their experimental nature, and their discriminatory impacts.

On October 27, 2021, we hosted the eighth episode in our Transformer States Series on Digital Government and Human Rights, an event entitled “Artificial Borders? The Digital and Extraterritorial Protection of ‘Fortress Europe.’” Christiaan van Veen and Ngozi Nwanta interviewed Petra Molnar about the European Union’s introduction of digital technologies into its border control and migration management operations. This blog post outlines key themes from the conversation.

Digital technologies are increasingly central to the EU’s efforts to curb migration and “secure” its borders. Against a background of growing violent pushbacks, surveillance technologies such as unpiloted drones and aerostat machines with thermo-vision sensors are being deployed at the borders. The EU-funded “ROBORDER” project aims to develop “a fully-functional autonomous border surveillance system with unmanned mobile robots.” Refugee camps on the EU’s borders, meanwhile, are being turned into a “surveillance panopticon,” as the adults and children living within them are constantly monitored by cameras, drones, and motion-detection sensors. Technologies also mediate immigration and refugee determination processes, from automated decision-making to social media screening to a pilot AI-driven “lie detector.”

In this Transformer States conversation, Petra argued that technologies are enabling a “sharpening” of existing border control policies. As discussed in her excellent report entitled “Technological Testing Grounds,” completed with European Digital Rights and the Refugee Law Lab, new technologies are not only being used at the EU’s borders, but also to surveil and control communities on the move before they reach European territory. The EU has long practiced “border externalization,” where it shifts its border control operations ever-further away from its physical territory, partly through contracting non-Member States to try to prevent migration. New technologies are increasingly instrumental in these aims. The EU is funding African states’ construction of biometric ID systems for migration control purposes; it is providing cameras and surveillance software to third countries to prevent travel towards Europe; and it supports efforts to predict migration flows through big data-driven modeling. Further, borders are increasingly “located” on our smartphones and in enormous databases as data-based risk profiles and pre-screening become a central part of the EU’s border control agenda.

Ignoring human experience and impacts

But all too often, discussions about these technologies are sanitized and depoliticized. People on the move are viewed as a security problem, and policymakers, consultancies, and the private sector focus on the “opportunities” presented by technologies in securitizing borders and “preventing migration.” The human stories of those who are subjected to these new technological tools and the discriminatory and deadly realities of “digital borders” are ignored within these technocratic discussions. Some EU policy documents describe the “European Border Surveillance System” without mentioning people at all.

In this interview, Petra emphasized these silences. She noted that “human experience has been left to the wayside.” First-person accounts of the harmful impacts of these technologies are not deemed to be “expert knowledge” by policymakers in Brussels, but it is vital to expose the human realities and counter the sanitized policy discussions. Those who are subjected to constant surveillance and tracking are dehumanized: Petra reports that some are left feeling “like a piece of meat without a life, just fingerprints and eye scans.” People are being forced to take ever-deadlier routes to avoid high-tech surveillance infrastructures, and technology-enabled interdictions and pushbacks are leading to deaths. Further, difference in treatment is baked into these technological systems, as they enable and exacerbate discriminatory inferences along racialized lines. As UN Special Rapporteur on Racism E. Tendayi Achiume writes, “digital border technologies are reinforcing parallel border regimes that segregate the mobility and migration of different groups” and are being deployed in racially discriminatory ways. Indeed, some algorithmic “risk assessments” of migrants have been argued to represent racial profiling.

Policy discussions about “digital borders” also do not acknowledge that, while the EU spends vast sums on technologies, the refugee camps at its borders have neither running water nor sufficient food. Enormous investment in digital migration management infrastructures is being “prioritized over human rights.” As one man commented, “now we have flying computers instead of more asylum.”

Technological experimentation and pilot programs in “gray zones”

Crucially, these developments are occurring within largely unregulated spaces. A central theme of this Transformer States conversation—mirroring the title of Petra’s report, “Technological Testing Grounds”—was the notion of experimentation within the “gray zones” of border control and migration management. Not only are non-citizens and stateless persons accorded fewer rights and protections than EU citizens, but immigration and asylum decision-making is also an area of law that is highly discretionary and contains fewer legal safeguards.

This low-rights, high-discretion environment makes it ripe for the testing of new technologies. This is especially the case in “external” spaces far from European territory, which are subject to even less regulation. Projects that would not be allowed in other spaces are being tested on populations who are literally at the margins, as refugee camps become testing zones. The abovementioned “lie detector,” whereby an “avatar” border guard flagged “biomarkers of deceit,” was “merely” a pilot program. It has since been fiercely criticized, including by the European Parliament, and challenged in court.

Experimentation is deliberately occurring in these zones because refugees and migrants have limited opportunities to challenge it. The UN Special Rapporteur on Racism has noted that digital technologies in this area are therefore “uniquely experimental.” This has parallels with our work, in which we consistently see governments and international organizations piloting new technologies on marginalized and low-income communities. In a previous Transformer States conversation, we discussed Australia’s Cashless Debit Card system, in which technologies were deployed upon Aboriginal people through a pilot program. In the UK, radical reform of the welfare system through digitalization was also piloted on low-income groups, with “catastrophic” effects.

Where these developments are occurring within largely unregulated areas, human rights norms and institutions may prove useful. As Petra noted, the human rights framework requires courts and policymakers to focus upon the human impacts of these digital border technologies, and highlights the discriminatory lines along which their effects are felt. The UN Special Rapporteur on Racism has outlined how human rights norms require mandatory impact assessments, moratoria on surveillance technologies, and strong regulation to prevent discrimination and harm.

November 23, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.

CLIMATE & ENVIRONMENT

Carbon Markets, Forests and Rights

An Introductory Series for Indigenous Peoples

Indigenous peoples are experiencing a rush of interest in their lands and territories from actors involved in carbon markets. Many indigenous communities have expressed that to make informed decisions about how to engage with carbon markets, they need accessible information about what these markets are, and how participating in them may affect their rights.

In response to this demand for information, the Global Justice Clinic and the Forest Peoples Programme have developed a series of introductory materials about carbon markets. The materials were initially developed for GJC partner the South Rupununi District Council in Guyana and have been adapted for a global audience.

The explainer materials can be read in any order:

  • Explainer 1 introduces key concepts that are essential background to understanding carbon markets. It introduces what climate change is, what the carbon cycle and carbon dioxide are, and the link between carbon dioxide, forests, and climate change.
  • Explainer 2 outlines what carbon markets and carbon credits are, and provides a brief introduction to why these markets are developing and how they function.
  • Explainer 3 focuses on indigenous peoples’ rights and carbon markets. It highlights some of the particular risks that carbon markets pose to indigenous peoples and communities, as well as key questions communities should ask themselves as they consider how to engage with or respond to carbon markets.
  • Explainer 4 provides an overview of the key environmental critiques of and concerns about carbon markets.
  • Explainer 5 provides a short introduction to ART-TREES, an institution and standard that is involved in ‘certifying’ carbon credits and that is gaining significant attention internationally.

TECHNOLOGY & HUMAN RIGHTS

Digital Identification and Inclusionary Delusion in West Africa 

Over 1 billion people worldwide have been categorized as invisible, of whom about 437 million are reported to be in sub-Saharan Africa. In West Africa alone, the World Bank has identified a huge “identification gap,” and different identification projects are underway to identify millions of invisible West Africans.[1] These individuals are regarded as invisible not because they are unrecognizable or non-existent, but because they do not fit a certain measure of visibility that matches the existing or new database(s) of an identifying institution[2], such as the State or international bodies.

One existing digital identification project in West Africa is the West Africa Unique Identification for Regional Integration and Inclusion (WURI) program, initiated by the World Bank under its Identification for Development initiative. The WURI program is intended to serve as an umbrella under which West African States can collaborate with the Economic Community of West African States (ECOWAS) to design and build a digital identification system, financed by the World Bank, that would create foundational IDs (fIDs)[3] for all persons in the ECOWAS region.[4] Many West African States with past failed attempts at digitizing their identification systems have embraced assistance via WURI. The goal of WURI is to enable access to services for millions of people and ensure “mutual recognition of identities” across countries. The promise of digital identification is that it will facilitate development by promoting regional integration, security, social protection of aid beneficiaries, financial inclusion, reduction of poverty and corruption, and healthcare insurance and delivery, and by acting as a stepping stone to an integrated digital economy in West Africa. In this way, millions of invisible individuals would become visible to the state and become financially, politically, and socially included.

Nevertheless, the outlook of WURI and development agencies’ embrace of digital IDs reflect a reliance on technology, also known as techno-solutionism, as the approach to institutional challenges and development goals in West Africa. This reliance on digital technologies does not address some of the major root causes of developmental delays in these countries and may instead worsen the state of things by excluding the vast majority of people who are either unable to be identified or are excluded by virtue of technological failures. This exclusion emerges in a number of ways, including through the service-based structure and/or mandatory nature of many digital identification projects, which adopt a stance of exclusion before inclusion. This means that where access to services and infrastructures, such as opening a bank account, registering SIM cards, getting healthcare, or receiving government aid and benefits, is conditioned on registration and possession of a national ID card or unique identification number (UIN), individuals are excluded unless they register for and possess the national ID card or UIN.

There are three contexts in which exclusion may arise. First, an individual may be unable to register for an fID. For instance, in Kenya, many individuals without identity verification documents like birth certificates were excluded from the registration process for its fID, Huduma Namba. A second context arises where an individual is unable to obtain an fID card or unique identification number (UIN) after registration. This is the case in Nigeria, where the National Identity Management Commission has been unable to deliver ID cards to the majority of those who have registered under the identity program. The risk of exclusion may increase in Nigeria when the government conditions access to services on possession of an fID card or UIN.

A third scenario involves the inability of an individual to access infrastructures after obtaining an fID card or UIN, due to the breakdown or malfunctioning of the identifying institution’s authentication technology. In Tanzania, for example, although some individuals have the fID card or UIN, they are unable to proceed with their SIM registration due to breakdowns in the data storage systems. There are also numerous reports of people in India being denied access to services because of technology failures. This leaves a large group of individuals vulnerable, particularly where the fID is required to access key services such as SIM card registration. An unpublished 2018 poll carried out in Cote d’Ivoire reveals that over 65% of those who registered for the national ID used it to apply for SIM card services, and about 23% for financial services.[5]

The mandatory or service-based model of most identification systems in West Africa takes away individuals’ powers and rights of access to and control over resources and identity, and confers them on the State and private institutions, raising human rights concerns for those who are unable to meet the criteria for registration and identification. Thus, a person who would ordinarily move around freely, shop at a grocery store, open a bank account, or receive healthcare from a hospital can only do so, once mandatory use of the fID commences, through possession of the fID card or UIN. In Nigeria, for instance, the new national computerized identity card is equipped with a microprocessor designed to host and store multiple e-services and applications, such as a biometric e-ID, a payment application, and a travel document, and to serve as the individual’s national identity card. A Thales publication also states that in a second phase of the Nigerian fID, driver’s license, eVoting, eHealth, or eTransport applications are to be added to the cards. This is a long list of e-services for a country where only about 46% of the population is reported to have access to the internet. Where a person loses this ID card or is unable to provide the UIN that digitally represents them, that person would potentially be excluded from all the services and infrastructures to which the fID card or UIN serves as a gateway. This exclusion risk is intensified by the fact that identifying institutions in remote or local areas may lack authentication technologies or an electronic connection to the ID database to verify the existence of individuals whenever they seek to be identified, make a payment, receive healthcare, or travel.

It is important to note that exclusion does not stem only from mandatory fID systems or voluntary but service-integrated ID systems. There are also risks with voluntary ID systems where adequate measures are not taken to protect the data and interests of all those who are registered. Adequate data storage facilities, data protection designs, and data privacy regulation to protect individuals’ data are required; otherwise, individuals face increased risks of identity theft, fraud, and cybercrime, which would exclude and shut them off from fundamental services and infrastructures.

The history of political instability, violence and extremism, ethnic and religious conflicts, and disregard for the rule of law in many West African countries also heightens the risk of exclusion. Instances abound: religious extremism, insurgencies, and armed conflicts in Northern Nigeria; civilian attacks and unrest in some communities in Burkina Faso; crises and terrorist attacks in Mali; election violence; and military intervention in State governance. An OECD report records over 3,317 violent events in West Africa between 2011 and 2019, with fatalities rising above 11,911 over that period. A UN report also puts the number of deaths in Burkina Faso at over 1,800 in 2019, with over 25,000 persons displaced in the same year. This instability can act as a barrier to registration for an fID and lead to exclusion where certain groups of persons are targeted and profiled by state and/or non-state (illegal) actors.

In addition to cases where registration is mandatory or where individuals are highly dependent on the infrastructures and services they wish to access, there may also be situations where people opt to rely less on the fID or decide not to register due to worries about surveillance, identity theft, or targeted disciplinary control, thereby excluding themselves from resources they would ordinarily have accessed. In Nigeria, only about 20% of the population is reported to have registered for the National Identification Number (NIN) (about 6% in 2017). Similarly, though implementation of WURI program objectives in Guinea and Cote d’Ivoire commenced in 2018, registration and identification output in both countries remains very low to date.

World Bank findings and lessons from Phase I reveal that digital identification can exacerbate exclusion and marginalization, while diminishing privacy and control over data, despite the benefits it may carry. Some of the challenges identified by the World Bank resonate with the major concerns listed here, including risks of surveillance, discrimination, inequality, distrust between the State and individuals, and legal, political, and historical differences among countries. The solutions proposed under the WURI program objectives to address these problems – consultations, dialogues, ethnographic studies, and provision of additional financing and capacity – are laudable but insufficient to deal with the root causes. On the contrary, the solutions offered might reveal the inadequacies of a digitized State in West Africa, where a large share of the population is digitally illiterate, lacks the means to access digital platforms, or operates largely in the informal sector.

Practically, the task of addressing the root causes of most of the problems mentioned above, particularly the major ones involving political instability, institutional inadequacies, corruption, conflicts, and capacity building, is an arduous one that may require a more domestic, grassroots, bottom-up approach. However, the solution to these challenges is either unknown, difficult, or less desirable than the “quick fix” offered by techno-solutionism and reliance on digital identification.

  1. It is uncertain why the conventional wisdom is that West African countries, many of which have functional IDs, specifically need a national digital ID card system while some of their developed counterparts in Europe and North America lack a national ID card and instead rely on different functional IDs.
  2. Identifying institution is used here to refer to any institution that seeks to authenticate the identity of a person based on the ID card or number that person possesses.
  3. A foundational identity system enables the creation of identities or unique identification numbers used for general purposes, such as national identity cards. A functional identity system is one that is created for, or evolves out of, a specific use case, such as a driver’s license, voter’s card, bank number, insurance number, credit history, health record, or tax records, but may be suitable for use across other sectors.
  4. Member States of ECOWAS include the Republic of Benin, Burkina Faso, Cape Verde, the Gambia, Ghana, Guinea, Guinea Bissau, Liberia, Mali, Niger, Nigeria, Senegal, Sierra Leone, Togo.
  5. See Savita Bailur, Helene Smertnik & Nnenna Nwakanma, End User Experience with identification in Côte d’Ivoire. Unpublished Report by Caribou Digital.

October 19, 2020. Ngozi Nwanta, JSD program, NYU School of Law, with research interests in systemic analysis of national identification systems, governance of credit data, financial inclusion, and development.

TECHNOLOGY & HUMAN RIGHTS

Social Credit in China: Looking Beyond the “Black Mirror” Nightmare

The Chinese government’s Social Credit program has received much attention from Western media and academics, but misrepresentations have led to confusion over what it truly entails. Such mischaracterizations unhelpfully distract from the dangers and impacts of Social Credit as it actually exists. On March 31, 2021, Christiaan van Veen and I hosted the sixth event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Dr. Chenchen Zhang, Assistant Professor at Queen’s University Belfast, to explore the much-discussed but little-understood Social Credit program in China.

Though the Chinese government’s Social Credit program has received significant attention from Western media and rights organizations, much of this discussion has misrepresented the program. Social Credit is imagined as a comprehensive, nationwide system in which every action is monitored and a single score is assigned to each individual, much like a Black Mirror episode. This is in fact quite far from reality. But this image has become entrenched in the West, as discussions and some academic debate have focused on abstracted portrayals of what Social Credit could be. In addition, the widely discussed voluntary, private systems run by corporations, such as Alipay’s Sesame Credit or Tencent’s WeChat score, are often mistakenly conflated with the government’s Social Credit program.

Jeremy Daum has argued that these widespread misrepresentations of Social Credit serve to distract from examining “the true causes for concern” within the systems actually in place. They also distract from similar technological developments occurring in the West, which seem acceptable by comparison. An accurate understanding is required to acknowledge the human rights concerns that this program raises.

The crucial starting point here is that the government’s Social Credit system is a heterogeneous assemblage of fragmented and decentralized systems. Central government, specific government agencies, public transport networks, municipal governments, and others are experimenting with diverse initiatives with different aims. Indeed, xinyong, the term which is translated as “credit” in Social Credit, encompasses notions of financial creditworthiness, regulatory compliance, and moral trustworthiness, therefore covering programs with different visions and narratives. A common thread across these systems is a reliance on information-sharing and lists to encourage or discourage certain behaviors, including blacklists to “shame” wrongdoers and “redlists” publicizing those with a good record.

One national-level program, the Joint Rewards and Sanctions mechanism, shares information across government agencies about companies that have violated regulations. Once a company is included on one agency’s blacklist for having, for example, failed to pay migrant workers’ wages, other agencies may also sanction that company and refuse to grant it a license or contract. But blacklisting mechanisms also affect individuals: the People’s Court of China maintains a list of shixin (dishonest) people who default on judgments. Individuals on this list are prevented from accessing “non-essential consumption” (including travel by plane or high-speed train) and their names are published, adding an element of public shaming. Other local or sector-specific “credit” programs aim at disciplining individual behavior: anyone caught smoking on the high-speed train is placed on the railway system’s list of shixin persons and subjected to a six-month ban from taking the train. Localized “citizen scoring” schemes are also being piloted in a dozen cities. Currently, these resemble “club membership” schemes with minor benefits and have low sign-up rates; some have been very controversial. In 2019, in response to controversies, the National Development and Reform Commission issued guidelines stating that citizen scores must only be used for incentivizing behavior and not as sanctions or to limit access to basic public services. Presently, each of the systems described here is separate from the others.

But even where generalizations and mischaracterizations of Social Credit are dispelled, many aspects nonetheless raise significant concerns. Such systems will, of course, exacerbate problems surrounding privacy, chilling effects, discrimination, and disproportionate punishment. These have been explored at length elsewhere, but this conversation with Chenchen raised additional important issues.

First, a stated objective behind the use of blacklists and shaming is the need to encourage compliance with existing laws and regulations, since non-compliance undermines market order. This is not a unique approach: the US Department of Labor names and shames corporations that violate labor laws, and the World Bank has a similar mechanism. But the laws which are enforced through Social Credit exist in and constitute an extremely repressive context, and these mechanisms are applied to individuals. An individual can be arrested for protesting labor conditions or for speaking about certain issues on social media, and systems like the People’s Court blacklist amplify the consequences of these repressive laws. Mechanisms which “merely” seek to increase legal compliance are deeply problematic in this context.

Second, as with so many of the digital government initiatives discussed in the Transformer States series, Social Credit schemes exhibit a technological solutionism that invisibilizes the causes of the problems they seek to address. Non-payment of migrant workers’ wages, for example, is a legitimate issue which must be tackled. But in turning to digital solutions, such as an app that “scores” firms based on their record of wage payments, governments promise a depoliticized technological fix for systemic problems. In the process, they obscure the structural reasons behind migrant workers’ difficulties in accessing their wages, including a differentiated citizenship regime that denies them equal access to social provisions.

Separately, there are disparities in how individuals in different parts of the country are affected by Social Credit. Around the world, governments’ new digital systems are consistently trialed on the poorest or most vulnerable groups: for example, smartcard technology for quarantining benefit income in Australia was first introduced within indigenous communities. Similarly, experimentation with Social Credit systems is unequally targeted, especially on a geographical basis. There is a hierarchy of cities in China, with provincial-level cities like Beijing at the top, followed by prefectural-level cities, county-level cities, then towns and villages. A pattern is emerging whereby smaller or “lower-ranked” cities have adopted more comprehensive and aggressive citizen scoring schemes. While Shanghai has local legislation that defines the boundaries of its Social Credit scheme, lesser-known cities seeking to improve their “branding” are subjecting residents to more arbitrary and concerning practices.

Of course, the biggest concern surrounding Social Credit relates to how it may develop in the future. While this is currently a fragmented landscape of disparate schemes, the worry is that these may be consolidated. Chenchen stated that a centralized, nationwide “citizen scoring” system remains unlikely and would not enjoy support from the public or the Central Bank which oversees the Social Credit program. But it is not out of the question that privately-run schemes such as Sesame Credit might eventually be linked to the government’s Social Credit system. Though the system is not (yet) as comprehensive and coordinated as has been portrayed, its logics and methodologies of sharing ever-more information across siloes to shape behaviors may well push in this direction, in China and elsewhere.

April 20, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.