
HUMAN RIGHTS MOVEMENT

Law Clinics Condemn U.S. Government Support for Haiti’s Regime as Country Faces Human Rights and Humanitarian Catastrophe

To mark the second anniversary of the assassination of Haitian President Jovenel Moïse, the Global Justice Clinic and the International Human Rights Clinic at Harvard Law School submitted a letter to Secretary of State Antony Blinken and Assistant Secretary Brian Nichols calling on the U.S. government to cease its support for the de facto Ariel Henry administration. Progress on human rights and security, and a return to constitutional order, will only be possible if the Haitian people have the opportunity to change their government.

In the wake of Moïse’s murder, and at the urging of the United States, Dr. Henry assumed leadership as de facto prime minister. For the past two years, Dr. Henry has presided over a humanitarian and human rights catastrophe. He has consolidated power in what remains of Haiti’s institutions and has proposed to amend the Constitution in an unlawful manner. Further, there is evidence tying Dr. Henry to the assassination of President Moïse. Despite the monumental failure of Dr. Henry’s government, the United States continues to support this illegitimate and unpopular regime.

The letter declares that any transitional government must be evaluated against Haiti’s Constitution and established human rights principles. Proposals such as Dr. Henry’s that violate the spirit of the Constitution and further state capture cannot be a path to democracy.

This post was originally published as a press release on July 10, 2023 by the Global Justice Clinic at NYU School of Law, and the International Human Rights Clinic at Harvard Law School. 


CLIMATE & ENVIRONMENT

Guyanese Indigenous Council Rejects Canadian Mining Company’s Flimsy Environmental and Social Impact Assessment, Calls for Rejection of Mining Permit

The Global Justice Clinic has been working with the South Rupununi District Council (SRDC) since 2016. Through the clinic, students have provided data analysis and legal support for monitoring activity undertaken by the SRDC. 

Last week, the South Rupununi District Council (SRDC), a legal representative institution for the Wapichan people, released a statement forcefully denouncing the procedurally and substantively defective environmental and social impact assessment (ESIA) submitted by a Canadian mining company (Guyana Goldstrike) seeking to begin large-scale mining operations on Marutu Taawa through its Guyanese subsidiary (Romanex). Marutu Taawa, also known as Marudi Mountain, stands deep in the traditional territory of the Wapichan and holds historical, cultural, spiritual, and biological significance for the entire region. Because Marutu Taawa sits at a critical watershed, the environmental impact of large-scale mining operations would threaten the ability of the Wapichan people to continue living in the ancestral lands they have called home for centuries.

Notwithstanding the threat that large-scale mining poses to the Wapichan people, the SRDC finds that Guyana Goldstrike’s ESIA relies on incomplete, inaccurate, or decades-old information to ignore the substantial environmental, public health, and cultural consequences that would follow if such mining operations were allowed to proceed. The SRDC also strongly condemns the mining company’s failure to consult the Council, as a legal representative institution of the Wapichan people. This failure to meaningfully consult stands in direct violation of both Guyanese and international law.

Given the inadequacy of the ESIA and Guyana Goldstrike’s flouting of domestic and international law, the SRDC has strongly encouraged the Guyanese Environmental Protection Agency (EPA) to deny the Canadian company’s subsidiary the environmental permit needed to initiate large-scale operations in the territory. The SRDC also calls on the EPA to oversee a process that ensures that Guyana Goldstrike and Romanex adhere to Guyanese and international law and best practices in the international mining sector.

This post was originally published as a press release on September 28, 2018.


HUMAN RIGHTS MOVEMENT

Recommendations to Funders to Improve Mental Health and Wellbeing in the Human Rights Field 


Human rights advocacy can be a source of significant joy, purpose, political agency, belonging, and community. Yet advocates can also experience harms and trauma in their efforts to advance justice and equality, including those caused by heavy workloads, time pressures, discrimination and bullying in the workplace, vicarious exposure to trauma and human rights abuse, and direct experience of threats and attacks. Advocates can experience suffering, sometimes very severe, as a result, including demotivation, alienation, anxiety, fear, depression, and post-traumatic stress disorder. How advocates experience their work and any resulting harms can vary widely, and may be highly contextual and culturally specific.


Positively transforming mental health and well-being in the human rights field will require significant reforms, encompassing both structural changes and close attention to the contextually specific needs of individual advocates and organizations. The causes and dynamics at play are complex, and there are no quick fixes that can address the cultural shifts required. As efforts are made to improve well-being, it is important that the field avoid tick-the-box or commodified approaches. Improving the well-being of human rights advocates requires a holistic response and a movement-wide prioritization of well-being, with careful attention to context, culture, and the diverse needs of advocates and organizations.

Recognizing that these problems are deeply rooted and require radical change, or that the issues are complex and a clear set of universally applicable recommendations is difficult to define, should not become an excuse for taking no action now to improve well-being. There are many concrete, immediately actionable reforms that are achievable in the near term and which address a variety of causes of distress, or which can support efforts to transform the field over the long term. Such steps should be taken while the human rights field works toward deep transformation. Some of these steps include the following recommended actions, which are drawn from our research with advocates around the world.


INEQUALITIES

Rights groups warn private healthcare is failing many, draining public resources

Government-backed expansion of the private healthcare sector in Kenya is leading to exclusion and setting back the goal of universal health coverage, said two rights groups in a report released today. National policies intended to increase private sector participation in healthcare, alongside chronic underinvestment in the public system, have contributed to an explosion of for-profit private actors who often provide poor value for money, neglect public health priorities, and push Kenyans into poverty and crushing debt.

The 49-page report, “Wrong Prescription: The Impact of Privatizing Healthcare in Kenya,” is authored by Hakijamii and the Center for Human Rights and Global Justice at New York University. It finds that privatization has proven costly for individuals and the government, has shut people out of access to healthcare, and is undermining the right to health. The government’s signature policy for achieving universal health coverage—the planned expansion of private-sector friendly social insurance through the National Hospital Insurance Fund (NHIF)—risks exacerbating these problems.

“Privatization is the wrong prescription for achieving universal health coverage,” said Philip Alston, former United Nations Special Rapporteur and co-author of the report. “Proponents of private healthcare make all sorts of promises about how it will lower costs and improve access, but our research finds private actors have really failed to deliver.”

“Promoters of private care have gravely misdiagnosed the situation,” said Nicholas Orago, Executive Director of Hakijamii and co-author of the report. “While many associate private care with high-quality facilities, the ‘haves’ and ‘have nots’ experience entirely different private sectors. Private healthcare has been disastrous for poor and vulnerable communities, who are left with low-quality, low-cost providers peddling services that are too often unsafe or even illegal.”

Privatizing care has proven costly for both individuals and the government. The private health sector relies heavily on government funding, including tens of billions of shillings each year to contract with private facilities, subsidize access to private care, and pay for secretive public-private partnerships. Individuals face excessively high fees at private facilities, where treatment can cost more than twelve times as much as in the public sector.

“Healthcare is a big business, with global corporations and private equity firms lining up to profit off the sector in Kenya,” said Rebecca Riddell, Co-director of the Human Rights and Privatization Project at the Center and co-author of the report. “These companies expect returns on their investments, leading to overwhelmingly higher prices in the private sector while scarce public resources prop up private profits.”

The report draws from more than 180 interviews with healthcare users and providers, government officials, and experts. Researchers spoke with community members from informal settlements in Mombasa and Nairobi as well as rural areas in Isiolo. Many described being excluded from private care or facing hardships to afford treatment, such as selling important assets like land or forgoing educational and livelihood opportunities. Others described tragic consequences of low-quality care at private providers, including unnecessary deaths and disabilities. The impact has been particularly severe for people who are poor or low income, women, people with disabilities, and those in rural areas.

Researchers also found that the private sector in Kenya is concentrated in more profitable forms of care, and has neglected less commercially viable areas, patients, and services. Private sector healthcare workers described having to meet patient “targets” as well as working in conditions significantly inferior to those in the public sector.

“The disconnect between profits and public health goals should cause policymakers to rethink their reliance on the private sector,” said Bassam Khawaja, Co-director of the Human Rights and Privatization Project and report co-author. “Many essential health services are incredibly valuable or even lifesaving but may not be profitable as one-off transactions.”

The anticipated nationwide rollout of mandatory NHIF coverage will divert more public money to private actors without preventing exclusion and high costs. Though the NHIF is a public insurer, it contracts extensively with private facilities, offers private providers higher reimbursement rates, and sends most of its claims money to private actors. “Expanding coverage through the NHIF instead of investing in a strong public health system is a major step backwards,” Orago said.

Much of the pressure to privatize has come from external actors in the global North. Key development actors, including international financial institutions, private foundations, and wealthy countries looking for new markets, have urged Kenya to increase the private sector’s role in health.

“An ideological commitment to the private sector has trumped the rights of the Kenyan people, as development actors promote private care and financing without accountability,” Alston said. “The extreme secrecy around many arrangements with the private health sector opens the door to corruption and self-dealing.”

The report concludes that the government should rethink its support for the private sector and prioritize the public healthcare system, which still delivers the majority of inpatient and outpatient care in Kenya despite being starved of resources. “While the government should address serious shortcomings in the public system, popular recent investments illustrate an enduring appetite for public care,” said Alston.

“With sufficient political will and resources, the public healthcare system is best positioned to provide all Kenyans with the accessible, affordable, and quality healthcare that they have a right to,” said Orago.

This post was originally published as a press release on November 16, 2021.


TECHNOLOGY & HUMAN RIGHTS

Co-creating a Shared Human Rights Agenda for AI Regulation and the Digital Welfare State

On September 26, 2023, the Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at NYU Law and Amnesty Tech’s Algorithmic Accountability Lab (AAL) brought together 50 participants from civil society organizations across the globe to discuss the use and regulation of artificial intelligence in the public sector, in a collaborative online strategy session entitled ‘Co-Creating a Shared Human Rights Agenda for AI and the Digital Welfare State.’ Participants spanned diverse geographies and contexts—from Nigeria to Chile, and from Pakistan to Brazil—and included organizations working across a broad spectrum of human rights issues such as privacy, social security, education, and health. Through a series of lightning talks and breakout room discussions, the session surfaced shared concerns regarding the use of AI in public sector contexts, key gaps in existing discussions surrounding AI regulation, and potential joint advocacy opportunities.

Global discussions on the regulation of artificial intelligence (AI) have, in many contexts, thus far been preoccupied with whether to place meaningful constraints on the development, sale, and use of AI by private technology companies. Less attention has been paid to the need to place similar constraints on governments’ use of AI. Yet governments’ enthusiastic adoption of AI across public sector programs and critical public services has been accelerating around the world. AI-based systems are consistently tested in spheres where some of the most marginalized and low-income groups are unable to opt out – for instance, machine learning and other technologies are used to detect welfare benefit fraud, to assess vulnerability and determine eligibility for social benefits like housing, and to monitor people on the move. All too often, this technological experimentation results in discrimination, restriction of access to key services, privacy violations, and many other human rights harms. As governments eagerly build “digital welfare states,” incorporating AI into critical public services, the scale and severity of the potential implications demand that meaningful constraints be placed on these developments.

In the past few years, a wide array of regulatory and policy initiatives aimed at regulating the development and use of AI have been introduced – in Brazil, China, Canada, the EU, and the African Commission on Human and Peoples’ Rights, among many other jurisdictions and policy fora. However, what is emerging from these initiatives is an uneven patchwork of approaches to AI regulation, with concerning gaps and omissions when it comes to public sector applications of AI. Some of the world’s largest economies – where many powerful technology companies are based – are embarking on new regulatory initiatives with impacts far beyond their territorial confines, while many of the groups likely to be most affected have not been given sufficient opportunities to participate in these processes.

Despite these shortcomings, ongoing efforts to craft regulatory regimes do offer a crucial and urgent entry point for civil society organizations to seek to highlight critical gaps, to foster greater participation, and to contribute to shaping future deployments of AI in these important sectors.

In hosting this collaborative event on AI regulation and the digital welfare state, the AAL and the Center sought to build an inclusive space for civil society groups from across regions and sectors to forge new connections, share lessons, and collectively strategize. We sought to expand mobilization and build solidarity by convening individuals from dozens of countries, who work across a wide range of fields – including “digital rights” organizations, but also bringing in human rights and social justice groups who have not previously worked on issues relating to new technologies. Our aim was to brainstorm how actors across the human rights ecosystem can, in practice, help to elevate more voices into ongoing discussions about AI regulation.

Key issues for AI regulation in the digital welfare state

In breakout sessions, participants emphasized the urgent need to address serious harms that are already resulting from governments’ AI uses, particularly in contexts such as border control, policing, the judicial system, healthcare, and social protection. The public narrative – and accelerated impetus for regulation – has been dominated by discussion of existential threats AI may pose in the future, rather than the severe and widespread threats that are already seen in almost every area of public services. In Serbia, the roll-out of Social Cards in the welfare system has excluded thousands of the most marginalized from accessing their social protection entitlements; in Brazil, the deployment of facial recognition in public schools has subjected young children to discriminatory biases and serious privacy risks. Deployments of AI across public services are consistently entrenching inequalities and exacerbating intersecting discrimination – and participants noted that governments’ increasing interest in generative AI, which has the potential to encode harmful racial bias and stereotypes, will likely only intensify these risks.

Participants also noted that it is likely that AI will continue to impact groups that may defy traditional categorizations – including, for instance, those who speak minority languages. Indeed, a key theme across discussions was the insufficient attention paid in regulatory debates to AI’s impacts on culture and language. Given that systems are generally trained only in dominant languages, breakout discussions surfaced concerns about the potential erasure of traditional languages and loss of cultural nuance.

As advocates work not only to remedy some of these existing harms, but also to anticipate the impacts of the next iterations of AI, many expressed concern about the dominant role that the private sector plays in governments’ roll-outs of AI systems, as well as in discussions surrounding regulation. Where tech companies – who are often protected by powerful lobby groups, commercial confidentiality, and intellectual property regimes – are selling combinations of software, hardware, and technical guidance to governments, this can pose significant transparency challenges. It can be difficult for civil society organizations and affected individuals to understand who is providing these systems, as well as to understand how decisions are made. In the welfare context, for example, beneficiaries are often unaware of whether and how AI systems are making highly consequential decisions about their entitlements. Participants noted that human rights actors need the capacity and resources to move beyond traditional human rights work, to engage with processes such as procurement, standard-setting, and auditing, and to address issues related to intellectual property regimes and proliferating public-private partnerships underlying governments’ uses of AI.

These issues are compounded by the fact that, in many instances, AI-based systems are designed and built in countries such as the US and then marketed and sold to governments around the world for use across critical public services. Often, these systems are not designed with sensitivity to local contexts, cultures, and languages, nor with cognizance of how the technology will interface with the political, social, and economic landscape where it is deployed. Civil society organizations also face further barriers when seeking transparency and access to information from foreign companies. As AI regulation efforts advance, a failure to consider potential extraterritorial harms will leave a significant accountability gap and risk deepening global inequalities. Many participants therefore noted the importance both of ensuring that regulation in countries where tech companies are based includes diverse voices and addresses extraterritorial impacts, and of ensuring that Global North models of regulation, which may not be fit for purpose, are not automatically “exported.”

A way forward

The event ended with a strategizing session that revealed the diverse strengths of the human rights movement and multiple areas for future work. Several specific and urgent calls to action emerged from these discussions.

First, given the disproportionate impacts of governments’ AI deployments on marginalized communities, a key theme was the need for broader participation in discussions on emerging AI regulation. This includes specially protected groups such as indigenous peoples, minoritized ethnic and racial groups, immigrant communities, people with disabilities, women’s rights activists, children, and LGBTQ+ groups, to name just a few. Without learning from and elevating the perspectives and experiences of these groups, regulatory initiatives will fail to address the full scope of the realities of AI. We must therefore develop participatory methodologies that bring the voices of communities into key policy spaces. More routes to meaningful consultation would lead to greater power and autonomy for previously marginalized voices to shape a more human rights-centric agenda for AI regulation. 

Second, the unique impacts that public sector use of AI can have on human rights, especially for marginalized groups, demand a comprehensive approach to AI regulation that takes careful account of specific sectors. Regulatory regimes that fail to include meaningful sector-specific safeguards for areas such as health, education, and social security will fail to address the full range of AI-related harms. Participants noted that existing tools and mechanisms can provide a starting point – such as consultation and testing requirements, specific prohibitions on certain kinds of systems, requirements surrounding proportionality, mandatory human rights impact assessments, transparency requirements, periodic evaluations, and supervision mechanisms.

Finally, there was a shared desire to build stronger solidarity across a wider range of actors, and a call to action for more effective collaborations. Participants from around the world were keen to share resources, partner on specific advocacy goals, and exchange lessons learned. Since participants focus on many diverse issues, and adopt different approaches to achieve better human rights outcomes, collaboration will allow us to draw on a much deeper pool of collective knowledge, methodologies, and networks. It will be especially critical to bridge silos between those who identify more as “digital rights” organizations and groups working on issues such as healthcare, or migrants’ rights, or on the rights of people with disabilities. Elevating the work of grassroots groups, and improving diversity and representation among those empowered to enter spaces where key decisions around AI regulation are made, should also be central in movement-building. 

There is also an urgent need for more exchange not only across the human rights ecosystem, but also with actors from other disciplines who bring different forms of technical expertise, such as engineers and public interest technologists. Given the barriers to entry to regulatory spaces – including the resources, long-term commitment, and technical vocabulary they demand – effective coalition-building and information sharing could help to lessen these burdens.

While this event brought together a fantastic and energetic group of advocates from dozens of countries, these takeaways reflect the views of only a small subset of the relevant stakeholders in these debates. We ended the session hopeful, but with the recognition that there is a great deal more work needed to allow for the full participation of affected communities from around the world. Moving forward, we aim to continue to create spaces for varied groups to self-organize, continue the dialogue, and share information. We will help foster collaborations and concretely support organizations in building new partnerships across sectors and geographies, and hope to continue to co-create a shared human rights agenda for AI regulation for the digital welfare state.

As we continue this work and seek to support efforts and build collaborations, we would love to hear from you – please get in touch if you are interested in joining these efforts.

November 14, 2023. Digital Welfare State and Human Rights Project at NYU Law Center for Human Rights and Global Justice, and Amnesty Tech’s Algorithmic Accountability Lab. 


TECHNOLOGY & HUMAN RIGHTS

Shaping Digital Standards

An Explainer and Recommendations on Technical Standard-Setting for Digital Identity Systems.

In April 2023, we submitted comments to the United States National Institute of Standards and Technology (NIST) to contribute to its Guidelines on Digital Identity. Because the Guidelines are highly technical and written for a specialist audience, we published this short “explainer” document in the hope of providing a resource that empowers other civil society organizations and public interest lawyers to engage with technical standard-setting bodies and raise human rights concerns related to digitalization in the future. This document therefore sets out the importance of standards bodies, provides an accessible “explainer” on the Digital Identity Guidelines, and summarizes our comments and recommendations.

The National Institute of Standards and Technology (NIST), which is part of the U.S. Department of Commerce, is a prominent and powerful standards body. Its standards are influential, shaping the design of digital systems in the United States and elsewhere. Over the past few years, NIST has been in the process of creating and updating a set of official Guidelines on Digital Identity, which “present the process and technical requirements for meeting digital identity management assurance levels … including requirements for security and privacy as well as considerations for fostering equity and the usability of digital identity solutions and technology.”

The primary audiences for the Guidelines are IT professionals and senior administrators in U.S. federal agencies that utilize, maintain, or develop digital identity technologies to advance their mission. The Guidelines fall under a wider NIST initiative to design a Roadmap on Identity Access and Management that explores topics like accelerating adoption of mobile driver’s licenses, expanding biometric measurement programs, promoting interoperability, and modernizing identity management for U.S. federal government employees and contractors.

This technical guidance is particularly influential, as it shapes decision-making surrounding the design and architecture of digital identity systems. Biometrics, identity, and security companies frequently cite their compliance with NIST standards to promote their technology and to convince governments to purchase their hardware and software products to build digital identity systems. Other technical standards bodies look to NIST and cite NIST standards. These technical guidelines thus have a great deal of influence well beyond the United States, affecting what is deemed acceptable or not within digital identity systems, such as how and when biometrics can be used.

Such technical standards are therefore of vital relevance to all those who are working on digital identity. In particular, these standards warrant the attention of civil society organizations and groups who are concerned with the ways in which digital identity systems have been associated with discrimination, denial of services, violations of privacy and data protection, surveillance, and other human rights violations. Through this explainer, we hope to provide a resource that can be helpful to such organizations, enabling and encouraging them to contribute to technical standard-setting processes in the future and to bring human rights considerations and recommendations into the standards that shape the design of digital systems. 


CLIMATE & ENVIRONMENT

Carbon Markets, Forests and Rights

An Introductory Series for Indigenous Peoples

Indigenous peoples are experiencing a rush of interest in their lands and territories from actors involved in carbon markets. Many indigenous communities have expressed that to make informed decisions about how to engage with carbon markets, they need accessible information about what these markets are, and how participating in them may affect their rights.

In response to this demand for information, the Global Justice Clinic and the Forest Peoples Programme have developed a series of introductory materials about carbon markets. The materials were initially developed for GJC partner the South Rupununi District Council in Guyana and have been adapted for a global audience.

The explainer materials can be read in any order:

  • Explainer 1 introduces key concepts that are essential background to understanding carbon markets. It explains what climate change is, what the carbon cycle and carbon dioxide are, and the link between carbon dioxide, forests, and climate change.
  • Explainer 2 outlines what carbon markets and carbon credits are, and provides a brief introduction to why these markets are developing and how they function.
  • Explainer 3 focuses on indigenous peoples’ rights and carbon markets. It highlights some of the particular risks that carbon markets pose to indigenous peoples and communities. It also highlights key questions communities should ask themselves as they consider how to engage with or respond to carbon markets.
  • Explainer 4 provides an overview of the key environmental critiques of and concerns around carbon markets.
  • Explainer 5 provides a short introduction to ART-TREES, an institution and standard involved in ‘certifying’ carbon credits that is gaining significant attention internationally.


TECHNOLOGY & HUMAN RIGHTS

Digital Identification and Inclusionary Delusion in West Africa 

Over 1 billion people worldwide have been categorized as invisible, of whom about 437 million are reported to be in sub-Saharan Africa. In West Africa alone, the World Bank has identified a huge “identification gap,” and different identification projects are underway to identify millions of invisible West Africans.[1] These individuals are regarded as invisible not because they are unrecognizable or non-existent, but because they do not fit a certain measure of visibility that matches the existing or new database(s) of an identifying institution[2], such as the State or international bodies.

One existing digital identification project in West Africa is the West Africa Unique Identification for Regional Integration and Inclusion (WURI) program, initiated by the World Bank under its Identification for Development initiative. The WURI program serves as an umbrella under which West African States can collaborate with the Economic Community of West African States (ECOWAS) to design and build a digital identification system, financed by the World Bank, that would create foundational IDs (fIDs)[3] for all persons in the ECOWAS region.[4] Many West African States whose past attempts at digitizing their identification systems have failed have embraced assistance via WURI. The goal of WURI is to enable access to services for millions of people and ensure “mutual recognition of identities” across countries. The promise of digital identification is that it will facilitate development by promoting regional integration, security, social protection of aid beneficiaries, financial inclusion, reduction of poverty and corruption, and healthcare insurance and delivery, and act as a stepping stone to an integrated digital economy in West Africa. In this way, millions of invisible individuals would become visible to the state and become financially, politically, and socially included.

Nevertheless, the outlook of WURI and the reliance on digital IDs by development agencies reflect techno-solutionism: a reliance on technology as the approach to dealing with institutional challenges and developmental goals in West Africa. This reliance on digital technologies does not address some of the major root causes of developmental delays in these countries and may instead worsen the state of things by excluding the vast majority of people who are either unable to be identified or are excluded by virtue of technological failures. This exclusion emerges in a number of ways, including through the service-based structure and/or mandatory nature of many digital identification projects, which adopt a stance of exclusion first, inclusion later. This means that where access to services and infrastructures, such as opening a bank account, registering SIM cards, getting healthcare, or receiving government aid and benefits, is made subject to registration and possession of a national ID card or unique identification number (UIN), individuals are excluded unless they register for and possess the national ID card or UIN.

There are three contexts in which exclusion may arise. First, an individual may be unable to register for an fID. For instance, in Kenya, many individuals without identity verification documents like birth certificates were excluded from the registration process for its fID, the Huduma Namba. A second context arises where an individual is unable to obtain an fID card or unique identification number (UIN) after registration. This is the case in Nigeria, where the National Identity Management Commission has been unable to deliver ID cards to the majority of those who have registered under the identity program. The risk of exclusion may increase in Nigeria when the government conditions access to services on possession of an fID card or UIN.

A third scenario involves the inability of an individual to access infrastructures after obtaining an fID card or UIN, due to the breakdown or malfunctioning of the identifying institution’s authentication technology. In Tanzania, for example, although some individuals have the fID card or UIN, they are unable to proceed with their SIM registration process due to breakdowns of the data storage systems. There are also numerous reports of people being denied access to services in India because of technology failures. This leaves a large group of individuals vulnerable, particularly where the fID is required to access key services such as SIM card registration. An unpublished 2018 poll carried out in Côte d’Ivoire revealed that over 65% of those who registered for a national ID used it to apply for SIM card services and about 23% for financial services.[5]

The mandatory or service-based model of most identification systems in West Africa takes away individuals’ powers and rights of access to and control over resources and identity, and confers them on the State and private institutions, thereby raising human rights concerns for those who are unable to meet the criteria for registration and identification. Thus, a person who would ordinarily move around freely, shop at a grocery store, open a bank account, or receive healthcare from a hospital can only do so, once mandatory use of the fID commences, through possession of the fID card or UIN. In Nigeria, for instance, the new national computerized identity card is equipped with a microprocessor designed to host and store multiple e-services and applications like biometric e-ID, electronic ID, payment application, and travel document, and to serve as the national identity card of individuals. A Thales publication also states that in a second phase for the Nigerian fID, driver’s license, eVoting, eHealth, or eTransport applications are to be added to the cards. This is a long list of e-services for a country where only about 46% of the population is reported to have access to the internet. Where a person loses this ID card or is unable to provide the UIN that digitally represents them, that person would potentially be excluded from all the services and infrastructures to which the fID card or UIN serves as a gateway. This exclusion risk is intensified by the fact that identifying institutions in remote or rural areas may lack authentication technologies or an electronic connection to the ID database to verify individuals’ identities whenever they seek to be identified, make a payment, receive healthcare, or travel.

It is important to note that exclusion does not stem only from mandatory fID systems or voluntary but service-integrated ID systems. There are also risks with voluntary ID systems where adequate measures are not taken to protect the data and interests of all those who are registered. Adequate data storage facilities, data protection designs, and data privacy regulation are required to protect individuals’ data; otherwise, individuals face increased risks of identity theft, fraud, and cybercrime, which would exclude and shut them off from fundamental services and infrastructures.

The history of political instability, violence and extremism, ethnic and religious conflicts, and disregard for the rule of law in many West African countries also heightens the risk of exclusion. Different instances of this abound, such as religious extremism, insurgencies and armed conflicts in Northern Nigeria, civilian attacks and unrest in some communities in Burkina Faso, crises and terrorist attacks in Mali, election violence, and military intervention in State governance. An OECD report counts over 3,317 violent events in West Africa between 2011 and 2019, with fatalities rising above 11,911 over that period. A UN report also puts the number of deaths in Burkina Faso at over 1,800 in 2019, with over 25,000 persons displaced in the same year. This instability can act as a barrier to registration for an fID and lead to exclusion where certain groups of persons are targeted and profiled by state and/or non-state (illegal) actors.

In addition to cases where registration is mandatory or where individuals are highly dependent on the infrastructures and services they wish to access, there may also be situations where people opt to rely less on the fID or decide not to register due to worries about surveillance, identity theft, or targeted disciplinary control, thereby excluding themselves from resources they would ordinarily have been able to access. In Nigeria, only about 20% of the population is reported to have registered for the National Identification Number (NIN) (this was about 6% in 2017). Similarly, though implementation of WURI program objectives in Guinea and Côte d’Ivoire commenced in 2018, registration and identification output in both countries remains very low to date.

World Bank findings and lessons from Phase I reveal that digital identification can exacerbate exclusion and marginalization, while diminishing privacy and control over data, despite the benefits it may carry. Some of the challenges identified by the World Bank resonate with the major concerns listed here, including risks of surveillance, discrimination, inequality, distrust between the State and individuals, and legal, political, and historical differences among countries. The solutions proposed under the WURI program objectives to address these problems – consultations, dialogues, ethnographic studies, and provision of additional financing and capacity – are laudable but insufficient to deal with the root causes. On the contrary, the solutions offered might reveal the inadequacies of a digitized State in West Africa, where a large share of the population lacks digital literacy, lacks the means to access digital platforms, or operates largely in the informal sector.

Practically, the task of addressing the root causes of most of the problems mentioned above, particularly the major ones involving political instability, institutional inadequacies, corruption, conflicts, and capacity building, is an arduous one that may require a more domestic, grassroots, bottom-up approach. However, the solution to these challenges is either unknown, difficult, or less desirable than the “quick fix” offered by techno-solutionism and reliance on digital identification.

  1. It is uncertain why the conventional wisdom is that West African countries, many of which have functional IDs, specifically need a national digital ID card system, while some of their developed counterparts in Europe and North America lack a national ID card and rely instead on different functional IDs.
  2. Identifying institution is used here to refer to any institution that seeks to authenticate the identity of a person based on the ID card or number that person possesses.
  3. A foundational identity system is an identity system that enables the creation of identities or unique identification numbers used for general purposes, such as national identity cards. A functional identity system is one that is created for or evolves out of a specific use case but may be suitable for use across other sectors, such as a driver’s license, voter’s card, bank number, insurance number, insurance records, credit history, health records, or tax records.
  4. Member States of ECOWAS include the Republic of Benin, Burkina Faso, Cape Verde, Côte d’Ivoire, the Gambia, Ghana, Guinea, Guinea-Bissau, Liberia, Mali, Niger, Nigeria, Senegal, Sierra Leone, and Togo.
  5. See Savita Bailur, Helene Smertnik & Nnenna Nwakanma, End User Experience with identification in Côte d’Ivoire. Unpublished Report by Caribou Digital.

October 19, 2020. Ngozi Nwanta, JSD program, NYU School of Law, with research interests in systemic analysis of national identification systems, governance of credit data, financial inclusion, and development.


TECHNOLOGY & HUMAN RIGHTS

India’s New National Digital Health Mission: A Trojan Horse for Privatization

Through the national Digital Health ID, India’s Modi government is implementing techno-solutionist and market-based reforms to further entrench the centrality of the private sector in healthcare. This has serious consequences for all Indians, but most of all for the country’s vulnerable populations.

On August 15, 2021, India’s Prime Minister Narendra Modi launched the National Digital Health Mission (NDHM), under which every Indian citizen is to be provided with a unique digital health ID. This ID will contain patients’ health records—including prescriptions, diagnostic reports, and medical histories—and will enable easy access for both patients and health service providers. The aim of the NDHM is to allow patients to seamlessly switch between health service providers, by facilitating providers’ access to patients’ health data, and to enable insurance providers to quickly verify and process claims. Accessible registries of health master data will also be created. But this digital health ID program is emblematic of a larger problem in India—the government’s steady withdrawal from healthcare, both as welfare and as a public service.

The digital health ID is a crucial part of Modi’s plans to create a new digital health infrastructure called the National Health Stack. This will form the health component of the existing India Stack, which is defined as “a set of digital public goods” that are intended to make it easy for innovators to introduce digital services in India across different sectors. The India Stack is built on the existing foundational user-base provided by Aadhaar digital ID numbers. A “Unified Health Interface” will be created as a digital platform to manage healthcare-related transactions. It will be administered by the National Health Authority (NHA), which is also responsible for administering the flagship public health insurance scheme, the Ayushman Bharat Pradhan Mantri Jan Arogya Yojana (AB-PMJAY), providing health coverage for around 500 million poor Indians.

The Modi government proclaims that the NDHM and digital health ID will revolutionize the Indian healthcare system through technology-driven solutions. But this glosses over the government’s real motive, which is to incentivize the private sector to participate in and rescue India’s ailing healthcare system. Rather than invest more funds in public health infrastructure, the Indian government has decided to outsource healthcare services to private healthcare providers and insurance companies, using access to vast troves of health data as the proverbial carrot.

Indeed, the benefits of the NDHM for the private healthcare sector are numerous. It will provide valuable, interoperable data in the form of “health registries” which link data silos and act as a “single source of truth” for all healthcare stakeholders. This will enable quicker processing of claims and payments to health service providers. In an op-ed lauding the NDHM, the head of a major Indian hospital chain noted that the NDHM will “reduce administrative burden related to doctor onboarding, regulatory approvals and renewals, and hospital or payer empanelment.”

The government appears to have learned its lessons from the implementation of the AB-PMJAY, which allowed people below the poverty line to purchase healthcare services through state-funded health insurance. Although the scheme included both private and public hospitals, it relied heavily on private hospitals, as public hospitals lacked sufficient facilities. However, not enough private hospitals came on board, because rates were uncompetitive compared to the market and because the scheme was plagued by long delays in insurance payments and by insurance fraud. But instead of building up public healthcare and reducing dependency on the private sector, the government is eager to fix this problem by providing better incentives to private providers through the NDHM.

Meanwhile, it is unclear what the benefits to the public will be. Digitizing the healthcare system and making it easier for insurance companies to pay private hospitals for services does not solve more urgent and serious problems, such as the lack of healthcare facilities in rural areas. The COVID-19 pandemic saw public hospitals playing a dominant role in treatment and vaccination, while private hospitals took a backseat. Given this, increasing the reliance placed on the private healthcare system through the NDHM is counterintuitive.

This growing reliance on the private sector is also likely to further disadvantage people living in poverty. The lack of suitable government hospitals forces people into private hospitals, where they are often required to pay more than the amount covered by the government-funded AB-PMJAY. Further, India’s National Human Rights Commission has taken the position that denial of care by private service providers is outside its ambit, notwithstanding their enrollment in state-funded insurance schemes like AB-PMJAY. Moreover, because the digital health ID will give insurance companies access to sensitive health data, they may deny insurance or charge higher premiums to those most in need, thereby further entrenching discrimination and inequalities. Getting coverage with a genetic disorder, for instance, is already extremely difficult in India, something a digital health ID could worsen: insurance companies could access this information, rendering premiums prohibitively expensive for millions who need coverage. Digitization also renders highly personal health records susceptible to breaches: such privacy concerns led many persons living with HIV to drop out of treatment programs when antiretroviral therapy centers began collecting Aadhaar details from patients.

Not having a digital health ID could lead to exclusion from vital healthcare. This is not a hypothetical. The government had to issue a clarification that no one should be denied COVID-19 vaccines or oxygen for lack of Aadhaar after numerous concerning reports, including allegations that a patient died after two hospitals demanded Aadhaar details which he did not have.

Nonetheless, plans are speeding ahead as the “usual suspects” of India’s techno-solutionist projects turn their efforts to healthcare. RS Sharma, the ex-Director General of the government agency responsible for Aadhaar, is the current CEO of the NHA. The National Health Stack was reportedly developed in consultation with i-SPIRT, a group of so-called “volunteers” with private sector backgrounds who act as intermediaries between the Indian government and the tech sector and who played a vital role in embedding Aadhaar in society through private companies. A committee set up to examine the merits of the National Health Stack was headed by another former UIDAI chairman.

Steered by individuals with an endless faith in the power of technology and in the private sector’s entrepreneurial drive to save the Indian government and governance, India is determinedly marching forward with its technology-driven and market-based reforms of public services and welfare. This is all underpinned by a heavy tendency towards privatization and is in turn inspired by the private sector. The NDHM, for instance, is guided by the tagline “Think Big, Start Small, Scale Fast,” a business philosophy for start-ups.

Perhaps most concerningly, the neoliberal withdrawal of government from crucial public services to make space for the private sector has resulted in the rationing of those goods and services, with fewer people having access to them. The digital health ID is not likely to change this for India’s health sector; instead, it is enabling this privatization by stealth.

December 14, 2021. Sharngan Aravindakshan, LL.M. program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2021-22. He previously worked for the Centre for Communication Governance in India.


TECHNOLOGY & HUMAN RIGHTS

Nothing is Inevitable! Main Takeaways from an Event on “Techno-Racism and Human Rights: A Conversation with the UN Special Rapporteur on Racism”

On July 23, 2020, the Digital Welfare State and Human Rights Project hosted a virtual event on techno-racism and human rights. The immediate reason for organizing this conversation was a recent report to the Human Rights Council by the United Nations Special Rapporteur on Racism, Tendayi Achiume, on the racist impacts of emerging technologies. The event sought to further explore these impacts and to question the role of international human rights norms and accountability mechanisms in efforts to address these. Christiaan van Veen moderated the conversation between the Special Rapporteur, Mutale Nkonde, CEO of AI for the People, and Nanjala Nyabola, author of Digital Democracy, Analogue Politics.

This event and Tendayi’s report come at a moment of multiple international crises, including a global wave of protests and activism against police brutality and systemic racism after the killing of George Floyd, and a pandemic which, among many other tragic impacts, has laid bare how deeply embedded inequality, racism, xenophobia, and intolerance are within our societies. Just last month, as Tendayi explained during the event, the Human Rights Council held a historic urgent debate on systemic racism and police brutality in the United States and elsewhere, which would have been inconceivable just a few months ago.

The starting point for the conversation was an attempt to define techno-racism and provide varied examples from across the globe. This global dimension was especially important as so many discussions on techno-racism remain US-centric. Speakers were also asked to discuss not only private use of technology or government use within the criminal justice area, but to address often-overlooked technological innovation within welfare states, from social security to health care and education.

Nanjala started the conversation by defining techno-racism as the use of technology to lock in power disparities that are predicated on race. Such techno-racism can occur within states: Mutale discussed algorithmic hiring decisions and facial recognition technologies used in housing in the United States, while Tendayi mentioned racist digital employment systems in South America. But techno-racism also has a transnational dimension: technologies entrench power disparities between States that are building technologies and States that are buying them; Nanjala called this “digital colonialism.”

The speakers all agreed that emerging technologies are consistently presented as agnostic and neutral, despite being loaded with the assumptions of their builders (disproportionately white males educated at elite universities) about how society works. For example, the technologies increasingly used in welfare states are designed with the idea that people living in poverty are constantly attempting to defraud the government; Christiaan and Nanjala discussed an algorithmic benefit fraud detection tool used in the Netherlands, which was found by a Dutch court to be exclusively targeting neighborhoods with low-income and minority residents, as an excellent example of this.

Nanjala also mentioned the ‘Huduma Namba’ digital ID system in Kenya as a powerful example of the politics and complexity underneath technology. She explained the racist history of ID systems in Kenya – designed by colonial authorities to enable the criminalization of black people and the protection of white property – and argued that digitalizing a system that was intended to discriminate “will only make the discrimination more efficient”. This exacerbation of discrimination is also visible within India’s ‘Aadhaar’ digital ID system, through which existing exclusions have been formalized, entrenched, and anesthetized, enabling those in power to claim that exclusion, such as the removal of hundreds of thousands of people from food distribution lists, simply results from the operation of the system rather than from political choices.

Tendayi explained that she wrote her report in part to address her “deep frustration” with the fact that race and non-discrimination analyses are often absent from debates on technology and human rights at the UN. Though she named a report by Center Faculty Director Philip Alston, prepared in cooperation with the Digital Welfare State and Human Rights Project, as one of few exceptions, discussions within the international human rights field remain focused upon privacy and freedom of expression and marginalize questions of equality. But techno-racism should not be an afterthought in these discussions, especially as emerging technologies often exacerbate pre-existing racism and enable a completely different scale of discrimination.

Given the centrality of Tendayi’s Human Rights Council report to the conversation, Christiaan asked the speakers whether and how international human rights frameworks and norms can help us evaluate the implications of techno-racism, and what potential advantages global human rights accountability mechanisms can bring relative to domestic legal remedies. Mutale expressed that we need to ask, “who is human in human rights?” She noted that the racist design of these technologies arises from the notion that Black people are not human. Tendayi argued that there is, therefore, also a pressing need to change existing ways of thinking about who violates human rights. During the aforementioned urgent debate in the Human Rights Council, for example, European States and Australia had worked to water down a powerful draft resolution and blocked the establishment of a Commission of Inquiry to investigate systemic racism specifically in the United States, on the grounds that it is a liberal democracy. Mutale described this as another indication that police brutality against Black people in a Western country like the United States is too easily dismissed as not of international concern.

Tendayi concurred and expressed her misgivings about the UN’s human rights system. She explained that the human rights framework is deeply implicated in transnational racially discriminatory projects of the past, including colonialism and slavery, and noted that powerful institutions (including governments, the UN, and international human rights bodies) are often “ground zero” for systemic racism. Mutale echoed this and urged the audience to consider how international human rights organs like the Human Rights Council may constitute a political body for sustaining white supremacy as a power system across borders.

Nanjala also expressed concerns with the human rights regime and its history, but identified three potential benefits of the human rights framework in addressing techno-racism. First, the human rights regime provides another pathway outside domestic law for demanding accountability and seeking redress. Second, it translates local rights violations into international discourse, thus creating potential for a global accountability movement and giving victims around the world a powerful and shared rights-based language. Third, because of its relative stability since the 1940s, human rights legal discourse helps advocates develop genealogies of rights violations, document repeated institutional failures, and establish patterns of rights violations over time, allowing advocates to amplify domestic and international pressure for accountability. Tendayi added that she is “invested in a future that is fundamentally different from the present,” and that human rights can potentially contribute to transforming political institutions and undoing structures of injustice around the world.

In addressing an audience question about technological responses to COVID-19, Mutale described how an algorithm designed to assign scarce medical equipment such as ventilators systematically discounted black patient viability. Noting that health outcomes around the world are consistently correlated with poverty and life experiences (including the “weathering effects” suffered by racial and ethnic minorities), she warned that, by feeding algorithms data from past hospitalizations and health outcomes, “we are training these AI systems to deem that black lives are not viable.” Tendayi echoed this, suggesting that our “baseline assumption” should be that new technologies will have discriminatory impacts simply because of how they are made and the assumptions that inform their design.

In response to an audience member’s concern that governments and private actors will adopt racist technologies regardless, Nanjala countered that “nothing is inevitable” and “everything is a function of human action and agency.” San Francisco’s decision to ban the use of facial recognition software by municipal authorities, for example, demonstrates that the use of these technologies is not inevitable, even in Silicon Valley. Tendayi, in her final remarks, noted that “worlds are being made and remade all of the time” and that it is vital to listen to voices, such as those of Mutale, Nanjala, and the Center’s Digital Welfare State Project, which are “helping us to think differently.” “Mainstreaming” the idea of techno-racism can help erode the presumption of “tech neutrality” that has made political change related to technology so difficult to achieve in the past. Tendayi concluded that this is why it is so vital to have conversations like these.

We couldn’t agree more!

To reflect that this was an informal conversation, first names are used in this story. 

July 29, 2020. Victoria Adelmant & Adam Ray, Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.