Land Grab in Haiti Violates Women's Rights and Deepens the Climate Crisis, Rights Groups Explain

CLIMATE AND ENVIRONMENT

A submission by the NYU Global Justice Clinic and Solidarite Fanm Ayisyèn to the UN Special Rapporteur on violence against women underscores the consequences of a violent land grab against women in Savane Diane, Haiti

A violent land grab displaced women farmers in Savane Diane, Haiti, constituted gender-based violence, and has worsened vulnerability to climate change, according to a submission that the NYU Global Justice Clinic and Solidarite Fanm Ayisyèn (SOFA) presented late last week to the UN Special Rapporteur on violence against women. The land grab in Savane Diane, which seized land SOFA used to train women in more ecologically sustainable agricultural techniques, is only one of several such seizures in recent months. Land grabs are on the rise in Haiti, while the Haitian judiciary has failed to respond.

“We are seeking the Special Rapporteur's attention because we have been unable to secure justice in Haiti,” said Sharma Aurelien, executive director of SOFA. “This land helped women fight poverty and benefited society as a whole,” she continued.

In 2020, armed men violently expelled SOFA members from land over which the Haitian government had granted them exclusive use rights, brutally beating some in the process. SOFA has since learned that an agribusiness company, Stevia Agroindustrias S.A., was claiming title to the area in order to grow stevia for export. The Haitian government revoked SOFA's rights to the land without any judicial process, and in early 2021 the late president, Jovenel Moïse, converted the land into an agro-industrial free trade zone by executive decree.

“The Minister of Agriculture assumed the role of judge, backing Stevia Industries and allowing it to continue its activities while SOFA was ordered to suspend ours,” said Marie Frantz Joachim, a member of SOFA's coordinating committee.

The organizations' submission emphasized the interconnected rights violations caused by the land grab. It is deepening poverty and food insecurity in the area, and women who work with Stevia Industries have suffered sexual exploitation and wage theft. The land grab also violates the right to water in the midst of the climate crisis: the seized land includes three state-protected water reservoirs.

“We lost our water reserves because they now belong to [the company]. Meanwhile, we are suffering a severe water crisis,” said Esther Jolissaint, an affected SOFA member in Savane Diane.

Climate change, land grabbing, and violence against women are interconnected phenomena, the organizations explain. Haiti is frequently listed among the five countries most affected by climate change. Land grabbing can result from climate vulnerability and can also contribute to it, as increasingly scarce agricultural land is converted into monocrop plantations that degrade the environment. Women are particularly vulnerable.

“Rural women's land rights and access to agricultural resources are essential to guaranteeing their human rights and supporting climate resilience,” said Sienna Merope-Synge, co-director of the Global Justice Clinic's Caribbean Climate Justice Initiative. “Land grabbing against women should be recognized as a form of gender-based violence,” she continued.

The joint submission emphasizes SOFA's call for reparations and restitution for the women affected by the land grab. It also highlights the call by SOFA and Haitian social movements for greater protection of peasants' land rights, as rural communities in Haiti have seen an increase in the seizure of their lands. The organizations explain that greater international attention and condemnation are needed. “We are asking for the solidarity of others committed to the global struggle for respect for human rights,” Aurelien concluded.

This post was originally published as a press release on April 5, 2022.

This post reflects the statement of the Global Justice Clinic and does not necessarily represent the views of NYU, NYU Law, or the Center for Human Rights and Global Justice.

Singapore’s “smart city” initiative: one step further in the surveillance, regulation and disciplining of those at the margins

TECHNOLOGY & HUMAN RIGHTS

Singapore’s smart city initiative creates an interconnected web of digital infrastructures that promises citizens safety, convenience, and efficiency. But the smart city is experienced differently by individuals at the margins, particularly migrant workers, who are subjected to experimentation at the forefront of technological innovation.

On February 23, 2022, we hosted the tenth event of the Transformer States Series on Digital Government and Human Rights, titled “Surveillance of the Poor in Singapore: Poverty in ‘Smart City’.” Christiaan van Veen and Victoria Adelmant spoke with Dr. Monamie Bhadra Haines about the deployment of surveillance technologies as part of Singapore’s “smart city” initiative. This blog outlines the key themes discussed during the conversation.

The smart city in the context of institutionalized racial hierarchy

Singapore has consistently been hailed as the world’s leading smart city. For a decade, the city-state has been covering its territory with ubiquitous sensors and integrated digital infrastructures with the aim, in the government’s words, of collecting information on “everyone, everything, everywhere, all the time.” But these smart city technologies are layered on top of pre-existing structures and inequalities, which mediate how these innovations are experienced.

One such structure is an explicit racial hierarchy. As an island nation with a long history of multi-ethnicity and migration, Singapore has witnessed significant migration from Southern China, the Malay Peninsula, India, and Bangladesh. Borrowing from the British model of race-based regulation, this multi-ethnicity is governed by the post-colonial state through the explicit adoption of four racial categories – Chinese, Malay, Indian and Others (or “CMIO” for short) – which are institutionalized within immigration policies, housing, education and employment. As a result, while migrant workers from South and Southeast Asia are the backbone of Singapore’s blue-collar labor market, they occupy the bottom tier of the racial hierarchy; are subject to stark precarity; and have become the “objects” of extensive surveillance by the state.

The promise of the smart city

Singapore’s smart city initiative is “sold” to the public through narratives of economic opportunities and job creation in the knowledge economy, improving environmental sustainability, and increasing efficiency and convenience. Through collecting and inter-connecting all kinds of “mundane” data – such as electricity patterns, data from increasingly-intrusive IoT products, and geo-location and mobility data – into centralized databases, smart cities are said to provide more safety and convenience. Singapore’s hyper-modern technologically-advanced society promises efficient and seamless public services, and the constant technology-driven surveillance and the loss of a few civil liberties are viewed by many as a small price to pay for such efficiency.

Further, the collection of large quantities of data from individuals is promised to enable citizens to be better connected with the government; while governments’ decisions, in turn, will be based upon the purportedly objective data from sensors and devices, thereby freeing decision-making from human fallibility and rendering it more neutral.

The realities: disparate impacts of smart city surveillance on migrant workers

However, smart cities are not merely economic or technological endeavors, but techno-social assemblages that create and impact different publics differently. As Monamie noted, specific imaginations and imagery of Singapore as a hyper-modern, interconnected, and efficient smart city can obscure certain types of racialized physical labor, such as the domestic labor of female Southeast-Asian migrant workers.

Migrant workers are uniquely impacted by increasing digitalization and datafication in Singapore. For years, these workers have been housed in dormitories with occupancy often exceeding capacity, located in the literal “margins” or outskirts of the city: migrant workers have long been physically kept separate from the rest of Singapore’s population within these dormitory complexes. They are stereotyped as violent or frequently inebriated, and the dormitories have for years been surveilled through digital technologies including security cameras, biometric sensors, and data from social media and transport services.

The pandemic highlighted and intensified the disproportionate surveillance of migrant workers within Singapore. Layered on top of the existing technological surveillance of migrants’ dormitories, a surveillance assemblage for COVID-19 contact tracing was created. Measures in the name of public health were deployed to carefully surveil these workers’ bodies and movements. Migrant workers became “objects” of technological experimentation as they were required to use a multitude of new mobile-based apps that integrated immigration data and work permit data with health data (such as body temperature and oximeter readings) and Covid-19 contact tracing data. The permissions required by these apps were also quite broad – including access to Bluetooth services and location data. All the data was stored in a centralized database.

Even though surveillant contact-tracing technologies were later rolled out across Singapore and normalized around the world, the important point here is that these systems were deployed exclusively on migrant workers first. Some apps, Monamie pointed out, were indeed only required by migrant workers, while citizens did not have to use them. This use of interconnected networks of surveillance technologies thus highlights the selective experimentation that underpins smart city initiatives. While smart city initiatives are, by their nature, premised on large-scale surveillance, we often see that policies, apps, and technologies are tried on individuals and communities with the least power first, before spilling out to the rest of the population. In Singapore, the objects of such experimentation are migrant workers who occupy “exceptional spaces” – of being needed to ensure the existence of certain labor markets, but also of needing to be disciplined and regulated. These technological initiatives, in subjecting specific groups at the margins to more surveillance than the rest of the population and requiring them to use more tech-based tools than others, serve to exacerbate the “othering” and isolation of migrant workers.

Forging eddies of resistance

While Monamie noted that “activism” is “still considered a dirty word in Singapore,” there have been some localized efforts to challenge some of the technologies within the smart city, in part due to the intensification of surveillance spurred by the pandemic. These efforts, and a rapidly-growing recognition of the disproportionate targeting and disparate impacts of such technologies, indicate that the smart city is also a site of contestation with growing resistance to its tech-based tools.

March 18, 2022. Ramya Chandrasekhar is an LLM candidate at NYU School of Law whose research interests relate to data governance, critical infrastructure studies, and critical theory. She previously worked with technology policy organizations and at a leading law firm in India.

Experimental automation in the UK immigration system

TECHNOLOGY & HUMAN RIGHTS

The UK government is experimenting with automated immigration systems. The promised benefits of automation are inevitably attractive, but these experiments routinely expose people—including some of the most vulnerable—to unacceptable risks of harm.

In April 2019, The Guardian reported that couples accused of sham marriages were increasingly being subjected to invasive investigations by the Home Office, the UK government body responsible for immigration policy. Couples reported having their wedding ceremonies interrupted to be quizzed about their sex life, being told they were not in a genuine relationship because they were wearing pajamas in bed, and being present while their intimate photos were shared between officials.

The official tactics reported are worrying enough, but it has since come to light through the efforts of a legal charity (the Public Law Project) and investigative journalists that an automated system is largely determining who gets investigated in the first place. An algorithm, hidden from public view, is sorting couples into “pass” and “fail” categories, based on eight unknown criteria.

Couples who “fail” this covert algorithmic test are subjected to intrusive investigations. They must attend an interview and hand over extensive evidence about their relationship, a process which has been described as “insulting” and “grueling.” These investigations can also prevent couples from getting married altogether. If the Home Office decides that a couple has failed to “comply” with an investigation—even if they are in a genuine relationship—the couple is denied a marriage certificate and forced to start the process all over again. One couple was reportedly ruled non-compliant for failing to provide six months of bank statements for an account that had only been open for four months. This makes it difficult for people to plan their weddings and their lives. And the investigation can lead to other immigration enforcement actions, such as visa cancellation, detention, and deportation. In one case, a sham marriage dawn raid led to a man being detained for four months, until the Home Office finally accepted that his relationship was genuine.

We know little about how this automated system operates in practice or its effectiveness in detecting sham marriages. The Home Office refuses to disclose or otherwise explain the eight criteria at the center of the system. There is a real risk that the system is racially discriminatory, however. The criteria were derived from historical data, which may well be skewed against certain nationalities. The Home Office’s own analysis shows that some nationalities, including Bulgarian, Greek, Romanian and Albanian people, receive “fail” ratings more frequently than others.
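Although the Home Office's actual criteria, weights, and threshold are secret, the mechanism by which a triage rule derived from skewed historical data reproduces bias can be illustrated with a deliberately simplified, entirely hypothetical sketch. Every field name, weight, and threshold below is invented for illustration; nothing here describes the real system.

```python
# Hypothetical illustration only: the real eight criteria are unknown.
# This sketch shows how a scoring rule "learned" from skewed historical
# data can produce disparate "fail" rates by nationality.

def triage(application: dict, weights: dict, threshold: int = 2) -> str:
    """Sum the weights of the criteria an application triggers;
    reaching the threshold means a 'fail' and an investigation."""
    score = sum(weights[criterion]
                for criterion, triggered in application.items() if triggered)
    return "fail" if score >= threshold else "pass"

# If past investigations concentrated on certain nationalities, criteria
# derived from that history give nationality itself a heavy weight.
weights = {"nationality_flagged": 2, "large_age_gap": 1, "short_acquaintance": 1}

# Two couples in otherwise comparable circumstances:
couple_a = {"nationality_flagged": True, "large_age_gap": False, "short_acquaintance": False}
couple_b = {"nationality_flagged": False, "large_age_gap": True, "short_acquaintance": False}

print(triage(couple_a, weights))  # "fail": investigated on nationality alone
print(triage(couple_b, weights))  # "pass"
```

Even if nationality were removed as an explicit criterion, proxies correlated with it in the historical data could produce the same disparate outcomes, which is why disclosure of the criteria matters.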

The sham marriages algorithm is, in many respects, a typical case of the deployment of automation in the UK immigration system. It is not difficult to understand why officials are seeking to automate immigration decision-making. Administering immigration policy is a tough job. Officials are often inexperienced and under pressure to process large volumes of decisions. Each decision will have profound effects for those subjected to it. This is not helped by the dense complexity of, and frequent changes in, immigration law and policy, which can bamboozle even the most hardened administrative lawyer. All of this, of course, takes place in an environment where migration remains one of the most vexed issues on the political agenda. Automation’s promised benefits of greater efficiency, lower costs, and increased consistency are, from the government’s perspective, inevitably attractive.

But in reality, a familiar pattern of risky experimentation and failure is already emerging. It begins with the Home Office deploying a novel automated system with the goal of cheaper, quicker, and more accurate decision-making. There is often little evidence to support the system’s effectiveness in delivering those goals and scant consideration of the risks of harm. Such systems are generally intended to benefit the government or the general, non-migrant population, rather than the people subject to them. When the system goes wrong and harms individuals, the Home Office fails to take adequate steps to address those harms. The justice system—with its principles and procedures developed in response to more traditional forms of public administration—is left to muddle through in trying to provide some form of redress. That redress, even where best efforts are made, is often unsatisfactory.

This is the story we seek to tell in our new book, Experiments in Automating Immigration Systems, through an exploration of three automated immigration systems in the UK: a voice recognition system used to detect fraud in English language testing; an algorithm for identifying “risky” visa applications; and automated decision-making in the process for EU citizens to apply to remain in the UK after Brexit. It is, at its core, a story of risky bureaucratic experimentation that routinely exposes people, including some of the most vulnerable, to unacceptable risks of harm. For example, some of the students caught up in the English language testing scandal were detained and deported, while others had to abandon their studies and fight for years through the courts to prove their innocence. While we focus on the UK experience, this story will no doubt be increasingly familiar in many countries around the world.

It is important to remember, however, that this story is just beginning. While it would be naïve to think that the tensions in public administration can ever be wholly overcome, the government must strive to reap the benefits of automation for all of society, in a way that is sensitive to and mitigates the attendant risks of injustice. That work is, of course, best led by the government itself.

But the collective work of journalists, charities, NGOs, lawyers, researchers, and others will continue to play a crucial role in ensuring, as far as possible, that automated administration is just and fair.

March 14, 2022. Joe Tomlinson and Jack Maxwell.
Dr. Joe Tomlinson is a Senior Lecturer in Public Law at the University of York.
Jack Maxwell is a barrister at the Victorian Bar.

GJC Partners in Haiti and Guyana Testify Before IACHR on Detriment of Extractive Industry in the Caribbean

CLIMATE AND ENVIRONMENT

On October 26, 2021, advocates and experts from five Caribbean countries (Haiti, Jamaica, Guyana, Trinidad and Tobago, and The Bahamas) presented on the impact of extractive industry activities on human rights and climate change in the Caribbean in a hearing before the Inter-American Commission on Human Rights (IACHR). Samuel Nesner, a founding member of Kolektif Jistis Min and long-time partner of NYU Law’s Global Justice Clinic, presented on the serious harm of extraction and land grabs in Haiti to the human rights of rural communities. Another Global Justice Clinic partner and member of the South Rupununi District Council, Immaculata Casimero, presented on the impact of extractive industries on indigenous women.

Samuel Nesner highlighted that for centuries land in Haiti has been expropriated and transferred to the elite with rural communities facing the brunt of the harm. Repeated expropriation of land, also known as land grabbing, has forced farmers and their families from their land, many times under threat of violence and almost always without adequate compensation for the loss of their land and sole source of income. Many believe that the land grabs relate to the content of the soil: much of the area that has been taken from farmers in the rural North is known for its mineral resources. Between 2006 and 2013, the Haitian government granted four U.S. and Canadian companies more than 50 mining permits. Many were granted in flagrant violation of Haitian law, without consultation of the dozen communities who live on the land under permit, and without first conducting an adequate environmental and social impact assessment. Residents of these communities have reported that company representatives entered their land without permission, taking samples and digging holes in their farmland. 

Immaculata Casimero noted that extractive industries pose a particular danger to indigenous peoples, who face longstanding land tenure insecurity. In Immaculata’s own Wapichan territory, many traditional indigenous lands are left unrecognized by the Guyanese government—and therefore vulnerable to big businesses looking to obtain agricultural leases on their land and extractive industries seeking to mine gold from their land. Immaculata emphasized that allowing mining on indigenous land harms their cultural heritage and way of life, and that women are especially affected as the main conveyors and protectors of this cultural heritage. Mining not only damages cultural heritage, but also the community’s health: it has led to mercury poisoning by contaminating crucial headwaters and has compounded the effects of climate change, with flooding, lower crop yields, and higher food insecurity. The presence of new miners has also raised social concerns, such as an increase in gender-based violence and prostitution.

Following the speakers’ presentations, IACHR Commissioners commended the speakers on their efforts to address the urgent issue of the impact of extractive industries in the Caribbean. IACHR Commissioner Margaret May Macauley (Jamaica) expressed her concern about the “complete lack of prior information and prior consultation before the majority, if not all, of these extractive industries commence. That is, the governments of these States enter into contracts with the corporations without prior information to the peoples who reside in the lands, on the lands, or by the seas, and they do not engage in prior consultation with them… The persons are left completely unprotected.” This certainly rings true in Haiti and Guyana, where foreign companies have repeatedly profited off the land of Haitian farmers and the Wapichan people without prior consultation about the use of their land.

February 14, 2022. 

U.S. government must adopt moratorium on mandatory use of biometric technologies in critical sectors, look to evidence abroad, urge human rights experts

TECHNOLOGY AND HUMAN RIGHTS

As the White House Office of Science and Technology Policy (OSTP) embarks on an initiative to design a ‘Bill of Rights for an AI-Powered World,’ it must begin by immediately imposing a moratorium on the mandatory use of AI-enabled biometrics in critical sectors, such as health, social welfare programs, and education, argue a group of human rights experts at the Digital Welfare State & Human Rights Project (the DWS Project) at the Center for Human Rights and Global Justice at NYU School of Law, and the Institute for Law, Innovation & Technology (iLIT) at Temple University School of Law.

In a 10-page submission responding to OSTP’s Request for Information, the DWS Project and iLIT argue that biometric identification technologies such as facial recognition and fingerprint-based recognition pose existential threats to human rights, democracy, and the rule of law. Drawing on comparative research and consultation with some of the leading international experts on biometrics and human rights, the submission details evidence of some of the concerns raised in countries including Ireland, India, Uganda, and Kenya. It catalogues the often-catastrophic effects of biometric failure, the unwieldy administrative requirements imposed on public services, and the pervasive lack of legal remedies and basic transparency about the use of biometrics in government.

“We now have a great deal of evidence about the ways that biometric identification can exclude and discriminate, denying entire groups access to basic social rights,” said Katelyn Cioffi, a Research Scholar at the DWS Project. “Under many biometric identification systems, you can be denied health care, access to education, or even a driver’s license if you are not able or willing to authenticate aspects of your identity biometrically.” An AI Bill of Rights that allows for equal enjoyment of rights must learn from comparative examples, the submission argues, and ensure that AI-enabled biometrics do not merely perpetuate systematic discrimination. This means looking beyond frequently-raised concerns about surveillance and privacy to how biometric technologies affect social rights such as health, social security, education, housing, and employment.

A key factor of success for the initiative will be much-needed legal and regulatory reform across the United States federal system. “This initiative represents an opportunity for the U.S. government to examine the shortcomings of current laws and regulations, including equal protection, civil rights laws, and administrative law,” Laura Bingham, Executive Director of iLIT stated. “The protections that Americans depend on fail to provide the necessary legal tools to defend their rights and safeguard democratic institutions in a society that increasingly relies on digital technologies to make critical decisions.”

The submission also urges the White House to place constraints on the actions of the U.S. government and U.S. companies abroad. “The United States plays a major role in the development and uptake of biometric technologies globally, through its foreign investment, foreign policy, and development aid,” said Victoria Adelmant, a Research Scholar at the DWS Project. “As the government moves to regulate biometric technologies, it must not ignore U.S. companies’ roles in developing, selling, and promoting such technologies abroad, as well as the government’s own actions in spheres such as international development, defense, and migration.”

For the government to mount an effective response to these harms, the experts argue that it must also take heed of parallel efforts of other powerful political actors, including China and the European Union, which are currently attempting to regulate biometric technologies. However, it must also avoid a race to the bottom or jump into a perceived ‘arms race’ with countries like China, by pursuing an increasingly securitized biometric state and allowing the private sector to continue its unfettered ‘self-regulation’ and experimentation. Instead, the U.S. government should focus on acting as a global leader in enabling human rights-sustaining technological innovation.

The submission makes the following recommendations:

  1. Impose an immediate moratorium on the use of biometric technologies in critical sectors: biometric identification should never be mandatory in critical sectors such as education, welfare benefits programs, or healthcare.
  2. Propose and enact legislation to address the indirect and disparate impact of biometrics.
  3. Engage in further review and study of the human rights impacts of biometric technologies as well as of different legal and regulatory approaches.
  4. Build a comprehensive legal and regulatory approach that addresses the complex, systemic concerns raised by AI-enabled biometric identification technologies.
  5. Ensure that any new laws, regulations, and policies are subject to a democratic, transparent, and open process.
  6. Ensure that public education materials and any new laws, regulations, and policies are described and written in clear, non-technical, and easily accessible language.

This post was originally published as a press release on January 17, 2022.

The Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law aims to investigate systems of social protection and assistance in countries worldwide that are increasingly driven by digital data and technologies.

The Temple University Institute for Law, Innovation & Technology (iLIT) at Beasley School of Law pursues action research, experiential instruction, and advocacy with a mission to deliver equity, bridge academic and practical boundaries, and inform new approaches to technological innovation in the public interest.

Response to the White House Office of Science and Technology Policy’s Request for Information on Biometric Identification Technologies

TECHNOLOGY AND HUMAN RIGHTS

In January 2022, the Digital Welfare State & Human Rights Project team at the Center, together with partners at the Institute for Law, Innovation & Technology (iLIT) at Temple University’s Beasley School of Law, submitted expert commentary to the United States White House’s Blueprint for an AI Bill of Rights initiative.

The White House Office of Science and Technology Policy (OSTP) had embarked on an initiative to design a “Bill of Rights for an AI-Powered World” and issued a Request for Information on Biometric Identification Technologies. The OSTP asked varied experts to provide information about the scope and extent of the use of biometric technologies, and to help it better understand ‘the stakeholders that are, or may be, impacted by their use or regulation.’ In response, our team filed a 10-page submission offering international and comparative information to inform OSTP’s understanding of the social, economic, and political impacts of biometric technologies, as reflected in research and regulation abroad. The submission discusses the implications of AI-driven biometric technologies for human rights law, democracy, and the rule of law, and describes the ways in which various groups and communities can be negatively impacted by such technologies.

In this submission, we sought especially to draw attention to the importance of learning from other countries’ experiences with biometrics, and to show that the implications of biometric technologies go far beyond the frequently-raised concerns about surveillance and privacy. We therefore provided a range of comparative examples from countries around the world where biometric technologies have been adopted, including within essential services such as the social security and housing sectors. We argued that the OSTP, in drafting its upcoming “AI Bill of Rights,” should learn from these comparative examples and take account of how biometric technologies can affect social rights such as health, social security, education, housing, and employment. The submission also urged the OSTP to place constraints on the actions of the U.S. government and U.S. companies abroad.

This submission fed into the United States White House’s Blueprint for an AI Bill of Rights, released in October 2022. The Blueprint has since laid the groundwork for regulatory efforts to assess, manage, and prevent the risks posed by AI in the United States and abroad, and has been built upon in subsequent policy efforts.

GJC Among Organizations Demanding Halt to Deportations of Haitian Migrants Amidst Worsening Crisis in Haiti

HUMAN RIGHTS MOVEMENT

The Global Justice Clinic, in collaboration with several human rights and migrant rights organizations, jointly issued a factsheet analyzing the ongoing crisis of U.S. deportations and expulsions to Haiti in the midst of an ever-worsening political and humanitarian crisis. It shows the numerous ways the U.S. has violated its legal obligations to Haitian migrants.

Recommendations include an immediate end to deportations to Haiti, the restoration of access to asylum, and an end to the U.S. government’s discriminatory treatment of Haitian migrants. The signatories of the statement include Haitian organizations Groupe d’Appui aux Rapatriés et Réfugiés (Support Group for Repatriated People and Refugees, GARR), Rezo Fwontalye Jano Siksè (Jano Siksè Border Network, RFJS), and Service Jésuite aux Migrants-Haiti (Jesuit Service for Migrants-Haiti, SJM).

Additional signatories include Amnesty International, the Center for Gender & Refugee Studies, Haitian Bridge Alliance, and Refugees International.

This post was originally published as a press release on December 16, 2021.

Chosen by a Secret Algorithm: Colombia’s top-down pandemic payments

TECHNOLOGY AND HUMAN RIGHTS

Chosen by a Secret Algorithm: Colombia’s top-down pandemic payments

The Colombian government was applauded for delivering payments to 2.9 million people in just two weeks during the pandemic, thanks to a big-data-driven approach. But this new approach represents a fundamental change in social policy, shifting away from political participation and from a notion of rights.

On Wednesday, November 24, 2021, the Digital Welfare State and Human Rights Project hosted the ninth episode in the Transformer States conversation series on Digital Government and Human Rights, in an event entitled “Chosen by a secret algorithm: A closer look at Colombia’s Pandemic Payments.” Christiaan van Veen and Victoria Adelmant spoke with Joan López, a researcher at the Global Data Justice Initiative and at the Colombian NGO Fundación Karisma, about Colombia’s pandemic payments and their reliance on data-driven technologies and prediction. This blog highlights some core issues raised by taking a top-down, data-driven approach to social protection.

From expert interviews to a top-down approach

The System of Possible Beneficiaries of Social Programs (SISBEN in Spanish) was created to assist in the targeting of social programs in Colombia. This system classifies the Colombian population along a spectrum of vulnerability through the collection of information about households, including health data, family composition, access to social programs, financial information, and earnings. This data is collected through nationwide interviews conducted by experts. Households are then scored by a simple algorithm on a scale of 0 to 100, with 0 as the least prosperous and 100 as the most prosperous. SISBEN therefore aims to identify and rank “the poorest of the poor.” This centralized classification system is used by 19 different social programs to determine eligibility: each social program chooses its own cut-off score between 0 and 100 as a threshold for eligibility.
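The mechanics of this cut-off-based targeting can be sketched in a few lines of code. This is purely an illustration of the eligibility logic described above, not the actual SISBEN algorithm (which is not public); the program names and cut-off values below are hypothetical.

```python
# Hypothetical cut-off scores chosen by individual social programs.
# Lower SISBEN scores indicate greater vulnerability (0 = least prosperous).
PROGRAM_CUTOFFS = {
    "housing_subsidy": 40,
    "health_subsidy": 47,
    "cash_transfer": 30,
}

def eligible_programs(sisben_score: float) -> list[str]:
    """Return the programs a household qualifies for: a household is
    eligible when its score falls at or below a program's cut-off."""
    return [
        program
        for program, cutoff in PROGRAM_CUTOFFS.items()
        if sisben_score <= cutoff
    ]

print(eligible_programs(35.0))  # qualifies for housing and health subsidies
```

The sketch shows why a single centralized score carries so much weight: a small change in how the score is calculated silently shifts eligibility across all nineteen programs at once.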

But in 2016, the National Development Office – the Colombian entity in charge of SISBEN – changed the calculation used to determine the profile of the poorest. It introduced a new and secret algorithm which creates a profile based on predicted income-generation capacity. Experts collecting data for SISBEN through interviews had previously looked at the realities of people’s conditions: if a person had access to basic services such as water, sanitation, education, health, and/or employment, the person was not deemed poor. The new system instead sought to create detailed profiles of what a person could earn, rather than what a person had. It sought, through modelling, to predict households’ situations rather than to document beneficiaries’ realities.

A new approach to social policy

During the pandemic, the government launched a new system of payments called the Ingreso Solidario (meaning “solidarity income”). This system would provide monthly payments to people who were not covered by any other existing social program relying on SISBEN; the ultimate goal of Ingreso Solidario was to send money to 2.9 million people who needed assistance due to the crisis caused by COVID-19. The Ingreso Solidario was, in some ways, very effective. People did not have to apply for this program: if they were selected as eligible, they would automatically receive a payment. Many people received the money immediately into their bank accounts, and payments were made very rapidly, within just a few weeks. Moreover, the Ingreso Solidario was an unconditional transfer and did not condition receipt of the money on the fulfillment of certain requirements.

But the Ingreso Solidario was based on a new approach to social policy, driven by technology and data sharing. The government entered into agreements with private companies, including Experian and TransUnion, to access their databases. Agreements were also made between different government agencies and departments. Through data-sharing arrangements across 34 public and private databases, the government cross-checked the information provided in the interviews against information in dozens of databases to find inconsistencies and exclude anyone deemed not to require social assistance. In relying on cross-checking databases to “find” people in need, this approach depends heavily on enormous data collection, and it increases the government’s reliance on the private sector.
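The exclusion-by-cross-checking logic described above can be sketched as follows. The field names, data sources, and tolerance threshold are hypothetical illustrations; the government’s actual matching rules are not public.

```python
# Illustrative sketch: a household's declared income is compared against
# figures held in external (public and private) databases, and any large
# inconsistency flags the household for exclusion from assistance.

def flag_inconsistencies(declared: dict, external_records: list[dict],
                         tolerance: float = 0.10) -> list[str]:
    """Return the names of external databases whose income figure differs
    from the declared monthly income by more than the given tolerance."""
    flags = []
    declared_income = declared["monthly_income"]
    for record in external_records:
        if declared_income == 0:
            continue  # nothing to compare against
        deviation = abs(record["monthly_income"] - declared_income) / declared_income
        if deviation > tolerance:
            flags.append(record["source"])
    return flags

declared = {"monthly_income": 400_000}
external = [
    {"source": "credit_bureau", "monthly_income": 950_000},
    {"source": "tax_registry", "monthly_income": 410_000},
]
print(flag_inconsistencies(declared, external))  # ['credit_bureau']
```

Even this toy version surfaces the policy problem: an exclusion turns entirely on whether third-party databases happen to agree, with no interview, hearing, or opportunity for the affected person to explain the discrepancy.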

The implications of this new approach

This new approach to social policy, as implemented through the Ingreso Solidario, has fundamental implications. First, this system is difficult to challenge. The algorithm used to profile vulnerability, to predict income-generating capacity, and to assign a score to people living in poverty is confidential. The government consistently argued that disclosing information about the algorithm would lead to a macroeconomic crisis because, if people knew how the system worked, they would try to cheat it. Additionally, SISBEN has been normalized. Though there are many other ways that eligibility for social programs could be assessed, the public accepts it as natural and inevitable that the government has taken this arbitrary approach reliant on numerical scoring and predictions. Due to this normalization, combined with the lack of transparency, this new approach to determining eligibility for social programs has not been contested.

Second, in adopting an approach which relies on cross-checking and analyzing data, the Ingreso Solidario is designed to avoid any contestation in the design and implementation of the algorithm. This is a thoroughly technocratic endeavor. The idea is to use databases and avoid going to, and working with, the communities. The government was, in Joan’s words, “trying to control everything from a distance” to “avoid having political discussions about who should be eligible.” There were no discussions or negotiations between citizens and the government to jointly address the challenges of using this technology to target poor people. Decisions about who the extra 2.9 million beneficiaries should be were taken unilaterally from above. As Joan argued, this was intentional: “The mindset of avoiding political discussion is clearly part of the idea of Ingreso Solidario.”

Third, because people were unaware that they were going to receive money, those who received a payment felt like they had won the lottery. Thus, as Joan argued, people saw this money not “as an entitlement, but just as a gift that this person was lucky to get.” This therefore represents a shift away from a conception of assistance as something we are entitled to by right. But in re-centering the notion of rights, we are reminded of the importance of taking human rights seriously when analyzing and redesigning these kinds of systems. Joan noted that we need to move away from an approach of deciding what poverty is from above, and instead move towards working with communities. We must use fundamental rights as guidance in designing a system that will provide support to those in poverty in an open, transparent, and participatory manner which does not seek to bypass political discussion.

María Beatriz Jiménez, LLM program, NYU School of Law, with a research focus on digital rights. She previously worked for the Colombian government in the Ministry of Information and Communication Technologies and the Ministry of Trade.

India’s New National Digital Health Mission: A Trojan Horse for Privatization

TECHNOLOGY & HUMAN RIGHTS

India’s New National Digital Health Mission: A Trojan Horse for Privatization

Through the national Digital Health ID, India’s Modi government is implementing techno-solutionist and market-based reforms to further entrench the centrality of the private sector in healthcare. This has serious consequences for all Indians, but most of all, for its vulnerable populations.

On August 15, 2021, India’s Prime Minister Narendra Modi launched the National Digital Health Mission (NDHM), under which every Indian citizen is to be provided with a unique digital health ID. This ID will contain patients’ health records—including prescriptions, diagnostic reports, and medical histories—and will enable easy access for both patients and health service providers. The aim of the NDHM is to allow patients to seamlessly switch between health service providers by facilitating their access to patients’ health data and enabling insurance providers to quickly verify and process claims. Accessible registries of health master data will also be created. But this digital health ID program is emblematic of a larger problem in India—the government’s steady withdrawal from healthcare, both as welfare and as a public service.

The digital health ID is a crucial part of Modi’s plans to create a new digital health infrastructure called the National Health Stack. This will form the health component of the existing India Stack, which is defined as “a set of digital public goods” that are intended to make it easy for innovators to introduce digital services in India across different sectors. The India Stack is built on the existing foundational user-base provided by Aadhaar digital ID numbers. A “Unified Health Interface” will be created as a digital platform to manage healthcare-related transactions. It will be administered by the National Health Authority (NHA), which is also responsible for administering the flagship public health insurance scheme, the Ayushman Bharat Pradhan Mantri Jan Arogya Yojana (AB-PMJAY), providing health coverage for around 500 million poor Indians.

The Modi government proclaims that the NDHM and digital health ID will revolutionize the Indian healthcare system through technology-driven solutions. But this glosses over the government’s real motive, which is to incentivize the private sector to participate in and rescue India’s ailing healthcare system. Rather than invest more funds in public health infrastructure, the Indian government has decided to outsource healthcare services to private healthcare providers and insurance companies, using access to vast troves of health data as the proverbial carrot.

Indeed, the benefits of the NDHM for the private healthcare sector are numerous. It will provide valuable, interoperable data in the form of “health registries” which link data silos and act as a “single source of truth” for all healthcare stakeholders. This will enable quicker processing of claims and payments to health service providers. In an op-ed lauding the NDHM, the head of a major Indian hospital chain noted that the NDHM will “reduce administrative burden related to doctor onboarding, regulatory approvals and renewals, and hospital or payer empanelment.”

The government appears to have learned its lessons from the implementation of the AB-PMJAY, which allowed people below the poverty line to purchase healthcare services through state-funded health insurance. Although the scheme included both private and public hospitals, it relied heavily on private hospitals, as public hospitals lacked sufficient facilities. However, not enough private hospitals signed on, because the scheme’s rates were uncompetitive compared to the market and because it was plagued by long delays in insurance payments and by insurance fraud. But instead of building up public healthcare and reducing dependency on the private sector, the government is eager to fix this problem by providing better incentives to private providers through the NDHM.

Meanwhile, it is unclear what the benefits to the public will be. Digitizing the healthcare system and making it easier for insurance companies to pay private hospitals for services does not solve more urgent and serious problems, such as the lack of healthcare facilities in rural areas. The COVID-19 pandemic saw public hospitals playing a dominant role in treatment and vaccination, while private hospitals took a backseat. Given this, increasing the reliance placed on the private healthcare system through the NDHM is counterintuitive.

This growing reliance on the private sector is also likely to further disadvantage people living in poverty. The lack of suitable government hospitals forces people into private hospitals, where they are often required to pay more than the amount covered by the government-funded AB-PMJAY. Further, India’s National Human Rights Commission has taken the position that denial of care by private service providers is outside its ambit, notwithstanding their enrollment in state-funded insurance schemes like AB-PMJAY. Moreover, because the digital health ID will give insurance companies access to sensitive health data, they may deny insurance or charge higher premiums to those most in need, further entrenching discrimination and inequalities. Obtaining coverage with a genetic disorder, for instance, is already extremely difficult in India; a digital health ID could make this worse, as insurers with access to such information could render premiums prohibitively expensive for millions who need coverage. Digitization also renders highly personal health records susceptible to breaches: such privacy concerns led many people living with HIV to drop out of treatment programs when antiretroviral therapy centers began collecting Aadhaar details from patients.

Not having a digital health ID could lead to exclusion from vital healthcare. This is not a hypothetical. The government had to issue a clarification that no one should be denied COVID-19 vaccines or oxygen for lack of Aadhaar after numerous concerning reports, including allegations that a patient died after two hospitals demanded Aadhaar details which he did not have.

Nonetheless, plans are speeding ahead as the “usual suspects” of India’s techno-solutionist projects turn their efforts to healthcare. RS Sharma, the ex-Director General of the government agency responsible for Aadhaar, is the current CEO of the NHA. The National Health Stack was reportedly developed in consultation with i-SPIRT, a group of so-called “volunteers” with private sector backgrounds who act as a go-between for the Indian government and the tech sector, and who played a vital role in embedding Aadhaar in society through private companies. A committee set up to examine the merits of the National Health Stack was headed by another former UIDAI chairman.

Steered by individuals with boundless faith in the power of technology and in the private sector’s entrepreneurial drive to rescue Indian government and governance, India is determinedly marching forward with technology-driven and market-based reforms of public services and welfare. All of this is underpinned by a heavy tendency towards privatization and is in turn inspired by the private sector. The NDHM, for instance, is guided by the tagline “Think Big, Start Small, Scale Fast,” a business philosophy for start-ups.

Perhaps most concerningly, the neoliberal withdrawal of government from crucial public services to make space for the private sector has resulted in the rationing of those goods and services, with fewer people having access to them. The digital health ID is unlikely to change this in India’s health sector; instead, it is enabling privatization by stealth.

December 14, 2021. Sharngan Aravindakshan, LL.M. program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2021-22. He previously worked for the Centre for Communication Governance in India.

Pilots, Pushbacks, and the Panopticon: Digital Technologies at the EU’s Borders

TECHNOLOGY & HUMAN RIGHTS

Pilots, Pushbacks, and the Panopticon: Digital Technologies at the EU’s Borders

The European Union is increasingly introducing digital technologies into its border control operations. But conversations about these emerging “digital borders” are often silent about the significant harms experienced by those subjected to these technologies, their experimental nature, and their discriminatory impacts.

On October 27, 2021, we hosted the eighth episode in our Transformer States Series on Digital Government and Human Rights, in an event entitled “Artificial Borders? The Digital and Extraterritorial Protection of ‘Fortress Europe.’” Christiaan van Veen and Ngozi Nwanta interviewed Petra Molnar about the European Union’s introduction of digital technologies into its border control and migration management operations. The video and transcript of the event, along with additional reading materials, can be found below. This blog post outlines key themes from the conversation.

Digital technologies are increasingly central to the EU’s efforts to curb migration and “secure” its borders. Against a background of growing violent pushbacks, surveillance technologies such as unpiloted drones and aerostat machines with thermo-vision sensors are being deployed at the borders. The EU-funded “ROBORDER” project aims to develop “a fully-functional autonomous border surveillance system with unmanned mobile robots.” Refugee camps on the EU’s borders, meanwhile, are being turned into a “surveillance panopticon,” as the adults and children living within them are constantly monitored by cameras, drones, and motion-detection sensors. Technologies also mediate immigration and refugee determination processes, from automated decision-making, to social media screening, and a pilot AI-driven “lie detector.”

In this Transformer States conversation, Petra argued that technologies are enabling a “sharpening” of existing border control policies. As discussed in her excellent report entitled “Technological Testing Grounds,” completed with European Digital Rights and the Refugee Law Lab, new technologies are not only being used at the EU’s borders, but also to surveil and control communities on the move before they reach European territory. The EU has long practiced “border externalization,” where it shifts its border control operations ever-further away from its physical territory, partly through contracting non-Member States to try to prevent migration. New technologies are increasingly instrumental in these aims. The EU is funding African states’ construction of biometric ID systems for migration control purposes; it is providing cameras and surveillance software to third countries to prevent travel towards Europe; and it supports efforts to predict migration flows through big data-driven modeling. Further, borders are increasingly “located” on our smartphones and in enormous databases as data-based risk profiles and pre-screening become a central part of the EU’s border control agenda.

Ignoring human experience and impacts

But all too often, discussions about these technologies are sanitized and depoliticized. People on the move are viewed as a security problem, and policymakers, consultancies, and the private sector focus on the “opportunities” presented by technologies in securitizing borders and “preventing migration.” The human stories of those who are subjected to these new technological tools and the discriminatory and deadly realities of “digital borders” are ignored within these technocratic discussions. Some EU policy documents describe the “European Border Surveillance System” without mentioning people at all.

In this interview, Petra emphasized these silences. She noted that “human experience has been left to the wayside.” First-person accounts of the harmful impacts of these technologies are not deemed to be “expert knowledge” by policymakers in Brussels, but it is vital to expose the human realities and counter the sanitized policy discussions. Those who are subjected to constant surveillance and tracking are dehumanized: Petra reports that some are left feeling “like a piece of meat without a life, just fingerprints and eye scans.” People are being forced to take ever-deadlier routes to avoid high-tech surveillance infrastructures, and technology-enabled interdictions and pushbacks are leading to deaths. Further, difference in treatment is baked into these technological systems, as they enable and exacerbate discriminatory inferences along racialized lines. As UN Special Rapporteur on Racism E. Tendayi Achiume writes, “digital border technologies are reinforcing parallel border regimes that segregate the mobility and migration of different groups” and are being deployed in racially discriminatory ways. Indeed, some algorithmic “risk assessments” of migrants have been argued to represent racial profiling.

Policy discussions about “digital borders” also do not acknowledge that, while the EU spends vast sums on technologies, the refugee camps at its borders have neither running water nor sufficient food. Enormous investment in digital migration management infrastructures is being “prioritized over human rights.” As one man commented, “now we have flying computers instead of more asylum.”

Technological experimentation and pilot programs in “gray zones”

Crucially, these developments are occurring within largely-unregulated spaces. A central theme of this Transformer States conversation—mirroring the title of Petra’s report, “Technological Testing Grounds”—was the notion of experimentation within the “gray zones” of border control and migration management. Not only are non-citizens and stateless persons accorded fewer rights and protections than EU citizens, but immigration and asylum decision-making is also an area of law which is highly discretionary and contains fewer legal safeguards.

This low-rights, high-discretion environment is ripe for the testing of new technologies. This is especially the case in “external” spaces far from European territory, which are subject to even less regulation. Projects which would not be allowed in other spaces are being tested on populations who are literally at the margins, as refugee camps become testing zones. The abovementioned “lie detector,” in which an “avatar” border guard flagged “biomarkers of deceit,” was “merely” a pilot program. It has since been fiercely criticized, including by the European Parliament, and challenged in court.

Experimentation is deliberately occurring in these zones because refugees and migrants have limited opportunities to challenge it. The UN Special Rapporteur on Racism has noted that digital technologies in this area are therefore “uniquely experimental.” This has parallels with our work, where we consistently see governments and international organizations piloting new technologies on marginalized and low-income communities. In a previous Transformer States conversation, we discussed Australia’s Cashless Debit Card system, in which technologies were deployed upon aboriginal people through a pilot program. In the UK, radical reform of the welfare system through digitalization was also piloted, tested on low-income groups with “catastrophic” effects.

Where these developments are occurring within largely-unregulated areas, human rights norms and institutions may prove useful. As Petra noted, the human rights framework requires courts and policymakers to focus upon the human impacts of these digital border technologies, and highlights the discriminatory lines along which their effects are felt. The UN Special Rapporteur on Racism has outlined how human rights norms require mandatory impact assessments, moratoria on surveillance technologies, and strong regulation to prevent discrimination and harm.

November 23, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.